
sockets - Exception: could not open socket on pyspark

Reposted. Author: 行者123. Updated: 2023-12-02 05:01:07

Whenever I try to run even a simple job in PySpark, it fails because it cannot open a socket.

>>> myRDD = sc.parallelize(range(6), 3)
>>> sc.runJob(myRDD, lambda part: [x * x for x in part])

The code above throws the following exception:

port 53554 , proto 6 , sa ('127.0.0.1', 53554)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Volumes/work/bigdata/spark-custom/python/pyspark/context.py", line 917, in runJob
return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
File "/Volumes/work/bigdata/spark-custom/python/pyspark/rdd.py", line 143, in _load_from_socket
raise Exception("could not open socket")
Exception: could not open socket

>>> 15/08/30 19:03:05 ERROR PythonRDD: Error while sending iterator
java.net.SocketTimeoutException: Accept timed out
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:404)
at java.net.ServerSocket.implAccept(ServerSocket.java:545)
at java.net.ServerSocket.accept(ServerSocket.java:513)
at org.apache.spark.api.python.PythonRDD$$anon$2.run(PythonRDD.scala:613)

I stepped through _load_from_socket in rdd.py and realized that it does obtain a port, but the server side never starts listening; alternatively, the problem may be in this runJob call:

port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
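For context, on the Python side _load_from_socket is essentially a plain TCP connect to the port the JVM returns, trying each address that localhost resolves to. A rough, simplified sketch (not Spark's actual code; the helper name and retry behavior here are illustrative only):

```python
import socket


def load_from_socket(port, timeout=3):
    """Connect to a local server socket on `port`, mirroring the shape of
    PySpark's _load_from_socket. Raises if no resolved address accepts."""
    sock = None
    # localhost may resolve to several addresses (IPv4 and IPv6); try each.
    for res in socket.getaddrinfo("localhost", port,
                                  socket.AF_UNSPEC, socket.SOCK_STREAM):
        af, socktype, proto, _, sa = res
        try:
            sock = socket.socket(af, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sa)
            break
        except OSError:
            sock = None
    if sock is None:
        # This is the branch that produces the "could not open socket" error.
        raise Exception("could not open socket")
    return sock
```

The `SocketTimeoutException: Accept timed out` on the JVM side matches this picture: the Scala `PythonRDD` opens a ServerSocket and waits for exactly this Python connect, and when the connect never lands within the accept timeout, both sides fail.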

Accepted answer

This is not an ideal solution, but at least I now know the cause: PySpark was unable to create a JVM socket under JDK 1.8 (64-bit). I simply pointed my Java path back at JDK 1.7 and it worked.
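The workaround above amounts to changing which JDK PySpark's launcher sees before starting the shell. A minimal sketch, assuming a macOS-style JDK 1.7 install path (the exact path is hypothetical; adjust it to wherever JDK 1.7 lives on your machine):

```shell
# Hypothetical JDK 1.7 location; adjust to your actual install path.
export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home"
export PATH="$JAVA_HOME/bin:$PATH"

# PySpark's launcher honors JAVA_HOME, so restart the pyspark shell afterwards.
echo "Using JAVA_HOME=$JAVA_HOME"
```

Run this in the same shell session you start pyspark from, or add it to your shell profile to make it persistent.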

Regarding "sockets - Exception: could not open socket on pyspark", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/32299656/
