
pyspark - org.apache.spark.SparkException: No port number in pyspark.daemon's stdout


I am running a spark-submit job on a Hadoop-YARN cluster:

spark-submit /opt/spark/examples/src/main/python/pi.py 1000

but it fails with the error below. It looks like the Python worker never starts.

2018-12-20 07:25:14 INFO DAGScheduler:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1161
2018-12-20 07:25:14 INFO DAGScheduler:54 - Submitting 1000 missing tasks from ResultStage 0 (PythonRDD[1] at reduce at /opt/spark/examples/src/main/python/pi.py:44) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
2018-12-20 07:25:14 INFO YarnScheduler:54 - Adding task set 0.0 with 1000 tasks
2018-12-20 07:25:14 INFO TaskSetManager:54 - Starting task 0.0 in stage 0.0 (TID 0, hadoop-slave2, executor 1, partition 0, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:14 INFO TaskSetManager:54 - Starting task 1.0 in stage 0.0 (TID 1, hadoop-slave1, executor 2, partition 1, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:15 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on hadoop-slave2:37217 (size: 4.2 KB, free: 93.3 MB)
2018-12-20 07:25:15 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on hadoop-slave1:35311 (size: 4.2 KB, free: 93.3 MB)
2018-12-20 07:25:15 INFO TaskSetManager:54 - Starting task 2.0 in stage 0.0 (TID 2, hadoop-slave2, executor 1, partition 2, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:15 INFO TaskSetManager:54 - Starting task 3.0 in stage 0.0 (TID 3, hadoop-slave1, executor 2, partition 3, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:16 WARN TaskSetManager:66 - Lost task 0.0 in stage 0.0 (TID 0, hadoop-slave2, executor 1): org.apache.spark.SparkException:
Error from python worker:
Traceback (most recent call last):
File "/usr/lib64/python2.6/runpy.py", line 104, in _run_module_as_main
loader, code, fname = _get_module_details(mod_name)
File "/usr/lib64/python2.6/runpy.py", line 79, in _get_module_details
loader = get_loader(mod_name)
File "/usr/lib64/python2.6/pkgutil.py", line 456, in get_loader
return find_loader(fullname)
File "/usr/lib64/python2.6/pkgutil.py", line 466, in find_loader
for importer in iter_importers(fullname):
File "/usr/lib64/python2.6/pkgutil.py", line 422, in iter_importers
__import__(pkg)
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/__init__.py", line 51, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/context.py", line 31, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/accumulators.py", line 97, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/serializers.py", line 71, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/cloudpickle.py", line 246, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/cloudpickle.py", line 270, in CloudPickler
NameError: name 'memoryview' is not defined
PYTHONPATH was:
/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/filecache/21/__spark_libs__3793296165132209773.zip/spark-core_2.11-2.4.0.jar: /tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip:/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/py4j-0.10.7-src.zip
org.apache.spark.SparkException: No port number in pyspark.daemon's stdout
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:204)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:122)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:95)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)

Best Answer

I believe this problem occurs when the Python versions used by the driver and the executors do not match. In the traceback above, the YARN containers fall back to the system /usr/lib64/python2.6, and memoryview does not exist before Python 2.7, so pyspark.daemon crashes before it can print its port number.

Adding the following to my ~/.bash_profile worked for me:

alias spark-submit='PYSPARK_PYTHON=$(which python) spark-submit'

It forces Spark to use the same Python version that you have loaded in your module/environment.
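
If you prefer not to rely on a shell alias, you can also pin the interpreter explicitly when submitting to YARN. A minimal sketch, assuming a Python 2.7+ interpreter is installed at the same path on every node (the /usr/bin/python2.7 path below is only an illustration):

# driver-side interpreter
export PYSPARK_PYTHON=/usr/bin/python2.7
export PYSPARK_DRIVER_PYTHON=/usr/bin/python2.7

# executor/application-master interpreter on YARN
spark-submit \
  --master yarn \
  --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=/usr/bin/python2.7 \
  --conf spark.executorEnv.PYSPARK_PYTHON=/usr/bin/python2.7 \
  /opt/spark/examples/src/main/python/pi.py 1000

Alternatively, PYSPARK_PYTHON can be set once for the whole cluster in conf/spark-env.sh instead of per submission.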

Regarding "pyspark - org.apache.spark.SparkException: No port number in pyspark.daemon's stdout", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53865015/
