
python - Can't run Apache Spark's Pi example from Python


I have set up my first Spark cluster (1 master, 2 workers) and an iPython notebook server configured to access it. The workers are running Anaconda to make sure the Python setup on each box is correct. The iPython notebook server seems to have everything set up correctly, and I am able to initialize Spark and submit jobs. However, the jobs fail, and I don't know how to troubleshoot them. Here is the code:

from pyspark import SparkContext
from numpy import random

CLUSTER_URL = 'spark://192.168.1.20:7077'
sc = SparkContext(CLUSTER_URL, 'pyspark')

def sample(p):
    from numpy import random
    x, y = random(), random()
    return 1 if x*x + y*y < 1 else 0

count = sc.parallelize(xrange(0, 20)).map(sample).reduce(lambda a, b: a + b)
print "Pi is roughly %f" % (4.0 * count / 20)

Here is the error:

Py4JJavaError                             Traceback (most recent call last)
in ()
      3     return 1 if x*x + y*y < 1 else 0
      4
----> 5 count = sc.parallelize(xrange(0, 20)).map(sample).reduce(lambda a, b: a + b)
      6 print "Pi is roughly %f" % (4.0 * count / 20)

/opt/spark-1.2.0/python/pyspark/rdd.pyc in reduce(self, f)
    713             yield reduce(f, iterator, initial)
    714
--> 715         vals = self.mapPartitions(func).collect()
    716         if vals:
    717             return reduce(f, vals)

/opt/spark-1.2.0/python/pyspark/rdd.pyc in collect(self)
    674         """
    675         with SCCallSiteSync(self.context) as css:
--> 676             bytesInJava = self._jrdd.collect().iterator()
    677         return list(self._collect_iterator_through_file(bytesInJava))
    678

/opt/spark-1.2.0/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
    536         answer = self.gateway_client.send_command(command)
    537         return_value = get_return_value(answer, self.gateway_client,
--> 538                 self.target_id, self.name)
    539
    540         for temp_arg in temp_args:

/opt/spark-1.2.0/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    298                 raise Py4JJavaError(
    299                     'An error occurred while calling {0}{1}{2}.\n'.
--> 300                     format(target_id, '.', name), value)
    301             else:
    302                 raise Py4JError(

Py4JJavaError: An error occurred while calling o28.collect.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 31 in stage 0.0 failed 4 times, most recent failure: Lost task 31.3 in stage 0.0 (TID 72, 192.168.1.21): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/opt/spark-1.2.0/python/pyspark/worker.py", line 107, in main
    process()
  File "/opt/spark-1.2.0/python/pyspark/worker.py", line 98, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/opt/spark-1.2.0/python/pyspark/serializers.py", line 227, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/opt/spark-1.2.0/python/pyspark/rdd.py", line 710, in func
    initial = next(iterator)
  File "", line 2, in sample
TypeError: 'module' object is not callable

    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:137)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:174)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:96)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

I don't even know where to start debugging or diagnosing this, so any help would be greatly appreciated. Happy to post other logs if that would help.

Best Answer

numpy.random is a Python package (a module object), not a function, so you cannot call it as random().
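You can see this directly in a Python shell; calling the module object produces exactly the TypeError reported in the worker traceback:

>>> from numpy import random
>>> random()  # random here is the numpy.random module, not a function
Traceback (most recent call last):
  ...
TypeError: 'module' object is not callable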

I guess you want random.random() (see the NumPy documentation).
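A minimal sketch of the fix, keeping the Python 2 / Spark 1.2 style from the question (the import inside sample stays, so each worker process imports numpy itself):

def sample(p):
    from numpy import random      # imported on the worker, as in the question
    # call the random() function inside the numpy.random module,
    # instead of calling the module itself
    x, y = random.random(), random.random()
    return 1 if x*x + y*y < 1 else 0

count = sc.parallelize(xrange(0, 20)).map(sample).reduce(lambda a, b: a + b)
print "Pi is roughly %f" % (4.0 * count / 20)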

Regarding "python - Can't run Apache Spark's Pi example from Python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28072716/
