I have some objects that I want to do computations on in parallel, so I thought I could turn to pyspark. Consider this example: a class whose objects each hold a number i that can be squared with square():
class MyMathObject():
    def __init__(self, i):
        self.i = i
    def square(self):
        return self.i ** 2

print(MyMathObject(3).square())  # Test one instance with regular python - works
In addition, I have set up pyspark (in a Jupyter notebook), and now I want to compute the squares of 0 through 4 on my objects in parallel:
import findspark
findspark.init()
from pyspark import SparkContext
sc = SparkContext("local[2]")
rdd = sc.parallelize([MyMathObject(i) for i in range(5)])
rdd.map(lambda obj: obj.square()).collect() # This fails
This does not work - it leads to a long error message that is mostly useless to me. As far as I can tell, it has something to do with how square() is called; I copied the full message at the end.
For comparison, the same computation on plain integers, without my class, works fine:
rdd = sc.parallelize([i for i in range(5)])
rdd.map(lambda i: i**2).collect()
So something about the way I create or handle my objects seems to be flawed, but I cannot track down the error.
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-…> in <module>
      1 rdd = sc.parallelize([MyMathObject(i) for i in range(5)])
----> 2 rdd.map(lambda obj: obj.square()).collect()

/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/pyspark/rdd.py in collect(self)
    887         """
    888         with SCCallSiteSync(self.context) as css:
--> 889             sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    890         return list(_load_from_socket(sock_info, self._jrdd_deserializer))
    891

/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1302
   1303         answer = self.gateway_client.send_command(command)
-> 1304         return_value = get_return_value(
   1305             answer, self.gateway_client, self.target_id, self.name)
   1306

/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 2, 192.168.2.108, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/worker.py", line 605, in main
    process()
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/worker.py", line 597, in process
    serializer.dump_stream(out_iter, outfile)
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/serializers.py", line 271, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/serializers.py", line 147, in load_stream
    yield self._read_with_length(stream)
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/serializers.py", line 172, in _read_with_length
    return self.loads(obj)
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/serializers.py", line 458, in loads
    return pickle.loads(obj, encoding=encoding)
AttributeError: Can't get attribute 'MyMathObject' on <module 'pyspark.daemon' from '/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/daemon.py'>

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:503)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:638)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:621)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:456)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    at scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
    at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
    at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
    at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
    at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
    at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
    at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
    at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1004)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2154)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:462)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:465)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2114)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2135)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2154)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2179)
    at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1004)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:388)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:1003)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:168)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/worker.py", line 605, in main
    process()
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/worker.py", line 597, in process
    serializer.dump_stream(out_iter, outfile)
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/serializers.py", line 271, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/serializers.py", line 147, in load_stream
    yield self._read_with_length(stream)
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/serializers.py", line 172, in _read_with_length
    return self.loads(obj)
  File "/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/serializers.py", line 458, in loads
    return pickle.loads(obj, encoding=encoding)
AttributeError: Can't get attribute 'MyMathObject' on <module 'pyspark.daemon' from '/opt/apache-spark-3/spark-3.0.2-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/daemon.py'>

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:503)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:638)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:621)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:456)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    at scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
    at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
    at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
    at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
    at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
    at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
    at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
    at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1004)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2154)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:462)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:465)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    ... 1 more
Best Answer
This doesn't look like a mistake on your part. The decisive line in the traceback is the AttributeError: the worker process cannot unpickle MyMathObject, because the class is defined in the notebook's __main__, which does not exist on the worker. One way to solve this is to put the class in a separate Python file and import it.
For example, create a file mymodule.py containing:
class MyMathObject():
    def __init__(self, i):
        self.i = i
    def square(self):
        return self.i ** 2
Then, in your main script, you can do:
from mymodule import MyMathObject
rdd = sc.parallelize([MyMathObject(i) for i in range(5)])
rdd.map(lambda obj: obj.square()).collect()
This should give [0, 1, 4, 9, 16].
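For completeness, here is a minimal end-to-end driver sketch assembled from the pieces above. This is a sketch under assumptions, not the answerer's exact code: the findspark setup is copied from the question, and the sc.addPyFile call is only strictly needed when executors run on machines that don't have mymodule.py; on local[2] the plain import already suffices.

import findspark
findspark.init()

from pyspark import SparkContext
from mymodule import MyMathObject  # the class now lives in an importable module

sc = SparkContext("local[2]")
sc.addPyFile("mymodule.py")  # ship the module to executors; needed on a real cluster, harmless locally

rdd = sc.parallelize([MyMathObject(i) for i in range(5)])
print(rdd.map(lambda obj: obj.square()).collect())  # prints [0, 1, 4, 9, 16]
sc.stop()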
On python - Using PySpark's rdd.parallelize().map() on functions of self-implemented objects/classes, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67140209/
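A closing note on why the plain-integer version in the question works while the object version fails: PySpark serializes the function passed to map() with cloudpickle, which can carry a class defined in the notebook's __main__ along by value, whereas the data handed to parallelize() goes through plain pickle, which stores only a reference to __main__.MyMathObject that the worker cannot resolve (hence the AttributeError above). If that assumption holds, a sketch that keeps only plain integers in the RDD and constructs the objects inside the mapped function should also work, even without the separate module:

rdd = sc.parallelize(range(5))  # plain ints pickle without trouble
rdd.map(lambda i: MyMathObject(i).square()).collect()  # expected: [0, 1, 4, 9, 16]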