I have a script that parses binary files and returns their data as pandas DataFrames. When I run the script without a cluster, it works fine:
sc = SparkContext('local', "TDMS parser")
But when I try to point the master at my local cluster (which I started beforehand and attached workers to):
sc = SparkContext('spark://roman-pc:7077', "TDMS parser")
it logs an error like this:
> 15/07/03 16:36:20 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID
> 0, 192.168.0.193): org.apache.spark.api.python.PythonException:
> Traceback (most recent call last): File
> "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",
> line 98, in main
> command = pickleSer._read_with_length(infile) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 164, in _read_with_length
> return self.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 421, in loads
> return pickle.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py",
> line 629, in subimport
> __import__(name) ImportError: ('No module named pandas', <function subimport at 0x7fef3731cd70>, ('pandas',))
>
> at
> org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
> at
> org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
> at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244) at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63) at
> org.apache.spark.scheduler.Task.run(Task.scala:70) at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
> 15/07/03 16:36:20 INFO TaskSetManager: Lost task 1.0 in stage 0.0 (TID
> 1) on executor 192.168.0.193:
> org.apache.spark.api.python.PythonException (Traceback (most recent
> call last): File
> "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",
> line 98, in main
> command = pickleSer._read_with_length(infile) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 164, in _read_with_length
> return self.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 421, in loads
> return pickle.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py",
> line 629, in subimport
> __import__(name) ImportError: ('No module named pandas', <function subimport at 0x7fef3731cd70>, ('pandas',)) ) [duplicate 1] 15/07/03
> 16:36:20 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 2,
> 192.168.0.193, PROCESS_LOCAL, 1491 bytes) 15/07/03 16:36:20 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 3, 192.168.0.193,
> PROCESS_LOCAL, 1412 bytes) 15/07/03 16:36:20 INFO TaskSetManager: Lost
> task 0.1 in stage 0.0 (TID 3) on executor 192.168.0.193:
> org.apache.spark.api.python.PythonException (Traceback (most recent
> call last): File
> "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",
> line 98, in main
> command = pickleSer._read_with_length(infile) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 164, in _read_with_length
> return self.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 421, in loads
> return pickle.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py",
> line 629, in subimport
> __import__(name) ImportError: ('No module named pandas', <function subimport at 0x7fef3731cd70>, ('pandas',)) ) [duplicate 2] 15/07/03
> 16:36:20 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 4,
> 192.168.0.193, PROCESS_LOCAL, 1412 bytes) 15/07/03 16:36:21 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on
> 192.168.0.193:40099 (size: 13.7 KB, free: 265.4 MB) 15/07/03 16:36:23 WARN TaskSetManager: Lost task 1.1 in stage 0.0 (TID 2,
> 192.168.0.193): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File
> "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",
> line 98, in main
> command = pickleSer._read_with_length(infile) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 164, in _read_with_length
> return self.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 421, in loads
> return pickle.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py",
> line 629, in subimport
> __import__(name) ImportError: ('No module named pandas', <function subimport at 0x7fb5c3d5cd70>, ('pandas',))
>
> at
> org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
> at
> org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
> at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244) at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63) at
> org.apache.spark.scheduler.Task.run(Task.scala:70) at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
> 15/07/03 16:36:23 INFO TaskSetManager: Starting task 1.2 in stage 0.0
> (TID 5, 192.168.0.193, PROCESS_LOCAL, 1491 bytes) 15/07/03 16:36:23
> INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 4) on executor
> 192.168.0.193: org.apache.spark.api.python.PythonException (Traceback (most recent call last): File
> "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",
> line 98, in main
> command = pickleSer._read_with_length(infile) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 164, in _read_with_length
> return self.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 421, in loads
> return pickle.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py",
> line 629, in subimport
> __import__(name) ImportError: ('No module named pandas', <function subimport at 0x7fb5c3d5cd70>, ('pandas',)) ) [duplicate 1] 15/07/03
> 16:36:23 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 6,
> 192.168.0.193, PROCESS_LOCAL, 1412 bytes) 15/07/03 16:36:23 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 6) on executor
> 192.168.0.193: org.apache.spark.api.python.PythonException (Traceback (most recent call last): File
> "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",
> line 98, in main
> command = pickleSer._read_with_length(infile) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 164, in _read_with_length
> return self.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 421, in loads
> return pickle.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py",
> line 629, in subimport
> __import__(name) ImportError: ('No module named pandas', <function subimport at 0x7fef3731cd70>, ('pandas',)) ) [duplicate 3] 15/07/03
> 16:36:23 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times;
> aborting job 15/07/03 16:36:23 INFO TaskSchedulerImpl: Cancelling
> stage 0 15/07/03 16:36:23 INFO TaskSchedulerImpl: Stage 0 was
> cancelled 15/07/03 16:36:23 INFO DAGScheduler: ResultStage 0 (collect
> at /home/roman/dev/python/AWO-72/tdms_reader.py:461) failed in 16,581
> s 15/07/03 16:36:23 INFO DAGScheduler: Job 0 failed: collect at
> /home/roman/dev/python/AWO-72/tdms_reader.py:461, took 17,456362 s
> Traceback (most recent call last): File
> "/home/roman/dev/python/AWO-72/tdms_reader.py", line 461, in <module>
> rdd.map(lambda f: read_file(f)).collect() File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/rdd.py",
> line 745, in collect File
> "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py",
> line 538, in __call__ File
> "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value py4j.protocol.Py4JJavaError: An error
> occurred while calling
> z:org.apache.spark.api.python.PythonRDD.collectAndServe. :
> org.apache.spark.SparkException: Job aborted due to stage failure:
> Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3
> in stage 0.0 (TID 6, 192.168.0.193):
> org.apache.spark.api.python.PythonException: Traceback (most recent
> call last): File
> "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",
> line 98, in main
> command = pickleSer._read_with_length(infile) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 164, in _read_with_length
> return self.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py",
> line 421, in loads
> return pickle.loads(obj) File "/home/roman/dev/spark-1.4.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py",
> line 629, in subimport
> __import__(name) ImportError: ('No module named pandas', <function subimport at 0x7fef3731cd70>, ('pandas',))
>
> at
> org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
> at
> org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
> at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244) at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63) at
> org.apache.spark.scheduler.Task.run(Task.scala:70) at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
> Driver stacktrace: at
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
> at
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at
> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
> at scala.Option.foreach(Option.scala:236) at
> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Do you have any idea what the problem is?
Best Answer
As @Holden mentioned, I would suggest checking your Python setup: if you have multiple Python versions installed, make sure the one your workers run is the right one, i.e. one that has pandas. You can specify which Python to use by adding the following to ./conf/spark-env.sh (copied from spark-env.sh.template):

export PYSPARK_PYTHON=/Users/schang/anaconda/bin/python

or whatever Python you want to use, and optionally for the driver:

export PYSPARK_DRIVER_PYTHON=/Users/schang/anaconda/bin/ipython
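
To confirm that the executors actually pick up an interpreter that has pandas, a quick sanity check can be run from the driver. This is a minimal sketch, assuming a SparkContext named sc is already connected to the cluster; it simply imports pandas inside a task and reports the interpreter path and pandas version seen on the worker:

def probe(_):
    # Runs on an executor: import pandas there and report what it sees.
    import sys
    import pandas
    return (sys.executable, pandas.__version__)

print(sc.parallelize(range(2), 2).map(probe).collect())

If the reported paths point at a system Python without pandas rather than the intended one, the executors are not using your PYSPARK_PYTHON setting; note that the setting has to be present on every worker node, not just on the driver.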
Regarding python - pandas and Spark on a cluster, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31207295/