I am trying to run a simple PySpark job on YARN. Here is the code:
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("yarn-client")
        .setAppName("HDFS Filter")
        .set("spark.executor.memory", "1g"))
sc = SparkContext(conf=conf)

inputFile = sc.textFile("hdfs://myserver:9000/1436304078054.json.gz").cache()
matchTerm = "spark"
numMatches = inputFile.filter(lambda line: matchTerm in line).count()
print(numMatches, "lines contain", matchTerm)
I don't even know whether the code works; that is not the point. The problem is that when I run it from the Spark directory with the command ./bin/pyspark ../job.py, I get the following error (just a small part of the whole output):
15/09/01 17:57:02 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on hadoop-05:44841 (size: 3.8 KB, free: 534.5 MB)
15/09/01 17:57:02 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, hadoop-05): org.apache.spark.SparkException:
Error from python worker:
/usr/bin/python2.7: No module named pyspark
PYTHONPATH was:
/usr/local/hadoop_store/tmp/nm-local-dir/usercache/hduser/filecache/16/spark-assembly-1.4.1-hadoop2.2.0.jar
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:163)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:86)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:62)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:130)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:73)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/09/01 17:57:02 INFO scheduler.TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1, hadoop-03, RACK_LOCAL, 1475 bytes)
15/09/01 17:57:04 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on hadoop-03:33268 (size: 3.8 KB, free: 534.5 MB)
15/09/01 17:57:05 WARN scheduler.TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1, hadoop-03): org.apache.spark.SparkException:
Error from python worker:
/usr/bin/python2.7: No module named pyspark
PYTHONPATH was:
/usr/local/hadoop_store/tmp/nm-local-dir/usercache/hduser/filecache/21/spark-assembly-1.4.1-hadoop2.2.0.jar
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:163)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:86)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:62)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:130)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:73)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/09/01 17:57:05 INFO scheduler.TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2, hadoop-05, RACK_LOCAL, 1475 bytes)
15/09/01 17:57:05 INFO scheduler.TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on executor hadoop-05: org.apache.spark.SparkException (
Error from python worker:
/usr/bin/python2.7: No module named pyspark
PYTHONPATH was:
/usr/local/hadoop_store/tmp/nm-local-dir/usercache/hduser/filecache/16/spark-assembly-1.4.1-hadoop2.2.0.jar
java.io.EOFException) [duplicate 1]
15/09/01 17:57:05 INFO scheduler.TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3, hadoop-05, RACK_LOCAL, 1475 bytes)
15/09/01 17:57:05 INFO scheduler.TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on executor hadoop-05: org.apache.spark.SparkException (
Error from python worker:
/usr/bin/python2.7: No module named pyspark
PYTHONPATH was:
/usr/local/hadoop_store/tmp/nm-local-dir/usercache/hduser/filecache/16/spark-assembly-1.4.1-hadoop2.2.0.jar
java.io.EOFException) [duplicate 2]
15/09/01 17:57:05 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
15/09/01 17:57:05 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/09/01 17:57:05 INFO cluster.YarnScheduler: Cancelling stage 0
15/09/01 17:57:05 INFO scheduler.DAGScheduler: ResultStage 0 (count at /home/hduser/spark-1.4.1-bin-without-hadoop/../test.py:11) failed in 5.093 s
15/09/01 17:57:05 INFO scheduler.DAGScheduler: Job 0 failed: count at /home/hduser/spark-1.4.1-bin-without-hadoop/../test.py:11, took 5.238381 s
Traceback (most recent call last):
File "/home/hduser/spark-1.4.1-bin-without-hadoop/../test.py", line 11, in <module>
numMatches = inputFile.filter(lambda line: matchTerm in line).count()
File "/home/hduser/spark-1.4.1-bin-without-hadoop/python/lib/pyspark.zip/pyspark/rdd.py", line 984, in count
File "/home/hduser/spark-1.4.1-bin-without-hadoop/python/lib/pyspark.zip/pyspark/rdd.py", line 975, in sum
File "/home/hduser/spark-1.4.1-bin-without-hadoop/python/lib/pyspark.zip/pyspark/rdd.py", line 852, in fold
File "/home/hduser/spark-1.4.1-bin-without-hadoop/python/lib/pyspark.zip/pyspark/rdd.py", line 757, in collect
File "/home/hduser/spark-1.4.1-bin-without-hadoop/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/home/hduser/spark-1.4.1-bin-without-hadoop/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, hadoop-05): org.apache.spark.SparkException:
Error from python worker:
/usr/bin/python2.7: No module named pyspark
PYTHONPATH was:
/usr/local/hadoop_store/tmp/nm-local-dir/usercache/hduser/filecache/16/spark-assembly-1.4.1-hadoop2.2.0.jar
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:163)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:86)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:62)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:130)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:73)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
15/09/01 17:57:06 INFO spark.SparkContext: Invoking stop() from shutdown hook
Finally, here is my spark-env.sh config file:
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
Any idea what I am doing wrong?
Best Answer
What solved this for me was adding a couple of extra settings to the SparkConf, which seem to ensure the workers get access to the PySpark and Py4J modules:
conf = (SparkConf()
        .setMaster("yarn-client")
        .setAppName("HDFS Filter")
        .set("spark.executor.memory", "1g")
        .set('spark.yarn.dist.files',
             'file:/usr/hdp/2.3.2.0-2950/spark/python/lib/pyspark.zip,file:/usr/hdp/2.3.2.0-2950/spark/python/lib/py4j-0.8.2.1-src.zip')
        .setExecutorEnv('PYTHONPATH', 'pyspark.zip:py4j-0.8.2.1-src.zip'))
You will need to edit the paths to suit your system.
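In case it helps, here is a minimal sketch of the same fix that builds those two settings from SPARK_HOME instead of hard-coding the HDP-specific paths above. It assumes SPARK_HOME is set on the driver machine and that the py4j zip name matches your Spark version; treat it as a starting point rather than a drop-in answer.

import glob
import os

from pyspark import SparkConf, SparkContext

# Assumption: SPARK_HOME points at the Spark installation used to launch the job.
spark_home = os.environ["SPARK_HOME"]
lib_dir = os.path.join(spark_home, "python", "lib")

# pyspark.zip plus whichever py4j-*-src.zip ships with this Spark version.
py_archives = [os.path.join(lib_dir, "pyspark.zip")] + glob.glob(os.path.join(lib_dir, "py4j-*-src.zip"))

conf = (SparkConf()
        .setMaster("yarn-client")
        .setAppName("HDFS Filter")
        .set("spark.executor.memory", "1g")
        # Ship the archives to every YARN container...
        .set("spark.yarn.dist.files", ",".join("file:" + p for p in py_archives))
        # ...and put their basenames on the executors' PYTHONPATH.
        .setExecutorEnv("PYTHONPATH", ":".join(os.path.basename(p) for p in py_archives)))

sc = SparkContext(conf=conf)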
Regarding "python - Pyspark module not found", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/32336498/