
python - Error when running PySpark commands


I installed Spark 1.4.1 on Hadoop 2.6.0 and tried to run the following PySpark commands to count the number of lines in a file, but they raise the error below. I am new to Spark and cannot figure out what is wrong.

Can anyone provide a solution?

>>> distFile = sc.textFile("/home/hduser2/spark-1.4.1-bin-hadoop2.6/README.md")
15/12/31 09:31:50 INFO storage.MemoryStore: ensureFreeSpace(213560) called with curMem=695185, maxMem=278019440
15/12/31 09:31:50 INFO storage.MemoryStore: Block broadcast_10 stored as values in memory (estimated size 208.6 KB, free 264.3 MB)
15/12/31 09:31:50 INFO storage.MemoryStore: ensureFreeSpace(19929) called with curMem=908745, maxMem=278019440
15/12/31 09:31:50 INFO storage.MemoryStore: Block broadcast_10_piece0 stored as bytes in memory (estimated size 19.5 KB, free 264.3 MB)
15/12/31 09:31:50 INFO storage.BlockManagerInfo: Added broadcast_10_piece0 in memory on localhost:60765 (size: 19.5 KB, free: 265.1 MB)
15/12/31 09:31:50 INFO spark.SparkContext: Created broadcast 10 from textFile at NativeMethodAccessorImpl.java:-2


>>> distFile.count()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/spark/python/pyspark/rdd.py", line 984, in count
return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
File "/usr/local/spark/python/pyspark/rdd.py", line 975, in sum
return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
File "/usr/local/spark/python/pyspark/rdd.py", line 852, in fold
vals = self.mapPartitions(func).collect()
File "/usr/local/spark/python/pyspark/rdd.py", line 757, in collect
port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser2/spark-1.4.1-bin-hadoop2.6/README.md
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:885)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:884)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:378)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)

Best Answer

You say the file is on the local file system, but the error shows that Spark is looking for it on HDFS:

Input path does not exist: hdfs://localhost:9000/home/hduser2/spark-1.4.1-bin-hadoop2.6/README.md

Spark evaluates lazily, which means it does not actually read the file until the data is needed, and that happens only when count() is called. This explains why the sc.textFile(...) line itself did not raise an error.
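
You can see this laziness directly. In the minimal sketch below (the path is deliberately bogus and hypothetical), building the RDD and adding transformations never touch the input; only the action does:

>>> rdd = sc.textFile("/no/such/path")         # no error yet: only records the plan
>>> lengths = rdd.map(lambda line: len(line))  # transformations are lazy too
>>> lengths.count()  # the action: Spark resolves the path here, and here it fails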

You can either move the file to that path in HDFS, or configure the SparkContext to run in local mode.
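
Here is a sketch of the first option, plus an alternative the answer does not spell out: an explicit file:// URI, which tells Spark to read from the local filesystem even when the default filesystem is HDFS.

# Option 1: copy the local file into HDFS so the original path resolves there.
# Run these in a shell, not at the Python prompt:
#   hdfs dfs -mkdir -p /home/hduser2/spark-1.4.1-bin-hadoop2.6
#   hdfs dfs -put /home/hduser2/spark-1.4.1-bin-hadoop2.6/README.md \
#       /home/hduser2/spark-1.4.1-bin-hadoop2.6/

# Option 2: point textFile at the local copy with an explicit file:// scheme.
>>> distFile = sc.textFile("file:///home/hduser2/spark-1.4.1-bin-hadoop2.6/README.md")
>>> distFile.count()

Note that on a multi-node cluster the file:// form requires the file to exist at the same path on every worker; on a single-machine setup like this one, it is fine.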

Regarding python - Error when running PySpark commands, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34540942/
