
apache-spark - Spark 2.1 PySpark error: sc.textFile("test.txt").repartition(2).collect()


Starting with a plain text file:

echo "a\nb\nc\nd" >> test.txt

With a vanilla spark-2.1.0-bin-hadoop2.7.tgz install, the following fails. The same test works on older versions of Spark:

$ bin/pyspark

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Python version 2.7.13 (default, Dec 18 2016 07:03:39)
SparkSession available as 'spark'.

>>> sc.textFile("test.txt").collect()
[u'a', u'b', u'c', u'd']

>>> sc.textFile("test.txt").repartition(2).collect()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/admin/opt/spark/spark-2.1.0-bin-hadoop2.7/python/pyspark/rdd.py", line 810, in collect
return list(_load_from_socket(port, self._jrdd_deserializer))
File "/Users/admin/opt/spark/spark-2.1.0-bin-hadoop2.7/python/pyspark/rdd.py", line 140, in _load_from_socket
for item in serializer.load_stream(rf):
File "/Users/admin/opt/spark/spark-2.1.0-bin-hadoop2.7/python/pyspark/serializers.py", line 529, in load_stream
yield self.loads(stream)
File "/Users/admin/opt/spark/spark-2.1.0-bin-hadoop2.7/python/pyspark/serializers.py", line 524, in loads
return s.decode("utf-8") if self.use_unicode else s
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte
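
The failing byte is telling: 0x80 at position 0 is the opcode that begins a pickle protocol-2 header, which is consistent with the UTF8Deserializer being handed pickled shuffle output and trying to decode it as UTF-8 text. Based on that reading, one possible workaround on 2.1.0 (an assumption made here, not part of the original answer) is to push the RDD through a Python-side transformation before repartitioning, so the pickle serializer is used on both sides of the shuffle:

    # Hypothetical workaround sketch for Spark 2.1.0: the identity map forces
    # Python-side (pickle) serialization before the shuffle, so the collected
    # stream is no longer fed to the UTF-8 deserializer.
    rdd = sc.textFile("test.txt").map(lambda line: line)
    print(rdd.repartition(2).collect())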

With the same vanilla Spark 2.1 local install and the same text file, but using the Scala-based spark-shell, the exact same command works:

$ bin/spark-shell

Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sc.textFile("test.txt").collect()
res0: Array[String] = Array(a, b, c, d)

scala> sc.textFile("test.txt").repartition(2).collect()
res1: Array[String] = Array(a, c, d, b)

Best Answer

This is a known bug. It will be fixed in the release after Spark 2.1.

Update: confirmed fixed in Spark 2.1.1.
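
A quick way to confirm that a given installation is past the affected release (a minimal sketch; the expected output assumes Spark 2.1.1 or later):

    print(sc.version)  # expect '2.1.1' or later
    # Should return all four lines, possibly reordered by the shuffle,
    # without raising UnicodeDecodeError.
    print(sc.textFile("test.txt").repartition(2).collect())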

Regarding apache-spark - Spark 2.1 PySpark error: sc.textFile("test.txt").repartition(2).collect(), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42844207/
