I am trying to create a dictionary from a list in pyspark. I have the following list:
rawPositions
which gives
[[1009794, 'LPF6 Comdty', 'BC22', 'Enterprise', 3.0, 3904.125, 390412.5],
[1009794, 'LPF6 Comdty', 'BC22', 'Enterprise', 3.0, 3900.75, 390075.0],
[1009794, 'LPF6 Comdty', 'BC22', 'Enterprise', 3.0, 3882.5625, 388256.25],
[1009794, 'LPF6 Comdty', 'BC22', 'Enterprise', 3.0, 3926.25, 392625.0],
[2766232,
'CDX IG CDSI S25 V1 5Y CBBT CORP',
'BC85',
'Enterprise',
30000000.0,
-16323.2439825,
30000000.0],
[2766232,
'CDX IG CDSI S25 V1 5Y CBBT CORP',
'BC85',
'Enterprise',
30000000.0,
-16928.620101900004,
30000000.0],
[1009804, 'LPM6 Comdty', 'BC29', 'Jet', 105.0, 129596.25, 12959625.0],
[1009804, 'LPM6 Comdty', 'BC29', 'Jet', 128.0, 162112.0, 16211200.0],
[1009804, 'LPM6 Comdty', 'BC29', 'Jet', 135.0, 167146.875, 16714687.5],
[1009804, 'LPM6 Comdty', 'BC29', 'Jet', 109.0, 132884.625, 13288462.5]]
Then, using my SparkContext variable sc, I parallelize the list:
i = sc.parallelize(rawPositions)
#i.collect()
Then I try to convert it into a dictionary by calling groupBy on the element at index 3 of each list entry:
j = i.groupBy(lambda x: x[3])
j.collect()
which gives
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-143-6113a75f0a9e> in <module>()
2 #i.collect()
3 j = i.groupBy(lambda x: x[3])
----> 4 j.collect()
/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.py in collect(self)
769 """
770 with SCCallSiteSync(self.context) as css:
--> 771 port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
772 return list(_load_from_socket(port, self._jrdd_deserializer))
773
/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
811 answer = self.gateway_client.send_command(command)
812 return_value = get_return_value(
--> 813 answer, self.gateway_client, self.target_id, self.name)
814
815 for temp_arg in temp_args:
/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/pyspark/sql/utils.py in deco(*a, **kw)
43 def deco(*a, **kw):
44 try:
---> 45 return f(*a, **kw)
46 except py4j.protocol.Py4JJavaError as e:
47 s = e.java_exception.toString()
/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
306 raise Py4JJavaError(
307 "An error occurred while calling {0}{1}{2}.\n".
--> 308 format(target_id, ".", name), value)
309 else:
310 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 14 in stage 50.0 failed 4 times, most recent failure: Lost task 14.3 in stage 50.0 (TID 7583, brllxhtce01.bluecrest.local): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 133, in dump_stream
for obj in iterator:
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.py", line 1703, in add_shuffle_key
buckets[partitionFunc(k) % numPartitions].append((k, v))
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/rdd.py", line 74, in portable_hash
raise Exception("Randomness of hash of string should be disabled via PYTHONHASHSEED")
Exception: Randomness of hash of string should be disabled via PYTHONHASHSEED
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 133, in dump_stream
for obj in iterator:
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.py", line 1703, in add_shuffle_key
buckets[partitionFunc(k) % numPartitions].append((k, v))
File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/rdd.py", line 74, in portable_hash
raise Exception("Randomness of hash of string should be disabled via PYTHONHASHSEED")
Exception: Randomness of hash of string should be disabled via PYTHONHASHSEED
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
I have no idea what this error is referring to... any help would be great!
Best Answer
Since Python 3.2.3+, the hashes of str, byte and datetime objects in Python are salted with a random value to prevent certain kinds of denial-of-service attacks. This means hash values are consistent within a single interpreter session but differ between sessions. PYTHONHASHSEED sets the RNG seed so that the values stay consistent across sessions.
You can easily check this in the shell. If PYTHONHASHSEED is not set, you get random values:
unset PYTHONHASHSEED
for i in `seq 1 3`;
do
python3 -c "print(hash('foo'))";
done
## -7298483006336914254
## -6081529125171670673
## -3642265530762908581
But once it is set, you get the same value on every execution:
export PYTHONHASHSEED=323
for i in `seq 1 3`;
do
python3 -c "print(hash('foo'))";
done
## 8902216175227028661
## 8902216175227028661
## 8902216175227028661
Since groupBy and other operations that depend on the default partitioner use hashing, you need the same PYTHONHASHSEED value on every machine in the cluster to get consistent results.
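As an aside (not part of the original answer), a minimal sketch of one way to propagate a fixed seed to the executors from PySpark itself is Spark's documented spark.executorEnv.[EnvironmentVariableName] setting; the driver process itself still needs PYTHONHASHSEED exported before Python starts, as in the shell snippets above. The seed value and application name below are arbitrary, and the two sample rows are taken from the question:
# Sketch only: pin PYTHONHASHSEED on every executor so hash-based operations
# such as groupBy() partition keys consistently across the cluster.
# Assumes PYTHONHASHSEED is also exported in the driver's environment before launch.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("rawPositions-groupBy")                    # arbitrary name
        .set("spark.executorEnv.PYTHONHASHSEED", "323"))       # same seed on all workers
sc = SparkContext(conf=conf)

rawPositions = [
    [1009794, 'LPF6 Comdty', 'BC22', 'Enterprise', 3.0, 3904.125, 390412.5],
    [1009804, 'LPM6 Comdty', 'BC29', 'Jet', 105.0, 129596.25, 12959625.0],
]  # two rows from the question, for illustration

i = sc.parallelize(rawPositions)

# With a stable hash the original groupBy works; mapValues(list) + collectAsMap()
# then yields the plain Python dict the question was after, keyed by the index-3 field.
positions_by_group = i.groupBy(lambda x: x[3]).mapValues(list).collectAsMap()
# e.g. positions_by_group['Jet'] -> [[1009804, 'LPM6 Comdty', 'BC29', 'Jet', ...]]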
See also:
Regarding "python-3.x - What does Exception: Randomness of hash of string should be disabled via PYTHONHASHSEED mean in pyspark?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/36798833/