Given a PySpark DataFrame given_df, I need to generate a new DataFrame new_df from it.
I am trying to process given_df row by row using the foreach() method. For simplicity, let's say both DataFrames, given_df and new_df, consist of a single column.
I have to process each row of given_df and, depending on the value in that cell, create some new rows and append them to new_df by calling union() with the new Rows. The number of rows generated from a single row of given_df is variable.
# Create an empty DataFrame initially.
new_df = spark.createDataFrame([], schema=['SampleField'])

def func(row):
    rows_to_append = getNewRowsAfterProcessingCurrentRow(row)
    # Without this line, Python treats new_df on the next line as an
    # undefined local variable and raises an error.
    global new_df
    new_df = new_df.union(spark.createDataFrame(data=rows_to_append, schema=['SampleField']))

# given_df already contains some loaded data. Run func for each of its rows.
given_df.foreach(func)
However, this results in a PicklingError:
PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/serializers.py", line 476, in dumps
    return cloudpickle.dumps(obj, pickle_protocol)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 1097, in dumps
    cp.dump(obj)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 356, in dump
    return Pickler.dump(self, obj)
  ... (many near-identical save_function / save_dict / save_list frames omitted) ...
  File "/databricks/python/lib/python3.7/pickle.py", line 524, in save
    rv = reduce(self.proto)
  File "/databricks/spark/python/pyspark/context.py", line 356, in __getnewargs__
    "It appears that you are attempting to reference SparkContext from a broadcast "
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
To better illustrate what I am trying to do, here is a possible use case:
given_df is a DataFrame of sentences, where each sentence consists of words separated by spaces.

given_df = spark.createDataFrame([("The old brown fox",), ("jumps over",), ("the lazy log",)], schema=["SampleField"])

new_df is a DataFrame containing each word on a separate row. So we process each row of given_df, split it into the words it contains, and insert each word as its own row into new_df.

new_df = spark.createDataFrame([("The",), ("old",), ("brown",), ("fox",), ("jumps",), ("over",), ("the",), ("lazy",), ("log",)], schema=["SampleField"])
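The helper getNewRowsAfterProcessingCurrentRow is never shown in the question; for this use case it would presumably split a sentence into one-column rows, one per word. A minimal plain-Python sketch of that assumed behavior (the function body here is hypothetical):

```python
def get_new_rows_after_processing_current_row(sentence):
    # Hypothetical stand-in for getNewRowsAfterProcessingCurrentRow:
    # split the sentence on whitespace and wrap each word in a
    # one-element tuple, matching the single-column row shape.
    return [(word,) for word in sentence.split()]

rows = get_new_rows_after_processing_current_row("The old brown fox")
print(rows)  # [('The',), ('old',), ('brown',), ('fox',)]
```

Each input row thus yields a variable number of output rows, which is exactly the one-to-many mapping the question describes.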
Best Answer
You are trying to use the DataFrame API on the executors, which is not allowed; hence the PicklingError:
PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Use RDD.flatMap instead, or, if you prefer the DataFrame API, the explode() function.
given_df = spark.createDataFrame([("The old brown fox",), ("jumps over",), ("the lazy log",)], schema=["SampleField"])

from pyspark.sql.functions import udf, explode
from pyspark.sql.types import ArrayType, StringType

@udf(returnType=ArrayType(StringType()))
def getNewRowsAfterProcessingCurrentRow(s):
    return s.split()

new_df = given_df \
    .select(explode(getNewRowsAfterProcessingCurrentRow("SampleField")).alias("SampleField")) \
    .unionAll(given_df)

new_df.show()
getNewRowsAfterProcessingCurrentRow() is wrapped in udf(), which simply makes your function available in the DataFrame API. It is then used inside explode(), which is needed because you want to "explode" (or transpose) each split sentence into multiple rows, one word per row. Finally, the result is unioned with given_df:

+-----------------+
| SampleField|
+-----------------+
| The|
| old|
| brown|
| fox|
| jumps|
| over|
| the|
| lazy|
| log|
|The old brown fox|
| jumps over|
| the lazy log|
+-----------------+
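The RDD.flatMap route mentioned above is not spelled out in the answer. Its one-to-many semantics can be emulated in plain Python (assuming the same whitespace-split processing); on the actual DataFrame, a hedged equivalent would be something like given_df.rdd.flatMap(lambda row: row.SampleField.split()).map(lambda w: (w,)).toDF(["SampleField"]).

```python
from itertools import chain

def flat_map(f, items):
    # flatMap applies a function that returns a list to every element and
    # concatenates the results -- the same one-to-many mapping RDD.flatMap
    # performs, just locally and without Spark.
    return list(chain.from_iterable(f(x) for x in items))

sentences = ["The old brown fox", "jumps over", "the lazy log"]
words = flat_map(lambda s: s.split(), sentences)
print(words)
# ['The', 'old', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'log']
```

Because the transformation runs entirely inside the mapper function, no driver-side object (like new_df or SparkContext) is captured in the closure, which is what avoids the PicklingError.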
Regarding "python - Pickle error when creating a new PySpark DataFrame by processing an old DataFrame with the foreach method", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/66694369/