I'm trying out some pyspark-related experiments in a Jupyter notebook attached to an AWS EMR instance. I have a Spark dataframe that reads data from S3 and then filters a few things out. Printing the schema with df1.printSchema()
produces the following output:
root
|-- idvalue: string (nullable = true)
|-- locationaccuracyhorizontal: float (nullable = true)
|-- hour: integer (nullable = true)
|-- day: integer (nullable = true)
|-- date: date (nullable = true)
|-- is_weekend: boolean (nullable = true)
|-- locationlatrad: float (nullable = true)
|-- locationlonrad: float (nullable = true)
|-- epochtimestamp: integer (nullable = true)
I'm trying to apply a pandas_udf to this dataframe
(example here). My UDF is:
@pandas_udf(df1.schema, PandasUDFType.GROUPED_MAP)
def normalize(pdf):
    hour = pdf.hour
    return pdf.assign(hour=(hour - hour.mean()) / hour.std())
The call looks like this:
df2 = df1.groupBy('idvalue') \
    .apply(normalize).show()
Unfortunately, this throws the following error:
An error occurred while calling o522.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 11.0 failed 4 times, most recent failure: Lost task 0.3 in stage 11.0 (TID 31, x.x.x.x, executor 7): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/local/lib64/python3.6/site-packages/pandas/core/indexes/accessors.py", line 256, in _make_accessor
return maybe_to_datetimelike(data)
File "/usr/local/lib64/python3.6/site-packages/pandas/core/indexes/accessors.py", line 82, in maybe_to_datetimelike
"datetimelike index".format(type(data)))
TypeError: cannot convert an object of type <class 'pandas.core.series.Series'> to a datetimelike index
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt1/yarn/usercache/livy/appcache/application_1555045880196_0210/container_1555045880196_0210_01_000013/pyspark.zip/pyspark/worker.py", line 372, in main
process()
File "/mnt1/yarn/usercache/livy/appcache/application_1555045880196_0210/container_1555045880196_0210_01_000013/pyspark.zip/pyspark/worker.py", line 367, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/mnt1/yarn/usercache/livy/appcache/application_1555045880196_0210/container_1555045880196_0210_01_000013/pyspark.zip/pyspark/serializers.py", line 283, in dump_stream
for series in iterator:
File "/mnt1/yarn/usercache/livy/appcache/application_1555045880196_0210/container_1555045880196_0210_01_000013/pyspark.zip/pyspark/serializers.py", line 301, in load_stream
yield [self.arrow_to_pandas(c) for c in pa.Table.from_batches([batch]).itercolumns()]
File "/mnt1/yarn/usercache/livy/appcache/application_1555045880196_0210/container_1555045880196_0210_01_000013/pyspark.zip/pyspark/serializers.py", line 301, in <listcomp>
yield [self.arrow_to_pandas(c) for c in pa.Table.from_batches([batch]).itercolumns()]
File "/mnt1/yarn/usercache/livy/appcache/application_1555045880196_0210/container_1555045880196_0210_01_000013/pyspark.zip/pyspark/serializers.py", line 271, in arrow_to_pandas
s = _check_series_convert_date(s, from_arrow_type(arrow_column.type))
File "/mnt1/yarn/usercache/livy/appcache/application_1555045880196_0210/container_1555045880196_0210_01_000013/pyspark.zip/pyspark/sql/types.py", line 1692, in _check_series_convert_date
return series.dt.date
File "/usr/local/lib64/python3.6/site-packages/pandas/core/generic.py", line 3610, in __getattr__
return object.__getattribute__(self, name)
File "/usr/local/lib64/python3.6/site-packages/pandas/core/accessor.py", line 54, in __get__
return self.construct_accessor(instance)
File "/usr/local/lib64/python3.6/site-packages/pandas/core/indexes/accessors.py", line 258, in _make_accessor
raise AttributeError("Can only use .dt accessor with "
AttributeError: Can only use .dt accessor with datetimelike values
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:172)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:122)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$11$$anon$1.hasNext(WholeStageCodegenExec.scala:619)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2039)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2027)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2026)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2026)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:966)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:966)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:966)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2260)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2209)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2198)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:777)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:365)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3384)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2545)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2545)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3365)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3364)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2545)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2759)
at org.apache.spark.sql.Dataset.getRows(Dataset.scala:255)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:292)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/local/lib64/python3.6/site-packages/pandas/core/indexes/accessors.py", line 256, in _make_accessor
return maybe_to_datetimelike(data)
File "/usr/local/lib64/python3.6/site-packages/pandas/core/indexes/accessors.py", line 82, in maybe_to_datetimelike
"datetimelike index".format(type(data)))
TypeError: cannot convert an object of type <class 'pandas.core.series.Series'> to a datetimelike index
I don't understand why a datetime-related error is being thrown; none of the operations I'm doing involve datetimes. Any help is appreciated.
Best Answer
I think pandas_udf does not yet support all Spark types, and your date column appears to be the problem.
One issue with any UDF is that all of the data has to be materialized for the UDF even when the UDF ignores those values, which can cause errors like this one, or at least hurt performance. All else being equal, you should try to reduce the number of columns passed to the UDF, for example by adding a select before your groupBy (note that the schema declared in @pandas_udf must then match the reduced column set):
df2 = df1.select('idvalue', 'hour').groupBy('idvalue').apply(normalize)
df2.show()
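Since the stack trace fails inside _check_series_convert_date while Arrow converts the date column back to pandas, another workaround worth trying is to cast that column away from the date type before grouping. This is a minimal sketch under that assumption; the name normalize_cast and the string cast are illustrative, not from the original answer:

from pyspark.sql import functions as F
from pyspark.sql.functions import pandas_udf, PandasUDFType

# Illustrative workaround: cast the problematic date column to string
# so Arrow never attempts the failing .dt date conversion.
df1_cast = df1.withColumn('date', F.col('date').cast('string'))

@pandas_udf(df1_cast.schema, PandasUDFType.GROUPED_MAP)
def normalize_cast(pdf):
    hour = pdf.hour
    return pdf.assign(hour=(hour - hour.mean()) / hour.std())

df1_cast.groupBy('idvalue').apply(normalize_cast).show()

If the date values are needed downstream, the column can be cast back afterwards with .cast('date').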
Regarding "python - Unable to apply pandas_udf in pyspark", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56053572/