PySpark error when converting a DF column to a list

I'm having a problem with my Spark script.

I have dataframe df2, which is a single-column dataframe. What I want to achieve is to return only the rows of df1 whose user appears in that list.

I tried the approach below, but I get an error (shown beneath the code).

Can anyone offer a suggestion?

    listx = df2.select('user2').collect()

    df_agg = df1\
        .coalesce(1000)\
        .filter((df1.dt == 20181029) & (df1.user.isin(listx)))\
        .select('list of fields')

Traceback (most recent call last):
  File "/home/keenek1/indev/rax.py", line 31, in <module>
    .filter((df1.dt == 20181029) &(df1.imsi.isin(listx)))\
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/column.py", line 444, in isin
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/column.py", line 36, in _create_column_from_literal
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.functions.lit.
: java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [234101953127315]
    at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:77)
    at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
    at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
    at scala.util.Try.getOrElse(Try.scala:79)
    at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:162)
    at org.apache.spark.sql.functions$.typedLit(functions.scala:113)
    at org.apache.spark.sql.functions$.lit(functions.scala:96)
    at org.apache.spark.sql.functions.lit(functions.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
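
The last frames point at the root cause: collect() returns a list of Row objects, and isin() passes each element to lit(), which cannot build a literal from a Row (py4j ships it across as a java.util.ArrayList, hence the message). A minimal sketch that reproduces the failure, assuming a live SparkSession named spark and reusing the id value from the traceback:

    from pyspark.sql import functions as F

    df2 = spark.createDataFrame([(234101953127315,)], ['user2'])
    rows = df2.select('user2').collect()
    print(rows[0])   # Row(user2=234101953127315) - a Row, not a plain value
    F.lit(rows[0])   # raises Py4JJavaError: Unsupported literal type class java.util.ArrayList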

Best Answer

Not sure this is the best answer, but:

    # two single-column dfs to replicate your example:
    df1 = spark.createDataFrame([{'a': 10}])
    df2 = spark.createDataFrame([{'a': 10}, {'a': 18}])
    l1 = df1.select('a').collect()
    # l1 = [Row(a=10)] - a list of Rows, which isin does not accept:
    df2.select('*').where(df2.a.isin(l1)).show()    # this will throw an error
    df2.select('*').where(df2.a.isin([10])).show()  # this will NOT throw an error

So do it like this instead:

    from pyspark.sql import functions as F

    l2 = [item.a for item in l1]
    # l2 = [10] - plain values, which isin accepts
    df2.where(F.col('a').isin(l2)).show()

(Honestly this is a bit odd, but... there is a ticket open to support isin with single-column dataframes.)

Hope this helps, good luck!

Edit: provided the collected list is small :) For your example this becomes:

    listx = [item.user2 for item in df2.select('user2').collect()]

    df_agg = df1\
        .coalesce(1000)\
        .filter((df1.dt == 20181029) & (df1.user.isin(listx)))\
        .select('list of fields')
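
If df2 is too large to collect safely to the driver, a left semi join keeps the filtering on the executors and avoids building a Python list altogether. A sketch reusing the column names from the question ('list of fields' stands in for the real projection, as in the original snippet):

    df_agg = df1\
        .filter(df1.dt == 20181029)\
        .join(df2, df1.user == df2.user2, 'left_semi')\
        .select('list of fields')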

Regarding this PySpark error when converting a DF column to a list, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/53060961/
