
python - Custom partitioner in Pyspark 2.1.0


I've read that RDDs with the same partitioner will be co-located. This matters to me because I want to join several large, unpartitioned Hive tables. My theory is that if I can get them partitioned (by a field called date_day) and co-located, then I would avoid shuffling.

Here is what I'm trying to do for each table:

import datetime

def date_day_partitioner(key):
    # Partition index = number of days since 2017-05-01
    return (key.date_day - datetime.date(2017, 5, 1)).days

df = sqlContext.sql("select * from hive.table")
rdd = df.rdd
rdd2 = rdd.partitionBy(100, date_day_partitioner)
df2 = sqlContext.createDataFrame(rdd2, df_log_entry.schema)

print df2.count()

Unfortunately, I can't even test my theory about co-location and avoiding shuffles, because I get the following error when I try partitionBy: ValueError: too many values to unpack

Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-118755547579363441.py", line 346, in <module>
raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-118755547579363441.py", line 339, in <module>
exec(code)
File "<stdin>", line 15, in <module>
File "/usr/lib/spark/python/pyspark/sql/dataframe.py", line 380, in count
return int(self._jdf.count())
File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/lib/spark/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o115.count.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 21 in stage 6.0 failed 4 times, most recent failure: Lost task 21.3 in stage 6.0 (TID 182, ip-172-31-49-209.ec2.internal, executor 3): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/mnt/yarn/usercache/zeppelin/appcache/application_1509802099365_0013/container_1509802099365_0013_01_000007/pyspark.zip/pyspark/worker.py", line 174, in main
process()
File "/mnt/yarn/usercache/zeppelin/appcache/application_1509802099365_0013/container_1509802099365_0013_01_000007/pyspark.zip/pyspark/worker.py", line 169, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/mnt/yarn/usercache/zeppelin/appcache/application_1509802099365_0013/container_1509802099365_0013_01_000007/pyspark.zip/pyspark/serializers.py", line 138, in dump_stream
for obj in iterator:
File "/usr/lib/spark/python/pyspark/rdd.py", line 1752, in add_shuffle_key
ValueError: too many values to unpack
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:390)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
...

I must be doing something wrong. Could you help?

Best Answer

That's because you are not applying partitionBy on a key-value pair rdd. Your rdd must consist of key-value pairs. Also, your key type should be an integer. I don't have sample data from your hive table, so let's demonstrate the point using the hive table below.

I created the following dataframe from a hive table:

df = spark.table("udb.emp_details_table")
df.show()
+------+--------+--------+----------------+
|emp_id|emp_name|emp_dept|emp_joining_date|
+------+--------+--------+----------------+
|     1|     AAA|      HR|      2018-12-06|
|     1|     BBB|      HR|      2017-10-26|
|     2|     XXX|   ADMIN|      2018-10-22|
|     2|     YYY|   ADMIN|      2015-10-19|
|     2|     ZZZ|      IT|      2018-05-14|
|     3|     GGG|      HR|      2018-06-30|
+------+--------+--------+----------------+

Now, I want my dataframe partitioned so that similar keys stay in the same partition. So I converted the dataframe to an rdd, since you can only apply partitionBy on an rdd for this kind of repartitioning.

myrdd = df.rdd
newrdd = myrdd.partitionBy(10, lambda k: int(k[0]))
newrdd.take(10)

It fails with the same error:

 File "/usr/hdp/current/spark2-client/python/pyspark/rdd.py", line 1767, in add_shuffle_key
for k, v in iterator:
ValueError: too many values to unpack

So we need to convert our rdd into key-value pairs in order to use partitionBy:

keypair_rdd = myrdd.map(lambda x : (x[0],x[1:]))

Now you can see that the rdd has been converted into key-value pairs, so the data can be distributed across partitions according to the available keys:

[(u'1', (u'AAA', u'HR', datetime.date(2018, 12, 6))), 
(u'1', (u'BBB', u'HR', datetime.date(2017, 10, 26))),
(u'2', (u'XXX', u'ADMIN', datetime.date(2018, 10, 22))),
(u'2', (u'YYY', u'ADMIN', datetime.date(2015, 10, 19))),
(u'2', (u'ZZZ', u'IT', datetime.date(2018, 5, 14))),
(u'3', (u'GGG', u'HR', datetime.date(2018, 6, 30)))]

Now apply partitionBy on the key-value rdd:

newrdd = keypair_rdd.partitionBy(5,lambda k: int(k[0]))

Let's take a look at the partitions. The data is grouped, and similar keys are now stored in the same partitions; two of them are empty.

>>> print("Partitions structure: {}".format(newrdd.glom().map(len).collect()))
Partitions structure: [0, 2, 3, 1, 0]

Now suppose I want to partition my data in a custom way. So I created the function below to keep keys '1' and '3' in the same partition:

def partitionFunc(key):
    import random
    # Keys 1 and 3 always go to partition 0; any other key is spread
    # randomly across partitions 1 and 2
    if key == 1 or key == 3:
        return 0
    else:
        return random.randint(1, 2)

newrdd = keypair_rdd.partitionBy(5,lambda k: partitionFunc(int(k[0])))

>>> print("Partitions structure: {}".format(newrdd.glom().map(len).collect()))
Partitions structure: [3, 3, 0, 0, 0]

As you can see, keys 1 and 3 are now stored in one partition, and the rest reside in another.
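If you want to double-check which keys ended up in which partition, and not just how many records each one holds, one optional check is to collect the keys per partition with glom(); a small sketch:

# Collect only the keys from each partition to see how they were grouped
keys_per_partition = newrdd.glom().map(lambda part: [k for k, _ in part]).collect()
print("Keys per partition: {}".format(keys_per_partition))
# e.g. [[u'1', u'1', u'3'], [u'2', u'2', u'2'], [], [], []]
# (the '2' rows are spread over partitions 1 and 2 at random)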

Hope this helps. You can try this with your dataframe: make sure you convert it into key-value pairs and keep the key as an integer type.
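Applied back to the original question, a minimal sketch (not tested against the real table) could look like the following. It assumes the Hive table has a date_day column of DATE type, so each row comes back with a datetime.date; each row is keyed by date_day before calling partitionBy, and the key is dropped again when rebuilding the DataFrame:

import datetime

def date_day_partitioner(date_day):
    # Partition index = number of days since 2017-05-01.
    # PySpark applies this value modulo the partition count, so returning
    # the raw day offset is fine.
    return (date_day - datetime.date(2017, 5, 1)).days

df = sqlContext.sql("select * from hive.table")

# Key each row by date_day so partitionBy receives (key, value) pairs
keypair_rdd = df.rdd.map(lambda row: (row.date_day, row))

# Partition on the key only; the partition function must return an int
rdd2 = keypair_rdd.partitionBy(100, date_day_partitioner)

# Drop the key again before rebuilding the DataFrame with the original schema
df2 = sqlContext.createDataFrame(rdd2.map(lambda kv: kv[1]), df.schema)

print df2.count()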

On python - Custom partitioner in Pyspark 2.1.0, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47116294/
