
hadoop - Pig: unable to DUMP data

Reposted · Author: 可可西里 · Updated: 2023-11-01 15:10:29

I have two datasets, one for movies and one for ratings.

The movie data looks like:

MovieID#Title#Genre
1#Toy Story (1995)#Animation|Children's|Comedy
2#Jumanji (1995)#Adventure|Children's|Fantasy
3#Grumpier Old Men (1995)#Comedy|Romance

The ratings data looks like:

UserID#MovieID#Ratings#RatingsTimestamp
1#1193#5#978300760
1#661#3#978302109
1#914#3#978301968

My script is as follows:

1) movies_data = LOAD '/user/admin/MoviesDataset/movies_new.dat' USING PigStorage('#') AS (movieid:int, moviename:chararray, moviegenere:chararray);

2) ratings_data = LOAD '/user/admin/RatingsDataset/ratings_new.dat' USING PigStorage('#') AS (Userid:int, movieid:int, ratings:int, timestamp:long);

3) moviedata_ratingsdata_join = JOIN movies_data BY movieid, ratings_data BY movieid;

4) moviedata_ratingsdata_join_group = GROUP moviedata_ratingsdata_join BY movies_data.movieid;

5) moviedata_ratingsdata_averagerating = FOREACH moviedata_ratingsdata_join_group GENERATE group, AVG(moviedata_ratingsdata_join.ratings) AS Averageratings, (moviedata_ratingsdata_join.Userid) AS userid;

6) DUMP moviedata_ratingsdata_averagerating;

I get this error:

 2017-03-25 06:46:50,332 [main] ERROR org.apache.pig.tools.pigstats.PigStats - ERROR 0: org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: moviedata_ratingsdata_join_group: Local Rearrange[tuple]{int}(false) - scope-95 Operator Key: scope-95): org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: moviedata_ratingsdata_averagerating: New For Each(false,false)[bag] - scope-83 Operator Key: scope-83): org.apache.pig.backend.executionengine.ExecException: ERROR 0: Scalar has more than one row in the output. 1st : (1,Toy Story (1995),Animation|Children's|Comedy), 2nd :(2,Jumanji (1995),Adventure|Children's|Fantasy) (common cause: "JOIN" then "FOREACH ... GENERATE foo.bar" should be "foo::bar" )

If I remove line 6, the script runs successfully.

Why can't I DUMP the relation generated at line 5?

Best answer

Use the disambiguation operator ( :: ) to identify field names after a JOIN, COGROUP, CROSS, or FLATTEN operator.

The relations movies_data and ratings_data both have a column named movieid. When forming the relation moviedata_ratingsdata_join_group, use the :: operator to tell Pig which movieid column to GROUP by.
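A quick way to see the prefixed names is DESCRIBE. A sketch, assuming the question's aliases and schemas; the joined relation carries both movieid columns under their relation prefixes:

    grunt> DESCRIBE moviedata_ratingsdata_join;
    moviedata_ratingsdata_join: {movies_data::movieid: int, movies_data::moviename: chararray, movies_data::moviegenere: chararray, ratings_data::Userid: int, ratings_data::movieid: int, ratings_data::ratings: int, ratings_data::timestamp: long}

A bare movieid is ambiguous in this schema, which is why the GROUP in line 4 fails without the :: prefix.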

So your line 4 should look like:

4) moviedata_ratingsdata_join_group = GROUP moviedata_ratingsdata_join BY movies_data::movieid;
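Putting it together, the full corrected script would look like the sketch below (paths and aliases taken from the question; only the GROUP line changes). Note that ratings and Userid need no prefix in line 5, since only one column in the joined relation carries each of those names:

    movies_data = LOAD '/user/admin/MoviesDataset/movies_new.dat' USING PigStorage('#') AS (movieid:int, moviename:chararray, moviegenere:chararray);

    ratings_data = LOAD '/user/admin/RatingsDataset/ratings_new.dat' USING PigStorage('#') AS (Userid:int, movieid:int, ratings:int, timestamp:long);

    moviedata_ratingsdata_join = JOIN movies_data BY movieid, ratings_data BY movieid;

    -- disambiguate: both input relations contribute a movieid column
    moviedata_ratingsdata_join_group = GROUP moviedata_ratingsdata_join BY movies_data::movieid;

    moviedata_ratingsdata_averagerating = FOREACH moviedata_ratingsdata_join_group GENERATE group, AVG(moviedata_ratingsdata_join.ratings) AS Averageratings, moviedata_ratingsdata_join.Userid AS userid;

    DUMP moviedata_ratingsdata_averagerating;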

Regarding "hadoop - Pig: unable to DUMP data", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43013651/
