
Scala Spark: How to filter RDD after groupby

Reposted · Author: 行者123 · Updated: 2023-12-01 10:39:28

I started with an RDD of pipe-delimited strings. After processing the data, I ended up with the following format:

((0001F46468,239394055),(7665710590658745,-414963169),0,1420276980302)
((0001F46468,239394055),(8016905020647641,183812619),1,1420347885727)
((0001F46468,239394055),(6633110906332136,294201185),1,1420398323110)
((0001F46468,239394055),(6633110906332136,294201185),0,1420451687525)
((0001F46468,239394055),(7722056727387069,1396896294),1,1420537469065)
((0001F46468,239394055),(7722056727387069,1396896294),1,1420623297340)
((0001F46468,239394055),(8045651092287275,-4814845),1,1420720722185)
((0001F46468,239394055),(5170029699836178,-1332814297),0,1420750531018)
((0001F46468,239394055),(7722056727387069,1396896294),0,1420807545137)
((0001F46468,239394055),(4784119468604853,1287554938),1,1421050087824)

To give a high-level description of the data: treat the first tuple in each record as a user ID, the second tuple as a product ID, and the third element as the user's preference for that product. (For future reference, I'll call the dataset above val userData.)

My goal is: if a user has recorded both a positive (1) and a negative (0) preference for the same product, keep only the positive record. For example, given:
((0001F46468,239394055),(6633110906332136,294201185),1,1420398323110)
((0001F46468,239394055),(6633110906332136,294201185),0,1420451687525)

I only want to keep:
((0001F46468,239394055),(6633110906332136,294201185),1,1420398323110) 

So I grouped the records by the (user, product) tuple pair, e.g. ((0001F46468,239394055),(6633110906332136,294201185)):
val groupedFiltered = userData.groupBy(x => (x._1, x._2)).map(u => {
  for (k <- u._2) {
    if (k._3 > 0)
      u
  }
})
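The likely reason this comes back empty: a Scala for-loop without yield is a statement that evaluates to Unit, so the map produces a Unit for every group rather than the group itself. A minimal sketch with plain Scala collections (hypothetical data, no Spark required) illustrating the difference:

```scala
// Hypothetical groups: key -> preferences, mirroring the grouped RDD shape.
val groups = Map("a" -> Seq(1, 0, 1), "b" -> Seq(0))

// A for-loop without `yield` evaluates to Unit, so the mapped
// result carries no data -- this mirrors the RDD code above.
val broken = groups.map { case (k, vs) =>
  for (v <- vs) { if (v > 0) v } // statement; result discarded
}
// broken is just a collection of Unit values

// With `yield`, the comprehension returns the elements it kept:
val fixed = groups.map { case (k, vs) =>
  (k, for (v <- vs if v > 0) yield v)
}
// fixed: Map(a -> Seq(1, 1), b -> Seq())
```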

But this returns empty tuples.

So I tried the following approach instead:
val groupedFiltered = userData.groupBy(x => (x._1, x._2)).flatMap(u => u._2).filter(m => m._3 > 0)
This produces output like:

((47734739656882457,-1782798434),(7585453414177905,-461779195),1,1422013413082)
((47734739656882457,-1782798434),(7585453414177905,-461779195),1,1422533237758)
((55218449094787901,-1374432022),(6227831620534109,1195766703),1,1420410603596)
((71212122719822610,-807015489),(6769904840922490,1642054117),1,1422549467554)
((75414197560031509,1830213715),(6724015489416254,-1389654186),1,1420196951100)
((60422797294995441,734266951),(6335216393920738,1528026712),1,1421161253600)
((35091051395844216,451349158),(8135854751464083,-1751839326),1,1422083101033)
((16647193023519619,990937787),(5384884550662007,-910998857),1,1420659873572)
((43355867025936022,-945669937),(7336240855866885,518993644),1,1420880078266)
((12188366927481231,-2007889717),(5336507724485344,363519858),1,1420827788022)

This looks promising, but it appears to drop every record with a 0, whereas what I want is: the 0 should only be dropped when the user has both a 1 and a 0 for the same item, in which case only the record with the 1 is kept.
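To make that failure mode concrete: filtering on _._3 > 0 after flattening discards every 0 record, including a user/product pair whose only record is a 0. A small sketch with plain Scala collections (hypothetical IDs, no Spark):

```scala
// (user, product, preference) triples; ("u1", "p2") has only a 0 record.
val recs = Seq(("u1", "p1", 1), ("u1", "p1", 0), ("u1", "p2", 0))

// The flatMap-then-filter approach: every 0 disappears,
// so the lone ("u1", "p2", 0) record is lost entirely.
val filtered = recs.groupBy(r => (r._1, r._2)).values.flatten.filter(_._3 > 0)
// filtered contains only ("u1", "p1", 1)
```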

Best Answer

You can simply keep the maximum user preference from each group:

userData
  // group by user and product
  .groupBy(x => (x._1, x._2))
  // only keep the maximum user preference per user/product
  .mapValues(_.maxBy(_._3))
  // only keep the values
  .values
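The same logic can be checked without a Spark cluster by running it on plain Scala collections, where groupBy and maxBy behave analogously (hypothetical toy data; the map step below plays the role of the RDD's mapValues(...).values):

```scala
// (user, product, preference) triples; ("u1", "p2") has only a 0 record.
val recs = Seq(("u1", "p1", 1), ("u1", "p1", 0), ("u1", "p2", 0))

// Group by (user, product) and keep the record with the maximum
// preference: a 1 wins over a 0, and a lone 0 is retained.
val result = recs
  .groupBy(r => (r._1, r._2))
  .map { case (_, vs) => vs.maxBy(_._3) }
  .toSet
// result == Set(("u1", "p1", 1), ("u1", "p2", 0))
```

On a large RDD, a keyBy followed by reduceByKey would reach the same result without materializing each group on a single executor, but the groupBy form above mirrors the answer.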

Regarding Scala Spark: How to filter RDD after groupby, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31412527/
