
sql - BigQuery - Select the top N posts per subreddit from a large table

Reprinted. Author: 行者123. Updated: 2023-12-01 15:51:16

I am doing data mining on Reddit data in Google BigQuery, and I want the top 1000 posts per subreddit, ranked by score, over the whole 2017-04 dataset. I have tried different techniques, but the result set is too large to return because of BigQuery's limits.

select body, score, subreddit
from (
  select body, score, subreddit,
    row_number() over (partition by subreddit order by score desc) mm
  from [fh-bigquery:reddit_comments.2017_04]
)
where mm <= 1000
  and subreddit in (
    select subreddit
    from (
      select count(subreddit) as counts, subreddit
      from [fh-bigquery:reddit_comments.2017_04]
      group by subreddit
      order by counts desc
      limit 10000
    )
  )
limit 10000000

Is there a way to divide and conquer this problem? Enabling "allow large results" seems to rule out doing any complex query. Does Google offer a paid option for larger query resources?

Best Answer

I wanna top 1000 posts ranked by the score for each subreddit for the whole 201704 data



I just tested this query:
SELECT 
subreddit,
ARRAY_AGG(STRUCT(body, score) ORDER BY score DESC LIMIT 1000) data
FROM `fh-bigquery.reddit_comments.2017_04`
GROUP BY 1
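The ARRAY_AGG query above returns one row per subreddit with the top comments packed into a nested array. If flat rows (one per comment) are needed downstream, the same result can be unnested. This is a sketch against the same table, a variant of the answer's query rather than part of the original answer:

```sql
-- Flatten the per-subreddit arrays back into one row per comment.
SELECT
  t.subreddit,
  d.body,
  d.score
FROM (
  SELECT
    subreddit,
    ARRAY_AGG(STRUCT(body, score) ORDER BY score DESC LIMIT 1000) AS data
  FROM `fh-bigquery.reddit_comments.2017_04`
  GROUP BY subreddit
) AS t, UNNEST(t.data) AS d
```

Because the LIMIT inside ARRAY_AGG already caps each group at 1000 rows before the unnest, this keeps memory per group bounded, which is what lets the query run where the plain window-function version hit resource limits.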

It processed the whole dataset in 22 seconds.

In your query, it looks like you want the posts and scores for only the 10000 most popular subreddits. I tried this query:
SELECT 
subreddit,
ARRAY_AGG(STRUCT(body, score) ORDER BY score DESC LIMIT 1000) data
FROM `fh-bigquery.reddit_comments.2017_04`
WHERE subreddit IN (
  SELECT subreddit FROM (
    SELECT subreddit
    FROM `fh-bigquery.reddit_comments.2017_04`
    GROUP BY subreddit
    ORDER BY COUNT(body) DESC
    LIMIT 10000
  )
)
GROUP BY 1

and got the results in 26 seconds.
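As an aside, the window-function approach from the question can also be written in standard SQL without the wrapping subquery. BigQuery's QUALIFY clause (added to the product after this answer was written) filters directly on a window function; a sketch, untested against this table:

```sql
-- Keep only the top 1000 comments per subreddit by score.
SELECT body, score, subreddit
FROM `fh-bigquery.reddit_comments.2017_04`
QUALIFY ROW_NUMBER() OVER (PARTITION BY subreddit ORDER BY score DESC) <= 1000
```

This is more readable, but the ARRAY_AGG version above may still be cheaper, since the per-group LIMIT bounds memory during aggregation.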

Hopefully these results are what you are looking for. Let me know if everything is correct.

Regarding "sql - BigQuery - Select the top N posts per subreddit from a large table", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44610637/
