
r - Wordcloud + Corpus error in R

Reposted. Author: 行者123. Updated: 2023-12-04 15:57:28

I want to use the wordcloud function to build a word cloud from Twitter data. I have installed the twitteR package and set up the API. After that I do the following:

bigdata <- searchTwitter("#bigdata", n=20)

bigdata_list <- sapply(bigdata, function(x) x$getText())
bigdata_corpus <- Corpus(VectorSource(bigdata_list))
bigdata_corpus <- tm_map(bigdata_corpus, content_transformer(tolower), lazy=TRUE)
bigdata_corpus <- tm_map(bigdata_corpus, removePunctuation, lazy=TRUE)
bigdata_corpus <- tm_map(bigdata_corpus,
                         function(x) removeWords(x, stopwords()), lazy=TRUE)
wordcloud(bigdata_corpus)

This produces the following error message for the wordcloud command:
Error in UseMethod("meta", x) : 
no applicable method for 'meta' applied to an object of class "try-error"
In addition: Warning messages:
1: In mclapply(x$content[i], function(d) tm_reduce(d, x$lazy$maps)) :
all scheduled cores encountered errors in user code
2: In mclapply(unname(content(x)), termFreq, control) :
all scheduled cores encountered errors in user code

I have tried different corpus commands, but I can't seem to get it right.
Any ideas?

Best Answer

You can try this:

library("tm")
# Transform your corpus into a term-document matrix
bigdata_tdm <- as.matrix(TermDocumentMatrix(bigdata_corpus))
# Get the frequency of each word
bigdata_freq <- data.frame(Words = rownames(bigdata_tdm), Freq = rowSums(bigdata_tdm), stringsAsFactors = FALSE)
# Sort by decreasing frequency
bigdata_freq <- bigdata_freq[order(bigdata_freq$Freq, decreasing = TRUE), ]
# Keep the 50 most frequent words
bigdata_freq <- bigdata_freq[1:50, ]

# Draw the wordcloud
library("wordcloud")
wordcloud(words = bigdata_freq$Words, freq = bigdata_freq$Freq)
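If you want more control over the result, wordcloud() also accepts layout and colour arguments such as random.order and colors. A small sketch, using made-up frequencies as a stand-in for bigdata_freq (the Twitter fetch is not reproducible here):

```r
library(wordcloud)
library(RColorBrewer)

# Stand-in data; in the answer above these come from bigdata_freq
words <- c("bigdata", "analytics", "rstats", "twitter", "cloud")
freq  <- c(12, 9, 7, 4, 2)

# random.order = FALSE places the most frequent words in the centre;
# brewer.pal() supplies a colour ramp mapped to frequency bins
wordcloud(words = words, freq = freq, min.freq = 1,
          random.order = FALSE, colors = brewer.pal(5, "Dark2"))
```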

This works with both tm 0.6 and wordcloud 2.5.
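The original error itself most likely comes from tm 0.6's lazy evaluation: with lazy=TRUE, a transformation that fails on a worker leaves try-error objects inside the corpus, which meta() then chokes on. Dropping lazy=TRUE and wrapping base functions in content_transformer() usually avoids this. A minimal sketch, using a small character vector as a stand-in for the question's bigdata_list:

```r
library(tm)
library(wordcloud)

# Stand-in for bigdata_list (the tweets fetched via searchTwitter)
bigdata_list <- c("Big Data is big!", "Learning #bigdata with R",
                  "Data, data, data everywhere")

bigdata_corpus <- Corpus(VectorSource(bigdata_list))
# No lazy=TRUE; base functions are wrapped in content_transformer()
# so the corpus is not replaced by try-error objects under tm 0.6
bigdata_corpus <- tm_map(bigdata_corpus, content_transformer(tolower))
bigdata_corpus <- tm_map(bigdata_corpus, removePunctuation)
bigdata_corpus <- tm_map(bigdata_corpus, removeWords, stopwords("en"))

wordcloud(bigdata_corpus, min.freq = 1)
```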

Regarding "r - Wordcloud + Corpus error in R", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/27130608/
