
r - Extract keywords from text in R


I want to extract keywords related to insurance services from text in R. I created a list of keywords and used the common() function from the qdap package:

bag <- bag_o_words(corpus)
b <- common(bag, keywords, overlap = "all")

But the result is just the common words that occur more than once, not the keyword matches I am after.
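For comparison, a plain base-R intersection of the bag-of-words tokens with the keyword list would look something like this (a minimal sketch, assuming bag_o_words() returns a lower-cased token vector, as it does by default; single-word keywords only, since phrases such as "customer service" cannot match single tokens):

hits <- bag[bag %in% tolower(keywords)]  # keep only tokens that appear in the keyword list
table(hits)                              # frequency of each matched keyword

I also tried the RKEA package: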

keywords <- c("directasia", "directasia.com", "Frank", "frank", "OCBC", "NTUC",
              "NTUC Income", "Frank by OCBC", "customer service", "atm",
              "insurance", "claim", "agent", "premium", "policy", "customer care",
              "customer", "draft", "account", "credit", "savings", "debit", "ivr",
              "offer", "transacation", "banking", "website", "mobile", "i-safe",
              "customer", "demat", "network", "phone", "interest", "loan",
              "transfer", "deposit", "otp", "rewards", "redemption")
tmpdir <- tempfile()
dir.create(tmpdir)
model <- file.path(tmpdir, "crudeModel")
createModel(corpus,keywords,model)
extractKeywords(corpus, model)

But I get the following errors:

Error in createModel(corpus, keywords, model) : number of documents and keywords does not match

Error in .jcall(ke, "V", "extractKeyphrases", .jcall(ke, "Ljava/util/Hashtable;", : java.io.FileNotFoundException: C:\Users\Bitanshu\AppData\Local\Temp\RtmpEHu9uA\file14c4160f41c2\crudeModel (The system cannot find the file specified)

I think the second error occurs because createModel() did not succeed.

Can anyone suggest how to fix this, or an alternative approach? The text data was extracted from Twitter.

Best Answer

You can try the quanteda package. I suggest getting the GitHub version rather than the CRAN version, because I overhauled the kwic() function just two days ago. Example:

> require(quanteda)
> kwic(inaugTexts, "asia")
                                           contextPre keyword contextPost
[1841-Harrison, 8599]         or Egypt and the lesser Asia    would furnish the larger dividend
[1909-Taft, 1872]          our shores from Europe and Asia    of course reduces the necessity
[1925-Coolidge, 2215]  differences in both Europe and Asia    . But there is a
[1953-Eisenhower, 325]           the earth. Masses of Asia    have awakened to strike off
[2013-Obama, 1514]     We will support democracy from Asia    to Africa, from the
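
In current versions of quanteda (the API has changed since this answer was written), the whole keyword list from the question can also be counted per document. A minimal sketch, assuming the tweets are in a character vector txts:

require(quanteda)
toks  <- tokens(txts, remove_punct = TRUE)
dfmat <- dfm(toks)
# keep only the features that appear in the keyword list; multi-word
# entries such as "customer service" would additionally need
# tokens_compound() or a dictionary lookup to match
dfm_select(dfmat, pattern = keywords)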

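As for the first RKEA error: createModel() trains a keyphrase extraction model, so it expects one set of known keywords per training document, i.e. a list of character vectors whose length matches the number of documents, not a single flat vector for the whole corpus. A sketch along the lines of the RKEA documentation example (the per-document keyword vectors below are invented for illustration):

require(RKEA)
# one keyword vector per training document; the length of this list
# must equal the number of documents passed to createModel()
train_keywords <- list(c("insurance", "claim"),
                       c("customer service", "policy"),
                       c("premium", "agent"))
tmpdir <- tempfile()
dir.create(tmpdir)
model <- file.path(tmpdir, "insuranceModel")
createModel(corpus[1:3], train_keywords, model)
extractKeywords(corpus[4:length(corpus)], model)
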
Regarding r - extract keywords from text in R, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/32986417/
