r - tidytext, quanteda and tm return different tf-idf scores

Reposted. Author: 行者123. Updated: 2023-12-02 03:43:41

I am trying to work with a tf-idf weighted corpus (where I expect tf to be a proportion per document rather than a raw count). I expected all the classic text-mining libraries to return the same values, but I am getting different ones. Is there a bug in my code (e.g. do I need to transpose an object?), or do the default parameters of the tf-idf calculation differ between packages?

library(tm)
library(tidytext)   # needed for unnest_tokens(), bind_tf_idf(), cast_dtm(), cast_dfm()
library(tidyverse)
library(quanteda)

df <- as.data.frame(cbind(doc = c("doc1", "doc2"),
                          text = c("the quick brown fox jumps over the lazy dog",
                                   "The quick brown foxy ox jumps over the lazy god")),
                    stringsAsFactors = F)

df.count1 <- df %>% unnest_tokens(word, text) %>%
  count(doc, word) %>%
  bind_tf_idf(word, doc, n) %>%
  select(doc, word, tf_idf) %>%
  spread(word, tf_idf, fill = 0)

df.count2 <- df %>% unnest_tokens(word, text) %>%
  count(doc, word) %>%
  cast_dtm(document = doc, term = word, value = n, weighting = weightTfIdf) %>%
  as.matrix() %>% as.data.frame()

df.count3 <- df %>% unnest_tokens(word, text) %>%
  count(doc, word) %>%
  cast_dfm(document = doc, term = word, value = n) %>%
  dfm_tfidf() %>% as.data.frame()

> df.count1
# A tibble: 2 x 12
  doc   brown    dog    fox   foxy    god jumps  lazy  over     ox quick   the
  <chr> <dbl>  <dbl>  <dbl>  <dbl>  <dbl> <dbl> <dbl> <dbl>  <dbl> <dbl> <dbl>
1 doc1      0 0.0770 0.0770 0      0          0     0     0 0          0     0
2 doc2      0 0      0      0.0693 0.0693     0     0     0 0.0693     0     0

> df.count2
brown dog fox jumps lazy over quick the foxy god ox
doc1 0 0.1111111 0.1111111 0 0 0 0 0 0.0 0.0 0.0
doc2 0 0.0000000 0.0000000 0 0 0 0 0 0.1 0.1 0.1

> df.count3
brown dog fox jumps lazy over quick the foxy god ox
doc1 0 0.30103 0.30103 0 0 0 0 0 0.00000 0.00000 0.00000
doc2 0 0.00000 0.00000 0 0 0 0 0 0.30103 0.30103 0.30103

Best answer

You have stumbled upon the differences in how term frequency is calculated.

The standard definitions:

TF: Term Frequency: TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document).

IDF: Inverse Document Frequency: IDF(t) = log(Total number of documents / Number of documents with term t in it)

Tf-idf weight is the product of these quantities TF * IDF
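
The definitions above can be sketched directly in base R. This is a minimal illustration only; `tf()`, `idf()` and `tf_idf()` are hypothetical helpers written for this answer, not functions from any of the three packages:

```r
# TF: number of times the term appears in the document / total terms in the document
tf <- function(term, doc_tokens) {
  sum(doc_tokens == term) / length(doc_tokens)
}

# IDF: log(total number of documents / number of documents containing the term);
# the log base is a parameter, which is exactly where the packages disagree
idf <- function(term, corpus_tokens, base = exp(1)) {
  n_docs <- length(corpus_tokens)
  n_with <- sum(vapply(corpus_tokens, function(d) term %in% d, logical(1)))
  log(n_docs / n_with, base = base)
}

tf_idf <- function(term, doc_tokens, corpus_tokens, base = exp(1)) {
  tf(term, doc_tokens) * idf(term, corpus_tokens, base = base)
}

# the two example documents, lower-cased and split into tokens
docs <- strsplit(tolower(c(
  "the quick brown fox jumps over the lazy dog",
  "The quick brown foxy ox jumps over the lazy god"
)), " ")

tf_idf("dog", docs[[1]], docs)  # 1/9 * log(2) = 0.07701635
```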

Seems simple, but it isn't. Let's calculate the tf_idf of the word "dog" in doc1.

First the TF for dog: that's 1 occurrence out of the 9 terms in the doc = 0.1111111

1/9 = 0.1111111

Now the IDF for dog: the log of (2 documents / 1 document containing the term). And here there are several possibilities, namely log (the natural log), log2, or log10!

log(2) = 0.6931472
log2(2) = 1
log10(2) = 0.30103

#tf_idf on log:
1/9 * log(2) = 0.07701635

#tf_idf on log2:
1/9 * log2(2) = 0.11111

#tf_idf on log10:
1/9 * log10(2) = 0.03344778

Now it gets interesting. tidytext gives you the correct weight based on the natural log. tm returns a tf_idf based on log2. I expected 0.03344778 from quanteda, because its base is log10.

But looking into quanteda, it turns out it returns the result correctly, except that by default it uses the raw count instead of the proportional count. To get everything the way it should be, try the following code:

df.count3 <- df %>% unnest_tokens(word, text) %>%
  count(doc, word) %>%
  cast_dfm(document = doc, term = word, value = n)


dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse")
Document-feature matrix of: 2 documents, 11 features (77.3% sparse).
2 x 11 sparse Matrix of class "dfm"
      features
docs   brown        dog        fox jumps lazy over quick the     foxy      god       ox
  doc1     0 0.03344778 0.03344778     0    0    0     0   0 0        0        0
  doc2     0 0          0              0    0    0     0   0 0.030103 0.030103 0.030103

That looks better; this one is based on log10.
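
The doc2 cells come out smaller than the doc1 cells because doc2 has 10 tokens rather than 9. Both values are quick to check in base R (assuming the proportional tf and log10 IDF used above):

```r
(1 / 9)  * log10(2)   # doc1 cells ("dog", "fox"):        0.03344778
(1 / 10) * log10(2)   # doc2 cells ("foxy", "god", "ox"): 0.030103
```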

If you are using quanteda and want the tidytext or tm results, you can get them by changing the base argument.

# same as tidytext (the natural log)
dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse", base = exp(1))

# same as tm (log2)
dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse", base = 2)
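
As a sanity check, the doc1/"dog" cell under each base reproduces all three packages' numbers without loading any of them (`cell` is just a throwaway helper; tf = 1/9, IDF ratio = 2/1 as derived above):

```r
cell <- function(base) (1 / 9) * log(2 / 1, base = base)

cell(exp(1))  # 0.07701635 -- the tidytext bind_tf_idf value
cell(2)       # 0.1111111  -- the tm weightTfIdf value
cell(10)      # 0.03344778 -- the quanteda value with scheme_tf = "prop"
```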

A similar question on "r - tidytext, quanteda and tm return different tf-idf scores" can be found on Stack Overflow: https://stackoverflow.com/questions/48806699/
