
python-2.7 - Filtering out tokens that appear exactly once in a gensim dictionary


The gensim dictionary object has a very nice filter function that removes tokens appearing in fewer than a set number of documents. However, I want to remove tokens that appear exactly once in the corpus. Does anyone know a quick and easy way to do this?
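
(For reference, the built-in filter being referred to is presumably Dictionary.filter_extremes, which works on document frequency rather than total corpus counts; a minimal sketch, assuming an existing gensim Dictionary named dictionary:)

# sketch: gensim's built-in filter counts documents, not total occurrences
# (assumes `dictionary` is an existing gensim.corpora.Dictionary)
dictionary.filter_extremes(no_below=2, no_above=1.0, keep_n=None)  # drop tokens seen in fewer than 2 documents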

Best Answer

You should probably include some reproducible code in your question; here, I'll use the documents from your previous post. We can accomplish your goal without using gensim at all.

from collections import defaultdict

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# count how often each word appears across the whole corpus
d = defaultdict(int)
for text in texts:
    for word in text:
        d[word] += 1

# keep only words that appear more than once
tokens = set(key for key, value in d.items() if value > 1)
texts = [[word for word in text if word in tokens] for text in texts]
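
From here, if you do want a gensim dictionary, the filtered texts can be fed straight in; a minimal sketch, assuming gensim is installed:

from gensim import corpora

# build the dictionary and bag-of-words corpus from the already-filtered texts
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
print (dictionary)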


To add some more information, though, the gensim tutorial has a more memory-efficient technique than the approach above. I've added some print statements so you can see what happens at each step. Your specific question is answered in the DICTERATOR step; I realize the rest may be overkill for your question, but if you need to do any kind of topic modeling, this is a step in the right direction.

$ cat mycorpus.txt

Human machine interface for lab abc computer applications
A survey of user opinion of computer system response time
The EPS user interface management system
System and human system engineering testing of EPS
Relation of user perceived response time to error measurement
The generation of random binary unordered trees
The intersection graph of paths in trees
Graph minors IV Widths of trees and well quasi ordering
Graph minors A survey

Run the following create_corpus.py:

#!/usr/bin/env python
from gensim import corpora, models, similarities

stoplist = set('for a of the and to in'.split())

class MyCorpus(object):
    def __iter__(self):
        for line in open('mycorpus.txt'):
            # assume there's one document per line, tokens separated by whitespace
            yield dictionary.doc2bow(line.lower().split())

# TOKENIZERATOR: collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
print (dictionary)
print (dictionary.token2id)

# DICTERATOR: remove stop words and words that appear only once
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
            if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in dictionary.dfs.iteritems() if docfreq == 1]
dictionary.filter_tokens(stop_ids + once_ids)
print (dictionary)
print (dictionary.token2id)

dictionary.compactify()  # remove gaps in id sequence after words that were removed
print (dictionary)
print (dictionary.token2id)

# VECTORERATOR: map token frequencies per doc to vectors
corpus_memory_friendly = MyCorpus()  # doesn't load the corpus into memory!
for item in corpus_memory_friendly:
    print (item)
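
If you then plan to do topic modeling on top of this, a common next step is to persist the dictionary and the streamed corpus to disk; a minimal sketch using standard gensim calls (the file names are just examples):

# sketch: save the dictionary and stream-serialize the corpus for later reuse
dictionary.save('mycorpus.dict')
corpora.MmCorpus.serialize('mycorpus.mm', corpus_memory_friendly)

# both can be loaded back later, e.g. to train an LDA model
loaded_dict = corpora.Dictionary.load('mycorpus.dict')
loaded_corpus = corpora.MmCorpus('mycorpus.mm')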

Good luck!

Regarding "python-2.7 - Filtering out tokens that appear exactly once in a gensim dictionary", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/22079418/
