
python - Cosine similarity between consecutive pairs of whole articles from a JSON file


I want to compute the cosine similarity between consecutive pairs of articles in a JSON file. So far I have managed to do it, but... I just realized that when transforming each article into tf-idf, I am not using the terms from all the articles available in the file, only the terms from each pair. Here is the code I am using, which gives the cosine similarity coefficient for each pair of consecutive articles.

import json
import string
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer

with open('SDM_2015.json') as f:
    data = [json.loads(line) for line in f]

## Defining our functions to filter the data

# Stemmer for reducing each word to its common root
stemmer = nltk.stem.porter.PorterStemmer()

# Translation table for removing punctuation
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)

## First function, which creates the tokens
def stem_tokens(tokens):
    return [stemmer.stem(item) for item in tokens]

## Function incorporating the first one: lower-cases all words and strips punctuation (as specified above)
def normalize(text):
    return stem_tokens(nltk.word_tokenize(text.lower().translate(remove_punctuation_map)))

## Finally, a vectorizer that combines all the previous steps plus stopword removal
vectorizer = TfidfVectorizer(tokenizer=normalize, stop_words='english')

## Calculation, one pair at a time, of the cosine similarity
def foo(x, y):
    tfidf = vectorizer.fit_transform([x, y])
    return (tfidf * tfidf.T).toarray()[0, 1]

my_funcs = {}
for i in range(len(data) - 1):
    x = data[i]['body']
    y = data[i + 1]['body']
    foo.__name__ = "cosine_sim%d" % i   # func_name is Python 2 only
    my_funcs["cosine_sim%d" % i] = foo
    print(foo(x, y))
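To see why fitting the vectorizer on just two articles is a problem, here is a minimal sketch with made-up toy documents (not from the SDM_2015.json file): each pair gets its own vocabulary and its own IDF weights, so the resulting coefficients are not comparable across pairs.

from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical toy documents, for illustration only
toy_docs = ["data mining methods", "mining text data", "graph mining patterns"]

# Fitting on a single pair: vocabulary and IDF come from two documents only
pair_vec = TfidfVectorizer()
pair_vec.fit([toy_docs[0], toy_docs[1]])
print(sorted(pair_vec.vocabulary_))   # terms of the pair only

# Fitting on the whole corpus: one shared vocabulary and IDF for every pair
full_vec = TfidfVectorizer()
full_vec.fit(toy_docs)
print(sorted(full_vec.vocabulary_))   # terms of all documents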

Any ideas on how to compute the cosine similarity using the full set of terms from all the articles available in the JSON file, rather than only the terms of each pair?

Kind regards,

Andrés

Best answer

Based on our discussion above, I think you need to change the foo function and everything below it. See the code below. Note that I haven't actually run it, since I don't have your data and no sample rows were provided.

## Loading the packages needed:
import json
import string
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

with open('SDM_2015.json') as f:
    data = [json.loads(line) for line in f]

## Defining our functions to filter the data

# Stemmer for reducing each word to its common root
stemmer = nltk.stem.porter.PorterStemmer()

# Translation table for removing punctuation
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)

## First function, which creates the tokens
def stem_tokens(tokens):
    return [stemmer.stem(item) for item in tokens]

## Function incorporating the first one: lower-cases all words and strips punctuation (as specified above)
def normalize(text):
    return stem_tokens(nltk.word_tokenize(text.lower().translate(remove_punctuation_map)))

## tfidf fitted on ALL article bodies at once, so the vocabulary and
## IDF weights come from the whole corpus rather than from each pair
vectorizer = TfidfVectorizer(tokenizer=normalize, stop_words='english')
tfidf_data = vectorizer.fit_transform(article['body'] for article in data)

# Cosine similarities between every pair of articles
similarity_matrix = cosine_similarity(tfidf_data)
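With the full matrix in hand, the consecutive-pair coefficients the question originally printed are just the first off-diagonal. A short sketch, assuming similarity_matrix and data as defined above:

# Consecutive-pair similarities: entry (i, i+1) of the matrix
for i in range(len(data) - 1):
    print("cosine_sim%d: %.4f" % (i, similarity_matrix[i, i + 1]))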

Regarding python - cosine similarity between consecutive pairs of whole articles from a JSON file, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36774557/
