
python - How to get the mean pairwise cosine similarity per group in Pandas


I have a sample dataframe as follows:

import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.array([['facebook', 'women tennis'],
              ['facebook', 'men basketball'],
              ['facebook', 'club'],
              ['apple', 'vice president'],
              ['apple', 'swimming contest']]),
    columns=['firm', 'text'])

>>> df
       firm              text
0  facebook      women tennis
1  facebook    men basketball
2  facebook              club
3     apple    vice president
4     apple  swimming contest

Now I want to compute text similarity within each firm using word embeddings. For example, the mean cosine similarity for facebook would be the average of the pairwise cosine similarities among rows 0, 1, and 2 (i.e., the mean of sim(row 0, row 1), sim(row 0, row 2), and sim(row 1, row 2)). The final dataframe should have a column ['mean_cos_between_items'] next to each row; within a firm the value is the same for every row, because it is a pairwise comparison inside that firm.

I wrote the following code:

import gensim
from gensim import utils
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
from sklearn.metrics.pairwise import cosine_similarity
from itertools import combinations

# map each word to vector space
def represent(sentence):
    vectors = []
    for word in sentence:
        try:
            vector = model.wv[word]
            vectors.append(vector)
        except KeyError:
            pass
    return np.array(vectors).mean(axis=0)

# get the average if more than one word appears in the "text" column
def document_vector(items):
    # remove out-of-vocabulary words
    doc = [word for word in items if word in model_glove.vocab]
    if doc:
        doc_vector = model_glove[doc]
        mean_vec = np.mean(doc_vector, axis=0)
    else:
        mean_vec = None
    return mean_vec

# get the mean pairwise cosine similarity score
def mean_cos_sim(grp):
    output = []
    for i, j in combinations(grp.index.tolist(), 2):
        doc_vec = document_vector(grp.iloc[i]['text'])
        if doc_vec is not None and len(doc_vec) > 0:
            sim = cosine_similarity(document_vector(grp.iloc[i]['text']).reshape(1, -1),
                                    document_vector(grp.iloc[j]['text']).reshape(1, -1))
            output.append([i, j, sim])
    return np.mean(np.array(output), axis=0)

# save the result to a new column
df['mean_cos_between_items'] = df.groupby(['firm']).apply(mean_cos_sim)

However, this code raises an error (the traceback was attached as a screenshot in the original post).

Could you help? Thanks!

Best answer

Note that sklearn.metrics.pairwise.cosine_similarity, when passed a single matrix X, automatically returns the pairwise similarities between all samples in X. That is, there is no need to construct the pairs manually.
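
For example (a minimal sketch, not part of the original answer), calling cosine_similarity on a single stacked matrix of vectors returns the full n x n similarity matrix in one go:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# three toy "document" vectors stacked into a single 3 x 2 matrix
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

sim = cosine_similarity(X)   # no second argument needed
print(sim.shape)             # (3, 3); sim[i, j] is the similarity between rows i and j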

Assuming you build the mean embeddings with something like this (I use glove-twitter-25 here),

import gensim.downloader as api

# load the pretrained GloVe model mentioned above
model_glove = api.load("glove-twitter-25")

def mean_embeddings(s):
    """Turn a list of words into their mean embedding."""
    return np.mean([model_glove.get_vector(x) for x in s], axis=0)

df["embeddings"] = df.text.str.split().apply(mean_embeddings)

which yields the following df.embeddings:

>>> df.embeddings
0 [-0.2597, -0.153495, -0.5106895, -1.070115, 0....
1 [0.0600965, 0.39806002, -0.45810497, -1.375365...
2 [-0.43819, 0.66232, 0.04611, -0.91103, 0.32231...
3 [0.1912625, 0.0066999793, -0.500785, -0.529915...
4 [-0.82556, 0.24555385, 0.38557374, -0.78941, 0...
Name: embeddings, dtype: object

You can then get the mean pairwise cosine similarity like this; the key point is that cosine_similarity can be applied directly to the properly prepared matrix of each group:

(
    df.groupby("firm").embeddings   # extract 'embeddings' for each group
      .apply(np.stack)              # turn the sequence of arrays into a proper matrix
      .apply(cosine_similarity)     # the magic: compute the pairwise similarity matrix
      .apply(np.mean)               # take the mean
)

For the model I used, the result is:

firm
apple 0.765953
facebook 0.893262
Name: embeddings, dtype: float32
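
If you also want that per-firm mean broadcast onto every row, as the question's mean_cos_between_items column asks for, one possible sketch (assuming the chain above is first stored in a variable) is:

# store the per-firm means computed by the chain above
mean_sim = (
    df.groupby("firm").embeddings
      .apply(np.stack)
      .apply(cosine_similarity)
      .apply(np.mean)
)

# map each row's firm to its group-level mean similarity
df["mean_cos_between_items"] = df["firm"].map(mean_sim)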

Regarding "python - How to get the mean pairwise cosine similarity per group in Pandas", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/71666450/
