
nlp - Difference between most_similar and similar_by_vector in Gensim word2vec?


I am confused by the results of most_similar and similar_by_vector in gensim's Word2VecKeyedVectors. They are supposed to compute cosine similarity in the same way, but:

Running them on a single word gives the same results, for example:
model.most_similar(['obama']) and model.similar_by_vector(model['obama'])

But if I give it an equation:

model.most_similar(positive=['king', 'woman'], negative=['man'])

it gives:
[('queen', 0.7515910863876343), ('monarch', 0.6741327047348022), ('princess', 0.6713887453079224), ('kings', 0.6698989868164062), ('kingdom', 0.5971318483352661), ('royal', 0.5921063423156738), ('uncrowned', 0.5911505818367004), ('prince', 0.5909028053283691), ('lady', 0.5904011130332947), ('monarchs', 0.5884358286857605)]

while using:
q = model['king'] - model['man'] + model['woman']
model.similar_by_vector(q)

it gives:
[('king', 0.8655095100402832), ('queen', 0.7673765420913696), ('monarch', 0.695580005645752), ('kings', 0.6929547786712646), ('princess', 0.6909604668617249), ('woman', 0.6528975963592529), ('lady', 0.6286187767982483), ('prince', 0.6222133636474609), ('kingdom', 0.6208546161651611), ('royal', 0.6090123653411865)]

The cosine similarities of words such as queen, monarch, etc. differ noticeably between the two. I would like to know why.

Thanks!

Best Answer

Functions such as most_similar retrieve the vectors corresponding to "king", "woman" and "man" and normalize them before computing king - man + woman (source code: use_norm=True).
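
In other words, the two approaches build the query from different vectors. A minimal sketch of the distinction (assuming model is a loaded gensim KeyedVectors; unit is a helper defined here, not a gensim function):

import numpy as np

def unit(v):
    # L2-normalize a vector, as most_similar does with each string input
    return v / np.linalg.norm(v)

# What most_similar(positive=['king', 'woman'], negative=['man']) combines:
q_norm = unit(model['king']) + unit(model['woman']) - unit(model['man'])

# What the hand-built query passed to similar_by_vector combines (raw vectors):
q_raw = model['king'] + model['woman'] - model['man']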

The call model.similar_by_vector(v) simply invokes model.most_similar(positive=[v]). So the difference comes only from most_similar behaving differently depending on the type of its input (strings or vectors).
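
This delegation can be checked directly (a minimal sketch; the two calls should return identical lists, since one simply forwards to the other):

q = model['king'] - model['man'] + model['woman']
print(model.similar_by_vector(q, topn=5))
print(model.most_similar(positive=[q], topn=5))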

Finally, when most_similar is given string inputs, it removes those words from the output (which is why "king" does not appear in its results).

A bit of code to see the difference:

>>> un = False  # raw (un-normalized) word vectors
>>> v = model.word_vec("king", use_norm=un) + model.word_vec("woman", use_norm=un) - model.word_vec("man", use_norm=un)
>>> un = True  # unit-normalized word vectors, as most_similar uses internally
>>> v2 = model.word_vec("king", use_norm=un) + model.word_vec("woman", use_norm=un) - model.word_vec("man", use_norm=un)
>>> model.most_similar(positive=[v], topn=6)
[('king', 0.8449392318725586), ('queen', 0.7300517559051514), ('monarch', 0.6454660892486572), ('princess', 0.6156251430511475), ('crown_prince', 0.5818676948547363), ('prince', 0.5777117609977722)]
>>> model.most_similar(positive=[v2], topn=6)
[('king', 0.7992597222328186), ('queen', 0.7118192911148071), ('monarch', 0.6189674139022827), ('princess', 0.5902431011199951), ('crown_prince', 0.5499460697174072), ('prince', 0.5377321243286133)]
>>> model.most_similar(positive=["king", "woman"], negative=["man"], topn=6)
[('queen', 0.7118192911148071), ('monarch', 0.6189674139022827), ('princess', 0.5902431011199951), ('crown_prince', 0.5499460697174072), ('prince', 0.5377321243286133), ('kings', 0.5236844420433044)]
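
To also reproduce the last point, the filtering of input words, the normalized query can be ranked against every word vector and the inputs dropped by hand. This is only a sketch against the older gensim 3.x attributes (vectors_norm, index2word; in gensim 4.x they become get_normed_vectors() and index_to_key):

import numpy as np

model.init_sims()                  # build the unit-normalized matrix vectors_norm
query = v2 / np.linalg.norm(v2)    # v2 from above: sum of unit-normalized inputs

sims = model.vectors_norm @ query  # cosine similarity to every word
order = np.argsort(-sims)
inputs = {"king", "woman", "man"}

# Keep the best matches that are not among the input words, as most_similar does
top = [(model.index2word[i], float(sims[i]))
       for i in order if model.index2word[i] not in inputs][:6]
print(top)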

Regarding "nlp - Difference between most_similar and similar_by_vector in Gensim word2vec?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50275623/
