
python - Clustering a big file into 3 groups with Levenshtein

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 19:42:18

Hi, I have a small file and a large file. The code below works only for the small file, not the large one, so how can I read the large file and run the same processing on it? When I tried reading it and clustering inside a loop, it did not work, because each iteration only sees a single line. Here is the situation with the small file: it contains one name per line, and I need to split the names into 3 groups. I tried affinity propagation, but it takes no parameter for the number of groups; it gave me 4 groups, and the 4th group holds only a single name that is very close to one of the other groups:

0
- *Bras5emax Estates, L.T.D.
:* Bras5emax Estates, L.T.D.

1
- *BOZEMAN Enterprises
:* BBAZEMAX ESTATES, LTD
, BOZEMAN Ent.
, BOZEMAN Enterprises
, BOZERMAN ENTERPRISES
, BRAZEMAX ESTATYS, LTD
, Bozeman Enterprises

2
- *PC Adelman
:* John Smith
, Michele LTD
, Nadelman, Jr
, PC Adelman

3
- *Gramkai, Inc.
:* Gramkai Books
, Gramkai, Inc.
, Gramkat Estates, Inc., Gramkat, Inc.

Then I tried K-Means, but got this result:

0
- *Gramkai Books
, Gramkai, Inc.
, Gramkat Estates, Inc., Gramkat, Inc.
:*
1
- *BBAZEMAX ESTATES, LTD
, BOZEMAN Enterprises
, BOZERMAN ENTERPRISES
, BRAZEMAX ESTATYS, LTD
, Bozeman Enterprises
, Bras5emax Estates, L.T.D.
:*
2
- *BOZEMAN Ent.
, John Smith
, Michele LTD
, Nadelman, Jr
, PC Adelman
:*

As you can see, BOZEMAN Ent. lands in group 2 instead of group 1.

My questions: is there a way to get better clustering? And does K-Means have a cluster_center?
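On the second question: scikit-learn's KMeans does store the fitted centers, in the cluster_centers_ attribute, after fit. A minimal sketch on toy 1-D data (the data below is illustrative, not from the files in the question):

```python
import numpy as np
from sklearn.cluster import KMeans

# Three well-separated 1-D groups
X = np.array([[0.0], [0.1], [5.0], [5.1], [10.0], [10.1]])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# cluster_centers_ has shape (n_clusters, n_features): one center per cluster
print(km.cluster_centers_.shape)   # (3, 1)
print(sorted(km.cluster_centers_.ravel()))
```

Each fitted center is the mean of the points assigned to that cluster; km.labels_ holds the per-sample cluster ids.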

Code:

import numpy as np
import sklearn.cluster
import distance

f = open("names.txt", "r")
words = f.readlines()
words = np.asarray(words) #So that indexing with a list will work
lev_similarity = -1*np.array([[distance.levenshtein(w1,w2) for w1 in words] for w2 in words])
affprop = sklearn.cluster.KMeans(n_clusters=3)
affprop.fit(lev_similarity)
for cluster_id in np.unique(affprop.labels_):
    print(cluster_id)
    cluster = np.unique(words[np.nonzero(affprop.labels_ == cluster_id)])
    cluster_str = ", ".join(cluster)
    print(" - *%s:*" % cluster_str)

Best Answer

Clustering of the given text names (business names) can be improved in several ways:

  1. Introduce some text cleaning and domain knowledge, e.g. strip dots, common business stop words, and lowercase the characters:

     words = [re.sub(r"(,|\.|ltd|l\.t\.d|inc|estates|enterprises|ent|estatys)","", w.lower()).strip() for w in words]

  2. Use the "normalized" version of the Levenshtein distance, distance.nlevenshtein, so that distances can be compared meaningfully, e.g.:

     distance.nlevenshtein("abc", "acd", method=1)  # shortest alignment
     distance.nlevenshtein("abc", "acd", method=2)  # longest alignment

  3. Try other distance measures that are already normalized: sorensen and jaccard.

  4. A full code example is below:

    import re
    import numpy as np
    import distance
    import sklearn.cluster

    words = \
    ["Gramkai Books",
     "Gramkai, Inc.",
     "Gramkat Estates, Inc.",
     "Gramkat, Inc.",
     "BBAZEMAX ESTATES, LTD",
     "BOZEMAN Enterprises",
     "BOZERMAN ENTERPRISES",
     "BRAZEMAX ESTATYS, LTD",
     "Bozeman Enterprises",
     "Bras5emax Estates, L.T.D.",
     "BOZEMAN Ent.",
     "John Smith",
     "Michele LTD",
     "Nadelman, Jr",
     "PC Adelman"]

    # Clean: drop punctuation and common business stop words, then lowercase
    words = [re.sub(r"(,|\.|ltd|l\.t\.d|inc|estates|enterprises|ent|estatys)", "", w.lower()).strip() for w in words]
    words = np.asarray(words)  # so that indexing with a list will work

    # Negated normalized Levenshtein distances (shortest-alignment variant)
    lev_similarity = -1 * np.array([[distance.nlevenshtein(w1, w2, method=1) for w1 in words] for w2 in words])

    affprop = sklearn.cluster.KMeans(n_clusters=3)  # name kept from the affinity-propagation attempt
    affprop.fit(lev_similarity)
    for cluster_id in np.unique(affprop.labels_):
        print(cluster_id)
        cluster = np.unique(words[np.nonzero(affprop.labels_ == cluster_id)])
        cluster_str = ", ".join(cluster)
        print(" - *%s:*" % cluster_str)

Result:

    0
    - *john smith, michele, nadelman jr, pc adelman:*
    1
    - *bbazemax, bozeman, bozerman, bras5emax, brazemax:*
    2
    - *gramkai, gramkai books, gramkat:*
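A further option, not in the original answer: K-Means treats each row of lev_similarity as a feature vector rather than as true pairwise distances, whereas hierarchical (agglomerative) clustering works directly on a precomputed distance matrix and can be cut at exactly 3 clusters. A sketch using SciPy, with difflib's similarity ratio standing in for the distance package (the names and the grouping below are illustrative):

```python
import numpy as np
from difflib import SequenceMatcher
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

names = ["gramkai books", "gramkai", "gramkat",
         "bozeman", "bozerman",
         "pc adelman", "nadelman jr"]

# Pairwise distance = 1 - similarity ratio (a stand-in for distance.nlevenshtein)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = 1.0 - SequenceMatcher(None, names[i], names[j]).ratio()
        dist[i, j] = dist[j, i] = d

# Condense the square matrix and cut the average-linkage tree into 3 groups
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

Unlike affinity propagation, this respects the requested number of groups, and unlike K-Means it never reinterprets distances as coordinates.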

Finally, you may need to join the cleaned names back to the original names.
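One way to do that join, assuming the original (uncleaned) list is still available in the same order as the cleaned words: group the original spellings by the cluster label assigned to their cleaned counterparts (the names and labels below are illustrative):

```python
import numpy as np

original = ["Gramkai Books", "Gramkai, Inc.", "BOZEMAN Ent.", "Bozeman Enterprises"]
labels = np.array([2, 2, 1, 1])   # e.g. the fitted model's labels_, aligned with `original`

# Group the original spellings by cluster id
groups = {}
for name, label in zip(original, labels):
    groups.setdefault(int(label), []).append(name)

for cluster_id, members in sorted(groups.items()):
    print(cluster_id, "-", ", ".join(members))
```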

Regarding python - clustering a big file into 3 groups with Levenshtein, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60356797/
