
python - Efficiently build a graph of words with a given Hamming distance


I want to build a graph from a list of words with a Hamming distance of (say) 1; or, to put it differently, two words are connected if they differ by only one letter (lol -> lot).

So that, given

words = ['lol', 'lot', 'bot']

the graph would be

{
  'lol' : [ 'lot' ],
  'lot' : [ 'lol', 'bot' ],
  'bot' : [ 'lot' ]
}

The easy way is to compare every word in the list with every other one and count the differing characters; sadly, this is an O(N^2) algorithm.

Which algorithm/data structure/strategy can I use to get better performance?

Also, let's assume only Latin characters are used, and all the words have the same length.

Best Answer

Assuming you store your dictionary in a set(), so that lookup is O(1) on average (worst case O(n)):

You can generate all the valid words at Hamming distance 1 from a word:

>>> import string
>>> def neighbours(word):
...     for j in range(len(word)):
...         for d in string.ascii_lowercase:
...             word1 = ''.join(d if i == j else c for i, c in enumerate(word))
...             if word1 != word and word1 in words: yield word1
...
>>> {word: list(neighbours(word)) for word in words}
{'bot': ['lot'], 'lol': ['lot'], 'lot': ['bot', 'lol']}
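For reference, here is a self-contained (non-REPL) sketch of the same idea; the slicing-based candidate construction is an equivalent variant of the ''.join(...) used above, and the function signature taking `words` as a parameter is an adaptation, not the answer's original:

```python
import string

def neighbours(word, words):
    """Yield every entry of `words` at Hamming distance exactly 1 from `word`."""
    for j in range(len(word)):
        for d in string.ascii_lowercase:
            # Replace position j with the candidate letter d.
            candidate = word[:j] + d + word[j+1:]
            if candidate != word and candidate in words:
                yield candidate

words = {'lol', 'lot', 'bot'}
graph = {w: sorted(neighbours(w, words)) for w in words}
# graph == {'bot': ['lot'], 'lol': ['lot'], 'lot': ['bot', 'lol']}
```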

If M is the length of a word and L is the length of the alphabet (i.e. 26), the worst-case time complexity of finding neighbouring words with this approach is O(L*M*N).

The time complexity of the "easy way" approach is O(N^2).

When is this approach better? When L*M < N, i.e. when considering only lowercase letters, when M < N/26. (I'm considering only the worst case here.)

Note: the average length of an English word is 5.1 letters. Thus, you should consider this approach if your dictionary size is bigger than 132 words.

It is probably possible to achieve better performance than this, but this is really simple to implement.
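One well-known candidate for that "better performance" (not part of the answer above; a hedged sketch) is the wildcard-bucket technique: replace each position of a word with a placeholder and group words by the resulting keys, so the cost per word no longer depends on the alphabet size L. The names here (`graph_buckets`, the `'_'` placeholder) are illustrative; the placeholder just has to be a character that cannot occur in the words themselves.

```python
from collections import defaultdict

def graph_buckets(words):
    """Build the Hamming-distance-1 graph by grouping words under wildcard
    keys: 'lot' -> '_ot', 'l_t', 'lo_'. Two equal-length distinct words
    differ in exactly one position iff they share a bucket, so each bucket
    contributes all its pairwise edges."""
    buckets = defaultdict(list)
    for word in words:
        for i in range(len(word)):
            buckets[word[:i] + '_' + word[i+1:]].append(word)
    graph = {word: set() for word in words}
    for members in buckets.values():
        for w1 in members:
            for w2 in members:
                if w1 != w2:
                    graph[w1].add(w2)
    return {word: sorted(ns) for word, ns in graph.items()}
```

Bucketing costs O(N*M); after that, only words that actually are neighbours are paired up, instead of probing all 26 letters per position.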

Experimental benchmark:

The "easy way" algorithm (A1):

from itertools import zip_longest

def hammingdist(w1, w2):
    return sum(1 if c1 != c2 else 0 for c1, c2 in zip_longest(w1, w2))

def graph1(words):
    return {word: [n for n in words if hammingdist(word, n) == 1] for word in words}
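As a quick sanity check, A1 reproduces the graph from the question (A1's definitions are repeated here so the snippet runs standalone):

```python
from itertools import zip_longest

def hammingdist(w1, w2):
    # Count positions where the two words differ.
    return sum(1 if c1 != c2 else 0 for c1, c2 in zip_longest(w1, w2))

def graph1(words):
    # O(N^2): compare every word against every other word.
    return {word: [n for n in words if hammingdist(word, n) == 1] for word in words}

words = ['lol', 'lot', 'bot']
assert graph1(words) == {'lol': ['lot'], 'lot': ['lol', 'bot'], 'bot': ['lot']}
```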

This algorithm (A2):

def graph2(words): return {word: list(neighbours(word)) for word in words}

Benchmark code:

import random
import string
from timeit import Timer

for dict_size in range(100, 6000, 100):
    words = set(''.join(random.choice(string.ascii_lowercase) for x in range(3)) for _ in range(dict_size))
    t1 = Timer(lambda: graph1(words)).timeit(10)
    t2 = Timer(lambda: graph2(words)).timeit(10)
    print('%d,%f,%f' % (dict_size, t1, t2))

Output:

100,0.119276,0.136940
200,0.459325,0.233766
300,0.958735,0.325848
400,1.706914,0.446965
500,2.744136,0.545569
600,3.748029,0.682245
700,5.443656,0.773449
800,6.773326,0.874296
900,8.535195,0.996929
1000,10.445875,1.126241
1100,12.510936,1.179570
...

[data plot]

I ran another benchmark with a smaller step for N to get a closer look:

10,0.002243,0.026343
20,0.010982,0.070572
30,0.023949,0.073169
40,0.035697,0.090908
50,0.057658,0.114725
60,0.079863,0.135462
70,0.107428,0.159410
80,0.142211,0.176512
90,0.182526,0.210243
100,0.217721,0.218544
110,0.268710,0.256711
120,0.334201,0.268040
130,0.383052,0.291999
140,0.427078,0.312975
150,0.501833,0.338531
160,0.637434,0.355136
170,0.635296,0.369626
180,0.698631,0.400146
190,0.904568,0.444710
200,1.024610,0.486549
210,1.008412,0.459280
220,1.056356,0.501408
...

[data plot 2]

You can see the tradeoff point is very low (around 100 for a dictionary of words of length 3). For small dictionaries the O(N^2) algorithm performs slightly better, but as N grows the O(LMN) algorithm easily beats it.

For dictionaries with longer words, the O(LMN) algorithm remains linear in N, it just has a different slope, so the tradeoff point moves slightly to the right (around 130 for length 5).

Regarding "python - Efficiently build a graph of words with a given Hamming distance", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31100623/
