
python - How to optimize an algorithm to find similar strings faster with fuzzywuzzy?


I have a problem finding similar food names in my database (there are about 100k product names). I decided to use fuzz.token_sort_ratio from the fuzzywuzzy library to find similar product names. Here is how it works:

s1 = 'Pepsi Light'
s2 = 'Light Pepsi'
fuzz.token_sort_ratio(s1, s2)

100

Now I want to find all pairs of product names with similar words, i.e. with fuzz.token_sort_ratio >= 90. Here is my code:

import pandas as pd
from datetime import datetime
from fuzzywuzzy import fuzz

# Find similar names by brute-force pairwise comparison
start = datetime.now()
l = list(v_foods.name[0:20000])
i = 0
df = pd.DataFrame(columns=['name1', 'name2', 'probab_same'])
for k in range(len(l)):
    for s in range(k+1, len(l)):
        probability = fuzz.token_sort_ratio(l[k], l[s])
        if probability >= 90:
            df.loc[i] = [l[k], l[s], probability]
            i += 1
print('Spent time: {}'.format(datetime.now() - start))
df.head(5)

This takes a lot of time, and the more products I have, the longer it takes:

  1. l = list(v_foods.name[0:5000]): spent time ~3 minutes
  2. l = list(v_foods.name[0:10000]): spent time ~13 minutes
  3. l = list(v_foods.name[0:20000]): spent time ~53 minutes

As I said above, my database has 100k names, and this runs very slowly. Is there any way to optimize my algorithm?

Best answer

Your problem is that you are comparing each name with every other name. That is n^2 comparisons, so it is going to be slow. What you need to do is only compare pairs of names that have a chance of being sufficiently similar.

To do better than that, we need to know what the library is actually doing. Thanks to this excellent answer we can tell. It calls fuzz._process_and_sort(name, True) on both names, then looks for the Levenshtein ratio. That is, it computes the best way to edit one string into the other, and then calculates 100 * matched_chars / (matched_chars + edits). For this score to reach 90+, the number of edits is at most len(name) / 9. (That condition is necessary but not sufficient: if those edits include substitutions and deletions in this string, that lowers the number of matched characters and lowers the ratio.)
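To make that concrete, here is a small sketch of the normalization and the edit budget it implies. The product name is made up, and fuzz._process_and_sort is the internal fuzzywuzzy helper the paragraph above refers to, so treat the exact output as approximate:

from fuzzywuzzy import fuzz

name = 'Pepsi Light 0.5L'                        # hypothetical product name
normalized = fuzz._process_and_sort(name, True)  # lower-case, strip punctuation, sort tokens
print(normalized)                                # roughly '0 5l light pepsi'
max_edits = len(normalized) // 9                 # at most len/9 edits can still score 90+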

So you can easily normalize all of the names. The question is: for a given normalized name, can you find all the other normalized names within a maximum number of edits of it?

The trick is to first put all of your normalized names into a Trie data structure. Then we can walk the Trie in parallel to explore all branches that are within a certain edit distance. This lets big groups of normalized names that are outside that distance be dropped without examining them individually.

Here is a Python implementation of the Trie that will let you find those pairs of normalized names.

import re

# Now we will build a trie. Every node has a list of words, and a dictionary
# from the next letter farther in the trie.
class Trie:
    def __init__(self, path=''):
        self.strings = []
        self.dict = {}
        self.count_strings = 0
        self.path = path

    def add_string (self, string):
        trie = self

        for letter in string:
            trie.count_strings += 1
            if letter not in trie.dict:
                trie.dict[letter] = Trie(trie.path + letter)
            trie = trie.dict[letter]
        trie.count_strings += 1
        trie.strings.append(string)

    def __hash__ (self):
        return id(self)

    def __repr__ (self):
        answer = self.path + ":\n count_strings:" + str(self.count_strings) + "\n strings: " + str(self.strings) + "\n dict:"
        def indent (string):
            p = re.compile("^(?!:$)", re.M)
            return p.sub(" ", string)
        for letter in sorted(self.dict.keys()):
            subtrie = self.dict[letter]
            answer = answer + indent("\n" + subtrie.__repr__())
        return answer

    def within_edits(self, string, max_edits):
        # This will be all trie/string pos pairs that we have seen
        found = set()
        # This will be all trie/string pos pairs that we start the next edit with
        start_at_edit = set()

        # At distance 0 we start with the base of the trie matching the start of the string.
        start_at_edit.add((self, 0))
        answers = []
        for edits in range(max_edits + 1): # 0..max_edits inclusive
            start_at_next_edit = set()
            todo = list(start_at_edit)
            for trie, pos in todo:
                if (trie, pos) not in found: # Have we processed this?
                    found.add((trie, pos))
                    if pos == len(string):
                        answers.extend(trie.strings) # ANSWERS FOUND HERE!!!
                        # We have to delete from the other string
                        for next_trie in trie.dict.values():
                            start_at_next_edit.add((next_trie, pos))
                    else:
                        # This string could have an insertion
                        start_at_next_edit.add((trie, pos+1))
                        for letter, next_trie in trie.dict.items():
                            # We could have had a deletion in this string
                            start_at_next_edit.add((next_trie, pos))
                            if letter == string[pos]:
                                todo.append((next_trie, pos+1)) # we matched farther
                            else:
                                # Could have been a substitution
                                start_at_next_edit.add((next_trie, pos+1))
            start_at_edit = start_at_next_edit
        return answers

# Sample usage
trie = Trie()
trie.add_string('foo')
trie.add_string('bar')
trie.add_string('baz')
print(trie.within_edits('ba', 1))
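To tie this back to the question, here is a rough sketch of how one might wire the Trie into the original task. It assumes the v_foods DataFrame from the question, the fuzz._process_and_sort normalization discussed above, and the len(normalized) // 9 edit budget derived earlier; since that bound is only a necessary condition, the final fuzz.token_sort_ratio check is kept. The variable names (normalized_to_originals, pairs, seen) are just for illustration:

from fuzzywuzzy import fuzz

names = list(v_foods.name)  # raw product names from the question

# Build the trie over normalized names; remember which originals map to each.
trie = Trie()
normalized_to_originals = {}
for name in names:
    normalized = fuzz._process_and_sort(name, True)
    if normalized not in normalized_to_originals:
        trie.add_string(normalized)
        normalized_to_originals[normalized] = []
    normalized_to_originals[normalized].append(name)

# For every name, only score the candidates the trie says are close enough.
pairs = []
seen = set()
for name in names:
    normalized = fuzz._process_and_sort(name, True)
    max_edits = len(normalized) // 9  # necessary (not sufficient) for a 90+ ratio
    for candidate in trie.within_edits(normalized, max_edits):
        for other in normalized_to_originals[candidate]:
            key = tuple(sorted((name, other)))
            if other == name or key in seen:
                continue
            seen.add(key)
            probability = fuzz.token_sort_ratio(name, other)  # confirm the score
            if probability >= 90:
                pairs.append((name, other, probability))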

On python - How to optimize an algorithm to find similar strings faster with fuzzywuzzy?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64711569/
