
python - Find the minimal list of unique n-grams in a list of strings


I have a list of 50K strings (city names) and I need a minimal list of character trigrams (preferably n-grams) such that every string is hit at least once by at least one trigram. Consider the following list: ['amsterdam','rotterdam','haarlem','utrecht','groningen']

A list of identifying trigrams is 4 items long and could be, for example (alternatives are possible):

['ter', 'haa', 'utr', 'gro']

I think my solution below finds the correct answer, but it gives wrong answers when used on other lists.

from collections import Counter

def identifying_grams(list, n=3):

    def f7(seq):
        # order-preserving de-duplication of a sequence
        seen = set()
        seen_add = seen.add
        return [x for x in seq if not (x in seen or seen_add(x))]

    def ngrams(text, n=3):
        # all character n-grams of a string
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    hits = []
    trigrams = []
    for item in list:
        # trigrams += ngrams(item)
        trigrams += f7(ngrams(item))

    counts = Counter(trigrams).most_common()

    for trigram, count in counts:
        items = []
        for item in list:
            if trigram in item:
                hits.append(trigram)
                items.append(item)
        for i in items:
            list.remove(i)

    return(f7(hits))

list1 = ['amsterdam','rotterdam','haarlem','utrecht','groningen']
print(identifying_grams(list1))
# Good, we get: ['ter', 'haa', 'utr', 'gro']

list2 = ['amsterdam','schiedam']
print(identifying_grams(list2))
# Good, we get: ['dam']

list3 = ['amsterdam','schiedam','terwolde','wolstad']
print(identifying_grams(list3))
# Ouch, we get: ['ter', 'dam', 'wol']
# this should be ['dam', 'wol'] as this is only 2 trigrams that identify the list...

So far I have received two answers, but both are flawed. Rupesh's works for lists shorter than 10 items, while my list has over 50K items. mujjiga did come up with a solution, although it is not a perfect one.

Bounty for the Python ninja who comes up with a perfect solution that scales. Bonus points if it performs well and gives the same solution on every run!

Best Answer

Here is a theoretical analysis of @mujjiga's answer:

You can create classes of words that share the same ngram. You want to choose the smallest number of those classes (that is, the smallest number of ngrams) that covers the whole set of words. This is the set cover problem. Unfortunately, this problem is NP-hard (not NP-complete, thanks @mujjiga). (EDIT: Hence there is no known solution that will give you the expected result in reasonable time.) The greedy algorithm is almost the best known approach (see https://cs.stackexchange.com/questions/49777/is-greedy-algorithm-the-best-algorithm-for-set-cover-problem).

Note that even the greedy algorithm may give odd results. Take the subsets {a, b}, {b, c}, {c, d} and the superset {a, b, c, d}. All three subsets are maximal. If you take {b, c} first, you need the two other subsets to cover the superset. If you take {a, b} and {c, d}, two subsets are enough.
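To make that concrete, here is a minimal sketch (not part of the original answer) that runs a greedy cover on exactly those subsets; the prefer argument is a hypothetical tie-breaker that only decides which equally-sized subset gets picked first:

subsets = {'s1': {'a', 'b'}, 's2': {'b', 'c'}, 's3': {'c', 'd'}}
universe = {'a', 'b', 'c', 'd'}

def greedy_cover(subsets, universe, prefer):
    # repeatedly pick the subset covering the most uncovered elements;
    # `prefer` only breaks ties between equally good candidates
    remaining, chosen = set(universe), []
    while remaining:
        best = max(subsets, key=lambda k: (len(subsets[k] & remaining), prefer(k)))
        chosen.append(best)
        remaining -= subsets[best]
    return chosen

print(greedy_cover(subsets, universe, prefer=lambda k: k == 's2'))  # ['s2', 's1', 's3']: 3 subsets
print(greedy_cover(subsets, universe, prefer=lambda k: k != 's2'))  # ['s1', 's3']: 2 subsets

The greedy choice is identical in both runs (always take the largest remaining subset); only the tie-breaking differs, yet one run needs three subsets and the other only two.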

Let's use the greedy algorithm anyway, and consider the implementation. The code to create the dictionary that maps ngrams to words is straightforward:

all_words = ['amsterdam','schiedam','werkendam','amstelveen','schiebroek','werkstad','den haag','rotjeknor','gouda']
n = 3
words_by_ngram = {}
for word in all_words:
    for ngram in (word[i:i+n] for i in range(0, len(word)-n+1)):
        words_by_ngram.setdefault(ngram, set()).add(word)

setdefault is equivalent to get when the key ngram already exists, and creates an empty set otherwise. The complexity is O(|all_words| * |len max word|).
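For readers less familiar with setdefault, the same mapping can be built with collections.defaultdict, where a missing key automatically starts as an empty set. A small equivalent sketch (an alternative, not the answer's own code):

from collections import defaultdict

words_by_ngram = defaultdict(set)  # missing ngrams start as empty sets
for word in all_words:
    for i in range(len(word) - n + 1):
        words_by_ngram[word[i:i+n]].add(word)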

Now we want to take the ngram that covers the most words and remove those words from the dictionary. Repeat until you have covered all the words you want.

Here is the simple version:

s = set(all_words) # the target
gs = set()
d = words_by_ngram.copy() # for the display
while s:
    # take the best ngram (the one covering the most words)
    ngram, words = max(d.items(), key=lambda i: len(i[1]))
    # remove the words from the dictionary and delete the ngrams whose words have already been found
    d = {k: v for k, v in ((k, v - words) for k, v in d.items()) if len(v)}
    gs.add(ngram) # add the ngram to the result
    s -= words # remove the words from the target

# check
assert set().union(*[words_by_ngram[g] for g in gs]) == set(all_words)
# display
for g in gs:
    print("{} -> {}".format(g, words_by_ngram[g]))

Output:

ams -> {'amstelveen', 'amsterdam'}
gou -> {'gouda'}
wer -> {'werkstad', 'werkendam'}
rot -> {'rotjeknor'}
dam -> {'amsterdam', 'werkendam', 'schiedam'}
sch -> {'schiebroek', 'schiedam'}
den -> {'den haag'}

The complexity of this second step is O(|all_words| * |ngrams|), because of the loop that finds the maximum and the update of the dictionary. Hence the overall complexity is O(|all_words| * |ngrams|).

The complexity can be reduced with a priority queue. Retrieving the best ngram costs O(1), but updating the lengths of the word sets mapped to an ngram costs O(lg |ngrams|):

import heapq

class PriorityQueue:
    """Adapted from https://docs.python.org/3/library/heapq.html#priority-queue-implementation-notes
    A priority of 1 invalidates the entries
    """
    def __init__(self, words_by_ngram):
        self._d = {ngram: [-len(words), (ngram, words)] for ngram, words in words_by_ngram.items()}
        self._pq = list(self._d.values())
        heapq.heapify(self._pq)

    def pop(self):
        """get the (ngram, words) tuple with the max word count"""
        minus_len, (ngram, words) = heapq.heappop(self._pq)
        while minus_len == 1: # entry is not valid
            minus_len, (ngram, words) = heapq.heappop(self._pq)
        return ngram, words

    def update(self, ngram, words_to_remove):
        """remove the words from the sets and update priorities"""
        del self._d[ngram]
        ngrams_to_inspect = set(word[i:i+n] for word in words_to_remove
                                for i in range(0, len(word)-n+1))
        for ngram in ngrams_to_inspect:
            if ngram not in self._d: continue
            self._d[ngram][0] = 1 # use the reference to invalidate the entry
            [L, (ngram, words)] = self._d[ngram]
            words -= words_to_remove
            if words:
                self._d[ngram] = [-len(words), (ngram, words)] # new entry
                heapq.heappush(self._pq, self._d[ngram]) # add to the pq (O(lg ngrams))
            else: # nothing left: remove it from the dict
                del self._d[ngram]


pq = PriorityQueue(words_by_ngram)
gs = set()
s = set(all_words) # the target
while s:
    # take the best ngram
    ngram, words = pq.pop()
    gs.add(ngram) # add the ngram to the result
    s -= words # remove the words from the target
    # remove the words from the dictionary and update priorities
    pq.update(ngram, words)

With this code, the overall complexity drops to O(|all_words| * lg |ngrams|). That being said, I would be curious to know whether this is actually faster than the naive previous version on 50k items.
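One rough way to check is a timing comparison. The sketch below is only an illustration, not part of the answer: generate_words is a hypothetical helper that fabricates random lowercase strings standing in for city names, and the simple greedy loop is wrapped in a function so it can be timed; the PriorityQueue version above can be timed the same way on the same list.

import random
import string
import time

def generate_words(count, min_len=5, max_len=12):
    # hypothetical helper: random lowercase strings standing in for city names
    return [''.join(random.choices(string.ascii_lowercase, k=random.randint(min_len, max_len)))
            for _ in range(count)]

def greedy_simple(all_words, n=3):
    # same algorithm as the simple version above, wrapped in a function for timing
    words_by_ngram = {}
    for word in all_words:
        for i in range(len(word) - n + 1):
            words_by_ngram.setdefault(word[i:i+n], set()).add(word)
    s, gs, d = set(all_words), set(), words_by_ngram
    while s:
        ngram, words = max(d.items(), key=lambda i: len(i[1]))
        d = {k: v for k, v in ((k, v - words) for k, v in d.items()) if v}
        gs.add(ngram)
        s -= words
    return gs

words = generate_words(50_000)
t0 = time.perf_counter()
grams = greedy_simple(words)
print("simple greedy: {} ngrams in {:.2f}s".format(len(grams), time.perf_counter() - t0))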

This question, "python - Find the minimal list of unique n-grams in a list of strings", originates from Stack Overflow: https://stackoverflow.com/questions/55140208/
