
python - How to efficiently search for list elements in a string in Python


I have a list of concepts (myconcepts) and a list of sentences (sentences), as shown below.

concepts = [['natural language processing', 'text mining', 'texts', 'nlp'], ['advanced data mining', 'data mining', 'data'], ['discourse analysis', 'learning analytics', 'mooc']]


sentences = ['data mining and text mining', 'nlp is mainly used by discourse analysis community', 'data mining in python is fun', 'mooc data analysis involves texts', 'data and data mining are both very interesting']

In short, I want to find the concepts in the sentences. More specifically, given a list in concepts (e.g. ['natural language processing', 'text mining', 'texts', 'nlp']), I want to identify those concepts in the sentences and replace them with the list's first element (i.e. natural language processing).

Example:
Thus, if we consider the sentence data mining and text mining, the result should be advanced data mining and natural language processing (because the first elements of the concept lists containing data mining and text mining are advanced data mining and natural language processing, respectively).

The result for the dummy data above should be:
['advanced data mining and natural language processing', 'natural language processing is mainly used by discourse analysis community', 'advanced data mining in python is fun', 'discourse analysis advanced data mining analysis involves natural language processing', 'advanced data mining and advanced data mining are both very interesting']

I am currently doing this with regular expressions, as follows:
import re

concepts_re = []

# Build one alternation pattern per concept list
# (sorted_wikipedia_redirects holds the same concept lists as concepts here)
for item in sorted_wikipedia_redirects:
    item_re = "|".join(re.escape(term) for term in item)
    concepts_re.append(item_re)

sentences_mapping = []

for sentence in sentences:
    for terms in concepts:
        if len(terms) > 1:
            for item in terms:
                if item in sentence:
                    # Replace with the first element of the concept list
                    # (terms[0], not item[0], which would be a single character)
                    sentence = re.sub(concepts_re[concepts.index(terms)], terms[0], sentence)
    sentences_mapping.append(sentence)

In my real data set I have about 8 million concepts, so my approach is very inefficient: it takes about 5 minutes to process a single sentence. I would like to know whether there is an efficient way to do this in Python.

For those who want to measure the running time against a long list of concepts, I have attached a longer list here: https://drive.google.com/file/d/1OsggJTDZx67PGH4LupXIkCTObla0gDnX/view?usp=sharing

I am happy to provide more details if needed.

Best Answer

The solution provided below has approximately O(n) runtime complexity, where n is the number of tokens in each sentence.

For 5 million sentences and your concepts.txt it performs the required operations in about 30 seconds; see the basic test in the third section.

When it comes to space complexity, you have to keep a nested dictionary structure (let's simplify it like this for now), say it is O(c * u), where u is the number of unique tokens for concepts of a given token-wise length, and c is the length of a concept.

It is hard to pinpoint the exact complexity, but it goes pretty much like this (for your example data and the concepts.txt you provided it is quite accurate, but we will get to the details as we go through the implementation).

I assume you can split your concepts and sentences on whitespace; if that is not the case, I suggest you take a look at spaCy, which provides a smarter way to tokenize your data.
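As a minimal sketch of that alternative (assuming the en_core_web_sm model is installed; any spaCy pipeline would do):

import spacy

nlp = spacy.load("en_core_web_sm")
# Tokenization that does not rely on whitespace alone
tokens = [token.text for token in nlp("data mining in python is fun")]
print(tokens)  # ['data', 'mining', 'in', 'python', 'is', 'fun']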

1. Introduction

Let's take your example:

concepts = [
    ["natural language processing", "text mining", "texts", "nlp"],
    ["advanced data mining", "data mining", "data"],
    ["discourse analysis", "learning analytics", "mooc"],
]

As you said, each element from concepts has to be mapped to the first one, so, in Pythonish, it would go roughly along these lines:
for concept in concepts:
    concept[1:] = concept[0]
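In actual Python that intent could be expressed, for instance, as a flat phrase-to-target dictionary (a sketch that deliberately ignores the multi-token ordering problem discussed next):

mapping = {phrase: concept[0] for concept in concepts for phrase in concept[1:]}
# e.g. mapping["text mining"] == "natural language processing"
#      mapping["data"] == "advanced data mining"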

The task would be easy, and the mapping unique, if all of the concepts had a token length equal to one (which is not the case here). Let's focus on the second case and one particular (slightly modified) example of a concept to see my point:
["advanced data mining", "data something", "data"]

Here data would be mapped to advanced data mining, BUT data something, which consists of data, should be mapped before it. If I understand you correctly, you would want the sentence:
"Here is data something and another data"

to be mapped to:
"Here is advanced data mapping and another advanced data mining"

instead of the naive approach:
"Here is advanced data mapping something and another advanced data mining"

Notice that in the naive example we only mapped data, not data something.

To prioritize data something (and other phrases fitting this pattern) I used an array structure filled with dictionaries, where concepts appearing earlier in the array are those that are longer token-wise.

Continuing our example, such an array looks like this:
structure = [
    {"data": {"something": "advanced data mining"}},
    {"data": "advanced data mining"},
]

Notice that if we go through the tokens in this order (e.g. first going through the first dictionary with consecutive tokens and, if no match is found, moving on to the second dictionary, and so on), we will get the longest concepts first.
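As a hypothetical illustration of that lookup order (plain Python on the structure above, not part of the final solution):

tokens = ["data", "something"]
# Try the dictionary for 2-token concepts first: "data" leads to a nested
# dictionary, and "something" completes the longer match.
match = structure[0].get(tokens[0], {}).get(tokens[1])
print(match)  # advanced data mining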

2. Code

Okay, I hope you get the basic idea (if not, post a comment below and I will try to explain the unclear parts in more detail).

Disclaimer: I am not particularly proud of this code-wise, but it gets the job done, and I suppose it could be worse.

2.1 Hierarchical dictionaries

First, let's get the longest concept token-wise (excluding the first element, since it is our target and we never have to change it):
def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])
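For the example concepts above this returns 2:

print(get_longest(concepts))  # 2, e.g. "text mining" and "data mining" are two tokens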

Using this information, we can initialize our structure by creating as many dictionaries as there are different concept lengths (in the example above it is 2, so it would work for all of your data, but concepts of any length would do):
def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]

Notice that I am adding the length of each concept to the array; in my opinion it makes traversing easier, though one could go without it after some changes to the implementation.
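For instance, with longest equal to 2 the initial structure looks like this:

print(init_hierarchical_dictionaries(2))
# [(1, {}), (0, {})]
# (1, {}) will hold 2-token concepts (one nested level plus the final token),
# (0, {}) will hold single-token concepts.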

Now that we have those helper functions, we can create the structure from the list of concepts:
def create_hierarchical_dictionaries(concepts: List[List[str]]):
    # Initialization
    longest = get_longest(concepts)
    hierarchical_dictionaries = init_hierarchical_dictionaries(longest)

    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            # Get the dictionary with the corresponding length;
            # the longer the concept, the earlier it is in the hierarchy
            current_dictionary = hierarchical_dictionaries[longest - len(tokens)][1]
            # All of the tokens except the last one map to another dictionary
            # leading to the next token of the concept; setdefault keeps branches
            # already inserted for concepts sharing a prefix.
            for token in tokens[:-1]:
                current_dictionary = current_dictionary.setdefault(token, {})

            # Last token is mapped to the first concept
            current_dictionary[tokens[-1]] = concept[0].split()

    return hierarchical_dictionaries

This function creates our hierarchical dictionaries; see the comments in the source code for some explanation. You may want to create a custom class keeping this thing; it should be easier to use that way.
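A hypothetical wrapper (name and interface are my own, not part of the code above; it uses the traverse function defined in section 2.2 below) could bundle creation and lookup like this:

class ConceptMapper:
    """Hypothetical convenience wrapper around the hierarchical dictionaries."""

    def __init__(self, concepts: List[List[str]]):
        self._dictionaries = create_hierarchical_dictionaries(concepts)

    def map_sentence(self, sentence: str) -> str:
        return traverse(sentence, self._dictionaries)

# mapper = ConceptMapper(concepts)
# mapper.map_sentence("data mining and text mining")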

This is exactly the same kind of object as described in 1. Introduction.
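For the example concepts it produces (formatted for readability):

[
    (1, {"text": {"mining": ["natural", "language", "processing"]},
         "data": {"mining": ["advanced", "data", "mining"]},
         "learning": {"analytics": ["discourse", "analysis"]}}),
    (0, {"texts": ["natural", "language", "processing"],
         "nlp": ["natural", "language", "processing"],
         "data": ["advanced", "data", "mining"],
         "mooc": ["discourse", "analysis"]}),
]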

2.2 Traversing the dictionaries

This part is much harder, but this time let's use a top-down approach. We will start easy:
def embed_sentences(sentences: List[str], hierarchical_dictionaries):
    return (traverse(sentence, hierarchical_dictionaries) for sentence in sentences)

Provided the hierarchical dictionaries, it creates a generator that transforms each sentence according to the concept mapping.
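Since it returns a generator, sentences are transformed lazily; nothing is computed until you iterate over it, for example:

embedded = embed_sentences(sentences, hierarchical_dictionaries)
first = next(embedded)    # only the first sentence is processed here
rest = list(embedded)     # materialize the remaining ones when needed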

Now the traverse function:
def traverse(sentence: str, hierarchical_dictionaries):
    # Get all tokens in the sentence
    tokens = sentence.split()
    output_sentence = []
    # Initialize index to the first token
    index = 0
    # While any tokens are left to check for concepts
    while index < len(tokens):
        # Iterate over hierarchical dictionaries (elements of the array)
        for hierarchical_dictionary_tuple in hierarchical_dictionaries:
            # New index is returned based on match and token-wise length of concept
            index, concept = traverse_through_dictionary(
                index, tokens, hierarchical_dictionary_tuple
            )
            # Concept was found in current hierarchical_dictionary_tuple, let's add it
            # to output
            if concept is not None:
                output_sentence.extend(concept)
                # No need to check other hierarchical dictionaries for matching concept
                break
        # Token (and its next tokens) do not match any concept, keep the original
        else:
            output_sentence.append(tokens[index])
        # Increment index in order to move to the next token
        index += 1

    # Join list of tokens into a sentence
    return " ".join(output_sentence)

Once again, if you are not sure what is going on, post a comment.

Using this approach, pessimistically, we would perform O(n * c!) checks, where n is the number of tokens in the sentence and c is the token-wise length of the longest concept (note the factorial). This case is extremely unlikely to happen in practice: every token in the sentence would have to almost perfectly fit the longest concept, and all shorter concepts would have to be prefixes of the longest one (like super data mining, super data and data).
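To make that pathological case concrete, a concepts list shaped like the following (hypothetical data, mirroring the prefix example above) would come closest to triggering it:

# Each shorter concept shares tokens with the longest one, so the traversal
# keeps descending several dictionary levels before falling back to shorter ones.
pathological_concepts = [["target", "super data mining", "super data", "data"]]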

As I said before, for any practical problem it would be much closer to O(n); using the data you provided in the .txt file it is O(3 * n) in the worst case, and usually O(2 * n).

Traversing through each dictionary:
def traverse_through_dictionary(index, tokens, hierarchical_dictionary_tuple):
    # Get the level of nested dictionaries and initial dictionary
    length, current_dictionary = hierarchical_dictionary_tuple
    # inner_index will loop through tokens until match or no match was found
    inner_index = index
    for _ in range(length):
        # Get next nested dictionary and move inner_index to the next token
        current_dictionary = current_dictionary.get(tokens[inner_index])
        inner_index += 1
        # If no match was found in any level of dictionary
        # Return current index in sentence and None representing lack of concept.
        if current_dictionary is None or inner_index >= len(tokens):
            return index, None

    # If everything went fine through all nested dictionaries, check whether
    # last token corresponds to concept
    concept = current_dictionary.get(tokens[inner_index])
    if concept is None:
        return index, None
    # If so, return inner_index (we have moved length tokens, so we have to update it)
    return inner_index, concept
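For intuition, a hypothetical call on the example data (structure built as in section 2.1) behaves like this:

hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)
tokens = "data mining in python is fun".split()
# The first tuple holds 2-token concepts; "data mining" matches starting at index 0.
new_index, concept = traverse_through_dictionary(0, tokens, hierarchical_dictionaries[0])
print(new_index, concept)  # 1 ['advanced', 'data', 'mining']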

This constitutes the "meat" of my solution.

3. Results

Now, for brevity, the whole source code is provided below (concepts.txt is the file you provided):
import ast
import time
from typing import List


def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])


def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]


def create_hierarchical_dictionaries(concepts: List[List[str]]):
    # Initialization
    longest = get_longest(concepts)
    hierarchical_dictionaries = init_hierarchical_dictionaries(longest)

    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            # Get the dictionary with the corresponding length;
            # the longer the concept, the earlier it is in the hierarchy
            current_dictionary = hierarchical_dictionaries[longest - len(tokens)][1]
            # All of the tokens except the last one map to another dictionary
            # leading to the next token of the concept; setdefault keeps branches
            # already inserted for concepts sharing a prefix.
            for token in tokens[:-1]:
                current_dictionary = current_dictionary.setdefault(token, {})

            # Last token is mapped to the first concept
            current_dictionary[tokens[-1]] = concept[0].split()

    return hierarchical_dictionaries


def traverse_through_dictionary(index, tokens, hierarchical_dictionary_tuple):
    # Get the level of nested dictionaries and initial dictionary
    length, current_dictionary = hierarchical_dictionary_tuple
    # inner_index will loop through tokens until match or no match was found
    inner_index = index
    for _ in range(length):
        # Get next nested dictionary and move inner_index to the next token
        current_dictionary = current_dictionary.get(tokens[inner_index])
        inner_index += 1
        # If no match was found in any level of dictionary
        # Return current index in sentence and None representing lack of concept.
        if current_dictionary is None or inner_index >= len(tokens):
            return index, None

    # If everything went fine through all nested dictionaries, check whether
    # last token corresponds to concept
    concept = current_dictionary.get(tokens[inner_index])
    if concept is None:
        return index, None
    # If so, return inner_index (we have moved length tokens, so we have to update it)
    return inner_index, concept


def traverse(sentence: str, hierarchical_dictionaries):
    # Get all tokens in the sentence
    tokens = sentence.split()
    output_sentence = []
    # Initialize index to the first token
    index = 0
    # While any tokens are left to check for concepts
    while index < len(tokens):
        # Iterate over hierarchical dictionaries (elements of the array)
        for hierarchical_dictionary_tuple in hierarchical_dictionaries:
            # New index is returned based on match and token-wise length of concept
            index, concept = traverse_through_dictionary(
                index, tokens, hierarchical_dictionary_tuple
            )
            # Concept was found in current hierarchical_dictionary_tuple, let's add it
            # to output
            if concept is not None:
                output_sentence.extend(concept)
                # No need to check other hierarchical dictionaries for matching concept
                break
        # Token (and its next tokens) do not match any concept, keep the original
        else:
            output_sentence.append(tokens[index])
        # Increment index in order to move to the next token
        index += 1

    # Join list of tokens into a sentence
    return " ".join(output_sentence)


def embed_sentences(sentences: List[str], hierarchical_dictionaries):
    return (traverse(sentence, hierarchical_dictionaries) for sentence in sentences)


def sanity_check():
    concepts = [
        ["natural language processing", "text mining", "texts", "nlp"],
        ["advanced data mining", "data mining", "data"],
        ["discourse analysis", "learning analytics", "mooc"],
    ]
    sentences = [
        "data mining and text mining",
        "nlp is mainly used by discourse analysis community",
        "data mining in python is fun",
        "mooc data analysis involves texts",
        "data and data mining are both very interesting",
    ]

    targets = [
        "advanced data mining and natural language processing",
        "natural language processing is mainly used by discourse analysis community",
        "advanced data mining in python is fun",
        "discourse analysis advanced data mining analysis involves natural language processing",
        "advanced data mining and advanced data mining are both very interesting",
    ]

    hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)

    results = list(embed_sentences(sentences, hierarchical_dictionaries))
    if results == targets:
        print("Correct results")
    else:
        print("Incorrect results")


def speed_check():
    with open("./concepts.txt") as f:
        concepts = ast.literal_eval(f.read())

    initial_sentences = [
        "data mining and text mining",
        "nlp is mainly used by discourse analysis community",
        "data mining in python is fun",
        "mooc data analysis involves texts",
        "data and data mining are both very interesting",
    ]

    sentences = initial_sentences.copy()

    for i in range(1_000_000):
        sentences += initial_sentences

    start = time.time()
    hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)
    middle = time.time()
    letters = []
    for result in embed_sentences(sentences, hierarchical_dictionaries):
        letters.append(result[0].capitalize())
    end = time.time()
    print(f"Time for hierarchical creation {(middle-start) * 1000.0} ms")
    print(f"Time for embedding {(end-middle) * 1000.0} ms")
    print(f"Overall time elapsed {(end-start) * 1000.0} ms")


def main():
    sanity_check()
    speed_check()


if __name__ == "__main__":
    main()

The results of the speed check are as follows:
Time for hierarchical creation 107.71822929382324 ms
Time for embedding 30460.427284240723 ms
Overall time elapsed 30568.145513534546 ms

So, for 5 million sentences (the 5 sentences you provided, concatenated 1 million times) and the concepts file you provided (1.1 MB), it takes roughly 30 seconds to perform the concept mapping, which I suppose is not bad.

In the worst case, the dictionary should take as much memory as the input file (concepts.txt in this case), but it will usually be lower, or much lower, since it depends on the combination of concept lengths and the unique words within them.

Regarding python - How to efficiently search for list elements in a string in Python, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54474299/
