I have a list of concepts (concepts) and a list of sentences (sentences), as shown below:
concepts = [['natural language processing', 'text mining', 'texts', 'nlp'], ['advanced data mining', 'data mining', 'data'], ['discourse analysis', 'learning analytics', 'mooc']]
sentences = ['data mining and text mining', 'nlp is mainly used by discourse analysis community', 'data mining in python is fun', 'mooc data analysis involves texts', 'data and data mining are both very interesting']
I want to find the concepts in the sentences. More specifically, given a list from concepts (e.g. ['natural language processing', 'text mining', 'texts', 'nlp']), I want to identify these concepts in each sentence and replace them with the list's first element (i.e. natural language processing).
For example, take the sentence data mining and text mining; the result should be advanced data mining and natural language processing, because the lists containing data mining and text mining have the first elements advanced data mining and natural language processing, respectively. The expected output for the sentences above is:
['advanced data mining and natural language processing', 'natural language processing is mainly used by discourse analysis community', 'advanced data mining in python is fun', 'discourse analysis advanced data mining analysis involves natural language processing', 'advanced data mining and advanced data mining are both very interesting']
Here is what I have tried:

import re

concepts_re = []
for terms in concepts:
    terms_re = "|".join(re.escape(term) for term in terms)
    concepts_re.append(terms_re)

sentences_mapping = []
for sentence in sentences:
    for terms in concepts:
        if len(terms) > 1:
            for item in terms:
                if item in sentence:
                    sentence = re.sub(concepts_re[concepts.index(terms)], terms[0], sentence)
    sentences_mapping.append(sentence)
However, my real concepts list is much longer, so this approach is very inefficient: it takes around 5 minutes to process a single sentence. Is there an efficient way to do this in Python?
For anyone who wants to measure the runtime against a longer concepts list, here is one:
https://drive.google.com/file/d/1OsggJTDZx67PGH4LupXIkCTObla0gDnX/view?usp=sharing
Best answer
The solution provided below has approximately O(n) runtime complexity, where n is the number of tokens in each sentence.
For 5 million sentences and your concepts.txt it performs the required operations in roughly 30 seconds; see the basic test in the third part.
As for space complexity, we have to keep a nested dictionary structure (let's simplify it for now): say it is O(c*u), where u are unique tokens among concepts of a certain length (token-wise) and c is the concept length.
The exact complexity is hard to pin down, but it is very close to this (for your example data and the concepts.txt you provided it is quite accurate; we will go over the details during the implementation).
I assume your concepts and sentences can be split on whitespace; if that is not the case, I would suggest taking a look at spaCy, which provides a smarter way to tokenize your data.
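A quick illustration of why whitespace splitting can be too naive (plain stdlib; spaCy itself is not shown here):

```python
import re

sentence = "nlp, as used here, keeps punctuation"
# Whitespace splitting glues punctuation onto tokens:
# 'nlp,' and 'here,' keep their commas.
print(sentence.split())
# Even a simple regex tokenizer separates the words cleanly.
print(re.findall(r"\w+", sentence))
```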
1. Introduction
Let's take your example:
concepts = [
    ["natural language processing", "text mining", "texts", "nlp"],
    ["advanced data mining", "data mining", "data"],
    ["discourse analysis", "learning analytics", "mooc"],
]
You want every element of each list to be mapped to its first element, so a naive first attempt might build a flat mapping like:

mapping = {}
for concept in concepts:
    for text in concept[1:]:
        mapping[text] = concept[0]

But take a specific concept list (slightly modified) to see my point:
["advanced data mining", "data something", "data"]
Here data would be mapped to advanced data mining, BUT data something, which contains data, should be matched before it. If I understood you correctly, for the sentence
"Here is data something and another data"
you would want
"Here is advanced data mining and another advanced data mining"
and not
"Here is advanced data mining something and another advanced data mining"
i.e. the match at that position should be the longer phrase data something, not the bare data.
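As an aside, the same ordering issue bites the regex approach from the question: Python's re module tries alternatives left to right, so longer phrases must come first in the pattern. A minimal sketch (not part of the answer's solution):

```python
import re

sentence = "Here is data something and another data"
replacement = "advanced data mining"

# Longer phrase first: "data something" wins before bare "data" can match.
long_first = re.compile("|".join(map(re.escape, ["data something", "data"])))
print(long_first.sub(replacement, sentence))
# -> Here is advanced data mining and another advanced data mining

# Shorter phrase first: bare "data" shadows "data something".
short_first = re.compile("|".join(map(re.escape, ["data", "data something"])))
print(short_first.sub(replacement, sentence))
# -> Here is advanced data mining something and another advanced data mining
```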
To handle data something (and every other case that fits this pattern) I used an array of dictionaries, where concepts that are longer token-wise sit earlier in the array:
structure = [
    {"data": {"something": "advanced data mining"}},
    {"data": "advanced data mining"},
]
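A minimal sketch of how a lookup against such a structure prefers the longer concept (this lookup helper is hypothetical, not part of the answer's code):

```python
def lookup(structure, tokens, index):
    """Try each dictionary in order (longest concepts first).
    Returns (tokens_consumed, replacement) or (0, None)."""
    for level, dictionary in enumerate(structure):
        depth = len(structure) - level  # token length of concepts at this level
        node, i = dictionary, index
        for _ in range(depth):
            if not isinstance(node, dict) or i >= len(tokens) or tokens[i] not in node:
                node = None
                break
            node, i = node[tokens[i]], i + 1
        if isinstance(node, str):
            return depth, node
    return 0, None

structure = [
    {"data": {"something": "advanced data mining"}},
    {"data": "advanced data mining"},
]
tokens = "Here is data something and another data".split()
print(lookup(structure, tokens, 2))  # "data something" matches: 2 tokens consumed
print(lookup(structure, tokens, 6))  # only bare "data" left: 1 token consumed
```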
Two helpers build this structure: get_longest finds the token-wise longest mapped concept, and init_hierarchical_dictionaries creates one dictionary per length, longest first:

def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])

def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]

def create_hierarchical_dictionaries(concepts: List[List[str]]):
    # Initialization
    longest = get_longest(concepts)
    hierarchical_dictionaries = init_hierarchical_dictionaries(longest)
    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            # Initialize dictionary; get the one with corresponding length.
            # The longer, the earlier it is in the hierarchy
            current_dictionary = hierarchical_dictionaries[longest - len(tokens)][1]
            # All of the tokens except the last one are another dictionary mapping to
            # the next token in concept (setdefault, so two same-length concepts
            # sharing a prefix do not overwrite each other).
            for token in tokens[:-1]:
                current_dictionary = current_dictionary.setdefault(token, {})
            # Last token is mapped to the first concept
            current_dictionary[tokens[-1]] = concept[0].split()
    return hierarchical_dictionaries
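For the example concepts, the longest mapped concepts (text mining, data mining, learning analytics) are two tokens, so the hierarchy has two levels. A self-contained sketch that repeats the builder so the resulting structure can be inspected:

```python
from typing import List

def get_longest(concepts: List[List[str]]) -> int:
    return max(len(text.split()) for concept in concepts for text in concept[1:])

def create_hierarchical_dictionaries(concepts: List[List[str]]):
    longest = get_longest(concepts)
    hierarchical = [(length, {}) for length in reversed(range(longest))]
    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            current = hierarchical[longest - len(tokens)][1]
            for token in tokens[:-1]:
                current = current.setdefault(token, {})
            current[tokens[-1]] = concept[0].split()
    return hierarchical

concepts = [
    ["natural language processing", "text mining", "texts", "nlp"],
    ["advanced data mining", "data mining", "data"],
    ["discourse analysis", "learning analytics", "mooc"],
]
two_token_level, one_token_level = create_hierarchical_dictionaries(concepts)
print(two_token_level)  # (1, {'text': {'mining': [...]}, 'data': {'mining': [...]}, ...})
print(one_token_level)  # (0, {'texts': [...], 'nlp': [...], 'data': [...], 'mooc': [...]})
```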
embed_sentences performs the actual mapping; it returns a generator, so each sentence is processed lazily:

def embed_sentences(sentences: List[str], hierarchical_dictionaries):
    return (traverse(sentence, hierarchical_dictionaries) for sentence in sentences)

And the traverse function:
def traverse(sentence: str, hierarchical_dictionaries):
    # Get all tokens in the sentence
    tokens = sentence.split()
    output_sentence = []
    # Initialize index to the first token
    index = 0
    # Until any tokens left to check for concepts
    while index < len(tokens):
        # Iterate over hierarchical dictionaries (elements of the array)
        for hierarchical_dictionary_tuple in hierarchical_dictionaries:
            # New index is returned based on match and token-wise length of concept
            index, concept = traverse_through_dictionary(
                index, tokens, hierarchical_dictionary_tuple
            )
            # Concept was found in current hierarchical_dictionary_tuple, let's add it
            # to output
            if concept is not None:
                output_sentence.extend(concept)
                # No need to check other hierarchical dictionaries for matching concept
                break
        # Token (and its next tokens) do not match any concept, keep the original
        else:
            output_sentence.append(tokens[index])
        # Increment index in order to move to the next token
        index += 1
    # Join list of tokens into a sentence
    return " ".join(output_sentence)
Finally, traverse_through_dictionary walks one level of the hierarchy; because levels are ordered longest first, it always tries to match the token-wise longest concept at the current position (e.g. super data mining before super data before data):
def traverse_through_dictionary(index, tokens, hierarchical_dictionary_tuple):
    # Get the level of nested dictionaries and initial dictionary
    length, current_dictionary = hierarchical_dictionary_tuple
    # inner_index will loop through tokens until match or no match was found
    inner_index = index
    for _ in range(length):
        # Get next nested dictionary and move inner_index to the next token
        current_dictionary = current_dictionary.get(tokens[inner_index])
        inner_index += 1
        # If no match was found in any level of dictionary
        # Return current index in sentence and None representing lack of concept.
        if current_dictionary is None or inner_index >= len(tokens):
            return index, None
    # If everything went fine through all nested dictionaries, check whether
    # last token corresponds to concept
    concept = current_dictionary.get(tokens[inner_index])
    if concept is None:
        return index, None
    # If so, return inner_index (we have moved length tokens, so we have to update it)
    return inner_index, concept
Full source code below (concepts.txt is the file you provided):

import ast
import time
from typing import List


def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])


def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]


def create_hierarchical_dictionaries(concepts: List[List[str]]):
    # Initialization
    longest = get_longest(concepts)
    hierarchical_dictionaries = init_hierarchical_dictionaries(longest)
    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            # Initialize dictionary; get the one with corresponding length.
            # The longer, the earlier it is in the hierarchy
            current_dictionary = hierarchical_dictionaries[longest - len(tokens)][1]
            # All of the tokens except the last one are another dictionary mapping to
            # the next token in concept (setdefault, so two same-length concepts
            # sharing a prefix do not overwrite each other).
            for token in tokens[:-1]:
                current_dictionary = current_dictionary.setdefault(token, {})
            # Last token is mapped to the first concept
            current_dictionary[tokens[-1]] = concept[0].split()
    return hierarchical_dictionaries


def traverse_through_dictionary(index, tokens, hierarchical_dictionary_tuple):
    # Get the level of nested dictionaries and initial dictionary
    length, current_dictionary = hierarchical_dictionary_tuple
    # inner_index will loop through tokens until match or no match was found
    inner_index = index
    for _ in range(length):
        # Get next nested dictionary and move inner_index to the next token
        current_dictionary = current_dictionary.get(tokens[inner_index])
        inner_index += 1
        # If no match was found in any level of dictionary
        # Return current index in sentence and None representing lack of concept.
        if current_dictionary is None or inner_index >= len(tokens):
            return index, None
    # If everything went fine through all nested dictionaries, check whether
    # last token corresponds to concept
    concept = current_dictionary.get(tokens[inner_index])
    if concept is None:
        return index, None
    # If so, return inner_index (we have moved length tokens, so we have to update it)
    return inner_index, concept


def traverse(sentence: str, hierarchical_dictionaries):
    # Get all tokens in the sentence
    tokens = sentence.split()
    output_sentence = []
    # Initialize index to the first token
    index = 0
    # Until any tokens left to check for concepts
    while index < len(tokens):
        # Iterate over hierarchical dictionaries (elements of the array)
        for hierarchical_dictionary_tuple in hierarchical_dictionaries:
            # New index is returned based on match and token-wise length of concept
            index, concept = traverse_through_dictionary(
                index, tokens, hierarchical_dictionary_tuple
            )
            # Concept was found in current hierarchical_dictionary_tuple, let's add it
            # to output
            if concept is not None:
                output_sentence.extend(concept)
                # No need to check other hierarchical dictionaries for matching concept
                break
        # Token (and its next tokens) do not match any concept, keep the original
        else:
            output_sentence.append(tokens[index])
        # Increment index in order to move to the next token
        index += 1
    # Join list of tokens into a sentence
    return " ".join(output_sentence)


def embed_sentences(sentences: List[str], hierarchical_dictionaries):
    return (traverse(sentence, hierarchical_dictionaries) for sentence in sentences)


def sanity_check():
    concepts = [
        ["natural language processing", "text mining", "texts", "nlp"],
        ["advanced data mining", "data mining", "data"],
        ["discourse analysis", "learning analytics", "mooc"],
    ]
    sentences = [
        "data mining and text mining",
        "nlp is mainly used by discourse analysis community",
        "data mining in python is fun",
        "mooc data analysis involves texts",
        "data and data mining are both very interesting",
    ]
    targets = [
        "advanced data mining and natural language processing",
        "natural language processing is mainly used by discourse analysis community",
        "advanced data mining in python is fun",
        "discourse analysis advanced data mining analysis involves natural language processing",
        "advanced data mining and advanced data mining are both very interesting",
    ]
    hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)
    results = list(embed_sentences(sentences, hierarchical_dictionaries))
    if results == targets:
        print("Correct results")
    else:
        print("Incorrect results")


def speed_check():
    with open("./concepts.txt") as f:
        concepts = ast.literal_eval(f.read())
    initial_sentences = [
        "data mining and text mining",
        "nlp is mainly used by discourse analysis community",
        "data mining in python is fun",
        "mooc data analysis involves texts",
        "data and data mining are both very interesting",
    ]
    sentences = initial_sentences.copy()
    for i in range(1_000_000):
        sentences += initial_sentences
    start = time.time()
    hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)
    middle = time.time()
    letters = []
    for result in embed_sentences(sentences, hierarchical_dictionaries):
        letters.append(result[0].capitalize())
    end = time.time()
    print(f"Time for hierarchical creation {(middle-start) * 1000.0} ms")
    print(f"Time for embedding {(end-middle) * 1000.0} ms")
    print(f"Overall time elapsed {(end-start) * 1000.0} ms")


def main():
    sanity_check()
    speed_check()


if __name__ == "__main__":
    main()
Time for hierarchical creation 107.71822929382324 ms
Time for embedding 30460.427284240723 ms
Overall time elapsed 30568.145513534546 ms
These are the timings for 5 million sentences and the concepts.txt you provided; dictionary-creation time will usually be even lower, as it depends on the concept lengths and on the combinations of unique words within them.
The original question, python - How to efficiently search for list elements in a string, can be found on Stack Overflow: https://stackoverflow.com/questions/54474299/