
python - Error 'power iteration failed to converge within 100 iterations' when trying to summarize a text document with python networkx


When I try to summarize a text document with python networkx, I get a PowerIterationFailedConvergence: (PowerIterationFailedConvergence(...), 'power iteration failed to converge within 100 iterations'), as shown in the code below. The error is raised at the line scores = nx.pagerank(sentence_similarity_graph).

import re
import numpy as np
import networkx as nx
from nltk.corpus import stopwords
from nltk.cluster.util import cosine_distance

def read_article(file_name):
    with open(file_name, "r", encoding="utf8") as file:
        filedata = file.readlines()
    text = ""
    for s in filedata:
        text = text + s.replace("\n", "")
    text = re.sub(' +', ' ', text)  # collapse repeated spaces
    text = re.sub('—', ' ', text)   # replace em dashes with spaces

    article = text.split(". ")
    sentences = []
    for sentence in article:
        # keep letters and spaces only (str.replace would treat the pattern literally)
        sentences.append(re.sub("[^a-zA-Z ]", "", sentence).split(" "))
    sentences.pop()  # drop the trailing fragment after the last ". "
    # remove immediately repeated words, case-insensitively
    new_sent = []
    for lst in sentences:
        newlst = []
        for i in range(len(lst)):
            if i == 0 or lst[i].lower() != lst[i - 1].lower():
                newlst.append(lst[i])
        new_sent.append(newlst)
    return new_sent
def sentence_similarity(sent1, sent2, stopwords=None):
    if stopwords is None:
        stopwords = []

    sent1 = [w.lower() for w in sent1]
    sent2 = [w.lower() for w in sent2]

    all_words = list(set(sent1 + sent2))

    vector1 = [0] * len(all_words)
    vector2 = [0] * len(all_words)

    # build the vector for the first sentence
    for w in sent1:
        if w in stopwords:
            continue
        vector1[all_words.index(w)] += 1

    # build the vector for the second sentence
    for w in sent2:
        if w in stopwords:
            continue
        vector2[all_words.index(w)] += 1

    return 1 - cosine_distance(vector1, vector2)
def build_similarity_matrix(sentences, stop_words):
    # Create an empty similarity matrix
    similarity_matrix = np.zeros((len(sentences), len(sentences)))

    for idx1 in range(len(sentences)):
        for idx2 in range(len(sentences)):
            if idx1 == idx2:  # ignore if both are the same sentence
                continue
            similarity_matrix[idx1][idx2] = sentence_similarity(sentences[idx1], sentences[idx2], stop_words)

    return similarity_matrix
stop_words = stopwords.words('english')
summarize_text = []

# Step 1 - Read the text and split it
new_sent = read_article("C:\\Users\\Documents\\fedPressConference_0620.txt")

# Step 2 - Generate the similarity matrix across sentences
sentence_similarity_matrix = build_similarity_matrix(new_sent, stop_words)

# Step 3 - Rank sentences in the similarity matrix
sentence_similarity_graph = nx.from_numpy_array(sentence_similarity_matrix)
scores = nx.pagerank(sentence_similarity_graph)

# Step 4 - Sort the ranks and pick the top sentences
ranked_sentence = sorted(((scores[i], s) for i, s in enumerate(new_sent)), reverse=True)
print("Indexes of top ranked_sentence order are ", ranked_sentence)

for i in range(10):
    summarize_text.append(" ".join(ranked_sentence[i][1]))

# Step 5 - Finally, output the summarized text
print("Summarize Text: \n", ". ".join(summarize_text))

Best Answer

Maybe you have solved it by now.
The problem is that the vectors you are using are too long. Each one is built over the entire vocabulary, which may be too large for the model to converge within only 100 cycles (the default for pagerank).
You can either shorten the vocabulary (did you check that the stop words are actually being removed?) or apply some other technique, such as dropping the least frequent words or using TF-IDF; a sketch of the TF-IDF option follows.
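As a rough illustration of the TF-IDF route (a sketch, not the original code: build_tfidf_similarity_matrix is a hypothetical helper, and scikit-learn is assumed to be installed), the similarity matrix can be built directly from plain sentence strings:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def build_tfidf_similarity_matrix(raw_sentences):
    # raw_sentences: list of plain strings, e.g. [" ".join(s) for s in new_sent]
    # TF-IDF down-weights very common words and drops English stop words,
    # shrinking the effective vocabulary compared to raw count vectors.
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(raw_sentences)
    similarity_matrix = cosine_similarity(tfidf)
    np.fill_diagonal(similarity_matrix, 0.0)  # ignore self-similarity
    return similarity_matrix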
In my case I hit the same problem, but with GloVe word embeddings. It would not converge with 300 dimensions, while the 100-dimensional model solved it easily (a minimal sketch of that embedding approach is shown below).
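A minimal sketch of the embedding-based similarity, assuming embeddings is a preloaded dict mapping lowercase words to fixed-size numpy vectors (e.g. 100-dimensional GloVe); sentence_vector and embedding_similarity are hypothetical names, not part of the original code:

import numpy as np

def sentence_vector(words, embeddings, dim=100):
    # Average the embeddings of all known words; unknown words are skipped.
    vecs = [embeddings[w.lower()] for w in words if w.lower() in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def embedding_similarity(sent1, sent2, embeddings, dim=100):
    # Cosine similarity of the two averaged sentence vectors.
    v1 = sentence_vector(sent1, embeddings, dim)
    v2 = sentence_vector(sent2, embeddings, dim)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(np.dot(v1, v2) / denom) if denom else 0.0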
Another thing you can try is to increase the max_iter parameter when calling nx.pagerank:

nx.pagerank(nx_graph, max_iter=600) # Or any number that will work for you.
The default is 100 iterations.
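If you prefer to fail gracefully instead of crashing, networkx raises PowerIterationFailedConvergence, so a retry wrapper is one option (pagerank_with_retry is a hypothetical helper; max_iter and tol are standard nx.pagerank parameters):

import networkx as nx

def pagerank_with_retry(graph, max_iter=600, tol=1.0e-6):
    # Try pagerank with the given budget; if power iteration still fails,
    # retry once with double the iterations and a looser tolerance.
    try:
        return nx.pagerank(graph, max_iter=max_iter, tol=tol)
    except nx.PowerIterationFailedConvergence:
        return nx.pagerank(graph, max_iter=max_iter * 2, tol=tol * 10)

Usage would then be scores = pagerank_with_retry(sentence_similarity_graph) in Step 3 above.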

Regarding python - Error 'power iteration failed to converge within 100 iterations' when trying to summarize a text document with python networkx, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63026282/
