
Python SpaCy creating an nlp document - Argument 'string' has incorrect type

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 14:53:34

I am relatively new to NLP in Python, and I am trying to process a CSV file with SpaCy. I can load the file fine with Pandas, but when I try to process it with SpaCy's nlp function, the interpreter errors out roughly 5% of the way through the file's contents.

The code block is as follows:

import pandas as pd
df = pd.read_csv('./reviews.washington.dc.csv')

import spacy
nlp = spacy.load('en')

for parsed_doc in nlp.pipe(iter(df['comments']), batch_size=1, n_threads=4):
    print(parsed_doc.text)

I have also tried:

df['parsed'] = df['comments'].apply(nlp)

with the same result.

The traceback I receive is:

Traceback (most recent call last):
  File "/Users/john/Downloads/spacy_load.py", line 11, in <module>
    for parsed_doc in nlp.pipe(iter(df['comments']), batch_size=1, n_threads=4):
  File "/usr/local/lib/python3.6/site-packages/spacy/language.py", line 352, in pipe
    for doc in stream:
  File "spacy/syntax/parser.pyx", line 239, in pipe (spacy/syntax/parser.cpp:8912)
  File "spacy/matcher.pyx", line 465, in pipe (spacy/matcher.cpp:9904)
  File "spacy/syntax/parser.pyx", line 239, in pipe (spacy/syntax/parser.cpp:8912)
  File "spacy/tagger.pyx", line 231, in pipe (spacy/tagger.cpp:6548)
  File "/usr/local/lib/python3.6/site-packages/spacy/language.py", line 345, in <genexpr>
    stream = (self.make_doc(text) for text in texts)
  File "/usr/local/lib/python3.6/site-packages/spacy/language.py", line 293, in <lambda>
    self.make_doc = lambda text: self.tokenizer(text)
TypeError: Argument 'string' has incorrect type (expected str, got float)

Can anyone explain why this is happening, and how I can fix it? I have tried various workarounds from this site, to no avail. A try/except block had no effect either.
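For context on the error itself (a minimal sketch with made-up sample data, not from the original post): pandas represents a missing CSV field as NaN, which is a float, while spaCy's tokenizer accepts only str, so a single empty cell in the comments column is enough to trigger this TypeError. Replacing the NaN values first avoids it:

```python
import pandas as pd
from io import StringIO

# A CSV with one missing 'comments' field - pandas fills it with NaN (a float)
csv_data = "id,comments\n1,great location\n2,\n3,noisy at night\n"
df = pd.read_csv(StringIO(csv_data))

print([type(v).__name__ for v in df['comments']])  # ['str', 'float', 'str']

# Replacing NaN with an empty string gives the tokenizer all-str input
comments = df['comments'].fillna('')
print([type(v).__name__ for v in comments])  # ['str', 'str', 'str']
```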

Best Answer

I just ran into an error very similar to the one you are getting.

>>> c.add_texts(df.DetailedDescription.astype('object'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Anonymous\AppData\Local\Programs\Python\Python36\lib\site-packages\textacy\corpus.py", line 297, in add_texts
    for i, spacy_doc in enumerate(spacy_docs):
  File "C:\Users\Anonymous\AppData\Local\Programs\Python\Python36\lib\site-packages\spacy\language.py", line 554, in pipe
    for doc in docs:
  File "nn_parser.pyx", line 369, in pipe
  File "cytoolz/itertoolz.pyx", line 1046, in cytoolz.itertoolz.partition_all.__next__ (cytoolz/itertoolz.c:14538)
    for item in self.iterseq:
  File "nn_parser.pyx", line 369, in pipe
  File "cytoolz/itertoolz.pyx", line 1046, in cytoolz.itertoolz.partition_all.__next__ (cytoolz/itertoolz.c:14538)
    for item in self.iterseq:
  File "pipeline.pyx", line 395, in pipe
  File "cytoolz/itertoolz.pyx", line 1046, in cytoolz.itertoolz.partition_all.__next__ (cytoolz/itertoolz.c:14538)
    for item in self.iterseq:
  File "C:\Users\Anonymous\AppData\Local\Programs\Python\Python36\lib\site-packages\spacy\language.py", line 534, in <genexpr>
    docs = (self.make_doc(text) for text in texts)
  File "C:\Users\Anonymous\AppData\Local\Programs\Python\Python36\lib\site-packages\spacy\language.py", line 357, in make_doc
    return self.tokenizer(text)
TypeError: Argument 'string' has incorrect type (expected str, got float)

Eventually I hit on a solution: use the Pandas DataFrame to cast the values to Unicode, then retrieve them as a native array and feed that into the add_texts method of Textacy's Corpus object.

c = textacy.corpus.Corpus(lang='en_core_web_lg')
c.add_texts(df.DetailedDescription.astype('unicode').values)

Doing this allowed me to add all of the texts to my corpus, despite my earlier attempt to force the file to be loaded as Unicode-compliant up front (code snippet included below in case it helps anyone else).

import codecs
import re
import pandas as pd
from io import StringIO

# Strip stray control bytes (but keep newlines) before handing the CSV to pandas
with codecs.open('Base Data\Base Data.csv', 'r', encoding='utf-8', errors='replace') as base_data:
    df = pd.read_csv(StringIO(re.sub(r'(?!\n)[\x00-\x1F\x80-\xFF]', '', base_data.read())),
                     dtype={"DetailedDescription": object, "OtherDescription": object},
                     na_values=[''])
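The re.sub in that snippet deletes C0 control characters and bytes in the \x80-\xFF range, while the (?!\n) lookahead exempts newlines so the CSV's row structure survives. A small standalone illustration (the sample string is my own, not from the post):

```python
import re

raw = "Good\x00 value\x07, clean rooms\n"
cleaned = re.sub(r'(?!\n)[\x00-\x1F\x80-\xFF]', '', raw)
print(repr(cleaned))  # 'Good value, clean rooms\n' - control bytes gone, newline kept
```

Note that on Python 3 strings this also strips legitimate characters in U+0080-U+00FF (e.g. accented Latin-1 letters), so it is a fairly blunt instrument.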

Regarding "Python SpaCy creating an nlp document - Argument 'string' has incorrect type", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45748424/
