
python - Unable to tokenize multiple columns in a dataframe


I have a table that contains both numeric and string data, in different columns. The table holds the answers to a web form and contains empty cells. I want to apply text processing to the string columns. I cannot drop the rows that contain empty cells, so in the empty string cells I replaced NaN with the letter 'a'.
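(For reference, a minimal sketch of that NaN-replacement step, assuming a pandas DataFrame df with the column names from the sample below:)

import pandas as pd

# Fill NaN only in the two string columns; the numeric columns are left alone.
# 'a' is the placeholder described above.
for col in ['colmun_name1', 'column_name2']:
    df[col] = df[col].fillna('a')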

Sample data

colmun_name1   column_name2     column_name3  column_name4  classify
This is a cat  This is a dog    1             2             0
This is a rat  This is a mouse  45            32            1
a              Good mouse       0             0             0

I used the following code to make sure that all of the data in the string columns really is string data.

df2 = df[['colmun_name1', 'column_name2']]
for i in range(0, len(df2)):
    cell = df2.iloc[i]           # one row of the two string columns
    cell = cell.astype(str)      # cast every value in the row to str
    df2.iloc[i] = cell
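(The same cast can also be done without the loop; a one-line sketch, assuming pandas:)

# Cast both string columns in a single step instead of row by row.
df2 = df[['colmun_name1', 'column_name2']].astype(str)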

Then, when I tokenize, I get this error:

    <ipython-input-64-24a99733ba19> in <module>
1 from nltk.tokenize import word_tokenize
----> 2 tokenized_word=word_tokenize(df2)
3 print(tokenized_word)

/anaconda3/lib/python3.6/site-packages/nltk/tokenize/__init__.py in word_tokenize(text, language, preserve_line)
126 :type preserver_line: bool
127 """
--> 128 sentences = [text] if preserve_line else sent_tokenize(text, language)
129 return [token for sent in sentences
130 for token in _treebank_word_tokenizer.tokenize(sent)]

/anaconda3/lib/python3.6/site-packages/nltk/tokenize/__init__.py in sent_tokenize(text, language)
93 """
94 tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
---> 95 return tokenizer.tokenize(text)
96
97 # Standard word tokenizer.

/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in tokenize(self, text, realign_boundaries)
1239 Given a text, returns a list of the sentences in that text.
1240 """
-> 1241 return list(self.sentences_from_text(text, realign_boundaries))
1242
1243 def debug_decisions(self, text):

/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in sentences_from_text(self, text, realign_boundaries)
1289 follows the period.
1290 """
-> 1291 return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
1292
1293 def _slices_from_text(self, text):

/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in <listcomp>(.0)
1289 follows the period.
1290 """
-> 1291 return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
1292
1293 def _slices_from_text(self, text):

/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in span_tokenize(self, text, realign_boundaries)
1279 if realign_boundaries:
1280 slices = self._realign_boundaries(text, slices)
-> 1281 for sl in slices:
1282 yield (sl.start, sl.stop)
1283

/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in _realign_boundaries(self, text, slices)
1320 """
1321 realign = 0
-> 1322 for sl1, sl2 in _pair_iter(slices):
1323 sl1 = slice(sl1.start + realign, sl1.stop)
1324 if not sl2:

/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in _pair_iter(it)
311 """
312 it = iter(it)
--> 313 prev = next(it)
314 for el in it:
315 yield (prev, el)

/anaconda3/lib/python3.6/site-packages/nltk/tokenize/punkt.py in _slices_from_text(self, text)
1293 def _slices_from_text(self, text):
1294 last_break = 0
-> 1295 for match in self._lang_vars.period_context_re().finditer(text):
1296 context = match.group() + match.group('after_tok')
1297 if self.text_contains_sentbreak(context):

TypeError: expected string or bytes-like object

I tried changing it to

df2 = df['colmun_name1']['column_name2']

but I got the same error.

What should I do?

Best Answer

See How to apply NLTK word_tokenize library on a Pandas dataframe for Twitter data? In short: word_tokenize expects a single string, so calling it on the whole DataFrame raises TypeError: expected string or bytes-like object; apply it to each cell instead.

TL;DR

# Creates a `colmun_name1_tokenized` column by 
# taking the `colmun_name1` column and
# applying the word_tokenize function on every cell in the column.

>>> df['colmun_name1_tokenized'] = df['colmun_name1'].apply(word_tokenize)

>>> df.head()
    colmun_name1     column_name2  column_name3  column_name4  classify  \
0  This is a cat    This is a dog             1             2         0
1  This is a rat  This is a mouse            45            32         1
2              a       Good mouse             0             0         0

  colmun_name1_tokenized
0     [This, is, a, cat]
1     [This, is, a, rat]
2                    [a]
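Note: word_tokenize relies on the Punkt tokenizer data (the pickle being loaded in the traceback above). If it is missing you get a LookupError instead, and a one-time download fixes that:

import nltk
nltk.download('punkt')  # one-time download of the Punkt sentence tokenizer models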

If you need to tokenize multiple columns and want to overwrite those columns with the tokenized output:

>>> with StringIO(file_str) as fin:
...     df = pd.read_csv(fin, sep='\t')
...
>>> for col_name in ['colmun_name1', 'column_name2']:
...     df[col_name] = df[col_name].apply(word_tokenize)
...
>>> df.head()
         colmun_name1          column_name2  column_name3  column_name4  \
0  [This, is, a, cat]    [This, is, a, dog]             1             2
1  [This, is, a, rat]  [This, is, a, mouse]            45            32
2                 [a]         [Good, mouse]             0             0

   classify
0         0
1         1
2         0

Just the code:

from io import StringIO

import pandas as pd

from nltk import word_tokenize

file_str = """colmun_name1\tcolumn_name2\tcolumn_name3\tcolumn_name4\tclassify
This is a cat\tThis is a dog\t1\t2\t0
This is a rat\tThis is a mouse\t45\t32\t1
a\tGood mouse\t0\t0\t0"""

with StringIO(file_str) as fin:
    df = pd.read_csv(fin, sep='\t')

for col_name in ['colmun_name1', 'column_name2']:
    df[col_name] = df[col_name].apply(word_tokenize)
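One caveat tied to the question's empty-cell problem: apply(word_tokenize) raises the same TypeError: expected string or bytes-like object as soon as a cell is NaN or numeric. A defensive variant (an assumption, not part of the original answer) casts first:

for col_name in ['colmun_name1', 'column_name2']:
    # astype(str) turns NaN into the string 'nan' and numbers into digit strings,
    # so every cell can be tokenized; filter out placeholders afterwards if needed.
    df[col_name] = df[col_name].astype(str).apply(word_tokenize)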

Regarding python - Unable to tokenize multiple columns in a dataframe, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53811948/
