
python - Encoding a text column in a Pandas dataframe


Where am I going wrong? I'm trying to iterate over each row of my dataframe and encode the text.

data['text'] = data.apply(lambda row:
    codecs(row['text'], "r", 'utf-8'), axis=1)

I get this error. Why does the UTF encoding affect this part of the code? If I don't run the UTF encoding, I don't get the error:

    TypeError                                 Traceback (most recent call last)
<ipython-input-101-0e1d5977a3b3> in <module>
----> 1 data['text'] = codecs(data['text'], "r", 'utf-8')
2
3 data['text'] = data.apply(lambda row:
4 codecs(row['text'], "r", 'utf-8'), axis=1)

TypeError: 'module' object is not callable

When I apply these solutions, both work, but then I get this error:

    data['text_tokens'] = data.apply(lambda row:
        nltk.word_tokenize(row['text']), axis=1)

Error:

---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-138-73972d522748> in <module>
1 data['text_tokens'] = data.apply(lambda row:
----> 2 nltk.word_tokenize(row['text']), axis=1)

~/env/lib64/python3.6/site-packages/pandas/core/frame.py in apply(self, func, axis, broadcast, raw, reduce, result_type, args, **kwds)
6485 args=args,
6486 kwds=kwds)
-> 6487 return op.get_result()
6488
6489 def applymap(self, func):

~/env/lib64/python3.6/site-packages/pandas/core/apply.py in get_result(self)
149 return self.apply_raw()
150
--> 151 return self.apply_standard()
152
153 def apply_empty_result(self):

~/env/lib64/python3.6/site-packages/pandas/core/apply.py in apply_standard(self)
255
256 # compute the result using the series generator
--> 257 self.apply_series_generator()
258
259 # wrap results

~/env/lib64/python3.6/site-packages/pandas/core/apply.py in apply_series_generator(self)
284 try:
285 for i, v in enumerate(series_gen):
--> 286 results[i] = self.f(v)
287 keys.append(v.name)
288 except Exception as e:

<ipython-input-138-73972d522748> in <lambda>(row)
1 data['text_tokens'] = data.apply(lambda row:
----> 2 nltk.word_tokenize(row['text']), axis=1)

~/env/lib64/python3.6/site-packages/nltk/tokenize/__init__.py in word_tokenize(text, language, preserve_line)
142 :type preserve_line: bool
143 """
--> 144 sentences = [text] if preserve_line else sent_tokenize(text, language)
145 return [
146 token for sent in sentences for token in _treebank_word_tokenizer.tokenize(sent)

~/env/lib64/python3.6/site-packages/nltk/tokenize/__init__.py in sent_tokenize(text, language)
104 """
105 tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
--> 106 return tokenizer.tokenize(text)
107
108

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in tokenize(self, text, realign_boundaries)
1275 Given a text, returns a list of the sentences in that text.
1276 """
-> 1277 return list(self.sentences_from_text(text, realign_boundaries))
1278
1279 def debug_decisions(self, text):

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in sentences_from_text(self, text, realign_boundaries)
1329 follows the period.
1330 """
-> 1331 return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
1332
1333 def _slices_from_text(self, text):

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in <listcomp>(.0)
1329 follows the period.
1330 """
-> 1331 return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
1332
1333 def _slices_from_text(self, text):

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in span_tokenize(self, text, realign_boundaries)
1319 if realign_boundaries:
1320 slices = self._realign_boundaries(text, slices)
-> 1321 for sl in slices:
1322 yield (sl.start, sl.stop)
1323

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in _realign_boundaries(self, text, slices)
1360 """
1361 realign = 0
-> 1362 for sl1, sl2 in _pair_iter(slices):
1363 sl1 = slice(sl1.start + realign, sl1.stop)
1364 if not sl2:

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in _pair_iter(it)
316 it = iter(it)
317 try:
--> 318 prev = next(it)
319 except StopIteration:
320 return

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in _slices_from_text(self, text)
1333 def _slices_from_text(self, text):
1334 last_break = 0
-> 1335 for match in self._lang_vars.period_context_re().finditer(text):
1336 context = match.group() + match.group('after_tok')
1337 if self.text_contains_sentbreak(context):

TypeError: ('cannot use a string pattern on a bytes-like object', 'occurred at index 0')

Best Answer

Encoding

As the first error says, codecs is not callable; it is in fact the name of a module.

You probably want:

data['text'] = data.apply(lambda row:
    codecs.encode(row['text'], 'utf-8'), axis=1)
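
As a side note, since 'utf-8' is a standard text encoding, the built-in str.encode produces the same bytes, so the codecs module is not strictly needed. A minimal sketch, assuming the same data dataframe with a 'text' column of strings:

    # equivalent to codecs.encode for standard text encodings such as UTF-8
    data['text'] = data['text'].apply(lambda s: s.encode('utf-8'))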

Tokenization

The error raised by word_tokenize comes from applying the function to the previously encoded strings: codecs.encode turns the text into bytes literals.
From the codecs docs:

Most standard codecs are text encodings, which encode text to bytes, but there are also codecs provided that encode text to text, and bytes to bytes.

word_tokenize does not work on a bytes literal, as the error says (last line of the traceback).
If you remove the encoding step, it will work.
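
For illustration, here is a minimal sketch of tokenizing the plain (already Unicode) strings directly; the small dataframe is made up, and NLTK's punkt data is assumed to be downloaded:

    import nltk
    import pandas as pd

    # nltk.download('punkt')  # needed once before using word_tokenize

    # a small stand-in for the real dataframe
    data = pd.DataFrame({'text': ["First sentence here.", "Another short example."]})

    # tokenize the str values directly - no codecs.encode step
    data['text_tokens'] = data['text'].apply(nltk.word_tokenize)
    print(data['text_tokens'][0])  # ['First', 'sentence', 'here', '.']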


Regarding your concern from the video: the prefix u marks a Unicode string;
the prefix b marks a bytes literal, and that is the prefix you see on the strings if you print the dataframe after using codecs.encode.
In Python 3 (I can see from the traceback that your version is 3.6) the default string type is Unicode, so the u is redundant and usually not shown, but the strings are already Unicode.
So I'm pretty sure you are safe: you can safely skip codecs.encode.
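
A quick sketch of that distinction, with an illustrative value:

    import codecs

    s = "café"                     # a Python 3 str is already Unicode text
    b = codecs.encode(s, 'utf-8')  # encoding yields bytes: b'caf\xc3\xa9'
    print(type(s), type(b))        # <class 'str'> <class 'bytes'>
    print(b.decode('utf-8') == s)  # True - decoding restores the original str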

Regarding python - Encoding a text column in a Pandas dataframe, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57012428/
