
python - UTF-8 encoding and dictionary lookup


I wrote a web scraper with Scrapy to collect some data that supplements the information I already have on a few companies. Before writing out the records, I want to match them against my old ones, so I built a dictionary with the company name as the key and some related data as the value. The problem I keep hitting is encoding: Test_of_company.csv is UTF-8 encoded (I converted it to UTF-8 in Notepad++), yet I keep getting the exception UnicodeDecodeError: 'utf8' codec can't decode byte 0xe4 in position 67: invalid continuation byte. Since accumulator holds UTF-8 data while check ends up with a different encoding, I can't match the records. All in all, I find encoding in Python 2.7 to be a real hassle.
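For reference, byte 0xE4 is 'ä' in Latin-1/Windows-1252, so this error usually means the file was not actually saved as UTF-8. A minimal Python 2.7 snippet (using a made-up sample string, not the OP's data) that reproduces it:

# encoding: utf-8
# Byte 0xE4 is 'ä' in Latin-1/Windows-1252; in UTF-8 it would have to start
# a multi-byte sequence, hence "invalid continuation byte".
data = u'Westfälisch'.encode('latin-1')   # simulate the bytes of a Latin-1 file
try:
    data.decode('utf-8')                  # raises UnicodeDecodeError
except UnicodeDecodeError as e:
    print e                               # 'utf8' codec can't decode byte 0xe4 ...
print data.decode('latin-1')              # decodes cleanly: u'Westf\xe4lisch'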

The pipeline file from my Scrapy project:

import codecs
import collections
import os


class DomainPipeline(object):
    accumulator = collections.defaultdict(list)
    check = collections.defaultdict(list)

    def process_item(self, item, spider):
        output = "{0},{1},{2},{3},{4},{5},{6}".format(
            item['founded'], item['employee3'], item['employee2'],
            item['employee1'], item['rev3'], item['rev2'], item['rev1'])
        self.accumulator[item['company']].append(output)
        return item

    def close_spider(self, spider):
        root = os.getcwd()
        p = os.path.join(root, 'Company_Lists', 'Test_of_company.csv')
        with codecs.open(p, 'r', 'utf-8') as f:
            for line in f:  # <- the UnicodeDecodeError is raised here
                field = line.split(',')
                company = str(field[1])
                self.check[company.strip()].append(line)

        file = open('output.txt', 'w')
        for company, record in self.check.items():
            for person in record:
                for info in self.accumulator[company]:
                    output = "{0},{1}\n".format(person.strip(), info)
                    file.write(output)
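One way to sidestep the mixed encodings (a sketch of my own, not the OP's final code) is to keep every string unicode inside the pipeline and let codecs do the encoding exactly once, at the output boundary. This makes check and accumulator comparable and the write side safe; the read side still requires the CSV to really be UTF-8 (see the answer below):

# Sketch: a drop-in replacement for DomainPipeline.close_spider that keeps
# every string unicode and encodes once, on output.
def close_spider(self, spider):
    p = os.path.join(os.getcwd(), 'Company_Lists', 'Test_of_company.csv')
    with codecs.open(p, 'r', 'utf-8') as f:
        for line in f:
            company = line.split(',')[1].strip()  # stays unicode; str() could raise
            self.check[company].append(line)

    with codecs.open('output.txt', 'w', 'utf-8') as out:  # encodes on write
        for company, record in self.check.items():
            for person in record:
                for info in self.accumulator[company]:
                    out.write(u"{0},{1}\n".format(person.strip(), info))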

Log file:

2014-02-09 17:47:27+0000 [AllaBolag] INFO: Closing spider (finished)
2014-02-09 17:47:27+0000 [AllaBolag] Unhandled Error
Traceback (most recent call last):
  File "C:\Anaconda\lib\site-packages\scrapy\middleware.py", line 59, in _process_parallel
    return process_parallel(self.methods[methodname], obj, *args)
  File "C:\Anaconda\lib\site-packages\scrapy\utils\defer.py", line 84, in process_parallel
    dfds = [defer.succeed(input).addCallback(x, *a, **kw) for x in callbacks]
  File "C:\Anaconda\lib\site-packages\twisted\internet\defer.py", line 306, in addCallback
    callbackKeywords=kw)
  File "C:\Anaconda\lib\site-packages\twisted\internet\defer.py", line 295, in addCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "C:\Anaconda\lib\site-packages\twisted\internet\defer.py", line 577, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "Autotask_Prospecting\pipelines.py", line 28, in close_spider
    for line in f:
  File "C:\Anaconda\lib\codecs.py", line 684, in next
    return self.reader.next()
  File "C:\Anaconda\lib\codecs.py", line 615, in next
    line = self.readline()
  File "C:\Anaconda\lib\codecs.py", line 530, in readline
    data = self.read(readsize, firstline=True)
  File "C:\Anaconda\lib\codecs.py", line 477, in read
    newchars, decodedbytes = self.decode(data, self.errors)
exceptions.UnicodeDecodeError: 'utf8' codec can't decode byte 0xe4 in position 67: invalid continuation byte

2014-02-09 17:47:27+0000 [AllaBolag] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 9699,
         'downloader/request_count': 23,
         'downloader/request_method_count/GET': 23,
         'downloader/response_bytes': 618283,
         'downloader/response_count': 23,
         'downloader/response_status_count/200': 23,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2014, 2, 9, 17, 47, 27, 889000),
         'item_scraped_count': 10,
         'log_count/DEBUG': 39,
         'log_count/ERROR': 3,
         'log_count/INFO': 3,
         'request_depth_max': 1,
         'response_received_count': 23,
         'scheduler/dequeued': 23,
         'scheduler/dequeued/memory': 23,
         'scheduler/enqueued': 23,
         'scheduler/enqueued/memory': 23,
         'spider_exceptions/IndexError': 2,
         'start_time': datetime.datetime(2014, 2, 9, 17, 47, 25, 428000)}
2014-02-09 17:47:27+0000 [AllaBolag] INFO: Spider closed (finished)

Best answer

Since the OP has already found his own answer, I'll add a useful option for debugging any exception.

Scrapy has a very handy command-line option called --pdb:

$ scrapy crawl -h
Usage
=====
...


Global Options
--------------
...
--pdb enable pdb on failure

For example, to reproduce your error I used the following spider code:

# file: myspider.py
# encoding: utf-8
import codecs
import tempfile

from scrapy.spider import Spider


class MyspiderSpider(Spider):
    name = "myspider"
    start_urls = ["http://www.example.org/"]

    def parse(self, response):
        filename = self._create_test_file()
        with codecs.open(filename, 'r', 'utf-8') as fp:
            for line in fp:
                self.log(line)

    def _create_test_file(self):
        fp = tempfile.NamedTemporaryFile(delete=False)
        fp.write(u'Westfälisch'.encode('latin1'))
        return fp.name

Running the spider with the --pdb option then drops you into the Python debugger when the exception pops up. Below is an example session showing how to find the failing value, reproduce the exception, and arrive at a solution:

$ scrapy crawl myspider --pdb
2014-02-10 13:05:53-0400 [scrapy] INFO: Scrapy 0.22.1 started (bot: myproject)
...
2014-02-10 13:05:53-0400 [myspider] DEBUG: Crawled (200) <GET http://www.example.org/> (referer: None)
Jumping into debugger for post-mortem of exception ''utf8' codec can't decode byte 0xe4 in position 5: invalid continuation byte':
> /usr/lib/python2.7/codecs.py(477)read()
-> newchars, decodedbytes = self.decode(data, self.errors)
(Pdb) print data
Westf�lisch
(Pdb) repr(data)
"'Westf\\xe4lisch'"
(Pdb) data.decode('utf8')
*** UnicodeDecodeError: 'utf8' codec can't decode byte 0xe4 in position 5: invalid continuation byte
(Pdb) data.decode('latin1')
u'Westf\xe4lisch'
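Once the debugger confirms that the bytes decode as Latin-1 but not as UTF-8, the fix follows: either re-save the CSV as genuine UTF-8 (e.g. in Notepad++, as the OP intended), or open it with the encoding it actually uses. A minimal sketch of the latter, assuming the file really is Latin-1:

import codecs

# If the CSV is actually Latin-1/Windows-1252 rather than UTF-8,
# read it with the codec it really uses; each line is then unicode.
with codecs.open('Company_Lists/Test_of_company.csv', 'r', 'latin-1') as f:
    for line in f:
        print line.strip()  # illustrative only; handle each line as needed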

On python - UTF-8 encoding and dictionary lookup, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/21660875/
