
python - Running Scrapy from a script, need help understanding it

Reposted — Author: 太空宇宙 · Updated: 2023-11-04 06:36:25

I'm relatively new to Python, so any help/advice would be much appreciated.

I'm trying to build a script that will run a Scrapy spider. So far I have the code below,

from scrapy.contrib.loader import XPathItemLoader
from scrapy.item import Item, Field
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
from scrapy.crawler import CrawlerProcess


class QuestionItem(Item):
    """Our SO Question Item"""
    title = Field()
    summary = Field()
    tags = Field()

    user = Field()
    posted = Field()

    votes = Field()
    answers = Field()
    views = Field()


class MySpider(BaseSpider):
    """Our ad-hoc spider"""
    name = "myspider"
    start_urls = ["http://stackoverflow.com/"]

    question_list_xpath = '//div[@id="content"]//div[contains(@class, "question-summary")]'

    def parse(self, response):
        hxs = HtmlXPathSelector(response)

        for qxs in hxs.select(self.question_list_xpath):
            loader = XPathItemLoader(QuestionItem(), selector=qxs)
            loader.add_xpath('title', './/h3/a/text()')
            loader.add_xpath('summary', './/h3/a/@title')
            loader.add_xpath('tags', './/a[@rel="tag"]/text()')
            loader.add_xpath('user', './/div[@class="started"]/a[2]/text()')
            loader.add_xpath('posted', './/div[@class="started"]/a[1]/span/@title')
            loader.add_xpath('votes', './/div[@class="votes"]/div[1]/text()')
            loader.add_xpath('answers', './/div[contains(@class, "answered")]/div[1]/text()')
            loader.add_xpath('views', './/div[@class="views"]/div[1]/text()')

            yield loader.load_item()


class CrawlerWorker(Process):
    def __init__(self, spider, results):
        Process.__init__(self)
        self.results = results

        self.crawler = CrawlerProcess(settings)
        if not hasattr(project, 'crawler'):
            self.crawler.install()
        self.crawler.configure()

        self.items = []
        self.spider = spider
        dispatcher.connect(self._item_passed, signals.item_passed)

    def _item_passed(self, item):
        self.items.append(item)

    def run(self):
        self.crawler.crawl(self.spider)
        self.crawler.start()
        self.crawler.stop()
        self.results.put(self.items)


def main():
    results = Queue()
    crawler = CrawlerWorker(MySpider(BaseSpider), results)
    crawler.start()
    for item in results.get():
        pass  # Do something with item

I get the error below,

ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (157, 0))
...
C:\Python27\lib\site-packages\twisted\internet\win32eventreactor.py:64: UserWarning: Reliable disconnection notification requires pywin32 215 or later
  category=UserWarning)
ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (157, 0))
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python27\lib\multiprocessing\forking.py", line 374, in main
    self = load(from_parent)
  File "C:\Python27\lib\pickle.py", line 1378, in load
ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (157, 0))
    return Unpickler(file).load()
  File "C:\Python27\lib\pickle.py", line 858, in load
    dispatch[key](self)
  File "C:\Python27\lib\pickle.py", line 1090, in load_global
    klass = self.find_class(module, name)
  File "C:\Python27\lib\pickle.py", line 1124, in find_class
    __import__(module)
  File "Webscrap.py", line 53, in <module>
    class CrawlerWorker(Process):
NameError: name 'Process' is not defined
ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (157, 0))
...
"PicklingError: <function remove at 0x07871CB0>: Can't pickle <function remove at 0x077F6BF0>: it's not found as weakref.remove".

I realize I'm doing something logically wrong here. Being new to this, I can't spot it. Can anyone help me get this code running?

Ultimately I just want a single script that runs, scrapes the required data, and stores it in a database — but first I just want to get the scraping working. I thought this would run it, but no luck so far.
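The immediate `NameError` in the traceback comes from missing imports: the script never imports `Process` and `Queue` from `multiprocessing` (nor `dispatcher`, `signals`, `settings`, or `project`). Separately, on Windows `multiprocessing` pickles the `Process` object and re-imports the module in the child process, which is likely why the live crawler state triggers the `PicklingError` above. The worker/queue pattern the script attempts can be sketched on its own with only the stdlib and only picklable state (all names here are illustrative stand-ins, not Scrapy's API):

```python
from multiprocessing import Process, Queue


class Worker(Process):
    """Runs a job in a child process and reports results back via a Queue."""

    def __init__(self, urls, results):
        Process.__init__(self)
        self.urls = urls          # plain, picklable data only
        self.results = results

    def run(self):
        # Stand-in for the crawl: pretend each URL yields one scraped item.
        items = [{"url": u, "title": "stub"} for u in self.urls]
        self.results.put(items)


if __name__ == "__main__":  # required on Windows: the child re-imports this module
    results = Queue()
    worker = Worker(["http://stackoverflow.com/"], results)
    worker.start()
    items = results.get()     # blocks until the child puts its results
    worker.join()
    print(len(items))
```

Keeping the `Process` subclass free of unpicklable attributes (sockets, reactors, a running crawler) is what makes this pattern work on Windows.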

Best Answer

I assume you want a stand-alone spider/crawler... This is actually quite simple, although I don't use a custom Process:

class StandAloneSpider(CyledgeSpider):
    # a regular spider
    pass

settings.overrides['LOG_ENABLED'] = True
# more settings can be changed...

crawler = CrawlerProcess(settings)
crawler.install()
crawler.configure()

spider = StandAloneSpider()

crawler.crawl(spider)
crawler.start()
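On the question's final goal of storing the scraped data in a database: once items come back from a crawl, persisting them is straightforward with the stdlib's `sqlite3`. A sketch assuming dict-like items with a few of the `QuestionItem` fields (the table and column names here are made up for illustration):

```python
import sqlite3


def store_items(items, db_path=":memory:"):
    """Insert scraped question dicts into SQLite; returns the stored row count."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS questions (title TEXT, votes TEXT, views TEXT)"
    )
    # Named placeholders map directly onto dict-shaped items.
    conn.executemany(
        "INSERT INTO questions VALUES (:title, :votes, :views)",
        items,
    )
    conn.commit()
    count = conn.execute("SELECT COUNT(*) FROM questions").fetchone()[0]
    conn.close()
    return count
```

Pointing `db_path` at a file instead of `:memory:` makes the results persist between runs.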

Regarding "python - Running Scrapy from a script, need help understanding it", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/10461024/
