
selenium - Deferring part of the scraping in Scrapy

Reposted. Author: 行者123. Updated: 2023-12-01 02:33:46

I have the parse method given below. I use Selenium to load a page first and visit certain pages that the spider cannot reach directly, collecting individual URLs that are handed off to another parse method, which extracts items from each page. The problem is that this parse method blocks the other parses until every page has been visited, which chokes the system. I tried adding a sleep, but that stops the whole engine, not just this parse method.

Any pointers on how I can optimize this, or at least make the sleep work so that it doesn't stop the engine?

# Imports assumed from the rest of the spider file:
from pyvirtualdisplay import Display
from selenium import webdriver
from scrapy import log
from scrapy.http import Request
from scrapy.conf import settings  # Scrapy 0.x-style global settings
import time

def parse(self, response):
    '''Parse first page and extract page links'''

    item_link_xpath = "/html/body/form/div[@class='wrapper']//a[@title='View & Apply']"
    pagination_xpath = "//div[@class='pagination']/input"
    page_xpath = pagination_xpath + "[@value=%d]"

    display = Display(visible=0, size=(800, 600))
    display.start()

    browser = webdriver.Firefox()
    browser.get(response.url)
    log.msg('Loaded search results', level=log.DEBUG)

    page_no = 1
    while True:
        log.msg('Scraping page: %d' % page_no, level=log.DEBUG)
        for link in [item_link.get_attribute('href') for item_link in browser.find_elements_by_xpath(item_link_xpath)]:
            yield Request(link, callback=self.parse_item_page)
        page_no += 1
        log.msg('Using xpath: %s' % (page_xpath % page_no), level=log.DEBUG)
        page_element = browser.find_element_by_xpath(page_xpath % page_no)
        if not page_element or page_no > settings['PAGINATION_PAGES']:
            break
        page_element.click()
        if settings['PAGINATION_SLEEP_INTERVAL']:
            seconds = int(settings['PAGINATION_SLEEP_INTERVAL'])
            log.msg('Sleeping for %d' % seconds, level=log.DEBUG)
            time.sleep(seconds)
    log.msg('Scraped listing pages, closing browser.', level=log.DEBUG)
    browser.close()
    display.stop()
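The blocking behavior described above can be reproduced outside Scrapy. In a minimal toy sketch (plain Python, no Scrapy or Selenium), two handlers share one thread of control, so a time.sleep inside the first handler delays everything queued behind it:

```python
import time

# Toy demonstration (not Scrapy) of why time.sleep inside one parse
# callback stalls everything: with a single thread of control, the
# sleeping handler holds up every handler queued after it.
def slow_handler(results):
    time.sleep(0.1)            # stands in for PAGINATION_SLEEP_INTERVAL
    results.append('slow')

def fast_handler(results):
    results.append('fast')

results = []
start = time.monotonic()
slow_handler(results)          # queued first, so it runs first
fast_handler(results)          # cannot run until the sleep finishes
elapsed = time.monotonic() - start
print(results)                 # ['slow', 'fast']
print(elapsed >= 0.1)          # True: the fast handler waited too
```

Scrapy's engine runs on a single-threaded Twisted reactor, so a sleeping callback has exactly this effect on every other pending request.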

Best Answer

This might help:

# delayspider.py
from scrapy.spider import BaseSpider
from twisted.internet import reactor, defer
from scrapy.http import Request

DELAY = 5  # seconds


class MySpider(BaseSpider):

    name = 'wikipedia'
    max_concurrent_requests = 1

    start_urls = ['http://www.wikipedia.org']

    def parse(self, response):
        nextreq = Request('http://en.wikipedia.org')
        dfd = defer.Deferred()
        reactor.callLater(DELAY, dfd.callback, nextreq)
        return dfd

Output:
$ scrapy runspider delayspider.py 
2012-05-24 11:01:54-0300 [scrapy] INFO: Scrapy 0.15.1 started (bot: scrapybot)
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Enabled item pipelines:
2012-05-24 11:01:54-0300 [wikipedia] INFO: Spider opened
2012-05-24 11:01:54-0300 [wikipedia] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-05-24 11:01:56-0300 [wikipedia] DEBUG: Crawled (200) <GET http://www.wikipedia.org> (referer: None)
2012-05-24 11:02:04-0300 [wikipedia] DEBUG: Redirecting (301) to <GET http://en.wikipedia.org/wiki/Main_Page> from <GET http://en.wikipedia.org>
2012-05-24 11:02:06-0300 [wikipedia] DEBUG: Crawled (200) <GET http://en.wikipedia.org/wiki/Main_Page> (referer: http://www.wikipedia.org)
2012-05-24 11:02:11-0300 [wikipedia] INFO: Closing spider (finished)
2012-05-24 11:02:11-0300 [wikipedia] INFO: Dumping spider stats:
{'downloader/request_bytes': 745,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'downloader/response_bytes': 29304,
'downloader/response_count': 3,
'downloader/response_status_count/200': 2,
'downloader/response_status_count/301': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2012, 5, 24, 14, 2, 11, 447498),
'request_depth_max': 2,
'scheduler/memory_enqueued': 3,
'start_time': datetime.datetime(2012, 5, 24, 14, 1, 54, 408882)}
2012-05-24 11:02:11-0300 [wikipedia] INFO: Spider closed (finished)
2012-05-24 11:02:11-0300 [scrapy] INFO: Dumping global stats:
{}

It uses Twisted's callLater to sleep. Instead of blocking the reactor with time.sleep, the spider returns a Deferred and schedules its callback to fire with the next Request after DELAY seconds, so the engine stays free to process other requests in the meantime.
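The scheduling idea generalizes beyond Twisted. As a stdlib-only sketch of the same pattern (the sched module standing in for the Twisted reactor; names here are illustrative, not Scrapy API), a handler registers a callback to fire after a delay rather than sleeping, so work scheduled for "now" is not held up by it:

```python
import sched
import time

# Stdlib sketch of the callLater pattern: rather than blocking inside
# a handler, register a callback to run after a delay.  Work scheduled
# for "now" runs first; the delayed callback fires afterwards.
events = []
s = sched.scheduler(time.monotonic, time.sleep)

def emit(name):
    events.append(name)

DELAY = 0.05  # seconds, stands in for the answer's DELAY

# The "parse" step schedules its follow-up for later instead of sleeping:
s.enter(DELAY, 1, emit, argument=('delayed-request',))
# Other work queued at time zero is not blocked by that delay:
s.enter(0, 1, emit, argument=('other-work',))

s.run()
print(events)  # ['other-work', 'delayed-request']
```

The difference from time.sleep is ordering, not magic: the delay is attached to one callback instead of freezing the whole loop, which is exactly what reactor.callLater achieves inside Scrapy's engine.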

Regarding "selenium - Deferring part of the scraping in Scrapy", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11373181/
