Scrapy crawl resume does not crawl anything and just finishes

I start a crawl with a CrawlSpider-derived class and then pause it with Ctrl+C. When I run the command again to resume it, it does not continue.

My command for starting and resuming:

scrapy crawl mycrawler -s JOBDIR=crawls/test5_mycrawl

Scrapy creates the folder; its permissions are 777.

When I resume the crawl, it only outputs:

/home/adminuser/.virtualenvs/rg_harvest/lib/python2.7/site-packages/twisted/internet/_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.
verifyHostname, VerificationError = _selectVerifyImplementation()
2014-11-21 11:05:10-0500 [scrapy] INFO: Scrapy 0.24.4 started (bot: rg_harvest_scrapy)
2014-11-21 11:05:10-0500 [scrapy] INFO: Optional features available: ssl, http11, django
2014-11-21 11:05:10-0500 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'rg_harvest_scrapy.spiders', 'SPIDER_MODULES': ['rg_harvest_scrapy.spiders'], 'BOT_NAME': 'rg_harvest_scrapy'}
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled item pipelines: ValidateMandatory, TypeConversion, ValidateRange, ValidateLogic, RestegourmetImagesPipeline, SaveToDB
2014-11-21 11:05:10-0500 [mycrawler] INFO: Spider opened
2014-11-21 11:05:10-0500 [mycrawler] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-11-21 11:05:10-0500 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-11-21 11:05:10-0500 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-11-21 11:05:10-0500 [mycrawler] DEBUG: Crawled (200) <GET http://eatsmarter.de/suche/rezepte> (referer: None)
2014-11-21 11:05:10-0500 [mycrawler] DEBUG: Filtered duplicate request: <GET http://eatsmarter.de/suche/rezepte?page=1> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2014-11-21 11:05:10-0500 [mycrawler] INFO: Closing spider (finished)
2014-11-21 11:05:10-0500 [mycrawler] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 225,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 19242,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'dupefilter/filtered': 29,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 11, 21, 16, 5, 10, 733196),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'request_depth_max': 1,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/disk': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/disk': 1,
'start_time': datetime.datetime(2014, 11, 21, 16, 5, 10, 528629)}

I have a single start_url. Could that be the cause? My crawler uses one start_url, follows the pagination through a LinkExtractor rule, and calls parse_item for URLs matching a specific pattern:

My spider code:

from datetime import datetime

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

# MyItemLoader and RecipeItem are project-specific and defined elsewhere.


class MyCrawlSpiderBase(CrawlSpider):
    name = 'test_spider'

    testmode = True
    crawl_start = datetime.utcnow().isoformat()

    def __init__(self, testmode=True, urls=None, *args, **kwargs):
        self.testmode = bool(int(testmode))
        super(MyCrawlSpiderBase, self).__init__(*args, **kwargs)

    def parse_item(self, response):
        # Item values common to all derived spiders.
        l = MyItemLoader(RecipeItem(), response=response)

        l.replace_value('url', response.url)
        l.replace_value('crawl_start', self.crawl_start)

        return l.load_item()


class MyCrawlSpider(MyCrawlSpiderBase):
    name = 'example_de'
    allowed_domains = ['example.de']
    start_urls = [
        "http://example.de",
    ]

    rules = (
        # Follow pagination links.
        Rule(
            LinkExtractor(
                allow=('/search/entry\?page=', )
            )
        ),
        # Extract entry pages and hand them to parse_item.
        Rule(
            LinkExtractor(
                allow=('/entry/[0-9A-z\-]{3,}$', ),
            ),
            callback='parse_item'
        ),
    )

    def parse_item(self, response):
        item = super(MyCrawlSpider, self).parse_item(response)

        l = MyItemLoader(item=item, response=response)

        l.replace_xpath("name", "//h1[@class='fn title']/text()")

        (...)

        return l.load_item()

Best Answer

Since your start URL is always the same, the request is most likely being filtered out as a duplicate: with JOBDIR set, the default RFPDupeFilter persists the fingerprints of already-seen requests (in the requests.seen file), so on resume the start URL is treated as already visited. You can solve this in two ways:

  1. In your settings.py file, add the following line:
    DUPEFILTER_CLASS = 'scrapy.dupefilter.BaseDupeFilter'
    This replaces the default RFPDupeFilter with a BaseDupeFilter, which does not filter any requests. This may not be what you want if you actually need to filter out other duplicate requests that are unrelated to this problem.

  2. You can get more involved in the creation of your requests and build them with the argument dont_filter=True, which disables filtering on a per-request basis. To do this, remove start_urls and replace it with a start_requests() method that yields the requests to be parsed; see the official documentation for details, and the sketch right after this list.
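
A minimal sketch of option 2, reusing the spider from the question (the class name, domain, and start URL come from the question; leaving the request callback unset, so that CrawlSpider's built-in parse() still applies the rules, is an assumption):

from scrapy.http import Request

class MyCrawlSpider(MyCrawlSpiderBase):
    name = 'example_de'
    allowed_domains = ['example.de']

    # start_urls removed; the start request is built explicitly instead.
    def start_requests(self):
        # dont_filter=True makes the scheduler accept this request even
        # though its fingerprint is already stored in the JOBDIR's
        # requests.seen file, so a resumed crawl re-fetches the start page.
        # No callback is passed, so CrawlSpider's default parse() applies
        # the rules defined on the class.
        yield Request("http://example.de", dont_filter=True)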

Regarding "Scrapy crawl resume does not crawl anything and just finishes", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27065816/
