
python - In my Scrapy spider, the parse method is never called and prints nothing


My parse method is never called and nothing is printed. Here is my code; I'd appreciate any help fixing it.

import scrapy
from scrapy.crawler import CrawlerProcess


class myspider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        print("h1" + "\n")
        Url = "https://www.datacamp.com/courses"
        return scrapy.Request(url=Url, callback=self.parse)

    def parse(self, response):
        print("hello")


process = CrawlerProcess()
process.crawl(myspider)
process.start()

Best Answer

Your mistake is here: start_requests uses return, which is why "h1" is printed but "hello" never is. Scrapy expects start_requests to return an iterable of requests, so use yield. You could get away with return only in the last function of the callback chain (in this case, parse), but even there yield is preferable. Something like this:

import scrapy
from scrapy.crawler import CrawlerProcess


class myspider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        print("h1")
        url = "https://www.datacamp.com/courses"
        yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        print("hello")
        # Deduplicate the course titles found on the listing page
        blabla = set(response.css('.course-block__title::text').getall())
        for bla in blabla:
            print(bla)
            yield {
                'coursename': bla
            }


process = CrawlerProcess()
process.crawl(myspider)
process.start()
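
If you also want to persist the yielded items instead of just printing them, you can pass settings to CrawlerProcess. A minimal sketch, assuming Scrapy 2.1+ (where the FEEDS setting is available); the output filename courses.json is just an example:

from scrapy.crawler import CrawlerProcess

# FEEDS (Scrapy 2.1+) writes every item the spider yields to a file;
# "courses.json" is an arbitrary example filename.
process = CrawlerProcess(settings={
    "FEEDS": {
        "courses.json": {"format": "json"},
    },
})
process.crawl(myspider)
process.start()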

Also, it helps to show the error traceback. In your case, with return instead of yield, it looks something like this:

h1
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-08-04 21:07:11 [scrapy.middleware] INFO: Enabled item pipelines:
[]
Unhandled error in Deferred:
2019-08-04 21:07:11 [twisted] CRITICAL: Unhandled error in Deferred:

Traceback (most recent call last):
File "D:\anaconda3\lib\site-packages\scrapy\crawler.py", line 184, in crawl
return self._crawl(crawler, *args, **kwargs)
File "D:\anaconda3\lib\site-packages\scrapy\crawler.py", line 188, in _crawl
d = crawler.crawl(*args, **kwargs)
File "D:\anaconda3\lib\site-packages\twisted\internet\defer.py", line 1613, in unwindGenerator
return _cancellableInlineCallbacks(gen)
File "D:\anaconda3\lib\site-packages\twisted\internet\defer.py", line 1529, in _cancellableInlineCallbacks
_inlineCallbacks(None, g, status)
--- <exception caught here> ---
File "D:\anaconda3\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "D:\anaconda3\lib\site-packages\scrapy\crawler.py", line 87, in crawl
start_requests = iter(self.spider.start_requests())
builtins.TypeError: 'Request' object is not iterable

2019-08-04 21:07:11 [twisted] CRITICAL:
Traceback (most recent call last):
File "D:\anaconda3\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "D:\anaconda3\lib\site-packages\scrapy\crawler.py", line 87, in crawl
start_requests = iter(self.spider.start_requests())
TypeError: 'Request' object is not iterable
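
The TypeError at the bottom is the key: the crawler calls iter() on whatever start_requests returns, so it must be an iterable of requests. A generator (yield) satisfies that; a bare Request object does not. If you do prefer return, wrapping the request in a list also works, as in this minimal sketch:

def start_requests(self):
    url = "https://www.datacamp.com/courses"
    # A list is iterable, so return works here too; a bare
    # scrapy.Request object is not, hence the TypeError above.
    return [scrapy.Request(url=url, callback=self.parse)]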

Regarding "python - In my Scrapy spider, the parse method is never called and prints nothing", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57347335/
