
python - Scraping pages with scrapy


I'm new to scrapy. I want to crawl the product pages of this site. My code scrapes only the first page, about 15 products, and then stops. I also want it to crawl the next pages. Any help?

Here is my class:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
# Assumed project layout; adjust to wherever AllyouneedItem is defined
from allyouneed.items import AllyouneedItem


class AllyouneedSpider(CrawlSpider):
    name = "allyouneed"
    allowed_domains = ["de.allyouneed.com"]

    start_urls = ['http://de.allyouneed.com/de/sportschuhe-/8799665488014/']

    rules = (
        # Product links on the listing page
        Rule(LxmlLinkExtractor(allow=(), restrict_xpaths='//*[@class="itm fst jf-lDiv"]//a[@href]'),
             callback='parse_obj', process_links='parse_filter'),
        # Search-hit links
        Rule(LxmlLinkExtractor(restrict_xpaths='//*[@id="M62_searchhit"]//a[@href]')),
    )

    def parse_filter(self, links):
        for link in links:
            if self.allowed_domains[0] not in link.url:
                pass  # print link.url
        # print links
        return links

    def parse_obj(self, response):
        item = AllyouneedItem()
        sel = scrapy.Selector(response)
        item['url'] = []
        url = response.selector.xpath('//*[@id="M62_searchhit"]//a[@href]').extract()
        ti = response.selector.xpath('//span[@itemprop="name"]/text()').extract()
        dec = response.selector.xpath('//div[@class="m-desc m-desc-t"]//text()').extract()
        cat = response.selector.xpath('//span[@itemprop="title"]/text()').extract()

        if ti:
            item['title'] = ti
            item['url'] = response.url
            item['category'] = cat
            item['decription'] = dec
            print(item)
            yield item

Best Answer

Using restrict_xpaths='//a[@class="nxtPge"]' will find the link to the next page; there is no need to extract every link on the page, only that one. You also don't need to filter the URLs yourself, because scrapy filters offsite requests against allowed_domains by default.

# callback belongs on the Rule, not the link extractor; follow=True keeps
# the spider paginating past the second page
Rule(LxmlLinkExtractor(allow=(), restrict_xpaths='//a[@class="nxtPge"]'), callback='parse_obj', follow=True),
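For reference, a minimal sketch of the complete rules tuple with pagination wired in, assuming the XPaths from the question and answer. An alternative to putting a callback on the pagination rule is to leave it off, in which case follow defaults to True and CrawlSpider simply walks the "next page" links while the product rule extracts items:

rules = (
    # Product detail links: parse each product page
    Rule(LxmlLinkExtractor(restrict_xpaths='//*[@class="itm fst jf-lDiv"]//a[@href]'),
         callback='parse_obj'),
    # Pagination: no callback, so follow defaults to True and every
    # listing page is visited
    Rule(LxmlLinkExtractor(restrict_xpaths='//a[@class="nxtPge"]')),
)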

You can also simplify parse_obj() by removing the selector parts; the item initialization can stay as it is:

item = AllyouneedItem()
url = response.xpath( etc...
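Putting that together, a sketch of the trimmed callback, assuming the same XPaths and item fields as in the question (including its 'decription' spelling):

def parse_obj(self, response):
    item = AllyouneedItem()
    # response.xpath() queries the response directly; no scrapy.Selector needed
    ti = response.xpath('//span[@itemprop="name"]/text()').extract()
    if ti:
        item['title'] = ti
        item['url'] = response.url
        item['category'] = response.xpath('//span[@itemprop="title"]/text()').extract()
        item['decription'] = response.xpath('//div[@class="m-desc m-desc-t"]//text()').extract()
        yield item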

About python - Scraping pages with scrapy: we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34271570/
