
python - Scrapy CrawlSpider not following links

Reposted. Author: 行者123. Updated: 2023-12-01 04:41:00

I am trying to scrape some attributes from all (#123) detail pages listed on this category page - http://stinkybklyn.com/shop/cheese/ - but Scrapy does not follow the link pattern I set. I have also checked the Scrapy documentation and some tutorials, but no luck!

Here is the code:

import scrapy

from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

class Stinkybklyn(CrawlSpider):
    name = "Stinkybklyn"
    allowed_domains = ["stinkybklyn.com"]
    start_urls = [
        "http://stinkybklyn.com/shop/cheese/chandoka",
    ]
    Rule(LinkExtractor(allow=r'\/shop\/cheese\/.*'),
         callback='parse_items', follow=True)

    def parse_items(self, response):
        print "response", response
        hxs = HtmlXPathSelector(response)
        title = hxs.select("//*[@id='content']/div/h4").extract()
        title = "".join(title)
        title = title.strip().replace("\n", "").lstrip()
        print "title is:", title

Can someone advise on what I am doing wrong here?

Best Answer

The key problem with your code is that you have not set rules for the CrawlSpider: the Rule is created as a bare expression in the class body instead of being assigned to the rules attribute, so the spider never uses it.

Other improvements I would suggest:

  • no need to instantiate HtmlXPathSelector; you can use response directly
  • select() is deprecated now; use xpath() instead
  • get the text() of the title element so that you retrieve, for example, Chandoka rather than <h4>Chandoka</h4>
  • I think you should start from the cheese shop catalog page: http://stinkybklyn.com/shop/cheese
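As a side check, the pattern passed to LinkExtractor's allow argument is an ordinary regular expression that is searched against each candidate URL, so you can verify it in isolation with Python's re module. A minimal sketch; the sample URLs here are made up for illustration:

```python
import re

# Same pattern as in the spider's Rule.
pattern = re.compile(r'\/shop\/cheese\/.*')

# Hypothetical candidate URLs, to show what the pattern does and does not match.
urls = [
    "http://stinkybklyn.com/shop/cheese/chandoka",  # detail page
    "http://stinkybklyn.com/shop/cheese/",          # category page
    "http://stinkybklyn.com/about",                 # unrelated page
]

for url in urls:
    # search() finds the pattern anywhere in the URL, which is how
    # LinkExtractor applies allow patterns.
    print(url, bool(pattern.search(url)))
```

If a spider is not following links, checking the allow regex against real page URLs this way is a quick first diagnostic.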

Complete code with the improvements applied:

from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule


class Stinkybklyn(CrawlSpider):
    name = "Stinkybklyn"
    allowed_domains = ["stinkybklyn.com"]

    start_urls = [
        "http://stinkybklyn.com/shop/cheese",
    ]

    rules = [
        Rule(LinkExtractor(allow=r'\/shop\/cheese\/.*'),
             callback='parse_items', follow=True)
    ]

    def parse_items(self, response):
        title = response.xpath("//*[@id='content']/div/h4/text()").extract()
        title = "".join(title)
        title = title.strip().replace("\n", "").lstrip()
        print "title is:", title
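The string clean-up applied to the extracted title can also be checked on its own: xpath(...).extract() returns a list of strings, which the answer joins and trims. A small sketch with a hypothetical extracted value:

```python
# A made-up result of xpath("...h4/text()").extract(): one string with
# the stray whitespace and newlines typical of HTML text nodes.
parts = ["\n    Chandoka\n"]

title = "".join(parts)                        # join the list into one string
title = title.strip().replace("\n", "")       # drop surrounding/internal newlines

print(title)  # -> Chandoka
```

Note that strip() already removes leading whitespace, so the extra lstrip() in the answer's code is harmless but redundant.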

For python - Scrapy CrawlSpider not following links, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30722486/
