python - Scrapy, recursive crawling with different XPathSelectors


Good evening, and thanks for your help.

I'm digging into Scrapy, and what I need is to pull information from a website and recreate the site's tree structure. Example:

books [
    python [
        first [
            title = 'Title'
            author = 'John Doe'
            price = '200'
        ]

        first [
            title = 'Other Title'
            author = 'Mary Doe'
            price = '100'
        ]
    ]

    php [
        first [
            title = 'PhpTitle'
            author = 'John Smith'
            price = '100'
        ]

        first [
            title = 'Php Other Title'
            author = 'Mary Smith'
            price = '300'
        ]
    ]
]

Following the tutorial, I have my basic spider working correctly:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from pippo.items import PippoItem

class PippoSpider(BaseSpider):
    name = "pippo"
    allowed_domains = ["www.books.net"]
    start_urls = [
        "http://www.books.net/index.php"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@id="28008_LeftPane"]/div/ul/li')
        items = []
        for site in sites:
            item = PippoItem()
            item['subject'] = site.select('a/b/text()').extract()
            item['link'] = site.select('a/@href').extract()
            items.append(item)
        return items
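
The PippoItem class imported above isn't shown in the question; a minimal pippo/items.py might look like the sketch below, assuming fields for the category level (subject, link) plus the book details (title, author, price) from the tree above.

from scrapy.item import Item, Field

class PippoItem(Item):
    # category-level fields used by the spider above
    subject = Field()
    link = Field()
    # book-level fields from the desired tree structure (assumed names)
    title = Field()
    author = Field()
    price = Field()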

My problem is that each level of my structure sits one level deeper in the site: once I get the book subjects I need at the base level, I have to crawl the corresponding item['link'] to fetch the remaining items. But at the next URL I will need a different HtmlXPathSelector to extract my data correctly, and so on until the end of the structure.

Could you help me get started and point me in the right direction? Thanks.

Best Answer

You need to request the links manually (see also CrawlSpider):

from urlparse import urljoin

from scrapy.http import Request
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

from pippo.items import PippoItem

class PippoSpider(BaseSpider):
    name = "pippo"
    allowed_domains = ["www.books.net"]
    start_urls = ["http://www.books.net/"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@id="28008_LeftPane"]/div/ul/li')

        for site in sites:
            item = PippoItem()
            item['subject'] = site.select('.//text()').extract()
            item['link'] = site.select('.//a/@href').extract()
            link = item['link'][0] if len(item['link']) else None
            if link:
                yield Request(urljoin(response.url, link),
                              callback=self.parse_link,
                              errback=lambda _: item,
                              meta=dict(item=item),
                              )
            else:
                yield item

    def parse_link(self, response):
        item = response.meta.get('item')
        item['alsothis'] = 'more data'
        return item
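
To actually apply a different XPath at the deeper level, parse_link builds its own selector for the category page. Below is a minimal sketch of what could replace the placeholder parse_link inside the spider class, assuming a made-up '//div[@class="book"]' layout on the second-level pages (the real markup isn't shown in the question):

def parse_link(self, response):
    # a new selector for the deeper page, with its own XPath expressions
    hxs = HtmlXPathSelector(response)
    category = response.meta['item']
    for book in hxs.select('//div[@class="book"]'):  # hypothetical markup
        item = PippoItem()
        item['subject'] = category['subject']
        item['title'] = book.select('.//h2/text()').extract()
        item['author'] = book.select('.//span[@class="author"]/text()').extract()
        item['price'] = book.select('.//span[@class="price"]/text()').extract()
        yield item

If the books themselves link to yet another level, parse_link can yield further Requests in the same way, passing the partially filled item along in meta.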

Regarding python - Scrapy, recursive crawling with different XPathSelectors, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/12397259/
