
python - How do I follow a list of links and scrape data from the target pages with Scrapy?

Reposted · Author: 行者123 · Updated: 2023-11-28 17:18:00

I have a web page to scrape. On the page there is a list of links inside a <table>. I am trying to use the rules section to get Scrapy to go through the links and collect the data on each link's target page. Below is my code:

class ToScrapeSpiderXPath(scrapy.Spider):
    name = 'coinmarketcap'
    start_urls = [
        'https://coinmarketcap.com/currencies/views/all/'
    ]

    rules = (
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//tr/td[2]/a/@href',)), callback="parse", follow=True),
    )

    def parse(self, response):
        print("TEST TEST TEST")
        BTC = BTCItem()
        BTC['source'] = str(response.request.url).split("/")[2]
        BTC['asset'] = str(response.request.url).split("/")[4],
        BTC['asset_price'] = response.xpath('//*[@id="quote_price"]/text()').extract(),
        BTC['asset_price_change'] = response.xpath('/html/body/div[2]/div/div[1]/div[3]/div[2]/span[2]/text()').extract(),
        BTC['BTC_price'] = response.xpath('/html/body/div[2]/div/div[1]/div[3]/div[2]/small[1]/text()').extract(),
        BTC['Prct_change'] = response.xpath('/html/body/div[2]/div/div[1]/div[3]/div[2]/small[2]/text()').extract()
        yield (BTC)

My problem is that Scrapy is not following the links: it just fetches the link pages without extracting any data from them. What am I missing?

Update #1: Why is almost everything "Crawled" but not "Scraped"?

2017-03-28 23:10:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://coinmarketcap.com/currencies/pivx/> (referer: None)
2017-03-28 23:10:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://coinmarketcap.com/currencies/zcash/> (referer: None)
2017-03-28 23:10:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://coinmarketcap.com/currencies/bitcoin/> (referer: None)
2017-03-28 23:10:33 [scrapy.core.scraper] DEBUG: Scraped from <200 https://coinmarketcap.com/currencies/nem/>

Best Answer

You need to subclass CrawlSpider for the link-extractor rules to work:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class ToScrapeSpiderXPath(CrawlSpider):
    name = 'coinmarketcap'
    start_urls = [
        'https://coinmarketcap.com/currencies/views/all/'
    ]

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//tr/td[2]/a'), callback="parse_table_links", follow=True),
    )

    def parse_table_links(self, response):
        print(response.url)

Note that you need to fix the restrict_xpaths value: it should point to the a element, not to the element's @href attribute. Also, you can define it as a string instead of a tuple. Finally, the callback is renamed to parse_table_links because a CrawlSpider uses the parse() method internally to drive its rules; overriding parse() is what breaks link following in your original spider.

Also, the allow argument is optional.
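As a side note, the trailing commas after most of the assignments in the question's parse() are themselves a bug: in Python, a trailing comma turns the right-hand side into a one-element tuple, so fields like BTC['asset'] end up as tuples rather than strings. A plain-Python illustration (the URL is just an example):

```python
url = "https://coinmarketcap.com/currencies/bitcoin/"

source = url.split("/")[2]   # no trailing comma -> plain string
asset = url.split("/")[4],   # trailing comma -> one-element tuple

print(source)  # coinmarketcap.com
print(asset)   # ('bitcoin',)
```

Dropping the trailing commas in parse() makes each item field a string or list, as intended.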

Regarding "python - How do I follow a list of links and scrape data from the target pages with Scrapy?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43083864/
