
python - Scrapy spider not following links and error


I am trying to write my first web crawler / data extractor using Scrapy, but I can't get it to follow links. I am also getting this error:

ERROR: Spider error processing <GET https://en.wikipedia.org/wiki/Wikipedia:Unusual_articles>

I know the spider is crawling the page once, because I was able to extract information from the a tags and h1 elements I was playing around with.

Does anyone know how I can make it follow the links on the page and get rid of the error?

import scrapy
from scrapy.linkextractors import LinkExtractor
from wikiCrawler.items import WikicrawlerItem
from scrapy.spiders import Rule


class WikispyderSpider(scrapy.Spider):
    name = "wikiSpyder"

    allowed_domains = ['https://en.wikipedia.org/']

    start_urls = ['https://en.wikipedia.org/wiki/Wikipedia:Unusual_articles']

    rules = (
        Rule(LinkExtractor(canonicalize=True, unique=True), follow=True, callback="parse"),
    )

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        items = []
        links = LinkExtractor(canonicalize=True, unique=True).extract_links(response)
        for link in links:
            item = WikicrawlerItem()
            item['url_from'] = response.url
            item['url_to'] = link.url
            items.append(item)
        print(items)
        return items

Best answer

If you want to use link extractors through rules, you need a special spider class - CrawlSpider; the plain scrapy.Spider ignores the rules attribute entirely:

from scrapy.spiders import CrawlSpider

class WikispyderSpider(CrawlSpider):
    # ...

Here is a simple spider that follows the links from your start URL and prints out the page titles. Two fixes are worth noting: allowed_domains must contain bare domains, not URLs, and the callback is renamed to parse_link, because CrawlSpider uses the parse method internally to drive its rules, so overriding parse breaks link following:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider

from scrapy.spiders import Rule


class WikispyderSpider(CrawlSpider):
    name = "wikiSpyder"

    allowed_domains = ['en.wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/Wikipedia:Unusual_articles']

    rules = (
        Rule(LinkExtractor(canonicalize=True, unique=True), follow=True, callback="parse_link"),
    )

    def parse_link(self, response):
        print(response.xpath("//title/text()").extract_first())
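
If you also want to keep the url_from / url_to items from your original spider, you can yield them from the rule callback instead of printing titles. A minimal sketch under the same setup, assuming the WikicrawlerItem (with url_from and url_to fields) from your wikiCrawler.items module; the spider name here is hypothetical, chosen to avoid clashing with the one above:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from wikiCrawler.items import WikicrawlerItem  # item class from the question


class WikiLinkSpider(CrawlSpider):
    name = "wikiLinkSpyder"  # hypothetical name, not from the original post

    allowed_domains = ['en.wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/Wikipedia:Unusual_articles']

    rules = (
        Rule(LinkExtractor(canonicalize=True, unique=True), follow=True, callback="parse_link"),
    )

    def parse_link(self, response):
        # Record one item per outgoing link on each followed page.
        for link in LinkExtractor(canonicalize=True, unique=True).extract_links(response):
            item = WikicrawlerItem()
            item['url_from'] = response.url
            item['url_to'] = link.url
            yield item

Yielding items one by one lets Scrapy stream them through its item pipeline, and letting the rule schedule requests (rather than start_requests with dont_filter=True) keeps the built-in duplicate filter working.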

Regarding python - Scrapy spider not following links and error, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43083353/
