
python - Getting all URLs of an entire website with Scrapy


Hi everyone! I'm trying to get all internal URLs of an entire website for SEO purposes, and I recently discovered that Scrapy could help me with this task. But my code always returns an error:

2017-10-11 10:32:00 [scrapy.core.engine] INFO: Spider opened
2017-10-11 10:32:00 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-11 10:32:00 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-10-11 10:32:01 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.**test**.com/> from <GET http://www.**test**.com/robots.txt>
2017-10-11 10:32:02 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.**test**.com/> (referer: None)
2017-10-11 10:32:03 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.**test**.com/> from <GET http://www.**test**.com>
2017-10-11 10:32:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.**test**.com/> (referer: None)
2017-10-11 10:32:03 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.**test**.com/> (referer: None)
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "c:\python27\lib\site-packages\scrapy\spiders\__init__.py", line 90, in parse
    raise NotImplementedError
NotImplementedError

I have replaced the original URLs.

Here is the code I'm running:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class TestSpider(scrapy.Spider):
    name = "test"
    allowed_domains = ["http://www.test.com"]
    start_urls = ["http://www.test.com"]

    rules = [Rule(LinkExtractor(allow=['.*']))]

Thanks!

Edit:

This worked for me:

rules = (
    Rule(LinkExtractor(), callback='parse_item', follow=True),
)

def parse_item(self, response):
    filename = response.url
    arquivo = open("file.txt", "a")
    string = str(filename)
    arquivo.write(string + '\n')
    arquivo.close()

=D
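For what it's worth, the same callback can also be written with a context manager so the file handle is closed even if the write fails (a small sketch, not part of the original post):

def parse_item(self, response):
    # Append each visited URL to file.txt; the with-block closes the file automatically.
    with open("file.txt", "a") as arquivo:
        arquivo.write(response.url + '\n')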

Best Answer

The error you're getting is caused by the fact that you haven't defined a parse method in your spider, which is mandatory if you base your spider on the scrapy.Spider class.

For your purpose (i.e. crawling the whole website), it's better to base your spider on the scrapy.CrawlSpider class. Also, in the Rule, you have to set the callback attribute to the method that will parse every page you visit. One last cosmetic change: in the LinkExtractor you can omit allow if you want to visit every page, since its default value is an empty tuple, which means it will match every link found.

See the CrawlSpider example in the Scrapy documentation for concrete code.
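Putting that advice together, a minimal sketch might look like the following (assuming the placeholder domain www.test.com from the question; the spider name and callback name are illustrative):

# Sketch only: a CrawlSpider that follows every internal link and yields each URL.
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class SiteUrlSpider(CrawlSpider):
    name = "site_urls"
    # allowed_domains takes bare domain names, not URLs with a scheme.
    allowed_domains = ["www.test.com"]
    start_urls = ["http://www.test.com"]

    # No allow pattern needed: by default the LinkExtractor matches every link found.
    rules = (
        Rule(LinkExtractor(), callback="parse_item", follow=True),
    )

    def parse_item(self, response):
        # Yield each crawled URL as an item; export with e.g. `scrapy crawl site_urls -o urls.csv`.
        yield {"url": response.url}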

Regarding "python - Getting all URLs of an entire website with Scrapy", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46689783/
