python - Passing an argument to allowed_domains in Scrapy


I am creating a scraping tool that takes user input and crawls all the links on a site. However, I need to restrict the crawling and link extraction to links from that domain only, not external domains. I have gotten the crawler itself to where I need it. My problem is that for my allowed_domains I cannot seem to pass in the Scrapy option entered on the command line. Below is the first script to run:

# First Script
import os

def userInput():
    user_input = raw_input("Please enter URL. Please do not include http://: ")
    os.system("scrapy runspider -a user_input='http://" + user_input + "' crawler_prod.py")

userInput()
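
For reference, with an input of example.com (an illustrative placeholder), the os.system call above ends up building a command like this:

scrapy runspider -a user_input='http://example.com' crawler_prod.py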

The script it runs is the crawler, which crawls the given domain. Here is the crawler code:

#Crawler
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import Request
from scrapy.http import Request

class InputSpider(CrawlSpider):
    name = "Input"
    #allowed_domains = ["example.com"]

    def allowed_domains(self):
        self.allowed_domains = user_input

    def start_requests(self):
        yield Request(url=self.user_input)

    rules = [
        Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')
    ]

    def parse_item(self, response):
        x = HtmlXPathSelector(response)
        filename = "output.txt"
        open(filename, 'ab').write(response.url + "\n")

I have tried yielding a request built from what is passed in on the terminal command, but that crashes the crawler. The way I have it now also crashes the crawler. I have also tried simply setting allowed_domains=[user_input], and it reports that user_input is not defined. I have been playing with Scrapy's Request to get this working, with no luck. Is there a better way to keep the crawl from going outside the given domain?

Edit:

Here is my new code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from scrapy.spiders import BaseSpider
from scrapy import Request
from scrapy.http import Request
from scrapy.utils.httpobj import urlparse
#from run_first import *

class InputSpider(CrawlSpider):
    name = "Input"
    #allowed_domains = ["example.com"]

    #def allowed_domains(self):
    #    self.allowed_domains = user_input

    #def start_requests(self):
    #    yield Request(url=self.user_input)

    def __init__(self, *args, **kwargs):
        inputs = kwargs.get('urls', '').split(',') or []
        self.allowed_domains = [urlparse(d).netloc for d in inputs]
        # self.start_urls = [urlparse(c).netloc for c in inputs] # For start_urls

    rules = [
        Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')
    ]

    def parse_item(self, response):
        x = HtmlXPathSelector(response)
        filename = "output.txt"
        open(filename, 'ab').write(response.url + "\n")

And here is the output log from the new code:

2017-04-18 18:18:01 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2017-04-18 18:18:01 [scrapy] INFO: Optional features available: ssl, http11, boto
2017-04-18 18:18:01 [scrapy] INFO: Overridden settings: {'LOG_FILE': 'output.log'}
2017-04-18 18:18:43 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2017-04-18 18:18:43 [scrapy] INFO: Optional features available: ssl, http11, boto
2017-04-18 18:18:43 [scrapy] INFO: Overridden settings: {'LOG_FILE': 'output.log'}
2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:1: ScrapyDeprecationWarning: Module `scrapy.contrib.spiders` is deprecated, use `scrapy.spiders` instead
from scrapy.contrib.spiders import CrawlSpider, Rule

2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:2: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors` is deprecated, use `scrapy.linkextractors` instead
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:2: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors.sgml` is deprecated, use `scrapy.linkextractors.sgml` instead
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:27: ScrapyDeprecationWarning: SgmlLinkExtractor is deprecated and will be removed in future releases. Please use scrapy.linkextractors.LinkExtractor
Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')

2017-04-18 18:18:43 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2017-04-18 18:18:43 [boto] DEBUG: Retrieving credentials from metadata server.
2017-04-18 18:18:44 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/usr/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2017-04-18 18:18:44 [boto] ERROR: Unable to read instance data, giving up
2017-04-18 18:18:44 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2017-04-18 18:18:44 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2017-04-18 18:18:44 [scrapy] INFO: Enabled item pipelines:
2017-04-18 18:18:44 [scrapy] INFO: Spider opened
2017-04-18 18:18:44 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-18 18:18:44 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-18 18:18:44 [scrapy] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/scrapy/core/engine.py", line 110, in _next_request
    request = next(slot.start_requests)
  File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 70, in start_requests
    yield self.make_requests_from_url(url)
  File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 73, in make_requests_from_url
    return Request(url, dont_filter=True)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 24, in __init__
    self._set_url(url)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 59, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url:
2017-04-18 18:18:44 [scrapy] INFO: Closing spider (finished)
2017-04-18 18:18:44 [scrapy] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 4, 18, 22, 18, 44, 794155),
'log_count/DEBUG': 2,
'log_count/ERROR': 3,
'log_count/INFO': 7,
'start_time': datetime.datetime(2017, 4, 18, 22, 18, 44, 790331)}
2017-04-18 18:18:44 [scrapy] INFO: Spider closed (finished)

Edit:

By looking at the answer and rereading the documentation, I was able to figure out my problem. Below is what I added to the crawler script to get it working.

def __init__(self, url=None, *args, **kwargs):
    super(InputSpider, self).__init__(*args, **kwargs)
    self.allowed_domains = [url]
    self.start_urls = ["http://" + url]
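
With that change the spider takes a bare domain as its url argument. Assuming the file is still named crawler_prod.py (as in the log above), it can be launched like this, with example.com standing in for whatever the user typed:

scrapy runspider -a url=example.com crawler_prod.py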

Best Answer

There are a few things you are missing here:

  1. The first request, made from start_urls, is not filtered.
  2. You cannot override allowed_domains once the run has started.

To deal with these issues, you need to write your own offsite middleware, or at least modify the existing one with the changes you need.

The OffsiteMiddleware that handles allowed_domains converts the allowed_domains value into a regex string once the spider is opened, and after that the attribute is never used again.

Add something like this to your middlewares.py:

from scrapy.spidermiddlewares.offsite import OffsiteMiddleware
from scrapy.utils.httpobj import urlparse_cached

class MyOffsiteMiddleware(OffsiteMiddleware):

    def should_follow(self, request, spider):
        """Return bool whether to follow a request"""
        # hostname can be None for wrong urls (like javascript links)
        host = urlparse_cached(request).hostname or ''
        if host in spider.allowed_domains:
            return True
        return False

Activate it in settings.py:

SPIDER_MIDDLEWARES = {
    # enable our middleware
    'myspider.middlewares.MyOffsiteMiddleware': 500,
    # disable old middleware
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None,
}

Now your spider should follow whatever is in allowed_domains, even if you modify it mid-run.
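
As a minimal sketch of what modifying it mid-run could look like (extra_domain is purely illustrative, and this assumes the custom middleware above is active and that Request is imported in the spider module, since the stock middleware ignores changes made after the spider is opened):

def parse_item(self, response):
    # Hypothetical example: widen the crawl to an extra domain discovered at runtime.
    # With MyOffsiteMiddleware this takes effect immediately, because should_follow()
    # re-checks spider.allowed_domains on every request.
    extra_domain = 'cdn.example.com'  # illustrative placeholder
    if extra_domain not in self.allowed_domains:
        self.allowed_domains.append(extra_domain)
    yield Request('http://' + extra_domain + '/', callback=self.parse_item)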

Edit: for your case:

from scrapy.utils.httpobj import urlparse

class MySpider(Spider):
    def __init__(self, *args, **kwargs):
        input = kwargs.get('urls', '').split(',') or []
        self.allowed_domains = [urlparse(d).netloc for d in input]

Now you can run:

scrapy crawl myspider -a "urls=foo.com,bar.com"
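
One caveat worth noting, and it lines up with the asker's final fix of prepending "http://": urlparse only populates netloc when the URL carries a scheme, so a bare domain like foo.com comes back as an empty string, and a start URL built from that empty value would produce a "Missing scheme in request url" error like the one in the log. A quick check with the standard urlparse (Python 2 here; urllib.parse in Python 3):

from urlparse import urlparse  # urllib.parse in Python 3

print(urlparse("foo.com").netloc)         # '' -- no scheme, so netloc stays empty
print(urlparse("http://foo.com").netloc)  # 'foo.com'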

Regarding python - Passing an argument to allowed_domains in Scrapy, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43335638/
