
python - Include image src in LinkExtractor Scrapy CrawlSpider


I am working on scraping a website; I use Scrapy's LinkExtractor to crawl links and check their response status.

In addition, I also want to use the link extractor to get image sources from the site. I have code that works for the site's page URLs, but I can't seem to get the images: nothing is ever printed to the console.

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

# Assumed import; adjust to your project's actual items module.
from myproject.items import LinkcheckerItem


class LinkCheckerSpider(CrawlSpider):  # class and spider name are placeholders for the original fragment
    name = 'linkchecker'

    handle_httpstatus_list = [404, 502]
    # allowed_domains = ['mydomain']

    start_urls = ['http://somedomain.com/']  # start URLs need an explicit scheme

    http_user = '###'
    http_pass = '#####'

    rules = (
        Rule(LinkExtractor(allow=('domain.com',), canonicalize=True, unique=True),
             process_links='filter_links', follow=False, callback='parse_local_link'),
        Rule(LinkExtractor(allow=('cdn.domain.com',), tags=('img',), attrs=('src',),
                           canonicalize=True, unique=True),
             follow=False, callback='parse_image_link'),
    )

    def filter_links(self, links):
        # The body was truncated in the original post; a pass-through
        # keeps the rule working until real filtering is added.
        return links

    def parse_local_link(self, response):
        if response.status != 200:
            item = LinkcheckerItem()
            item['url'] = response.url
            item['status'] = response.status
            item['link_type'] = 'local'
            item['referer'] = response.request.headers.get('Referer', None)
            yield item

    def parse_image_link(self, response):
        print("Got image link")  # Python 3 print; the original used Python 2 syntax
        if response.status != 200:
            item = LinkcheckerItem()
            item['url'] = response.url
            item['status'] = response.status
            item['link_type'] = 'img'
            item['referer'] = response.request.headers.get('Referer', None)
            yield item

Best answer

If anyone is interested in keeping the CrawlSpider with LinkExtractor approach, just add the deny_extensions kwarg, i.e. replace:

    Rule(LinkExtractor(allow=('cdn.domain.com',), tags=('img',), attrs=('src',), canonicalize=True, unique=True), follow=False, callback='parse_image_link'),

with:

    Rule(LinkExtractor(allow=('cdn.domain.com',), deny_extensions=set(), tags=('img',), attrs=('src',), canonicalize=True, unique=True), follow=False, callback='parse_image_link'),

When this parameter is not set, it defaults to scrapy.linkextractors.IGNORED_EXTENSIONS, which contains extensions such as jpeg and png. This means the link extractor skips any link ending in one of those extensions, which is why the image rule never fires.
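To see the effect in isolation, here is a minimal sketch that runs both extractors against a fabricated response (domain.com and cdn.domain.com are placeholder hosts taken from the question); the default extractor drops the .png link, while the one with deny_extensions=set() returns it:

    from scrapy.http import HtmlResponse
    from scrapy.linkextractors import LinkExtractor, IGNORED_EXTENSIONS

    # The default filter list covers common binary/media extensions.
    print('png' in IGNORED_EXTENSIONS)  # True

    html = b'<html><body><img src="http://cdn.domain.com/logo.png"></body></html>'
    response = HtmlResponse(url='http://domain.com/', body=html, encoding='utf-8')

    # Default settings: the .png URL is dropped by IGNORED_EXTENSIONS.
    default_extractor = LinkExtractor(allow=('cdn.domain.com',), tags=('img',), attrs=('src',))
    print([link.url for link in default_extractor.extract_links(response)])  # []

    # deny_extensions=set() disables the extension filter, so the image URL is kept.
    image_extractor = LinkExtractor(allow=('cdn.domain.com',), deny_extensions=set(),
                                    tags=('img',), attrs=('src',))
    print([link.url for link in image_extractor.extract_links(response)])
    # ['http://cdn.domain.com/logo.png']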

Regarding python - Include image src in LinkExtractor Scrapy CrawlSpider, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47283619/
