
Python + Scrapy : Issues running "ImagesPipeline" when running crawler from script

Reposted. Author: 太空宇宙. Updated: 2023-11-03 20:29:15

I'm new to Python, so apologies if there's a silly mistake here... I've been searching the web for days, looking at similar questions and combing through the Scrapy documentation, but nothing seems to actually solve this for me...

I have a Scrapy project that successfully scrapes the source website, returns the required items, and then uses the ImagesPipeline to download (and then rename accordingly) the images from the returned image links... but only when I run it from the terminal with "runspider".

Whenever I run the spider with "crawl" from the terminal, or with CrawlerProcess from within a script, it returns the items but does not download the images, and I think it skips the ImagesPipeline entirely.

I've read that I need to import my settings when running this way so that the pipeline loads correctly, which made sense after researching the differences between "crawl" and "runspider", but I still can't get the pipeline to work.

There is no error message, but I did notice that it returns "[scrapy.middleware] INFO: Enabled item pipelines: []" ... which I assume indicates that it is still missing my pipeline?

Here is my spider.py:

import scrapy
from scrapy2.items import Scrapy2Item
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

class spider1(scrapy.Spider):
    name = "spider1"
    domain = "https://www.amazon.ca/s?k=821826022317"

    def start_requests(self):
        yield scrapy.Request(url=spider1.domain, callback=self.parse)

    def parse(self, response):
        items = Scrapy2Item()

        titlevar = response.css('span.a-text-normal ::text').extract_first()
        imgvar = [response.css('img ::attr(src)').extract_first()]
        skuvar = response.xpath('//meta[@name="keywords"]/@content')[0].extract()

        items['title'] = titlevar
        items['image_urls'] = imgvar
        items['sku'] = skuvar

        yield items

process = CrawlerProcess(get_project_settings())
process.crawl(spider1)
process.start()

Here is my items.py:

import scrapy

class Scrapy2Item(scrapy.Item):
    title = scrapy.Field()
    image_urls = scrapy.Field()
    sku = scrapy.Field()

Here is my pipelines.py:

import scrapy
from scrapy.pipelines.images import ImagesPipeline

class Scrapy2Pipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        return [scrapy.Request(x, meta={'image_name': item['sku']})
                for x in item.get('image_urls', [])]

    def file_path(self, request, response=None, info=None):
        return '%s.jpg' % request.meta['image_name']

Here is my settings.py:

BOT_NAME = 'scrapy2'

SPIDER_MODULES = ['scrapy2.spiders']
NEWSPIDER_MODULE = 'scrapy2.spiders'

ROBOTSTXT_OBEY = True

ITEM_PIPELINES = {
    'scrapy2.pipelines.Scrapy2Pipeline': 1,
}

IMAGES_STORE = 'images'

Thanks to anyone who looks at this, or even tries to help me. It's greatly appreciated.

Best Answer

Since you are running the spider as a script, there is no Scrapy project environment, so get_project_settings will not work (beyond fetching the default settings). The script has to be self-contained, i.e. contain everything needed to run the spider (or import it from the Python search path, like any regular old Python code).
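A side effect of moving everything into one script is that the pipeline's dotted path changes: Scrapy resolves each ITEM_PIPELINES key by importing it as a dotted path (its internal helper for this is scrapy.utils.misc.load_object), and a class defined in the running script lives in the __main__ module. A minimal stdlib-only sketch of that resolution step (a simplified stand-in, not Scrapy's actual implementation):

    from importlib import import_module

    def load_object(path):
        """Resolve a dotted path such as 'scrapy2.pipelines.Scrapy2Pipeline'
        into the object it names, roughly the way Scrapy does internally."""
        module_path, _, name = path.rpartition('.')
        return getattr(import_module(module_path), name)

This is why the standalone script below registers the pipeline as '__main__.Scrapy2Pipeline' instead of 'scrapy2.pipelines.Scrapy2Pipeline'.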

I have reformatted that code for you so that it runs when you execute it with a plain Python interpreter: python3 script.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import scrapy
from scrapy.pipelines.images import ImagesPipeline

BOT_NAME = 'scrapy2'
ROBOTSTXT_OBEY = True
IMAGES_STORE = 'images'


class Scrapy2Item(scrapy.Item):
    title = scrapy.Field()
    image_urls = scrapy.Field()
    sku = scrapy.Field()


class Scrapy2Pipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        return [scrapy.Request(x, meta={'image_name': item['sku']})
                for x in item.get('image_urls', [])]

    def file_path(self, request, response=None, info=None):
        return '%s.jpg' % request.meta['image_name']


class spider1(scrapy.Spider):
    name = "spider1"
    domain = "https://www.amazon.ca/s?k=821826022317"

    def start_requests(self):
        yield scrapy.Request(url=spider1.domain, callback=self.parse)

    def parse(self, response):
        items = Scrapy2Item()

        titlevar = response.css('span.a-text-normal ::text').extract_first()
        imgvar = [response.css('img ::attr(src)').extract_first()]
        skuvar = response.xpath('//meta[@name="keywords"]/@content')[0].extract()

        items['title'] = titlevar
        items['image_urls'] = imgvar
        items['sku'] = skuvar

        yield items


if __name__ == "__main__":
    from scrapy.crawler import CrawlerProcess
    from scrapy.settings import Settings

    settings = Settings(values={
        'BOT_NAME': BOT_NAME,
        'ROBOTSTXT_OBEY': ROBOTSTXT_OBEY,
        'ITEM_PIPELINES': {
            '__main__.Scrapy2Pipeline': 1,
        },
        'IMAGES_STORE': IMAGES_STORE,
        'TELNETCONSOLE_ENABLED': False,
    })

    process = CrawlerProcess(settings=settings)
    process.crawl(spider1)
    process.start()
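As a variation (not part of the original answer, just a common Scrapy idiom): instead of building a Settings object by hand, per-spider settings can be declared on the spider class via the custom_settings class attribute, which CrawlerProcess merges in when the crawl starts. A hedged sketch, using a hypothetical variant of spider1:

    import scrapy

    class Spider1WithSettings(scrapy.Spider):
        # Hypothetical variant: the settings travel with the spider class,
        # so CrawlerProcess() needs no separate Settings object.
        name = "spider1"
        custom_settings = {
            'ITEM_PIPELINES': {'__main__.Scrapy2Pipeline': 1},
            'IMAGES_STORE': 'images',
            'ROBOTSTXT_OBEY': True,
        }

Either approach works; custom_settings keeps everything the spider needs in one place, while an explicit Settings object is easier to share across several spiders in one process.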

Regarding Python + Scrapy: Issues running "ImagesPipeline" when running crawler from script, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57616611/
