
python - How to pass arguments to a scrapy spider programmatically?


I'm new to Python and Scrapy. I used the approach from the blog post Running multiple scrapy spiders programmatically to run my spiders inside a Flask application. Here is the code:

from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings

# DmozSpider, EPGDspider and GDSpider are spider classes defined in my project

# list of crawlers
TO_CRAWL = [DmozSpider, EPGDspider, GDSpider]

# crawlers that are running
RUNNING_CRAWLERS = []

def spider_closing(spider):
    """
    Activates on spider closed signal
    """
    log.msg("Spider closed: %s" % spider, level=log.INFO)
    RUNNING_CRAWLERS.remove(spider)
    if not RUNNING_CRAWLERS:
        reactor.stop()

# start logger
log.start(loglevel=log.DEBUG)

# set up the crawler and start to crawl one spider at a time
for spider in TO_CRAWL:
    settings = Settings()

    # crawl responsibly
    settings.set("USER_AGENT", "Kiran Koduru (+http://kirankoduru.github.io)")
    crawler = Crawler(settings)
    crawler_obj = spider()
    RUNNING_CRAWLERS.append(crawler_obj)

    # stop reactor when spider closes
    crawler.signals.connect(spider_closing, signal=signals.spider_closed)
    crawler.configure()
    crawler.crawl(crawler_obj)
    crawler.start()

# blocks process; so always keep as the last statement
reactor.run()

And here is my spider's code:

import scrapy
from scrapy.selector import Selector
from scrapy.http import Request
# EPGD is the item class defined in this project's items.py

class EPGDspider(scrapy.Spider):
    name = "EPGD"
    allowed_domains = ["epgd.biosino.org"]
    term = "man"
    start_urls = ["http://epgd.biosino.org/EPGD/search/textsearch.jsp?textquery="+term+"&submit=Feeling+Lucky"]
    MONGODB_DB = name + "_" + term
    MONGODB_COLLECTION = name + "_" + term

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//tr[@class="odd"]|//tr[@class="even"]')
        url_list = []
        base_url = "http://epgd.biosino.org/EPGD"

        for site in sites:
            item = EPGD()
            item['genID'] = map(unicode.strip, site.xpath('td[1]/a/text()').extract())
            item['genID_url'] = base_url+map(unicode.strip, site.xpath('td[1]/a/@href').extract())[0][2:]
            item['taxID'] = map(unicode.strip, site.xpath('td[2]/a/text()').extract())
            item['taxID_url'] = map(unicode.strip, site.xpath('td[2]/a/@href').extract())
            item['familyID'] = map(unicode.strip, site.xpath('td[3]/a/text()').extract())
            item['familyID_url'] = base_url+map(unicode.strip, site.xpath('td[3]/a/@href').extract())[0][2:]
            item['chromosome'] = map(unicode.strip, site.xpath('td[4]/text()').extract())
            item['symbol'] = map(unicode.strip, site.xpath('td[5]/text()').extract())
            item['description'] = map(unicode.strip, site.xpath('td[6]/text()').extract())
            yield item

        sel_tmp = Selector(response)
        link = sel_tmp.xpath('//span[@id="quickPage"]')

        for site in link:
            url_list.append(site.xpath('a/@href').extract())

        for i in range(len(url_list[0])):
            if cmp(url_list[0][i], "#") == 0:
                if i+1 < len(url_list[0]):
                    print url_list[0][i+1]
                    actual_url = "http://epgd.biosino.org/EPGD/search/" + url_list[0][i+1]
                    yield Request(actual_url, callback=self.parse)
                    break
                else:
                    print "The index is out of range!"

As you can see, there is a parameter term = 'man' in my code, and it is part of my start URL. I don't want this parameter to be hard-coded, so I'm wondering how I can set the start URL or the parameter term dynamically in my program. Just like when running a spider from the command line, there is a way to pass arguments, like this:

class MySpider(BaseSpider):

    name = 'my_spider'

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)

        self.start_urls = [kwargs.get('start_url')]

And start it like: scrapy crawl my_spider -a start_url="http://some_url"

Can anyone tell me how to deal with this?

Best Answer

First of all, to run multiple spiders in one script, the recommended way is to use scrapy.crawler.CrawlerProcess, where you pass spider classes rather than spider instances.

To pass arguments to your spider with CrawlerProcess, you simply add the arguments to the .crawl() call, after the spider subclass, e.g.:

    process.crawl(DmozSpider, term='someterm', someotherterm='anotherterm')

Arguments passed in this way are then available as spider attributes (the same as with -a term=someterm on the command line).
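
Putting it together, here is a minimal sketch of the CrawlerProcess approach. The user agent string is a placeholder, and the spider classes are assumed to be importable from your own project:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    'USER_AGENT': 'my-crawler (+http://example.com)',  # placeholder value
})

# pass the spider classes themselves, plus any per-spider arguments
process.crawl(DmozSpider, term='someterm')
process.crawl(EPGDspider, term='man')

process.start()  # blocks here until all crawls are finished

Unlike the reactor-based snippet in the question, CrawlerProcess handles starting and stopping the Twisted reactor for you.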

Finally, instead of building start_urls in __init__, you can achieve the same thing with start_requests, where you build the initial request using self.term, like this:

def start_requests(self):
    yield Request("http://epgd.biosino.org/"
                  "EPGD/search/textsearch.jsp?"
                  "textquery={}"
                  "&submit=Feeling+Lucky".format(self.term))

About python - How to pass arguments to a scrapy spider programmatically?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36689190/
