
python - Scrapy start_urls


The script below, from this tutorial, contains two start_urls:

from scrapy.spider import Spider
from scrapy.selector import Selector

from dirbot.items import Website


class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        """
        The lines below define a spider contract. For more info see:
        http://doc.scrapy.org/en/latest/topics/contracts.html
        @url http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/
        @scrapes name
        """
        sel = Selector(response)
        sites = sel.xpath('//ul[@class="directory-url"]/li')
        items = []

        for site in sites:
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            item['description'] = site.xpath('text()').re(r'-\s[^\n]*\r')
            items.append(item)

        return items

But why does it only scrape these 2 web pages? I see allowed_domains = ["dmoz.org"], but the two pages also contain links to other pages within the dmoz.org domain! Why doesn't it scrape those as well?

Best Answer

The start_urls class attribute contains the start URLs and nothing more. If you have extracted URLs of other pages you want to scrape, yield the corresponding requests from the parse callback, each with another callback:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class Spider(BaseSpider):
    name = 'my_spider'
    start_urls = [
        'http://www.domain.com/'
    ]
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse the main page and extract the category links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse a category page and extract the links to its items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        ...
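
Applied to the DmozSpider from the question, the same technique would look roughly like the sketch below. This is a minimal illustration, not part of the original answer: it yields a Request for every link found on the page and relies on Scrapy's offsite middleware to drop requests to domains outside allowed_domains.

import urlparse

from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.spider import Spider

from dirbot.items import Website


class DmozFollowingSpider(Spider):
    # Hypothetical name, used only for this illustration.
    name = "dmoz_following"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
    ]

    def parse(self, response):
        sel = Selector(response)

        # Yield an item per listed site, as the tutorial spider does.
        for site in sel.xpath('//ul[@class="directory-url"]/li'):
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            yield item

        # Additionally follow every link on the page; requests leading
        # outside allowed_domains are filtered by the offsite middleware.
        for href in sel.xpath('//a/@href').extract():
            yield Request(urlparse.urljoin(response.url, href), callback=self.parse)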

If you still want to customize how the start requests are created, override the BaseSpider.start_requests() method.
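
For example, a minimal sketch of such an override (the spider name and the paginated URL scheme are hypothetical, added only for illustration):

from scrapy.http import Request
from scrapy.spider import BaseSpider


class CustomStartSpider(BaseSpider):
    name = 'custom_start'  # hypothetical spider name
    allowed_domains = ['domain.com']

    def start_requests(self):
        # Generate the initial requests programmatically instead of
        # listing them in start_urls.
        for page in range(1, 4):
            # hypothetical paginated URL scheme
            url = 'http://www.domain.com/list?page=%d' % page
            yield Request(url, callback=self.parse)

    def parse(self, response):
        ...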

Regarding python - Scrapy start_urls, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/8903730/
