
python - Scrapy CrawlSpider for AJAX content


I am trying to scrape a site for news articles. My start_url contains:

(1) links to each article: http://example.com/symbol/TSLA

(2) a "More" button that makes an AJAX call to dynamically load more articles into the same start_url: http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0&slugs=tsla&is_symbol_page=true

A parameter for the AJAX call is "page", which is incremented each time the "More" button is clicked. For example, clicking "More" once will load an additional n articles and update the page parameter in the "More" button's onClick event, so that the next time "More" is clicked, "page" two of articles will be loaded (assuming "page" 0 was loaded initially and "page" 1 was loaded on the first click).

For each "page" I would like to scrape the contents of each article using Rules, but I don't know how many "pages" there are, and I don't want to pick some arbitrary m (e.g., 10k). I can't seem to figure out how to set this up.

From this question, Scrapy Crawl URLs in Order, I have tried to create a URL list of potential URLs, but I can't determine how and where to send a new URL from the pool after parsing the previous URL and verifying that it contains news links for the CrawlSpider. My Rules send responses to a parse_item callback, where the article contents are parsed.

Is there a way to observe the contents of the linked page (similar to the BaseSpider example) before applying the Rules and calling parse_item, so that I know when to stop crawling?
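For reference, CrawlSpider does provide a parse_start_url() hook that is called on each start_urls response before the rules are applied, which is one place to inspect page content. A minimal sketch; the spider name and the "no results" marker string here are illustrative assumptions, not part of the actual site:

from scrapy.contrib.spiders import CrawlSpider


class PeekSpider(CrawlSpider):
    name = "peek"
    start_urls = ['http://example.com/symbol/tsla']

    def parse_start_url(self, response):
        # CrawlSpider calls this for each start_urls response before
        # applying the rules, so the body can be inspected here.
        if 'no results' in response.body:  # hypothetical marker text
            self.log("start page reports no results")
        return []  # nothing extra to yield; the rules still run afterwards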

Simplified code (I have removed several of the fields being parsed, for clarity):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy import log

from myproject.items import NewsItem  # assumed project import; NewsItem lives in items.py


class ExampleSite(CrawlSpider):

    name = "so"
    download_delay = 2

    more_pages = True
    current_page = 0

    allowed_domains = ['example.com']

    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                  '&slugs=tsla&is_symbol_page=true']

    ## could also use
    ## start_urls = ['http://example.com/symbol/tsla']

    ajax_urls = []
    for i in range(1, 1000):
        ajax_urls.append('http://example.com/account/ajax_headlines_content?type=in_focus_articles&page='+str(i)+
                         '&slugs=tsla&is_symbol_page=true')

    rules = (
        Rule(SgmlLinkExtractor(allow=('/symbol/tsla', ))),
        Rule(SgmlLinkExtractor(allow=('/news-article.*tesla.*', '/article.*tesla.*', )), callback='parse_item'),
    )

    ## need something like this??
    ## override parse?
    ## if response.body == 'no results':
    ##     self.more_pages = False
    ##     ## stop crawler??
    ## else:
    ##     self.current_page = self.current_page + 1
    ##     yield Request(self.ajax_urls[self.current_page], callback=self.parse_start_url)

    def parse_item(self, response):
        self.log("Scraping: %s" % response.url, level=log.INFO)

        hxs = Selector(response)

        item = NewsItem()
        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()').extract()
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()').extract()

        yield item

Best Answer

CrawlSpider is probably too limited for your purposes here. If you need a lot of custom logic, you are usually better off inheriting from Spider. (Note that CrawlSpider uses the parse method internally to implement its rules, so you cannot simply override parse as your commented-out sketch suggests.)

Scrapy provides the CloseSpider exception, which you can raise when you need to stop parsing under certain conditions. The page you are crawling returns the message "There are no Focus articles on your stocks" when you go past the last page, so you can check for this message and stop iterating when it appears.

In your case you can go with something like the following:

from urlparse import urljoin

from scrapy import log
from scrapy.spider import Spider
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.exceptions import CloseSpider

from myproject.items import NewsItem  # assumed project import (see the item sketch below)


class ExampleSite(Spider):
    name = "so"
    download_delay = 0.1

    more_pages = True
    next_page = 0  # page 0 is already covered by start_urls

    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                  '&slugs=tsla&is_symbol_page=true']

    allowed_domains = ['example.com']

    def create_ajax_request(self, page_number):
        """
        Helper function to create the ajax request for the next page.
        """
        ajax_template = 'http://example.com/account/ajax_headlines_content?type=in_focus_articles&page={pagenum}&slugs=tsla&is_symbol_page=true'

        url = ajax_template.format(pagenum=page_number)
        return Request(url, callback=self.parse)

    def parse(self, response):
        """
        Parsing of each page.
        """
        if "There are no Focus articles on your stocks." in response.body:
            self.log("About to close spider", level=log.WARNING)
            raise CloseSpider(reason="no more pages to parse")

        # there is some content; extract links to articles
        sel = Selector(response)
        links_xpath = "//div[@class='symbol_article']/a/@href"
        links = sel.xpath(links_xpath).extract()
        for link in links:
            url = urljoin(response.url, link)
            # follow link to article
            # commented out to see how pagination works
            #yield Request(url, callback=self.parse_item)

        # generate the request for the next page
        self.next_page += 1
        yield self.create_ajax_request(self.next_page)

    def parse_item(self, response):
        """
        Parsing of each article page.
        """
        self.log("Scraping: %s" % response.url, level=log.INFO)

        hxs = Selector(response)

        item = NewsItem()
        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()').extract()
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()').extract()

        yield item
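The NewsItem referenced in both spiders is never shown in the question. A minimal sketch of what it might look like in the project's items.py, covering just the fields populated above:

from scrapy.item import Item, Field


class NewsItem(Item):
    # only the fields that parse_item populates
    url = Field()
    source = Field()
    title = Field()
    date = Field()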

Regarding "python - Scrapy CrawlSpider for AJAX content", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23706111/
