
python - Is there a way to get the text inside an anchor tag in Scrapy's CrawlSpider?

Reposted · Author: 行者123 · Updated: 2023-12-01 01:04:52

I have a CrawlSpider that crawls a given site up to a certain depth and downloads the PDFs on that site. Everything works fine, but in addition to the PDF link I also need the text inside the anchor tag.

For example:

<a href='../some/pdf/url/pdfname.pdf'>Project Report</a>

Given this anchor tag: in the callback I get the response object, but besides that object I also need the text inside the tag, e.g. "Project Report". Is there any way to get this text along with the response object? I have been through the https://docs.scrapy.org/en/latest/topics/selectors.html docs, but that is not what I am looking for.
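For context, the anchor text is easy to carry forward when the links are followed by hand rather than by a CrawlSpider rule. A minimal sketch, not the spider from the question: a plain scrapy.Spider with hypothetical names (PdfLinkSpider, parse_pdf) and an illustrative .pdf suffix filter, passing the text to the callback via cb_kwargs (available in Scrapy >= 1.7):

import scrapy

class PdfLinkSpider(scrapy.Spider):
    name = 'pdf_links'
    start_urls = ['http://www.someurl.com']

    def parse(self, response):
        # Walk the anchors manually so href and text stay paired
        for link in response.css('a'):
            href = link.css('::attr(href)').get()
            text = link.css('::text').get()
            if href and href.endswith('.pdf'):
                # cb_kwargs (Scrapy >= 1.7) forwards the anchor text
                yield response.follow(href, callback=self.parse_pdf,
                                      cb_kwargs={'link_text': text})

    def parse_pdf(self, response, link_text):
        yield {'document_url': response.url, 'name': link_text}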

Sample code:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class DocumunetPipeline(scrapy.Item):
    document_url = scrapy.Field()
    name = scrapy.Field()  # name of pdf/doc file
    depth = scrapy.Field()


class MySpider(CrawlSpider):
    name = 'pdf'
    start_urls = ['http://www.someurl.com']
    allowed_domains = ['someurl.com']
    rules = (
        # deny_extensions=[] keeps links to .pdf files, which the
        # default extractor would otherwise skip
        Rule(LinkExtractor(tags="a", deny_extensions=[]),
             callback='parse_document', follow=True),
    )

    def parse_document(self, response):
        content_type = (response.headers
                        .get('Content-Type', b'')
                        .decode("utf-8"))
        url = response.url
        if content_type == "application/pdf":
            name = response.headers.get('Content-Disposition', None)
            document = DocumunetPipeline()
            document['document_url'] = url
            document['name'] = name
            document['depth'] = response.meta.get('depth', None)
            yield document
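As an aside, the name field above stores the raw Content-Disposition header bytes. A hedged helper sketch (hypothetical function name, assuming the common attachment; filename="..." form) for pulling out just the filename:

def filename_from_disposition(header):
    # header is the raw bytes value of the Content-Disposition header,
    # e.g. b'attachment; filename="pdfname.pdf"'
    if not header:
        return None
    value = header.decode("utf-8", errors="replace")
    for part in value.split(";"):
        part = part.strip()
        if part.lower().startswith("filename="):
            return part.split("=", 1)[1].strip('"')
    return None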

Best Answer

It does not seem to be documented, but the meta attribute does contain the link text. It is set on this line. A minimal example:

from scrapy.spiders import Rule, CrawlSpider
from scrapy.linkextractors import LinkExtractor


class LinkTextSpider(CrawlSpider):
    name = 'linktext'
    start_urls = ['https://example.org']
    rules = [
        Rule(LinkExtractor(), callback='parse_document'),
    ]

    def parse_document(self, response):
        return dict(
            url=response.url,
            link_text=response.meta['link_text'],
        )

This produces output similar to the following:

2019-04-01 12:03:30 [scrapy.core.engine] INFO: Spider opened
2019-04-01 12:03:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-04-01 12:03:30 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-04-01 12:03:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://example.org> (referer: None)
2019-04-01 12:03:32 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.iana.org/domains/reserved> from <GET http://www.iana.org/domains/example>
2019-04-01 12:03:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.iana.org/domains/reserved> (referer: None)
2019-04-01 12:03:33 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.iana.org/domains/reserved>
{'url': 'https://www.iana.org/domains/reserved', 'link_text': 'More information...'}
2019-04-01 12:03:33 [scrapy.core.engine] INFO: Closing spider (finished)
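Applied back to the spider from the question, the anchor text from response.meta can replace the raw Content-Disposition header as the document name. A sketch of the adjusted callback, assuming the same DocumunetPipeline item as above:

    def parse_document(self, response):
        content_type = (response.headers
                        .get('Content-Type', b'')
                        .decode("utf-8"))
        if content_type == "application/pdf":
            document = DocumunetPipeline()
            document['document_url'] = response.url
            # link_text is set by CrawlSpider when the rule follows the link
            document['name'] = response.meta.get('link_text', '').strip()
            document['depth'] = response.meta.get('depth', None)
            yield document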

Regarding python - Is there a way to get the text inside an anchor tag in Scrapy's CrawlSpider?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55450472/
