python - Passing the real URL into a dict with Scrapy-Splash

Reposted · Author: 行者123 · Updated: 2023-11-30 21:59:49

When trying to save the URL in a dict via ('url' : response.request.url), Scrapy stores the Scrapy-Splash endpoint URL (http://localhost:8050/render.html) instead of the page's real URL.

I tried adding extra arguments to pass the real URL through, but without success.

import scrapy
from scrapy.http import FormRequest
from scrapy_splash import SplashRequest

class QuotesJSSpider(scrapy.Spider):
    name = 'quotesjs'
    start_urls = ('https://www.facebook.com/login',)
    custom_settings = {
        'SPLASH_URL': 'http://localhost:8050',
        'DOWNLOADER_MIDDLEWARES': {
            'scrapy_splash.SplashCookiesMiddleware': 723,
            'scrapy_splash.SplashMiddleware': 725,
            'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
        },
        'SPIDER_MIDDLEWARES': {
            'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
        },
        'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
    }

    def parse(self, response):
        token = response.xpath('//*[@id="u_0_a"]').extract_first()
        return FormRequest.from_response(
            response,
            formdata={'lgndim': token,
                      'pass': 'xxx',
                      'email': 'xxxx'},
            callback=self.load_sites)

    def load_sites(self, response):
        urls = [
            'https://www.facebook.com/page1/about',
            'https://www.facebook.com/page2/about',
        ]
        for url in urls:
            yield SplashRequest(url=url, callback=self.scrape_pages)

    def scrape_pages(self, response):
        shops = {
            'company_name': response.css('title::text').extract(),
            'url': response.request.url,
        }
        yield shops

The result should look like this: 'url': 'https://www.facebook.com/page1/about'

instead of this: 'url': 'http://localhost:8050/render.html'

Best answer

The original request's URL can be found at: response.request._original_url

To avoid accessing an internal attribute, you can also try one of the following:

  • Pass the URL in the request meta:

    def load_sites(self, response):
        urls = [
            'https://www.facebook.com/page1/about',
            'https://www.facebook.com/page2/about',
        ]
        for url in urls:
            yield SplashRequest(url=url, callback=self.scrape_pages,
                                meta={'original_url': url})

    def scrape_pages(self, response):
        shops = {
            'company_name': response.css('title::text').extract(),
            'url': response.meta['original_url'],
        }
        yield shops
  • Use the URL from the response:

    def scrape_pages(self, response):
        shops = {
            'company_name': response.css('title::text').extract(),
            'url': response.url,
        }
        yield shops
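Why the meta approach works can be illustrated without a running Splash instance. The sketch below uses hypothetical stand-in classes (FakeRequest/FakeResponse are not Scrapy's real classes, and splash_fetch is an invented helper) to mimic the relevant behavior: the request actually sent by the downloader targets the Splash render endpoint, while user-supplied meta rides along untouched and is still readable in the callback:

```python
# Hypothetical stand-ins to illustrate the idea; not Scrapy/scrapy-splash code.
SPLASH_ENDPOINT = 'http://localhost:8050/render.html'

class FakeRequest:
    def __init__(self, url, meta=None):
        self.url = url            # what the downloader actually hits
        self.meta = meta or {}    # user data, untouched by the URL rewrite

class FakeResponse:
    def __init__(self, request):
        self.request = request
        self.meta = request.meta  # Scrapy exposes request.meta on the response

def splash_fetch(page_url):
    # Emulate scrapy-splash: the outgoing request goes to the Splash endpoint,
    # so the page URL survives only in meta.
    req = FakeRequest(SPLASH_ENDPOINT, meta={'original_url': page_url})
    return FakeResponse(req)

response = splash_fetch('https://www.facebook.com/page1/about')
print(response.request.url)           # the Splash endpoint, not the page
print(response.meta['original_url'])  # the page we actually wanted
```

This is why reading response.request.url in the callback yields http://localhost:8050/render.html, while response.meta['original_url'] still holds the intended page.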

Regarding "python - Passing the real URL into a dict with Scrapy-Splash", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54485316/
