python - SCRAPY: every time my spider crawls, it scrapes the same page (the first page)


I have written some code to scrape pages with Scrapy in Python; the main.py code is pasted below. However, whenever I run my spider, it only scrapes the first page (DEBUG: Crawled from <200 https://www.tuscc.si/produkti/instant-juhe>), which is also the Referer header of the request (when inspected).

I tried adding the data from the "Request Payload" field, which is pasted here: {"action":"loadList","skip":64,"filter":{"1005":[], "1006":[],"1007":[],"1009":[],"1013":[]}}, and when I try to open the page with it (modified like this:

https://www.tuscc.si/produkti/instant-juhe#32;'action':'loadList';'skip':'32';'sort':'none'

), the browser opens it, but scrapy shell does not. I also tried adding the number from the Request URL: https://www.tuscc.si/cache/script/tuscc.js?1563872492384, where the query string parameter is 1563872492384; but it still will not scrape from the requested page.
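
(For reference, the XHR above can be rebuilt by hand and fetched from scrapy shell; a minimal sketch, assuming the payload quoted above and the two headers a browser typically sends with such a request:)

import json
from scrapy import Request

# rebuild the POST the site's JavaScript issues when loading page 3 (skip=64)
payload = {"action": "loadList", "skip": 64,
           "filter": {"1005": [], "1006": [], "1007": [], "1009": [], "1013": []}}
req = Request(
    url="https://www.tuscc.si/produkti/instant-juhe",
    method="POST",
    body=json.dumps(payload),
    headers={"Content-Type": "application/json; charset=UTF-8",  # assumed from the browser's request
             "X-Requested-With": "XMLHttpRequest"},               # assumed from the browser's request
)
# inside `scrapy shell`, run fetch(req) and then inspect response.text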

Also, I tried many variations and added many things, all of which I read about online, just to see if there would be any progress, but nothing...

The code is:

from scrapy.spiders import CrawlSpider
from tus_pomos.items import TusPomosItem
from tus_pomos.scrapy_splash import SplashRequest


class TusPomosSpider(CrawlSpider):
    name = 'TUSP'
    allowed_domains = ['www.tuscc.si']
    start_urls = ["https://www.tuscc.si/produkti/instant-juhe#0;1563872492384;",
                  "https://www.tuscc.si/produkti/instant-juhe#64;1563872492384;", ]
    download_delay = 5.0

    def start_requests(self):
        # payload = [
        #     {"action": "loadList",
        #      "skip": 0,
        #      "filter": {
        #          "1005": [],
        #          "1006": [],
        #          "1007": [],
        #          "1009": [],
        #          "1013": []}
        #      }]
        for url in self.start_urls:
            r = SplashRequest(url, self.parse, magic_response=False, dont_filter=True, endpoint='render.json', meta={
                'original_url': url,
                'dont_redirect': True},
                args={
                    'wait': 2,
                    'html': 1
                })
            r.meta['dont_redirect'] = True
            yield r

    def parse(self, response):
        items = TusPomosItem()
        pro = response.css(".thumb-box")
        for p in pro:
            pro_link = p.css("a::attr(href)").extract_first()
            pro_name = p.css(".description::text").extract_first()
            items['pro_link'] = pro_link
            items['pro_name'] = pro_name
            yield items

In short, I am asking how to crawl all the pages in the pagination, for example this page (I also tried it with the command scrapy shell url):

https://www.tuscc.si/produkti/instant-juhe#64;1563872492384;

But the response is always the first page, and it is scraped repeatedly:

https://www.tuscc.si/produkti/instant-juhe
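
(This behavior is expected: everything after the '#' in a URL is a fragment, which the browser hands to the page's JavaScript but never includes in the HTTP request, so the server receives the same URL every time. A quick illustration in plain Python:)

from urllib.parse import urldefrag

url = "https://www.tuscc.si/produkti/instant-juhe#64;1563872492384;"
base, fragment = urldefrag(url)
print(base)      # https://www.tuscc.si/produkti/instant-juhe -- what the server actually receives
print(fragment)  # 64;1563872492384; -- visible only to client-side JavaScript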

I would be very grateful if you could help me. Thank you.

---

The parse_detail generator function:

def parse_detail(self, response):
    items = TusPomosItem()
    pro = response.css(".thumb-box")
    for p in pro:
        pro_link = p.css("a::attr(href)").extract_first()
        pro_name = p.css(".description::text").extract_first()
        items['pro_link'] = pro_link
        items['pro_name'] = pro_name
        my_details = {
            'pro_link': pro_link,
            'pro_name': pro_name
        }
        # note: requires `import json`; mode 'w' rewrites pro_file.json on every iteration
        with open('pro_file.json', 'w') as json_file:
            json.dump(my_details, json_file)

        yield items
        # yield scrapy.FormRequest(
        #     url='https://www.tuscc.si/produkti/instant-juhe',
        #     callback=self.parse_detail,
        #     method='POST',
        #     headers=self.headers
        # )

Here I am not sure whether I should assign the 'items' variable as it is, or get it from response.body? Also, should the yield stay as it is, or should I change it to a Request (this part is copied from the answer code given)?

I am new here, so thank you for your understanding!

Best Answer

Rather than using Splash to render the page, it may be more efficient to get the data from the underlying requests that are made. The code below iterates over all pages containing articles. Under parse_detail you can write the logic that loads the data from the response into a JSON, in which you can find the 'pro_link' and 'pro_name' of the products.

import scrapy
import json
from scrapy.spiders import Spider
from ..items import TusPomosItem


class TusPomosSpider(Spider):
    name = 'TUSP'
    allowed_domains = ['tuscc.si']
    start_urls = ["https://www.tuscc.si/produkti/instant-juhe"]
    download_delay = 5.0

    headers = {
        'Origin': 'https://www.tuscc.si',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'en-GB,en;q=0.9,nl-BE;q=0.8,nl;q=0.7,ro-RO;q=0.6,ro;q=0.5,en-US;q=0.4',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36',
        'Content-Type': 'application/json; charset=UTF-8',
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'X-Requested-With': 'XMLHttpRequest',
        'Connection': 'keep-alive',
        'Referer': 'https://www.tuscc.si/produkti/instant-juhe',
    }

    def parse(self, response):
        number_of_pages = int(response.xpath(
            '//*[@class="paginationHolder"]//@data-size').extract_first())
        number_per_page = int(response.xpath(
            '//*[@name="pageSize"]/*[@selected="selected"]/text()').extract_first())

        for page_number in range(0, number_of_pages):
            skip = number_per_page * page_number
            data = {"action": "loadList",
                    "filter": {"1005": [], "1006": [], "1007": [], "1009": [],
                               "1013": []},
                    "skip": str(skip),
                    "sort": "none"
                    }
            yield scrapy.Request(
                url='https://www.tuscc.si/produkti/instant-juhe',
                callback=self.parse_detail,
                method='POST',
                body=json.dumps(data),
                headers=self.headers
            )

    def parse_detail(self, response):
        detail_page = json.loads(response.text)
        for product in detail_page['docs']:
            item = TusPomosItem()
            item['pro_link'] = product['url']
            item['pro_name'] = product['title']
            yield item
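
(With a standard Scrapy project layout, the spider above can be run and its items written out with Scrapy's built-in feed export; the output file name here is just an example:)

scrapy crawl TUSP -o products.json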

For python - SCRAPY: every time my spider crawls, it scrapes the same page (the first page), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57162796/
