
python - I want to scrape multiple pages, but I only get results for the last URL. Why?


Why does the output only contain the result of the last URL? Is there something wrong with my code?

import requests as uReq
from bs4 import BeautifulSoup as soup
import numpy as np

#can i use while loop instead for?
for page in np.arange(1,15):
    url = uReq.get('https://www.myanmarbusiness-directory.com/en/categories-index/car-wheels-tyres-tubes-dealers/page{}.html?city=%E1%80%99%E1%80%9B%E1%80%99%E1%80%B9%E1%80%B8%E1%80%80%E1%80%AF%E1%80%94%E1%80%B9%E1%80%B8%E1%81%BF%E1%80%99%E1%80%AD%E1%80%B3%E1%82%95%E1%80%94%E1%80%9A%E1%80%B9'.format(page)).text

#have used for loop,but result is the last url
page_soup = soup(url,"html.parser")
info = page_soup.findAll("div",{"class: ","row detail_row"})

#Do all the url return output in one file?
filename = "wheel.csv"
file = open(filename,"w",encoding="utf-8")

Best Answer

You should check the indentation of the code that follows the for statement: as written, only the url assignment is inside the loop, so url is overwritten on every iteration and only the last page ever gets parsed.

import requests as uReq
from bs4 import BeautifulSoup as soup
import numpy as np

for page in np.arange(1,15):
    url = uReq.get('https://www.myanmarbusiness-directory.com/en/categories-index/car-wheels-tyres-tubes-dealers/page{}.html?city=%E1%80%99%E1%80%9B%E1%80%99%E1%80%B9%E1%80%B8%E1%80%80%E1%80%AF%E1%80%94%E1%80%B9%E1%80%B8%E1%81%BF%E1%80%99%E1%80%AD%E1%80%B3%E1%82%95%E1%80%94%E1%80%9A%E1%80%B9'.format(page)).text

    # this should be done N times (where N is the range param)
    page_soup = soup(url,"html.parser")
    # note: the attrs filter must be a dict, i.e. {"class": "row detail_row"}
    info = page_soup.findAll("div",{"class": "row detail_row"})

    # append the results to the csv file
    filename = "wheel.csv"
    file = open(filename,"a",encoding="utf-8")
    ... # code for writing in the csv file
    file.close()

Then you will find everything in the file. Note that you also need to close the file so that it actually gets saved.
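For completeness, here is a minimal sketch of what the elided CSV-writing part could look like. It assumes that the stripped text of each matched detail_row div is the data you want, one CSV row per div; the header row and column layout are illustrative assumptions, not part of the original answer. It also opens the file once with a with block instead of reopening it on every iteration, so the close is handled automatically, and uses plain range, which behaves the same as np.arange(1, 15) here.

import csv
import requests
from bs4 import BeautifulSoup

url_template = 'https://www.myanmarbusiness-directory.com/en/categories-index/car-wheels-tyres-tubes-dealers/page{}.html?city=%E1%80%99%E1%80%9B%E1%80%99%E1%80%B9%E1%80%B8%E1%80%80%E1%80%AF%E1%80%94%E1%80%B9%E1%80%B8%E1%81%BF%E1%80%99%E1%80%AD%E1%80%B3%E1%82%95%E1%80%94%E1%80%9A%E1%80%B9'

# one file handle for the whole run; newline="" avoids blank lines on Windows
with open("wheel.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["page", "detail"])  # assumed header row
    for page in range(1, 15):
        html = requests.get(url_template.format(page)).text
        page_soup = BeautifulSoup(html, "html.parser")
        for row in page_soup.find_all("div", {"class": "row detail_row"}):
            # assumption: the visible text of each matched div is what should be saved
            writer.writerow([page, row.get_text(strip=True)])

Because the writing happens inside the page loop, every page contributes rows to the same wheel.csv file.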

Regarding "python - I want to scrape multiple pages, but I only get results for the last URL. Why?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/66596538/
