Python web scraping with requests - only getting a small portion of the data in the response

I am trying to get some financial data from this URL:

http://www.casablanca-bourse.com/bourseweb/en/Negociation-History.aspx?Cat=24&IdLink=225

My code only works for very small date intervals (fewer than 19 days), but on the website itself you can retrieve up to 3 years of data!

My code is as follows:

import requests
import string
import csv
from bs4 import BeautifulSoup


# A simple helper function: strip out any non-printable characters.
def formatIt(s):
    output = ''
    for i in s:
        if i in string.printable:
            output += i
    return output


# default url
uri = "http://www.casablanca-bourse.com/bourseweb/en/Negociation-History.aspx?Cat=24&IdLink=225"


def get_viewState_and_symVal(symbolName, session):
    r = session.get(uri)
    soup = BeautifulSoup(r.content, "html.parser")

    # Grab the ASP.NET __VIEWSTATE value needed for the POST back.
    viewstate_val = soup.find('input', attrs={"id": "__VIEWSTATE"})['value']

    # Find the <option> value matching the requested symbol name.
    selectSymb = soup.find('select', attrs={"name": "HistoriqueNegociation1$HistValeur1$DDValeur"})
    for i in selectSymb.find_all('option'):
        if i.text == symbolName:
            symbol_val = i['value']

    # Simple sanity check before returning!
    try:
        symbol_val
    except NameError:
        raise NameError("Symbol name not found!")
    return (viewstate_val, symbol_val)


def MainFun(symbolName, dateFrom, dateTo):
    session = requests.Session()
    viewstate, symbol = get_viewState_and_symVal(symbolName, session)
    payload = {
        'TopControl1$ScriptManager1': r'HistoriqueNegociation1$UpdatePanel1|HistoriqueNegociation1$HistValeur1$Image1',
        '__VIEWSTATE': viewstate,
        'HistoriqueNegociation1$HistValeur1$DDValeur': symbol,
        'HistoriqueNegociation1$HistValeur1$historique': r'RBSearchDate',
        'HistoriqueNegociation1$HistValeur1$DateTimeControl1$TBCalendar': dateFrom,
        'HistoriqueNegociation1$HistValeur1$DateTimeControl2$TBCalendar': dateTo,
        'HistoriqueNegociation1$HistValeur1$DDuree': r'6',
        'hiddenInputToUpdateATBuffer_CommonToolkitScripts': r'1',
        'HistoriqueNegociation1$HistValeur1$Image1.x': r'27',
        'HistoriqueNegociation1$HistValeur1$Image1.y': r'8'
    }

    request2 = session.post(uri, data=payload)
    soup2 = BeautifulSoup(request2.content, "html.parser")

    # The results table has id="arial11bleu" and, unlike the other tables
    # sharing that id, no class attribute.
    rslt = None
    for i in soup2.find_all('table', id="arial11bleu"):
        if not i.has_attr('class'):
            rslt = i
            break

    output = []
    for i in rslt.find_all('tr')[1:]:
        temp = []
        for j in i.find_all('td'):
            sani = j.text.strip()
            if sani not in string.whitespace:
                temp.append(formatIt(sani))
        if len(temp) > 0:
            output.append(temp)

    # Python 3: open in text mode with newline='' for the csv module.
    with open("output.csv", "w", newline="") as f:
        writer = csv.writer(f, delimiter=';')
        writer.writerows(output)

    return writer


# working example
MainFun("ATLANTA", "1/1/2014", "30/01/2014")

# not working example
MainFun("ATLANTA", "1/1/2014", "30/03/2014")

Best Answer

It could be that the site is automatically detecting scrapers and blocking you. Try adding a small sleep statement somewhere to give their server some breathing room. That is usually the polite thing to do anyway.

from time import sleep
sleep(1) # pauses 1 second
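
Building on that, and given the question's observation that the form only returns results for ranges of fewer than about 19 days, one workaround is to split a long date range into short sub-ranges and pause between requests. The sketch below is an illustration only: MainFunChunked is a hypothetical wrapper around the MainFun defined in the question, and the 14-day chunk size is an assumption. Note that MainFun as written overwrites output.csv on every call, so for real use you would also change it to return or append its rows.

from datetime import date, timedelta
from time import sleep

def MainFunChunked(symbolName, dateFrom, dateTo, chunk_days=14, pause=1.0):
    # dateFrom/dateTo are datetime.date objects; chunk_days stays safely
    # below the ~19-day limit observed in the question.
    start = dateFrom
    while start <= dateTo:
        end = min(start + timedelta(days=chunk_days - 1), dateTo)
        MainFun(symbolName,
                start.strftime("%d/%m/%Y"),   # day/month/year, as the form expects
                end.strftime("%d/%m/%Y"))
        sleep(pause)                          # breathing room between requests
        start = end + timedelta(days=1)

# e.g. MainFunChunked("ATLANTA", date(2014, 1, 1), date(2014, 3, 30))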

Regarding Python web scraping with requests - only getting a small portion of the data in the response, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27879973/
