
Python - Downloading a file from an aspx form


I am trying to automatically retrieve some data from this site: http://www.casablanca-bourse.com/bourseweb/en/Negociation-History.aspx?Cat=24&IdLink=225

Using urllib2 in Python, I managed to get an HTML file as if I had clicked the "Submit" button on the site.

However, when I simulate clicking the "Download data" link, I get no output.

My code is:

import urllib
import urllib2

uri = 'http://www.casablanca-bourse.com/bourseweb/en/Negociation-History.aspx?Cat=24&IdLink=225'
headers = {
    'HTTP_USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36',
    'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
}

formFields = (
    (r'TopControl1$ScriptManager1', r'HistoriqueNegociation1$UpdatePanel1|HistoriqueNegociation1$HistValeur1$LinkButton1'),
    (r'__EVENTTARGET', r'HistoriqueNegociation1$HistValeur1$LinkButton1'),
    (r'__EVENTARGUMENT', r''),
    (r'__VIEWSTATE', r'/wEPDwUKMTcy/ ... +ZHYQBq1hB/BZ2BJyHdLM='), # just a small part because it's so long!
    (r'TopControl1$TxtRecherche', r''),
    (r'TopControl1$txtValeur', r''),
    (r'HistoriqueNegociation1$HistValeur1$DDValeur', r'9000 '),
    (r'HistoriqueNegociation1$HistValeur1$historique', r'RBSearchDate'),
    (r'HistoriqueNegociation1$HistValeur1$DateTimeControl1$TBCalendar', r'22/12/2014'),
    (r'HistoriqueNegociation1$HistValeur1$DateTimeControl2$TBCalendar', r'28/12/2014'),
    (r'HistoriqueNegociation1$HistValeur1$DDuree', r'6'),
    (r'hiddenInputToUpdateATBuffer_CommonToolkitScripts', r'1')
)


encodedFields = urllib.urlencode(formFields)

req = urllib2.Request(uri, encodedFields, headers)
f = urllib2.urlopen(req)

What should I do to get the same file I get when I click the "Download data" link on the site?

Thanks

Best Answer

First of all, I would suggest using the requests library instead of urllib. We will also need BeautifulSoup to work with the HTML tags:

pip install requests

pip install beautifulsoup4

Then the code looks like this:

import requests
from bs4 import BeautifulSoup

session = requests.Session()

payload = {
    r'TopControl1$ScriptManager1': r'HistoriqueNegociation1$UpdatePanel1|HistoriqueNegociation1$HistValeur1$LinkButton1',
    r'__EVENTTARGET': r'HistoriqueNegociation1$HistValeur1$LinkButton1',
    r'__EVENTARGUMENT': r'',
    r'TopControl1$TxtRecherche': r'',
    r'TopControl1$txtValeur': r'',
    r'HistoriqueNegociation1$HistValeur1$DDValeur': r'9000 ',
    r'HistoriqueNegociation1$HistValeur1$historique': r'RBSearchDate',
    r'HistoriqueNegociation1$HistValeur1$DateTimeControl1$TBCalendar': r'22/12/2014',
    r'HistoriqueNegociation1$HistValeur1$DateTimeControl2$TBCalendar': r'28/12/2014',
    r'HistoriqueNegociation1$HistValeur1$DDuree': r'6',
    r'hiddenInputToUpdateATBuffer_CommonToolkitScripts': r'1'
}


uri = 'http://www.casablanca-bourse.com/bourseweb/en/Negociation-History.aspx?Cat=24&IdLink=225'
r = session.get(uri)

# Find the __VIEWSTATE value; there is only one input tag with type="hidden"
soup = BeautifulSoup(r.text, 'html.parser')
viewstate_tag = soup.find('input', attrs={"type" : "hidden"})
payload[viewstate_tag['name']] = viewstate_tag['value']

r = session.post(uri, payload)
print r.text  # contains the HTML table with the data

First we fetch the original page, extract the __VIEWSTATE value, and reuse that value in the second request.
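As a side note, ASP.NET WebForms pages often render several hidden fields besides __VIEWSTATE (for example __VIEWSTATEGENERATOR and __EVENTVALIDATION), and the "Download data" link may return a file rather than an HTML page. The following is only a minimal sketch of a more defensive variant, not the original answer's code: it copies every hidden input into the payload and saves the response body when it is not HTML. The output filename export.xls and the presence of the extra hidden fields are assumptions.

import requests
from bs4 import BeautifulSoup

uri = 'http://www.casablanca-bourse.com/bourseweb/en/Negociation-History.aspx?Cat=24&IdLink=225'

session = requests.Session()
r = session.get(uri)
soup = BeautifulSoup(r.text, 'html.parser')

# Start from the visible form fields shown in the answer above, then copy in
# every hidden input the page renders (__VIEWSTATE and, if present,
# __VIEWSTATEGENERATOR / __EVENTVALIDATION).
payload = {
    r'__EVENTTARGET': r'HistoriqueNegociation1$HistValeur1$LinkButton1',
    r'__EVENTARGUMENT': r'',
    # ... the remaining fields from the payload above ...
}
for hidden in soup.find_all('input', attrs={'type': 'hidden'}):
    if hidden.get('name'):
        payload.setdefault(hidden['name'], hidden.get('value', ''))

r = session.post(uri, data=payload)

# If the server answers with a file download instead of an HTML page,
# write the raw bytes to disk; 'export.xls' is just a placeholder name.
if 'text/html' not in r.headers.get('Content-Type', ''):
    with open('export.xls', 'wb') as f:
        f.write(r.content)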

Regarding Python - downloading a file from an aspx form, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27697487/
