
python - Download a file via a javascript onclick using Python?

Reposted. Author: 行者123. Updated: 2023-11-30 22:08:51

There is a button labeled Download CSV on this page: http://www.nasdaqomxnordic.com/aktier/microsite?Instrument=CSE77855&name=Pandora&ISIN=DK0060252690

How can I download the file using Python?

The relevant HTML on the page looks like this:

<a class="floatRight exportTrades" id="exportIntradayTradesCSV">Download CSV</a>

<script>
// #*

var tradesForShare = {
    load: function () {
        var q = {
            "SubSystem": "Prices",
            "Action": "GetInstrument",
            "inst.an": "nm",
            "inst.e": "3",
            "Exchange": "NMF",
            "Instrument": webCore.getInstrument(),
            "cache": "skip",
            "app": location["pathname"],
            "datasource": "prod",
            "translationFile": "translation",
            "DefaultDecimals": false
        };

        $("#tradesForShareOutput").loading("/static/nordic/css/img/loading.gif");
        var nordicRTI = NordicRTI.getInstance();
        var url = window.webCore.getWebAPIURL("prod", "MarketData/GetMarketData", true);
        var tradesRTI = new RTIObject(url, q, function (data) {
            tradesForShare.parseData(data);
            console.log(tradesRTI);
        });
        nordicRTI.addRTIObject(tradesRTI);

        if ($("tradesForShareTable").has("tr.odd")) {
            $('.exportTrades').removeClass('disabled');
            $('.exportTrades.disabled').css("pointer-events", "auto");
        } else {
            $('.exportTrades').addClass('disabled');
            $('.exportTrades').css("pointer-events", "none");
        }
        /*webCore.getMarketData(q, function (data) {
            tradesForShare.parseData(data);
        }, true);*/

        //var url = window.webCore.getWebAPIURL("prod", "MarketData/GetMarketData", true);
        /*$.getJSON(url, q, function (data) {
            tradesForShare.parseData(data);
        });*/
        /*$.ajax({
            type: "get",
            url: url,
            data: q,
            dataType: "jsonp",
            cache: true,
            success: function (data) {
                tradesForShare.parseData(data);
            },
            jsonp: "callback"
        });*/

        //setTimeout ( tradesForShare.load, 1000*30 ); // update every minute
    },
    parseData: function (data) {
        if (data.instruments != null) {
            $("#tradesForShareOutput").empty();
            var table = $("<table></table>").attr("id", "tradesForShareTable").addClass("tablesorter");
            var thead = $("<thead></thead>");
            var row = $("<tr></tr>");
            var kurs = $("<th></th>").text(webCore.getTranslationFor("trades", "p", data)); // data.attributeTranslations.trades.p.trans[window.currentLanguage]
            var vol = $("<th></th>").text(webCore.getTranslationFor("trades", "v", data)); // data.attributeTranslations.trades.v.trans[window.currentLanguage]
            var name = $("<th></th>").text(webCore.getTranslationFor("trades", "nm", data)); // data.attributeTranslations.trades.nm.trans[window.currentLanguage]
            var buyer = $("<th></th>").text(webCore.getTranslationFor("trades", "b", data)); // data.attributeTranslations.trades.b.trans[window.currentLanguage]
            var seller = $("<th></th>").text(webCore.getTranslationFor("trades", "s", data)); // data.attributeTranslations.trades.s.trans[window.currentLanguage]
            var time = $("<th></th>").text(webCore.getTranslationFor("trades", "t", data)); // data.attributeTranslations.trades.t.trans[window.currentLanguage]
            row.append(kurs).append(vol).append(name).append(buyer).append(seller).append(time);
            thead.append(row);
            var tbody = $("<tbody></tbody>");
            $.each(data.instruments[webCore.getInstrument().toLowerCase()].trades, function (k, v) {
                row = $("<tr></tr>");
                kurs = $("<td></td>").text(webCore.formatNumeric(v.values.p, 3));
                vol = $("<td></td>").text(window.webCore.formatNumeric(v.values.v, 0));
                name = $("<td></td>").text(v.values.nm);
                buyer = $("<td></td>").text(v.values.b);
                seller = $("<td></td>").text(v.values.s);
                time = $("<td></td>").text(webCore.getTimeFromDateString(v.values.t));
                row.append(kurs).append(vol).append(name).append(buyer).append(seller).append(time);
                tbody.append(row);
            });
            table.append(thead).append(tbody);
            $("#tradesForShareOutput").append(table);
            $("#tradesForShareTable").tablesorter({widgets: ['zebra']});
        }
    },
    excel: function () {
        var instrument = null;
        instrument = window.webCore.getInstrument();
        var utc = new Date().toJSON().slice(0, 10).replace(/-/g, '-');
        $("#xlsForm").attr("action", webCore.getProxyURL("prod"));
        var xmlquery = webCore.createQuery(Utils.Constants.marketAction.getTrades, {}, {
            t__a: "1,2,5,10,7,8,18",
            FromDate: utc,
            Instrument: instrument,
            ext_contenttype: "application/vnd.ms-excel",
            ext_contenttypefilename: "share_export.xls",
            ext_xslt: "t_table_simple.xsl",
            ext_xslt_lang: currentLanguage,
            showall: "1"
        });
        console.log(xmlquery);
        $("#xmlquery").val(xmlquery);
        $("#xlsForm").submit();
    }
};

$(function () {
    tradesForShare.load();
    $("#exportIntradayTradesCSV").on({
        click: function (e) {
            tradesForShare.excel();
            //window.webCore.exportTableToCSVClickEvent($("#exportIntradayTradesCSV"), $("#tradesForShareOutput"), '_' + window.webCore.getInstrument() + '.csv');
        }
    });
});
</script>

I tried using Inspect in Google Chrome and clicking through the Event Listeners.

When I click the button, I see the following:

<post>
<param name="SubSystem" value="Prices"/>
<param name="Action" value="GetTrades"/>
<param name="Exchange" value="NMF"/>
<param name="t__a" value="1,2,5,10,7,8,18"/>
<param name="FromDate" value="2018-08-29"/>
<param name="Instrument" value="CSE77855"/>
<param name="ext_contenttype" value="application/vnd.ms-excel"/>
<param name="ext_contenttypefilename" value="share_export.xls"/>
<param name="ext_xslt" value="/nordicV3/t_table_simple.xsl"/>
<param name="ext_xslt_lang" value="en"/>
<param name="showall" value="1"/>
<param name="app" value="/aktier/microsite"/>
</post>

So I thought I could do something like the following, but it doesn't work; see the output further below.

import requests

url = 'http://www.nasdaqomxnordic.com/WebAPI/api/MarketData/GetMarketData'
params = {
    "SubSystem": "Prices",
    "Action": "GetTrades",
    "Exchange": "NMF",
    "t__a": "1,2,5,10,7,8,18",
    "FromDate": "2018-08-29",
    "Instrument": "CSE77855",
    "ext_contenttype": "application/vnd.ms-excel",
    "ext_contenttypefilename": "share_export.xls",
    "ext_xslt": "/nordicV3/t_table_simple.xsl",
    "ext_xslt_lang": "en",
    "showall": "1",
    "app": "/aktier/microsite",
}

r = requests.get(url, params=params)

print(r.json())

I get the following output:

{'linkCall': 'SubSystem=Prices&Action=GetTrades&Exchange=NMF&t.a=1&t.a=2&t.a=5&t.a=10&t.a=7&t.a=8&t.a=18&FromDate=2018-08-29&Instrument=CSE77855&ext_contenttype=application%2fvnd.ms-excel&ext_contenttypefilename=share_export.xls&ext_xslt=%2fnordicV3%2ft_table_simple.xsl&ext_xslt_lang=en&showall=1&app=%2faktier%2fmicrosite', 'instruments': None, 'derivatives': None, 'warrants': None, 'attributeTranslations': {}, 'message': None, 'success': False}
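As a side check (not part of my attempt above), decoding the linkCall value from that response with the stdlib parse_qs shows what the server actually received: the double underscore in t__a was turned into a dot and the comma-separated list was expanded into repeated t.a parameters.

```python
from urllib.parse import parse_qs

# The linkCall string echoed back by the API, shortened to the relevant part
link_call = ('SubSystem=Prices&Action=GetTrades&Exchange=NMF'
             '&t.a=1&t.a=2&t.a=5&t.a=10&t.a=7&t.a=8&t.a=18'
             '&FromDate=2018-08-29&Instrument=CSE77855&showall=1')

parsed = parse_qs(link_call)
print(parsed['t.a'])        # ['1', '2', '5', '10', '7', '8', '18']
print(parsed['Instrument'])  # ['CSE77855']
```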

If at all possible, I would like to avoid using Selenium.

Best Answer

Inspecting the html, I noticed that the form's action is /webproxy/DataFeedProxy.aspx and its method is post. That means the form is submitted with a POST request to: http://www.nasdaqomxnordic.com/webproxy/DataFeedProxy.aspx. The form has a field named xmlquery whose value is the xml shown in your question. The following code should download the file.

import requests

url = 'http://www.nasdaqomxnordic.com/webproxy/DataFeedProxy.aspx'
xmlquery = '''<post>
<param name="SubSystem" value="Prices"/>
<param name="Action" value="GetTrades"/>
<param name="Exchange" value="NMF"/>
<param name="t__a" value="1,2,5,10,7,8,18"/>
<param name="FromDate" value="2018-08-29"/>
<param name="Instrument" value="CSE77855"/>
<param name="ext_contenttype" value="application/vnd.ms-excel"/>
<param name="ext_contenttypefilename" value="share_export.xls"/>
<param name="ext_xslt" value="/nordicV3/t_table_simple.xsl"/>
<param name="ext_xslt_lang" value="en"/>
<param name="showall" value="1"/>
<param name="app" value="/aktier/microsite"/>
</post>'''

r = requests.post(url, data={'xmlquery': xmlquery})
html = r.text
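If you want to vary the date or instrument without hand-editing the string, the payload can also be assembled from a dict. build_xmlquery below is a hypothetical helper; the parameter names are simply the ones from the post body above, and quoteattr handles the attribute quoting.

```python
from xml.sax.saxutils import quoteattr

def build_xmlquery(params):
    """Render a dict as the <post><param .../></post> payload shown above."""
    lines = ['<post>']
    for name, value in params.items():
        # quoteattr returns the value wrapped in quotes, with special chars escaped
        lines.append('<param name=%s value=%s/>' % (quoteattr(name), quoteattr(str(value))))
    lines.append('</post>')
    return '\n'.join(lines)

xmlquery = build_xmlquery({
    'SubSystem': 'Prices',
    'Action': 'GetTrades',
    'Exchange': 'NMF',
    'FromDate': '2018-08-29',
    'Instrument': 'CSE77855',
    'showall': '1',
})
```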

The file is not a csv (neither is the one I get from the browser); it has an .xls extension but contains one large html table. You can, however, create a csv file from it with the help of BeautifulSoup and csv.

from bs4 import BeautifulSoup
import csv

soup = BeautifulSoup(html, 'html.parser')
names = [i.text for i in soup.select('th')] + ['Name']
values = [
    [td.text for td in tr.select('td')] + [tr.td['title'].rstrip(' - ')]
    for tr in soup.select('tr')[1:]
]

with open('file.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(names)
    writer.writerows(values)

Note that BeautifulSoup may take some time to parse the file, because it is large. If you are using Python 2.x, open does not accept the newline argument; in that case you have to open the file in binary mode, otherwise the csv may contain blank lines.
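To make the selectors concrete, here is the same extraction logic run against a tiny made-up table of the same shape as the export (the data is hypothetical):

```python
from bs4 import BeautifulSoup

# A miniature stand-in for the exported html table
sample = '''<table>
<tr><th>Price</th><th>Volume</th></tr>
<tr><td title="Pandora - ">100.5</td><td>20</td></tr>
</table>'''

soup = BeautifulSoup(sample, 'html.parser')
# Header cells, plus the extra Name column taken from the title attribute
names = [th.text for th in soup.select('th')] + ['Name']
# Data rows: cell texts, plus the title attribute of the first td, trimmed
values = [
    [td.text for td in tr.select('td')] + [tr.td['title'].rstrip(' - ')]
    for tr in soup.select('tr')[1:]
]
print(names)   # ['Price', 'Volume', 'Name']
print(values)  # [['100.5', '20', 'Pandora']]
```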


As tommy.carstensen mentioned, pandas is a better fit for this task. It has the right tools (read_html and to_csv) and is faster than BeautifulSoup.

import pandas as pd

pd.read_html(htm_string, index_col='Time', parse_dates=True)[0].to_csv(path)
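Here is a self-contained sketch of that one-liner on a tiny made-up table. Recent pandas versions expect a file-like object rather than a literal html string, hence the StringIO wrapper; index_col is given by position here.

```python
from io import StringIO

import pandas as pd

# A miniature stand-in for the exported html table
sample = '''<table>
<tr><th>Time</th><th>Price</th></tr>
<tr><td>17:00:00</td><td>100.5</td></tr>
</table>'''

# read_html returns a list of DataFrames, one per table found on the page
df = pd.read_html(StringIO(sample), index_col=0)[0]
csv_text = df.to_csv()
```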

The Name column is not included in the file because it is not a table column but the value of a title attribute. We can get it another way, though, for example from the original url. Since the value is the same for all rows, we can create a new Name column from the name value of the query string.

import pandas as pd
from urllib.parse import urlparse, parse_qs

url = 'http://www.nasdaqomxnordic.com/aktier/microsite?Instrument=CSE77855&name=Pandora&ISIN=DK0060252690'
df = pd.read_html(html, index_col='Time', parse_dates=True)[0]
df['Name'] = parse_qs(urlparse(url).query)['name'][0]
df.to_csv('file.csv')

Regarding python - Download a file via a javascript onclick using Python?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52084829/
