php - Python: extract .csv results after submitting data to a form with mechanize

Reposted · Author: 可可西里 · Updated: 2023-11-01 01:15:01

I just started using Python to extract data from the web. Thanks to some other posts and this webpage, I figured out how to submit data to a form using the mechanize module.

Now I'm stuck on how to extract the results. Submitting the form produces several different outputs, but it would be perfect if I could access the csv file. I assume I have to use the re module, but how do you actually download the result through Python?

After the job runs, the csv file is here: Summary => Results => Download Heavy Chain Table (you can click "load example" directly to see how the page behaves).

import re
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False) # ignore robots
br.set_handle_refresh(False) # can sometimes hang without this

url = 'http://circe.med.uniroma1.it/proABC/index.php'
response = br.open(url)

br.form = list(br.forms())[1]

# Controls can be found by name
control1 = br.form.find_control("light")

# Text controls can be set as a string
br["light"] = "DIQMTQSPASLSASVGETVTITCRASGNIHNYLAWYQQKQGKSPQLLVYYTTTLADGVPSRFSGSGSGTQYSLKINSLQPEDFGSYYCQHFWSTPRTFGGGTKLEIKRADAAPTVSIFPPSSEQLTSGGASVVCFLNNFYPKDINVKWKIDGSERQNGVLNSWTDQDSKDSTYSMSSTLTLTKDEYERHNSYTCEATHKTSTSPIVKSFNRNEC"
br["heavy"] = "QVQLKESGPGLVAPSQSLSITCTVSGFSLTGYGVNWVRQPPGKGLEWLGMIWGDGNTDYNSALKSRLSISKDNSKSQVFLKMNSLHTDDTARYYCARERDYRLDYWGQGTTLTVSSASTTPPSVFPLAPGSAAQTNSMVTLGCLVKGYFPEPVTVTWNSGSLSSGVHTFPAVLQSDLYTLSSSVTVPSSPRPSETVTCNVAHPASSTKVDKKIVPRDC"

# To submit form
response = br.submit()
content = response.read()
# print content

result = re.findall(r"Prob_Heavy.csv", content)
print result

When I print content, the lines I'm interested in look like this:

<h2>Results</h2><br>
Predictions for Heavy Chain:
<a href='u17003I9f1/Prob_Heavy.csv'>Download Heavy Chain Table</a><br>
Predictions for Light Chain:
<a href='u17003I9f1/Prob_Light.csv'>Download Light Chain Table</a><br>

So the question is: how do I download/access href='u17003I9f1/Prob_Heavy.csv'?
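A minimal sketch of one way to do this (the href values below are copied from the results HTML above, but the regex pattern and the use of urljoin are my own assumptions, not part of the original script): pull the relative paths out of content with re, then resolve them against the page URL.

```python
import re
try:
    from urllib.parse import urljoin   # Python 3
except ImportError:
    from urlparse import urljoin       # Python 2, matching the mechanize code above

# Stand-in for the `content` returned by br.submit().read()
content = """<a href='u17003I9f1/Prob_Heavy.csv'>Download Heavy Chain Table</a><br>
<a href='u17003I9f1/Prob_Light.csv'>Download Light Chain Table</a><br>"""

# Capture whatever sits between the quotes and ends in .csv
hrefs = re.findall(r"href='([^']+\.csv)'", content)

url = 'http://circe.med.uniroma1.it/proABC/index.php'
# urljoin swaps index.php for the relative path from each link
csv_urls = [urljoin(url, h) for h in hrefs]
```

Each resulting URL can then be fetched like any other page, e.g. with br.retrieve(csv_url, file_name) or requests.get(csv_url).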

Best Answer

Here's a quick and dirty example using BeautifulSoup and requests to avoid parsing the HTML with regular expressions. Run sudo pip install bs4 if you have pip but haven't installed BeautifulSoup yet.

import re
import mechanize
from bs4 import BeautifulSoup as bs
import requests
import time


br = mechanize.Browser()
br.set_handle_robots(False) # ignore robots
br.set_handle_refresh(False) # can sometimes hang without this

url_base = "http://circe.med.uniroma1.it/proABC/"
url_index = url_base + "index.php"

response = br.open(url_index)

br.form = list(br.forms())[1]

# Controls can be found by name
control1 = br.form.find_control("light")

# Text controls can be set as a string
br["light"] = "DIQMTQSPASLSASVGETVTITCRASGNIHNYLAWYQQKQGKSPQLLVYYTTTLADGVPSRFSGSGSGTQYSLKINSLQPEDFGSYYCQHFWSTPRTFGGGTKLEIKRADAAPTVSIFPPSSEQLTSGGASVVCFLNNFYPKDINVKWKIDGSERQNGVLNSWTDQDSKDSTYSMSSTLTLTKDEYERHNSYTCEATHKTSTSPIVKSFNRNEC"
br["heavy"] = "QVQLKESGPGLVAPSQSLSITCTVSGFSLTGYGVNWVRQPPGKGLEWLGMIWGDGNTDYNSALKSRLSISKDNSKSQVFLKMNSLHTDDTARYYCARERDYRLDYWGQGTTLTVSSASTTPPSVFPLAPGSAAQTNSMVTLGCLVKGYFPEPVTVTWNSGSLSSGVHTFPAVLQSDLYTLSSSVTVPSSPRPSETVTCNVAHPASSTKVDKKIVPRDC"

# To submit form
response = br.submit()
content = response.read()
# print content

soup = bs(content, "html.parser")
# Keep only anchors that actually have an href pointing at a .csv file
urls_csv = [x.get("href") for x in soup.findAll("a")
            if x.get("href") and ".csv" in x.get("href")]
for file_path in urls_csv:
    status_code = 404
    retries = 0
    url_csv = url_base + file_path
    file_name = url_csv.split("/")[-1]
    # The server returns 404 until the job has produced the file, so poll
    while status_code == 404 and retries < 10:
        print "{} not ready yet".format(file_name)
        req = requests.get(url_csv)
        status_code = req.status_code
        retries += 1
        time.sleep(5)
    print "{} ready. Saving.".format(file_name)
    with open(file_name, "wb") as f:
        f.write(req.content)

Running the script from the REPL:

Prob_Heavy.csv not ready yet
Prob_Heavy.csv not ready yet
Prob_Heavy.csv not ready yet
Prob_Heavy.csv ready. Saving.
Prob_Light.csv not ready yet
Prob_Light.csv ready. Saving.
>>>
>>>
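Once the files are saved, the tables can be opened with Python's csv module. A small self-contained sketch in Python 3 style (the Res/Prob columns here are purely illustrative stand-ins; the real columns in Prob_Heavy.csv are whatever the proABC server emits):

```python
import csv

def summarize_csv(path):
    """Return (header_row, number_of_data_rows) for a saved table."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    return rows[0], len(rows) - 1

# Illustrative stand-in for a downloaded table
with open("example.csv", "w") as f:
    f.write("Res,Prob\n1,0.87\n2,0.12\n")

header, n_rows = summarize_csv("example.csv")
```

From here the rows can be fed into whatever downstream analysis the predictions are needed for.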

Regarding "php - Python: extract .csv results after submitting data to a form with mechanize", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41452036/
