
python - Downloading files from an intranet using Python


I want to download a series of PDF files from my intranet. I can view the files in my web browser without any problem, but when I try to retrieve them automatically through Python I run into trouble. Going through the proxy set up at my office, I can easily download files from the internet using this answer:

import urllib2

url = 'http://www.sample.com/fileiwanttodownload.pdf'

user = 'username'
pswd = 'password'
proxy_ip = '12.345.56.78:80'
proxy_url = 'http://' + user + ':' + pswd + '@' + proxy_ip

# route all http traffic through the authenticated proxy
proxy_support = urllib2.ProxyHandler({"http": proxy_url})
opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler)
urllib2.install_opener(opener)

file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
f.write(u.read())  # write the response body to disk
f.close()
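If the PDFs are large, copying the response in chunks avoids holding the whole file in memory. A minimal sketch of the same download using shutil.copyfileobj, reusing the url and the opener installed above:

import shutil
import urllib2

u = urllib2.urlopen(url)
f = open(url.split('/')[-1], 'wb')
shutil.copyfileobj(u, f, 16 * 1024)  # stream the response to disk in 16 KB chunks
f.close()
u.close()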

But for whatever reason it doesn't work if the url points to something on my intranet. The following error is returned:

Traceback (most recent call last):
  File "<ipython-input-13-a055d9eaf05e>", line 1, in <module>
    runfile('C:/softwaredev/python/pdfwrite.py', wdir='C:/softwaredev/python')
  File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 585, in runfile
    execfile(filename, namespace)
  File "C:/softwaredev/python/pdfwrite.py", line 26, in <module>
    u = urllib2.urlopen(url)
  File "C:\Anaconda\lib\urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Anaconda\lib\urllib2.py", line 410, in open
    response = meth(req, response)
  File "C:\Anaconda\lib\urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Anaconda\lib\urllib2.py", line 442, in error
    result = self._call_chain(*args)
  File "C:\Anaconda\lib\urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "C:\Anaconda\lib\urllib2.py", line 629, in http_error_302
    return self.parent.open(new, timeout=req.timeout)
  File "C:\Anaconda\lib\urllib2.py", line 410, in open
    response = meth(req, response)
  File "C:\Anaconda\lib\urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Anaconda\lib\urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "C:\Anaconda\lib\urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "C:\Anaconda\lib\urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: Service Unavailable

Using requests in the code below, I can successfully download files from the internet, but when I try to download a pdf file from my office intranet I just get a connection error sent back to me as html. Running the following code:

import requests

url = 'www.intranet.sample.com/?layout=attachment&cfapp=26&attachmentid=57142'

proxies = {
    "http": "http://12.345.67.89:80",
    "https": "http://12.345.67.89:80"
}

local_filename = 'test.pdf'
r = requests.get(url, proxies=proxies, stream=True)
with open(local_filename, 'wb') as f:
    for chunk in r.iter_content(chunk_size=1024):
        print chunk
        if chunk:
            f.write(chunk)
            f.flush()

The html returned:

Network Error (tcp_error) 

A communication error occurred: "No route to host"
The Web Server may be down, too busy, or experiencing other problems preventing it from responding to requests. You may wish to try again at a later time.

For assistance, contact your network support team.

Is there some network security setting that blocks automated requests from outside the web-browser environment?

Best Answer

Installing an opener into urllib2 does not affect requests. You need to use requests' own support for proxies. Passing them to get in the proxies argument should be enough, or you can set the HTTP_PROXY and HTTPS_PROXY environment variables. See http://docs.python-requests.org/en/latest/user/advanced/#proxies

import requests

proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
}

requests.get("http://example.org", proxies=proxies)
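As the answer notes, requests also picks up the standard proxy environment variables when no proxies argument is given, and a NO_PROXY entry can exempt hosts the proxy cannot route to, which may matter here since the "No route to host" error comes from the proxy itself. A minimal sketch, assuming the intranet host www.intranet.sample.com should be fetched directly:

import os
import requests

# requests reads these automatically when proxies= is not passed
os.environ['HTTP_PROXY'] = 'http://10.10.1.10:3128'
os.environ['HTTPS_PROXY'] = 'http://10.10.1.10:1080'
# hosts listed here bypass the proxy entirely
os.environ['NO_PROXY'] = 'www.intranet.sample.com'

r = requests.get('http://www.intranet.sample.com/?layout=attachment&cfapp=26&attachmentid=57142')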

About python - Downloading files from an intranet using Python, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/24520133/
