
Downloading Multiple Files in a Loop in Python

Reposted · Author: 塔克拉玛干 · Updated: 2023-11-02 23:59:15

There is a problem with my code.

#!/usr/bin/env python3.1

import urllib.request;

# Disguise as a Mozilla browser on a Windows OS
userAgent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)';

URL = "www.example.com/img";
req = urllib.request.Request(URL, headers={'User-Agent' : userAgent});

# Counter for the filename.
i = 0;

while True:
    fname = str(i).zfill(3) + '.png';
    req.full_url = URL + fname;

    f = open(fname, 'wb');

    try:
        response = urllib.request.urlopen(req);
    except:
        break;
    else:
        f.write(response.read());
        i += 1;
        response.close();
    finally:
        f.close();

The problem seems to arise when I create the urllib.request.Request object (called req). I create it with a URL that doesn't exist, and then later change the URL to what it should be. I do this so that I can reuse the same urllib.request.Request object rather than creating a new one on every iteration. There may be a mechanism in Python for doing this, but I'm not sure what it is.
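For what it's worth, recent Python versions do have such a mechanism: since Python 3.4, Request.full_url is a property whose setter re-parses the URL, so a single Request object can be retargeted. A minimal sketch under that assumption (on 3.1, the host and path were parsed only in __init__, so reassigning full_url could leave stale state, which is one reason the fresh-Request approach in Edit 2 below is simpler):

import urllib.request

# On Python 3.4+, assigning to full_url re-parses the URL,
# so the same Request object can be pointed at a new target.
req = urllib.request.Request("http://www.example.com/img000.png")
req.full_url = "http://www.example.com/img001.png"  # re-parsed on assignment
print(req.host, req.selector)  # reflects the new URL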

Edit: The error message is:

>>> response = urllib.request.urlopen(req);
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.1/urllib/request.py", line 121, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python3.1/urllib/request.py", line 356, in open
    response = meth(req, response)
  File "/usr/lib/python3.1/urllib/request.py", line 468, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.1/urllib/request.py", line 394, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.1/urllib/request.py", line 328, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.1/urllib/request.py", line 476, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
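As an aside, catching urllib.error.HTTPError explicitly rather than using a bare except makes it possible to tell a 403 (server rejected the request) from the 404 that marks the end of the file sequence. A minimal sketch:

import urllib.error
import urllib.request

req = urllib.request.Request("http://www.example.com/img000.png")
try:
    response = urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    # e.code holds the HTTP status: a 404 would simply mean there are
    # no more files, while a 403 means this request itself was refused.
    print('Request failed:', e.code, e.reason)
else:
    data = response.read()
    response.close()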

Edit 2: My solution is below. I probably should have done it this way from the start, since I knew it would work:

import urllib.request;

# Disguise as a Mozilla browser on a Windows OS
userAgent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)';

# Counter for the filename.
i = 0;

while True:
    fname = str(i).zfill(3) + '.png';
    URL = "www.example.com/img" + fname;

    f = open(fname, 'wb');

    try:
        req = urllib.request.Request(URL, headers={'User-Agent' : userAgent});
        response = urllib.request.urlopen(req);
    except:
        break;
    else:
        f.write(response.read());
        i += 1;
        response.close();
    finally:
        f.close();

Best Answer

urllib2 is fine for small scripts that only need to make one or two network interactions, but if you are doing a lot more work, you will probably find that urllib3, or requests (which, not coincidentally, is built on the former), better suits your needs. Your particular example might look like:

from itertools import count
import requests

HEADERS = {'user-agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
URL = "http://www.example.com/img%03d.png"

# With a session, we get keep-alive
session = requests.Session()

for n in count():
    full_url = URL % n
    filename = full_url.rsplit('/', 1)[1]

    response = session.get(full_url, headers=HEADERS)
    if not response.ok:
        break

    with open(filename, 'wb') as outfile:
        outfile.write(response.content)
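For comparison, here is a minimal sketch of the same loop written directly against urllib3's PoolManager (names and URL carried over from the example above); pooled connections give a keep-alive benefit similar to a requests session:

from itertools import count
import urllib3

HEADERS = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
URL = "http://www.example.com/img%03d.png"

# PoolManager reuses connections across requests, much like a session
http = urllib3.PoolManager()

for n in count():
    full_url = URL % n
    filename = full_url.rsplit('/', 1)[1]

    response = http.request('GET', full_url, headers=HEADERS)
    if response.status != 200:
        break  # no more files (or the server refused the request)

    with open(filename, 'wb') as outfile:
        outfile.write(response.data)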

Edit: If you can use regular HTTP authentication (which the 403 Forbidden response strongly suggests), then you can pass it to requests.get via the auth parameter, as in:

response = session.get(full_url, headers=HEADERS, auth=('username', 'password'))
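Where the same credentials apply to every request, they can also be set once on the session object (continuing the example above; requests then sends them with each call):

session.auth = ('username', 'password')
response = session.get(full_url, headers=HEADERS)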

Regarding downloading multiple files in a loop in Python, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/9900398/
