
Python requests: resuming a file download from a checkpoint


I want to download a file with the Python requests library. The problem is that whenever I lose the network connection, the file has to be downloaded again from the beginning. The question is: how can the program know where the download last stopped and continue from that point?

My code is pasted below:

import requests

res = requests.get(link)
playfile = open(file_name, 'wb')

for chunk in res.iter_content(100000):
    playfile.write(chunk)

playfile.close()
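
For reference, the snippet above only iterates over a response that has already been downloaded in full, so the whole body sits in memory first. A streamed request avoids that; here is a minimal sketch, assuming link and file_name are set as in the question:

import requests

# stream=True defers the body download, so iter_content() reads it in chunks
res = requests.get(link, stream=True)
with open(file_name, 'wb') as playfile:
    for chunk in res.iter_content(chunk_size=100000):
        playfile.write(chunk)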

Best answer

You can resume a download from a checkpoint with the Range header. Your question is actually similar to How to `pause`, and `resume` download work?.

Here is an example that shows how it works.

import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    with requests.Session() as s:
        # request only the first 1000 bytes and write them to a new file
        r = s.get(url, headers={"Range": "bytes=0-999"})
        with open(local_filename, 'wb') as fd:
            fd.write(r.content)

        # request everything from byte 1000 onwards and append it
        r2 = s.get(url, headers={"Range": "bytes=1000-"})
        with open(local_filename, 'ab') as fd:
            fd.write(r2.content)
    return

url = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/63/BBC_Radio_logo.svg/210px-BBC_Radio_logo.svg.png"
DownloadFile(url)
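
Whether this works depends on the server honouring Range requests. As a quick check (a sketch, not part of the original answer): a server that supports partial content answers a ranged request with status 206 and usually advertises Accept-Ranges: bytes.

import requests

url = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/63/BBC_Radio_logo.svg/210px-BBC_Radio_logo.svg.png"
r = requests.get(url, headers={"Range": "bytes=0-999"})
print(r.status_code)                   # 206 if the range was honoured, 200 if it was ignored
print(r.headers.get("Accept-Ranges"))  # typically "bytes" when ranges are supported
print(len(r.content))                  # 1000 when only the requested slice came back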

Now we can build a function that resumes the download from a checkpoint.

import requests
import os

def Continue_(url):
    local_filename = url.split('/')[-1]
    with requests.Session() as s:
        # the checkpoint is simply how many bytes are already on disk
        if os.path.exists(local_filename):
            position = os.stat(local_filename).st_size
        else:
            position = 0
        # ask the server only for the bytes that are still missing
        r2 = s.get(url, headers={"Range": "bytes={}-".format(position)})
        with open(local_filename, 'ab+') as fd:
            for c in r2.iter_content():
                fd.write(c)

url = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/63/BBC_Radio_logo.svg/210px-BBC_Radio_logo.svg.png"

def DownloadFile(url):
    # download only the first 1000 bytes, to simulate an interrupted download
    local_filename = url.split('/')[-1]
    with requests.Session() as s:
        r = s.get(url, headers={"Range": "bytes=0-999"})
        with open(local_filename, 'wb') as fd:
            fd.write(r.content)

DownloadFile(url)
Continue_(url)
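
To tie this back to the original problem of losing the connection mid-download, Continue_ can simply be retried until the file reaches the size the server reports. The helper below, download_with_retries, is a hypothetical sketch built on the functions above, assuming the server sends a Content-Length header:

import os
import time
import requests

def download_with_retries(url, attempts=5):
    local_filename = url.split('/')[-1]
    # size the finished file should have, according to the server
    expected = int(requests.head(url).headers.get("Content-Length", 0))
    for _ in range(attempts):
        done = (expected
                and os.path.exists(local_filename)
                and os.stat(local_filename).st_size >= expected)
        if done:
            return True
        try:
            Continue_(url)   # resumes from the current file size
        except requests.exceptions.ConnectionError:
            time.sleep(2)    # brief pause, then resume from the new checkpoint
    return False

download_with_retries(url)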

Regarding resuming a file download from a checkpoint with Python requests, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53578503/
