
python - How to use threading with requests?


Hello, I'm using the requests module and I'd like to improve the speed, because I have a lot of URLs, so I thought I could use threads to go faster. Here is my code:

import requests

urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]
for url in urls:
    response = requests.get(url)
    value = response.json()

But I don't know how to use requests with threads...

Could you help me?

Thanks!

Best Answer

You can take the thread-pool example straight from the concurrent.futures docs and use it with requests as well; you don't need to use the urllib.request approach.

It would look something like this:

import requests
from concurrent import futures

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

with futures.ThreadPoolExecutor(max_workers=5) as executor:  # increase max_workers to create more threads
    res = executor.map(requests.get, URLS)
    responses = list(res)  # map() returns a generator, so turn it into a list
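One caveat: executor.map re-raises the first exception when you iterate over the results, so a single failing URL (like http://some-made-up-domain.com/ above) aborts the whole list. If you want per-URL error handling instead, a rough sketch using executor.submit and futures.as_completed (also from concurrent.futures) could look like this:

import requests
from concurrent import futures

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://some-made-up-domain.com/']

with futures.ThreadPoolExecutor(max_workers=5) as executor:
    # one Future per URL; as_completed() yields each future as soon as it finishes
    future_to_url = {executor.submit(requests.get, url): url for url in URLS}
    for future in futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            response = future.result()  # re-raises any exception from requests.get
            print(url, response.status_code)
        except requests.RequestException as exc:
            print(url, 'failed:', exc)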

However, what I like to do is create a function that returns the JSON directly from the response (or the text, if you want to scrape), and use that function in the thread pool:

import requests
from concurrent import futures

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def getData(url):
    res = requests.get(url)
    try:
        return res.json()
    except ValueError:  # the body is not valid JSON, fall back to the raw text
        return res.text

with futures.ThreadPoolExecutor(max_workers=5) as executor:
    res = executor.map(getData, URLS)
    responses = list(res)  # your list is already pre-formatted
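Since executor.map preserves the order of URLS, you can pair each result back with the URL it came from, for example:

# map() keeps input order, so zip() matches each URL with its parsed result
for url, data in zip(URLS, responses):
    print(url, type(data))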

Regarding python - How to use threading with requests?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57284126/
