
python - How do I get a secondary list item from ThreadPoolExecutor when sending requests?


The Python documentation for ThreadPoolExecutor includes this example for sending requests:

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))

If the URL list is adjusted like this:

URLS = [['http://www.foxnews.com/', 'American'],
        ['http://www.cnn.com/', 'American'],
        ['http://europe.wsj.com/', 'European'],
        ['http://www.bbc.co.uk/', 'European'],
        ['http://some-made-up-domain.com/', 'Unknown']]

You can easily extract the URL by indexing into each list:

future_to_url = {executor.submit(load_url, url[0], 60): url[0] for url in URLS}

What I'm struggling with is how to extract the region from that list (index 1) so it can be included in the as_completed result, making the printed output look like this:

print('%r %r page is %d bytes' % (region, url, len(data)))

Best Answer

You can convert the URLS list into a dictionary (url_region_mapper) that maps each URL to its region, so that the region can be looked up for any given URL.

import concurrent.futures
import urllib.request

URLS = [['http://www.foxnews.com/', 'American'],
        ['http://www.cnn.com/', 'American'],
        ['http://europe.wsj.com/', 'European'],
        ['http://www.bbc.co.uk/', 'European'],
        ['http://some-made-up-domain.com/', 'Unknown']]

# Map each URL to its region so the region can be looked up later
url_region_mapper = dict(URLS)

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url[0], 60): url[0] for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r %r page is %d bytes' % (url_region_mapper[url], url, len(data)))
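A quick side note on why dict(URLS) works: dict() accepts any iterable of two-element pairs, so the list of [url, region] lists converts directly into a url-to-region mapping. A minimal demonstration (using two of the URLs from above):

URLS = [['http://www.foxnews.com/', 'American'],
        ['http://www.cnn.com/', 'American']]

# dict() treats each two-element list as a key/value pair
url_region_mapper = dict(URLS)
print(url_region_mapper)
# {'http://www.foxnews.com/': 'American', 'http://www.cnn.com/': 'American'}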

If there are duplicate URLs that map to different regions, you can store the URL and the region together as a list in the future_to_url dictionary instead of just the URL string:

future_to_url = {executor.submit(load_url, url[0], 60): [url[0], url[1]] for url in URLS}
import concurrent.futures
import urllib.request

URLS = [['http://www.foxnews.com/', 'American'],
        ['http://www.cnn.com/', 'American'],
        ['http://europe.wsj.com/', 'European'],
        ['http://www.bbc.co.uk/', 'European'],
        ['http://some-made-up-domain.com/', 'Unknown']]

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and map each future to its [url, region] pair
    future_to_url = {executor.submit(load_url, url[0], 60): [url[0], url[1]] for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future][0]
        region = future_to_url[future][1]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r %r page is %d bytes' % (region, url, len(data)))
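As an aside, a slightly more idiomatic variant of the same approach is to store each pair as a tuple and unpack both values in one step inside the loop. This is only a sketch, reusing the URLS list and load_url function from the snippet above; it is not part of the accepted answer:

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Map each future to a (url, region) tuple so both can be unpacked at once
    future_to_url = {executor.submit(load_url, url, 60): (url, region)
                     for url, region in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url, region = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r %r page is %d bytes' % (region, url, len(data)))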

Regarding "python - How do I get a secondary list item from ThreadPoolExecutor when sending requests?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/59821508/
