
python - How many network ports does Linux allow Python to use?


So I've been trying to multi-thread some internet connections in Python. I've been using the multiprocessing module so I can get around the "Global Interpreter Lock". But it seems that the system only gives Python one open connection port, or at least it only allows one connection to happen at once. Here's an example of what I'm talking about:

*Note: this is running on a Linux server

from multiprocessing import Process, Queue
import urllib
import random

# Generate 10,000 random urls to test and put them in the queue
queue = Queue()
for each in range(10000):
    rand_num = random.randint(1000,10000)
    url = ('http://www.' + str(rand_num) + '.com')
    queue.put(url)

# Main function for checking to see if a generated url is active
def check(q):
    while True:
        try:
            url = q.get(False)
            try:
                request = urllib.urlopen(url)
                del request
                print url + ' is an active url!'
            except:
                print url + ' is not an active url!'
        except:
            if q.empty():
                break

# Then start all the threads (50)
for thread in range(50):
    task = Process(target=check, args=(queue,))
    task.start()

So if you run this, you'll notice that it starts 50 instances of the function, but it only runs one at a time. You might think that the "Global Interpreter Lock" is doing this, but it isn't. Try changing the function to a mathematical function instead of a network request and you will see that all fifty threads run simultaneously (see the sketch below).
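For instance, a minimal sketch of that CPU-bound experiment might look like this (Python 2, to match the code above; the busy_math worker and the iteration count are made up for illustration):

from multiprocessing import Process

# Hypothetical CPU-bound worker: pure math, no network I/O
def busy_math(n):
    total = 0
    for i in xrange(n):
        total += i * i
    print 'worker finished, total = %d' % total

# Start 50 processes; htop should show them all running at once
for each in range(50):
    task = Process(target=busy_math, args=(10000000,))
    task.start()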

So do I have to use sockets? Or is there something I can do to give Python access to more ports? Or is there something I'm not seeing? Let me know what you think! Thanks!

*EDIT

So I wrote this script to test the requests library better. It seems I hadn't tested it very well before. (I had mainly been using urllib and urllib2.)

from multiprocessing import Process, Queue
from threading import Thread
from Queue import Queue as Q
import requests
import time

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Queue()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if a generated url is active
def check(q, t_q):  # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the threads (20)
thread_list = []
for thread in range(20):
    task = Process(target=check, args=(queue, time_queue))
    task.start()
    thread_list.append(task)

# Join all the threads so the main process doesn't quit
for each in thread_list:
    each.join()
main_time_end = time.time()

# Put the timerQueue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line = "Multiprocessing: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Q()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if a generated url is active
def check(q, t_q):  # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the threads (20)
thread_list = []
for thread in range(20):
    task = Thread(target=check, args=(queue, time_queue))
    task.start()
    thread_list.append(task)

# Join all the threads so the main process doesn't quit
for each in thread_list:
    each.join()
main_time_end = time.time()

# Put the timerQueue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line = "Standard Threading: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# Do the same thing all over again but this time do each url at a time
# A main timestamp
main_time = time.time()

# Generate 100 urls and test them
timer_list = []
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    t = time.time()
    try:
        request = requests.head(url, timeout=5)
        timer_list.append(time.time() - t)
    except:
        timer_list.append(time.time() - t)
main_time_end = time.time()

# Results of the time
average_response = sum(timer_list) / float(len(timer_list))
total_time = main_time_end - main_time
line = "Not using threads: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

As you can see, it multithreads very well. Actually, most of my tests show that the threading module is faster than the multiprocessing module. (I don't understand why!) Here are some of my results:

Multiprocessing: Average response time: 2.40511314869 sec. -- Total time: 25.6876308918 sec.
Standard Threading: Average response time: 2.2179402256 sec. -- Total time: 24.2941861153 sec.
Not using threads: Average response time: 2.1740363431 sec. -- Total time: 217.404567957 sec.

This was done on my home network; the response time on my server is much faster. I think my question has been indirectly answered, since I was having my problems on a much more complex script. All the suggestions helped me optimize it very well. Thanks, everyone!

Best Answer

it starts 50 instances on the function but only runs one at a time

You're misreading the output of htop. Only a few (if any) of the copies of python will be runnable at any particular instant. Most of them will be blocked waiting for network I/O.

The processes are, in fact, running in parallel.

Try changing the function to a mathematical function instead of a network request and you will see that all fifty threads run simultaneously.

Changing the task to a mathematical function merely illustrates the difference between CPU-bound (e.g. math) and IO-bound (e.g. urlopen) processes. The former is always runnable; the latter is rarely runnable.

it only prints one at a time. If it was actually running multiple processes it would print many out at once.

It prints one at a time because you are writing lines to a terminal. Since the lines are indistinguishable, you can't tell whether they were all written by one thread or each written by a separate thread in turn.
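One way to make the interleaving visible (a small sketch, Python 2 to match the question's code; the check_tagged helper is hypothetical) is to stamp each printed line with the name of the process that wrote it:

from multiprocessing import Process, current_process
import urllib

# Hypothetical worker that tags its output with the process name
def check_tagged(url):
    name = current_process().name
    try:
        urllib.urlopen(url)
        print '[%s] %s is an active url!' % (name, url)
    except:
        print '[%s] %s is not an active url!' % (name, url)

for each in range(10):
    Process(target=check_tagged, args=('http://www.' + str(each) + '.com',)).start()

With the tags in place, lines from different process names arrive interleaved, which shows that the workers really are running concurrently.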

Regarding "python - How many network ports does Linux allow Python to use?", the original question can be found on Stack Overflow: https://stackoverflow.com/questions/30130042/
