
python - Zombie processes, here we go again


I'm having a lot of trouble with multiprocessing/threading/subprocessing. What I'm basically trying to do is execute every binary available on my computer, and I wrote a Python script to do it. But I keep getting zombie ("defunct") processes, which eventually ends in a deadlock once all 4 of my worker processes are in that state. I've tried lots of different things, but nothing seems to work :(

The architecture is as follows:

|   \_ python -m dataset --generate
|       \_ worker1
|       |   \_ [thread1] firejail bin1
|       \_ worker2
|       |   \_ [thread1] firejail bin1
|       |   \_ [thread2] firejail bin2
|       |   \_ [thread3] firejail bin3
|       \_ worker3
|       |   \_ [thread1] [firejail] <defunct>
|       \_ worker4
|       |   \_ [thread1] [firejail] <defunct>

I create the 4 workers like this:

# spawn mode prevents deadlocks https://codewithoutrules.com/2018/09/04/python-multiprocessing/
with get_context("spawn").Pool() as pool:

    results = []

    for binary in binaries:
        result = pool.apply_async(legit.analyse, args=(binary,),
                                  callback=_binary_analysis_finished_callback,
                                  error_callback=error_callback)
        results.append(result)

(Note that I use a "spawn" pool, though now I'm wondering whether that is of any use here...)
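For context, here is a minimal, self-contained sketch of what a "spawn" pool buys you (analyse is a stand-in for legit.analyse, not the original code):

from multiprocessing import get_context

def analyse(binary):  # stand-in for legit.analyse
    return binary.upper()

if __name__ == "__main__":  # required: "spawn" re-imports this module in each worker
    # Fresh interpreters do not inherit the parent's threads or held locks,
    # which is the fork()-related deadlock the linked article describes.
    with get_context("spawn").Pool(4) as pool:
        results = [pool.apply_async(analyse, args=(b,)) for b in ["bin1", "bin2"]]
        print([r.get(timeout=30) for r in results])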

Each worker then creates several threads, like this:

threads = []
executions = []

def thread_wrapper(*args):
    flows, output, returncode = _exec_using_firejail(*args)
    executions.append(Execution(*args, flows, is_malware=False))

for command_line in potentially_working_command_lines:
    thread = Thread(target=thread_wrapper, args=(command_line,))
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()

And each thread starts a new process inside a firejail sandbox:

process = subprocess.Popen(FIREJAIL_COMMAND +
                           ["strace", "-o", output_filename, "-ff", "-xx", "-qq", "-s", "1000"] +
                           command_line,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=os.setsid)

try:
    out, errs = process.communicate(timeout=5, input=b"Y\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\n")
    # print("stdout:", out)
    # print("stderr:", errs)

except subprocess.TimeoutExpired:
    # print(command_line, "timed out")
    os.killpg(os.getpgid(process.pid), signal.SIGKILL)
    out, errs = process.communicate()

I use os.killpg() instead of process.kill() because, for some reason, the children of my Popen process were not getting killed otherwise... This works thanks to preexec_fn=os.setsid, which puts all descendants into the same process group. But even with this method, some processes such as zsh still turn into zombies, because it looks like zsh changes its own process group id, so my os.killpg doesn't work on it as expected...
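To illustrate the mechanism (and its limitation), here is a minimal sketch, assuming a POSIX system; the sh command line is purely illustrative:

import os
import signal
import subprocess

# start_new_session=True is the modern equivalent of preexec_fn=os.setsid:
# the child becomes leader of a new session/process group, so killpg() on
# its pgid reaches every descendant that stayed in that group.
proc = subprocess.Popen(
    ["sh", "-c", "sleep 60 & sleep 60"],  # a shell plus a background child
    start_new_session=True,
)

os.killpg(os.getpgid(proc.pid), signal.SIGKILL)  # kills the shell AND its children
proc.wait()  # reap the direct child so it does not linger as a zombie

# Caveat described above: if a descendant calls setsid()/setpgid() itself
# (as zsh appears to do), it leaves this group and killpg() no longer reaches it.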

I'm looking for a way to be 100% sure that all the processes die.

Best Answer

If you want to use the subprocess module for this, you should use the .kill method of the process object directly rather than going through the os module. communicate is a blocking call, so Python will wait for the process to respond; the timeout argument helps, but it will be slow across many processes. Note also that the final wait() below is what actually reaps the child's exit status, so no <defunct> entry is left behind.

import os
import subprocess

cmd_list = (
    FIREJAIL_COMMAND
    + ["strace", "-o", output_filename, "-ff", "-xx", "-qq", "-s", "1000"]
    + command_line
)
proc = subprocess.Popen(
    cmd_list,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    preexec_fn=os.setsid
)

try:
    out, errs = proc.communicate(timeout=5, input=b"Y\n" * 16)
except subprocess.TimeoutExpired:
    proc.kill()
    out, errs = None, None

ret_code = proc.wait()  # reaps the child so it cannot stay <defunct>

If you want to run this over a set of processes in a non-blocking loop, then you need poll. Here is an example. It assumes you have a list of output filenames to feed into process creation, plus the corresponding command_lines.

import os
import subprocess
import time

def create_process(output_filename, command_line):
    cmd_list = (
        FIREJAIL_COMMAND
        + ["strace", "-o", output_filename, "-ff", "-xx", "-qq", "-s", "1000"]
        + command_line
    )
    proc = subprocess.Popen(
        cmd_list,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        preexec_fn=os.setsid
    )
    return proc

processes = [create_process(f, c) for f, c in zip(filenames, command_lines)]

TIMEOUT = 5
WAIT = 0.25  # how long to wait between checking the processes
finished = []
for _ in range(round(TIMEOUT / WAIT)):
    if not processes:
        break
    finished_new = []
    for proc in processes:
        # poll() returns None while the process is running, its exit code otherwise
        if proc.poll() is not None:
            finished_new.append(proc)
    # cleanup: stop polling processes that have exited
    for proc in finished_new:
        processes.remove(proc)
    finished.extend(finished_new)
    time.sleep(WAIT)
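One possible follow-up, not in the original answer: anything still left in processes after the loop has exceeded TIMEOUT, and since each child was started with preexec_fn=os.setsid, its whole process group can be killed and the child then reaped:

import signal

for proc in processes:
    try:
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
    except ProcessLookupError:
        pass  # the process exited between poll() and the kill
    proc.wait()  # reaping the exit status is what removes the zombie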

Regarding python - Zombie processes, here we go again, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/59769560/
