
python-3.x - Read-only shared memory across multiple programs in Python


I want to load data into memory once and let other processes access it (read-only) over time. These processes are essentially different Python programs that are launched at different times (after the data has been loaded, of course).
To implement this, I used shared memory. Consider the following code snippets:
server.py

import numpy as np
from multiprocessing import shared_memory


class DataUploader:
    def __init__(self, shared_memory_name):
        # let's share the following two numpy arrays
        self._uint_np = np.random.randint(0, 255, size=(64, 4, 28, 28)).astype(np.uint8)
        self._float_np = np.random.rand(64, 8).astype(np.float32)

        name_1 = f"{shared_memory_name}_uint_np"
        name_2 = f"{shared_memory_name}_float_np"

        self._shm_1 = shared_memory.SharedMemory(name=name_1, create=True, size=self._uint_np.nbytes)
        self._shm_2 = shared_memory.SharedMemory(name=name_2, create=True, size=self._float_np.nbytes)

        # now create a numpy array backed by shared memory
        self._shared_1 = np.ndarray(self._uint_np.shape, dtype=self._uint_np.dtype, buffer=self._shm_1.buf)
        self._shared_2 = np.ndarray(self._float_np.shape, dtype=self._float_np.dtype, buffer=self._shm_2.buf)

        # copy the original data into shared memory
        self._shared_1[:] = self._uint_np[:]
        self._shared_2[:] = self._float_np[:]

    def __del__(self):
        if self._shm_1 is not None and self._shm_2 is not None:
            self._shm_1.close()
            self._shm_1.unlink()
            self._shm_2.close()
            self._shm_2.unlink()
            print("Shared memory destroyed")


if __name__ == "__main__":
    data_uploader = DataUploader(shared_memory_name="test")
    # keep running the program forever
    input(f'Press "enter" key to exit: ')
client.py
import numpy as np
from multiprocessing import shared_memory


class DataProvider:
    def __init__(self, shared_memory_name):
        self._existing_shm_1 = shared_memory.SharedMemory(name=f"{shared_memory_name}_uint_np")
        self._existing_shm_2 = shared_memory.SharedMemory(name=f"{shared_memory_name}_float_np")

        self._uint_np = np.ndarray((64, 4, 28, 28), dtype=np.uint8, buffer=self._existing_shm_1.buf)
        self._float_np = np.ndarray((64, 8), dtype=np.float32, buffer=self._existing_shm_2.buf)

    def get_item(self, idx):
        uint_np = self._uint_np[idx]
        float_np = self._float_np[idx]
        return uint_np, float_np

    def __del__(self):
        if self._existing_shm_1 is not None and self._existing_shm_2 is not None:
            self._existing_shm_1.close()
            self._existing_shm_2.close()


if __name__ == "__main__":
    data_provider = DataProvider(shared_memory_name="test")
    uint_np, float_np = data_provider.get_item(0)

    # just print some information about the accessed data
    print(uint_np.std(), float_np.std())
After running server.py once, I expect to be able to run client.py multiple times to access the data (read-only). However, after the first run of client.py, the following warning appears:
$ python client.py 
73.84145388455019 0.25972846
/home/ravi/tools/anaconda/envs/py39/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 2 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
From the second run onward, client.py throws the following error:
$ python client.py 
Traceback (most recent call last):
  File "/home/ravi/test/client.py", line 34, in <module>
    data_provider = DataProvider(shared_memory_name="test")
  File "/home/ravi/test/client.py", line 16, in __init__
    self._existing_shm_1 = shared_memory.SharedMemory(name=f"{shared_memory_name}_uint_np")
  File "/home/ravi/tools/anaconda/envs/py39/lib/python3.9/multiprocessing/shared_memory.py", line 103, in __init__
    self._fd = _posixshmem.shm_open(
FileNotFoundError: [Errno 2] No such file or directory: '/test_uint_np'
Apparently, the shared memory is destroyed / becomes inaccessible after the first access.
OS information:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic

$ uname -r
5.4.0-86-generic
Is there a way to keep the shared memory alive and access it (read-only) multiple times from different programs?

Best answer

I found the answer in this thread. The attaching process is assumed to be a child process rather than an independent one, so the shared objects are not unregistered from the resource_tracker after close(), and the tracker cleans them up at shutdown. The solution is to unregister them manually:

from multiprocessing import resource_tracker
...

def __del__(self):
    if self._existing_shm_1 is not None and self._existing_shm_2 is not None:
        self._existing_shm_1.close()
        self._existing_shm_2.close()
        resource_tracker.unregister(self._existing_shm_1._name, "shared_memory")
        resource_tracker.unregister(self._existing_shm_2._name, "shared_memory")
Note: resource_tracker.unregister(self._existing_shm_2.name, "shared_memory") does not work, because the leading "/" is missing from the public .name attribute.
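To illustrate the point above, a minimal sketch (assuming CPython's POSIX shared_memory backend; the segment name "name_demo" is just a placeholder) showing that the internal ._name keeps the leading "/" the resource tracker was registered with, while the public .name property strips it:

import numpy as np
from multiprocessing import shared_memory

# create a small throwaway segment just to inspect its names
shm = shared_memory.SharedMemory(name="name_demo", create=True, size=16)

# on POSIX the internal name keeps the leading slash registered with the
# resource tracker, while the public property returns it without the slash
print(shm._name)  # e.g. '/name_demo'  (what the resource tracker knows)
print(shm.name)   # e.g. 'name_demo'   (leading '/' removed)

shm.close()
shm.unlink()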
Edit: I would say there is no way to keep the shared_memory alive without manually unregistering it from the resource_tracker. Currently, the resource_tracker will always remove leaked shared memory.
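Putting the workaround together, a self-contained sketch of a single client run (the segment names and array shapes assume the server.py above; read_once is a hypothetical helper, not part of the original code). It attaches to the segments, reads from them, detaches, and unregisters them so this process's resource tracker does not unlink them at exit:

import numpy as np
from multiprocessing import shared_memory, resource_tracker


def read_once(shared_memory_name="test"):
    # attach to the segments created by server.py
    shm_1 = shared_memory.SharedMemory(name=f"{shared_memory_name}_uint_np")
    shm_2 = shared_memory.SharedMemory(name=f"{shared_memory_name}_float_np")

    try:
        uint_np = np.ndarray((64, 4, 28, 28), dtype=np.uint8, buffer=shm_1.buf)
        float_np = np.ndarray((64, 8), dtype=np.float32, buffer=shm_2.buf)
        print(uint_np[0].std(), float_np[0].std())
    finally:
        # detach, then tell this process's resource tracker to forget the
        # segments so they are not unlinked when the client exits
        shm_1.close()
        shm_2.close()
        resource_tracker.unregister(shm_1._name, "shared_memory")
        resource_tracker.unregister(shm_2._name, "shared_memory")


if __name__ == "__main__":
    read_once()

With this in place, the client can be run any number of times while server.py keeps the segments alive; only server.py ever calls unlink().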

Regarding python-3.x - read-only shared memory across multiple programs in Python, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/69553930/
