c++ - Using multiple shared memory instances at once

To transfer video streams between a recording program and a display program (which cannot be the same one), I use shared memory.
To synchronize access, I put together a class that wraps a shared_memory_object, a mapped_region and an interprocess_sharable_mutex (all from boost::interprocess).

I wrote two constructors, one for the "host" side and one for the "client" side.
When I use my class to transfer a single video stream, it works perfectly.
But when I try to transfer two video streams, there are problems.

First, here is the constructor code
(the first is the host constructor, the second the client constructor):

    template<typename T>
    SWMRSharedMemArray<T>::SWMRSharedMemArray(std::string Name, size_t length):
        ShMutexSize(sizeof(interprocess_sharable_mutex)),
        isManager(true), _length(length), Name(Name)
    {
        shared_memory_object::remove(Name.c_str());

        shm = new shared_memory_object(create_only, Name.c_str(), read_write);
        shm->truncate(ShMutexSize + sizeof(T)*length);

        region = new mapped_region(*shm, read_write);

        void *addr = region->get_address();
        mtx = new(addr) interprocess_sharable_mutex;
        DataPtr = static_cast<T*>(addr) + ShMutexSize;
    }

    template<typename T>
    SWMRSharedMemArray<T>::SWMRSharedMemArray(std::string Name) :
        ShMutexSize(sizeof(interprocess_sharable_mutex)),
        isManager(false), Name(Name)
    {
        shm = new shared_memory_object(open_only, Name.c_str(), read_write);
        region = new mapped_region(*shm, read_write);

        _length = (region->get_size() - ShMutexSize) / sizeof(T);
        void *addr = region->get_address();
        mtx = static_cast<decltype(mtx)>(addr);
        DataPtr = static_cast<T*>(addr) + ShMutexSize;
    }

On the host side everything still looks fine.
But when constructing on the client side there are problems:
when I compare the shm and region objects of the first and the second instance
(different names of course, but the same length and the same template type),
I see that many members that should differ don't.
For shm, the addresses and the m_filename members differ as expected, but the m_handle members are the same.
For the regions, the two addresses differ, but all members are identical.

I hope somebody knows what is going on here.
Best regards,
宇作

Best Answer

I haven't fully digested your code yet, but I'm struck by the old-fashioned use of manual memory management. Whenever I see sizeof() in C++ I get a little worried :)

With that lack of abstraction, confusion is almost inevitable, and the compiler cannot help you, because you are in "leave me alone - I know what I'm doing" territory.

Specifically, this looks wrong:

    DataPtr = static_cast<T *>(addr) + ShMutexSize;

This may be right when sizeof(T)==sizeof(char) (IOW, T is one byte), but otherwise you get pointer arithmetic, which means you add sizeof(T) a total of ShMutexSize times. That is definitely wrong, because you only reserved room for the size of the mutex + the element data (directly adjacent).

So you end up with unused space and Undefined Behaviour, because the indexing runs beyond the size of the shared memory region.
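For reference, a minimal sketch of a byte-granular offset computation, keeping the asker's member names (it assumes the alignment of T directly after the mutex is adequate; the struct-based layout in the samples below sidesteps that concern entirely):

    // advance by ShMutexSize *bytes*, not by ShMutexSize elements of T
    void *addr = region->get_address();
    mtx     = new (addr) interprocess_sharable_mutex;
    DataPtr = reinterpret_cast<T*>(static_cast<char*>(addr) + ShMutexSize);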

So, let me contrast this with two samples, which
  • reduce the reliance on pointer arithmetic
  • (in the second sample) use a managed shared memory segment
  • do away with all manual memory management

    1. Manual

    A manual approach that doesn't require quite the same amount of pointer-juggling/resource management could look like this:

    Live Compiled On Coliru
    #include <boost/interprocess/shared_memory_object.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <boost/interprocess/sync/interprocess_sharable_mutex.hpp>
    #include <boost/thread/lock_guard.hpp>

    namespace bip = boost::interprocess;

    namespace SWMR {
    static struct server_mode_t {} const/*expr*/ server_mode = server_mode_t();
    static struct client_mode_t {} const/*expr*/ client_mode = client_mode_t();

    typedef bip::interprocess_sharable_mutex mutex;
    typedef boost::lock_guard<mutex> guard;

    template <typename T, size_t N> struct SharedMemArray {
    SharedMemArray(server_mode_t, std::string const& name)
    : isManager(true), _name(name),
    _shm(do_create(_name.c_str())),
    _region(_shm, bip::read_write)
    {
    _data = new (_region.get_address()) data_t;
    }

    SharedMemArray(client_mode_t, std::string const& name)
    : isManager(false), _name(name),
    _shm(do_open(_name.c_str())),
    _region(_shm, bip::read_write),
    _data(static_cast<data_t*>(_region.get_address()))
    {
    assert(sizeof(data_t) == _region.get_size());
    }

    private:
    typedef bip::shared_memory_object shm_t;
    struct data_t {
    mutable mutex mtx;
    T DataPtr[N];
    };

    bool isManager;
    const std::string _name;
    shm_t _shm;
    bip::mapped_region _region;
    data_t *_data;

    // functions to manage the shared memory
    shm_t static do_create(char const* name) {
    shm_t::remove(name);
    shm_t result(bip::create_only, name, bip::read_write);
    result.truncate(sizeof(data_t));
    return boost::move(result);
    }

    shm_t static do_open(char const* name) {
    return shm_t(bip::open_only, name, bip::read_write);
    }

    public:
    mutex& get_mutex() const { return _data->mtx; }

    typedef T *iterator;
    typedef T const *const_iterator;

    iterator data() { return _data->DataPtr; }
    const_iterator data() const { return _data->DataPtr; }

    iterator begin() { return data(); }
    const_iterator begin() const { return data(); }

    iterator end() { return begin() + N; }
    const_iterator end() const { return begin() + N; }

    const_iterator cbegin() const { return begin(); }
    const_iterator cend() const { return end(); }
    };
    }

    #include <vector>

    static const std::string APP_UUID = "61ab4f43-2d68-46e1-9c8d-31d577ce3aa7";

    struct UserData {
    int i;
    float f;
    };

    #include <boost/range/algorithm.hpp>
    #include <boost/foreach.hpp>
    #include <iostream>

    int main() {
    using namespace SWMR;
    SharedMemArray<int, 20> s_ints (server_mode, APP_UUID + "-ints");
    SharedMemArray<float, 72> s_floats (server_mode, APP_UUID + "-floats");
    SharedMemArray<UserData, 10> s_udts (server_mode, APP_UUID + "-udts");

    {
    guard lk(s_ints.get_mutex());
    boost::fill(s_ints, 42);
    }

    {
    guard lk(s_floats.get_mutex());
    boost::fill(s_floats, 31415);
    }

    {
    guard lk(s_udts.get_mutex());
    UserData udt = { 42, 3.14 };
    boost::fill(s_udts, udt);
    }

    SharedMemArray<int, 20> c_ints (client_mode, APP_UUID + "-ints");
    SharedMemArray<float, 72> c_floats (client_mode, APP_UUID + "-floats");
    SharedMemArray<UserData, 10> c_udts (client_mode, APP_UUID + "-udts");

    {
    guard lk(c_ints.get_mutex());
    assert(boost::equal(std::vector<int>(boost::size(c_ints), 42), c_ints));
    }

    {
    guard lk(c_floats.get_mutex());
    assert(boost::equal(std::vector<int>(boost::size(c_floats), 31415), c_floats));
    }

    {
    guard lk(c_udts.get_mutex());
    BOOST_FOREACH(UserData& udt, c_udts)
    std::cout << udt.i << "\t" << udt.f << "\n";
    }
    }

    Notes
  • It reuses the code
  • It doesn't do unnecessary dynamic allocation (which makes it easier to get the Rule of Three right for the class)
  • It uses a data_t struct to get rid of the manual offset calculations (you can just do data->mtx and data->DataPtr)
  • It adds iterator and begin()/end() definitions so you can use the SharedMemArray directly as a range, e.g. with algorithms like boost::equal or BOOST_FOREACH:
    assert(boost::equal(some_vector, c_floats));

    BOOST_FOREACH(UserData& udt, c_udts)
    std::cout << udt.i << "\t" << udt.f << "\n";
  • It now uses a statically known number of elements (N).

  • If you don't want that, I'd definitely go for the approach using a managed segment (under 2.), because it takes care of all the (re)allocation mechanics for you.

    2. Using managed_shared_memory

    What do we use in C++ when we want a dynamically sized array? Right: std::vector.

    Now, std::vector can be taught to allocate from shared memory, but then you need to pass it a Boost Interprocess allocator. That allocator knows how to perform allocations from shared memory using the segment_manager.
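
    To make that concrete before the full sample, here is a minimal, self-contained sketch of a vector allocating from a managed segment (the segment and object names are just placeholders):

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/allocators/allocator.hpp>
    #include <boost/interprocess/containers/vector.hpp>

    namespace bip = boost::interprocess;

    // allocator that draws its memory from a managed_shared_memory segment
    using shm_alloc  = bip::allocator<int, bip::managed_shared_memory::segment_manager>;
    using shm_vector = bip::vector<int, shm_alloc>;

    int main() {
        bip::managed_shared_memory seg(bip::open_or_create, "demo-segment", 64 * 1024);
        // construct (or find) a vector that lives entirely inside the segment
        shm_vector* v = seg.find_or_construct<shm_vector>("demo-ints")(seg.get_segment_manager());
        v->push_back(42); // any reallocation also comes from the segment
    }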

    Here's a relatively straightforward translation that uses managed_shared_memory:

    Live Compiled On Coliru
    #include <boost/container/scoped_allocator.hpp>

    #include <boost/container/vector.hpp>
    #include <boost/container/string.hpp>

    #include <boost/interprocess/allocators/allocator.hpp>
    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/offset_ptr.hpp>
    #include <boost/interprocess/sync/interprocess_sharable_mutex.hpp>
    #include <boost/thread/lock_guard.hpp>

    namespace Shared {
    namespace bip = boost::interprocess;
    namespace bc = boost::container;

    using shm_t = bip::managed_shared_memory;
    using mutex = bip::interprocess_sharable_mutex;
    using guard = boost::lock_guard<mutex>;

    template <typename T> using allocator = bc::scoped_allocator_adaptor<
    bip::allocator<T, shm_t::segment_manager>
    >;
    template <typename T> using vector = bc::vector<T, allocator<T> >;
    template <typename T> using basic_string = bc::basic_string<T, std::char_traits<T>, allocator<T> >;

    using string = basic_string<char>;
    using wstring = basic_string<wchar_t>;
    }

    namespace SWMR {
    namespace bip = boost::interprocess;

    static struct server_mode_t {} const/*expr*/ server_mode = server_mode_t();
    static struct client_mode_t {} const/*expr*/ client_mode = client_mode_t();

    template <typename T> struct SharedMemArray {

    private:
    struct data_t {
    using allocator_type = Shared::allocator<void>;

    data_t(size_t N, allocator_type alloc) : elements(alloc) { elements.resize(N); }
    data_t(allocator_type alloc) : elements(alloc) {}

    mutable Shared::mutex mtx;
    Shared::vector<T> elements;
    };

    bool isManager;
    const std::string _name;
    Shared::shm_t _shm;
    data_t *_data;

    // functions to manage the shared memory
    Shared::shm_t static do_create(char const* name) {
    bip::shared_memory_object::remove(name);
    Shared::shm_t result(bip::create_only, name, 1ul << 20); // ~1 MiB
    return boost::move(result);
    }

    Shared::shm_t static do_open(char const* name) {
    return Shared::shm_t(bip::open_only, name);
    }

    public:
    SharedMemArray(server_mode_t, std::string const& name, size_t N = 0)
    : isManager(true), _name(name), _shm(do_create(_name.c_str()))
    {
    _data = _shm.find_or_construct<data_t>(name.c_str())(N, _shm.get_segment_manager());
    }

    SharedMemArray(client_mode_t, std::string const& name)
    : isManager(false), _name(name), _shm(do_open(_name.c_str()))
    {
    auto found = _shm.find<data_t>(name.c_str());
    assert(found.second);
    _data = found.first;
    }

    Shared::mutex& mutex() const { return _data->mtx; }
    Shared::vector<T> & elements() { return _data->elements; }
    Shared::vector<T> const& elements() const { return _data->elements; }
    };
    }

    #include <vector>

    static const std::string APP_UUID = "93f6b721-1d34-46d9-9877-f967fea61cf2";

    struct UserData {
    using allocator_type = Shared::allocator<void>;

    UserData(allocator_type alloc) : text(alloc) {}
    UserData(UserData const& other, allocator_type alloc) : i(other.i), text(other.text, alloc) {}
    UserData(int i, Shared::string t) : i(i), text(t) {}
    template <typename T> UserData(int i, T&& t, allocator_type alloc) : i(i), text(std::forward<T>(t), alloc) {}

    // data
    int i;
    Shared::string text;
    };

    #include <boost/range/algorithm.hpp>
    #include <boost/foreach.hpp>
    #include <iostream>

    int main() {
    using namespace SWMR;
    SharedMemArray<int> s_ints(server_mode, APP_UUID + "-ints", 20);
    SharedMemArray<UserData> s_udts(server_mode, APP_UUID + "-udts");
    // server code

    {
    Shared::guard lk(s_ints.mutex());
    boost::fill(s_ints.elements(), 99);

    // or manipulate the vector. Any allocations go to the shared memory segment automatically
    s_ints.elements().push_back(42);
    s_ints.elements().assign(20, 42);
    }

    {
    Shared::guard lk(s_udts.mutex());
    s_udts.elements().emplace_back(1, "one");
    }

    // client code
    SharedMemArray<int> c_ints(client_mode, APP_UUID + "-ints");
    SharedMemArray<UserData> c_udts(client_mode, APP_UUID + "-udts");

    {
    Shared::guard lk(c_ints.mutex());
    auto& e = c_ints.elements();
    assert(boost::equal(std::vector<int>(20, 42), e));
    }

    {
    Shared::guard lk(c_udts.mutex());
    BOOST_FOREACH(UserData& udt, c_udts.elements())
    std::cout << udt.i << "\t'" << udt.text << "'\n";
    }
    }

    Notes:
  • Because you are now storing first-class C++ objects, the size is not static. In fact, you can push_back and, if the capacity is exceeded, the container will simply reallocate using the segment's allocator.
  • I chose to use C++11 for the convenience typedefs in namespace Shared. All of this works in C++03 as well, just with more verbosity.
  • I also chose to use scoped allocators. This means that if T is a (user-defined) type that /also/ uses an allocator (like all standard containers, std::deque, std::packaged_task, std::tuple etc.), the allocator's segment reference is implicitly passed down to the elements when they are constructed internally. That is why lines like
    elements.resize(N);


    s_udts.elements().emplace_back(1, "one");

    compile without explicitly passing an allocator to the element constructors.
  • The sample UserData class takes advantage of this to show how you can include a std::string (well, actually a Shared::string) that magically allocates from the same memory segment as the container holding it. (A small standalone sketch of this allocator propagation follows these notes.)
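
    Here is that propagation in isolation, as a small standalone sketch (the names are illustrative; the typedefs mirror namespace Shared above):

    #include <boost/container/scoped_allocator.hpp>
    #include <boost/container/vector.hpp>
    #include <boost/container/string.hpp>
    #include <boost/interprocess/allocators/allocator.hpp>
    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <string>

    namespace bip = boost::interprocess;
    namespace bc  = boost::container;

    template <typename T> using shm_alloc = bc::scoped_allocator_adaptor<
        bip::allocator<T, bip::managed_shared_memory::segment_manager> >;
    using shm_string = bc::basic_string<char, std::char_traits<char>, shm_alloc<char> >;
    using shm_strvec = bc::vector<shm_string, shm_alloc<shm_string> >;

    int main() {
        bip::managed_shared_memory seg(bip::open_or_create, "scoped-alloc-demo", 64 * 1024);
        shm_strvec* v = seg.find_or_construct<shm_strvec>("strings")(seg.get_segment_manager());
        // no allocator is spelled out here: the scoped_allocator_adaptor hands the
        // segment's allocator down to the shm_string element being constructed
        v->emplace_back("hello");
    }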

    3. Bonus

    Note also that this opens up the possibility of storing all the containers in a single shared_memory_object, which could be beneficial, so here is a variation to demonstrate that approach:

    Live Compiled On Coliru
    #include <boost/container/scoped_allocator.hpp>

    #include <boost/container/vector.hpp>
    #include <boost/container/string.hpp>

    #include <boost/interprocess/allocators/allocator.hpp>
    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/offset_ptr.hpp>
    #include <boost/interprocess/sync/interprocess_sharable_mutex.hpp>
    #include <boost/thread/lock_guard.hpp>

    namespace Shared {
    namespace bip = boost::interprocess;
    namespace bc = boost::container;

    using msm_t = bip::managed_shared_memory;
    using mutex = bip::interprocess_sharable_mutex;
    using guard = boost::lock_guard<mutex>;

    template <typename T> using allocator = bc::scoped_allocator_adaptor<
    bip::allocator<T, msm_t::segment_manager>
    >;
    template <typename T> using vector = bc::vector<T, allocator<T> >;
    template <typename T> using basic_string = bc::basic_string<T, std::char_traits<T>, allocator<T> >;

    using string = basic_string<char>;
    using wstring = basic_string<wchar_t>;
    }

    namespace SWMR {
    namespace bip = boost::interprocess;
    namespace bc = boost::container;

    class Segment {
    public:
    // LockableObject, base template
    //
    // LockableObject contains a `Shared::mutex` and an object of type T
    template <typename T, typename Enable = void> struct LockableObject;

    // Partial specialization for the case when the wrapped object cannot
    // use the shared allocator: the constructor is just forwarded
    template <typename T>
    struct LockableObject<T, typename boost::disable_if<bc::uses_allocator<T, Shared::allocator<T> >, void>::type>
    {
    template <typename... CtorArgs>
    LockableObject(CtorArgs&&... args) : object(std::forward<CtorArgs>(args)...) {}
    LockableObject() : object() {}

    mutable Shared::mutex mutex;
    T object;

    private:
    friend class Segment;
    template <typename... CtorArgs>
    static LockableObject& locate_by_name(Shared::msm_t& msm, const char* tag, CtorArgs&&... args) {
    return *msm.find_or_construct<LockableObject<T> >(tag)(std::forward<CtorArgs>(args)...);
    }
    };

    // Partial specialization for the case where the contained object can
    // use the shared allocator;
    //
    // Construction (using locate_by_name) adds the allocator as the last
    // argument.
    template <typename T>
    struct LockableObject<T, typename boost::enable_if<bc::uses_allocator<T, Shared::allocator<T> >, void>::type>
    {
    using allocator_type = Shared::allocator<void>;

    template <typename... CtorArgs>
    LockableObject(CtorArgs&&... args) : object(std::forward<CtorArgs>(args)...) {}
    LockableObject(allocator_type alloc = {}) : object(alloc) {}

    mutable Shared::mutex mutex;
    T object;

    private:
    friend class Segment;
    template <typename... CtorArgs>
    static LockableObject& locate_by_name(Shared::msm_t& msm, const char* tag, CtorArgs&&... args) {
    return *msm.find_or_construct<LockableObject>(tag)(std::forward<CtorArgs>(args)..., Shared::allocator<T>(msm.get_segment_manager()));
    }
    };

    Segment(std::string const& name, size_t capacity = 1024*1024) // default 1 MiB
    : _msm(bip::open_or_create, name.c_str(), capacity)
    {
    }

    template <typename T, typename... CtorArgs>
    LockableObject<T>& getLockable(char const* tag, CtorArgs&&... args) {
    return LockableObject<T>::locate_by_name(_msm, tag, std::forward<CtorArgs>(args)...);
    }

    private:
    Shared::msm_t _msm;
    };
    }

    #include <vector>

    static char const* const APP_UUID = "249f3878-3ddf-4473-84b2-755998952da1";

    struct UserData {
    using allocator_type = Shared::allocator<void>;
    using String = Shared::string;

    UserData(allocator_type alloc) : text(alloc) { }
    UserData(int i, String t) : i(i), text(t) { }
    UserData(UserData const& other, allocator_type alloc) : i(other.i), text(other.text, alloc) { }

    template <typename T>
    UserData(int i, T&& t, allocator_type alloc)
    : i(i), text(std::forward<T>(t), alloc)
    { }

    // data
    int i;
    String text;
    };

    #include <boost/range/algorithm.hpp>
    #include <boost/foreach.hpp>
    #include <iostream>

    int main() {
    using IntVec = Shared::vector<int>;
    using UdtVec = Shared::vector<UserData>;

    boost::interprocess::shared_memory_object::remove(APP_UUID); // for demo

    // server code
    {
    SWMR::Segment server(APP_UUID);

    auto& s_ints = server.getLockable<IntVec>("ints", std::initializer_list<int> {1,2,3,4,5,6,7,42}); // allocator automatically added
    auto& s_udts = server.getLockable<UdtVec>("udts");

    {
    Shared::guard lk(s_ints.mutex);
    boost::fill(s_ints.object, 99);

    // or manipulate the vector. Any allocations go to the shared memory segment automatically
    s_ints.object.push_back(42);
    s_ints.object.assign(20, 42);
    }

    {
    Shared::guard lk(s_udts.mutex);
    s_udts.object.emplace_back(1, "one"); // allocates the string in shared memory, and the UserData element too
    }
    }

    // client code
    {
    SWMR::Segment client(APP_UUID);

    auto& c_ints = client.getLockable<IntVec>("ints", 20, 999); // the ctor arguments are ignored here
    auto& c_udts = client.getLockable<UdtVec>("udts");

    {
    Shared::guard lk(c_ints.mutex);
    IntVec& ivec = c_ints.object;
    assert(boost::equal(std::vector<int>(20, 42), ivec));
    }

    {
    Shared::guard lk(c_udts.mutex);
    BOOST_FOREACH(UserData& udt, c_udts.object)
    std::cout << udt.i << "\t'" << udt.text << "'\n";
    }
    }
    }

    Notes:
  • You can now store anything, not just "dynamic arrays" (vector<T>). You can do things like the following (a short usage sketch also follows these notes):
    auto& c_udts = client.getLockable<double>("a_single_double");
  • When storing containers that are compatible with the shared allocator, LockableObject's construction transparently appends an allocator instance as the last constructor argument for the contained T object.
  • I moved the remove() call out of the Segment class, which removes the need to distinguish between client and server modes. We just use open_or_create and find_or_construct.
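
    For illustration, a short usage sketch of the single-double case, assuming the Segment class and Shared typedefs from the listing under 3. above (the tag and variable names are made up):

    // assumes SWMR::Segment and the Shared typedefs from the Bonus listing
    SWMR::Segment seg(APP_UUID);
    auto& counter = seg.getLockable<double>("a_single_double", 0.0); // 0.0 is used only on first construction
    {
        Shared::guard lk(counter.mutex);
        counter.object += 1.0; // a plain double, stored next to its mutex in the shared segment
    }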
    Regarding "c++ - Using multiple shared memory instances at once", the corresponding question can be found on Stack Overflow: https://stackoverflow.com/questions/28125397/
