
c++ - How does libuv compare to Boost/ASIO?


I would be interested in:

  • scope/features
  • performance
  • maturity

    Best Answer

    Scope

    Boost.Asio is a C++ library that began with a focus on networking, but its asynchronous I/O capabilities have been extended to other resources. Additionally, since Boost.Asio is part of the Boost libraries, its scope is slightly narrowed to avoid duplicating other Boost libraries. For example, Boost.Asio does not provide a thread abstraction, as Boost.Thread already provides one.

    libuv, on the other hand, is a C library designed to be the platform layer for Node.js. It provides an abstraction over IOCP on Windows, kqueue on macOS, and epoll on Linux. Additionally, its scope has grown slightly to include abstractions and functionality such as threads, threadpools, and inter-thread communication.

    At its core, each library provides an event loop and asynchronous I/O capabilities. They overlap on some basic features, such as timers, sockets, and asynchronous operations. libuv has the broader scope and provides additional functionality, such as thread and synchronization abstractions, synchronous and asynchronous file system operations, process management, and more. In contrast, Boost.Asio's original networking focus shows through, as it provides a richer set of network-related capabilities, such as ICMP, SSL, synchronous blocking and non-blocking operations, and higher-level operations for common tasks, including reading from a stream until a newline is received.

    Feature List

    Here is a brief side-by-side comparison of some of the major features. Since developers using Boost.Asio often have other Boost libraries available, I have chosen to consider additional Boost libraries if they are either directly provided or trivial to implement.

                              libuv          Boost
    Event Loop:               yes            Asio
    Threadpool:               yes            Asio + Threads
    Threading:
      Threads:                yes            Threads
      Synchronization:        yes            Threads
    File System Operations:
      Synchronous:            yes            FileSystem
      Asynchronous:           yes            Asio + Filesystem
    Timers:                   yes            Asio
    Scatter/Gather I/O[1]:    no             Asio
    Networking:
      ICMP:                   no             Asio
      DNS Resolution:         async-only     Asio
      SSL:                    no             Asio
      TCP:                    async-only     Asio
      UDP:                    async-only     Asio
    Signal:
      Handling:               yes            Asio
      Sending:                yes            no
    IPC:
      UNIX Domain Sockets:    yes            Asio
      Windows Named Pipe:     yes            Asio
    Process Management:
      Detaching:              yes            Process
      I/O Pipe:               yes            Process
      Spawning:               yes            Process
    System Queries:
      CPU:                    yes            no
      Network Interface:      yes            no
    Serial Ports:             no             yes
    TTY:                      yes            no
    Shared Library Loading:   yes            Extension[2]

    1. Scatter/Gather I/O.

    2. Boost.Extension was never submitted for review to Boost. As noted here, the author considers it to be complete.

    Event Loop

    While both libuv and Boost.Asio provide event loops, there are some subtle differences between the two:

    • While libuv supports multiple event loops, it does not support running the same loop from multiple threads. For this reason, care needs to be taken when using the default loop (uv_default_loop()), rather than creating a new loop (uv_loop_new()), as another component may be running the default loop (see the loop-creation sketch after this list).
    • Boost.Asio does not have the notion of a default loop; every io_service is its own loop that allows multiple threads to run it. To support this, Boost.Asio performs internal locking at the cost of some performance. Boost.Asio's revision history indicates that there have been several performance improvements to minimize the locking.
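
    A minimal sketch of the loop-creation point above, assuming libuv 1.x (where uv_loop_init() replaces the older uv_loop_new()); the component owns its own loop instead of sharing uv_default_loop():

    #include <uv.h>

    int main()
    {
        uv_loop_t loop;                    // a loop private to this component,
        uv_loop_init( &loop );             // independent of uv_default_loop()

        // ... register handles/watchers on &loop here ...

        uv_run( &loop, UV_RUN_DEFAULT );   // run the loop from exactly one thread
        uv_loop_close( &loop );
        return 0;
    }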

    Threadpool

    • libuv provides a threadpool through uv_queue_work. The threadpool size is configurable via the environment variable UV_THREADPOOL_SIZE. The work will be executed outside of the event loop and within the threadpool. Once the work is completed, the completion handler will be queued to run within the event loop.
    • While Boost.Asio does not provide a threadpool, the io_service can easily function as one because io_service allows multiple threads to invoke run. This places the responsibility of thread management and behavior on the user, as can be seen in this example and in the sketch after this list.
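
    Here is a minimal sketch of that pattern, assuming the pre-1.66 io_service spelling (later Boost renames it io_context): several threads invoke run() on the same io_service, and posted tasks are executed by whichever thread is free.

    #include <boost/asio.hpp>
    #include <boost/thread.hpp>
    #include <iostream>

    int main()
    {
        boost::asio::io_service io_service;
        boost::thread_group threads;

        {
            // The work object keeps run() from returning while tasks are still being posted.
            boost::asio::io_service::work work( io_service );

            // Several threads service the same io_service, forming the pool.
            for ( std::size_t i = 0; i < 4; ++i )
                threads.create_thread( [&io_service]{ io_service.run(); } );

            // post() hands the job to whichever pool thread is free.
            io_service.post( []{ std::cout << "task ran in the pool" << std::endl; } );
        } // work destroyed: run() returns once the queued tasks have finished

        threads.join_all();
        return 0;
    }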

    Threading and Synchronization

    • libuv provides an abstraction to threads and synchronization types.
    • Boost.Thread provides threads and synchronization types. Many of these types closely follow the C++11 standard, but also provide some extensions. Because Boost.Asio allows multiple threads to run a single event loop, it provides strands as a means to create sequential invocation of event handlers without explicit locking mechanisms (see the strand sketch after this list).
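
    A minimal strand sketch, again assuming the pre-1.66 io_service spelling: handlers posted through the strand are invoked sequentially even though two threads run the same io_service.

    #include <boost/asio.hpp>
    #include <boost/thread.hpp>
    #include <iostream>

    int main()
    {
        boost::asio::io_service io_service;
        boost::asio::io_service::strand strand( io_service );

        // The strand guarantees these handlers never run concurrently.
        for ( int i = 0; i < 10; ++i )
            strand.post( [i]{ std::cout << "handler " << i << std::endl; } );

        boost::thread_group threads;
        for ( std::size_t i = 0; i < 2; ++i )
            threads.create_thread( [&io_service]{ io_service.run(); } );
        threads.join_all();
        return 0;
    }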

    File System Operations

    • libuv provides an abstraction over many file system operations. There is one function per operation, and each operation can be either synchronous blocking or asynchronous. If a callback is provided, the operation will be executed asynchronously within an internal threadpool. If a callback is not provided, the call will be synchronous blocking. (A sketch of both forms follows this list.)
    • Boost.Filesystem provides synchronous blocking calls for many file system operations. These can be combined with Boost.Asio and a threadpool to create asynchronous file system operations.
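
    A sketch of libuv's one-function-per-operation design, assuming libuv 1.x and an illustrative file name: the same uv_fs_open call is synchronous without a callback and asynchronous with one.

    #include <uv.h>
    #include <fcntl.h>
    #include <stdio.h>

    // Runs on the loop thread once the internal threadpool finishes the open.
    static void on_open( uv_fs_t* req )
    {
        printf( "async open finished: %d\n", (int) req->result );
        uv_fs_req_cleanup( req );
    }

    int main()
    {
        uv_loop_t* loop = uv_default_loop();
        uv_fs_t sync_req, async_req;

        // No callback: the call blocks until the file has been opened.
        uv_fs_open( loop, &sync_req, "example.txt", O_RDONLY, 0, NULL );
        uv_fs_req_cleanup( &sync_req );

        // Callback supplied: the open runs on libuv's internal threadpool and
        // on_open is queued back onto the event loop when it completes.
        uv_fs_open( loop, &async_req, "example.txt", O_RDONLY, 0, on_open );

        return uv_run( loop, UV_RUN_DEFAULT );
    }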

    Networking

    • libuv supports asynchronous operations on UDP and TCP sockets, as well as DNS resolution. Application developers should be aware that the underlying file descriptors are set to non-blocking. Therefore, native synchronous operations should check return values and errno for EAGAIN or EWOULDBLOCK.
    • Boost.Asio is a bit richer in its networking support. In addition to many of the features libuv's networking provides, Boost.Asio supports SSL and ICMP sockets. Furthermore, Boost.Asio provides synchronous blocking and synchronous non-blocking operations in addition to its asynchronous operations. There are numerous free-standing functions that provide common higher-level operations, such as reading a set amount of bytes, or reading until a specified delimiter character is received (see the read_until sketch after this list).
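
    As a sketch of one of those higher-level free functions, here is a synchronous blocking read_until call; the host name and "daytime" service are placeholders for illustration, and the pre-1.66 io_service spelling is assumed.

    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>

    int main()
    {
        boost::asio::io_service io_service;
        boost::asio::ip::tcp::resolver resolver( io_service );
        boost::asio::ip::tcp::resolver::query query( "example.com", "daytime" );

        boost::asio::ip::tcp::socket socket( io_service );
        boost::asio::connect( socket, resolver.resolve( query ) );

        // Blocks until a newline has been received on the socket.
        boost::asio::streambuf buffer;
        boost::asio::read_until( socket, buffer, '\n' );

        std::istream is( &buffer );
        std::string line;
        std::getline( is, line );
        std::cout << line << std::endl;
        return 0;
    }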

    Signal

    • libuv provides abstractions for kill and signal handling with its uv_signal_t type and uv_signal_* operations.
    • Boost.Asio does not provide an abstraction for kill, but its signal_set provides signal handling (a minimal sketch follows this list).
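
    A minimal signal_set sketch (pre-1.66 io_service spelling): interest in SIGINT and SIGTERM is registered once, and the handler runs on the event loop.

    #include <boost/asio.hpp>
    #include <csignal>
    #include <iostream>

    int main()
    {
        boost::asio::io_service io_service;

        boost::asio::signal_set signals( io_service, SIGINT, SIGTERM );
        signals.async_wait(
            []( const boost::system::error_code& error, int signal_number )
            {
                if ( !error )
                    std::cout << "got signal " << signal_number << std::endl;
            } );

        io_service.run();   // returns after the single wait completes
        return 0;
    }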

    IPC

    • libuv abstracts both UNIX domain sockets and Windows named pipes behind a single uv_pipe_t type.
    • Boost.Asio separates the two: local::stream_protocol::socket (or local::datagram_protocol::socket) for UNIX domain sockets and windows::stream_handle for Windows named pipes.

    API Differences

    While the APIs are different based on the language alone, here are a few key differences:

    Operation and Handler Association

    Within Boost.Asio, there is a one-to-one mapping between an operation and a handler. For instance, each async_write operation will invoke the WriteHandler once. This is true for many of libuv's operations and handlers. However, libuv's uv_async_send supports a many-to-one mapping: multiple uv_async_send calls may result in the uv_async_cb being invoked only once.
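
    A small sketch of that many-to-one behavior, assuming the libuv 1.x callback signature: three sends issued before the loop runs may be collapsed into a single callback invocation.

    #include <uv.h>
    #include <stdio.h>

    static void async_cb( uv_async_t* handle )
    {
        printf( "async callback invoked\n" );
        uv_close( (uv_handle_t*) handle, NULL );
    }

    int main()
    {
        uv_loop_t* loop = uv_default_loop();
        uv_async_t async;
        uv_async_init( loop, &async, async_cb );

        // Multiple sends before the loop runs may produce only one callback.
        uv_async_send( &async );
        uv_async_send( &async );
        uv_async_send( &async );

        return uv_run( loop, UV_RUN_DEFAULT );
    }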

    Call Chains vs. Watcher Loops

    When dealing with tasks such as reading from a stream/UDP, handling signals, or waiting on timers, Boost.Asio's asynchronous call chains are a bit more explicit. With libuv, a watcher is created to designate interest in a particular event. A loop is then started for the watcher, with a callback provided. Upon receiving the event of interest, the callback is invoked. On the other hand, Boost.Asio requires an operation to be issued each time the application is interested in handling the event.

    To help illustrate this difference, here is an asynchronous read loop with Boost.Asio, where the async_receive call will be issued multiple times:

    void start()
    {
        socket.async_receive( buffer, handle_read ); --.
    }                                                  |
        .----------------------------------------------'
        |      .---------------------------------------.
        V      V                                       |
    void handle_read( ... )                            |
    {                                                  |
        std::cout << "got data" << std::endl;          |
        socket.async_receive( buffer, handle_read ); --'
    }

    And here is the same example with libuv, where handle_read is invoked each time the watcher observes that the socket has data:
    uv_read_start( socket, alloc_buffer, handle_read ); --.
                                                           |
        .--------------------------------------------------'
        |
        V
    void handle_read( ... )
    {
        fprintf( stdout, "got data\n" );
    }

    Memory Allocation

    Because of the asynchronous call chains in Boost.Asio and the watchers in libuv, memory allocation often occurs at different times. With watchers, libuv defers allocation until after it receives an event that requires memory to handle. The allocation is done through a user callback invoked internally by libuv, and the responsibility for deallocation is deferred to the application. On the other hand, many Boost.Asio operations require that memory be allocated before the asynchronous operation is issued, as is the case with the buffer for async_read. Boost.Asio does provide null_buffers, which can be used to listen for an event, allowing applications to defer memory allocation until memory is needed, although this is deprecated.
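
    A sketch of libuv's deferred allocation, assuming libuv 1.x signatures and reusing the alloc_buffer/handle_read names from the earlier example (here reading from stdin through a pipe handle): memory is requested only once data is ready, and the application frees it.

    #include <uv.h>
    #include <stdio.h>
    #include <stdlib.h>

    // libuv asks for memory only when the stream actually has data to deliver.
    static void alloc_buffer( uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf )
    {
        buf->base = (char*) malloc( suggested_size );
        buf->len  = suggested_size;
    }

    // The application owns the buffer it handed out and must release it.
    static void handle_read( uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf )
    {
        if ( nread > 0 )
            fprintf( stdout, "got %ld bytes\n", (long) nread );
        else if ( nread < 0 )
            uv_read_stop( stream );   // EOF or error
        free( buf->base );
    }

    int main()
    {
        uv_loop_t* loop = uv_default_loop();

        uv_pipe_t stdin_pipe;
        uv_pipe_init( loop, &stdin_pipe, 0 );
        uv_pipe_open( &stdin_pipe, 0 );   // wrap file descriptor 0 (stdin)

        uv_read_start( (uv_stream_t*) &stdin_pipe, alloc_buffer, handle_read );
        return uv_run( loop, UV_RUN_DEFAULT );
    }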

    This difference in memory allocation also shows up in the bind->listen->accept loop. With libuv, uv_listen creates an event loop that will invoke the user callback when a connection is ready to be accepted. This allows the application to defer allocation of the client until a connection is being attempted. On the other hand, Boost.Asio's listen only changes the state of the acceptor; async_accept listens for the connection event and requires the peer to be allocated before it is invoked.
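
    A sketch of the Boost.Asio side of this, with a placeholder port and the pre-1.66 io_service spelling: the peer socket is allocated before async_accept is issued, and a new one is allocated for each subsequent accept.

    #include <boost/asio.hpp>
    #include <memory>

    using boost::asio::ip::tcp;

    void start_accept( tcp::acceptor& acceptor, boost::asio::io_service& io_service )
    {
        // The peer must exist before the asynchronous accept is issued.
        auto peer = std::make_shared<tcp::socket>( io_service );
        acceptor.async_accept( *peer,
            [&acceptor, &io_service, peer]( const boost::system::error_code& error )
            {
                if ( !error )
                {
                    // ... use *peer ...
                }
                start_accept( acceptor, io_service );   // issue the next accept
            } );
    }

    int main()
    {
        boost::asio::io_service io_service;
        tcp::acceptor acceptor( io_service, tcp::endpoint( tcp::v4(), 12345 ) );

        start_accept( acceptor, io_service );
        io_service.run();
        return 0;
    }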

    Performance

    Unfortunately, I do not have any concrete benchmark numbers to compare libuv and Boost.Asio. However, I have observed similar performance using these libraries in real-time and near-real-time applications. If hard numbers are desired, libuv's benchmark tests may serve as a starting point.

    Additionally, while profiling should be done to identify actual bottlenecks, be aware of memory allocations. For libuv, the memory allocation strategy is primarily limited to the allocator callback. On the other hand, Boost.Asio's API does not allow for an allocator callback, and instead pushes the allocation strategy onto the application. However, the handlers/callbacks in Boost.Asio may be copied, allocated, and deallocated. Boost.Asio allows applications to provide custom memory allocation functions in order to implement the memory allocation strategy for handlers.

    Maturity

    Boost.Asio

    Asio's development dates back to at least October 2004, and it was accepted into Boost 1.35 on 22 March 2006 after undergoing a 20-day peer review. It also served as the reference implementation and API for the Networking Library Proposal for TR2. Boost.Asio has a fair amount of documentation, although its usefulness varies from user to user.

    The API also has a fairly consistent feel. Additionally, asynchronous operations are explicit in the operation's name: for example, accept is synchronous blocking and async_accept is asynchronous. The API provides free functions for common I/O tasks, for instance reading from a stream until a \r\n is read. Attention has also been given to hiding some network-specific details, such as ip::address_v4::any() representing the "all interfaces" address 0.0.0.0.

    Finally, Boost 1.47+ provides handler tracking, which can prove useful when debugging, as well as C++11 support.

    libuv

    Based on their GitHub graphs, Node.js's development dates back to at least February 2009, and libuv's to March 2011. The uvbook is a great place for an introduction to libuv, and the API documentation is here.

    Overall, the API is fairly consistent and easy to use. One anomaly that may be a source of confusion is that uv_tcp_listen creates a watcher loop. This is different from other watchers, which generally have a uv_*_start and uv_*_stop pair of functions to control the life of the watcher loop. Also, some of the uv_fs_* operations take a fair number of arguments (up to 7). With synchronous and asynchronous behavior determined by the presence of a callback (the last argument), the visibility of the synchronous behavior can be diminished.

    Finally, a quick glance at the libuv commit history shows that the developers are very active.

    Regarding "c++ - How does libuv compare to Boost/ASIO?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11423426/
