
multithreading - Using multi-queue NICs in Linux


I have read a lot about Receive Side Scaling (RSS), Receive Packet Steering (RPS), and similar technologies, but I am confused about how to actually use them in a program, that is, how to partition incoming packets among different application threads/processes.

I do know about PF_RING, but I assume the Linux kernel itself must have some basic support. After all, Intel, for example, advertises its RSS technology on its website and claims Linux support. Also, RPS is outside PF_RING's scope. Another reason I am reluctant to use PF_RING is that it patches network drivers, and some of those patched drivers appear to be outdated.

I have searched this topic extensively, but the best I have found are questions about enabling RSS or RPS support, not about how to use them programmatically.

Best Answer

Kernel 3.19 introduced the SO_INCOMING_CPU socket option. With it, a process can determine which CPU a packet was originally delivered to. Quoting the patch changelog:

Alternative to RPS/RFS is to use hardware support for multi queue.

Then split a set of millions of sockets into worker threads, each one using epoll() to manage events on its own socket pool.

Ideally, we want one thread per RX/TX queue/cpu, but we have no way to know after accept() or connect() on which queue/cpu a socket is managed.

We normally use one cpu per RX queue (IRQ smp_affinity being properly set), so remembering on socket structure which cpu delivered last packet is enough to solve the problem.
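Setting IRQ smp_affinity is done through procfs. A minimal sketch of pinning each RX queue's IRQ to its own CPU (the interface name eth0 and the IRQ numbers are assumptions; check /proc/interrupts on your system, and note that many distributions run irqbalance, which may overwrite these masks):

```shell
# Find the IRQ numbers for the NIC's RX queues (naming varies by driver)
grep eth0 /proc/interrupts

# Pin queue 0's IRQ (say, IRQ 41) to CPU 0, queue 1's (IRQ 42) to CPU 1.
# The value is a hexadecimal CPU bitmask; requires root.
echo 1 > /proc/irq/41/smp_affinity   # bitmask 0x1 = CPU 0
echo 2 > /proc/irq/42/smp_affinity   # bitmask 0x2 = CPU 1
```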

After accept(), connect(), or even file descriptor passing around processes, applications can use:

int cpu; socklen_t len = sizeof(cpu);

getsockopt(fd, SOL_SOCKET, SO_INCOMING_CPU, &cpu, &len);

And use this information to put the socket into the right silo for optimal performance, as the whole networking stack should then run on the appropriate cpu, without needing to send IPIs (RPS/RFS).



https://patchwork.ozlabs.org/patch/408257/

Regarding multithreading - using multi-queue NICs in Linux, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/10565468/
