
c - How to manage multiple buffers received from multiple clients using epoll?

Reposted. Author: 行者123. Updated: 2023-11-30 14:37:43

I have a server that is using epoll. My question is: what is the best way to handle the receive buffers, given that half a message, or several messages at once, can land in a single buffer?

For example:

If the message is "Hello From Client Number #".

Buffers received via epoll:

From Client 1: "Hello From"

From Client 2: "Hello From Client Number 2Hello From Client Number 2"

From Client 1: " Client Number 1Hello From Client Number 1"

In this case I need to be able to recognize that "Hello From" is only half a message and store it somewhere. Then I need to process the two complete messages from Client 2, and afterwards pick up where I left off with Client 1. I know messages can be distinguished with a delimiter or a length prefix, but I am not at all sure how to handle a half message once I have received it.
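As an illustration of the length-prefix idea, a framing helper might look like the sketch below. The names (`struct framebuf`, `feed`) and the 1024-byte cap are invented for this example; the assumed wire format is a 4-byte big-endian length followed by the payload, and any bytes that do not yet form a complete message simply stay buffered until the next `recv()`:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* ntohl */

/* Hypothetical per-client buffer for length-prefixed framing. */
struct framebuf {
    char   data[1024];
    size_t len;                         /* bytes currently buffered */
};

static int seen;                        /* demo: count of complete messages */
static void count_msg(const char *payload, uint32_t len)
{
    (void)payload; (void)len;
    seen++;
}

/* Append newly received bytes, then deliver every complete message.
 * Returns how many messages were delivered; a partial length prefix
 * or partial payload stays buffered for the next call. */
static int feed(struct framebuf *fb, const char *bytes, size_t n,
                void (*on_msg)(const char *payload, uint32_t len))
{
    if (fb->len + n > sizeof(fb->data))
        return -1;                      /* oversized: real code would drop the client */
    memcpy(fb->data + fb->len, bytes, n);
    fb->len += n;

    int delivered = 0;
    size_t off = 0;
    while (fb->len - off >= 4) {        /* enough bytes for the length prefix? */
        uint32_t msg_len;
        memcpy(&msg_len, fb->data + off, 4);
        msg_len = ntohl(msg_len);
        if (fb->len - off - 4 < msg_len)
            break;                      /* payload incomplete: wait for more bytes */
        on_msg(fb->data + off + 4, msg_len);
        off += 4 + msg_len;
        delivered++;
    }
    /* slide the unconsumed tail to the front of the buffer */
    memmove(fb->data, fb->data + off, fb->len - off);
    fb->len -= off;
    return delivered;
}
```

The same skeleton works for delimiter-based framing; only the loop that finds message boundaries changes.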

Does anyone have any ideas, or any sample code I could look at?

Note: I know that reading everything into a single shared buffer, as I do now, is bad. I am going to change that. Would I need a separate buffer for each client?

Thanks for the help!

void epoll(int listening_port)
{
char buffer[500]; //buffer for message
int listen_sock = 0; //file descriptor (fd) for listening socket
int conn_sock = 0; //fd for connecting socket
int epollfd = 0; // fd for epoll
int nfds = 0; //number of fd's ready for i/o
int i = 0; //index to which file descriptor we are looking at
int curr_fd = 0; //fd for socket we are currently looking at
bool loop = 1; //boolean value to help identify whether to keep in loop or not
socklen_t address_len;
struct sockaddr_in serv_addr;
struct epoll_event ev, events[EPOLL_MAX_EVENTS];
ssize_t result = 0;


bzero(buffer, sizeof(buffer));
bzero(&serv_addr, sizeof(serv_addr));

serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(listening_port); //port must be in network byte order
serv_addr.sin_addr.s_addr = INADDR_ANY;

listen_sock = create_socket();


if(bind(listen_sock, SA &serv_addr, sizeof(serv_addr)) != 0)
{
perror("Bind failed");
}
else
{
printf("Bind successful\n");
}


set_socket_nonblocking(listen_sock);

listen_on_socket(listen_sock, SOMAXCONN); //specifying max connections in backlog

epollfd = initialize_epoll();

ev.events = EPOLLIN | EPOLLOUT | EPOLLET | EPOLLRDHUP;
ev.data.fd = listen_sock;

if(epoll_ctl(epollfd, EPOLL_CTL_ADD, listen_sock, &ev) == ERROR)
{
perror("Epoll_ctl: listen sock");
}
else
{
printf("Successfully added listen socket to epoll\n");
}

while (RUN_EPOLL)
{
nfds = epoll_wait(epollfd, events, EPOLL_MAX_EVENTS, -1); //block until at least one fd is ready (timeout 0 would busy-poll)
if(nfds == ERROR)
{
perror("EPOLL_Wait");
}
//printf("Finished waiting\n");

for(i = 0; i < nfds; ++i)
{
curr_fd = events[i].data.fd;
loop = true; //reset looping flag
//Notification from Listening Socket - Process Incoming Connections

if (curr_fd == listen_sock) {

while(loop)
{
address_len = sizeof(serv_addr); //accept() requires the buffer size on entry
conn_sock = accept(listen_sock, SA &serv_addr, &address_len); //accept incoming connection
if (conn_sock > 0) //if successful, report it, set socket nonblocking and add it to epoll
{
printf("Accepted new incoming connection - socket fd: %d\n", conn_sock);
{

set_socket_nonblocking(conn_sock);

ev.events = EPOLLIN | EPOLLOUT| EPOLLET | EPOLLRDHUP; //setting flags
ev.data.fd = conn_sock; //specify fd of new connection in event to follow
if (epoll_ctl(epollfd, EPOLL_CTL_ADD, conn_sock, &ev) == ERROR) //add fd to monitored fd's
{
perror("epoll_ctl: conn_sck");
}
else
{
printf("Added %d to monitor list\n", conn_sock);
}


}
else if (conn_sock == ERROR)
{
if ((errno == EAGAIN) || (errno == EWOULDBLOCK))
{
printf("All incoming connections processed\n");
loop = false;
}
else
{
perror("Accept remote socket");
loop = false;
}
}
}
}
else if(events[i].events & EPOLLRDHUP) //detecting if peer shutdown
{
printf("Detected socket peer shutdown. Closing now. \n");

if (epoll_ctl(epollfd, EPOLL_CTL_DEL, curr_fd, NULL) == ERROR) {
perror("epoll_ctl: conn_sck");
}

close_socket(curr_fd);
}
else if(events[i].events & EPOLLIN)
{
while(loop)
{
result = recv(curr_fd, buffer, sizeof(buffer) - 1, 0); //leave room for a terminating NUL
//printf("Length of incoming message is %zd\n", result);

if(result > 0)
{
buffer[result] = '\0'; //recv() does not NUL-terminate, so do it before printing
printf("File Descriptor: %d. Message: %s\n", curr_fd, buffer); //I know this will need to be changed
bzero(buffer, sizeof(buffer));
}
else if(result == ERROR) //no more data for now, or a real error
{
if(errno == EAGAIN || errno == EWOULDBLOCK) //socket drained; wait for the next EPOLLIN
{
loop = false;
}
else
{
perror("recv");
loop = false;
}
}
else if(result == 0)
{
//Removing the fd from the monitored descriptors in epoll
if (epoll_ctl(epollfd, EPOLL_CTL_DEL, curr_fd, NULL) == ERROR) {
perror("epoll_ctl: conn_sck");
}
close_socket(curr_fd); //Closing the fd
loop = false;
}

}

}
}

}

close_socket(listen_sock);
//need to develop way to gracefully close out of epoll

return;

}
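One way to give each connection its own buffer in the code above is to stop storing the raw fd in `ev.data.fd` and instead store a pointer to a heap-allocated per-client struct in `ev.data.ptr` (the `epoll_data` union holds only one of the two at a time). A minimal sketch with invented names (`struct client`, `client_add` are not part of the original code):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>

/* Hypothetical per-connection state; field names are illustrative. */
struct client {
    int    fd;
    char   buf[1024];
    size_t len;            /* bytes of a partial message carried over */
};

/* On accept: allocate per-client state and register it with epoll. */
static struct client *client_add(int epollfd, int conn_sock)
{
    struct client *c = calloc(1, sizeof *c);
    if (!c)
        return NULL;
    c->fd = conn_sock;

    struct epoll_event ev;
    memset(&ev, 0, sizeof ev);
    ev.events = EPOLLIN | EPOLLET | EPOLLRDHUP;
    ev.data.ptr = c;       /* epoll hands this pointer back with every event */
    if (epoll_ctl(epollfd, EPOLL_CTL_ADD, conn_sock, &ev) == -1) {
        free(c);
        return NULL;
    }
    return c;
}

/* In the event loop, recover the state instead of a bare fd:
 *   struct client *c = events[i].data.ptr;
 *   recv(c->fd, c->buf + c->len, sizeof(c->buf) - c->len, 0);
 * On EPOLLRDHUP / close, remember to free(c) after EPOLL_CTL_DEL. */
```

Because the buffer travels with the event, a half message from Client 1 is untouched while events for Client 2 are processed.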

Best Answer

Would I need a separate buffer for each client?

Yes. You will need to store separate state for each client, and the partial-message buffer should be part of that per-client state.

Does anyone have any ideas, or any sample code I could look at.

You can look at the code of the facil.io library and its raw-HTTP example code.

In the example code you will notice that each HTTP client (protocol / state object) has its own target buffer for reading.

Under the hood, the facil.io library uses epoll (or kqueue on BSD/macOS, or poll if you really want portability) - so the framework's logic applies to your situation.

A stack-allocated (or per-thread) buffer can sometimes be used, but only when you copy out any data that needs to be kept for later.

Regarding "c - How to manage multiple buffers received from multiple clients using epoll?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57062087/
