Relative advantages of one thread per client and queue flow models for a streaming server?

Let's say we create a stream server designed to work in a system with four cores. Two flow control schemes that I can think of are one stream per client connection and queue system.

As follows from the first name of the system, we will create one stream for each client that connects to our server. Assuming that one thread is always dedicated to the main thread of our program, we can simultaneously process up to three clients and for any more simultaneous clients than we have to rely on the functionality of the proactive multitasking operating system to switch between them (or a virtual machine in the case of green threads )

For our second approach, we will create two thread-safe queues. One for incoming messages and one for outgoing messages. In other words, requests and answers. This means that we will likely have one thread that accepts incoming connections and puts their requests in the incoming queue. One or two threads will process the processing of incoming requests, resolve the corresponding responses, and put these responses in the outgoing queue. Finally, we will have one thread that simply accepts responses from this queue and sends them back to clients.

What are the pros and cons of these approaches? Note that I did not mention what kind of server it is. I assume that one with a higher performance profile depends on whether the server handles short connections such as web servers and POP3 servers, or longer connections such as WebSocket servers, game servers, and messaging application servers .

Are there other flow management strategies besides these two?

+4
source share
2 answers

I believe that I did both organizations at one time or another.


Method 1

, , listen. accept. pthread_create, recv/send , . .

, . :

, . - . , . , [ ] . , recv send, , . .

: wait for input, process, send output, repeat. : sock = accept, pthread_create(sock), repeat

. , . .


2

N, N .

accept [] , , 1. , , , malloc [ mgmt] . , accept

N . , - select/poll, recv, enqueue request wait for result, select/poll, send.

, : [ ] / . , .

[] , :

// control thread for recv:
while (1) {
    // (1) do blocking poll on all client connection sockets for read
    poll(...)

    // (2) for all pending sockets do a recv for a request block and enqueue
    //     it on the request queue
    for (all in read_mask) {
        request_buf = dequeue(control_free_list)
        recv(request_buf);
        enqueue(request_list,request_buf);
    }
}

// control thread for recv:
while (1) {
    // (1) do blocking wait on result queue

    // (2) peek at all result queue elements and create aggregate write mask
    //     for poll from the socket numbers

    // (3) do blocking poll on all client connection sockets for write
    poll(...)

    // (4) for all pending sockets that can be written to
    for (all in write_mask) {
        // find and dequeue first result buffer from result queue that
        // matches the given client
        result_buf = dequeue(result_list,client_id);
        send(request_buf);
        enqueue(control_free_list,request_buf);
    }
}

// worker thread:
while (1) {
    // (1) do blocking wait on request queue
    request_buf = dequeue(request_list);

    // (2) process request ...

    // (3) do blocking poll on all client connection sockets for write
    enqueue(result_list,request_buf);
}

. . recv ( ) [ ].

, , . , , . [ ], H/W, .

, " "? , [, ], . .

. / [] . , "side/extra", .

, , , .

send .

, , recv . [- ] , .

.

/ enqueue/dequeue.

, " ". .

. recv , , , recv. .

, recv syscalls []. (.. ).

.

(, 50 000), (, 100).

,


, 1 , 2, , [, , ].

1 . 2. , N, 2.

1, . , , . . /, .

, .

" " .

[ 1] [ 2] , [] "-". .

, 1 . 2 . , , .

+2

, " " , , , .

1). , . , .

, :

  • . , , .

  • . , 10000 , . , .

  • , , . , , , websocket. , , , . .

  • - , CPU, - .

, ... / , , , .

2). ( ), , .

, . , , .

, , , . , , , . , , , , . .

, , , , . , , . .

, , . .

0

All Articles