First of all, the classic Unix server paradigm is based on filters. For example, network services are listed in /etc/services, and a single program such as inetd listens on all of the configured TCP and UDP sockets for incoming connections and datagrams. When a connection or datagram arrives, inetd forks, redirects stdin, stdout, and stderr to the socket using the dup2 system call, and then execs the server program. This lets you take any program that reads from stdin and writes to stdout, such as grep, and turn it into a network service.
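The fork/dup2/exec dance can be sketched in Python, whose os and socket modules wrap the same syscalls. Here `serve_filter` is a hypothetical helper name, and `socketpair()` stands in for a socket that inetd would have accepted from the network:

```python
import os
import socket

def serve_filter(argv):
    """Run argv as an inetd-style network service on one connected socket."""
    parent, child = socket.socketpair()  # stand-in for an accepted TCP socket
    pid = os.fork()
    if pid == 0:                 # child: become the filter program
        parent.close()
        fd = child.fileno()
        os.dup2(fd, 0)           # stdin  <- socket
        os.dup2(fd, 1)           # stdout -> socket
        os.dup2(fd, 2)           # stderr -> socket
        os.execvp(argv[0], argv) # the filter never knows it is networked
    child.close()                # parent keeps only its end of the socket
    return parent, pid
```

For example, `serve_filter(["grep", "hello"])` turns plain grep into a service: whatever the peer writes to the socket is grep's stdin, and grep's matches come back over the same socket.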
According to Stevens in Unix Network Programming, there are five I/O models available to servers (p. 154):
- blocking
- non-blocking
- I/O multiplexing (select and poll)
- signal-driven (SIGIO)
- asynchronous (the POSIX aio_ functions)
In addition, servers can be either iterative or concurrent.
You ask why TCP servers are usually concurrent while UDP servers are usually iterative.
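The multiplexing model from the list above can be sketched as a single iterative loop that waits on many sockets at once. This is a minimal illustration, not a full server; `echo_multiplexed` is a hypothetical name, and `socketpair()` pairs stand in for real connections:

```python
import select
import socket

def echo_multiplexed(socks, rounds):
    # I/O multiplexing (model 3): one iterative loop calls select() to find
    # which sockets are readable, then services each without blocking on any
    # single client.
    for _ in range(rounds):
        readable, _, _ = select.select(socks, [], [])
        for s in readable:
            s.sendall(s.recv(4096).upper())  # toy per-client work
```

A single call to `select()` can report several sockets ready at once, which is why one process can serve many clients without forking.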
UDP is the easier case to answer. Normally, UDP applications follow a simple request-response model, where a client sends a short request and receives a short reply, with each pair constituting a standalone transaction. UDP servers are also the only ones that use signal-driven I/O, and even they use it only rarely.
TCP is a bit more complicated. Iterative servers can use any of the I/O models above except #4 (signal-driven). The fastest servers on a single processor are actually iterative servers using non-blocking I/O, but they are considered relatively difficult to implement. That, together with the Unix filter idiom, is traditionally the main reason for using a concurrent model with blocking I/O, whether multi-process or multi-threaded. Now, with multi-core systems commonplace, the concurrent model also has a performance advantage.
Robert S. Barnes