How many socket connections are possible?

Does anyone know how many TCP socket connections are possible on a modern standard root server? (Each connection carries relatively little traffic, but all of them need to stay open all the time.)

EDIT: We will be using a Linux server.

+67
linux max sockets tcp
Mar 16 '09 at 18:44
9 answers

Google for the "C10K" problem. This is basically the discussion and technology around handling 10,000 or more simultaneous connections.

I suspect that number was chosen because it is hard, but theoretically possible.

-5
Mar 16 '09 at 19:54

I reached 1600k concurrent idle socket connections, and at the same time 57k req/s, on a Linux desktop (16 GB RAM, i7-2600 CPU). It's a single-thread HTTP server written in C with epoll. The source code is on GitHub.

Edit:

I did 600k concurrent HTTP connections (client and server) on the same machine, with JAVA/Clojure. More info in the post; HN discussion: http://news.ycombinator.com/item?id=5127251

Connection cost (with epoll):

  • the application needs some RAM per connection
  • TCP buffers: 2 × 4k ~ 10k each, or more
  • epoll needs some memory per file descriptor; from epoll(7):

Each registered file descriptor costs approximately 90 bytes on a 32-bit kernel and approximately 160 bytes on a 64-bit kernel.

+79
Mar 13

It depends not only on the operating system in question, but also on its configuration, potentially even run-time configuration.

For Linux:

cat /proc/sys/fs/file-max 

shows the current system-wide maximum number of file descriptors that are allowed to be open simultaneously. Check out http://www.cs.uwaterloo.ca/~brecht/servers/openfiles.html

+21
Mar 16 '09 at 19:04

10,000? 70,000? Is that all? :)

FreeBSD is probably the server you want. Here's a short blog post about tuning it to handle 100,000 connections; it has had some interesting features, like zero-copy sockets, for some time, along with kqueue acting as a completion-port-like mechanism.

Solaris could handle 100,000 connections back in the last century! They say Linux would be better.

The best description I've come across is this presentation/paper on building a scalable web server. The author is not afraid to say it like it is :)

Same goes for software: cretins at the application layer force innovations at the OS layer. Because Lotus Notes keeps one TCP connection per client open, IBM contributed major optimizations for the "one process, 100,000 open connections" case to Linux.

And the O(1) scheduler was originally created to score well on some irrelevant Java benchmark. The bottom line is that this bloat benefits all of us.

+8
May 30 '09 at 15:49

On Linux, you should look at using epoll for async I/O. You may also want to fine-tune your socket buffers so you don't waste too much kernel memory per connection.

I would guess that you can achieve 100k connections on a decent machine.

+5
Mar 17 '09 at 18:09

It depends on the application. If there are only a few packets from each client, 100K is very easy for Linux. An engineer on my team ran a test a couple of years ago; the result showed that when no packet arrives from the client after the connection is established, Linux epoll can watch 400k fds for readability at less than 50% CPU usage.

+3
Nov 27 '11 at 2:52

The limit on the number of open sockets is configurable in the /proc file system:

 cat /proc/sys/fs/file-max 

The maximum number of incoming connections in the OS is defined by integer limits.

Linux itself allows billions of open sockets.

To use the sockets you need an application listening, e.g. a web server, and that will use a certain amount of RAM per socket.

RAM and CPU will introduce the real limits. (As of 2017: think millions, not billions.)

1 million is possible, but not easy. Expect to use X gigabytes of RAM to manage 1 million sockets.

Outgoing TCP connections are limited to ~65000 port numbers per local IP address. You can have multiple IP addresses, but not an unlimited number. This is a limit in TCP, not Linux.

+2
Apr 09 '17 at 11:03

What is the operating system?

For Windows machines, if you're writing a server to scale well, and are therefore using I/O completion ports and asynchronous I/O, then the main limitation is the amount of non-paged pool you use for each active connection. This translates directly into a limit based on the amount of memory your machine has installed (non-paged pool is a fixed-size amount based on the total memory installed).

For connections that don't see much traffic, you can reduce their cost by posting "zero byte reads", which don't use non-paged pool and don't affect the locked-pages limit (another potentially limited resource that may prevent you from having many socket connections open).

Apart from that, yes, you will need to profile, but I've managed to get more than 70,000 concurrent connections on a modestly specified server (760 MB of memory); see here http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html for more details.

Obviously, if you are using a less efficient architecture, such as "thread per connection" or "select", you should expect less impressive figures; but, IMHO, there is simply no reason to choose such architectures for Windows socket servers.

Edit: see here http://blogs.technet.com/markrussinovich/archive/2009/03/26/3211216.aspx ; the way the non-paged pool limit is calculated changed in Vista and Server 2008, and much more is now available.

+1
Mar 16 '09 at 10:15

Realistically, for an application, more than 4000-5000 open sockets on a single machine becomes impractical. Just checking for and managing activity across all those sockets starts to become a performance issue, especially in real-time environments.

-8
Mar 16 '09 at 19:43


