Why do I get a connection failure after 1024 connections?

I am testing a local Linux server, with the server and client running on the same machine. After about 1024 connections, my client code gets a failure where it calls connect(). At first I thought it was the fd_set maximum of 1024 for select() (FD_SETSIZE), so I changed the server to use poll() instead of select(), but I still cannot get past this number. My ulimit -n is set to 2048, and when I monitor the server with lsof it reaches about 1033 open descriptors (not sure if this is the exact number) and then stops working. Any help is greatly appreciated.

+6
Tags: c, linux, sockets
8 answers

So, after a little more research... it looks like the listen() call on my server side has a queue depth (backlog) of 20. I think that might be the reason. Do any of you think that could be the problem?


-1

If you connect faster than your server calls accept(), the queue of pending connections can fill up. The maximum queue length is the second argument to listen() on the server, capped by the sysctl net.core.somaxconn (usually 128) if that value is smaller.
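
For illustration, here is a minimal sketch (not the OP's code; the address and port are made up) showing where that backlog argument lives; passing SOMAXCONN asks for the largest queue the kernel will allow:

    /* Minimal sketch: the pending-connection queue is capped by
     * min(backlog, net.core.somaxconn).  Names and port are illustrative. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        if (srv < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port        = htons(5555);        /* arbitrary test port */

        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }

        /* A small backlog (e.g. 20, as the OP suspects) limits how many
         * connects can queue up while accept() lags behind. */
        if (listen(srv, SOMAXCONN) < 0) {
            perror("listen");
            return 1;
        }

        pause();                                    /* keep the listener alive */
        close(srv);
        return 0;
    }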

+3

You may have reached the process limit for open file descriptors.

I'm not sure I understand you correctly. Do you have the server and client sides in the same process? Then you would use twice as many file descriptors per connection, which is close to what you see with your ulimit. If that is not the case, could the problem be on the server side? Perhaps the server process is running out of descriptors and can no longer accept any connections.

The accept() man page mentions error return values you should check for:

EMFILE
The process limit for open file descriptors has been reached.

ENFILE
The system limit on the total number of open files has been reached.

What error code are you getting? Obviously, you can only add connections that accept() returned successfully to your select or poll set.
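
As a hedged sketch of how the server code could surface that error (the names here are illustrative, not from the OP's program):

    /* Sketch: an accept() loop that reports which limit was hit. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int accept_one(int listen_fd)
    {
        int client_fd = accept(listen_fd, NULL, NULL);
        if (client_fd < 0) {
            if (errno == EMFILE)
                fprintf(stderr, "accept: per-process fd limit reached (EMFILE)\n");
            else if (errno == ENFILE)
                fprintf(stderr, "accept: system-wide file table full (ENFILE)\n");
            else
                perror("accept");
            return -1;
        }
        /* Only a successfully accepted fd should be added to select()/poll(). */
        return client_fd;
    }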

I know you already know how to check ulimit, but others reading this may not:

 ulimit -a
 core file size          (blocks, -c) 0
 data seg size           (kbytes, -d) unlimited
 scheduling priority             (-e) 0
 file size               (blocks, -f) unlimited
 pending signals                 (-i) 40448
 max locked memory       (kbytes, -l) 32
 max memory size         (kbytes, -m) unlimited
 open files                      (-n) 4096
 pipe size            (512 bytes, -p) 8
 POSIX message queues     (bytes, -q) 819200
 real-time priority              (-r) 0
 stack size              (kbytes, -s) 8192
 cpu time               (seconds, -t) unlimited
 max user processes              (-u) 40448
 virtual memory          (kbytes, -v) unlimited
 file locks                      (-x) unlimited
+2

Is there any danger that the server opens a separate log file for each connection it receives?

Is the server running under a different account or group with its own, lower upper limit?

In one program I worked on (a few years ago), there was a bit of code that set the maximum file size to 1 MB. The pity was that when it was first added this was an increase, but as time passed and default file-size limits grew, it later became a decrease! Is there any chance the server has a similar problem: it sets the maximum number of open files to a "ridiculously large" number such as 1024?
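
If a program does cap its own limit, it would typically do so with setrlimit(). Purely as an illustration (the values are made up), a process can inspect and raise its RLIMIT_NOFILE soft limit up to the hard limit like this:

    /* Illustrative sketch: print and raise the per-process open-file limit. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %lu, hard limit: %lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

        /* Raise the soft limit to the hard limit; going beyond the hard
         * limit needs privileges or an /etc/security/limits.conf change. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }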

0

Apologies in advance, these are mostly trivial questions :)

  • Did you recompile the server when you say you changed it to poll()?
  • Do the server and client run under the same account?
  • Is it a forking server, or perhaps a threaded one?
  • Do you get errno == ECONNREFUSED after calling connect() on the client?
  • Can you confirm with tcpdump that an RST comes back in response to the SYN?
  • Are client port numbers being reused? Are the connections in TIME_WAIT state?
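
To answer the ECONNREFUSED question, the client can log exactly which error connect() returns; a minimal, hypothetical sketch (loopback address and port are invented):

    /* Hypothetical client sketch: report the exact connect() error.
     * ECONNREFUSED suggests a full backlog / RST; EMFILE points at the
     * client's own descriptor limit. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int try_connect(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");           /* EMFILE can show up here too */
            return -1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5555);  /* made-up test port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            fprintf(stderr, "connect: %s (errno=%d)\n", strerror(errno), errno);
            close(fd);
            return -1;
        }
        return fd;
    }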

0

I saw the comment you made about the close(sock_fd) call in the error-handling procedure.

Do you explicitly close your sockets once you are done with them, with close() or shutdown()?

I wouldn't have guessed it. Do you actually have 1024+ concurrent active connections? You would need to use pthreads for that, is that right?

0

I had the same symptoms. Even after increasing ulimit -n, I still could not handle more than 1024 concurrent connections...

My problem was that I was using select(), which cannot handle socket fds above 1024 (FD_SETSIZE). So when I increased my limit, my problem merely changed, it did not go away! (which I did not notice at first...)

So, to help anyone with similar problems:

If you need more than 1024 sockets, you should

  • increase your limit for open FDs (ulimit -n)
  • and not use select(); use poll() instead (see the sketch below)
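
A rough sketch of the difference, with illustrative names only: select() works on fixed-size fd_set bitmaps limited to FD_SETSIZE (1024), whereas poll() takes an array of struct pollfd and accepts any descriptor number:

    /* Illustrative sketch: poll() has no FD_SETSIZE ceiling. */
    #include <poll.h>
    #include <stdio.h>

    /* Caller fills fds[i].fd and sets fds[i].events = POLLIN beforehand. */
    int wait_for_readable(struct pollfd *fds, nfds_t nfds)
    {
        int ready = poll(fds, nfds, 1000 /* ms timeout */);
        if (ready < 0) {
            perror("poll");
            return -1;
        }
        for (nfds_t i = 0; i < nfds; i++) {
            if (fds[i].revents & POLLIN)
                printf("fd %d is readable\n", fds[i].fd);   /* fd may be >= 1024 */
        }
        return ready;
    }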
0

Your limit is the per-user open-file limit on Linux. If nothing else is specified, Linux limits a user to 1024 open files. To change this permanently, edit the /etc/security/limits.conf file and add

user soft nofile 16535
user hard nofile 16535

or try from the console

ulimit -n 16535


-1
