Go: debugging tcp "too many open files"

Here is a simple HTTP (TCP) connection test program:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "net/http/httptest"
        "sync"
    )

    func main() {
        ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Hello, client")
        }))
        defer ts.Close()

        var wg sync.WaitGroup
        for i := 0; i < 2000; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                resp, err := http.Get(ts.URL)
                if err != nil {
                    panic(err)
                }
                greeting, err := ioutil.ReadAll(resp.Body)
                resp.Body.Close()
                if err != nil {
                    panic(err)
                }
                fmt.Printf("%d: %s", i, greeting)
            }(i)
        }
        wg.Wait()
    }

And if I run this on Ubuntu, I get:

panic: Get http://127.0.0.1:33202: dial tcp 127.0.0.1:33202: too many open files

Other posts say to close the connection, which I am already doing here. Others say to increase the maximum file limit with ulimit, or to try sudo sysctl -w fs.inotify.max_user_watches=100000, but it still doesn't work.

How can I run millions of goroutines making connections on one server? It breaks down at only 2000 connections.

Thanks,

+13
7 answers

I think you need to raise the maximum number of file descriptors. I ran into the same problem on one of my development VMs, and what fixed it was changing the file-descriptor limit, not anything to do with the inotify settings.

FWIW, your program works fine on my virtual machine.

    > ulimit -n 120000

But after I run

    > ulimit -n 500

I get:

 panic: Get http://127.0.0.1:51227: dial tcp 127.0.0.1:51227: socket: too many open files 

**Do not fall into the trap set by Pravin**

Note: ulimit != ulimit -n.

    ➜ cmd git:(wip-poop) ✗ ulimit -a
    -t: cpu time (seconds)              unlimited
    -f: file size (blocks)              unlimited
    -d: data seg size (kbytes)          unlimited
    -s: stack size (kbytes)             8192
    -c: core file size (blocks)         0
    -v: address space (kbytes)          unlimited
    -l: locked-in-memory size (kbytes)  unlimited
    -u: processes                       1418
    -n: file descriptors                4864
+28

If you want to run millions of goroutines that each open/read/close a socket, you will have to raise your ulimit, or open/read/close the socket and hand the value read off to a goroutine. Either way, I would use a buffered channel to control how many file descriptors can be open at once.

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "net/http/httptest"
        "sync"
    )

    const (
        // this is where you can specify how many maxFileDescriptors
        // you want to allow open
        maxFileDescriptors = 100
    )

    func main() {
        ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Hello, client")
        }))
        defer ts.Close()

        var wg sync.WaitGroup
        maxChan := make(chan bool, maxFileDescriptors)
        for i := 0; i < 1000; i++ {
            maxChan <- true
            // Add before launching the goroutine, so wg.Wait() cannot
            // race ahead of a goroutine that hasn't registered yet.
            wg.Add(1)
            go func(url string, i int) {
                defer wg.Done()
                defer func() { <-maxChan }()

                resp, err := http.Get(url)
                if err != nil {
                    panic(err)
                }
                greeting, err := ioutil.ReadAll(resp.Body)
                if err != nil {
                    panic(err)
                }
                if err := resp.Body.Close(); err != nil {
                    panic(err)
                }
                fmt.Printf("%d: %s", i, greeting)
            }(ts.URL, i)
        }
        wg.Wait()
    }
+5

Go's http package does not set a request timeout by default. You should always include a timeout in your service: what if a client never closes its session? Your process will keep old sessions alive and eventually hit the file-descriptor limit. A bad actor could deliberately open thousands of sessions to starve your server. High-load services should raise the limits as well, but the timeout is the backstop.

Make sure you specify a timeout:

 http.DefaultClient.Timeout = time.Minute * 10 

You can check before and after by observing the files opened by your process:

    lsof -p [PID]
+2

You can also limit the goroutines in your function; try https://github.com/leenanxi/nasync

    // it has a simple usage
    nasync.Do(yourAsyncTask)

in your code

    for i := 0; i < 2000; i++ {
        nasync.Do(func() {
            resp, err := http.Get("https://www.baidu.com")
            ...
        })
    }

The default maximum number of goroutines in the nasync lib is 1000.

+1

Modify ulimit to avoid the "too many open files" error. By default the max ulimit is 4096 on Linux and 1024 on macOS; you can raise it to 4096 by typing ulimit -n 4096. Beyond 4096 you need to edit /etc/security/limits.conf on Linux and set the hard limit to 100000 by adding the line "* hard nofile 100000".

0
HTTP/1.1 uses persistent connections by default:

"A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection." http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html

The solution was to inform the server that the client wants to close the connection after the transaction is complete. This can be done by setting the Connection header:

    req.Header.Set("Connection", "close")

or by setting the Close property to true on the http.Request:

    req.Close = true

After doing that, the "too many open files" issue went away, as the program was no longer keeping HTTP connections open and thus not using up file descriptors.

I solved this by adding req.Close = true and req.Header.Set("Connection", "close"). I think this is better than changing ulimit.

source: http://craigwickesser.com/2015/01/golang-http-to-many-open-files/

0

I also had to manually set the connection-close behavior to avoid the file-descriptor problem:

    r, _ := http.NewRequest(http.MethodDelete, url, nil)
    r.Close = true
    res, err := c.Do(r)
    if err != nil {
        panic(err)
    }
    res.Body.Close()

Without r.Close = true and res.Body.Close(), I hit the file-descriptor limit. With both, I could fire off as many requests as I needed.

-1
