Process Control for Go Web Server

I am a new Go programmer, coming from the world of web application and service development. Apologies if this is a frequently asked question, but my searching did not turn up an answer. Also, this is borderline Server Fault territory, but since I am more interested in the API/program side of things, I am asking here.

I wrote a small program using the built-in net/http package web server. I am ready to deploy to production, but I am a bit unclear about the Go web server's process model and how I should deploy.

In particular, in the environments I am used to (PHP, Ruby, Python), we have a web server (Apache, Nginx, etc.) sitting in front of our application, and we configure these web servers to use a certain number of worker processes/threads and configure how many separate HTTP(S) connections each one will handle.

I was not able to find information on how the Go web server works, or practical information on how to scale / plan the scale for the Go web server.

i.e. if I have a simple program ready to serve HTTP requests:

    func main() {
        http.HandleFunc("/", processRequest)
        http.ListenAndServe(":8000", nil)
    }

how many connections will the server try to process at once? Or will it block once a connection is opened and only serve the next connection after the first one is closed?

Or do I just not need to worry about it and goroutines handle it all? But if so, how do I prevent the system from bogging down under too many threads of execution?

I'm basically trying to:

  • Understand the process model of the Go web server
  • Find the built-in options to configure this, and/or whatever add-on package people typically use on top of the standard one

As I said, I am very new to Go, so if I have completely missed the plot here, please let me know!

2 answers

Tuning / configuring the HTTP server

The type that implements the HTTP server is http.Server . If you do not create an http.Server yourself (e.g. because you call the http.ListenAndServe() function), one is created for you under the hood:

    func ListenAndServe(addr string, handler Handler) error {
        server := &Server{Addr: addr, Handler: handler}
        return server.ListenAndServe()
    }

So if you want to tweak/configure the HTTP server, create it yourself and call its Server.ListenAndServe() method. http.Server is a struct; its zero value is a valid configuration. Check its documentation to see what fields it has and what you can tune/configure.

The "process control" of the HTTP server is documented on Server.Serve() :

Serve accepts incoming connections on the Listener l , creating a new service goroutine for each . The service goroutines read requests and then call srv.Handler to reply to them. Serve always returns a non-nil error.

So each incoming HTTP request is handled in its own new goroutine, i.e. requests are served concurrently. Unfortunately, the documented API provides no way to tune or change how this works.

And looking at the current implementation (Go 1.6.2), there is no undocumented way to do it either. server.go , currently lines #2107-2139:

    2107 func (srv *Server) Serve(l net.Listener) error {
    2108     defer l.Close()
    2109     if fn := testHookServerServe; fn != nil {
    2110         fn(srv, l)
    2111     }
    2112     var tempDelay time.Duration // how long to sleep on accept failure
    2113     if err := srv.setupHTTP2(); err != nil {
    2114         return err
    2115     }
    2116     for {
    2117         rw, e := l.Accept()
    2118         if e != nil {
    2119             if ne, ok := e.(net.Error); ok && ne.Temporary() {
    2120                 if tempDelay == 0 {
    2121                     tempDelay = 5 * time.Millisecond
    2122                 } else {
    2123                     tempDelay *= 2
    2124                 }
    2125                 if max := 1 * time.Second; tempDelay > max {
    2126                     tempDelay = max
    2127                 }
    2128                 srv.logf("http: Accept error: %v; retrying in %v", e, tempDelay)
    2129                 time.Sleep(tempDelay)
    2130                 continue
    2131             }
    2132             return e
    2133         }
    2134         tempDelay = 0
    2135         c := srv.newConn(rw)
    2136         c.setState(c.rwc, StateNew) // before Serve can return
    2137         go c.serve()
    2138     }
    2139 }

As you can see on line #2137, the connection is served unconditionally on a new goroutine, so there is nothing you can do about that.

Limiting the "worker" goroutines

If you want to limit the number of requests serving goroutines, you can still do it.

You can limit them at several levels. For limiting at the listener level, see Darigaaz's answer. To limit at the handler level, read on.

For example, you can insert code into each of your http.Handler s or handler functions ( http.HandlerFunc ) that only proceeds if the number of concurrent request-serving goroutines is below a given limit.

There are many constructs for such limiting-synchronization code. One example: create a buffered channel with the desired capacity. Each handler should first send a value on this channel, then do its work. When the handler returns, it should receive a value from the channel: this is best done in a deferred function (so it doesn't forget to "clean up" after itself).

If the buffer is full, a new request attempting to send on the channel will block, waiting until a running request completes.

Note that you don't need to add this limiting code to all your handlers; you can use the "middleware" pattern: a new handler type that wraps your handlers, performs the limiting-synchronization, and calls the wrapped handler in between.

The advantage of limiting in the handler (as opposed to in the listener) is that in the handler we know what the handler does, so we can limit selectively (e.g. limit some requests, such as database operations, while leaving others unrestricted, such as serving static resources), or create several separate limit groups arbitrarily to suit our needs (e.g. limit concurrent db requests to 10 max, static requests to 100 max, heavy computational requests to 3 max), etc. We can also easily implement limits such as unlimited (or a high limit) for logged-in/paying users and a low limit for anonymous users.

Also note that you can even apply the limit in a single place without using middleware. Create a "main handler" and pass it to http.ListenAndServe() (or Server.ListenAndServe() ). This main handler performs the limiting (e.g. using a buffered channel as mentioned above) and simply forwards the call to the http.ServeMux you're using.

Here is a simple example that uses http.ListenAndServe() and the http package's default multiplexer ( http.DefaultServeMux ) for demonstration. It limits the number of concurrent requests to 2:

    func fooHandler(w http.ResponseWriter, r *http.Request) {
        log.Println("Foo called...")
        time.Sleep(3 * time.Second)
        w.Write([]byte("I'm Foo"))
        log.Println("Foo ended.")
    }

    func barHandler(w http.ResponseWriter, r *http.Request) {
        log.Println("Bar called...")
        time.Sleep(3 * time.Second)
        w.Write([]byte("I'm Bar"))
        log.Println("Bar ended.")
    }

    var ch = make(chan struct{}, 2) // 2 concurrent requests

    func mainHandler(w http.ResponseWriter, r *http.Request) {
        ch <- struct{}{}
        defer func() { <-ch }()
        http.DefaultServeMux.ServeHTTP(w, r)
    }

    func main() {
        http.HandleFunc("/foo", fooHandler)
        http.HandleFunc("/bar", barHandler)
        panic(http.ListenAndServe(":8080", http.HandlerFunc(mainHandler)))
    }

Deployment

Web applications written in Go do not require an external server for process control, since the Go web server itself serves requests concurrently.

So you can start your web server written in Go as-is: the Go web server is production ready.

Of course, you may still put other servers in front of it for additional tasks (e.g. HTTPS handling, authentication/authorization, routing, load balancing across multiple servers).


ListenAndServe starts an HTTP server with the given address and handler. The handler is usually nil, which means use DefaultServeMux . Handle and HandleFunc add handlers to DefaultServeMux .

Take a look at http.Server ; many of its fields are optional and work fine with their default values.

Now let's look at http.ListenAndServe ; it's not complicated at all:

    func ListenAndServe(addr string, handler Handler) error {
        server := &Server{Addr: addr, Handler: handler}
        return server.ListenAndServe()
    }

so the default server is really simple to create.

    func (srv *Server) ListenAndServe() error {
        addr := srv.Addr
        if addr == "" {
            addr = ":http"
        }
        ln, err := net.Listen("tcp", addr)
        if err != nil {
            return err
        }
        return srv.Serve(tcpKeepAliveListener{ln.(*net.TCPListener)})
    }

    func (srv *Server) Serve(l net.Listener) error {
        defer l.Close()
        if fn := testHookServerServe; fn != nil {
            fn(srv, l)
        }
        var tempDelay time.Duration // how long to sleep on accept failure
        if err := srv.setupHTTP2(); err != nil {
            return err
        }
        for {
            rw, e := l.Accept()
            if e != nil {
                if ne, ok := e.(net.Error); ok && ne.Temporary() {
                    if tempDelay == 0 {
                        tempDelay = 5 * time.Millisecond
                    } else {
                        tempDelay *= 2
                    }
                    if max := 1 * time.Second; tempDelay > max {
                        tempDelay = max
                    }
                    srv.logf("http: Accept error: %v; retrying in %v", e, tempDelay)
                    time.Sleep(tempDelay)
                    continue
                }
                return e
            }
            tempDelay = 0
            c := srv.newConn(rw)
            c.setState(c.rwc, StateNew) // before Serve can return
            go c.serve()
        }
    }

It listens on addr, accepts each connection, and then spawns a goroutine to handle each connection independently. (HTTP/2.0 is slightly different, but it's the same in general.)

If you want to manage connections, you have 2 options:

  • Create your own server (it's 3 lines of code) with a Server.ConnState callback and manage client connections from there. (But they will still be accepted by the kernel.)

  • Create your own server with your own implementation of net.Listener (for example, LimitedListener ) and control connections from there; this gives you maximum power over connections.

Since the default http.Server cannot be stopped, the second way is the only way to gracefully terminate listening. You can combine the two methods to implement different strategies, and this has already been done.

