Database freezes when not in use

I have a Go web application backed by Postgres. It works fine at startup, but if I leave it idle for a while (say, an hour) and then hit it with another request, the query hangs. I thought about closing the connection after each request and opening a new one, but the database/sql documentation explicitly says: "It is rare to Close a DB, as the DB handle is meant to be long-lived and shared between many goroutines." What am I doing wrong?

    package main

    import (
        "database/sql"
        "log"
        "net/http"

        _ "github.com/lib/pq"
    )

    var Db *sql.DB

    func main() {
        var err error
        Db, err = sql.Open("postgres", "user=me password=openupitsme host=my.host.not.yours dbname=mydb sslmode=require")
        if err != nil {
            log.Fatal("Cannot connect to db: ", err)
        }
        http.HandleFunc("/page", myHandler)
        http.ListenAndServe(":8080", nil)
    }

    func myHandler(w http.ResponseWriter, r *http.Request) {
        log.Println("Handling Request....", r)
        query := `SELECT pk FROM mytable LIMIT 1`
        rows, err := Db.Query(query)
        if err != nil {
            log.Println(err)
            return // rows is nil on error; deferring rows.Close() would panic
        }
        defer rows.Close()
        for rows.Next() {
            var pk int64
            if err := rows.Scan(&pk); err != nil {
                log.Println(err)
            }
            log.Println(pk)
        }
        log.Println("Request Served...")
    }

EDIT #1: In my postgres log:

    2015-07-08 18:10:01 EDT [7710-1] user@here LOG:  could not receive data from client: Connection reset by peer
    2015-07-08 18:20:01 EDT [7756-1] user@here LOG:  could not receive data from client: Connection reset by peer
1 answer

I ran into similar problems. In our case, the cause was a connection-tracking firewall sitting between the client machine and the database server.

Such firewalls track connections at the TCP level and, to limit resource usage, drop connections that have been idle for a long period. The symptoms we observed were very similar to yours: on the client side the connection appears to hang, and on the server side you see connection reset by peer.

One way to prevent this is to ensure that TCP keepalives are enabled and that the keepalive interval is shorter than the idle timeout of whatever firewall, router, etc. is dropping the connections. This is controlled by the libpq connection parameters keepalives, keepalives_idle, keepalives_interval and keepalives_count, which you can set in the connection string. See the manual for a description of these options.

  • keepalives determines whether keepalives are enabled at all. The default is 1 (enabled), so you probably don't need to specify this.
  • keepalives_idle determines the amount of idle time before a keepalive is sent. If you don't specify it, the operating system default is used.

    On a Linux system you can see the default by examining /proc/sys/net/ipv4/tcp_keepalive_time; it is 7200 seconds on my server, which would be too long in your case, since you observe the connection being dropped after ~1 hour.

    You could try setting it to, say, 2500 seconds.

The Linux Documentation Project has a useful TCP Keepalive HOWTO that describes how they work in detail.

Note that not all operating systems support TCP keepalives. If you cannot enable them, here are some other options you might consider:

  • If it is under your control, reconfigure the firewall/router that is dropping the connections so that it leaves PostgreSQL client connections alone.

  • At the application level, you could generate some traffic that keeps the DB handles active, for example by issuing a statement like SELECT 1; every hour or so. If your programming environment pools connections (as, from the comments, it sounds like yours does), this can be tricky, since you cannot easily guarantee the keepalive traffic touches every pooled connection.
