PostgreSQL failover using poolboy and epgsql

I am building an Erlang application that uses poolboy for connection pooling and epgsql to talk to PostgreSQL.

I would like to handle PostgreSQL failover, and I am wondering how best to structure my application.

Should I (or can I):

  • Have one pool per PostgreSQL server and handle failover above the pool level? That is, when I discover that the primary PG server is down, can I kill its pool? Is there an Erlang-idiomatic way to decide which pool is still alive?
  • Kill and restart workers when I find that the PG server is down? Is there an idiomatic way to do this?
  • Repoint my worker connections from one PG server to another?
  • Something else?
2 answers

With epgsql, when the primary server goes down, the socket connection is closed. Since the connection process is linked to the worker process, the worker terminates and is restarted by its supervisor.

So all you have to do is (in my_worker:init) handle errors from pgsql:connect and connect to the standby server instead:

case pgsql:connect(Primary, Username, Password, Opts) of
    {ok, C} ->
        {ok, #state{conn = C}};
    _Error ->
        %% Primary is unreachable; fall back to the standby.
        {ok, C} = pgsql:connect(Standby, Username, Password, Opts),
        {ok, #state{conn = C}}
end.

In my (admittedly very superficial) testing, this works fine.
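Put together, a poolboy worker along these lines might look like the sketch below. This is my own assumption of how the pieces fit, not the answerer's code: the `hosts`/`username`/`password` proplist keys and the `try_connect/4` helper are illustrative, and the `pgsql` module name matches the older epgsql API used above.

```erlang
%% Sketch of a poolboy worker that falls back to a standby server.
%% Connection parameters and try_connect/4 are illustrative assumptions.
-module(my_worker).
-behaviour(gen_server).
-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2,
         handle_info/2, terminate/2, code_change/3]).

-record(state, {conn}).

start_link(Args) ->
    gen_server:start_link(?MODULE, Args, []).

init(Args) ->
    %% Try each server in order. If every server is down we crash,
    %% and the pool's supervisor restarts the worker, which retries.
    Hosts    = proplists:get_value(hosts, Args),   %% e.g. ["primary", "standby"]
    Username = proplists:get_value(username, Args),
    Password = proplists:get_value(password, Args),
    Opts     = proplists:get_value(opts, Args, []),
    {ok, C} = try_connect(Hosts, Username, Password, Opts),
    {ok, #state{conn = C}}.

try_connect([Host | Rest], Username, Password, Opts) ->
    case pgsql:connect(Host, Username, Password, Opts) of
        {ok, C} ->
            {ok, C};
        _Error when Rest =/= [] ->
            try_connect(Rest, Username, Password, Opts);
        Error ->
            Error
    end.

handle_call({equery, Sql, Params}, _From, #state{conn = C} = State) ->
    {reply, pgsql:equery(C, Sql, Params), State};
handle_call(_Msg, _From, State) ->
    {reply, ok, State}.

handle_cast(_Msg, State) -> {noreply, State}.
handle_info(_Info, State) -> {noreply, State}.
terminate(_Reason, #state{conn = C}) -> pgsql:close(C).
code_change(_OldVsn, State, _Extra) -> {ok, State}.
```

Because the epgsql connection process is linked to the worker, a dropped socket kills the worker, the supervisor restarts it, and init runs the fallback logic again; no extra monitoring code is needed.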


I did something similar to handle failover to a Redis replica (I did not use poolboy); you can compare it with my case:

-record(redis_conn, {
  host,
  port,
  user,
  pwd
}).

-record(state, {
  redis_connections :: dict()
}).

Let's say you open the connections in your init function and store them in the state:

State#state{
  redis_connections=Conn
}.
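The failover step can then be handled in handle_info. The following is only a sketch of that idea under my own assumptions: I assume each connection process is monitored, that the dict maps a name to the connection pid, and that `reconnect/1` is a hypothetical helper that reopens a connection from the stored #redis_conn{} parameters.

```erlang
%% Sketch (my assumption): each connection process is monitored; when one
%% dies, drop it from the dict and store a fresh connection in its place.
handle_info({'DOWN', _Ref, process, Pid, _Reason},
            #state{redis_connections = Conns} = State) ->
    %% Find which named connection died.
    case [K || {K, P} <- dict:to_list(Conns), P =:= Pid] of
        [Name] ->
            %% reconnect/1 is a hypothetical helper that reopens the
            %% connection using the stored #redis_conn{} record.
            NewPid = reconnect(Name),
            {noreply, State#state{
                redis_connections = dict:store(Name, NewPid, Conns)}};
        [] ->
            {noreply, State}
    end;
handle_info(_Info, State) ->
    {noreply, State}.
```

Keeping the connection parameters in the state (rather than only the pids) is what makes this reconnect-in-place approach possible.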


