Is it normal for a Rails application to maintain this many idle Puma and Postgres processes?

I have a Rails application running under Puma, with DelayedJob for background jobs. I ran several load tests (many simultaneous requests, etc.), and when I looked at htop I saw a large number of processes, which made me suspect that Puma was leaking processes or not killing them off. On the other hand, this may be perfectly normal behavior. The memory usage also looked suspicious to me.

I have 2 Puma workers in the Rails configuration and 2 DelayedJob workers.
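For context, the relevant parts of the setup look roughly like this (a sketch; the exact values and paths may differ slightly from my real files):

# config/puma.rb
workers 2          # two Puma worker processes (cluster mode)
preload_app!       # boot the app in the master process, then fork the workers

# DelayedJob is started as two separate daemons, roughly:
#   RAILS_ENV=production bin/delayed_job -n 2 start
# which is what shows up in htop as delayed_job.0 and delayed_job.1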

Can someone with Puma experience confirm or rule out my memory-leak suspicion?

CPU[|                                                    1.3%]   Tasks: 54, 19 thr; 1 running
Mem[|||||||||||||||||||||||||||||||||||||||||||   746/1652MB]   Load average: 0.02 0.03 0.05
Swp[                                                 0/2943MB]   Uptime: 1 day, 12:48:05

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 1024 admin      20   0  828M  183M  3840 S  0.0 11.1  0:00.00 puma: cluster worker 0: 819
 1025 admin      20   0  828M  183M  3840 S  0.0 11.1  0:00.00 puma: cluster worker 0: 819
 1026 admin      20   0  828M  183M  3840 S  0.0 11.1  0:02.68 puma: cluster worker 0: 819
 1027 admin      20   0  828M  183M  3840 S  0.0 11.1  0:00.43 puma: cluster worker 0: 819
 1028 admin      20   0  828M  183M  3840 S  0.0 11.1  0:07.04 puma: cluster worker 0: 819
 1029 admin      20   0  828M  183M  3840 S  0.0 11.1  0:00.05 puma: cluster worker 0: 819
 1022 admin      20   0  828M  183M  3840 S  0.0 11.1  0:13.23 puma: cluster worker 0: 819
 1034 admin      20   0  829M  178M  3900 S  0.0 10.8  0:00.00 puma: cluster worker 1: 819
 1035 admin      20   0  829M  178M  3900 S  0.0 10.8  0:00.00 puma: cluster worker 1: 819
 1037 admin      20   0  829M  178M  3900 S  0.0 10.8  0:02.68 puma: cluster worker 1: 819
 1038 admin      20   0  829M  178M  3900 S  0.0 10.8  0:00.44 puma: cluster worker 1: 819
 1039 admin      20   0  829M  178M  3900 S  0.0 10.8  0:07.12 puma: cluster worker 1: 819
 1040 admin      20   0  829M  178M  3900 S  0.0 10.8  0:00.00 puma: cluster worker 1: 819
 1033 admin      20   0  829M  178M  3900 S  0.0 10.8  0:14.28 puma: cluster worker 1: 819
 1043 admin      20   0  435M  117M  3912 S  0.0  7.1  0:00.00 delayed_job.0
 1041 admin      20   0  435M  117M  3912 S  0.0  7.1  0:52.71 delayed_job.0
 1049 admin      20   0  435M  116M  3872 S  0.0  7.1  0:00.00 delayed_job.1
 1047 admin      20   0  435M  116M  3872 S  0.0  7.1  0:52.98 delayed_job.1
 1789 postgres   20   0  125M 10964  7564 S  0.0  0.6  0:00.26 postgres: admin app_production_ [local] idle
 1794 postgres   20   0  127M 11160  6460 S  0.0  0.7  0:00.18 postgres: admin app_production_ [local] idle
 1798 postgres   20   0  125M 10748  7484 S  0.0  0.6  0:00.24 postgres: admin app_production_ [local] idle
 1811 postgres   20   0  127M 10996  6424 S  0.0  0.6  0:00.11 postgres: admin app_production_ [local] idle
 1817 postgres   20   0  127M 11032  6460 S  0.0  0.7  0:00.12 postgres: admin app_production_ [local] idle
 1830 postgres   20   0  127M 11032  6460 S  0.0  0.7  0:00.14 postgres: admin app_production_ [local] idle
 1831 postgres   20   0  127M 11036  6468 S  0.0  0.7  0:00.20 postgres: admin app_production_ [local] idle
 1835 postgres   20   0  127M 11028  6460 S  0.0  0.7  0:00.06 postgres: admin app_production_ [local] idle
 1840 postgres   20   0  125M  7288  4412 S  0.0  0.4  0:00.04 postgres: admin app_production_ [local] idle
 1847 postgres   20   0  125M  7308  4432 S  0.0  0.4  0:00.06 postgres: admin app_production_ [local] idle
 1866 postgres   20   0  125M  7292  4416 S  0.0  0.4  0:00.06 postgres: admin app_production_ [local] idle
 1875 postgres   20   0  125M  7300  4424 S  0.0  0.4  0:00.04 postgres: admin app_production_ [local] idle
1 answer

If the number of processes matches your concurrency configuration, I would say this is normal; if it keeps growing with every request, then you may have a problem with processes hanging around. The default thread count for Puma is, I believe, 16. It also looks like you are running in cluster mode, so there will be several processes and several threads within each process.
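To make the arithmetic concrete, here is a rough sketch of what a cluster-mode setup like yours typically looks like and how it maps onto your htop output (the file contents and values below are an illustration, not taken from your app):

# config/puma.rb -- example values for illustration
workers 2        # 1 "puma" master process + 2 "puma: cluster worker N" children
threads 0, 16    # up to 16 threads per worker; with htop's thread display on,
                 # each thread shows up as its own row, which is why each worker
                 # appears several times in your listing ("Tasks: 54, 19 thr")

# config/database.yml (production section, abbreviated)
# pool: 16       # ActiveRecord can open up to `pool` connections per process,
                 # so 2 Puma workers plus 2 delayed_job daemons can legitimately
                 # leave a pile of "postgres: ... idle" backends sitting around

So as long as those numbers stay roughly constant between load tests rather than climbing, this is expected behavior rather than a leak.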

