PostgreSQL Profiling

I am looking into an application that runs on top of PostgreSQL.

CPU usage is consistently above 50% on a modern Xeon with 4 GB of RAM. Of that load, roughly 67% is user time and 33% is system time (this is a Linux machine). The system spends essentially no time waiting on I/O.

I am wondering how I can work out how this CPU time breaks down.

The queries are mostly ad-hoc SQL (no prepared statements), from what I can see.

Do you think this user CPU time could be significantly reduced by moving to prepared statements? That is, could SQL parsing time, query planning time, and so on account for this much CPU? Some of the queries run to 500-1,000+ characters.

Can anyone confirm whether PostgreSQL automatically normalizes ad-hoc queries and caches query plans for them, which would make them roughly as efficient as prepared statements (plus the SQL parsing time)?

I will probably use higher-level caching to address this, but I am curious whether anyone thinks porting this application to prepared statements would be worthwhile.

2 Answers

Assuming you VACUUM the database regularly (neglecting that is a classic source of PostgreSQL performance problems), I think you will get the biggest wins by:

a) tuning the installation for the hardware it is running on, and

b) analyzing each query to see whether it can be optimized further (a minimal sketch of that follows below).
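
As a rough illustration of (b), this is the kind of per-query check that helps; the table name and filter below are hypothetical stand-ins for your own queries:

    -- Keep dead-tuple cleanup and planner statistics current
    -- ("orders" is a hypothetical example table).
    VACUUM ANALYZE orders;

    -- Show how a specific query is actually executed and where the time goes.
    EXPLAIN ANALYZE
    SELECT *
    FROM orders
    WHERE customer_id = 42;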

I really do not think you will gain much by moving the queries to prepared statements.


One trick you may not have seen yet is to watch your system with "top -c". With that option you can see what each active PostgreSQL process is doing.
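
If you would rather watch from inside the database, pg_stat_activity exposes similar information; note that the column names below assume a reasonably modern release (9.2 or later), while older versions use procpid and current_query instead:

    -- List what every non-idle backend is currently executing
    -- (column names as of PostgreSQL 9.2 and later).
    SELECT pid, usename, state, query
    FROM pg_stat_activity
    WHERE state <> 'idle';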

Query plans are not cached by the database in any way outside of prepared statements. In any case, unless you reuse similar queries regularly, prepared statements are unlikely to reduce your query time. They can even make things worse if they leave the optimizer with less information to work with, because the plan is built before all the details of what it will actually run against are known. A 1,000-character query is nowhere near a large one, and unless you have hundreds of connections at once, the problem is very unlikely to be parsing or query planning. It is far more likely to be locking issues, poor VACUUM practice leading to bloated data that has to be scanned to get anything done (very easy to run into on 8.1), slow constraints, excessive indexes, or a design that does not account for the overhead of constantly shuffling things around in memory. Parse and plan overhead is very low on the suspect list.
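
For reference, this is the shape of the prepared-statement reuse being discussed; the table and parameter are hypothetical, and any benefit depends entirely on how often the same statement is re-executed within a session:

    -- Ad-hoc form: parsed and planned on every execution.
    SELECT * FROM orders WHERE customer_id = 42;

    -- Prepared form: parsed once per session; the plan can be reused
    -- across executions ("orders" is a hypothetical example table).
    PREPARE orders_by_customer (integer) AS
        SELECT * FROM orders WHERE customer_id = $1;

    EXECUTE orders_by_customer (42);
    EXECUTE orders_by_customer (97);

    DEALLOCATE orders_by_customer;

This is also where the caveat above applies: a plan built before the parameter values are known can end up worse than one planned fresh for each execution.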

And if you do have hundreds of connections, you should consider using a connection pooler. PostgreSQL's per-connection process creation is fairly heavyweight, and it does not hold up well on its own in that environment.

Shoot, you are running such an old point release of 8.1 that you may simply be hitting a bug; 8.1.4 is full of them. 8.1.19 is the current 8.1 release, and even 8.3.5 is several useful point releases behind the current one. See the PostgreSQL versioning policy for details on why running an older version is a bigger risk than upgrading in almost every situation.
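
If you want to confirm exactly what you are running before planning an upgrade, the server will tell you:

    -- Report the exact server version string.
    SELECT version();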

