Is it OK for rsyslogd to use 170 MB of memory?

One of my sites is very slow, and using the top command I can see that rsyslogd is using 170 MB of memory - is that normal?

If not, how can I limit how much memory rsyslogd uses, or how often it runs?

+4
3 answers

Yes and no. Typically rsyslog is used with a file/disk queue mode: it caches entries in a buffer and writes them out in blocks, instead of inefficiently opening and closing the file for every single line, which cuts down on lots of small, unnecessary disk accesses.

The problem is that it allocates roughly a 10 MB buffer for each file it logs to, so 20 log files means 200+ MB. You can always reduce the number of log files, but you can also shrink the buffer size if you are not on a RAID system (which favors large blocks) or under high demand. The documentation is here: http://www.rsyslog.com/doc/v8-stable/concepts/queues.html#disk-queues - use "$<object>QueueMaxFileSize" to reduce the size of each buffer. Reducing it to 4 MB can bring the total down to roughly 70 MB.
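For example, a minimal sketch of how that cap could look using the legacy directive syntax (the queue objects and the 4m value here are illustrative, not prescribed by the answer above - adjust them to the queues you actually use):

    # /etc/rsyslog.conf - illustrative snippet, not a complete config
    # Cap the spool file size of the main message queue (value is an example)
    $MainMsgQueueMaxFileSize 4m
    # The same knob exists for per-action queues
    $ActionQueueMaxFileSize 4m

Restart rsyslogd after editing the config so the new limits take effect.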

+3

It looks like you have processes logging too much information. You could just look at the logs, see who is writing all of it, and see if you can make them stop. I've seen gigabytes land in the logs when a program has a recurring error that makes it log the same error message thousands of times per second. Seriously, check the logs and see what the hell rsyslogd is writing.
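For instance, a quick way to spot the culprit (the paths are assumptions for a Debian-style layout - adjust for your distribution):

    # List the largest files under /var/log to see which log is growing
    du -sh /var/log/* 2>/dev/null | sort -rh | head -20

    # Watch the main syslog file live to catch a program that is flooding it
    tail -f /var/log/syslog    # or /var/log/messages on RHEL-style systems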

+2

There is no "frequency" at which rsyslogd "runs", because it is a daemon that provides the logging facility. As Robert S. Barnes pointed out, you are better off checking the logs to find the application that is clogging rsyslogd (ha!). Log file names are OS-dependent, but they are most likely under /var/log and its subdirectories. I have seen rsyslogd consume relatively large amounts of memory, but 170 MB is excessive and abnormal.
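As a quick sanity check of what rsyslogd is actually consuming (the column selection here is just one convenient option):

    # Show rsyslogd's resident (RSS) and virtual (VSZ) memory usage, in KB
    ps -C rsyslogd -o pid,rss,vsz,cmd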

Shameless off-topic note: I had my serverfault and stackoverflow tabs open next to each other, and to be honest I was 100% sure I was posting on Server Fault right up until I submitted the answer (that should probably be a hint for you) :P

+1
