Logs from load-balanced servers

Sorry if this sounds like a basic question, but I'm new to web development.

We load balance across several servers. The applications are configured to log using log4j, and each one writes its log files to its own server. Investigating a problem therefore means collecting logs from all of these servers, which is tedious, and means going through Ops, who control the load balancing and introduce delays.

Is this the norm for web applications? Or are there simple solutions for consolidating logging in one place? What are the standard logging approaches that are easily accessible to developers?

+6
logging log4j
5 answers

Log to SQL using the JDBC appender (or an alternative) instead of files.
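As a sketch, a log4j 1.x configuration for the JDBC appender could look like the following. The connection URL, driver, table, and column names here are placeholders, not anything from the answer:

```properties
# Route everything to a database table instead of local files.
log4j.rootLogger=INFO, db
log4j.appender.db=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.db.URL=jdbc:mysql://loghost:3306/logs
log4j.appender.db.driver=com.mysql.jdbc.Driver
log4j.appender.db.user=logwriter
log4j.appender.db.password=secret
# The SQL string is run through PatternLayout, so conversion patterns work here.
log4j.appender.db.sql=INSERT INTO app_log (created, level, logger, message) VALUES ('%d{yyyy-MM-dd HH:mm:ss}', '%p', '%c', '%m')
```

Worth knowing: the stock JDBCAppender is fairly bare-bones (it does not escape quotes inside `%m`, for example), which is one reason alternatives exist.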

+3

There are a number of kinds of logging you can do, some of which are available automatically.

Some types:

  • Logging to the machine's built-in logs (event logs or similar).
    Arrange access so that you can read them remotely and correlate / explore as needed.
  • Logging by server applications, usually to text files on the local machine (IIS and the like).
    Get access to the folders so you can analyze them yourself.
  • Your own application logging.
    I recommend logging to a database (though it needs regular trimming / summarizing).
    If logging to a database is not possible, log to files.
    Note: this can affect performance, so be careful how many entries you write.

If Ops do not want to give you direct access, see if they can dump these files somewhere you can reach.

+2

Log4J has both a JMS appender (so you can send logs to a message queue, which is not as silly as it sounds, depending on how much processing you need to do) and a syslog appender (local or remote). Either will help you collect logs in one place. The syslog appender may be the best choice if you just want everything funneled to one place, since Unix-ish systems have been running syslog for a very long time and there is plenty of stable tooling you can build on.
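A minimal sketch of the syslog route in log4j 1.x properties (the host name, facility, and pattern are assumptions):

```properties
# Send all events to a central syslog host (UDP port 514 by default).
log4j.rootLogger=INFO, syslog
log4j.appender.syslog=org.apache.log4j.net.SyslogAppender
log4j.appender.syslog.SyslogHost=loghost.example.com
log4j.appender.syslog.Facility=LOCAL0
log4j.appender.syslog.layout=org.apache.log4j.PatternLayout
log4j.appender.syslog.layout.ConversionPattern=%p [%t] %c: %m%n
```

Each server in the farm points at the same SyslogHost, so one machine ends up with all the logs.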

Logging to a database can be hard to scale depending on your traffic, unless you are smart about batching inserts. I would recommend keeping this material in flat files (suitably rotated, of course), so that you retain the flexibility to import them into a database later, or to experiment with things like Hadoop (many of its examples are based on log-file analysis) — provided you can justify that complexity, of course.
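To illustrate the batching point, here is a small sketch (not from the answer; the batch size and line format are made up) of grouping parsed log lines into fixed-size batches, the way you would feed JDBC's `addBatch()` / `executeBatch()` rather than issuing one INSERT per line:

```java
import java.util.*;

public class BatchImportSketch {
    static final int BATCH_SIZE = 3; // illustrative; real importers use hundreds or more

    // Split the lines into consecutive batches of at most BATCH_SIZE.
    static List<List<String>> toBatches(List<String> lines) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += BATCH_SIZE) {
            batches.add(lines.subList(i, Math.min(i + BATCH_SIZE, lines.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
            "2009-01-01 INFO start", "2009-01-01 WARN slow", "2009-01-01 INFO ok",
            "2009-01-01 ERROR fail", "2009-01-01 INFO done");
        // In a real importer, each batch would be one executeBatch() round trip.
        System.out.println(toBatches(lines).size() + " batches");
    }
}
```

Each batch then costs one database round trip instead of one per line, which is where the scaling headroom comes from.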

+2

We have a web farm with reliable logging, and here is how it is implemented.

Each web application generates logging events. Using MSMQ, these messages are sent to a private queue on a separate machine. An application on that machine dequeues the messages and writes them to a SQLite database.

Using MSMQ decouples the web application from the log server. If the log server is unreachable, messages queue up on the web server until the connection is restored; MSMQ handles delivering them to the destination server. This way, the website can keep doing its work without interruption.
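The decoupling idea can be sketched in-process with a blocking queue standing in for the MSMQ private queue, and a consumer thread standing in for the application that writes to the database (all names here are illustrative, not the poster's actual code):

```java
import java.util.*;
import java.util.concurrent.*;

public class LogQueueSketch {
    static List<String> run() throws Exception {
        // Stand-in for the MSMQ private queue on the log machine.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        // Stand-in for the SQLite logging database.
        List<String> store = Collections.synchronizedList(new ArrayList<>());

        // Consumer: dequeues messages and persists them.
        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = queue.take()).equals("STOP")) {
                    store.add(msg); // real setup: INSERT into the log database
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        // Producer (the web app): enqueues and returns immediately,
        // never blocking a request on the database write.
        for (int i = 0; i < 5; i++) queue.put("event-" + i);
        queue.put("STOP"); // sentinel to shut down the sketch
        consumer.join();
        return store;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run().size() + " events persisted");
    }
}
```

The point of the design is that the producer side never waits on the store; MSMQ adds the extra property that the queue survives the log server being offline entirely.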

The log server has its own web interface for querying the logging database and can also receive log messages from other applications.

We assign a classification to each message. For messages classified as fatal errors, the log server automatically generates an email to the support team. Other non-fatal and trace messages are only written to the database, for summary reporting.

+2

One possible way to make the logs easier to access is to write them to a shared drive using NFS. Each server can write to its own directory, with all of those directories visible on the machine where you want to analyze the logs.

+1
