How should I handle remote logging using systemd?

I am running multiple instances of CoreOS on Google Compute Engine (GCE). CoreOS uses the systemd journal for logging. How can I ship all logs to a remote destination? As far as I understand, the systemd journal does not have remote logging capabilities out of the box. My current workaround is as follows:

journalctl -o short -f | ncat <addr> <port>

For https://logentries.com, using token-based input over TCP:

 journalctl -o short -f | awk '{ print "<token>", $0; fflush(); }' | ncat data.logentries.com 10000 

Are there any better ways?

EDIT: https://medium.com/coreos-linux-for-massive-server-deployments/defb984185c5

5 answers

systemd version 216 includes remote logging capabilities through a pair of client/server processes.

http://www.freedesktop.org/software/systemd/man/systemd-journal-remote.html
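For example, a minimal push setup might look like this (the host name is a placeholder; 19532 is the default port systemd-journal-remote listens on):

    # /etc/systemd/journal-upload.conf on each sending node
    [Upload]
    URL=http://logs.example.com:19532

    # On the receiving host:
    systemctl enable systemd-journal-remote.socket
    systemctl start systemd-journal-remote.socket

    # Back on each sender:
    systemctl start systemd-journal-upload.service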


The downside of using -o short is that the format is difficult to parse; short-iso is better. If you use the ELK stack, exporting as JSON is better still. A systemd service such as the following is good enough to ship JSON logs to a remote host.

    [Unit]
    Description=Send Journalctl to Syslog

    [Service]
    TimeoutStartSec=0
    ExecStart=/bin/sh -c '/usr/bin/journalctl -o json -f | /usr/bin/ncat syslog 515'
    Restart=always
    RestartSec=5s

    [Install]
    WantedBy=multi-user.target
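Assuming you save the unit as /etc/systemd/system/journal-export.service (the name is an arbitrary choice of mine), activate it with:

    systemctl daemon-reload
    systemctl enable journal-export.service
    systemctl start journal-export.service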

On the receiving side, my logstash.conf includes:

    input {
      tcp {
        port => 1515
        codec => json_lines
        type => "systemd"
      }
    }

    filter {
      if [type] == "systemd" {
        mutate { rename => [ "MESSAGE", "message" ] }
        mutate { rename => [ "_SYSTEMD_UNIT", "program" ] }
      }
    }

This makes the entire journal record structure available to Kibana/Elasticsearch.
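For reference, here is a trimmed, made-up example of what journalctl -o json emits per line; MESSAGE and _SYSTEMD_UNIT are the journal's native field names, which is why the filter above renames them:

    { "__REALTIME_TIMESTAMP" : "1418632800000000",
      "_HOSTNAME" : "core-01",
      "_SYSTEMD_UNIT" : "docker.service",
      "PRIORITY" : "6",
      "MESSAGE" : "Started Docker Application Container Engine." }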


Kelsey Hightower's journal-2-logentries worked great for us: https://logentries.com/doc/coreos/

If you want to enable and start the units without fleet:

    #!/bin/bash
    #
    # Requires the Logentries Token as Parameter
    if [ -z "$1" ]; then
      echo "You need to provide the Logentries Token!"
      exit 0
    fi

    cat << "EOU1" > /etc/systemd/system/systemd-journal-gatewayd.socket
    [Unit]
    Description=Journal Gateway Service Socket

    [Socket]
    ListenStream=/run/journald.sock
    Service=systemd-journal-gatewayd.service

    [Install]
    WantedBy=sockets.target
    EOU1

    cat << EOU2 > /etc/systemd/system/journal-2-logentries.service
    [Unit]
    Description=Forward Systemd Journal to logentries.com
    After=docker.service
    Requires=docker.service

    [Service]
    TimeoutStartSec=0
    Restart=on-failure
    RestartSec=5
    ExecStartPre=-/usr/bin/docker kill journal-2-logentries
    ExecStartPre=-/usr/bin/docker rm journal-2-logentries
    ExecStartPre=/usr/bin/docker pull quay.io/kelseyhightower/journal-2-logentries
    ExecStart=/usr/bin/bash -c \
      "/usr/bin/docker run --name journal-2-logentries \
       -v /run/journald.sock:/run/journald.sock \
       -e LOGENTRIES_TOKEN=$1 \
       quay.io/kelseyhightower/journal-2-logentries"

    [Install]
    WantedBy=multi-user.target
    EOU2

    systemctl enable systemd-journal-gatewayd.socket
    systemctl start systemd-journal-gatewayd.socket
    systemctl start journal-2-logentries.service

    rm -f $0
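Assuming you saved the script as setup-journal-2-logentries.sh (the file name is arbitrary), run it once per host with your token; note that the script deletes itself when done (the rm -f $0 at the end):

    chmod +x setup-journal-2-logentries.sh
    ./setup-journal-2-logentries.sh <your-logentries-token>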

A recent Python package that has been useful to me: journalpump

It supports Elasticsearch, Kafka, and Logplex outputs.
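Getting started is roughly the following (the config path is my own choice, and the config file's JSON schema plus the exact invocation are documented in the journalpump README rather than reproduced here):

    pip install journalpump
    # journalpump reads a JSON config file; the path below is illustrative
    journalpump /etc/journalpump/journalpump.json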


You can also use the rsyslog-kafka module inside Rsyslog.

    Rsyslog with modules:
      - imfile  - input from a file
      - omkafka - output to Kafka

Define a JSON template and ship the messages to Apache Kafka. Once the logs are in Kafka ...
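A minimal sketch of such a configuration (the broker address, topic name, and template fields are my assumptions, adjust them to your setup):

    # /etc/rsyslog.d/kafka.conf, hypothetical example
    module(load="omkafka")

    # Flatten each message into one JSON line; the field choice is illustrative
    template(name="json_template" type="string"
             string="{\"timestamp\":\"%timereported:::date-rfc3339%\",\"host\":\"%hostname%\",\"program\":\"%programname%\",\"message\":\"%msg:::json%\"}")

    # Ship every message to the Kafka topic "system-logs"
    action(type="omkafka" broker=["kafka1:9092"] topic="system-logs" template="json_template")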

