In a typical ELK setup, application servers send their logs to a central Logstash server, and this Logstash writes the log events into an Elasticsearch index.
But sometimes logs need to be forwarded from one Logstash server to another. A practical example: your applications run on different premises or at different cloud providers, but you still want all the logs in one central place. In such a scenario, a "local" Logstash server can be used as a forwarder to the central Logstash server. The question is: which output plugin should be used to forward the logs?
A couple of years ago, the logstash-forwarder project was created to solve exactly this: Pick up logs and forward them to one or more Logstash servers "listening for our messages".
However, logstash-forwarder was replaced by Filebeat:
The filebeat project replaces logstash-forwarder. Please use that instead.
Interestingly, this project was renamed from "lumberjack" to "logstash-forwarder", as can be read in the README:
This project was recently renamed from 'lumberjack' to 'logstash-forwarder' to make its intended use clear. The 'lumberjack' name now remains as the network protocol, and 'logstash-forwarder' is the name of the program.
We'll come back to the lumberjack name again later...
So if Filebeat should be used instead, is there an actual logstash-output-filebeat plugin available? No, there isn't (as of this writing). The idea behind using Filebeat as a forwarder to another Logstash server would be for the local Logstash to write its logs into local files. These files would then be picked up by Filebeat and sent to the central Logstash server.
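To illustrate this double hop (a rough sketch, with a hypothetical file path): the local Logstash would need a file output such as the following, and Filebeat would then have to tail that file and ship its contents to the central Logstash:

output {
  file {
    # hypothetical spool file; each event is written as one JSON line
    path => "/var/log/logstash/forward.json"
    codec => json_lines
  }
}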
Sounds messy? It is. Not only do you lose time writing logs to disk (by Logstash) and reading them back again (by Filebeat), it also means installing a second daemon (Filebeat) which uses additional resources on that local Logstash server.
It would surely work, but do we really want to involve another daemon? Nope.
The challenge now is to find the right Logstash output plugin. Sure, there are the basic tcp and udp output plugins, which simply connect to a Logstash listener on the given port - but is that the right choice?
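For comparison, a minimal tcp output would look something like this (host and port are just examples). It works, but events are sent fire-and-forget at the application level - there are no acknowledgments such as the lumberjack protocol (see below) provides:

output {
  tcp {
    host => "logstash.example.com"
    port => 6000
    codec => json_lines
  }
}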
Going through the list of Logstash output plugins, one name catches the eye: lumberjack.
Wait, wasn't lumberjack the original name of the logstash-forwarder? A quick read of the lumberjack output plugin documentation doesn't really reveal a use case for this output, but it's certainly worth a try.
The lumberjack output basically speaks the lumberjack protocol - the same protocol that lives on in Filebeat and the beats input (as mentioned above). This means: our local Logstash can use the lumberjack output and our central Logstash can use the beats input to receive these log events.
In order to use the lumberjack output, an SSL certificate must be exchanged between the local and the central Logstash servers. The ssl_certificate parameter is required; the output won't work without it. The SSL key and certificate need to be created on the central (receiving) Logstash server and configured in the beats input definition. On the local (sending) Logstash, it's enough to use the same SSL certificate.
To create a very basic SSL key and certificate on the central Logstash server (make sure to set CN to your central Logstash FQDN):
root@central:~# openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout lumberjack.key -out lumberjack.cert -subj /CN=logstash.example.com
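Note that openssl creates this certificate with a validity of only 30 days by default (see the comments below: an expired certificate breaks the lumberjack communication). To avoid this, the -days parameter can be added, for example for roughly ten years:

root@central:~# openssl req -x509 -batch -nodes -newkey rsa:2048 -days 3650 -keyout lumberjack.key -out lumberjack.cert -subj /CN=logstash.example.com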
This command creates two files: lumberjack.key (the private key) and lumberjack.cert (the self-signed certificate).
Move both files into /etc/logstash/ and configure a new beats input with the paths to these files:
# Beats input for collecting logs from other Logstash servers using lumberjack
input {
  beats {
    codec => json
    port => 6000
    ssl => true
    ssl_certificate => "/etc/logstash/lumberjack.cert"
    ssl_key => "/etc/logstash/lumberjack.key"
  }
}
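After a restart of the central Logstash, the beats input listens on the configured port. The certificate (not the key) must then be copied to the local Logstash server as well, for example with scp (the local hostname here is just an example):

root@central:~# systemctl restart logstash
root@central:~# scp /etc/logstash/lumberjack.cert root@logstash-local.example.com:/etc/logstash/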
On the sending side, our "local" Logstash, the lumberjack output can be defined, using the same SSL certificate:
output {
  lumberjack {
    codec => json
    hosts => ["logstash.example.com"]
    ssl_certificate => "/etc/logstash/lumberjack.cert"
    port => 6000
  }
}
Don't forget to install the lumberjack output plugin as it is not part of the default output plugins:
root@local:~# /usr/share/logstash/bin/logstash-plugin install --no-verify logstash-output-lumberjack
Installing logstash-output-lumberjack
Installation successful
Restart Logstash and that's it. As long as your local Logstash can reach the central Logstash on the given TCP port (here tcp/6000), log events should now arrive on the central Logstash server and be stored in the Elasticsearch index.
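If no events show up, the TLS connection can be verified manually from the local Logstash server (hostname is an example); the output should show the certificate and end with a successful verify return code:

root@local:~# openssl s_client -connect logstash.example.com:6000 -CAfile /etc/logstash/lumberjack.cert </dev/null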
ck from Switzerland wrote on Oct 27th, 2020:
Cliff, you could export your local hostname as an environment variable and then use it in a Logstash mutate filter to add a field "logstash_forwarder". First create an environment variable:
ckadm@mintp ~ $ export myhost=$(hostname)
ckadm@mintp ~ $ env|grep myhost
myhost=mintp
filter {
  mutate {
    add_field => { "logstash_forwarder" => "${myhost}" }
  }
}
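Note that the variable must be visible to the Logstash process itself, not just your shell. When Logstash runs under systemd on a Debian-based system (assuming the unit reads /etc/default/logstash), one way is:

root@local:~# echo "myhost=$(hostname)" >> /etc/default/logstash
root@local:~# systemctl restart logstash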
Cliff from Australia wrote on Oct 27th, 2020:
Any idea how to get the details (ip/hostname) of the forwarding logstash instance added to the lumberjack output?
I know I can use a mutate filter to add fields, but for the life of me I cannot find any fields/variables that contain the details of the forwarding logstash instance.
ck from Switzerland wrote on Aug 5th, 2020:
Hi Yan. Yes, I would even recommend using certificates from your own CA. I just created self-signed certificates here as an example. Something else which should be mentioned: The certificate created in the article was only valid for 30 days (by default) - we ran into communication issues using the Lumberjack output once the certificate expired. We then replaced it with a 10y certificate.
Yan wrote on Aug 4th, 2020:
Hey thanks for the useful post.
Just wondering: do you know if I can use certificates from my own CA instead of the self-signed ones? I actually tried to replace them but I'm getting certificate-related errors.
Jamsux from Salvador / Bahia / Brazil wrote on Jul 17th, 2020:
Hello!
I really liked this post. Very well written.
Congratulations man!