While building an ELK stack for centralized logging and log visualization, I also came across Filebeat. Filebeat is basically a lightweight log shipper: it runs as a daemon on the client, follows the configured log files and forwards new entries to a central receiver.
The first tests using Nginx access logs were quite successful.
Now I wanted to go one step further and deploy Filebeat automatically through an Ansible playbook. This playbook should also configure the logs to be followed, called "prospectors" in Filebeat terminology.
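To illustrate the terminology: a prospector is simply a list entry in /etc/filebeat/filebeat.yml which tells Filebeat which files to follow and how to tag them. A minimal sketch (the path and document_type below are just example values):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/*.log
  document_type: nginx-access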
Well, the following playbook does exactly that. In its current form it checks whether Nginx and/or HAProxy is installed on the target machine and automatically configures the corresponding prospectors as well as the output (a Logstash receiver in my setup):
$ cat /srv/ansible/playbooks/filebeat/filebeat.yml
- name: ANSIBLE - Filebeat installation and configuration by www.claudiokuenzler.com
  hosts: '{{ target }}'
  roles:
    - yaegashi.blockinfile
  sudo: yes
  tasks:
  - name: APT - Add elastic.co key
    apt_key: url="https://artifacts.elastic.co/GPG-KEY-elasticsearch"
    when: ansible_distribution == "Ubuntu"
  - name: APT - Add elastic.co repository
    apt_repository: repo="deb https://artifacts.elastic.co/packages/5.x/apt stable main" filename="elastic-5.x" update_cache=yes
    when: ansible_distribution == "Ubuntu"
  - name: FILEBEAT - Install Filebeat
    apt: pkg=filebeat
    when: ansible_distribution == "Ubuntu"
  - name: FILEBEAT - Copy base filebeat config file
    copy: src=/srv/ansible/setup-files/filebeat/filebeat.yml dest=/etc/filebeat/filebeat.yml
  - name: FILEBEAT - Set shipper name
    lineinfile: "dest=/etc/filebeat/filebeat.yml state=present regexp='^name:' line='name: {{ ansible_hostname }}' insertafter='# Shipper Name'"
  - name: FILEBEAT - Configure Logstash output
    blockinfile:
      dest: /etc/filebeat/filebeat.yml
      insertafter: '# Logstash output'
      marker: "# {mark} -- Logstash output configured by Ansible"
      block: |
        output.logstash:
          hosts: ["logstashreceiver.example.com:5044"]
  - name: FILEBEAT - Check if Nginx is installed
    command: dpkg -l nginx
    register: nginxinstalled
  - name: FILEBEAT - Configure Nginx Logging
    blockinfile:
      dest: /etc/filebeat/filebeat.yml
      insertafter: 'filebeat.prospectors:'
      marker: "# {mark} -- Nginx logging configured by Ansible"
      block: |
        - input_type: log
          paths:
            - /var/log/nginx/*.log
          document_type: nginx-access
    when: nginxinstalled.rc == 0
  - name: FILEBEAT - Check if HAProxy is installed
    command: dpkg -l haproxy
    register: haproxyinstalled
  - name: FILEBEAT - Configure HAProxy Logging
    blockinfile:
      dest: /etc/filebeat/filebeat.yml
      insertafter: 'filebeat.prospectors:'
      marker: "# {mark} -- HAProxy logging configured by Ansible"
      block: |
        - input_type: log
          paths:
            - /var/log/haproxy.log
          document_type: haproxy
    when: haproxyinstalled.rc == 0
  - name: FILEBEAT - Restart filebeat
    service: name=filebeat state=restarted
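One small caveat with the two check tasks: dpkg -l exits non-zero when the queried package is not installed, so on a host without Nginx or HAProxy the command task itself would be reported as failed and abort the play for that host. If you want the playbook to simply skip the corresponding prospector configuration instead, the check can be made non-fatal, roughly like this (a sketch using the standard ignore_errors and changed_when keywords):

  - name: FILEBEAT - Check if Nginx is installed
    command: dpkg -l nginx
    register: nginxinstalled
    ignore_errors: yes
    changed_when: false

The subsequent when: nginxinstalled.rc == 0 condition keeps working as before, and changed_when: false additionally prevents the check from showing up as "changed" on every run.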
Of course this only works when the correct "template" is used (see task "FILEBEAT - Copy base filebeat config file"). It is a minimal config file prepared with a couple of anchor lines which the lineinfile and blockinfile tasks use as insertafter markers:
$ cat /srv/ansible/setup-files/filebeat/filebeat.yml
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
#================================ General =====================================
# Shipper Name
#================================ Outputs =====================================
# Logstash output
#================================ Logging =====================================
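By the way, the values the playbook inserts into this base file don't have to be hard-coded. The Logstash receiver, for example, could come from an Ansible variable (defined in group_vars or passed with --extra-vars); a minimal sketch, where logstash_receiver is a made-up variable name:

  vars:
    logstash_receiver: logstashreceiver.example.com

  tasks:
  - name: FILEBEAT - Configure Logstash output
    blockinfile:
      dest: /etc/filebeat/filebeat.yml
      insertafter: '# Logstash output'
      marker: "# {mark} -- Logstash output configured by Ansible"
      block: |
        output.logstash:
          hosts: ["{{ logstash_receiver }}:5044"]

The same approach would of course also work for the prospector paths.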
I ran the playbook against a test machine (which has both Nginx and HAProxy installed):
$ ansible-playbook playbooks/filebeat.yaml --extra-vars "target=testmachine"
[DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and make sure become_method is 'sudo'
(default).
This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
PLAY [ANSIBLE - Filebeat installation and configuration] ***********************
TASK [setup] *******************************************************************
ok: [testmachine]
TASK [APT - Add elastic.co key] ************************************************
ok: [testmachine]
TASK [APT - Add elastic.co repository] *****************************************
ok: [testmachine]
TASK [FILEBEAT - Install Filebeat] *********************************************
ok: [testmachine]
TASK [FILEBEAT - Copy base filebeat config file] *******************************
changed: [testmachine]
TASK [FILEBEAT - Set shipper name] *********************************************
changed: [testmachine]
TASK [FILEBEAT - Configure Logstash output] ****************
skipping: [testmachine]
TASK [FILEBEAT - Check if Nginx is installed] **********************************
changed: [testmachine]
TASK [FILEBEAT - Configure Nginx Logging] **************************************
changed: [testmachine]
TASK [FILEBEAT - Check if HAProxy is installed] ********************************
changed: [testmachine]
TASK [FILEBEAT - Configure HAProxy Logging] ************************************
changed: [testmachine]
TASK [FILEBEAT - Restart filebeat] *********************************************
changed: [testmachine]
PLAY RECAP *********************************************************************
testmachine : ok=13 changed=8 unreachable=0 failed=0
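A note on the recap: the restart task runs unconditionally, so Filebeat gets restarted on every playbook run even if nothing in the config changed. If you want to avoid that, the restart could be moved into a handler which only fires when one of the config tasks reports a change; a rough sketch:

  handlers:
  - name: restart filebeat
    service: name=filebeat state=restarted

and each config task (copy, lineinfile, blockinfile) gets an additional line:

    notify: restart filebeat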
On the testmachine itself, the Filebeat config was correctly set:
root@testmachine:~# cat /etc/filebeat/filebeat.yml
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# BEGIN -- HAProxy logging configured by Ansible
- input_type: log
  paths:
    - /var/log/haproxy.log
  document_type: haproxy
# END -- HAProxy logging configured by Ansible
# BEGIN -- Nginx logging configured by Ansible
- input_type: log
  paths:
    - /var/log/nginx/*.log
  document_type: nginx-access
# END -- Nginx logging configured by Ansible
#================================ General =====================================
# Shipper Name
name: testmachine
#================================ Outputs =====================================
# Logstash output
# BEGIN -- Logstash output configured by Ansible
output.logstash:
  hosts: ["logstashreceiver.example.com:5044"]
# END -- Logstash output configured by Ansible
#================================ Logging =====================================
Now that Logstash receives these logs and indexes them into Elasticsearch, I am able to see them in Kibana.
Claudio from Switzerland wrote on Jun 28th, 2019:
Satish, thanks for your comment. You're free to adapt the playbook to your liking :)
Satish wrote on Jun 28th, 2019:
Thanks for sharing the playbook for deploying Filebeat on remote machines. Here the paths and hosts fields are hard-coded. Is there a provision to take dynamic values from the user or from some external config file for these fields?
Please let me know. Thanks much
paths:
- /var/log/haproxy.log
hosts: ["logstashreceiver.example.com:5044"]