Using an ELK (Elasticsearch, Logstash, Kibana) stack as a central logging platform is a great way to consolidate your logs in one place and get an overview of all kinds of applications and even entire infrastructures. Once the initial question marks around the ELK setup are resolved, this central logging stack (usually) just keeps on running smoothly. But there's a catch when upgrading a long-living ELK stack: the [type] field.
If you've only started using Elasticsearch in the last two years, you probably never heard of or ran into the issue with the "type" field, as its behaviour has been the same since Elasticsearch 6 (released in November 2017). The type field basically defines the type or kind of event inside an Elasticsearch Index. This field must be unique: only one value can exist within the same Index. If events with a type value of "log" are logged into an Index and all of a sudden another event with a type of "event" is to be inserted into this Index, a mapping conflict occurs (see also Confused ElasticSearch refuses to insert data due to mapping conflict).
But before Elasticsearch 6, it was actually possible to have events with multiple type values in the same Index. It was even considered "normal" to treat logging events this way, as the [type] field could be used to specify the origin of the log event; for example "syslog" for events coming from Syslog or "beats" for events coming from Filebeat agents. These events happily co-existed in the same Index, without Elasticsearch complaining about it.
With the release of Elasticsearch 6, the behaviour of the type field changed drastically. This was noted under the breaking changes in the 6.0 release notes:
Multiple mapping types are not supported in indices created in 6.0
The ability to have multiple mapping types per index has been removed in 6.0. New indices will be restricted to a single type. This is the first step in the plan to remove mapping types altogether. Indices created in 5.x will continue to support multiple mapping types.
That's right: the [type] field is exactly the field that determines mapping types.
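A quick way to see this restriction in action is to index two documents with different mapping types into the same Index on an Elasticsearch 6.x node; the second insert is rejected. A rough sketch (index and type names are made up for the demo, and it assumes a local test node on port 9200):

```
# First document creates the index "demo" with mapping type "log":
curl -XPOST 'http://localhost:9200/demo/log/1' \
  -H 'Content-Type: application/json' -d '{"message": "first"}'

# A second document with a different mapping type "event" is rejected
# with an illegal_argument_exception ("... more than 1 type: [log, event]"):
curl -XPOST 'http://localhost:9200/demo/event/2' \
  -H 'Content-Type: application/json' -d '{"message": "second"}'
```

On Elasticsearch 5.x, both inserts would have succeeded.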
When Elasticsearch in our ELK stack was upgraded from 5.x to 6.x we obviously ran into the [type] field problem. But we were able to get around this problem by unifying the field values in Logstash 5.
Inside Filebeat's filebeat.prospectors section, a specific type was defined by using the document_type parameter:
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/*.log
  document_type: beats
This document_type parameter disappeared after Filebeat 5 (and was already marked as deprecated in Filebeat 5.5). From the documentation:
document_type
Deprecated in 5.5. Use fields instead
The event type to use for published lines read by harvesters. For Elasticsearch output, the value that you specify here is used to set the type field in the output document. The default value is log.
At the same time, in the beats input configuration of Logstash 5, the type field was (over)written with one single value, beats:
input {
  beats {
    port => 5044
    type => "beats"
  }
}
Although the documentation of Logstash 5.x mentioned that existing type fields from shippers (such as Filebeat) would not be overwritten, this has (strangely?) always worked for us.
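If relying on the input's type option feels fragile, the same unification could also be done explicitly with a mutate filter. A minimal sketch (the value "beats" is simply our convention from above):

```
filter {
  # Force the type field to a single value, regardless of what the shipper sent:
  mutate {
    replace => { "type" => "beats" }
  }
}
```

Unlike the type option on the beats input, mutate's replace unconditionally overwrites an existing field value.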
But once Logstash was upgraded from 5.x to 6.x, it did not take long before thousands of mapping conflicts showed up in Logstash's log file:
[2020-05-19T12:14:01,866][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2020.05.19", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x3cb4a4a9>], :response=>{"index"=>{"_index"=>"filebeat-2020.05.19", "_type"=>"doc", "_id"=>"chlsLHIB6X04fnom_3Rz", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Rejecting mapping update to [filebeat-2020.05.19] as the final mapping would have more than 1 type: [beats, doc]"}}}}
Obviously, the previous workaround to unify the [type] field no longer works. The filebeat Index contained the previous events with the "beats" type, but Logstash now tried to insert the new events with a "doc" type.
With the upgrade of Logstash from 5.x to 6.x, Logstash's Elasticsearch output changed how it handles the type field, in accordance with Elasticsearch 6.x and above. The relevant information can be found in the documentation of the elasticsearch output plugin:
document_type
Value type is string
There is no default value for this setting.
This option is deprecated
Note: This option is deprecated due to the removal of types in Elasticsearch 6.0. It will be removed in the next major version of Logstash. This sets the document type to write events to. Generally you should try to write only similar events to the same type. String expansion %{foo} works here. If you don’t set a value for this option:
for elasticsearch clusters 7.x and above: the value of _doc will be used;
for elasticsearch clusters 6.x: the value of doc will be used;
for elasticsearch clusters 5.x and below: the event’s type field will be used, if the field is not present the value of doc will be used.
Although it would be possible to manually overwrite this type field (once again) in Logstash's Elasticsearch output configuration using document_type, this would be yet another short-lived workaround, as document_type is marked as deprecated in Logstash 6.x (and as of this writing, Logstash 7.x is the current stable version).
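For completeness, such a (deprecated, short-lived) override would look roughly like this in the elasticsearch output. A sketch only, not a recommendation:

```
output {
  elasticsearch {
    hosts => [ "ES01:9200", "ES02:9200", "ES03:9200" ]
    # Deprecated since Logstash 6.x and slated for removal in a
    # future major version; forces the document type back to "beats":
    document_type => "beats"
  }
}
```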
The documentation explains the situation in a straightforward way: you're using an Elasticsearch 6.x cluster, therefore the type field will have a value of "doc". Hence the mapping conflict right after upgrading Logstash.
In addition to Logstash's changed handling of the type field after the upgrade, Filebeat itself also changed its default value. A Filebeat 5.x event would have a default type of "log", unless the field was manually configured with the document_type parameter, as mentioned above.
When some of the Filebeat agents were upgraded to 7.x, the type field changed its value to "_doc", the same value mentioned in the documentation of Logstash's Elasticsearch output plugin. Since the document_type parameter no longer exists in Filebeat 7.x, there is no way left to adjust the value of the type field.
Although the events from Filebeat 7 now arrived as "_doc" types in Logstash, the final event was logged as "doc" into Elasticsearch 6. Logstash is smart enough to anticipate the mapping conflict, so it rewrites the incoming type field to the default value.
Once all Logstash servers were upgraded from 5.x to 6.x, the Indexes which suffered from a mapping conflict were deleted (meaning half a day of data loss) and then automatically recreated by Logstash. All newly logged events now had the "doc" value in the [type] field. We could have also waited until the next day, as a new Index would have been created on the date change, but as this ELK stack is only used for application log observation, there was no need to forcibly keep these couple of hours of data.
The only remaining problem to solve is how to handle log events differently when the [type] field cannot be touched or altered anymore. But before that, there's an important question on which it all depends: why did we adjust the type field in the first place?
As you can imagine, a central ELK stack can grow huge, and the more applications and log sources are added, the more shards, replicas etc. are around. To ease maintenance and improve performance, we created different Indexes to keep matching events together. For example, logs from HAProxy (collected via Syslog) should have their own Index, access logs from Nginx their own Index, Docker containers their own Index, and so on.
To achieve this, we used the "type" field to determine the kind of incoming log event and used conditionals in a Logstash filter to determine the target Index. Here's a simplified example:
# Depending on the condition, the destination "target_index" changes.
filter {
  if [type] == "haproxy" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "haproxy"
      }
    }
  } else if [type] == "docker" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "docker"
      }
    }
  } else if [type] == "application" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "application"
      }
    }
  } else {
    # Last possible output: send to default index (logstash):
    mutate {
      add_field => {
        "[@metadata][target_index]" => "logstash"
      }
    }
  }
}

output {
  elasticsearch {
    hosts => [ "ES01:9200", "ES02:9200", "ES03:9200" ]
    index => "%{[@metadata][target_index]}-%{+YYYY.MM.dd}"
    ssl_certificate_verification => false
    user => "elastic"
    password => "secret"
  }
}
Instead of the [type] field, other fields can be defined and the Logstash filter adjusted accordingly.
In Filebeat, additional fields can be created using "fields", and by setting the "fields_under_root" parameter to true, these fields appear at the top level of the event (making it easier to create filters):
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: docker
  containers.ids:
    - '*'
  fields:
    my_company_app: docker
  fields_under_root: true
This Filebeat 7 agent now sends Docker container logs with a [type] value of "_doc" (due to version 7). But the events now also contain an additional field [my_company_app] with the value "docker". Last but not least, there's another handy field which may not seem obvious: events coming from Filebeat 7 also contain a field [input][type] with the value "docker".
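Trimmed down to the fields relevant here, such an event arriving from this Filebeat 7 agent might look something like the following sketch (the exact event structure varies by Filebeat version, and the message content is made up):

```
{
  "type": "_doc",
  "my_company_app": "docker",
  "input": {
    "type": "docker"
  },
  "message": "..."
}
```

Both [my_company_app] (our custom field, placed at the top level by fields_under_root) and [input][type] (set by Filebeat itself) can be used in Logstash conditionals.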
Using these other fields, the filters can be slightly adjusted:
# Depending on the condition, the destination "target_index" changes.
filter {
  if [my_company_app] == "haproxy" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "haproxy"
      }
    }
  } else if [my_company_app] == "docker" or [input][type] == "docker" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "docker"
      }
    }
  } else if [my_company_app] == "application" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "application"
      }
    }
  } else {
    # Last possible output: send to default index (logstash):
    mutate {
      add_field => {
        "[@metadata][target_index]" => "logstash"
      }
    }
  }
}

output {
  elasticsearch {
    hosts => [ "ES01:9200", "ES02:9200", "ES03:9200" ]
    index => "%{[@metadata][target_index]}-%{+YYYY.MM.dd}"
    ssl_certificate_verification => false
    user => "elastic"
    password => "secret"
  }
}
Basically, keep your hands off the [type] field and you're good to go.