After I adjusted a field ("response") type in the Logstash configuration to "integer" (which is translated into "long" in Elasticsearch), the field now led to a conflict in Kibana.
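For reference, the change on the Logstash side can be as small as a type suffix in the grok pattern (for example %{NUMBER:response:int}) or a mutate filter. This is just a minimal sketch of the mutate variant, assuming the field arrives as a string; your actual pipeline will differ:

filter {
  # Assumed example: convert the grok-captured "response" field to an integer
  # so that Elasticsearch maps it as "long" in newly created indices.
  mutate {
    convert => { "response" => "integer" }
  }
}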
With a click on the "Conflict warning" icon, Kibana shows which indices have a conflict and why. This clearly shows that, starting with the index "filebeat-2025.04.10", the field type is set to "long"; previously it was "text" (the Elasticsearch default).
This is in line with my change in Logstash. Now I just need to change the mapping in Elasticsearch, too.
But this is easier said than done. As it turns out, already created mappings can't be edited in an index. Sure, you can add a new field with a different type and its mapping to an index, but the mapping of an existing field stays as it is.
Another idea would be to create a so-called "sub-field" of the field, e.g. "response.status". But then I'd have to change all the Logstash grok filters, too. Besides that, it would also break all the monitoring dashboards that look for the "response" field.
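For completeness, adding such a sub-field (a multi-field in Elasticsearch terms) to the existing mapping would be allowed and could look roughly like the sketch below. The sub-field name "status" is just the example from above, and only newly indexed documents would get the new sub-field:

PUT filebeat-2025.04.09/_mapping
{
  "properties": {
    "response": {
      "type": "text",
      "fields": {
        "status": { "type": "long" }
      }
    }
  }
}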
The official documentation clearly mentions:
Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed.
If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.
I quite honestly didn't believe it. There must be a way around it. How can such a trivial alteration be impossible on an index? But, unfortunately, all searches (including AI prompts) led to the same conclusion: You must re-index your data/docs into a new index with the correct mapping. Jeez...
Hint: Have a look at this tutorial video from Elastic, which I found to be a great help.
So let's have a look at the newest index where the "response" field is still mapped as "text" type: The filebeat-2025.04.09 index.
Use the API to retrieve the current mappings of that index. You can use curl:
ck@linux ~ $ curl -X GET http://localhost:9200/filebeat-2025.04.09/_mapping
Or if you use Kibana, you can open the "Dev Tools" and use the "Console" to enter the HTTP requests.
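In the Console the same request boils down to:

GET filebeat-2025.04.09/_mapping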
In the screenshot we can see the result (the mappings) on the right side after a click on the "Play" icon. Here the "response" field is clearly shown with "text" as its type.
Copy the whole content from the result and paste it into the console on the left side. As mentioned in the video, remove the second line with the index name (filebeat-2025.04.09 in my case) and the corresponding closing curly bracket at the second last line.
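To illustrate: the GET result is wrapped in the index name and needs to be reduced to just the "mappings" object (sketch, field list shortened):

// Result of the GET request:
{
  "filebeat-2025.04.09" : {
    "mappings" : {
      "properties" : { ... }
    }
  }
}

// What should remain after removing the wrapper:
{
  "mappings" : {
    "properties" : { ... }
  }
}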
Navigate to the field(s) whose type you need to adjust. In my case it's the "response" field, and I changed it to "long":
"response" : {
"type" : "long",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
Add the create index command (PUT <INDEXNAME>) just on top of the whole mappings JSON, like this:
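(Sketch; only the "response" field is shown here, while the real request keeps the complete "properties" block from the GET output.)

PUT filebeat-2025.04.09-ri2
{
  "mappings" : {
    "properties" : {
      "response" : {
        "type" : "long",
        "fields" : {
          "keyword" : {
            "type" : "keyword",
            "ignore_above" : 256
          }
        }
      }
    }
  }
}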
After a click on the play icon once more, Elasticsearch successfully created the new index "filebeat-2025.04.09-ri2" with the defined mappings.
At this stage we have the original index "filebeat-2025.04.09" (with "response" mapped as "text" and full of data) and the new, still empty index "filebeat-2025.04.09-ri2" (with "response" mapped as "long").
This means we need to re-index the data from the original (source) index into the new (dest) index. This can be done using the _reindex API.
POST _reindex
{
"source": { "index": "filebeat-2025.04.09" },
"dest": { "index": "filebeat-2025.04.09-ri2"}
}
This took quite some time - and eventually an error 400 (request body is required) showed up:
According to a discussion in the Elastic forums, the error is only shown in Kibana due to a timeout between Kibana and Elasticsearch. The re-indexing is actually started and is happening in the background.
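A way to avoid the timeout altogether (not used here, but good to know) is to run the re-index asynchronously and check the progress via the Tasks API, where <TASK_ID> is the task id returned by the first request:

POST _reindex?wait_for_completion=false
{
  "source": { "index": "filebeat-2025.04.09" },
  "dest": { "index": "filebeat-2025.04.09-ri2" }
}

GET _tasks/<TASK_ID>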
This can be verified under "Stack Management" -> "Index Management", where our new and previously empty index "filebeat-2025.04.09-ri2" is slowly but surely getting filled with data:
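The progress can also be followed in the Console instead of the UI, for example:

GET _cat/indices/filebeat-2025.04.09-ri2?v&h=index,docs.count,store.size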
So far this seemed to work... Let's wait until all the data/docs are indexed into our new index.
I was pretty happy to see that the re-indexing still worked, even though Kibana showed an error as output. But after a few minutes I realized something bad: this is bloody slow!
It took several hours to get to 20-30 GB, and the source index is over 80 GB in size...
In the end it took roughly 8 hours (I didn't time it exactly, but that should be more or less accurate) until all 66,514,395 docs were indexed into the new index.
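To verify that the new index really contains all documents, the counts of both indices can be compared:

GET filebeat-2025.04.09/_count
GET filebeat-2025.04.09-ri2/_count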
Now I can delete the source index and set an alias inside Elasticsearch so that "filebeat-2025.04.09" points to "filebeat-2025.04.09-ri2".
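Since an alias can't have the same name as an existing index, the source index has to go first; a sketch of both steps:

DELETE filebeat-2025.04.09

POST _aliases
{
  "actions": [
    { "add": { "index": "filebeat-2025.04.09-ri2", "alias": "filebeat-2025.04.09" } }
  ]
}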
That resolves the conflict for one index. But as you can see from the conflict screenshot at the beginning, there are quite a few more affected indices, so I would be spending a lot of time on this.
So changing the field type in an Elasticsearch index mapping works by re-indexing into a newly created index with adjusted mappings - however the re-indexing is very slow.
This might be OK for smaller indices, maybe up to 5GB I'd say. But here we're talking about an ELK stack with large indices - some are 200GB in size per day.
Considering the days I (or rather the Elasticsearch cluster) would have to spend re-indexing all the conflicting indices, I will probably end up changing the name of the field in Logstash's grok filter and adjusting the Kibana visualizations and dashboards accordingly.
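That change could again be a small mutate filter (or simply a different field name in the grok pattern); "response_code" is just a hypothetical new name here:

filter {
  # Hypothetical example: write the value into a new field with a clean mapping
  # instead of the conflicting "response" field.
  mutate {
    rename => { "response" => "response_code" }
  }
}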
If someone from Elastic reads this one day: it would be very cool to be able to change the type of an already created field in the future. It wouldn't even have to be a real type change; an alias placed on top of the field (similar to how an alias can point to a different index) would already help.