I was using the Filebeat-Logstash-Elasticsearch-Kibana stack with Elasticsearch 5.1 on AWS, and I was running out of space because it keeps writing logs. I was sending a large amount of logs to ES, and I just want to delete my old data, whatever I send to Elasticsearch, after 15 or 20 days. Is there any chance to delete old data? I want to run something in cron, or in a setting, to delete logs automatically every 15 days, 20 days, or 1 month. Thanks in advance.

Are you talking about Elasticsearch's own log files, or logs indexed in Elasticsearch and searchable using Kibana?

I was talking about logs indexed in Elasticsearch and searchable using Kibana.

Curator is normally the go-to solution for that, but AWS made their version of ES incompatible with it. Even though AWS added the /_cluster/state endpoint, which Curator depends on, to their release of AWS ES 5.1, it still doesn't have the necessary data to support Curator. If you are running AWS ES 5.1, then Curator will not work for you.

If I use the AWS ES 2.3 service instead, does it support deleting old data?

Curator v3 should work for you with AWS ES 2.x, but it lacks many advanced features found in Curator 4. Curator is in constant development; see the official documentation, and the rest of the documentation should help with understanding. Deleting an index is pretty easy, as is closing one. See this example:
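A minimal sketch of a manual delete and close, assuming Elasticsearch is listening on localhost:9200; the index name is a placeholder:

# Delete one index by name (logstash-2017.01.15 is a placeholder):
curl -XDELETE 'http://localhost:9200/logstash-2017.01.15'

# Closing an index instead keeps its data on disk but stops it
# consuming cluster resources:
curl -XPOST 'http://localhost:9200/logstash-2017.01.15/_close'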
You said AWS was not supporting Curator, so can I use this to delete my old data automatically, e.g. every 15 days, 20 days, or 1 month?

No, that curl command will only delete the specified INDEXNAME. You will have to run the above command manually, or find some way to script it.

I already used this; it deleted the whole log in that index. I was using ELKB and was looking for something to delete logs after a certain period of time. Please direct me to the proper way to delete those automatically.

You're on your own for that, as I stated: you will have to write your own scripts to automate index deletion, or you may be able to find some online, somewhere. See the delete_indices action documentation, and the example for the same action. It's apparent you are not using time-series indices, if that is the case. You should not be feeding a constant stream of data to a single index unless you're planning on using the Rollover API. I already responded to your other request about delete-by-query, which is a really bad approach to data management, as it heavily taxes the cluster making millions of atomic flag-for-delete operations, which then have to be singled out for deletion at the next segment merge operation. I highly recommend looking into the Rollover API for a way to simplify this for you: it can make your "non-time-series" index into a time-series index for all intents and purposes. Ask for help with the Rollover API in a new topic, or search for an existing one, as it is off-topic here.

Is it possible to delete logs from an Elasticsearch index which are older than 3 months? I have tons of logs being written to the Elasticsearch service. Does anybody have any idea how to delete data after three months automatically? Is there any option or way available in Elasticsearch?

Hi, due to a lot of log data coming into my small server, I would like to delete the log data that is older than 1 hour.

I had an issue where syslog would see some *very* old timestamps on some machines, creating a ton of useless ES indices.

Logstash itself doesn't name files this way, and none of the log rotation tools I know about name files in this manner either. How are these files created? Anyway, Logstash doesn't have any log rotation functionality at all, so any purging of old rotated files needs to be managed outside of Logstash.

The awesome people working on Elasticsearch already have the solution: it's called Curator. I like the idea of being able to let a cron job kick off the cleanup so I don't forget. Any user should have access, so I'll run this under my user. Edit the crontab and add the following line to run Curator at 20 minutes past midnight (system time), connect to the Elasticsearch node on 127.0.0.1, delete all indexes older than 120 days, and close all indexes older than 90 days:

20 0 * * * /usr/local/bin/curator --host 127.0.0.1 -d 120 -c 90

And you are done. If you prefer an alternative, here's one written in Perl. See the official documentation for more.

I suggest using long parameters for cron jobs; it increases the readability. Also, maybe mention the --dry-run argument before executing the cron job.

How can I use the above crontab line to delete by hour instead of by day?

Starting with Curator 1.1.0 (2014/07/13), its command-line syntax has changed. It might be worth adding that the above commands work for the 3.5 branch of Curator; it didn't work for me until I specifically went for 3.5.

Doesn't work with 3.5.0 either.

OP, could you possibly put what version you have? Were you getting any response from ES for the request that failed? It usually offers some clues why it didn't work. You can also set Curator to wait to prune indices until the disk is full to a certain size. To delete data older than 90 days you'd use something like the syntax sketched below; you can filter out old records with a filter as well.
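A sketch of that newer 3.x-era syntax, assuming daily indices named like logstash-YYYY.MM.DD; the exact flags are from the 3.x branch as I understand it, so verify against curator --help for your installed version:

# Preview first, as suggested above; --dry-run only logs what would happen:
/usr/local/bin/curator --host 127.0.0.1 --dry-run delete indices --older-than 90 --time-unit days --timestring '%Y.%m.%d'

# Cron entries using long parameters: delete at 120 days, close at 90:
20 0 * * * /usr/local/bin/curator --host 127.0.0.1 delete indices --older-than 120 --time-unit days --timestring '%Y.%m.%d'
25 0 * * * /usr/local/bin/curator --host 127.0.0.1 close indices --older-than 90 --time-unit days --timestring '%Y.%m.%d'

For hourly indices, the same shape should work with --time-unit hours and a timestring such as '%Y.%m.%d.%H'.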
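As for the Rollover API suggestion above, a minimal sketch of how a single write alias turns a constantly-fed index into a time-series one; the names (logs-write, logs-000001) and the conditions are hypothetical:

# Create the first backing index with a write alias (names are placeholders):
curl -XPUT 'http://localhost:9200/logs-000001' -H 'Content-Type: application/json' -d '{"aliases": {"logs-write": {}}}'

# Call _rollover periodically (e.g. from cron); if a condition is met,
# a new backing index is created and the alias moves to it:
curl -XPOST 'http://localhost:9200/logs-write/_rollover' -H 'Content-Type: application/json' -d '{"conditions": {"max_age": "15d", "max_docs": 50000000}}'

Writers keep indexing to logs-write, and old backing indices can then be deleted by name, which is the cheap operation, instead of by delete-by-query, which is the expensive one.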
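And for the AWS ES case, where Curator is out and you have to script index deletion yourself, one possible shape for such a script, assuming daily logstash-YYYY.MM.DD indices, GNU date, and a placeholder endpoint:

#!/bin/sh
# Hypothetical self-rolled cleanup for AWS ES: delete the daily index
# from 15 days ago. ES_ENDPOINT is a placeholder for your domain endpoint.
ES_ENDPOINT='https://your-aws-es-domain.example.com'
OLD_INDEX="logstash-$(date -d '15 days ago' +%Y.%m.%d)"
curl -XDELETE "$ES_ENDPOINT/$OLD_INDEX"

Run it from cron once a day, the same way as the Curator line above.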