I am deploying a docker-compose stack of 5 applications on a single AWS EC2 host with 32GB RAM. This includes: 2 x Java containers, 2 x Rails containers, 1 x Elasticsearch/Logstash/Kibana (ELK) container (from https://hub.docker.com/r/sebp/elk/).
When I bring the stack up for the first time, all containers start. The ELK container takes about 3 minutes to start. The others come up straight away.
But the ELK container exits after about 5 minutes. I can see from the logs that the elasticsearch service will not start; the log messages indicate a memory limitation error.
However, when I then tear everything down, and bring it up again, all the containers start straight away, including the ELK container, and everything remains stable. The issue only occurs the first time I start the stack on a new EC2 instance.
I can see from docker stats that the ELK container is only using 2-3GB of the 32GB RAM available on the instance.
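To confirm the error is not actual RAM pressure, this is roughly what I am checking on the host (the grep pattern is just my guess at the relevant log line; more on the vm.max_map_count setting below):

    # Kernel limit on memory map areas; Elasticsearch's bootstrap check wants
    # at least 262144, while a stock EC2 AMI typically defaults to 65530.
    sysctl vm.max_map_count

    # Look for the bootstrap-check failure in the ELK container's output.
    docker logs elk 2>&1 | grep -i "max_map_count"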
The ELK container is configured as follows:
    elk:
      image: sebp/elk
      hostname: elk
      container_name: elk
      volumes:
        - ./pipeline/:/etc/logstash/conf.d/
      tty: true
      expose:
        - "12201/udp"
      network_mode: host
      ports:
        - "5601:5601"
        - "9200:9200"
        - "12201:12201"
      ulimits:
        nofile:
          soft: 65536
          hard: 65536
There are no dependencies between the containers on start up.
What is happening with elasticsearch when it first runs that causes the container to fail when starting?
Could it be related to the vm.max_map_count kernel setting (i.e. sysctl -w vm.max_map_count=262144)? (ref.: elastic.co/guide/en/elasticsearch/reference/current/…)
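If it is that setting, I assume the fix is to apply it on the EC2 host itself before the first docker-compose up, along these lines (the file name under /etc/sysctl.d/ below is just my own choice):

    # Apply immediately on the running host.
    sudo sysctl -w vm.max_map_count=262144

    # Persist the setting across reboots.
    echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf

What I still don't follow is why the second docker-compose up succeeds even though I have not changed this setting in between.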