ELK 5 on Docker

This is a little follow-up to a post I did a while back about getting the ELK stack up and running using Docker.  That post is over a year old now, and a lot has changed with both Docker and the ELK stack since then.

All of the components of the ELK stack (Elasticsearch, Logstash, and Kibana) have gone through several revisions since the last post, picking up all kinds of features and improvements along the way.  The current iteration is v5 for all of the components.  v5 is still in alpha, but that doesn’t mean we can’t get it up and running with Docker.  NOTE: Since v5 is still alpha, I don’t recommend running it in any kind of setup outside of development at this point.

Docker has evolved a bit as well since the last post, which helps with some of the setup.  In particular, improvements to docker-compose make it easy to wrap the whole stack up in containers and take advantage of some of the newer Docker features, like named volumes.

Here is the updated elk-docker repo.  Please open a PR or an issue if you have ideas for improvement or if you run into any problems.

For the most part the items in the repo shouldn’t need to change unless you are interested in adjusting the Elasticsearch configuration or you want to update the Logstash input/filter/output configuration.  The Elasticsearch config is located in es/elasticsearch.yml and the Logstash config is located in logstash/logstash.conf.
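
As a rough sketch (the actual configuration in the repo may differ), a logstash.conf wired up to the ports published in the compose file below might look something like this:

# Example only - check logstash/logstash.conf in the repo for the real config
input {
  # Docker GELF logs
  gelf {
    port => 12201
  }
  # Syslog-style messages over UDP and TCP
  udp {
    port => 5000
  }
  tcp {
    port => 5001
  }
}

output {
  # "elasticsearch" resolves to the linked container in the compose file
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}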

This configuration has been tested using Docker version 1.11 and docker-compose 1.7 on OS X.

Here’s what the docker-compose file looks like.

version: '2'
services:
  elasticsearch:
    image: elasticsearch:5
    command: elasticsearch
    environment:
      # This helps ES out with memory usage
      - ES_JAVA_OPTS=-Xmx1g -Xms1g
    volumes:
      # Persist elasticsearch data to a volume
      - elasticsearch:/usr/share/elasticsearch/data
      # Extra ES configuration options
      - ./es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"

  logstash:
    image: logstash:5
    command: logstash --auto-reload -w 4 -f /etc/logstash/conf.d/logstash.conf
    environment:
      # This helps Logstash out if it gets too busy
      - LS_HEAP_SIZE=2048m
    volumes:
      # Volume mount the logstash config
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf
    ports:
      # Default GELF port
      - "12201:12201/udp"
      # Default UDP port
      - "5000:5000/udp"
      # Default TCP port
      - "5001:5001"
    links:
      - elasticsearch

  kibana:
    image: kibana:5
    environment:
      # Point Kibana to the elasticsearch container
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch

  kopf:
    image: rancher/kopf:v0.4.0
    ports:
      - "8080:80"
    environment:
      KOPF_ES_SERVERS: "elasticsearch:9200"
    links:
      - elasticsearch

volumes:
  elasticsearch:

Notice that we are storing the Elasticsearch data in a named Docker volume called “elasticsearch”.  Storing the data in a volume makes it easier to manage, and it means the data survives container restarts and rebuilds.
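
If you want to poke at the volume itself you can use the Docker CLI.  Note that docker-compose prefixes volume names with the project name (by default, the name of the directory the compose file lives in), so the exact name below will vary depending on your setup:

docker volume ls
# Replace elkdocker_elasticsearch with whatever name shows up in the listing
docker volume inspect elkdocker_elasticsearch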

To start up the ELK stack just run “docker-compose up” (plus -d for detached) and you should see the ELK components start to come up in the docker-compose log messages.  It takes about a minute or so for everything to come up.
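
For example:

# Start the stack in the background
docker-compose up -d
# Watch the startup logs to see when everything is ready
docker-compose logs -f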

After everything has bootstrapped and come up you can see the fruits of your labor.  If you are using the Docker beta app (which I highly recommend), you can just visit localhost:5601 in your browser.

[Screenshot: the Kibana 5 UI]
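
You can also verify that Elasticsearch itself is up and responding directly:

# Basic version info
curl http://localhost:9200
# Cluster health should report yellow or green once everything is up
curl 'http://localhost:9200/_cluster/health?pretty'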

Bonus

To easily get some logs into ELK to start playing around with some data, you can run the logspout container like I have below.  This will automatically forward logs from the Docker daemon to Logstash, which will in turn create a time-stamped index in Elasticsearch using the configuration above.

docker run --rm --name="logspout" \
 --volume=/var/run/docker.sock:/var/run/docker.sock \
 --publish=127.0.0.1:8000:80 \
 gliderlabs/logspout:master \
 syslog://<local_ip_address>:5001
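
After logspout has been running for a minute or so you can confirm that an index was created.  kopf at localhost:8080 will show it, or you can ask Elasticsearch directly:

# Should show a logstash-YYYY.MM.DD index once logs start flowing
curl 'http://localhost:9200/_cat/indices?v'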

Edit: 3/10/17

If you are testing locally with the Logstash adapter enabled, you can use the following docker + logspout command to populate logs into Elasticsearch and create an index automatically.

docker run --rm --name="logspout" \
 --volume=/var/run/docker.sock:/var/run/docker.sock \
 --publish=127.0.0.1:8000:80 \
 --env DEBUG=1 \
 <logspout_logstash_image> \
 logstash+tcp://<local_ip_address>:5001

The value of <local_ip_address> should be the address of your laptop or desktop, which you can grab with ifconfig.  Optionally, you can enable debug output to help troubleshoot issues by adding the following line below the --publish line.

--env DEBUG=1
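
On OS X you can grab the local address with something like the following (this assumes en0 is your active interface, so adjust it to match your machine):

ipconfig getifaddr en0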

Then you can check your local interface to see packets being sent from logspout to Logstash using tcpdump.  You might need to change lo0 to whatever interface Docker uses on your machine.

sudo tcpdump -v -s0 -A -i lo0 port 5001

Alternatively, you can use curl if you enabled the http module in logspout.

curl http://127.0.0.1:8000/logs

That’s pretty much all there is to it.  Feel free to tweak the configs if you want to play around with Logstash or Elasticsearch.  And please let me know if you have any ideas for improvement or have any issues getting this up and running.
