Docker Compose is a tool for defining and running multi-container Docker applications (here, Skedler Reports, Elasticsearch, and Kibana). With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.
In this section, I will describe how to create a containerized installation of Skedler Reports, Elasticsearch, and Kibana.
Benefits:
- You describe the multi-container setup in a clear way and bring up all the containers with a single command.
- You can define the startup order of and dependencies between containers (see the snippet just below).
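For example, the dependency ordering mentioned above is expressed with depends_on together with a healthcheck, so a dependent service starts only once the service it needs reports healthy. This is a minimal sketch of the pattern; the full Compose file below uses it for both Kibana and Skedler Reports:

  # Kibana waits for Elasticsearch to pass its healthcheck before starting
  kibana:
    depends_on:
      elasticsearch: { condition: service_healthy }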
Step-by-Step Instructions:
Step 1: Define services in a Compose file:
Create a file called docker-compose.yml in your project directory and paste in the following:
docker-compose.yml:
version: "2.4"
services:
  # Skedler Reports container
  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    command: /opt/skedler/bin/skedler
    depends_on:
      elasticsearch: { condition: service_healthy }
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:3000"]
    networks: ['stack']

  # Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.1.1"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

  # Kibana container
  kibana:
    container_name: kibana
    hostname: kibana
    image: "docker.elastic.co/kibana/kibana:7.1.1"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      - ./config/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports: ['5601:5601']
    networks: ['stack']
    depends_on:
      elasticsearch: { condition: service_healthy }
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/"]
      retries: 6

volumes:
  esdata:
    driver: local
  reportdata:
    driver: local

networks: {stack: {}}
This Compose file defines three services: Skedler Reports, Elasticsearch, and Kibana.
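The Elasticsearch service reads its heap size and memory limit from the variables ${ES_JVM_HEAP} and ${ES_MEM_LIMIT}. Compose resolves these from an .env file placed next to docker-compose.yml; the values below are only illustrative, so size them for your host:

  # .env — example values, adjust to the memory available on your host
  ES_JVM_HEAP=1024m
  ES_MEM_LIMIT=2g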
Step 2: Basic configurations using reporting.yml and kibana.yml
Create a file called reporting.yml in your project directory.
You can get the reporting.yml file here.
Note: For more configuration options, refer to the articles reporting.yml and ReportEngineOptions Configuration.
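The Compose file also mounts ./config/elasticsearch/elasticsearch.yml into the Elasticsearch container. If you do not already have one, a minimal sketch for a single-node development setup could look like this (the cluster name is an arbitrary example; pick your own):

  # ./config/elasticsearch/elasticsearch.yml — minimal single-node development config
  cluster.name: "docker-cluster"
  network.host: 0.0.0.0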
Create a file called kibana.yml under ./config/kibana/ in your project directory (this is the path mounted in the Compose file).
Note: For more configuration options, refer to the article kibana.yml.
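A minimal kibana.yml sketch for this stack could look like the following; it assumes Kibana reaches Elasticsearch by its Compose service name, elasticsearch, over the shared stack network:

  # ./config/kibana/kibana.yml — minimal config for the Compose stack
  server.name: kibana
  server.host: "0.0.0.0"
  elasticsearch.hosts: ["http://elasticsearch:9200"]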
Step 3: Run your app with docker-compose
From your project directory, start up your application by running:
sudo docker-compose up -d
Compose pulls the Skedler Reports, Elasticsearch, and Kibana images and starts the services you defined in the Compose file.
Skedler Reports is available at http://<hostIP>:3000, Elasticsearch at http://<hostIP>:9200, and Kibana at http://<hostIP>:5601.
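To verify that everything came up, you can check the container status and health and tail the logs of an individual service (the service names below match the Compose file):

  sudo docker-compose ps
  sudo docker-compose logs -f reports
  curl -s http://localhost:9200/_cat/health

When you are done, sudo docker-compose down stops and removes the containers; the named volumes esdata and reportdata are kept unless you also pass -v.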
Summary
Docker Compose is a useful tool for managing container stacks: it lets you bring up and manage all the related containers with a single command.