Monitoring Servers and Docker Containers using Elasticsearch with Grafana

May 10th, 2022

Introduction

Infrastructure monitoring is the basis of application performance management. The underlying system’s availability and health must be continually maximized. To achieve this, one has to monitor system metrics such as CPU, memory, network, and disk, and address any response-time lag swiftly. Here we’ll take a look at how to monitor servers (and even Docker containers running inside them) using Grafana, Elasticsearch, Metricbeat, and Skedler Reports.

Core Components

Grafana — Analytics & monitoring solution for database

Elasticsearch — Ingest and index logs

Metricbeat — Lightweight shipper for metrics

Skedler Reports — Automate actionable reports

Grafana — Analytics & monitoring solution for database

Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture.

Elasticsearch — Ingest and index logs

Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected.

Metricbeat — Lightweight shipper for metrics

Collect metrics from your systems and services. From CPU to memory, Redis to NGINX, and much more, Metricbeat is a lightweight way to send system and service statistics.

Skedler Reports — Automate actionable reports

Skedler offers the most powerful, flexible and easy-to-use data monitoring solution that companies use to exceed customer SLAs, achieve compliance, and empower internal IT and business leaders.

Export your Grafana Dashboard to PDF Report in Minutes with Skedler. Fully featured free trial.

Prerequisites:

  1. A Linux machine
  2. Docker Installed
  3. Docker Compose Installed

user@host:~$ mkdir monitoring
user@host:~$ cd monitoring/
user@host:~/monitoring$ vim docker-compose.yml

Now, create a Docker Compose file for Elasticsearch. You will also need an Elasticsearch configuration file, elasticsearch.yml. The Docker Compose file for Elasticsearch is below.

Note: We will keep extending the same compose file as we move ahead and install the other components.

version: "2.1"
services:
  #Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:latest"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

volumes:
  esdata:
    driver: local

networks:
  stack: {}

Create an Elasticsearch configuration file, elasticsearch.yml, and paste in the config below.

cluster.name: guidanz-stack-cluster
node.name: node-1
network.host: 0.0.0.0
path.data: /usr/share/elasticsearch/data
http.port: 9200
xpack.monitoring.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.max_header_size: 16kb
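One thing to note: the compose file reads an ES_JVM_HEAP variable for the JVM heap size but never defines it. Docker Compose picks it up from a .env file in the same directory; a minimal sketch (512m is an assumption, size it to your host) looks like this:

```shell
# Write a .env file next to docker-compose.yml; docker-compose reads it
# automatically and substitutes ${ES_JVM_HEAP} in the compose file.
cat > .env <<'EOF'
ES_JVM_HEAP=512m
EOF
cat .env
```
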

Now run the docker-compose.

user@host:~/monitoring$ docker-compose up -d

Access Elasticsearch using the IP and port; Elasticsearch itself has no UI, so you will see a JSON response with the cluster details.

http://ip_address:9200
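The compose healthcheck above polls the _cat/health endpoint. As an illustration (the sample line below is made up, but follows the real single-line, whitespace-separated format), the fourth field is the cluster status:

```python
# Hypothetical /_cat/health response line, for illustration only;
# a live cluster returns one line in this whitespace-separated format.
sample = "1652180400 09:00:00 guidanz-stack-cluster green 1 1 5 5 0 0 0 0 - 100.0%"

fields = sample.split()
cluster_name, status = fields[2], fields[3]
print(cluster_name, status)
```

A healthy single-node development cluster typically reports green or yellow here; the compose healthcheck only verifies that the endpoint answers at all.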

Now we will set up Metricbeat. It is one of the best components to use alongside Elasticsearch for capturing metrics from the server where Elasticsearch is running. It captures all hardware and kernel-related metrics: system-level CPU usage, memory, file system, disk IO, and network IO statistics, as well as top-like statistics for every process running on your systems.

To install Metricbeat, append the service below to docker-compose.yml, then create metricbeat.yml and the modules.d directory as shown.

  metricbeat:
    container_name: metricbeat
    hostname: metricbeat
    user: root #To read the docker socket
    image: docker.elastic.co/beats/metricbeat:latest
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      #Mount the metricbeat configuration so users can make edits.
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
      #Mount the modules.d directory so module changes are dynamically loaded.
      - ./modules.d/:/usr/share/metricbeat/modules.d/
      #Mount host /proc and cgroups so the system module monitors the Docker host rather than the Metricbeat container.
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      #Allows us to report on Docker from the host's information.
      - /var/run/docker.sock:/var/run/docker.sock
      #Mount the host filesystem so we can report on disk usage with the system module.
      - /:/hostfs:ro
    command: metricbeat -e -system.hostfs=/hostfs -strict.perms=false
    networks: ['stack']
    restart: on-failure
#    environment:
#      - "MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}"
    depends_on:
      elasticsearch: { condition: service_healthy }

Create metricbeat.yml with the content below,

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.period: 5s
  reload.enabled: true

processors:
- add_docker_metadata: ~

monitoring.enabled: true
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["elasticsearch:9200"]

logging.to_files: false

#The setup section is only used when a Kibana instance is available to load the dashboards into.
setup:
  kibana.host: "kibana:5601"
  dashboards.enabled: true

The compose file maps two volumes into the container: the first is the Metricbeat configuration (metricbeat.yml), and the second is the modules.d directory. Mounting modules.d lets users make changes to the modules, which are then loaded dynamically. Now create the modules.d directory.

user@host:~/monitoring$ mkdir modules.d

Create system.yml inside the modules.d folder with the content below,

- module: system
  metricsets:
    - core
    - cpu
    - load
    - diskio
    - filesystem
    - fsstat
    - memory
    - network
    - process
    - socket
  enabled: true
  period: 5s
  processes: ['.*']
  cpu_ticks: true
  process.cgroups.enabled: true
  process.include_top_n:
    enabled: true
    by_cpu: 20
    by_memory: 20
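Before restarting, make sure the files are laid out as the volume mounts expect. Beats also refuse to load config files that are group- or other-writable (which is why the compose command passes -strict.perms=false); tightening permissions is good practice regardless. A sketch, assuming the monitoring working directory used above:

```shell
# Expected layout: metricbeat.yml and modules.d/system.yml beside docker-compose.yml
mkdir -p monitoring/modules.d
touch monitoring/metricbeat.yml monitoring/modules.d/system.yml
# Beats require config files not be group/other-writable unless -strict.perms=false is set
chmod go-w monitoring/metricbeat.yml monitoring/modules.d/system.yml
ls -l monitoring/modules.d
```
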

So now the combined docker-compose.yml file will look like this,

version: "2.1"
services:
  #Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:latest"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

  #Metricbeat container
  metricbeat:
    container_name: metricbeat
    hostname: metricbeat
    user: root #To read the docker socket
    image: docker.elastic.co/beats/metricbeat:latest
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      #Mount the metricbeat configuration so users can make edits.
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
      #Mount the modules.d directory so module changes are dynamically loaded.
      - ./modules.d/:/usr/share/metricbeat/modules.d/
      #Mount host /proc and cgroups so the system module monitors the Docker host rather than the Metricbeat container.
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      #Allows us to report on Docker from the host's information.
      - /var/run/docker.sock:/var/run/docker.sock
      #Mount the host filesystem so we can report on disk usage with the system module.
      - /:/hostfs:ro
    command: metricbeat -e -system.hostfs=/hostfs -strict.perms=false
    networks: ['stack']
    restart: on-failure
#    environment:
#      - "MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}"
    depends_on:
      elasticsearch: { condition: service_healthy }

volumes:
  esdata:
    driver: local

networks:
  stack: {}

You can simply bring the stack down and back up with compose.

user@host:~/monitoring$ docker-compose down
user@host:~/monitoring$ docker-compose up -d

Now check the indices in Elasticsearch (for example, http://ip_address:9200/_cat/indices?v); you should see a metricbeat-* index being created and growing.

Next, we will set up Grafana with Elasticsearch as a data source, which gives us far better dashboards for metrics visualization.

Append the service below to the docker compose file above and restart.

  grafana:
    image: grafana/grafana
    user: "1000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secure_pass
    volumes:
      - ./grafana_db:/var/lib/grafana
    depends_on:
      - elasticsearch
    networks: ['stack'] #join the stack network so the hostname 'elasticsearch' resolves
    ports:
      - '3000:3000'

Access the Grafana UI on port 3000; the default user is admin, with the password you set in the compose file.
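Instead of adding the Elasticsearch data source by hand in the UI, it can also be provisioned from a file. A sketch of a Grafana provisioning file (the mount path /etc/grafana/provisioning/datasources/ is Grafana's standard provisioning location; the metricbeat-* index pattern and @timestamp time field match what Metricbeat writes by default):

```yaml
# e.g. mounted at /etc/grafana/provisioning/datasources/elasticsearch.yml
apiVersion: 1
datasources:
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://elasticsearch:9200
    database: "metricbeat-*"   #index pattern written by Metricbeat
    jsonData:
      timeField: "@timestamp"
```
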

Finally, we will set up Skedler Reports, using Grafana as a data source. Skedler offers a simple, easy-to-add reporting and alerting solution for Elastic Stack and Grafana. Please review the documentation to install Skedler.

To set up Skedler Reports, append the service below to the docker compose file.

  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    ports:
      - 3001:3001
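Note that the reports service mounts a named volume, reportdata, which also has to be declared at the top level of the compose file alongside esdata, e.g.:

```yaml
volumes:
  esdata:
    driver: local
  reportdata:
    driver: local
```
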

Generate Report from Grafana in Minutes with Skedler. Fully featured free trial.