
Installing Skedler Reports as a Kibana Plugin with Docker Compose

If you are using the ELK stack, you can now install Skedler Reports as a Kibana plugin. The Skedler Reports plugin is available for Kibana versions 6.5.x through 7.6.x.

Let's take a look at the steps to install Skedler Reports as a Kibana plugin.

Prerequisites:

  1. A Linux machine
  2. Docker Installed
  3. Docker Compose Installed

Let’s get started!

Log in to your Linux machine, update the package repository, and install Docker and Docker Compose. Then follow the steps below.
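
If Docker and Docker Compose are not installed yet, one way to install and verify them on Ubuntu is shown below (the docker.io and docker-compose packages come from the standard Ubuntu repositories; adjust the commands for your distribution):

ubuntu@guidanz:~$ sudo apt-get update

ubuntu@guidanz:~$ sudo apt-get install -y docker.io docker-compose

ubuntu@guidanz:~$ docker --version

ubuntu@guidanz:~$ docker-compose --version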

Setting Up Skedler Reports

Create a directory, say skedlerplugin:

ubuntu@guidanz:~$ mkdir skedlerplugin

ubuntu@guidanz:~$ cd skedlerplugin/

ubuntu@guidanz:~/skedlerplugin$ vim docker-compose.yml

Now, create a Docker Compose file for Skedler Reports. You will also need a Skedler Reports configuration file, reporting.yml (created in the next step). The Docker Compose definition for Skedler Reports is shown below:

version: "2.4"

services:

  # Skedler Reports container
  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    command: /opt/skedler/bin/skedler
    depends_on:
      elasticsearch: { condition: service_healthy }
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:3000"]
    networks: ['stack']

volumes:
  reportdata:
    driver: local

networks: {stack: {}}
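
The reports service uses the skedler/reports:latest image from Docker Hub, so you can optionally pre-pull it to speed up the first startup:

ubuntu@guidanz:~/skedlerplugin$ docker pull skedler/reports:latest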

Next, create the Skedler Reports configuration file, reporting.yml, in the same skedlerplugin directory.

ubuntu@guidanz:~/skedlerplugin$ vim reporting.yml

Download the reporting.yml file found here

Setting Up Elasticsearch

You also need to create an Elasticsearch configuration file, elasticsearch.yml. The Docker Compose definition for Elasticsearch is below:

# Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

volumes:
  esdata:
    driver: local

networks: {stack: {}}
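
Note that the Elasticsearch service reads its heap size from an ES_JVM_HEAP environment variable. Docker Compose resolves such variables from a .env file placed next to docker-compose.yml; a minimal sketch is below (512m is only an example value, size the heap to your data):

ubuntu@guidanz:~/skedlerplugin$ vim .env

ES_JVM_HEAP=512m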

Create an Elasticsearch configuration file, elasticsearch.yml, in the skedlerplugin directory and paste the config below.

cluster.name: guidanz-stack-cluster
node.name: node-1
network.host: 0.0.0.0
path.data: /usr/share/elasticsearch/data
http.port: 9200
xpack.monitoring.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.max_header_size: 16kb
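
Later, once the stack is up, you can confirm that this file was picked up by querying Elasticsearch; the cluster_name field in the response should read guidanz-stack-cluster:

ubuntu@guidanz:~/skedlerplugin$ curl -s http://localhost:9200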

Setting Up Skedler Reports as a Kibana Plugin

Create a directory inside skedlerplugin, say kibanaconfig:

ubuntu@guidanz:~/skedlerplugin$ mkdir kibanaconfig

ubuntu@guidanz:~/skedlerplugin$ cd kibanaconfig/

ubuntu@guidanz:~/skedlerplugin/kibanaconfig$ vim Dockerfile

Now, create a Dockerfile for Kibana with the following content:

FROM docker.elastic.co/kibana/kibana:7.6.0

RUN ./bin/kibana-plugin install https://www.skedler.com/plugins/skedler-reports-plugin/4.10.0/skedler-reports-kibana-plugin-7.6.0-4.10.0.zip

If you are running a different Kibana version, replace the plugin URL in the Dockerfile with the URL of the Skedler Reports plugin that matches your exact Kibana version, which you can copy from here.

You also need to add a Docker Compose definition for Kibana, shown below:

# Kibana container
  kibana:
    container_name: kibana
    hostname: kibana
    build:
      context: ./kibanaconfig
      dockerfile: Dockerfile
    image: kibanaconfig
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      - ./kibanaconfig/kibana.yml:/usr/share/kibana/config/kibana.yml
      - ./kibanaconfig/skedler_reports.yml:/usr/share/kibana/plugins/skedler/config/skedler_reports.yml
    ports: ['5601:5601']
    networks: ['stack']
    depends_on:
      elasticsearch: { condition: service_healthy }
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/"]
      retries: 6
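
Because the kibana service uses a build section instead of a published image, Docker Compose builds the kibanaconfig image (and installs the Skedler plugin) the first time you bring the stack up. If you later change the Dockerfile, for example to switch plugin versions, you can force a rebuild:

ubuntu@guidanz:~/skedlerplugin$ docker-compose build kibana

ubuntu@guidanz:~/skedlerplugin$ docker-compose up -d --build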

Create a Kibana configuration file kibana.yml inside the kibanaconfig folder and paste the config as below.

ubuntu@guidanz:~/skedlerplugin$ cd kibanaconfig/

ubuntu@guidanz:~/skedlerplugin/kibanaconfig$ vim kibana.yml

server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://elasticsearch:9200"]
server.name: "full-stack-example"
xpack.monitoring.enabled: true

Create the Skedler Reports plugin configuration file, skedler_reports.yml, inside the kibanaconfig folder and paste the config below.

ubuntu@guidanz:~/skedlerplugin$ cd kibanaconfig/

ubuntu@guidanz:~/skedlerplugin/kibanaconfig$ vim skedler_reports.yml

#/*********** Skedler Access URL *************************/

skedler_reports_url: "http://ip_address:3000"

#/*********************** Basic Authentication *********************/

# If Skedler Reports uses any username and password

#skedler_username: user

#skedler_password: password

Configure the Skedler Reports server URL in the skedler_reports_url variable, replacing ip_address with the address of the host running the reports container.

If the Skedler Reports server URL sits behind basic authentication (for example, an Nginx reverse proxy), uncomment skedler_username and skedler_password and set them to the basic authentication credentials. Now run docker-compose:

ubuntu@guidanz:~/skedlerplugin$ docker-compose up -d
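
You can check that all three containers come up and follow the Skedler logs while it starts (the service names match those in the Compose file):

ubuntu@guidanz:~/skedlerplugin$ docker-compose ps

ubuntu@guidanz:~/skedlerplugin$ docker-compose logs -f reports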

Access Skedler Reports using the IP address and port 3000, and you will see the Skedler Reports UI.

| http://ip_address:3000

Access Elasticsearch using the IP address and port 9200; you will get a JSON response from the Elasticsearch REST API.

| http://ip_address:9200

Access Kibana using the IP address and port 5601, and you will see the Kibana UI.

| http://ip_address:5601
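
If you prefer the command line, you can run the same checks with curl from the Docker host (use localhost when checking from the host itself); each endpoint should return an HTTP response once the containers are healthy:

ubuntu@guidanz:~$ curl -s -I http://localhost:3000

ubuntu@guidanz:~$ curl -s -I http://localhost:5601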

So now the complete docker-compose.yml file will look like the one below.

version: "2.4"

services:

  # Skedler Reports container
  reports:
    image: skedler/reports:latest
    container_name: reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    command: /opt/skedler/bin/skedler
    depends_on:
      elasticsearch: { condition: service_healthy }
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:3000"]
    networks: ['stack']

  # Elasticsearch container
  elasticsearch:
    container_name: elasticsearch
    hostname: elasticsearch
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    ports: ['9200:9200']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    networks: ['stack']

  # Kibana container
  kibana:
    container_name: kibana
    hostname: kibana
    build:
      context: ./kibanaconfig
      dockerfile: Dockerfile
    image: kibanaconfig
    logging:
      options:
        max-file: "3"
        max-size: "50m"
    volumes:
      - ./kibanaconfig/kibana.yml:/usr/share/kibana/config/kibana.yml
      - ./kibanaconfig/skedler_reports.yml:/usr/share/kibana/plugins/skedler/config/skedler_reports.yml
    ports: ['5601:5601']
    networks: ['stack']
    depends_on:
      elasticsearch: { condition: service_healthy }
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/"]
      retries: 6

volumes:
  esdata:
    driver: local
  reportdata:
    driver: local

networks: {stack: {}}
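
For reference, the project directory should now look roughly like this (.env is only needed if you use the ES_JVM_HEAP variable):

skedlerplugin/
├── docker-compose.yml
├── reporting.yml
├── elasticsearch.yml
├── .env
└── kibanaconfig/
    ├── Dockerfile
    ├── kibana.yml
    └── skedler_reports.yml

You can validate the merged file at any time with docker-compose config, which reports syntax and indentation errors before anything is started:

ubuntu@guidanz:~/skedlerplugin$ docker-compose config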

You can now simply bring the whole stack down and up again:

ubuntu@guidanz:~/skedlerplugin$ docker-compose down 

ubuntu@guidanz:~/skedlerplugin$ docker-compose up -d

Summary

Docker Compose is a useful tool for managing container stacks. It lets you start, stop, and manage all of the related containers with a single command.
