Application Performance Monitoring with Elasticsearch 6.1, Kibana & Skedler Alerts

Have you ever wondered how to easily monitor the performance of your application, and how to house your application metrics in Elasticsearch? The answer is Elastic APM.
Elastic Application Performance Monitoring (APM) is a new feature available in Elasticsearch 6.1 (in beta, and alpha in 6.0). A few months ago, Opbeat (an application performance monitoring company) joined forces with Elastic, and its product is now Elastic APM.

Adding APM (Application Performance Monitoring) to the Elastic Stack is a natural next step in providing users with end-to-end monitoring, from logging to server-level metrics, to application-level metrics, all the way to the end-user experience in the browser or client.

In this post, we are going to see how to monitor the performance of a Python Flask application using the APM feature of Elasticsearch and how to get notified (webhook or email) when something happens in your application by Skedler Alerts.

Here you can read more about the Opbeat acquisition and the APM announcement:

APM Overview

First of all, let's see how APM works. What is written below is taken from here: APM Overview.

APM is an application performance monitoring system built on the Elastic Stack. It uses Elasticsearch as its data store and allows you to monitor the performance of thousands of applications in real time.

With APM, you can automatically collect detailed performance information from inside your applications and it requires only minor changes to your application. APM will automatically instrument your application and measure the response time for incoming requests. It also automatically measures what your application was doing while it was preparing the response.

APM components:

[diagram: Elastic APM components]

APM agents

APM agents are open source libraries written in the same language as your application. You install them into your application as you would install any other library. The agents hook into your application and start collecting performance metrics and errors. All the data collected by the agents is sent on to the APM Server.

APM Server

APM server is an open source application written in Go which runs on your servers. It listens on port 8200 by default and receives data from agents periodically. The API is a simple JSON based HTTP API. APM Server builds Elasticsearch documents from the data received from agents. These documents are stored in an Elasticsearch cluster. A single APM Server process can typically handle data from hundreds of agents.
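If you want to double-check that your APM Server is up before wiring in an agent, a quick HTTP request against the default port is enough. Below is a minimal sketch, assuming the server runs on localhost:8200; the /healthcheck path is the one exposed by the 6.x APM Server, so adjust it if your version differs:

# Quick connectivity check against the APM Server.
# Assumes the default localhost:8200 address; the /healthcheck endpoint
# is the one exposed by APM Server 6.x (adjust for other versions).
import requests

resp = requests.get('http://localhost:8200/healthcheck', timeout=5)
print('APM Server reachable:', resp.status_code == 200)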

Right now, these APM agents are available:

In this post, we are not going to cover how to install and configure the APM Server; the procedure is well documented here: Elastic – APM.

Use case

The new APM feature can be used when you need a free solution to monitor your Python/Node.js/Ruby/JS application and you want to use Elasticsearch's search power and Kibana's visualizations to look at your application metrics. If you integrate the alerting features of Skedler Alerts (licensed), you can get notified in a flexible way, from webhook to email, when something happens in your application.

Examples of applications and notifications by Alerts:

  • A back-office application written in Python with the Flask or Django framework: get a Slack notification when the number of HTTP 4xx errors is higher than a given threshold in the last 30 minutes (access control alert)
  • An application server written in Node.js: get an email message when an unhandled exception is raised by the application in the last hour (error handling alert)
  • A batch-processing script written in Ruby: get a daily Slack notification with the details of all the operations of the day (application summary alert)

Python Flask Application

Flask is a micro web framework written in Python and based on the Werkzeug toolkit and Jinja2 template engine. You can read more about it here: Welcome to Flask.
In this example, we will assume we have some web APIs written with Flask. We want to monitor our application (API calls) and get notified when the number of errors is particularly high and when some endpoint gets too many calls.

Let's start with a set of Flask API endpoints.
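A minimal sketch of what such an API might look like (the endpoint names and responses here are made up purely for illustration):

# A tiny Flask API used as the application to instrument.
# The routes and data are hypothetical placeholders.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/items', methods=['GET'])
def list_items():
    # Pretend this reads from a database.
    return jsonify(items=['a', 'b', 'c'])

@app.route('/api/items/<int:item_id>', methods=['GET'])
def get_item(item_id):
    return jsonify(id=item_id)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)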

We are going to add a few lines of code to send the application metrics to the APM Server (which will index them into Elasticsearch). First of all, install the Elastic APM dependencies:

$ pip install elastic-apm[flask]

and import them:

from elasticapm.contrib.flask import ElasticAPM

Configure Elastic APM in your application by specifying the APM Server URL, optionally a secret token (if you set one in the APM Server configuration), and your application name.

# configure ELASTIC_APM in your application's settings
app.config['ELASTIC_APM'] = {
    # allowed APP_NAME chars: a-z, A-Z, 0-9, -, _, and space
    'APP_NAME': 'yourApplicationName',
    # 'SECRET_TOKEN': 'yourToken',  # only if you set one in the APM Server configuration
    'SERVER_URL': 'http://apmServer:8200'  # your APM Server URL
}

apm = ElasticAPM(app)

We are now monitoring our application and housing our metrics in Elasticsearch!
You can also monitor additional events or send additional data to the APM Server.

Capture exceptions:

try:
    1 / 0
except ZeroDivisionError:
    apm.capture_exception()

Log a generic message:

apm.capture_message('hello, world!')

Send extra information:

@app.route('/')
def bar():
    try:
        1 / 0
    except ZeroDivisionError:
        app.logger.error('Math is hard',
            exc_info=True,
            extra={
                'good_at_math': False,
            }
        )

Elasticsearch and Kibana

All the collected metrics are stored in an Elasticsearch index (apm-6.1.1-*) as a doc type.
Here is an extract of the doc type mapping (related to the HTTP request/response):

"request": {
  "properties": {
    "http_version": {
      "type": "keyword",
      "ignore_above": 1024
    },
    "method": {
      "type": "keyword",
      "ignore_above": 1024
    },
    "url": {
      "properties": {
        "pathname": {
          "type": "keyword",
          "ignore_above": 1024
        },
        "port": {
          "type": "keyword",
          "ignore_above": 1024
        },
        "protocol": {
          "type": "keyword",
          "ignore_above": 1024
        }
      }
    },
    ...
  }
},
"response": {
  "properties": {
    "finished": {
      "type": "boolean"
    },
    "status_code": {
      "type": "long"
    },
    ...
  }
}

Here you can find the full type mapping: doc type mapping.
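Because the metrics end up in a regular Elasticsearch index, you can also query them like any other data. Here is a minimal sketch that counts the HTTP 4xx responses recorded in the last 30 minutes (it assumes Elasticsearch is reachable on localhost:9200 and uses the default apm-* index pattern; the field names come from the mapping above):

# Count HTTP 4xx responses captured by APM in the last 30 minutes.
# Assumes Elasticsearch on localhost:9200 and the default apm-* indices.
import requests

query = {
    "query": {
        "bool": {
            "filter": [
                {"range": {"context.response.status_code": {"gte": 400, "lt": 500}}},
                {"range": {"@timestamp": {"gte": "now-30m"}}}
            ]
        }
    }
}

resp = requests.post('http://localhost:9200/apm-*/_search?size=0', json=query)
print('HTTP 4xx responses in the last 30 minutes:', resp.json()['hits']['total'])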

Example of an indexed document:

[screenshot: Kibana APM UI]

Once our application is configured, all the metrics will be stored in Elasticsearch and we can use the default Kibana APM UI to view them.

[screenshot: Kibana APM dashboard]

Response times and responses per minute (HTTP 2xx and HTTP 4xx):

[screenshot: Kibana APM dashboard]

Request details:

Elasticsearch Alerts with Skedler

We are now sending our application's metrics to Elasticsearch and we have a nice way to view them, but we are not going to watch the Kibana APM UI all the time to check that everything is OK.
Wouldn't it be nice if we could receive a Slack notification or an email when something is wrong, so we can then look at the dashboard?

This is where Skedler Alerts comes into the picture!

It simplifies how you monitor data in Elasticsearch for abnormal patterns, drill down to the root cause, and alert using webhooks and email. You can design your rules for detecting patterns, spikes, new events, and threshold violations using Skedler's easy-to-use UI. You can correlate across indexes, filter events, and compare against baseline conditions to detect abnormal patterns in data.

Read more about Skedler Alerts here:

From the Alerts UI (to see how to install Alerts, take a look here: Install Alerts), let's define a new webhook (I took the webhook URL from my Slack team settings; read more here: Slack Incoming Webhook):

[screenshot: Slack webhook configuration]
[screenshot: Skedler Alerts webhook setup]

We want to get a notification when our application returns an HTTP 400 error. Define a new Alert rule (Threshold type):

Filter by the context.response.status_code == 400 field:

[screenshot: Skedler Alerts rule setup]

Choose your schedule and action.

In the picture below, the job will run every minute and the notification will be sent to the Slack webhook. You can define your Slack message template.

[screenshot: Skedler Alerts schedule and action]

Once the event fires, we get notified in the Slack channel.

[screenshot: Slack alert notification]
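Skedler takes care of the delivery for you, but for reference this is roughly what posting a message to a Slack incoming webhook looks like (the webhook URL below is a placeholder; use the one from your Slack team settings):

# Post a test message to a Slack incoming webhook.
# The URL is a placeholder; use the one from your Slack team settings.
import requests

webhook_url = 'https://hooks.slack.com/services/XXX/YYY/ZZZ'
payload = {'text': 'APM alert: HTTP 400 errors detected in yourApplicationName'}
requests.post(webhook_url, json=payload)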

You can now create as many new Alert rules as you need to get notified when something happens in your application.

The application metrics are written by the APM Server to a standard Elasticsearch index, so you can write your own Alert rules (there are no constraints on the APM index).

Here you can find some useful resources about Skedler Alerts:

Conclusion

In this post, we have seen how to monitor the performance of your application with Elastic APM, how to automatically send the metrics to Elasticsearch, and how to use Skedler Alerts to get notified when something is wrong.

Monitoring the performance of your applications is something that you should always do to improve, fix, and manage them.

You should use Elastic APM if you are looking for something free, easy to configure, and fully integrated with Elasticsearch (metrics are stored in a normal index) and Kibana (you get a dedicated APM UI and you can build your own dashboards).

You should use Skedler Alerts if you want to be notified about your application metrics. It provides a nice dashboard where you can configure your alert rules, and it supports webhook and email notifications with custom templates.

Announcing Skedler Reports v3.3 for ELK 6.1.1

Introducing Skedler Reports v3.3

Upgrading to the Elasticsearch (ELK) stack 6.1.1 and looking for a powerful, yet affordable and easy-to-use reporting alternative to X-Pack? Introducing Skedler Reports v3.3, which enables custom reporting for Elasticsearch Kibana 6.1.1 as well as Grafana. With Skedler Reports v3.3, you can:

  • Schedule, create, and distribute powerful, custom PDF, XLS, and CSV reports from the Elasticsearch-Kibana 6.1.1 platform. ELK 5 is also supported.
  • Search and view historical reports.
  • Create, schedule, and generate reports directly from your application using the REST API.
  • Add tags to reports for easier classification and searching.
  • Connect Skedler to Elasticsearch Kibana securely using SSL/TLS.
  • Use Skedler Reports with the Search Guard Kibana plug-in.

Below is a quick look at the highlights of Skedler Reports v3.3.

Elasticsearch Kibana 6.1.1 Support

Ever since the launch of Skedler v1.0 for ELK 1.x,  Skedler has kept pace with the ELK releases and provided powerful reporting features for ELK users.  Skedler releases for new versions of ELK are typically available within a month from the GA of the ELK releases.

With Skedler v3.3, you can now create informative and stylish reports from ELK 6.1.1.  If you have applications on older versions of ELK, you can still use the latest version of Skedler which supports ELK 2.3.x to 6.1.1.

Search and Download Historical Reports

Are your users requesting copies of reports from the past? No problem! Skedler v3.3 makes it easy to search historical reports by attributes. You can search for historical reports by:

  • Time
  • Tags (more on it later in this article)
  • Format (PDF/Excel)
  • Scheduling Frequency
  • Recipients

You can instantly download a copy of the report and share it with your users when they need it. The Historical Reports feature is available in the Skedler Premium and Enterprise editions.

REST API for Integration

Are you looking to generate reports from the ELK stack using an API? Skedler v3.3 makes it easy to create, schedule, and generate reports using REST APIs.

The Skedler Reports REST API allows you to:

  • Create reports for Kibana/Grafana dashboards
  • Get a list of all the reports
  • Instantly generate and send the reports via Mail and Slack
  • Schedule a report
  • Configure report actions (Mail/Slack) for the created reports

Integration via REST API is available in Skedler Designer Edition.

Organize and Search Reports with Tags

As the number of reports grows in your organization, you need better tools to organize and search the reports.  Tags in Skedler v3.3 provide an efficient way to organize the reports.  You can assign multiple tags to a report and search across tags to filter the reports.

What Else is New in Skedler v3.3

There are several more features in Skedler v3.3 that simplify reporting for Elasticsearch Kibana and Grafana applications.  You can learn more about these capabilities and review the release notes here.

Try Skedler v3.3 Free

Download Skedler Reports from the Free Trial page and try it free to see if it meets the reporting requirements for your Elasticsearch Kibana or Grafana application.
