Skedler Reports v4.19.0 & Alerts v4.9.0 now support ELK 7.10

Here are the highlights of what’s new and improved in Skedler Reports 4.19.0 & Alerts 4.9.0. For detailed information about this release, check the release notes.

Indexing speed improvement

Elasticsearch 7.10 improves indexing speed by up to 20%. We’ve reduced the coordination needed to add entries to the transaction log, which allows for more concurrency, and increased the transaction log buffer size from 8KB to 1MB. Performance gains are lower for full-text search and other analysis-intensive use cases: the heavier the indexing chain, the lower the gains, so indexing chains that involve many fields, ingest pipelines, or full-text indexing will see smaller improvements. These indexing gains carry over when you use Elasticsearch 7.10 with Skedler v4.19.0.

More space-efficient indices

Elasticsearch 7.10 depends on Apache Lucene 8.7, which introduces higher compression of stored fields, the part of the index that notably stores the _source. On the various data sets that we benchmark against, we noticed space reductions between 0% and 10%. This change especially helps on data sets that have lots of redundant data across documents, which is typically the case for documents produced by Observability solutions, which repeat metadata about the host that produced the data on every document.

Elasticsearch offers the ability to configure the index.codec setting to tell Elasticsearch how aggressively to compress stored fields. Both supported values, default and best_compression, get better compression with this change.
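For example, you can opt into the more aggressive codec when creating an index. A minimal sketch, assuming a local cluster and a hypothetical index name:

    curl -X PUT "localhost:9200/my-index" -H 'Content-Type: application/json' -d'
    {
      "settings": {
        "index.codec": "best_compression"
      }
    }'

Note that index.codec is a static setting, so it must be set at index creation time (or on a closed index).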

Data tiers

7.10 introduces the concept of formalized data tiers within Elasticsearch. Data tiers are a simple, integrated approach that gives users control over optimizing for cost, performance, and breadth/depth of data. Prior to this formalization, many users configured their own tier topology using custom node attributes as well as using ILM to manage the lifecycle and location of data within a cluster.

With this formalization, data tiers (content, hot, warm, and cold) can be explicitly configured using node roles, and indices can be configured to be allocated within a specific tier using index-level data tier allocation filtering. ILM will make use of these tiers to automatically migrate data between nodes as an index goes through the phases of its lifecycle.

Newly created indices backed by a data stream will be allocated to the data_hot tier automatically, while standalone indices will be allocated to the data_content tier automatically. Nodes with the pre-existing data role are considered to be part of all tiers.
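As a rough sketch of the index-level allocation filtering mentioned above (the index name is hypothetical; the _tier_preference setting arrived alongside data tiers in 7.10):

    curl -X PUT "localhost:9200/my-index/_settings" -H 'Content-Type: application/json' -d'
    {
      "index.routing.allocation.include._tier_preference": "data_warm,data_hot"
    }'

With this preference, the index is allocated to warm nodes when available and falls back to hot nodes otherwise.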

AUC ROC evaluation metrics for classification analysis

The area under the curve of the receiver operating characteristic (AUC ROC) is an evaluation metric that has been available for outlier detection since 7.3 and now is available for classification analysis. AUC ROC represents the performance of the classification process at different predicted probability thresholds. The true positive rate for a specific class is compared against the rate of all the other classes combined at the different threshold levels to create the curve.
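As a hedged sketch of how this looks with the data frame analytics evaluate API (the index, field, and class names here are hypothetical placeholders):

    curl -X POST "localhost:9200/_ml/data_frame/_evaluate" -H 'Content-Type: application/json' -d'
    {
      "index": "animal_classification_dest",
      "evaluation": {
        "classification": {
          "actual_field": "animal_class",
          "metrics": {
            "auc_roc": { "class_name": "dog" }
          }
        }
      }
    }'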

Custom feature processors in data frame analytics

Feature processors enable you to extract and process features from document fields. You can use these features in model training and model deployment. Custom feature processors provide a mechanism to create features that can be used at search and ingest time, and they don’t take up space in the index. This more tightly couples feature generation with the resulting model. The result is simplified model management, as both the features and the model can easily follow the same life cycle.

Points in time (PITs) for search

In 7.10, Elasticsearch introduces points in time (PITs), a lightweight way to preserve index state over searches. PITs improve the end-user experience by making UIs more reactive, and they are supported by Skedler v4.19.0.

By default, a search request waits for complete results before returning a response. For example, a search that retrieves top hits and aggregations returns a response only after both top hits and aggregations are computed. However, aggregations are usually slower and more expensive to compute than top hits. Instead of sending a combined request, you can send two separate requests: one for top hits and another one for aggregations. With separate search requests, a UI can display top hits as soon as they’re available and display aggregation data after the slower aggregation request completes. You can use a PIT to ensure both search requests run on the same data and index state.
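A minimal sketch of that flow, assuming a local cluster and a hypothetical index: first open a PIT, then reference its id in each separate search request.

    # Open a point in time kept alive for one minute
    curl -X POST "localhost:9200/my-index/_pit?keep_alive=1m"

    # Use the returned id in subsequent searches (note: no index in the path)
    curl -X GET "localhost:9200/_search" -H 'Content-Type: application/json' -d'
    {
      "pit": { "id": "<pit-id-from-previous-response>", "keep_alive": "1m" },
      "query": { "match_all": {} }
    }'

Both the top-hits request and the aggregations request can pass the same PIT id, guaranteeing they see the same index state.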

New thread pools for system indices

We’ve added two new thread pools for system indices: system_read and system_write. These thread pools ensure system indices critical to the Elastic Stack, such as those used by security or Kibana, remain responsive when a cluster is under heavy query or indexing load.

system_read is a fixed thread pool used to manage resources for reading operations targeting system indices. Similarly, system_write is a fixed thread pool used to manage resources for write operations targeting system indices. Both have a maximum number of threads equal to 5 or half of the available processors, whichever is smaller.


Kibana Single Sign-On with OpenId Connect and Azure Active Directory

Introduction

Open Distro supports OpenID Connect, so you can seamlessly connect your Elasticsearch cluster with identity providers like Azure AD, Keycloak, Auth0, or Okta. To set up OpenID support, you just need to point Open Distro to the metadata endpoint of your provider, and all relevant configuration information is imported automatically. In this article, we will implement a complete OpenID Connect setup, including Open Distro for Kibana Single Sign-On.

What is OpenID Connect?

OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.

OpenID Connect allows clients of all types, including Web-based, mobile, and JavaScript clients, to request and receive information about authenticated sessions and end-users. The specification suite is extensible, allowing participants to use optional features such as encryption of identity data, the discovery of OpenID Providers, and session management, when it makes sense for them.

Configuring OpenID Connect in Azure AD

Next, we will set up an OpenID Connect client application in Azure AD which we will later use for Open Distro for Elasticsearch Kibana Single Sign-On. In this post, we will just describe the basic steps.

Adding an OpenID Connect client application

First, we need to register an application with the Microsoft identity platform that supports OpenID Connect. Please refer to the official documentation.

Log in to Azure AD, open the Authentication tab under App registrations, enter the redirect URL https://localhost:5601/auth/openid/login, and save it.

Besides the client ID, we also need the client secret in our Open Distro for Elasticsearch Kibana configuration. This is an extra layer of security: an application can only obtain an ID token from the IdP if it provides the client secret. In Azure AD you can find it under the Certificates & secrets tab of the client settings.

Connecting OpenDistro with Azure AD

To connect Open Distro with Azure AD, we need to set up a new authentication domain of type openid in config.yml. The most important information we need to provide is the metadata endpoint of the newly created OpenID Connect client. This endpoint provides all configuration settings that Open Distro needs. The URL of this endpoint varies from IdP to IdP. In Azure AD the format is:
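    https://login.microsoftonline.com/<tenant-id>/v2.0/.well-known/openid-configuration

Here <tenant-id> is a placeholder for your Azure AD directory (tenant) ID.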

Since we want to connect Open Distro for Elasticsearch Kibana with Azure AD, we also add a second authentication domain which will use the internal user database. This is required for authenticating the internal Kibana server user. Our config.yml file now looks like:
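A minimal sketch of the relevant authc section, following the structure in the Open Distro security documentation (the subject and roles keys depend on the claims your IdP actually sends, so treat these values as assumptions):

    authc:
      basic_internal_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: internal
      openid_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: openid
          challenge: false
          config:
            subject_key: preferred_username
            roles_key: roles
            openid_connect_url: https://login.microsoftonline.com/<tenant-id>/v2.0/.well-known/openid-configuration
        authentication_backend:
          type: noop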

Adding users and roles to Azure AD

While an IdP can be used as a federation service to pull in user information from different sources such as LDAP, in this example we use the built-in user management. We have two choices when mapping the Azure AD users to Open Distro roles: by username, or by the roles in Azure AD. While mapping users by name is a bit easier to set up, we will use the Azure AD roles here.

With the default configuration, two appRoles are created, skedler_role and guidanz_role, which can be viewed by choosing the App registrations menu item within the Azure Active Directory blade, selecting the Enterprise application in question, and clicking the Manifest button.

A manifest is a JSON object that looks similar to:
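A trimmed sketch of the appRoles portion (the GUIDs are placeholders; each role needs its own unique id):

    "appRoles": [
      {
        "allowedMemberTypes": [ "User" ],
        "description": "Skedler users",
        "displayName": "skedler_role",
        "id": "<unique-guid>",
        "isEnabled": true,
        "value": "skedler_role"
      },
      {
        "allowedMemberTypes": [ "User" ],
        "description": "Guidanz users",
        "displayName": "guidanz_role",
        "id": "<unique-guid>",
        "isEnabled": true,
        "value": "guidanz_role"
      }
    ]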

There are many different ways we might decide to map how users within AAD will be assigned roles within Elasticsearch, for example, using the tenantid claim to map users in different directories to different roles, using the domain part of the name claim, etc.

With the role OpenID Connect token attribute created earlier, however, the appRole to which an AAD user is assigned will be sent as the value of the Role Claim within the OpenID Connect token, allowing:

  • Arbitrary appRoles to be defined within the manifest
  • Assigning users within the Enterprise application to these roles
  • Using the Role Claim sent within the OpenID Connect token to determine access within Elasticsearch.

For the purposes of this post, let’s define a Superuser role within the appRoles:
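A sketch of such an entry (again, the description and GUID are placeholders):

    {
      "allowedMemberTypes": [ "User" ],
      "description": "Superusers with full access",
      "displayName": "Superuser",
      "id": "<unique-guid>",
      "isEnabled": true,
      "value": "superuser"
    }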

And save the changes to the manifest.

Configuring OpenID Connect in Open Distro for Kibana

The last part is to configure OpenID Connect in Open Distro for Kibana. Configuring the Kibana plugin is straightforward: choose OpenID as the authentication type, and provide the Azure AD metadata URL, the client name, and the client secret. Please refer to the official documentation.

Activate OpenID Connect by adding the following to kibana.yml:
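A minimal sketch, assuming the Open Distro security plugin for Kibana and placeholder values taken from the Azure AD app registration:

    opendistro_security.auth.type: "openid"
    opendistro_security.openid.connect_url: "https://login.microsoftonline.com/<tenant-id>/v2.0/.well-known/openid-configuration"
    opendistro_security.openid.client_id: "<client-id>"
    opendistro_security.openid.client_secret: "<client-secret>"
    opendistro_security.openid.base_redirect_url: "https://localhost:5601"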

Done. We can now start Open Distro for Kibana and enjoy Single Sign-On with Azure AD! If we open Kibana, we get redirected to the login page of Azure AD. After providing username and password, Kibana opens, and we’re logged in.

Summary

OpenID Connect is an industry standard for providing authentication information. Open Distro for Elasticsearch and the Open Distro for Kibana plugin support OpenID Connect out of the box, so you can use any OpenID-compliant identity provider to implement Single Sign-On in Kibana. These IdPs include Azure AD, Keycloak, Okta, Auth0, Connect2ID, and Salesforce.

If you wish to have an automated reporting application, we recommend downloading Skedler Reports.

The Best Tools for Exporting data from Grafana

As a tool for visualizing data from time series databases, logging and document databases, SQL databases, and cloud services, Grafana is a perfect choice. Its UI allows you to create dashboards and visualizations in minutes and analyze your data with them.

Despite having tons of visualizations, the open-source version of Grafana does not have advanced reporting capability. Automating the export of data into CSV, Excel, or PDF requires additional plugins.

We wrote an honest and unbiased review of the following tools that are available for exporting data directly from Grafana.

  1. Grafana reporter
  2. Grafana Data Exporter
  3. Skedler Reports

1. Grafana Reporter

https://github.com/IzakMarais/reporter

A simple HTTP service that generates PDF reports from Grafana dashboards.

Runtime requirements

  • pdflatex installed and available in PATH.
  • a running Grafana instance that it can connect to. If you are using an old Grafana (version < v5.0), see Deprecated Endpoint below.

Build requirements:

  • Golang

Pros of Grafana Reporter

  • Simple, embeddable tool for Grafana
  • Uses simple curl commands and arguments
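As a rough illustration of that curl-driven workflow (the port, endpoint path, and IDs vary by reporter version, so treat this as a sketch and check the project README):

    curl -o report.pdf "http://localhost:8686/api/v5/report/<dashboard-uid>?apitoken=<grafana-api-token>"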

Cons of Grafana Reporter

  • You need pdflatex, and building from source requires Golang, so you must install a Go environment on your system.
  • For non-technical users, it’s difficult to use

2. Grafana Data Exporter

https://github.com/CorpGlory/grafana-data-exporter

Server for fetching data from Grafana data sources. You would use it:

  • To export your metrics over a large time range
  • To migrate from one data source to another

Runtime requirements

  • Linux
  • Docker

Installation

  • grafana-data-exporter for fetching data from Grafana data sources.
  • Simple-JSON-data source for progress tracking.
  • grafana-data-exporter-panel for exporting metrics from the dashboard.
  • Import the exporting template dashboard at http://<YOUR_GRAFANA_URL>/dashboard/import.

Pros of Grafana Data Exporter

  • Faster writing of documents
  • Added as a Grafana panel

Cons of Grafana Data Exporter

  • To automate the exporting of data on a periodic basis, you need to write your own cron job
  • Grafana Data Exporter installation is a bit tricky for non-technical users

3. Skedler Reports

https://www.skedler.com/

Disclosure: Skedler Reports is one of our products.

Skedler offers a simple, easy-to-add reporting and alerting solution for Elastic Stack and Grafana. There is also a plugin for Kibana that is easy to install and use with Elasticsearch data. It’s called Skedler Reports as Kibana Plugin.

Pros of Skedler Reports

  • Simple to install, configure, and use
  • Send HTML, PDF, XLS, CSV reports on-demand or periodically via email or #slack
  • Report setup takes less than 5 minutes
  • Easy to use, no coding required

Cons of Skedler Reports

  • It requires a paid license, which includes the software and enterprise support
  • Installation is difficult for users who are not fully familiar with Elastic Stack or Grafana

What tools do you use? 

Do you have to regularly export data from Grafana for external analysis or reporting purposes?  Do you use any other third-party tools? Email us about the tool at hello at skedler.com.

The Best Tools for Exporting Elasticsearch Data from Kibana

As a tool for visualizing Elasticsearch data, Kibana is a perfect choice. Its UI allows you to create dashboards, searches, and visualizations in minutes and analyze your data with them.

Despite having tons of visualizations, the open source version of Kibana does not have advanced reporting capability. Automating export of data into CSV, Excel, or PDF requires additional plugins.  

We wrote an honest and unbiased review of the following tools that are available for exporting data directly from Elasticsearch.

  1. Flexmonster Pivot plugin for Kibana 
  2. Sentinl (for Kibana)
  3. Skedler Reports

1. Flexmonster Pivot plugin for Kibana

https://github.com/flexmonster/pivot-kibana

Flexmonster Pivot covers the need for summarizing business data and displaying results in a cross-table format, interactively and fast. All the Excel-like features that so many of you are used to, plus its extended API, will multiply your analytics results remarkably.

Though initially created as a pivot table component that can be incorporated into any app that uses JavaScript, it can serve as a part of Kibana as well. You can connect it to an Elasticsearch index, fetch documents from it, and start exploring the data.

Pros of Flexmonster Pivot plugin for Kibana

  • Flexmonster is in line with the concept of Kibana
  • Simple, embeddable pivot for Kibana

Cons of Flexmonster Pivot plugin for Kibana

  • To automate the exporting of data on a periodic basis, you need to write your own cron job.
  • Flexmonster Pivot plugin installation is a bit tricky. 

2. Sentinl (for Kibana)

https://github.com/sirensolutions/sentinl

SENTINL extends Kibana with alerting and reporting functionality to monitor, notify, and report on data series changes using standard queries, programmable validators, and a variety of configurable actions – think of it as a free and independent “Watcher” which also has scheduled “Reporting”.

SENTINL is also designed to simplify the process of creating and managing alerts and reports in Siren Investigate/Kibana 6.x via its native App Interface, or by using native watcher tools in Kibana 6.x+.

Pros of Sentinl

  • It’s simple to install and configure
  • Added as a Kibana plugin.

Cons of Sentinl

  • This tool supports only 6.x versions of Elasticsearch. It does not support 7.x.
  • For non-technical users, it’s difficult to use 
  • Automation requires scripting which makes it laborious

3. Skedler Reports

https://www.skedler.com/

Disclosure: Skedler Reports is one of our products.

Skedler offers a simple, easy-to-add reporting and alerting solution for Elastic Stack and Grafana. There is also a plugin for Kibana that is easy to install and use with Elasticsearch data. It’s called Skedler Reports as Kibana Plugin.

Pros of Skedler Reports

  • Simple to install, configure, and use
  • Send HTML, PDF, XLS, CSV reports on-demand or periodically via email or #slack
  • Report setup takes less than 5 minutes
  • Easy to use, no coding required

Cons of Skedler Reports

  • It requires a paid license, which includes the software and enterprise support
  • Installation is difficult for users who are not fully familiar with Elastic Stack or Grafana

What tools do you use?

Do you have to regularly export data from Kibana for external analysis or reporting purposes? Do you use any other third-party plugins?   Email us about the tool at hello at skedler.com.

The Best Tools for Exporting Elasticsearch Data to CSV

Introduction

This blog post shows you how to export data from Elasticsearch to a CSV file. Imagine that you have infrastructure or security log data in Elasticsearch that you would like to export as a CSV and open in Excel or other tools for further analysis. In this post, we’ll introduce the ways to export Elasticsearch data to a CSV using the top available tools on the market.

Possible Scenarios

There are multiple ways you can extract data from Elasticsearch. We will look at the following scenarios:

Export data directly from Elasticsearch

We wrote an honest and unbiased review of the following tools that are available for exporting data directly from Elasticsearch.

  1. Es2csv – A CLI tool for exporting data from Elasticsearch to a CSV file
  2. Python pandas – A Python library with built-in functions for exporting Elasticsearch data in CSV, Excel, or HTML format.
  3. Elasticsearch Data Format Plugin

1. es2csv

https://github.com/taraslayshchuk/es2csv

es2csv is a command-line utility, written in Python, for querying Elasticsearch in Lucene query syntax or Query DSL syntax and exporting the results as documents into a CSV file. This tool can query bulk docs in multiple indices and fetch only selected fields, which reduces query execution time.
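As a rough usage sketch (the index, query, and field names are hypothetical; see the project README for the full flag list):

    es2csv -u http://localhost:9200 -i logs-* -q 'status:500' -f host status timestamp -o errors.csv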

Here are the major pros and cons of es2csv:

Pros of es2csv

Here are the most essential advantages of es2csv.

  • It’s simple to install and configure
  • This tool can query bulk docs in multiple indices and get only selected fields.
  • It reduces query execution time.

Cons of es2csv

After the advantages, it’s time to shed some light on the disadvantages of es2csv.

  • This tool supports only 2.x and 5.x versions of Elasticsearch. It does not support 6.x or 7.x.
  • You need Python 2.7.x and pip, so you must install a Python environment on your system.
  • For non-technical users it’s difficult to use
  • To automate the exporting of data on a periodic basis, you need to write your own cron job.

2. Python-pandas

https://kb.objectrocket.com/elasticsearch/export-elasticsearch-documents-as-csv-html-and-json-files-in-python-using-pandas-348

One of the advantages of having a flexible database and using Python’s Pandas Series is being able to export documents in a variety of formats. When you use Pandas IO tools to export Elasticsearch documents in Python, you can analyze them faster.
This requires the following prerequisites:

  1. Install Python
  2. Install pip
  3. pip install elasticsearch
  4. pip install numpy
  5. pip install pandas
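Once the prerequisites are in place, a minimal export sketch looks like this (assuming a local cluster and a hypothetical index called my-index; the call style matches the 7.x elasticsearch-py client):

    from elasticsearch import Elasticsearch
    import pandas as pd

    # Connect to a local cluster; adjust the URL for your environment
    es = Elasticsearch(["http://localhost:9200"])

    # Fetch up to 1,000 matching documents
    resp = es.search(index="my-index", body={"query": {"match_all": {}}, "size": 1000})

    # Flatten each hit's _source into a DataFrame, then export
    df = pd.DataFrame([hit["_source"] for hit in resp["hits"]["hits"]])
    df.to_csv("export.csv", index=False)  # or df.to_excel(), df.to_html()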

Here are the major pros and cons of Python pandas.

Pros of Python pandas

  • Faster writing of documents
  • Since it is written in Python, the equivalent export typically takes less code than it would in Node.js
  • Supports Elasticsearch 7.x as well

Cons of Python pandas

  • Python needs to be installed properly.
  • Not able to export values with queries.
  • Automation requires scripting
  • It is a tool for developers and data scientists, not for non-technical users.

3. Elasticsearch Data Format

https://github.com/codelibs/elasticsearch-dataformat

This is an Elasticsearch plugin. You need to add and configure it in your Elasticsearch plugins. It provides a feature to download the response of a search result in several formats other than JSON. The supported formats are CSV, Excel, and JSON (Bulk).
For this, there are the following prerequisites:

  1. Elasticsearch 5.x or below
  2. Java installed and JAVA_HOME path configured
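As a rough sketch of the plugin’s curl-based usage (the plugin adds a _data endpoint to search requests; the index name is hypothetical and the exact parameters vary by plugin version):

    curl -o data.csv "http://localhost:9200/my-index/_data?format=csv"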

Here are the major pros and cons of the Elasticsearch Data Format plugin.

Pros of Elasticsearch Data Format

  • Easy to install.
  • Added as an Elasticsearch plugin.
  • Uses simple curl commands and arguments.

Cons of Elasticsearch Data Format

  • The response format is poor.
  • Supports only up to elasticsearch 5.x.
  • Difficult to use for non-technical users

Overall Summary

When we tried these tools, we were unable to export only the fields specified in the query; each tool exports all the values in the index. With products like Skedler Reports, Kibana, and Grafana, it is possible to export the selected fields as a CSV/Excel file. Furthermore, only Python pandas works with the latest versions of Elasticsearch (above 5.x). Last, but not least, a major drawback of these open source tools is that they are designed for use by technical users.

If you are looking for an easy and automated way to export Elastic Stack data to CSV, XLS or PDF, we invite you to try Skedler Reports. It is free to try and it could save you a ton of time.


How to email PDF, CSV, XLS reports from Security Onion

Introduction

If you are running the Elastic Stack based Security Onion for intrusion detection and enterprise security monitoring, have you heard the good news? Skedler Reports has released its latest version with support for the Security Onion environment. The focus of this blog post will be how to email PDF, CSV, XLS reports from the Elastic Stack used in Security Onion.

What is Security Onion?

If you’ve never heard about Security Onion before, it is a Linux distro for intrusion detection, network security monitoring, and log management. It’s based on Ubuntu and is a platform that allows you to monitor your network for security alerts. Security Onion includes Elasticsearch, Logstash, Kibana, Snort, Suricata, Bro, OSSEC, Wazuh, Sguil, Squert, CyberChef, NetworkMiner, and many other security tools.
Source: Security Onion website

Why Skedler Reports for Security Onion?

Skedler Reports offers the most powerful, flexible and easy-to-use data monitoring solution that companies use to exceed customer SLAs, achieve compliance, and empower internal IT and business leaders. By using Skedler Reports, you can enjoy the following benefits:

  • Simple installation, quick configuration, faster deployment
  • Send visually appealing, personalized reports
  • Report setup takes less than 5 minutes
  • Send PDF, XLS, CSV, HTML reports on-demand or periodically via email or #slack
  • Help users see and understand data faster with customized mobile & print-ready reports


Step-by-Step Instruction for Adding Reports to Security Onion

If you haven’t already downloaded Skedler Reports, please download it from www.skedler.com. You can also install Skedler as a Docker container. Please review the documentation to install Skedler.


Configure Skedler Reports Settings for ELK Stack on Security Onion

After installation, when you launch Skedler Reports for the first time, the Settings page is displayed. You can access Basic and Advanced settings.

Basic Setup – Data Source

You can configure the Data Source details required for Skedler report generation.

Select the Datasource as “ELK Stack”.

Enter the name of the Datasource as “SecurityOnion_DataSource”.

Enter the Elasticsearch instance URL in the Elasticsearch URL field. By default, the field is set to “http://localhost:9200”.

Enter the Kibana instance URL for Skedler report generation in the Kibana URL field. By default, the field is set to “http://localhost:5601”.
Enter the Kibana index in the Kibana Index field. By default, the field is set to the .kibana value.

Select the Authentication Type from the drop-down as Security Onion.

Enter the security username and password for Elasticsearch in the Elasticsearch Admin Username and Elasticsearch Admin Password fields, and for Kibana in the Kibana Admin Username and Kibana Admin Password fields, respectively. Click “Test and Save” to test and save the Datasource configuration.

Notification Channel

To proceed to the Notification Channels step, I’ll click the Next button at the bottom. For the Notification Channel configuration, I will choose Mail as the channel. Next, I’ll name this channel “SecurityOnion_Mail”. I’ll choose Supported Service as Others.

Then I’ll configure the SMTP connection by specifying the Outgoing Server, Port, Senders Email, Password, and Admin Email. Click “Test and Save” to test and save the Notification configuration.

Schedule Reports

Then I’ll click Create a Report. The first step is the Report Details.

Report Details

I’ll name this report “Security Onion Overview”. This is what will appear in the subject line of the report email. Next, I’ll choose my data source as SecurityOnion_DataSource. I’ll choose Dashboard as the type. Next, I need to select the Dashboard to be used for generating reports. I’ll choose the Overview. If needed, I could choose a filter, but I’ll leave this as No Filter. For Time Range, I’ll choose This Week.

To proceed to the Report Design step, I’ll click next at the top.

Report Design

I’ll choose PDF as the file format I want to receive, the Default Template as the Template, and Smart Layout as the Layout. I’ll also include an Excel report by checking this box.

Schedule Details

Clicking Next takes me to the Schedule step, where I’ll choose to receive reports Daily at the beginning of the day. I could also choose to have reports sent only on weekdays. To save changes I’ll click Schedule.

Distribute

I’ll proceed to the last step: Distribute. From the drop-down, I’ll choose Mail Channel. Here I’ll enter the email recipient, add a CC or BCC if needed, and I can keep the default message or edit it. The report schedule is finished, so I’ll click Save and Exit.

I now see my Security Onion Overview at the top of my reports list. To download the PDF and Excel report I can click this icon. Under Actions, I can edit the schedule, and I can email the report immediately if I don’t want to wait until the scheduled time.

Now I’ve successfully set up automated daily Security Onion Overview reports for the customer’s use.

Summary

This blog was a very quick overview of how to email PDF, CSV, XLS reports from Security Onion. If you have any more questions, please contact us.

An Easy Way to Export / Import Dashboards, Searches and Visualizations from Kibana

Introduction

Manually recreating Kibana dashboards, searches, and visualizations during upgrades, production deployments, or recovery is a time-consuming affair. The easiest way to recreate prebuilt Kibana dashboards and other objects is by exporting and importing dashboards, searches, and visualizations. This can be achieved by using:

  • Kibana API (available since Kibana 7.x) 
  • Kibana UI

If you are looking to export and import Kibana dashboards and their dependencies automatically, we recommend the Kibana APIs. Alternatively, you can export and import dashboards from the Kibana UI.

Note: Users should add the dependencies of the dashboards, such as visualizations and index patterns, individually when exporting or importing from the Kibana UI.

Export Objects From Kibana API

The export API enables you to retrieve a set of saved objects that can later be imported into Kibana.

Request
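In Kibana 7.x, the request is made against the Kibana server (not Elasticsearch):

    POST <kibana host>:<port>/api/saved_objects/_export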

Request Body

At least one of type or objects must be passed within the request body.

type (optional)

(array/string) The saved object type(s) that the export should be limited to.

The following example exports all index pattern saved objects.

Example Curl:
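A sketch against a local Kibana (the kbn-xsrf header is required for Kibana API calls):

    curl -X POST "localhost:5601/api/saved_objects/_export" \
      -H "kbn-xsrf: true" \
      -H "Content-Type: application/json" \
      -d '{ "type": "index-pattern" }'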

objects (optional)

(array) A list of objects to export

The following example exports specific saved objects.

Example Curl:
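A sketch with a placeholder dashboard id:

    curl -X POST "localhost:5601/api/saved_objects/_export" \
      -H "kbn-xsrf: true" \
      -H "Content-Type: application/json" \
      -d '{ "objects": [ { "type": "dashboard", "id": "<dashboard-id>" } ] }'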

Response Body

The response is in newline-delimited JSON (NDJSON) format; a successful call returns a response code of 200 along with the exported objects as the body.

Import Objects From Kibana API

The import API enables you to create a set of Kibana saved objects from a file created by the export API.

Request
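As with export, the request goes to the Kibana server:

    POST <kibana host>:<port>/api/saved_objects/_import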

Request Body

The request body must be of type multipart/form-data.

File

A file exported using the export API.

Example

The following example imports an index pattern and dashboard.
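A sketch against a local Kibana:

    curl -X POST "localhost:5601/api/saved_objects/_import" \
      -H "kbn-xsrf: true" \
      --form file=@file.ndjson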

The file.ndjson file would contain the following.
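A minimal sketch of its contents, one saved object per line (the ids and titles are placeholders):

    {"type":"index-pattern","id":"my-pattern","attributes":{"title":"my-pattern-*"}}
    {"type":"dashboard","id":"my-dashboard","attributes":{"title":"My dashboard"}}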

Response Body

A successful call returns a response code of 200 and a response body containing a JSON structure similar to the following example:
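For the two-object import above, that structure would look something like:

    {
      "success": true,
      "successCount": 2
    }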

Export Objects From Kibana UI:

You can now export your objects from Kibana UI under Management > Saved Objects > Export. Select the checkboxes of the objects you want to export, and click Export. Or to export objects by type:

  • Click Export objects.
  • Select the object types you want to export.
  • Click Export All.

Import Objects From Kibana UI:

You can import your JSON file from the Kibana UI under Management > Saved Objects > Import. Follow the steps below to import your objects:

  • Click Import.
  • Navigate to the JSON file that represents the objects to import.
  • Indicate whether to overwrite objects already in Kibana.
  • Click Import.

Summary:

Exporting and importing saved objects from Kibana is an effective and easy way to recreate dashboards and other objects in new environments or during migrations.

If you are looking to automate and simplify the process, we recommend using the Kibana APIs; otherwise, you can use the Kibana UI for granular export and import.

If you are looking for a Kibana reporting solution, be sure to test drive Skedler.

Kibana Customization – The brilliant beginner’s guide to simplifying Kibana for non-technical users

Kibana is an exceptional tool for technical users focused on log analysis and data visualization. But did you ever feel like Kibana is a little too overwhelming for your stakeholders and customers? If yes, then you belong to a vast majority of users. Like you, many of our customers felt that Kibana is too difficult for their non-technical users and asked us if it is possible to simplify its user experience.

Simplifying Kibana

The Skedler team has been working with the Elastic Stack for several years now, starting all the way back in v1.x. Given our team’s years of expertise, we wondered: how can we help our customers? The problem with Kibana is that it is a Swiss Army knife with too many options, when all you need is a simple, sharp penknife. Is it possible to customize Kibana and make it a really effective penknife?

Today, we want to share with you some ideas on how Kibana can be customized and made simpler. If you are interested in learning more or need our help, drop us a note at [email protected] and we’ll connect with you.  

Project Simpana (Simple Kibana) – Customized Kibana

To simplify Kibana, we focused on five areas of user experience:

  1. Login
  2. Navigation
  3. Dashboard
  4. Visualization
  5. Reporting

Customized Login


Kibana is secured with a security plugin. We customized the login page to keep it simple. In addition, we customized the logo, color scheme, and look-and-feel in such a way that it doesn’t even look like Kibana anymore 😉

Easy Navigation


After login, instead of the standard Kibana home page, users are automatically taken to the dashboard list, since dashboards are the most frequently used feature. The look and feel of the dashboard list is clean, and we used the same color theme as Skedler. In the sidebar, we removed the items that are not essential for a typical user.

We strongly believe that the resulting user experience is clean, simple, and productive.

Simple Dashboard

On the dashboard page, instead of displaying too many buttons to the user, we grouped all the buttons inside the Actions drop-down list. Once again, this simplifies the user experience without losing any functionality.

Adding New Visualization

In the above dashboard, you can see a new visualization in the bottom left, called a Sankey visualization. It is not available out-of-the-box in Kibana. One of our customers needed to visualize data connections from source to destination, so we added a Sankey visualization to the Kibana framework.


A little bit more about Sankey. Sankey diagrams are a specific type of flow diagram used to depict a flow from one set of values to another, in which the width of the connectors is proportional to the flow quantity. The things being connected are called nodes and the connections are called links. Sankey diagrams are typically used to visualize data, energy, material, or cost transfers between processes, putting a visual emphasis on the major transfers or flows within a system. They are helpful in locating dominant contributions to an overall flow, and they often show conserved quantities within defined system boundaries.

Reporting

Users needed to automate reporting within the Kibana UI so that they don’t need to log in to multiple screens. Reporting was a key requirement for auditing and compliance, so we added the Skedler Reports plugin to Kibana. Users can easily create, schedule, and generate reports from within Kibana.

Summary

Our goal was to make Kibana simple and easy to use for non-technical users. We achieved our objectives with a clean design, easy navigation, and a simple dashboard layout. Finally, as icing on the cake, we also simplified reporting with Skedler Reports.

We are continuing to explore Kibana customization and adding new visualizations to Kibana. We invite you to share your feedback on this article and what new capabilities you’d like to see in the future. If you would like to learn more,  reach out to us at [email protected].  

A Comparison of Reporting Tools for Elastic Stack – Elastic Reporting and Skedler Reports

Elasticsearch grows stronger with every new release, while Kibana visualizations get more sophisticated, helping users explore Elasticsearch data effortlessly. All this search, analytics, and visualization capability leads to one thing: reporting.

We recently published a white paper discussing the reporting options for Elastic Stack.

  • Elastic Reporting, from Elastic as part of Elastic Stack Features (formerly X-Pack)
  • Skedler Reports, a reporting solution provided by Guidanz Inc.

In the white paper, we dive into the details of the two reporting tools, compare their features, and discuss their use cases. While both tools provide excellent reporting features for the Elastic Stack, they differ in several areas. Below is a brief highlight:

Customization

Being able to customize reports is very important: it not only allows for flexibility in presenting the information, but also enables users to personalize reports while building a feeling of ownership and brand. Elastic Reporting currently offers basic customization features, which include an option to add a logo, two built-in layouts, and two formats (CSV and PDF). Although this may prove useful in some scenarios, Elastic Reporting may be too narrow due to the lack of customization.

Skedler Reports, on the other hand, offers a long list of customization options for Kibana dashboards, searches, and Grafana dashboards. Skedler Reports offers three report formats (CSV, PDF, and XLS), three layouts including a report designer for custom visualization layout, flexible templates, and report bursting. Report bursting allows users to send multiple personalized reports to groups of recipients based on a single report definition.

Ease of Use

Outstanding ease of use can dramatically decrease the resources and time needed to integrate reporting into your application. Elastic Reporting currently requires users to write scripts to schedule reports and send notifications. This may not be an issue for users who are comfortable with scripts, but it may become a maintenance issue for those who aren’t. Elastic Reporting also has a one-minute time limit for generating reports, making it difficult for those who have larger dashboards.

Skedler Reports does not require the user to write scripts at any time, making it easy to learn and use regardless of the user’s background. In addition, Skedler Reports can easily generate reports from large dashboards without any time limits. This allows reports to be seamlessly generated from a substantial amount of data without experiencing glitches.

Affordability

Technical abilities are not the only things that differentiate Elastic Reporting and Skedler Reports; their licensing models are also different. Elastic Reporting is part of the licensed Elastic Stack Features (formerly X-Pack) that bundle other capabilities into one package. To deploy reporting, users must register for a Gold or Platinum license subscription (or the free license for basic features like CSV export). The license subscriptions can become expensive, and users might end up paying for features that they don’t really need.

Skedler Reports offers a flexible and affordable licensing option. By paying only for the reporting features that they need, users can use Skedler in conjunction with open source or third-party tools for Elasticsearch.

Comparison

The following table summarizes the significant differences between Elastic Reporting and Skedler Reports.

Skedler Reports vs. Elastic Reporting Comparison

Conclusion

Reporting has become a critical requirement as organizations use Elastic Stack in a variety of use cases. It is crucial that users adequately evaluate and choose the best option for their organization.  The white paper discusses several scenarios for using Elastic Reporting and Skedler Reports. For more guidance on choosing the best reporting option for your use case, download the full white paper and discover the reporting solution that works best for you.

Download The White Paper

The Top 5 ELK Stack+ Tools Every Business Intelligence Analyst Needs

Updated for 2018.

The world’s most popular log management platform, ELK (Elasticsearch, Logstash and Kibana) Stack, is a powerful tool for business intelligence. But most businesses rely on multiple sources of data and need a way to analyze it all to improve the system and allocate resources properly.

The Elastic Stack has many integrations with different log management tools. However, it isn’t the ideal tool in every business case, especially when you have multiple data sources. Given the breadth of tools in the marketplace, how do you decide which ones to add to your stack for optimal BI analysis? We put together our list of the top business intelligence tools to complement the ELK Stack in 2018.

What makes ELK Stack tools so attractive? Elasticsearch, based on the Lucene search engine, is a NoSQL database, and together with Logstash it forms a log pipeline: accepting inputs from various sources, executing transformations, then exporting data to designated targets. The stack is also highly customizable, a key preference nowadays, since program tweaking is more lucrative and stimulating for many engineers. This is coupled with ELK’s interoperability, a practically indispensable feature, since most businesses don’t want to be limited by proprietary data formats.

1. Microsoft Power BI

https://powerbi.microsoft.com/en-us/

Microsoft Power BI is the most cost-efficient cloud-based option for analyzing and visualizing business intelligence data. Power BI offers ease of use, Excel-based add-ons, and the ability to use browser- and desktop-based authoring with apps and platforms – both on-premise and in the cloud. Additionally, Microsoft Power BI is the only one of these tools to provide extensive R and big data integrations.

While Power BI doesn’t integrate with the ELK Stack natively, you can automate the export of ELK data or use the REST API to combine it with both on-premise and cloud-based datasets for complex data analysis. Moreover, it has recently added data preparation, data discovery, and data dashboards for enterprise users. The Power BI Suite is delivered on the Microsoft Azure Cloud platform, with Power BI Desktop provided on-premise as a stand-alone option.


2. Tableau

https://www.tableau.com/products/desktop

Tableau is known as the industry leader in business intelligence data visualization. While it has a fairly steep learning curve to get all the value (like both Power BI and Qlik), it is the leader in ease of use. A non-technical user will still be able to create dashboards and get insights using the drag-and-drop interface. For advanced embedded analytics, Tableau is the clear leader. However, its architecture falls behind Qlik in supporting self-contained ETL and data storage.

Tableau can connect to most data sources through their more stable and mature APIs, and has added support for R. Like Power BI, Tableau doesn’t offer native integration to ELK, but you can automate export and import of ELK data into Tableau with connectors. They have been making several improvements to the UI, including improvements to the data wrangling and responsive mobile app. Additionally, Tableau has focused on expanding support for more complex data federation workflows and other feature requirements of large enterprise companies.


3. Qlik

https://www.qlik.com/us  

Qlik allows data exploration beyond the pattern-recognition capabilities of SQL data structures and queries with a powerful in-memory engine, though Qlik is not as good at advanced embedded analytics as Tableau. Qlik is well-known for developing enterprise-level products and great customer service that makes scaling more efficient. With a streamlined onboarding process, Qlik makes generating insights and value quick as well.

Similar to other visual analytics tools, Qlik doesn’t offer direct integration to ELK platform and requires export/import. Qlik’s functionality is always being improved, and it already has one of the leading Application Programming Interface (API) command sets in analytics. QlikView is a powerful reporting engine that goes beyond visual dashboards, and it goes beyond Tableau in its wide variety of integration points.


4. Anaconda

https://www.anaconda.com/distribution/

For data scientists, Anaconda is a great addition for prescriptive analytics and machine learning. It is an open source, freemium package manager (with conda), environment manager, Python and R distribution, and collection of open source packages. One of the biggest advantages of Anaconda is that it just works right away. You can import ELK data into Anaconda and build custom machine learning models that meet your business requirements.


5. Slack

https://slack.com/

No, Slack is not a BI tool, but the team messaging and collaboration platform is one of the best ways to keep everyone up to speed. Apart from the productivity features, such as group channels and direct messaging, Slack has a huge number of integrations. There’s one for almost every enterprise product available. Plus, it’s much more fun to use than email. Slack is an excellent tool to distribute and consume reports and alerts from your ELK platform.


Bonus: Skedler

The ELK Stack reporting and alerting tool Skedler combines all the automated processes you’d never dream you could have within one affordable unit. Fundamentally, it simplifies the scheduling and distribution of relevant data as reports and alerts from the ELK platform. With faster speed-to-market, you can focus on more important things.

Reports can be print-ready, high-resolution PDFs or analysis-ready CSV/XLS reports generated periodically from ELK platform. Alerts deliver timely information about anomalies in ELK data. Skedler delivers both reports and alerts via Slack in addition to email. You can also set up Alerts to trigger webhooks. By automating the export of data, Skedler can serve as the simple, time-saving bridge between your ELK platform and Analytics toolkit.   


There you have it, the top ELK Stack+ tools no business intelligence analyst should ever be without!

Ready to start streamlining your analysis and start reporting and alerting with more stability? Right now, we’re offering a free trial.

What are your favorite tools? What is critical to your stack?
