A Comparison of Reporting Tools for Elastic Stack – Elastic Reporting and Skedler Reports

Elasticsearch grows stronger with every new release, and Kibana visualizations are becoming more sophisticated, helping users explore Elasticsearch data effortlessly. All of this search, analytics, and visualization capability leads to one thing: reporting.

We recently published a white paper discussing the reporting options for Elastic Stack.

  • Elastic Reporting, from Elastic as part of Elastic Stack Features (formerly X-Pack)
  • Skedler Reports, a reporting solution provided by Guidanz Inc.

In the white paper, we dive into the details of the two reporting tools, compare their features, and discuss their use cases. While both tools provide excellent reporting features for the Elastic Stack, they differ in several areas. Below is a brief highlight:

Customization

Being able to customize reports is very important: it not only allows for flexibility in presenting information, it also lets users personalize reports and build a feeling of ownership and brand. Elastic Reporting currently offers basic customization features, which include an option to add a logo, two built-in layouts, and two formats (CSV and PDF). Although this may prove useful in some scenarios, Elastic Reporting may be too narrow for others because of this limited customization.

Skedler Reports, on the other hand, offers a long list of customization options for Kibana dashboards, saved searches, and Grafana dashboards. Skedler Reports provides three report formats (CSV, PDF, and XLS), three layouts including a report designer for custom visualization layout, flexible templates, and report bursting. Report bursting allows users to send multiple personalized reports to groups of recipients based on a single report definition.

Ease of Use

Outstanding ease of use can dramatically decrease the resources and time needed to integrate reporting into your application. Elastic Reporting currently requires users to write scripts to schedule reports and send notifications. This may not be an issue for users who are comfortable with scripts, but it can become a maintenance burden for those who aren't. Elastic Reporting also has a one-minute time limit for generating reports, which makes it difficult for those who have larger dashboards.

Skedler Reports never requires the user to write scripts, making it easy to learn and use regardless of the user's background. In addition, Skedler Reports can easily generate reports from large dashboards without any time limits. This allows reports to be generated seamlessly from a substantial amount of data without glitches.

Affordability

Technical abilities are not the only things that differentiate Elastic Reporting and Skedler Reports, their licensing models are also different. Elastic Reporting is part of the licensed Elastic Stack Features (formerly X-Pack) that bundles other capabilities into one package.  To deploy reporting, users must register for a Gold or Platinum license subscription (or the Free license for basic features – like CSV export). The license subscriptions can become expensive and users might end up paying for features that they don’t really need.

Skedler Reports offers a flexible and affordable licensing option.  By paying only for the reporting features that they need, users can use Skedler in conjunction with open source or third-party tools for Elasticsearch.   

Comparison

The following table summarizes the significant differences between Elastic Reporting and Skedler Reports.

Skedler Reports vs. Elastic Reporting Comparison

Conclusion

Reporting has become a critical requirement as organizations use Elastic Stack in a variety of use cases. It is crucial that users adequately evaluate and choose the best option for their organization.  The white paper discusses several scenarios for using Elastic Reporting and Skedler Reports. For more guidance on choosing the best reporting option for your use case, download the full white paper and discover the reporting solution that works best for you.

Download The White Paper

 

Skedler Update: Version 3.7 Released

Skedler v3.7 Updates

We have some exciting news for you: Skedler v3.7 is now available with new features.

What’s New in Skedler Reports v3.7

  • Support for Elasticsearch versions 5.x to 6.3.x and Kibana versions 5.x to 6.3.x
  • Support for Search Guard from 5.0.x to 6.2.x
  • Reports retain the same order of visualizations as in the Kibana/Grafana dashboard
  • REST API support
  • Ability to test email/Slack with the configured email/Slack settings

What’s New in Skedler Alerts v3.7

  • Elastic 6.3 Support

Download the latest version of Skedler from the Free Trial page: Download Skedler

For technical help, visit our Support Page for more information: Skedler Support 

Combine Amazon Translate with Elasticsearch and Skedler to build a cost-efficient multi-lingual omnichannel customer care – Part 1

Every organization provides services to customers before, during, and after a purchase. For organizations whose customers are spread all over the world, the customer care team has to handle requests in different languages. Meeting the customer satisfaction SLA for a global multi-lingual customer base without breaking the bank is a significant challenge. How can you enable your customer care team to respond to inquiries in different languages? Is it feasible for organizations to handle customer inquiries from across the globe efficiently without compromising on quality?

With Amazon's introduction of AWS Translate, combined with ELK and Skedler, you now can!

In this two-part blog post, we are going to present a system architecture to translate customer inquiries in different languages with AWS Translate, index this information in Elasticsearch 6.2.3 for fast search, visualize the data with Kibana 6.2.3, and automate reporting and alerting using Skedler.  In Part I, we will discuss the key components, architecture, and common use cases. In Part II, we will dive into the details on how to implement this architecture.

Let us begin by breaking down the business requirement into use cases:

  • Enable customer care teams (based in the US or other English-speaking countries) to respond to tickets/questions from customers all over the world, automatically translated, across multiple channels such as email and chat
  • Build a searchable index of tickets, questions, responses, translations, and customer satisfaction scores to measure key topics and customer satisfaction, and to identify topics for automation (auto-reply via chatbots or a knowledge base)
  • Use Skedler reporting and alerting to generate KPIs on the above and alert when the customer satisfaction score falls below threshold levels

The components that we need are the following:

  • AWS API Gateway
  • AWS Lambda
  • AWS Translate
  • Elasticsearch 6.2.3
  • Kibana 6.2.3
  • Skedler Reports and Alerts

System architecture:

[Architecture diagram]

A Bit about AWS Translate

At the re:invent2017 conference, Amazon Web Services presented Amazon Translate, a new machine learning – natural language processing – service.


Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation is a form of language translation automation that uses deep learning models to deliver more accurate and more natural sounding translation than traditional statistical and rule-based translation algorithms. Amazon Translate allows you to localize content – such as websites and applications – for international users, and to easily translate large volumes of text efficiently.
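Part II covers the implementation in detail, but as a quick preview, here is a minimal sketch of calling Amazon Translate from Python with boto3 (the sample inquiry text is illustrative, and AWS credentials and region are assumed to be configured):

```python
import boto3

# Assumes AWS credentials and a default region are already configured
translate = boto3.client("translate")

inquiry = "Hola, mi pedido llegó dañado. ¿Pueden ayudarme?"  # sample customer inquiry

# Let the service detect the source language and translate it to English
response = translate.translate_text(
    Text=inquiry,
    SourceLanguageCode="auto",
    TargetLanguageCode="en",
)

print(response["SourceLanguageCode"])  # detected language, e.g. "es"
print(response["TranslatedText"])      # English text for the support team
```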

Alternatives to AWS Translate include Google Cloud Translation API and Azure Translator Text.

You can find more details about AWS Translate in the following links.

> AWS official documentation: What is Amazon Translate?
> Blog post: Amazon Translate Now Generally Available
> Blog post: Introducing Amazon Translate – Real-time Language Translation
> AWS Machine Learning blog: Amazon Translate

Conclusion

In this post we presented a system architecture that performs the following:

  • Text Translation with AWS Translate
  • Index and fast search – Elasticsearch
  • Dashboard visualization – Kibana
  • Automated Customizable Reporting and Alerting – Skedler Reports and Alerts

AWS Translate + ELK + Skedler is a robust solution that helps you handle multi-lingual customer support inquiries in a high-quality, cost-efficient way.

Excited and ready to dive into the details?  In the next post (Part 2 of 2), you can see how to implement the described architecture.

How to Combine Text Analytics and Search using AWS Comprehend and Elasticsearch 6.0

How to automatically extract metadata from documents? How to index them and perform fast searches? In this post, we are going to see how to automatically extract metadata from a document using Amazon AWS Comprehend and Elasticsearch 6.0 for fast search and analysis.

The architecture we present improves the search and automatic classification of documents (using the metadata) for your organization.

Using the automatically extracted metadata you can search for documents and find what you need.

We are going to use the following components:

  • Amazon S3 bucket
  • Amazon SQS queue
  • AWS Comprehend
  • Elasticsearch 6.0 and Kibana 6.0
  • Skedler Reports and Alerts

Architecture

[Architecture diagram: AWS Comprehend and Elasticsearch]

Examples of applications:

  • Voice of customer analytics: You can use Amazon Comprehend to analyze customer interactions in the form of documents, support emails, online comments, etc., and discover what factors drive the most positive and negative experiences. You can then use these insights to improve your products and services.
  • Semantic search: You can use Amazon Comprehend to provide a better search experience by enabling your search engine to index key phrases, entities, and sentiment. This enables you to focus the search on the intent and the context of the articles instead of basic keywords.
  • Knowledge management and discovery: You can use Amazon Comprehend to organize and categorize your documents by topic for easier discovery, and then personalize content recommendations for readers by recommending other articles related to the same topic.

 

When we talk about metadata, I like the following definition:

Metadata summarizes basic information about data, which can make finding and working with particular instances of data easier. For example, author, date created, date modified, and file size are examples of very basic document metadata. Having the ability to filter through that metadata makes it much easier for someone to locate a specific document.

We are going to focus on the following metadata:

  • Document content type (PDF, Plain Text, HTML, Docx)
  • Document dominant language
  • Document entities
  • Key phrases
  • Sentiment
  • Document length
  • Country of origin of the document (derived from the uploader's IP address)

 

Amazon S3 will be the main document store. Once a document has been uploaded to S3 (you can easily use the AWS SDK to upload a document to S3 from your application), a notification is sent to an SQS queue and then consumed by a consumer.

The consumer gets the uploaded document and detects the entities, key phrases, and sentiment using AWS Comprehend. Then it indexes the document to Elasticsearch. We use the Elasticsearch ingest plugins, the Attachment Processor and the Geoip Processor, to extract the remaining metadata (more details below).

Here are the main steps performed in the process:

  1. Upload a document to S3 bucket
  2. Event notification from S3 to an SQS queue
  3. Event consumed by a consumer
  4. Entities/key phrases/sentiment detection using AWS Comprehend
  5. Index to Elasticsearch
  6. ES ingestion pre-processing: extract document metadata using the Attachment and Geoip Processor plugins
  7. Search in Elasticsearch by entities/sentiment/key phrases/language/content type/source country and full-text search
  8. Use Kibana for dashboard and search
  9. Use Skedler and Alerts for reporting, monitoring and alerting

In the example, we used AWS S3 as document storage. But you could extend the architecture and use the following:

  • SharePoint: create an event receiver and once a document has been uploaded extract the metadata and index it to Elasticsearch. Then search and get the document on SharePoint
  • Box, Dropbox and Google Drive: extract the metadata from the document stored in a folder and then easily search for them
  • Similar object storage (e.g. Azure Blob Storage)

 

Event notification

When a document has been uploaded to the S3 bucket, a message will be sent to an Amazon SQS queue. You can read more about how to configure the S3 bucket and read the queue programmatically here: Configuring Amazon S3 Event Notifications.

This is how a notification message from S3 looks. The information we need is the sourceIPAddress and the object key.

Consume messages from Amazon SQS queue

Now that the S3 bucket has been configured, when a document is uploaded to the bucket a notification will be sent to the SQS queue. We are going to build a consumer that will read this message and perform the entities/key phrases/sentiment detection using AWS Comprehend. You can optionally read a set of messages (change the MaxNumberOfMessages parameter) from the queue and run the task against a set of documents (batch processing).

With this code you can read the messages from an SQS queue, fetch the bucket and key (used in S3) of the uploaded document, and use them to invoke AWS Comprehend for the metadata detection task:

We will download the uploaded document from S3.
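Here is a minimal sketch of such a consumer (the queue URL is a placeholder; the S3 event fields are the standard ones from Amazon's event notification format):

```python
import json
import urllib.parse

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/document-uploads"  # placeholder

def poll_uploaded_documents():
    """Read S3 event notifications from SQS and yield (bucket, key, source_ip, raw_bytes)."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,   # raise or lower this for batch processing
        WaitTimeSeconds=20,       # long polling
    )
    for message in response.get("Messages", []):
        body = json.loads(message["Body"])
        for record in body.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            source_ip = record["requestParameters"]["sourceIPAddress"]

            # Download the uploaded document from S3
            raw_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            yield bucket, key, source_ip, raw_bytes

        # Remove the message from the queue once it has been processed
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```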

AWS Comprehend

Amazon Comprehend is a new AWS service presented at re:Invent 2017.
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. Amazon Comprehend identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is, and automatically organizes a collection of text files by topic. – AWS Service Page


It analyzes text and tells you what it finds, starting with the language, from Afrikaans to Yoruba, with 98 more in between. It can identify different types of entities (people, places, brands, products, and so forth), key phrases, sentiment (positive, negative, mixed, or neutral), and extract key phrases, all from a text in English or Spanish. Finally, Comprehend‘s topic modeling service extracts topics from large sets of documents for analysis or topic-based grouping. – Jeff Barr – Amazon Comprehend – Continuously Trained Natural Language Processing.

Instead of AWS Comprehend, you can use a similar service to perform natural language processing, such as the Google Cloud Natural Language API or the Microsoft Azure Text Analytics API.

 

 

Entities Detection

With this code, we can invoke the entities detection of AWS Comprehend. We will use the object key to download the object from S3.

Once you have downloaded the document, invoke the detect_entities method of AWS Comprehend.
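A minimal sketch, assuming the document text has already been extracted and is in English (Comprehend also enforces a per-request text size limit, so the text is truncated here):

```python
import boto3

comprehend = boto3.client("comprehend")

def detect_entities(text, language_code="en"):
    """Return the entities (people, places, brands, ...) found in the text."""
    response = comprehend.detect_entities(Text=text[:4500], LanguageCode=language_code)
    # Each entity carries its text, a type such as PERSON or LOCATION, and a confidence score
    return [
        {"text": e["Text"], "type": e["Type"], "score": e["Score"]}
        for e in response["Entities"]
    ]
```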

Key phrases

To extract the key phrases use the detect_key_phrases method of AWS Comprehend.
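Along the same lines (same client and truncation assumptions as above):

```python
def detect_key_phrases(text, language_code="en"):
    """Return the key phrases detected by AWS Comprehend."""
    response = comprehend.detect_key_phrases(Text=text[:4500], LanguageCode=language_code)
    return [{"text": p["Text"], "score": p["Score"]} for p in response["KeyPhrases"]]
```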

Sentiment

To extract the sentiment (positive, negative, neutral) use the detect_sentiment method of AWS Comprehend.
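Again, a minimal sketch with the same assumptions:

```python
def detect_sentiment(text, language_code="en"):
    """Return the overall sentiment: POSITIVE, NEGATIVE, NEUTRAL, or MIXED."""
    response = comprehend.detect_sentiment(Text=text[:4500], LanguageCode=language_code)
    return response["Sentiment"]
```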

Index to Elasticsearch

Given a document, we now have a set of metadata that identifies it. Next, we index this metadata to Elasticsearch and use a pipeline to extract the remaining metadata. To do so, I created a new index called library and a new type called document.

Since we are going to use Elasticsearch 6.0 and Kibana 6.0, I suggest you read the following resource:

The document type we are going to create will have the following properties:

  • title: the title of the document (s3 key)
  • data: the base64 encoding of the document (used by the Attachment plugin to extract metadata)
  • ip: field that will contain the IP address of the user who uploaded the document (so we can extract the location details)
  • entities: the list of entities detected by AWS Comprehend
  • key phrases: the list of key phrases detected by AWS Comprehend
  • sentiment: the sentiment of the document detected by AWS Comprehend
  • s3Location: link to the document in the S3 bucket

Create a new index:
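With the Elasticsearch Python client this is a one-liner (the local host URL is an assumption about your environment):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # assumed local cluster

es.indices.create(index="library")
```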

Create a new mapping. As you may notice, in ES 6.0 the string type has been replaced by the text and keyword types (see String type in ES 6.0).
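A sketch of the mapping for the document type, following the field list above (field names containing spaces are written with underscores here, and the exact field types, such as keyword vs. text, are a design choice rather than something mandated by the architecture):

```python
mapping = {
    "properties": {
        "title": {"type": "text"},
        "data": {"type": "binary"},        # base64-encoded document, consumed by the attachment processor
        "ip": {"type": "ip"},
        "entities": {"type": "keyword"},
        "key_phrases": {"type": "text"},
        "sentiment": {"type": "keyword"},
        "s3Location": {"type": "keyword"},
    }
}

es.indices.put_mapping(index="library", doc_type="document", body=mapping)
```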

To pre-process documents before indexing them, we define a pipeline that specifies a series of processors. Each processor transforms the document in some way. For example, you may have a pipeline that consists of one processor that removes a field from the document followed by another processor that renames a field. Our pipeline will extract the document metadata (from the base64-encoded content) and the location information from the IP address.

The attachment processor uses the Ingest Attachment plugin and the geoip processor uses the Ingest Geoip plugin.
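A sketch of such a pipeline definition (the pipeline id is arbitrary; the data and ip fields match the mapping above):

```python
pipeline = {
    "description": "Extract document metadata and resolve the uploader location",
    "processors": [
        {"attachment": {"field": "data"}},   # adds content type, language, content length, extracted text
        {"geoip": {"field": "ip"}},          # adds country/city details derived from the source IP address
    ],
}

es.ingest.put_pipeline(id="document_metadata", body=pipeline)
```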

Read more about ingestion and pipeline here: Ingest Node, Pipeline Definition.

If you want, you can write your custom pre-processor and invoke AWS Comprehend in the ingestion phase: Writing Your Own Ingest Processor for Elasticsearch.

We can now index a new document:
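A sketch of indexing one document through the pipeline; the variables (bucket, key, source_ip, raw_bytes, and the Comprehend results) refer to the earlier sketches:

```python
import base64

doc = {
    "title": key,                                      # the S3 object key
    "data": base64.b64encode(raw_bytes).decode(),      # document bytes, base64-encoded for the attachment processor
    "ip": source_ip,
    "entities": [e["text"] for e in entities],
    "key_phrases": [p["text"] for p in key_phrases],
    "sentiment": sentiment,
    "s3Location": "s3://{}/{}".format(bucket, key),
}

es.index(index="library", doc_type="document", body=doc, pipeline="document_metadata")
```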

This is how an indexed document looks. Notice the attachment and geoip sections: we have the language, content type, length, and user location details.

Visualize, Report, and Monitor

With Kibana you can create a set of visualizations/dashboards to search for documents by entities and to monitor index metrics (like the number of documents by language, the most contributing countries, documents by content type, and so on).

Using Skedler, an easy to use report scheduling and distribution application for Elasticsearch-Kibana-Grafana, you can centrally schedule and distribute custom reports from Kibana Dashboards and Saved Searches as hourly/daily/weekly/monthly PDF, XLS or PNG reports to various stakeholders. If you want to read more about it: Skedler Overview.

Example of Kibana dashboard:

Number of documents by language and countries that upload more documents.

Countries by the number of uploaded documents.


If you want to get notified when something happens in your index, for example, a certain entity is detected or the number of documents by country or documents by language reaches a certain value, you can use Alerts. It simplifies how you create and manage alert rules for Elasticsearch and it provides a flexible approach to notification (it supports multiple notifications, from Email to Slack and Webhook).

Conclusion

In this post we have seen how to use Elasticsearch as the search engine for document metadata. You can extend your system by adding this pipeline to automatically extract the document metadata and index it to Elasticsearch for fast (semantic) search.

By automatically extracting the metadata from your documents you can easily classify and search (Knowledge management and discovery) for them by content, entities, content type, dominant content language and source country (from where the document has been uploaded).

I ran this demo using the following environment configurations:

  • Elasticsearch and Kibana 6.0.0
  • Python 3.4 and AWS SDK Boto3 1.4.8
  • Ubuntu 14.04
  • Skedler Reports and Alerts

Machine learning with Amazon Rekognition and Elasticsearch

In this post we are going to see how to build a machine learning system to perform the image recognition task and use Elasticsearch as a search engine to search for the labels identified within the images. The image recognition task is the process of identifying and detecting an object or a feature in a digital image or video.
The components that we will use are the following:

  • Elasticsearch
  • Kibana
  • Skedler Reports and Alerts
  • Amazon S3 bucket
  • Amazon Simple Queue Service (you can optionally replace this with AWS Lambda)
  • Amazon Rekognition

The idea is to build a system that will run the image recognition task against images stored in an S3 bucket and index the results (a set of labels and confidence percentages) to Elasticsearch. So we are going to use Elasticsearch as the search engine for the labels found in the images.

If you are not familiar with one or more of the items listed above, I suggest you read more about them here:

Amazon Rekognition is a service that makes it easy to add image analysis to your applications. With Rekognition, you can detect objects, scenes, faces; recognize celebrities; and identify inappropriate content in images.

These are the main steps performed in the process:

  • Upload an image to S3 bucket
  • Event notification from S3 to a SQS queue
  • Event consumed by a consumer
  • Image recognition on the image using AWS Rekognition
  • The result of the labels detection is indexed in Elasticsearch
  • Search in Elasticsearch by labels
  • Get the results from Elasticsearch and get the images from S3
  • Use Kibana for dashboard and search
  • Use Skedler and Alerts for reporting, monitoring and alerting

Architecture:

[Architecture diagram]

Use Case

This system architecture can be useful when you need to detect the labels in a picture and perform fast searches.
Examples of applications:

  • Smart photo gallery: find things in your photo. Detect labels in an automatic way and use Elasticsearch to search them
  • Product Catalog: automatically classify the products of a catalog. Take photos of a product and get it classified
  • Content moderation: get notified when NSFW content is uploaded
  • Accessibility camera app: help people with disabilities to see and take pictures

Event notification

When an image is uploaded to the S3 bucket, a message will be stored in an Amazon SQS queue. You can read more about how to configure the S3 bucket and read the queue programmatically here: Configuring Amazon S3 Event Notifications.
This is how a notification message from S3 looks (we need the bucket and key information):

Consume messages from Amazon SQS queue

Now that the S3 bucket is configured, when an image is uploaded to the bucket an event will be notified and a message saved to the SQS queue. We are going to build a consumer that will read this message and perform the image label detection using AWS Rekognition. You can optionally read a set of messages (change the MaxNumberOfMessages parameter) from the queue and run the task against a set of images (batch processing), or use an AWS Lambda notification (instead of SQS).

With this code you can read the messages from an SQS queue, fetch the bucket and key (used in S3) of the uploaded file, and use them to invoke AWS Rekognition for the label detection task (the consumer follows the same pattern sketched in the previous post):

Image recognition task

Now that we have the key of the uploaded image, we can use AWS Rekognition to run the image recognition task.
The following function invokes the detect_labels method to get the labels of the image. It returns a dictionary with the identified labels and their confidence percentages.
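A minimal sketch of that function (bucket and key come from the SQS message; the label count and confidence threshold are illustrative defaults):

```python
import boto3

rekognition = boto3.client("rekognition")

def detect_labels(bucket, key, max_labels=10, min_confidence=70):
    """Run label detection on an image stored in S3 and return {label: confidence}."""
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )
    return {label["Name"]: label["Confidence"] for label in response["Labels"]}
```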

Index to Elasticsearch

Given an image, we now have a set of labels that identify it. We now want to index these labels to Elasticsearch. To do so, I created a new index called imagerepository and a new type called image.

The image type we are going to create will have the following properties:

  • title: the title of the image
  • s3_location: the link to the S3 resource
  • labels: field that will contain the result of the detection task

For the labels property I used the Nested datatype. It allows arrays of objects to be indexed and queried independently of each other.
You can read more about it here:

We will not store the image in Elasticsearch but just the URL of the image within the S3 bucket.

New Index:
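With the Elasticsearch Python client (the local host URL is an assumption about your environment):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # assumed local cluster

es.indices.create(index="imagerepository")
```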

New type:
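A sketch of the image mapping; the label and confidence field names inside the nested labels objects are my own naming, not something fixed by the architecture:

```python
mapping = {
    "properties": {
        "title": {"type": "text"},
        "s3_location": {"type": "keyword"},
        "labels": {
            "type": "nested",                 # lets each label object be queried independently
            "properties": {
                "label": {"type": "keyword"},
                "confidence": {"type": "float"},
            },
        },
    }
}

es.indices.put_mapping(index="imagerepository", doc_type="image", body=mapping)
```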

You can now try to post a dummy document:

We can index a new document using the Elasticsearch Python SDK.
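A sketch that ties the Rekognition result to the mapping above (detect_labels, bucket, and key refer to the earlier sketches):

```python
labels = detect_labels(bucket, key)   # {label_name: confidence_percentage}

doc = {
    "title": key,
    "s3_location": "s3://{}/{}".format(bucket, key),
    "labels": [
        {"label": name, "confidence": confidence}
        for name, confidence in labels.items()
    ],
}

es.index(index="imagerepository", doc_type="image", body=doc)
```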

Search

Now that we have indexed our documents in Elasticsearch, we can search for them.
These are examples of queries we can run:

  • Give me all the images that represent this object (searching by label= object_name)
  • What does this image (give the title) represent?
  • Give me all the images that represent this object with at least 90% of probability (search by label= object_name and confidence>= 0.9)

I wrote some Sense queries.

Images that represent a waterfall:
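The same query expressed through the Python client, as a sketch (field names follow the mapping above; the body works unchanged in Sense):

```python
query = {
    "query": {
        "nested": {
            "path": "labels",
            "query": {"match": {"labels.label": "waterfall"}},
        }
    }
}

results = es.search(index="imagerepository", doc_type="image", body=query)
```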

Images that represent a pizza with at least 90% of probability:
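And the confidence-filtered version (Rekognition reports confidence as a percentage, so 90 rather than 0.9 in this sketch):

```python
query = {
    "query": {
        "nested": {
            "path": "labels",
            "query": {
                "bool": {
                    "must": [
                        {"match": {"labels.label": "pizza"}},
                        {"range": {"labels.confidence": {"gte": 90}}},
                    ]
                }
            },
        }
    }
}

results = es.search(index="imagerepository", doc_type="image", body=query)
```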

Visualize and monitor

With Kibana you can create a set of visualizations/dashboards to search for images by label and to monitor index metrics (like the number of pictures by label, the most common labels, and so on).
Using Skedler, an easy to use report scheduling and distribution application for Elasticsearch-Kibana-Grafana, you can centrally schedule and distribute custom reports from Kibana Dashboards and Saved Searches as hourly/daily/weekly/monthly PDF, XLS or PNG reports to various stakeholders. If you want to read more about it: Skedler Review.

With Kibana you can create a dashboard to visualize the number of labels. Here you can see an example with bar/pie charts and a tag cloud visualization (you can easily schedule and distribute the dashboard with Skedler).

If you want to get notified when something happens in your index (for example, a certain label is detected, or the number of labels reaches a certain value), you can use Alerts. It simplifies how you create and manage alert rules for Elasticsearch and it provides a flexible approach to notification (it supports multiple notification channels, from Email to Slack and Webhook).

Conclusion

In this post we have seen how to combine the power of Elasticsearch's search with the powerful machine learning service AWS Rekognition. The process pipeline also includes an S3 bucket (where the images are stored) and an SQS queue used to receive event notifications when a new image is stored to S3 (and is ready for the image label detection task). This use case shows how to use Elasticsearch as a search engine for more than just logs.

If you are familiar with AWS Lambda, you can replace the SQS queue and the consumer with a function (S3 notification supports AWS Lambda as a destination) and call the AWS Rekognition service from your Lambda function. Keep in mind that with Lambda you have a 5-minute execution limit and you can't invoke the function on a batch of images (so you will pay for a Lambda execution for each image).

I ran this demo using the following environment configuration:

  • Elasticsearch 5.0.0
  • Python 3.4 and Boto3 AWS SDK
  • Ubuntu 14.04

A similar use case (with Tensorflow instead of AWS Rekognition) was presented at the Elastic Stack in a Day event in Milan, Italy (June 2017); you can find the slides here: Tensorflow and Elasticsearch: Image Recognition.

Introducing Skedler Custom Reporting (Formerly Report Designer) for Elasticsearch Kibana (ELK)

Give Me Some Real Reports!

When it comes to reporting for ELK, users are frustrated with the expensive packs and do-it-yourself modules. Reports from these approaches are rudimentary and nothing more than basic screen grabs of Kibana dashboards. They lack customization, charts get stretched, and visuals are laid out randomly based on the Kibana dashboard. And if you need to generate large reports, you might as well forget about it since none of these solutions scale! Users crave reports that deliver clear insights from their ELK based log/search/SIEM analytics applications right in their inbox.

Create Intuitive, Custom Reports with Data Stories

Today, we are pleased to announce the Skedler Reports Enterprise Edition (Formerly Designer Edition) that offers organizations a new way to unleash the value of Elasticsearch (ELK) data. This innovative solution makes it easy to create custom reports that present the data in an intuitive fashion to the users. With just a few clicks, you can design report templates, create data stories, and automate distribution of reports that enable users to make quick decisions.

See Skedler in Action

[Embedded video: youtube.com/watch?v=9kb0aU0cKmU]

See a Sample Report

Custom Elasticsearch Kibana Report | Skedler Enterprise Edition (Formerly Designer Edition) from Skedler

Add Custom Reporting to Skedler Premier Edition

Skedler Reports Enterprise Edition (Formerly Designer Edition) is available as a seamless add-on module to the Premier Edition. It is designed for organizations that strive to deliver insightful data stories to users and empower them to make quick decisions. Skedler Reports Enterprise Edition (Formerly Designer Edition) is licensed separately and can be activated instantly with the appropriate license key.

Get a Demo of the Real Reporting for ELK Stack

The Skedler Reports Enterprise Edition (Formerly Designer Edition) Preview is available starting today. Schedule a demo to see the powerful custom reporting capabilities that Skedler offers. Explore how you can deliver actionable custom ELK reports to users with Skedler.

GET A DEMO

 

Skedler Review: The Report Scheduler Solution for Kibana

Matteo Zuccon is a software developer with a passion for web development (RESTful services, JS frameworks), Elasticsearch, Spark, MongoDB, and agile processes. He runs whiletrue.run. Follow him on Twitter @matteo_zuccon

With Kibana you can create intuitive charts and dashboards. Since August 2016 you can export your dashboards in PDF format thanks to Reporting. With Elastic version 5, Reporting has been integrated into X-Pack for the Premium and Enterprise subscriptions.

Recently I tried Skedler, an easy to use report scheduling and distribution application for Kibana that allows you to centrally schedule and distribute Kibana Dashboards and Saved Searches as hourly/daily/weekly/monthly PDF, XLS or PNG reports to various stakeholders.

Skedler is a standalone app that allows you to utilize a new dashboard where you can manage Kibana reporting tasks (schedule, dashboards and saved search). Right now there are four different price plans (from free to premium edition).

In this post I am going to show you how to install Skedler (on Ubuntu) and how to export/schedule a Kibana dashboard.

Install Pre-requisites

Install .deb package

Download the latest skedler-xg.deb file and extract it.  If you have previously installed the .deb package, remove it before installing the latest version.

Install .tar.gz package

Download the latest skedler-xg.tar.gz file and extract it.

Configure your options for Skedler v5

Skedler Reports has a number of configuration options that can be defined in its reporting.yml file (located in the skedler folder). In the reporting.yml file, you can configure options to run Skedler in an air-gapped environment, change the port number, define the hostname, and change the location of the Skedler database and log files.

Read more about the reporting.yml configuration options.

Start Skedler for .deb

To start Skedler, the command is:

To check status, the command is:

To stop Skedler, the command is:

Start Skedler for .tar.gz

To run Skedler manually, the command is:

To run Skedler as a service, the commands are:

To start Skedler, the command is:

To check status, the command is:

To stop Skedler, the command is:

Access Skedler Reports

The default URL for accessing Skedler Reports v5 is:

http://localhost:3005/

If you had made configuration changes in the reporting.yml, then the Skedler URL is of the following format:

http://<hostname or your domainurl>:3005

or

http://<hostname or your domain url>:<port number>

Login to Skedler Reports

By default, you will see the Create an account UI.  Enter your email to create an administrator account in Skedler Reports. Click on Continue.

Note: If you have configured an email address and password in reporting.yml, then you can skip the create account step and proceed to Login.

An account will be created and you will be redirected to the Login page.

Sign in using the following credentials:

Username: <your email address> (or the email address you configured in reporting.yml)
Password: admin (or the password you configured in reporting.yml)

Click Sign in.

You will see the Reports Dashboard after logging in to the Skedler account.

In this post, I demonstrated how to install and configure Skedler and how to create a simple schedule for our Kibana dashboard. My overall impression of Skedler is that it is a powerful application to use side-by-side with Kibana that allows you to deliver reports directly to your stakeholders.

These are the main benefits that Skedler offers:

  • It’s easy to install
  • Linux, Windows, and macOS support (it runs on a Node.js server)
  • Reports are generated locally (your data isn’t sent to the cloud or Skedler servers)
  • Competitive price plans
  • Supports Kibana and Grafana.
  • Automatically discovers your existing Kibana Dashboards and Saved Searches (so you can easily use Skedler in any environment with no new stack installation needed)
  • It lets you centrally schedule and manage who gets which reports and when they get them
  • Allows for hourly, weekly, monthly, and yearly schedules
  • Generates XLS and PNG reports besides PDF, as opposed to Elastic Reporting, which only supports PDF.

I strongly recommend that you try Skedler because it can help you automatically deliver reports to your stakeholders, and it integrates into your ELK environment without any modification to your stack.

Click here for the free trial option.

You can find more resources about Skedler here:

The Top 3 ELK Stack Tools Every Business Intelligence Analyst Needs in 2017

A version of this post, updated for 2018, can be found here: The Top 5 ELK Stack+ Tools Every Business Intelligence Analyst Needs.

The world's most popular log management platform, ELK Stack, has a recent statistic that reflects its nifty, modernized capabilities: it is downloaded 500,000 times each month. So what makes ELK Stack and ELK Stack tools so attractive? In many cases, it delivers what has really been needed in the SaaS log analytics space: IT companies are favoring open source products more and more. Elasticsearch, based on the Lucene search engine, is a NoSQL database at the heart of a log pipeline that accepts inputs from various sources, executes transformations, then exports data to designated targets. It also offers enhanced customizability, which is a key preference nowadays, since program tweaking is more lucrative and stimulating for many engineers. This is coupled with ELK's increased interoperability, now a practically indispensable feature, since most businesses don't want to be limited by proprietary data formats.

ELK Stack tools that build on those impressive elements will elevate data analysis just that little bit further; depending on what you want to do with it, of course.

Logstash

Elite tool Logstash is well-known for its intake, processing, and output capabilities. It's mainly intended for organizing and searching log files, but works effectively for cleaning and streaming big data from all sorts of sources into a comprehensive database, including metrics, web applications, data stores, and various AWS services. Logstash also carries impressive input plugins such as cloudwatch and graphite, allowing you to sculpt your intelligence to be as easy to work with as possible. And, as data travels from source to store, Logstash filters identify named fields to accelerate your analysis, deciphering geo coordinates from IP addresses and anonymizing PII data. It even derives structure from seemingly unstructured data.

Kibana 5

Analysis program Kibana 5.0 boasts a wealth of new refurbishments for pioneering intelligence surveying. Apart from amplified functionalities such as faster rendering, less CPU usage, and elevated data and index handling, Kibana 5.0 has enriched visualizations with interactive platforms, leveraging the aggregation capabilities of Elasticsearch. Space and time auditing are a crucial part of Kibana's makeup: the map service empowers you to plot geospatial data with custom location data on a schematic of your selection, whilst the time series features allow you to perform advanced analysis by describing queries and transformations.

Skedler

ELK Stack reporting tool Skedler combines all the automated processes you'd never dream you could have within one unit. Fundamentally, it ups your speed-to-market auditing with cutting-edge scheduling, which Kibana alone does not offer; serving as a single system for both interactive analysis and reporting. Skedler methodically picks up your existing dashboards in the server for cataloging, whilst also enabling you to create filters, refine specific recipients, and choose file folders to use whilst scheduling. Additionally, Skedler automatically applies the prerequisite filters when generating reports, preserving them as defined, and encompasses high-resolution PDF and PNG options to incorporate in reporting, which in turn eliminates the need for redundant reporting systems.

There you have it, the top ELK stack tools no business intelligence analyst should ever be without!

Ready to start streamlining your analysis and start reporting with more stability? Right now, we’re offering a free trial.

Are You Wasting Time Manually Sending Kibana Reports?

Automated processes are, invariably, becoming more and more integral to our everyday lives, both in and out of the office. They've replaced much of the manual workforce and have improved systematic procedures, which otherwise would be at the mercy of various human error elements as well as higher risks of data breaches. These concerns, along with the time-consuming labour of manual reporting, are key issues we no longer need to worry about by virtue of process automation; Kibana is one of those favorable products.

Focus on What Matters

As a result of businesses adopting bots as part of our everyday processes, we're left with the far more creative aspects of information science (which automation hasn't quite caught up with yet). Naturally, Elasticsearch's aesthetically enhanced data delivery is one of its chief selling points: users are able to explore uncharted data with clear-cut digital graphics at their very disposal. This significant upgrade in data technology has allowed us to possess more varied and complex insights; it's more exciting now than it has ever been before.

In contrast, however, tedious tasks such as email deliveries of reports to customers, compliance managers and other stakeholders remain arduous and time-consuming; deterring attention from more stimulating in-depth data analysis. What we know to be necessary is for analysts to have the time available to devote themselves to exploring Kibana's analytics, instead of undergoing mundane processes such as manual spreadsheet creation, generation, email exporting, and distributing.

Automate Kibana Reports

Perhaps it’s possible that you’ve already started utilizing Kibana without realizing the perks of automated scheduling. Luckily, Skedler can completely undertake those prosaic tasks, at an affordable price. As an automated scheme which meets full compliance and operations requirements, Skedler allows your peers, customers and other stakeholders to be kept informed in a virtually effortless and secure way. Comprehensive exporting preferences such as PDF, XLS and PNG are also serviceable; allowing you the luxury of consigning instant or scheduled report generation in the format you desire.

Additionally, Skedler's reporting motions are facilitated through its prestigious dashboard system, which automatically discovers your existing Kibana dashboards and saved searches to make them available for reporting – again, saving you time creating, scheduling and sending Kibana reports. All your filtered reporting and data charting is available on a single, versatile platform; meaning you won't spend extensive amounts of time searching through your outgoing email reports for a specific item.

Skedler simply allows you to examine all of your criteria through one umbrella server with clear functionalities to separate the stunning data visualization deliveries, and the slightly less exciting archive of manual spreadsheet generation and handling for other departments, which it can totally manage by itself.

Ready to start saving time by creating, scheduling and distributing Kibana reports automatically? Try Skedler for free.

3 Apps to Get the Most Out of Kibana 5.0

A new financial quarter starts, full-scale data appraisals are once again at the forefront for every business’ sales agenda. Luckily, Elasticsearch’s open source tool Kibana 5.0 is the talk of the town – and for good reason.

Improvements since version 4.0 are unequivocally noticeable. Its new and far sleeker user interface not only wows in terms of visuals (note the subsidiary menu that minimizes when not in use), but demonstrates impressive UI capabilities that allow you to reach data far more effectively. The new CSV upload, for example, has the potential to catch a much wider data spread, transforming it to index mapping that's effortlessly navigable. Its new management tab allows you to view the history of the files with associated data, as well as the Elasticsearch indexes where you actively send log files.

This version's huge boost in code architecture grants the potential for more augmentations than ever, especially with self-contained plugins and open-ended code tweaking, resulting in several lucrative alpha and beta versions. And it's essentially allowed us the privilege to now ask: what kind of data insight does my company really need, and which app is best to harness it?

1. Logz.io

Logz.io has fundamentally enriched Kibana with two major touches: increased data security, and more serviceable enterprise sequences as a result. Take their access user tokens, for example, which enable you to share visualizations and dashboards with those who aren't necessarily Logz.io users, rather than relying on the URL share function. You can pretty much be as selective with your data as you so please; specific and cross-referenced filter searches are an added function of the tokens. This makes it easy to attach pre-saved filters when back in Kibana.

2. Skedler

Skedler has specifically focused on developing reporting capabilities with actionables to perform on data, effectively meaning you can do more with it all in a proactive way. Scheduling is an integral part of this program's faculty, as it works with your existing database searches and dashboards, allowing you to organize dispatches daily, weekly, monthly and so on. Again, you're able to apply specific filters as and when you're scheduling, making your reports as customized as needed when sending for peer review.

3. Predix

Predix has established itself as a strong contender for effective data trend sweeps, such as HTTP responses, latencies and visitors – and you're able to debug apps at the same time. Combining this with Kibana's exhaustive data visualizations and pragmatic dashboard makes controlling and managing your log data not only highly secure, but also allows you to become more prognostic when forecasting future data.

Ready to save hours generating, scheduling and distributing PDF and XLS reports from your Elasticsearch Kibana (ELK) application to your team, customers and other stakeholders? Try Skedler for free.
