What’s New in Skedler

The release of Skedler in November came with many improvements, such as auto-scaling support for Grafana dashboard layout reports and an updated user interface. In the December release, we added more features, such as auto-scaling support for charts in Kibana and the option to configure a proxy URL. We are very proud of these releases, but the team is always looking for new ways to make Skedler better for you. We are already improving the product further and want you to know about our newly added features and UI. So, before we end the year, we want to update you on the features we released and walk through some of the important ones in this blog.

Halt your reporting schedules for specific days

Want to make sure you are not sending your reports on a holiday? We've got you covered! With our new Weekday feature, you can now choose the days on which you do not wish to schedule reports.

Weekday feature

Autoscaling support for charts in Kibana

Skedler now supports autoscaling of charts in Kibana. You do not have to worry about your reports being messy or missing out on important information when you add more data to your chart because Skedler will automatically take care of that.

Autoscaling in Kibana

Added auto-scaling support for Grafana dashboard layout reports

You can now stop worrying about your graphs and modules getting distorted in your reports as Skedler has added auto-scaling support for generating reports from Grafana Dashboard.

Autoscaling in Grafana

Added the ability for Super Admin users to change their email ID

Super Admins can now update their email ID in their profile. You can add a new email ID in place of the one you used when you opened your account.

Super Admin User

Generate reports using the Grafana dashboard timezone

You can now generate reports in Skedler according to your Grafana time window by selecting “use dashboard time” in Skedler. You no longer have to worry about missing or skipping any reports.

Dashboard Timezone

Support for fiscal year time window in Grafana dashboards

Grafana 8.2 has a configurable fiscal year option in the time picker. This option enables fiscal quarters as time ranges for business-focused and executive dashboards. Skedler now supports this feature too!

Fiscal Time Year Window

Added support for Outlook SMTP

Skedler now supports Outlook, so you can set up Outlook as a notification channel in your Skedler account.

Outlook SMTP

These are just some of the new features of Skedler. For more details on these features, do check out our release notes.

If you would like to stay updated on the latest release news or know about upcoming features, please feel free to reach out to the team and keep an eye out for our monthly newsletters.

Installing and Configuring Skedler Reports as a Kibana Plugin in an Elasticsearch and Kibana Environment Using Docker Compose

Introduction

If you are using the ELK stack, you can now install Skedler as a Kibana plugin. The Skedler Reports plugin is available for Kibana versions 6.5.x through 7.6.x.

Let’s take a look at the steps to install Skedler Reports as a Kibana plugin.

Prerequisites:

  1. A Linux machine
  2. Docker installed
  3. Docker Compose installed

Let’s get started!

Log in to your Linux machine, update the package repository, and install Docker and Docker Compose.
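For example, on a Debian or Ubuntu machine (a sketch; package names vary by distribution):

```bash
sudo apt-get update
sudo apt-get install -y docker.io docker-compose
```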

Setting Up Skedler Reports

Create a directory, say skedlerplugin.

Now, create a Docker Compose file for Skedler Reports. You will also need a Skedler Reports configuration file, reporting.yml, described next.
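Here is a minimal sketch of a docker-compose.yml for Skedler Reports. The image name and config mount path are assumptions; check the Skedler documentation for the exact values (port 3000 is the Skedler Reports UI):

```yaml
version: "2.4"
services:
  reports:
    image: skedler/reports:latest                          # assumed image name
    container_name: reports
    ports:
      - "3000:3000"                                        # Skedler Reports UI
    volumes:
      - ./reporting.yml:/opt/skedler/config/reporting.yml  # assumed config path
    restart: unless-stopped
```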

Create the Skedler Reports configuration file, reporting.yml, and paste in the config.

Download the reporting.yml file found here

Setting Up Elasticsearch

You also need to create an Elasticsearch configuration file, elasticsearch.yml. A Docker Compose service block for Elasticsearch is sketched below.
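(A sketch to add under services:; the version tag is chosen to match a Kibana version in the supported 6.5.x–7.6.x range, so adjust it to yours.)

```yaml
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    container_name: elasticsearch
    environment:
      - discovery.type=single-node     # single-node development setup
    ports:
      - "9200:9200"
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
```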

Create an Elasticsearch configuration file, elasticsearch.yml, and paste in the config below.
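A minimal sketch, mirroring the defaults shipped with the official image:

```yaml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
```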

Setting Up Skedler Reports as Kibana Plugin

Create a directory inside skedlerplugin, say kibanaconfig.

Now, create a Dockerfile for Kibana inside kibanaconfig that installs the Skedler Reports plugin, as sketched below.
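(A sketch; the plugin URL is a placeholder that you will replace with the one copied in the next step.)

```dockerfile
FROM docker.elastic.co/kibana/kibana:7.6.2
# Install the Skedler Reports plugin matching this exact Kibana version
RUN bin/kibana-plugin install <skedler-reports-plugin-url>
```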

Then, copy the URL of the Skedler Reports plugin matching your exact Kibana version from here.

You also need to create a Docker Compose service block for Kibana, as sketched below.
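(A sketch to add under services:; the plugin config mount path is an assumption.)

```yaml
  kibana:
    build: ./kibanaconfig                # builds the Dockerfile above
    container_name: kibana
    ports:
      - "5601:5601"
    volumes:
      - ./kibanaconfig/kibana.yml:/usr/share/kibana/config/kibana.yml
      - ./kibanaconfig/skedler_reports.yml:/usr/share/kibana/plugins/skedler/config/skedler_reports.yml  # assumed path
    depends_on:
      - elasticsearch
```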

Create a Kibana configuration file, kibana.yml, inside the kibanaconfig folder and paste in the config below.
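A minimal sketch (on Kibana 6.x the setting is elasticsearch.url rather than elasticsearch.hosts):

```yaml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
```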

Create the Skedler Reports plugin configuration file, skedler_reports.yml, inside the kibanaconfig folder and paste in the config below.

Configure the Skedler Reports server URL in the skedler_reports_url variable. By default, the variable is set as shown below:
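(A sketch; inside the Compose network the hostname would be the service name, for example http://reports:3000.)

```yaml
skedler_reports_url: "http://localhost:3000"
```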

If the Skedler Reports server URL requires basic authentication (for example, behind Nginx), uncomment and configure skedler_username and skedler_password with the basic authentication credentials, as sketched below.
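(The variable names come from the plugin config above; the values are placeholders.)

```yaml
skedler_reports_url: "http://localhost:3000"
skedler_username: "user"          # placeholder
skedler_password: "password"      # placeholder
```

Now run the docker-compose.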

Access Skedler Reports using the IP and port, and you will see the Skedler Reports UI.

http://ip_address:3000

Access Elasticsearch using the IP and port, and you will see the Elasticsearch JSON response.

http://ip_address:9200

Access Kibana using the IP and port, and you will see the Kibana UI.

http://ip_address:5601

The composite docker-compose file will now look like the sketch below.
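(Combining the assumed snippets above into one docker-compose.yml:)

```yaml
version: "2.4"
services:
  reports:
    image: skedler/reports:latest        # assumed image name
    container_name: reports
    ports:
      - "3000:3000"
    volumes:
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    restart: unless-stopped

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml

  kibana:
    build: ./kibanaconfig
    container_name: kibana
    ports:
      - "5601:5601"
    volumes:
      - ./kibanaconfig/kibana.yml:/usr/share/kibana/config/kibana.yml
      - ./kibanaconfig/skedler_reports.yml:/usr/share/kibana/plugins/skedler/config/skedler_reports.yml
    depends_on:
      - elasticsearch
```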

You can simply bring the stack up and down with:
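```bash
docker-compose up -d    # build and start the whole stack in the background
docker-compose down     # stop and remove the containers
```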

Summary

Docker Compose is a useful tool for managing container stacks: it lets you manage all related containers with a single command.

The Best Tools for Exporting Elasticsearch Data from Kibana

As a tool for visualizing Elasticsearch data, Kibana is a perfect choice. Its UI allows you to create dashboards, searches, and visualizations in minutes and analyze your data with their help.

Despite having tons of visualizations, the open-source version of Kibana does not have advanced reporting capability. Automating the export of data into CSV, Excel, or PDF requires additional plugins.

We wrote an honest and unbiased review of the following tools that are available for exporting data directly from Elasticsearch.

  1. Flexmonster Pivot plugin for Kibana 
  2. Sentinl (for Kibana)
  3. Skedler Reports

1. Flexmonster Pivot plugin for Kibana

https://github.com/flexmonster/pivot-kibana

Flexmonster Pivot covers the need to summarize business data and display the results in a cross-table format, interactively and fast. All these Excel-like features, to which so many of you are accustomed, and its extended API will multiply your analytics results remarkably.

Though initially created as a pivot table component that can be incorporated into any app that uses JavaScript, it can serve as a part of Kibana as well. You can connect it to an Elasticsearch index, fetch the documents from it, and start exploring the data.

Pros of Flexmonster Pivot plugin for Kibana

  • Flexmonster is in line with the concept of Kibana
  • Easily embeddable pivot table for Kibana

Cons of Flexmonster Pivot plugin for Kibana

  • To automate the exporting of data on a periodic basis, you need to write your own cron job.
  • Flexmonster Pivot plugin installation is a bit tricky. 

2. Sentinl (for Kibana)

https://github.com/sirensolutions/sentinl

SENTINL extends Kibana with alerting and reporting functionality to monitor, notify, and report on data series changes using standard queries, programmable validators, and a variety of configurable actions. Think of it as a free and independent “Watcher” that also has scheduled “Reporting”.

SENTINL is also designed to simplify the process of creating and managing alerts and reports in Siren Investigate/Kibana 6.x via its native App Interface, or by using native watcher tools in Kibana 6.x+.

Pros of Sentinl

  • It’s simple to install and configure
  • Added as a Kibana plugin.

Cons of Sentinl

  • This tool supports only 6.x versions of Elasticsearch. It does not support 7.x.
  • For non-technical users, it’s difficult to use 
  • Automation requires scripting which makes it laborious

3. Skedler Reports

https://www.skedler.com/

Disclosure: Skedler Reports is one of our products.

Skedler offers a simple and easy-to-add reporting and alerting solution for Elastic Stack and Grafana. There is also a plugin for Kibana that is easy to install and use with Elasticsearch data. It’s called Skedler Reports as Kibana Plugin.

Pros of Skedler Reports

  • Simple to install, configure, and use
  • Send HTML, PDF, XLS, CSV reports on-demand or periodically via email or #slack
  • Report setup takes less than 5 minutes
  • Easy to use, no coding required

Cons of Skedler Reports

  • It requires a paid license, which includes the software and enterprise support
  • Installation is difficult for users who are not fully familiar with Elastic Stack or Grafana

What tools do you use?

Do you have to regularly export data from Kibana for external analysis or reporting purposes? Do you use any other third-party plugins? Email us about the tool at hello at skedler.com.

Episode 1 – AI Usage in Cybersecurity – is it hype/real? The Infralytics Show interview with Bharat Kandanoor, Head of Technology for Security and Cloud at Blue Ally

Shankar Radhakrishnan, Founder of Skedler, recently sat down with Bharat Kandanoor to discuss the use of Artificial Intelligence (AI) in cybersecurity. Bharat, who is the Technology Head for cybersecurity and cloud at Blue Ally, a managed service provider, was able to shed light on the intricacies of AI’s usage in cybersecurity processes. Let’s dive deep into understanding whether AI is an overhyped cybersecurity solution, how it is being used to tackle network security problems, and how AI may be able to create a better cybersecurity future for the end user.

See and listen to the Infralytics Show interview with Bharat Kandanoor

[video_embed video=”L9i4ESNEFpM” parameters=”” mp4=”” ogv=”” placeholder=”” width=”700″ height=”400″]

Is AI in Cybersecurity Overhyped or Not?

69% of enterprises believe AI will be necessary to respond to cyberattacks, with U.S.-based enterprises placing a more than 15% higher priority on AI-based cybersecurity applications and platforms than the global average when measured on a country basis. Is this level of AI adoption a response to measurable cyber threats that AI can help to remediate or is it merely an overhyped reach by firms around the world? Bharat Kandanoor tells us in our exclusive one-on-one video podcast that “Artificial Intelligence is being used as an overhyped terminology in general.” Bharat goes on to explain that “everyone expects using AI can solve lots of problems, but not necessarily can it do that.”

All in all, these AI tools will always have big drawbacks when they are treated as an overhyped, catch-all solution. Bharat explains that “AI can give valuable actionable information, but at the end of the day, it is a human who can decide if the data is an anomaly or not.” It is with this human interaction that data anomalies can be found and analyzed by a human operator who is focused on the end goal of long-term data and network protection at all times.

Using AI to Tackle Cybersecurity Problems

AI has the ability to weed through the plethora of incident response data and find a solution exponentially faster than humans are able to. With AI, you can drill deeper into your data to pull out actionable insights that can help your team work more efficiently and effectively to detect anomalies using behavior analytics, network traffic analysis, and email scanning solutions for phishing/spear phishing attacks.

Small-to-Medium Enterprises (SMEs) struggling with cybersecurity have more to lose than their data and potential profits; the loss could extend to their customers. AI-enabled technologies allow organizations of all sizes to implement a healthy security posture, from network monitoring and risk control to detecting rising cyber threats and recognizing scams. With more SMEs looking to AI as their silver-bullet solution in the face of a current global shortage of more than 3 million cybersecurity experts, SMEs can use AI to react to existing cyber threats and head off new ones.

Incorporating AI Into Your SME’s Cybersecurity Strategy

Even though SMEs believe AI will positively affect their business, uptake of AI solutions within SMEs has been slow, with just a 4% adoption rate per a 2019 report. No matter what the level of maturity is for an enterprise, it is vital that C-suite, IT, and security teams rationalize their existing technologies with solutions that can support their initiatives for a strong return on investment (ROI). Bharat explains that “It’s more of what fits into your use case and how you can make it work” when it comes to incorporating AI solutions into your cybersecurity plans. One AI solution may work for one SME where another may not. It’s just a matter of researching, testing, and finding the right solution for you.

Don’t forget to subscribe to the Infralytics Show Channel and review us because we want to help others like you improve their IT operations, security operations and streamline business operations. If you want to learn more about Skedler and how we can help you just go to Skedler.com where you’ll find tons of information on Kibana, Grafana, and Elastic Stack reporting. You can also download a free trial with us, so you can see how it all works at skedler.com/download. Thanks for joining and we’ll see you next episode.

Tabular Reports from Elastic Stack – New in Skedler Reports v4.4

We are excited to announce the release of Skedler Reports v4.4. As always, it’s packed with capabilities to help you meet compliance, audit, and snapshot reporting requirements.

Tabular PDF, Excel, CSV Reports from Kibana Data Table

If you are a security analyst or network admin looking for the list of unauthorized IP addresses connecting to your machines, Skedler can deliver the data to you in the form of PDF or Excel. With just a couple of clicks, schedule a PDF and/or Excel report that uses the Kibana data table as a source, sit back and have the reports delivered to your stakeholders automatically!

[video_embed video=”l-4JSKe9ee4″ parameters=”” mp4=”” ogv=”” placeholder=”” width=”700″ height=”400″]

Schedule Reports with Custom Time Ranges

If your customer needs a daily report that summarizes the top security events during the work hours of 9 AM – 5 PM, you can send it to them right away. Simply create a custom time range in Kibana and customize your dashboard to use this time range.  In Skedler, schedule a daily report with the dashboard as a data source and you’re all set!

Here is the list of additional features in the new release:

  • You can use the latest features in Elastic Stack 7.3 and Grafana 6.3 and generate reports with Skedler.
  • Users do not need administrator privileges to configure Grafana as a data source in Skedler.

Go Ahead and Try it Out

Test out the data table reports with custom time ranges in an ELK 7.3 or Grafana 6.3 environment! Start now by doing the following:

  1. Download Skedler Reports
  2. Follow the simple steps in our documentation and start generating reports.

Skedler v4.1: Next Generation Reporting for Elasticsearch Kibana 7.0 and Grafana 6.1 is here

We are excited to announce that we have just released version 4.1 of Skedler Reports!  

[button title=”Download Skedler 4.1 Now” icon=”” icon_position=”” link=”https://www.skedler.com/download/” target=”_blank” color=”#800080″ font_color=”#000″ large=”0″ class=”v4download” download=”” onclick=””]

Self Service Reporting Solution for Elasticsearch Kibana 7.0 and Grafana 6.1

We understand that your stakeholders and customers need intuitive and flexible options to save time in receiving the data that matters to them, and we’ve achieved exactly that with the release of Skedler 4.1. The newly enhanced UI offers a delightful user experience for creating and scheduling reports from your Elasticsearch Kibana 7.0 and Grafana 6.1.

[video_embed video=”4flSLj5q1yk” parameters=”” mp4=”” ogv=”” placeholder=”” width=”700″ height=”400″]

Multi-Tenancy Capabilities

If you are a service provider, you need a simple and automated way to provide different groups of users (i.e. “tenants”) with access to different sets of data. Skedler 4.1’s powerful and secure multi-tenancy capabilities will now allow you to send reports to your customers from your multi-tenant analytics application within minutes.  Supported with Search Guard, Open Distro & X-Pack.

Intuitive and Mobile Ready Reports

Skedler 4.1 will now allow you to produce high-resolution HTML reports from Elasticsearch Kibana and Grafana that make it easy and convenient for your end users to access critical data through their mobile devices and email clients. No more cumbersome and large PDF attachments.

[video_embed video=”soFITSdyDdE” parameters=”” mp4=”” ogv=”” placeholder=”” width=”700″ height=”400″]

The latest release also includes:

  • Support for the latest and greatest version of Elastic Stack and Grafana. Skedler 4.1 supports the following versions:
    • Elastic stack 6.7 and 7.0
    • Grafana 6.1.x
    • Open Distro for Elasticsearch 6.7 and 7.0

Please continue to send us feedback for what new capabilities you’d like to see in the future by reaching out to us at [email protected]

Webinar: Save Time and Money With Automated Reports & Alerts

How do you stay up to date on the critical events in your log analytics platform? Do you spend tens of thousands of dollars and countless hours to create reports and alerts from your Elastic Stack or Grafana application?

Whatever critical scenario arises, receiving the right information at the right time can ultimately be the difference between success and failure. Being constantly aware of every situation, whether it involves business partners, operations, customers, or employees, is therefore crucial. The faster a possible issue is identified, the faster it can be solved.

Benefits of Automation

Join us in the upcoming webinar on Tuesday, December 18th, 2018 @10AM PST to learn how Skedler, which installs in minutes, can help you save time & money with automated reports and alerts for Elastic Stack & Grafana.

Watch Our Webinar Here

You’ll learn how to quickly add reporting and alerting for Elastic Stack and Grafana while seeing how Skedler can provide a flexible framework to meet your complex monitoring requirements. Be ready with your questions and we’ll be more than happy to discuss them in the webinar Q&A session.

Watch Our Webinar Here

Graph Source: https://www.statista.com/chart/10659/risks-and-advantages-to-automation-at-work/

Skedler Update: Version 3.9 Released

Here’s everything you need to know about the new Skedler v3.9. Download the update now to take advantage of its new features for both Skedler Reports and Alerts.

What’s New With Skedler Reports v3.9

  • Support for:
    • ReadOnlyRest Elasticsearch/Kibana Security Plugin.
    • Chromium web browser for Skedler report generation.
    • Report bursting in Grafana reports if the Grafana dashboard is set with Template Variables.
    • Elasticsearch version 6.4.0 and Kibana version 6.4.0.
  • Ability to install Skedler Reports through Debian and RPM packages.
  • Simplified installation steps for Skedler Reports here.
  • Upgraded license module
    • NOTE: License reactivation is required when you upgrade Skedler Reports from an older version to the latest version. Refer to this URL to reactivate the Skedler Reports license key.
    • Deactivation of Skedler license key in UI

What’s New With Skedler Alerts v3.9

  • Support for:
    • Installing Skedler Alerts via Debian and RPM packages.
    • GET method type in Webhook.
    • Elasticsearch 6.4.0.
  • Simplified installation steps for Skedler Alerts. Refer to this URL for installation guides.
  • Upgraded license module:
    • NOTE: License reactivation is required when you upgrade Skedler Alerts from an older version to the latest version. Refer to this URL to reactivate the Skedler Alerts license key.
  • Deactivation of Skedler Alerts license key in UI

Get Skedler Reports

Download Skedler Reports

Get Skedler Alerts

Download Skedler Alerts

How to Extract Business Insights from Audio Using AWS Transcribe, AWS Comprehend and Elasticsearch – Part 2 of 2

In the previous post, we presented a system architecture to convert audio and voice into written text with AWS Transcribe, extract useful information for quick understanding of content with AWS Comprehend, index this information in Elasticsearch 6.2 for fast search and visualize the data with Kibana 6.2.

In this post, we are going to see how to implement the previously described architecture. The main steps performed in the process are:

  1. Configure S3 Event Notification
  2. Consume messages from Amazon SQS queue
  3. Convert the recording to text with AWS Transcribe
  4. Entities/key phrases/sentiment detection using AWS Comprehend
  5. Index to Elasticsearch 6.2
  6. Search in Elasticsearch by entities/sentiment/key phrases/customer
  7. Visualize, report and monitor with Kibana dashboards
  8. Use Skedler and Alerts for reporting, monitoring and alerting

1. Configure S3 Event Notification

When a new recording has been uploaded to the S3 bucket, a message will be sent to an Amazon SQS queue.

You can read more information on how to configure the S3 Bucket and read the queue programmatically here: Configuring Amazon S3 Event Notifications.

This is how a message notification from S3 looks. The information we need is the object key and the bucket name.
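A trimmed sketch of such an event (bucket and key values are illustrative):

```json
{
  "Records": [
    {
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "audioarchive-bucket" },
        "object": { "key": "customer42/recording-001.mp3" }
      }
    }
  ]
}
```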

2. Consume messages from Amazon SQS queue

Now that the S3 bucket has been configured, a notification will be sent to the SQS queue when a recording is uploaded to the bucket. We are going to build a consumer that will perform the following operations:

  • Start a new AWS Transcribe transcription job
  • Check the status of the job
  • When the job is done, perform text analysis with AWS Comprehend
  • Index the results to Elasticsearch

With the code below, you can read the messages from an SQS queue, fetch the bucket and key (used in S3) of the uploaded document, and use them to invoke AWS Transcribe for the speech-to-text task.
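A minimal Boto3 sketch (the queue URL is a placeholder):

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/recordings-queue"  # placeholder

def poll_s3_events():
    """Read S3 event notifications from the SQS queue and yield (bucket, key) pairs."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for message in response.get("Messages", []):
        body = json.loads(message["Body"])
        for record in body.get("Records", []):
            yield record["s3"]["bucket"]["name"], record["s3"]["object"]["key"]
        # Remove the message from the queue once its records have been handled
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```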

3. AWS Transcribe – Start Transcription Job

Once we have consumed an S3 message and have the URL of the newly uploaded document, we can start a new (asynchronous) transcription job to perform the speech-to-text task.

We are going to use the start_transcription_job method.

It takes a job name, the S3 URL, and the media format as parameters.

To use the AWS Transcribe API, make sure your AWS Python SDK (Boto3) is up to date.

Read more details here: Python Boto3 AWS Transcribe.
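A sketch of the call; note that the API also requires a LanguageCode in addition to the three parameters mentioned above:

```python
import boto3

transcribe = boto3.client("transcribe")

def start_transcription(job_name, bucket, key, media_format="mp3"):
    """Start an asynchronous AWS Transcribe job for a recording stored in S3."""
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": "https://s3.amazonaws.com/{}/{}".format(bucket, key)},
        MediaFormat=media_format,
        LanguageCode="en-US",  # adjust to the language of your recordings
    )
```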

3a. AWS Transcribe – Check Job Status

Due to the asynchronous nature of the transcription job (it could take a while depending on the length and complexity of your recordings), we need to check the job status.

Once the status is “COMPLETED”, we can retrieve the result of the job (the text converted from the recording).
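A polling sketch; the transcript itself is a JSON file fetched from the returned TranscriptFileUri (transcribe is the Boto3 client created above):

```python
import json
import time
import urllib.request

def get_transcript(job_name, poll_seconds=30):
    """Wait for the transcription job to finish and return the transcribed text."""
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        status = job["TranscriptionJob"]["TranscriptionJobStatus"]
        if status in ("COMPLETED", "FAILED"):
            break
        time.sleep(poll_seconds)
    if status == "FAILED":
        raise RuntimeError("Transcription job {} failed".format(job_name))
    uri = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
    with urllib.request.urlopen(uri) as response:
        result = json.loads(response.read().decode("utf-8"))
    return result["results"]["transcripts"][0]["transcript"]
```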

Here’s how the output looks:
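A trimmed sketch of the transcript JSON (the values are illustrative):

```json
{
  "jobName": "recording-001",
  "status": "COMPLETED",
  "results": {
    "transcripts": [
      { "transcript": "Hello, I am calling about the order I placed last week..." }
    ]
  }
}
```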

4. AWS Comprehend – Text Analysis

We have converted our recording to text. Now, we can run the text analysis using AWS Comprehend. The analysis will extract the following elements from the text:

  • Sentiment
  • Entities
  • Key phrases

Read more details here: Python Boto3 AWS Comprehend.
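A sketch of the three detection calls with Boto3:

```python
import boto3

comprehend = boto3.client("comprehend")

def analyze_text(text, language="en"):
    """Extract sentiment, entities, and key phrases from the transcribed text."""
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode=language)
    entities = comprehend.detect_entities(Text=text, LanguageCode=language)
    phrases = comprehend.detect_key_phrases(Text=text, LanguageCode=language)
    return {
        "sentiment": sentiment["Sentiment"],  # POSITIVE, NEGATIVE, NEUTRAL, or MIXED
        "entities": sorted({e["Text"] for e in entities["Entities"]}),
        "keyPhrases": sorted({p["Text"] for p in phrases["KeyPhrases"]}),
    }
```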

5. Index to Elasticsearch

Given a recording, we now have a set of elements that characterize it. Now, we want to index this information into Elasticsearch 6.2. I created a new index called audioarchive and a new type called recording.

The recording type we are going to create will have the following properties:

  • customer id: the ID of the customer who submitted the recording (a substring of the S3 key)
  • entities: the list of entities detected by AWS Comprehend
  • key phrases: the list of key phrases detected by AWS Comprehend
  • sentiment: the sentiment of the document detected by AWS Comprehend
  • s3Location: link to the document in the S3 bucket

Create the new index:
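A sketch with the official elasticsearch-py client:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])
es.indices.create(index="audioarchive")
```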

Add the new mapping:
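The exact field names are assumptions (they follow the properties listed above); keyword fields keep the values unanalyzed so we can aggregate on them:

```python
mapping = {
    "properties": {
        "customerId": {"type": "keyword"},
        "entities": {"type": "keyword"},
        "keyPhrases": {"type": "keyword"},
        "sentiment": {"type": "keyword"},
        "s3Location": {"type": "keyword"},
    }
}
es.indices.put_mapping(index="audioarchive", doc_type="recording", body=mapping)
```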

We can now index the new document:
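Here, analysis is the dictionary returned by the analyze_text sketch above, and the sample values are illustrative:

```python
doc = {
    "customerId": "customer42",  # parsed from the S3 object key
    "entities": analysis["entities"],
    "keyPhrases": analysis["keyPhrases"],
    "sentiment": analysis["sentiment"],
    "s3Location": "s3://audioarchive-bucket/customer42/recording-001.mp3",
}
es.index(index="audioarchive", doc_type="recording", body=doc)
```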

6. Search in Elasticsearch by entities, sentiment, key phrases or customer

Now that we indexed the data in Elasticsearch, we can perform some queries to extract business insights from the recordings.

Examples:

Number of positive recordings that contain the “feedback” key phrase, by customer.

Number of recordings by sentiment.

What are the main key phrases in the negative recordings?
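Sketches of the last two queries with the same client (field names match the assumed mapping above):

```python
# Number of recordings by sentiment
by_sentiment = es.search(index="audioarchive", body={
    "size": 0,
    "aggs": {"by_sentiment": {"terms": {"field": "sentiment"}}},
})

# Main key phrases in the negative recordings
negative_phrases = es.search(index="audioarchive", body={
    "size": 0,
    "query": {"term": {"sentiment": "NEGATIVE"}},
    "aggs": {"top_phrases": {"terms": {"field": "keyPhrases", "size": 20}}},
})
```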

7. Visualize, Report, and Monitor with Kibana dashboards and search

With Kibana, you can create a set of visualizations/dashboards to search for recordings by customer or entities and to monitor index metrics (like the number of positive recordings, the number of recordings by customer, and the most common entities/key phrases in the recordings).

Examples of Kibana dashboards:

Percentage of documents by sentiment, percentage of positive feedback and key phrases:

kibana report dashboard

Number of recordings by customers, and sentiment by customers:

kibana report dashboard

Most common entities and heat map sentiment-entities:

kibana report

8. Use Skedler Reports and Alerts to easily monitor data

Using Skedler, an easy-to-use report scheduling and distribution application for Elasticsearch-Kibana-Grafana, you can centrally schedule and distribute custom reports from Kibana dashboards and saved searches as hourly/daily/weekly/monthly PDF, XLS, or PNG reports to various stakeholders. If you want to read more about it: Skedler Overview.

[video_embed video=”APEOKhsgIbo” parameters=”” mp4=”” ogv=”” placeholder=”” width=”700″ height=”400″]

If you want to get notified when something happens in your index, for example, when a certain entity is detected or the number of negative recordings per customer reaches a certain value, you can use Skedler Alerts. It simplifies how you create and manage alert rules for Elasticsearch and provides a flexible approach to notifications (it supports multiple channels, from email to Slack and webhooks).

Conclusion

In this post, we have seen how to use Elasticsearch as the search engine for customer recordings. We used the speech-to-text power of AWS Transcribe to convert our recordings to text and then AWS Comprehend to extract semantic information from the text. We then used Kibana to aggregate the data and create useful visualizations and dashboards, and finally scheduled and distributed custom reports from the Kibana dashboards using Skedler Reports.

Environment configurations:

  • Elasticsearch and Kibana 6.2
  • Python 3.6.3 and AWS SDK Boto3 1.6.3
  • Ubuntu 16.04.3 LTS
  • Skedler Reports & Alerts

Extract business insights from audio using AWS Transcribe, AWS Comprehend and Elasticsearch – Part 1

Many businesses struggle to gain actionable insights from customer recordings because they are locked in voice and audio files that can’t be analyzed. They have a gold mine of potential information from product feedback, customer service recordings and more, but it’s seemingly locked in a black box.

Until recently, transcribing audio files to text has been time-consuming or inaccurate. Speech to text is the process of converting speech input into digital text, based on speech recognition. The best solutions were either not accurate enough, too expensive to scale, or didn’t play well with legacy analysis tools. With Amazon’s introduction of AWS Transcribe, that has changed.

In this two-part blog post, we are going to present a system architecture to convert audio and voice into written text with AWS Transcribe, extract useful information for quick understanding of content with AWS Comprehend, index this information in Elasticsearch 6.2 for fast search and visualize the data with Kibana 6.2.  In Part I, you can learn about the key components, architecture, and common use cases.  In Part II, you can learn how to implement this architecture.

We are going to analyze some customer recordings (complaints, product feedback, customer support) to extract useful information and answer the following questions:

  • How many positive recordings do I have?
  • How many customers are complaining (negative feedback) about my products?
  • What is the sentiment about my product?
  • Which entities/key phrases are the most common in my recordings?

The components that we are going to use are the following:

  • AWS S3 bucket
  • AWS Transcribe
  • AWS Comprehend
  • Elasticsearch 6.2
  • Kibana 6.2
  • Skedler Reports and Alerts

System architecture:

This architecture is useful when you want to get insights from a set of audio/voice recordings. You will be able to convert your recordings to text, extract semantic details from the text, perform fast searches/aggregations on the data, and visualize and report on it.

Examples of common applications are:

  • transcription of customer service calls
  • generation of subtitles on audio and video content
  • conversion of audio file (for example podcast) to text
  • search for keywords or inappropriate words within an audio file

AWS Transcribe

At the re:Invent 2017 conference, Amazon Web Services presented Amazon Transcribe, a new machine learning service for natural language processing.

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech to text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.

Instead of AWS Transcribe, you can use similar services to perform speech to text analysis, like: Azure Bing Speech API or Google Cloud Speech API.

> The service is still in preview, watch the launch video here: AWS re:Invent 2017: Introducing Amazon Transcribe.

> You can read more about it here: Amazon Transcribe – Accurate Speech To Text At Scale.

AWS Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. Amazon Comprehend identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is, and automatically organizes a collection of text files by topic. – AWS Service Page

AWS Comprehend and Elasticsearch

It analyzes text and tells you what it finds, starting with the language, from Afrikaans to Yoruba, with 98 more in between. It can identify different types of entities (people, places, brands, products, and so forth), key phrases, sentiment (positive, negative, mixed, or neutral), and extract key phrases, all from a text in English or Spanish. Finally, Comprehend’s topic modeling service extracts topics from large sets of documents for analysis or topic-based grouping. – Jeff Barr – Amazon Comprehend – Continuously Trained Natural Language Processing.

Instead of AWS Comprehend, you can use similar services to perform Natural Language Processing, like: Google Cloud Platform – Natural Language API or Microsoft Azure – Text Analytics API.
I prefer to use AWS Comprehend because the service constantly learns and improves from a variety of information sources, including Amazon.com product descriptions and consumer reviews – one of the largest natural language data sets in the world. This means it will keep pace with the evolution of language and it is fully integrated with AWS S3 and AWS Glue (so you can load documents and texts from various AWS data stores such as Amazon Redshift, Amazon RDS, Amazon DynamoDB, etc.).

Once you have a text file of the audio recording, you enter it into Amazon Comprehend for analysis of the sentiment, tone, and other insights.

> Here you can find an AWS Comprehend use case: How to Combine Text Analytics and Search using AWS Comprehend and Elasticsearch 6.0.

Conclusion

In this post we have seen a system architecture that performs the following:

  • Speech to text task – AWS Transcribe
  • Text analysis – AWS Comprehend
  • Index and fast search – Elasticsearch
  • Dashboard visualization – Kibana
  • Automatic Reporting and Alerting – Skedler Reports and Alerts

Amazon Transcribe and Comprehend can be powerful tools in helping you unlock the potential insights from voice and video recordings that were previously too costly to access. Having these insights makes it easier to understand trends in issues and consumer behavior, brand and product sentiment, Net Promoter Score, as well as product ideas and suggestions, and more.

In the next post (Part 2 of 2), you can see how to implement the described architecture.
