Navigating the messy world of (too many) CVEs

Introduction to CVE

CVE (Common Vulnerabilities and Exposures) is a catalog of publicly disclosed security issues. Every vulnerability is uniquely identified by a CVE number, and the number of CVEs reported each year has trended steadily upward since 1999.

CVE is maintained by the MITRE Corporation. It is a free service that identifies and lists publicly known software and firmware vulnerabilities, and its entries are deliberately brief.

Importantly, CVE is not an actionable vulnerability database. It is a standardized dictionary of publicly known vulnerabilities and exposures, created to link vulnerability databases and other capabilities together and to make security tools and services easier to compare. A CVE entry does not contain risk, impact, fix, or detailed technical information; it contains only the standard identifier with a status indicator, a brief description, and references to related vulnerability reports and advisories.

At this year’s KubeCon + CloudNativeCon North America, Pushkar Joglekar, a senior security engineer at VMware Tanzu, spoke about how best to navigate this flood of CVEs, using a fictitious example.

Communication breakdown?

Starting off with some numbers: a survey on cloud-native security posture among various end users revealed some surprising statistics. Roughly 85% of respondents reported image scanning as one of the more effective security-hardening measures, yet roughly 60% of them also named vulnerability scanning as one of their main areas of concern in cloud-native security. Much of this stems from the fact that the results of such scans, i.e. CVEs, even though documented, are not easily comprehensible to the people on the receiving end of the reports. These results typically traverse a long chain before being assessed for severity and impact, ultimately delaying go-live timelines. Speaking about zero trust and why it is not equal to zero CVEs, Pushkar dove deep into demystifying some CVEs.

Not just an end user problem?

Shifting our focus for a bit to the wide range of open source tools and technologies: they are obviously encumbered by these CVEs too. To illustrate this, Pushkar took the example of the Kubernetes project and the effort it expends in managing and maintaining multiple images. KEP-1729 aimed to reduce the churn on the total number of image versions by rebasing the images from the standard Debian base onto distroless/static images. Despite these efforts, the kube-proxy component still requires iptables and therefore continues to use the Debian upstream image; as a result, it is also one of the most frequently updated images in the project. Even with this seemingly imperfect solution, the benefits reaped by the Kubernetes project were huge, as illustrated in his slides.

How is this relevant?

Circling back to our original fictitious premise, one way to avoid the mammoth effort of building, maintaining, and updating images is to go down the distroless route. When that is not an option, a few ways to alleviate (if not eliminate) the problem are to:

  • Focus on fixable CVEs
  • Understand whether the vulnerabilities were in the code execution path or not
  • Develop better automation towards rebuilding, shipping, & testing of updated base images
  • Create a list of images that require special attention
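The first two bullets can be made concrete with a scanner that distinguishes fixable from unfixable findings. As a sketch using the open-source Trivy scanner (the image name here is purely a placeholder, not anything from the talk):

```shell
# Report only vulnerabilities that already have a fixed package version,
# so effort goes to CVEs that are actually actionable today.
trivy image --ignore-unfixed --severity HIGH,CRITICAL registry.example.com/myapp:1.0.0
```

Findings that survive this filter are the ones worth tracing through the code execution path next.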

Even when everything is said and done, vulnerability scanners have their own limitations. Along with learning about these limitations, it is essential that we understand the impact of the reported vulnerabilities while simultaneously working toward their remediation.

Remember, it is only through modeling threats and assessing the associated risk that we can manage to be secure despite CVEs. Or as Pushkar would say, manage vulnerabilities by being vulnerable!


An easy way to add alerting to Elasticsearch on Kubernetes with Skedler Alerts

There is a simple and effective way to add alerting for your Elasticsearch applications that are deployed to Kubernetes. Skedler Alerts offers no-code alerting for Elasticsearch and reduces the time, effort, and cost of monitoring your machine data for anomalies. In this article, you are going to learn how to deploy Skedler Alerts for Elasticsearch applications to Kubernetes with ease.

What is Kubernetes?

For those that haven’t ventured into container orchestration, you’re probably wondering what Kubernetes is. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Kubernetes (“k8s” for short) was a project originally started and designed at Google, and is heavily influenced by Google’s large-scale cluster management system, Borg. More simply, k8s gives you a platform for managing and running your applications at scale across multiple physical (or virtual) machines.

Kubernetes offers the following benefits:

  • Workload Scalability
  • High Availability
  • Designed for deployment

Deploying Skedler Alerts to Kubernetes

If you haven’t already downloaded Skedler Alerts, please download it from www.skedler.com. Review the documentation to get started.

Creating a K8s ConfigMap

Kubernetes ConfigMaps allow a containerized application to remain portable without hard-coding its configuration. Users and system components can store configuration data in a ConfigMap. For Skedler Alerts, a ConfigMap can hold settings such as the datastore connection details, port number, server information, file locations, and the log directory.

If Skedler Alerts defaults are not enough, one may want to customize alertconfig.yml through a ConfigMap. Please refer to Alertconfig.yml Configuration for all available attributes.

1. Create a file called alerts-configmap.yaml in your project directory and paste the following:

alerts-configmap.yaml:
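The manifest from the original post is not reproduced here, so the following is only an illustrative sketch: the structure is a standard ConfigMap embedding an alertconfig.yml, but every key and value below is a hypothetical placeholder, not Skedler’s documented schema.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: skedler-alerts-config
data:
  # Hypothetical alertconfig.yml contents; replace these with the attributes
  # documented in the Skedler Alerts configuration reference.
  alertconfig.yml: |
    port: 3001            # placeholder port for the Alerts service
    datastore: elasticsearch
    elasticsearch_url: "http://elasticsearch:9200"
    log_dir: /var/log/skedler
```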

2. To deploy your ConfigMap, execute the following command:
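Assuming the manifest was saved as alerts-configmap.yaml, the standard command is:

```shell
kubectl apply -f alerts-configmap.yaml
```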

Creating Deployment and Service

To deploy Skedler Alerts, we’re going to use a Kubernetes Deployment named skedler-deployment. A Deployment wraps the functionality of Pods and ReplicaSets so that you can update your application in a controlled way. We also need a way to expose the Skedler Alerts application to traffic from outside the cluster. To do this, we’re going to add a Service to the alerts-deployment.yaml file, opening up a NodePort directly to our application on port 30001.

1. Create a file called alerts-deployment.yaml in your project directory and paste the following:

alerts-deployment.yaml:
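The original manifest is not reproduced here, so the sketch below is illustrative only: the Deployment/Service split and the 30001 NodePort come from the article, while the image name, container port, mount path, Service name, and the assumption that the ConfigMap from the previous step is named skedler-alerts-config are all hypothetical placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skedler-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skedler-alerts
  template:
    metadata:
      labels:
        app: skedler-alerts
    spec:
      containers:
        - name: skedler-alerts
          image: skedler/alerts:latest     # hypothetical image name
          ports:
            - containerPort: 3001          # placeholder application port
          volumeMounts:
            - name: config
              mountPath: /opt/skedler/config   # placeholder path
      volumes:
        - name: config
          configMap:
            name: skedler-alerts-config    # assumed ConfigMap name
---
apiVersion: v1
kind: Service
metadata:
  name: skedler-alerts-service   # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: skedler-alerts
  ports:
    - port: 3001
      targetPort: 3001
      nodePort: 30001            # the port referenced in the article
```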

2. For deployment, execute the following command:
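Assuming the manifest was saved as alerts-deployment.yaml, the standard command is:

```shell
kubectl apply -f alerts-deployment.yaml
```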

3. To get your Deployment with kubectl, execute the following command:
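The standard command is:

```shell
kubectl get deployments
```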

4. We can get the Service details by executing the following command:
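The standard command is:

```shell
kubectl get services
```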

Skedler Alerts will now be exposed on port 30001 of each cluster node.

Accessing Skedler Alerts

Skedler Alerts can be accessed at the following URL: http://<hostIP>:30001

To learn more about creating Skedler Alerts, visit the Skedler documentation site.

Summary

This blog was a very quick overview of how to get Skedler Alerts for an Elasticsearch application up and running on Kubernetes with the least amount of configuration possible. Kubernetes is an incredibly powerful platform with many more features than we used today. We hope that this article gave you a head start and saved you time.

An easy way to add reporting to Elasticsearch Kibana 7.x and Grafana 6.x on Kubernetes with Skedler Reports

There is a simple and effective way to add reporting for your Elasticsearch Kibana 7.x (including Open Distro for Elasticsearch) or Grafana 6.x applications that are deployed to Kubernetes. In this part of the article, you are going to learn how to deploy Skedler Reports for Elasticsearch Kibana and Grafana applications to Kubernetes with ease.


Deploying Skedler Reports to Kubernetes

If you haven’t already downloaded Skedler Reports, please download it from www.skedler.com. Review the documentation to get started.

Creating a K8s ConfigMap

Kubernetes ConfigMaps allow a containerized application to remain portable without hard-coding its configuration. Users and system components can store configuration data in a ConfigMap. For Skedler Reports, a ConfigMap can hold settings such as the datastore connection details, port number, server information, file locations, and the log directory.

If Skedler Reports defaults are not enough, one may want to customize reporting.yml through a ConfigMap. Please refer to Reporting.yml and ReportEngineOptions Configuration for all available attributes.

1. Create a file called skedler-configmap.yaml in your project directory and paste the following:

skedler-configmap.yaml:
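The manifest from the original post is not reproduced here, so the following is only an illustrative sketch: a standard ConfigMap embedding a reporting.yml, where every key and value is a hypothetical placeholder rather than Skedler’s documented schema.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: skedler-reports-config
data:
  # Hypothetical reporting.yml contents; replace these with the attributes
  # documented in the Skedler Reports configuration reference.
  reporting.yml: |
    port: 3000            # placeholder port for the Reports service
    datastore: elasticsearch
    elasticsearch_url: "http://elasticsearch:9200"
    log_dir: /var/log/skedler
```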

2. To deploy your ConfigMap, execute the following command:
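Assuming the manifest was saved as skedler-configmap.yaml, the standard command is:

```shell
kubectl apply -f skedler-configmap.yaml
```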

Creating Deployment and Service

To deploy Skedler Reports, we’re going to use a Kubernetes Deployment named skedler-deployment. A Deployment wraps the functionality of Pods and ReplicaSets so that you can update your application in a controlled way. We also need a way to expose the Skedler Reports application to traffic from outside the cluster. To do this, we’re going to add a Service to the skedler-deployment.yaml file, opening up a NodePort directly to our application on port 30000.

1. Create a file called skedler-deployment.yaml in your project directory and paste the following:

skedler-deployment.yaml:
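The original manifest is not reproduced here, so the sketch below is illustrative only: the Deployment/Service split and the 30000 NodePort come from the article, while the image name, container port, mount path, Service name, and the assumption that the ConfigMap from the previous step is named skedler-reports-config are all hypothetical placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skedler-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skedler-reports
  template:
    metadata:
      labels:
        app: skedler-reports
    spec:
      containers:
        - name: skedler-reports
          image: skedler/reports:latest    # hypothetical image name
          ports:
            - containerPort: 3000          # placeholder application port
          volumeMounts:
            - name: config
              mountPath: /opt/skedler/config   # placeholder path
      volumes:
        - name: config
          configMap:
            name: skedler-reports-config   # assumed ConfigMap name
---
apiVersion: v1
kind: Service
metadata:
  name: skedler-reports-service  # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: skedler-reports
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30000            # the port referenced in the article
```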

2. For deployment, execute the following command:
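Assuming the manifest was saved as skedler-deployment.yaml, the standard command is:

```shell
kubectl apply -f skedler-deployment.yaml
```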

3. To get your Deployment with kubectl, execute the following command:
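The standard command is:

```shell
kubectl get deployments
```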

4. We can get the Service details by executing the following command:
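The standard command is:

```shell
kubectl get services
```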

Skedler Reports will now be exposed on port 30000 of each cluster node.

Accessing Skedler

Skedler Reports can be accessed at the following URL: http://<hostIP>:30000

To learn more about creating reports, visit Skedler documentation site.

Summary

This blog was a very quick overview of how to get Skedler Reports for an Elasticsearch Kibana 7.x or Grafana 6.x application up and running on Kubernetes with the least amount of configuration possible. Kubernetes is an incredibly powerful platform with many more features than we used today. We hope that this article gave you a head start and saved you time.
