Navigating the messy world of (too many) CVEs
Introduction to CVE
CVE (Common Vulnerabilities and Exposures) is a catalog of publicly disclosed security flaws. Every vulnerability is assigned a unique CVE identifier, and the number of CVEs reported has trended steadily upward since the program's launch in 1999.
The CVE program is maintained by the MITRE Corporation and is free to use. Importantly, CVE is not an actionable vulnerability database: it is a standardized dictionary of publicly known vulnerabilities and exposures, created to link vulnerability databases and other security capabilities and to make security tools and services easier to compare. A CVE entry is deliberately brief. It contains only the standard identifier, a status indicator, a short description, and references to related vulnerability reports and advisories; it carries no risk rating, impact assessment, fix information, or detailed technical analysis.
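To make the shape of an entry concrete, here is a minimal sketch in Python. The record below is entirely made up (the identifier, description, and URL are placeholders), but its fields mirror what the text above says a CVE entry holds: an identifier, a status, a brief description, and references, and nothing about risk or fixes.

```python
# A made-up record illustrating the fields a CVE entry carries:
# identifier, status, brief description, and references -- note the
# absence of any risk score, impact rating, or fix information.
cve_record = {
    "id": "CVE-2024-0001",  # hypothetical identifier, not a real CVE
    "status": "PUBLISHED",
    "description": "Example buffer overflow in a fictional parser.",
    "references": [
        "https://example.com/advisory-123",  # placeholder advisory URL
    ],
}

def summarize(record: dict) -> str:
    """Return a one-line summary of a CVE-style record."""
    return f'{record["id"]}: {record["description"]}'

print(summarize(cve_record))
```

Anything beyond this (severity scores, patched versions, exploitability) comes from downstream databases and scanners that build on the CVE identifier, not from CVE itself.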
Starting off with some numbers: a survey aimed at better understanding the cloud-native security posture of various end users revealed some surprising statistics. Although roughly 85% of respondents reported image scanning as one of the more effective security-hardening measures, roughly 60% of them also flagged vulnerability scanning as a concern area in cloud-native security. This largely stems from the fact that the results of such scans, i.e., CVEs, even though documented, are not easily comprehensible to the people on the receiving end of these reports. The results typically traverse a long chain before being assessed for severity and impact, ultimately delaying a typical go-live activity. Speaking about zero trust, and why it is not the same as zero CVEs, Pushkar dives deep into demystifying some CVEs.
Not just an end user problem?
Shifting our focus for just a bit to the wide range of open-source tools and technologies: they are obviously encumbered by these CVEs too. To illustrate this, Pushkar takes the example of the Kubernetes project and the effort it expends managing and maintaining multiple images. KEP-1729 aimed to reduce churn across the total number of image versions by rebasing images from the standard Debian base onto distroless/static images. Despite these efforts, the kube-proxy component still requires iptables and therefore continues to use the Debian upstream image; as a result, it is also one of the most frequently updated images in the project. Even with this seemingly imperfect solution, the Kubernetes project reaped substantial benefits, as illustrated in this slide!
How is this relevant?
Circling back to our original fictitious premise, one way to avoid the mammoth effort of building, maintaining, and updating images would be to go down the distroless route. However, when that is not an option, a few ways to alleviate (if not eliminate) the problem would be to:
- Focus on fixable CVEs
- Understand whether the vulnerabilities were in the code execution path or not
- Develop better automation towards rebuilding, shipping, & testing of updated base images
- Create a list of images that require special attention
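The first item, focusing on fixable CVEs, can be sketched as a simple triage filter over a scanner report. The field names below (`VulnerabilityID`, `Severity`, `FixedVersion`) follow the shape of Trivy's JSON output as an assumption; the inline sample report is fabricated for illustration, and you would adapt the keys to whatever scanner you actually run.

```python
# Sketch: triage a vulnerability-scanner report by keeping only findings
# that have an available fix and meet a severity bar. Field names assume
# a Trivy-style JSON report; the sample data below is made up.
sample_report = [  # inline stand-in for json.load(open("report.json"))
    {"VulnerabilityID": "CVE-2023-1111", "Severity": "HIGH", "FixedVersion": "1.2.3"},
    {"VulnerabilityID": "CVE-2023-2222", "Severity": "LOW", "FixedVersion": ""},
    {"VulnerabilityID": "CVE-2023-3333", "Severity": "CRITICAL", "FixedVersion": "4.5.6"},
]

def fixable(findings, severities=("HIGH", "CRITICAL")):
    """Keep findings that have a fixed version and meet the severity bar."""
    return [
        f for f in findings
        if f.get("FixedVersion") and f.get("Severity") in severities
    ]

for f in fixable(sample_report):
    print(f'{f["VulnerabilityID"]} -> upgrade to {f["FixedVersion"]}')
```

A filter like this turns a wall of scanner output into a short, actionable upgrade list, which is exactly the kind of prioritization the bullet points above argue for.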
Even when everything’s said and done, vulnerability scanners have their own limitations. Therefore, along with learning about those limitations, it is essential that we understand the impact of the vulnerabilities reported while simultaneously working toward their remediation.
Remember, it is only through modeling threats and assessing the associated risk that we can manage to be secure despite CVEs. Or as Pushkar would say, manage vulnerabilities by being vulnerable!