So what is a zero-day vulnerability? Simply put, it's a vulnerability that hasn't yet been publicly disclosed or discovered. Day zero is the day it is first detected in the wild, or publicly released.
Before we go any further, a point on nomenclature: a vulnerability is a security problem, which is taken advantage of by an exploit. A single exploit may form part of an exploit chain that is designed to achieve a certain result, such as deploying a payload. The payload does the real damage, whether it’s stealing keys and data, attacking other devices in the case of a worm, or denying access in the case of ransomware.
Zero-day vulnerabilities get a lot of coverage in the tech press, and sometimes in the regular media. In general, the security industry seems to devote much more interest to offensive techniques than defensive; just look at the balance of talks at a typical conference.
You could argue that they garner too much coverage, probably because they are seen as particularly exciting. But the coverage often omits important factors, such as whether a proof of concept is available, how much technical work is required to actually exploit the vulnerability, and what exploiting it actually allows you to do. Typically, exploiting a particular zero-day is only the first step of an attack. This is particularly true of mobile operating systems, where attacks generally require a whole chain of exploits to do anything meaningful.
The security community sees a constant arms race between attack and defence, with progressive leaps forward on both sides. This includes defensive protections and their bypasses in many layers of hardware and software, including the processor, programming languages, frameworks and operating systems. For this reason, most interesting of all are entirely new classes of vulnerabilities, which necessitate a whole new defensive approach that isn’t necessarily easy or possible to patch.
Whilst zero-days are undoubtedly interesting, we should arguably be more concerned with the payload of an attack. The vulnerability itself will typically get patched soon, but that doesn't mean we're now secure: the question remains, what did the attacker actually do with that exploit?
From a defensive perspective there's lots of talk about defending against zero-days, especially from people trying to sell defensive software or products. But by definition there's nothing you can do to directly defend against zero-days, beyond the usual best-practice defence in depth.
Maybe it's the known vulnerabilities, the 1-days or n-days, that are more important. These are vulnerabilities that are known, but for which there's either no available patch or a patch that hasn't yet been universally applied, leaving a patch gap. It's these vulnerabilities that require a response: mitigations or increased monitoring before a patch is released, then prompt patching once one is available.
It is also where the rest of your security defences and practices come into play. For example, threat intelligence, because if we know the common tools and techniques of an attacker then we can look for what they typically do next, regardless of how they got in. It also underlines the value of defensive monitoring; whilst you can’t rely on detecting a zero-day being exploited, you may be able to spot the attacker moving across a network, attacking other systems, or performing any other behaviour that counts as anomalous.
If attackers are actually using zero-day exploits to come after you, take that as a compliment that all the easy stuff hasn’t worked.
Whilst most attackers would love a cupboard full of zero-day vulnerabilities, building one is beyond the capability or budget of all but the most well-funded and skilled organisations. Finding a new vulnerability in a targeted system can take weeks or months. Plus, as mentioned above, a single vulnerability is rarely enough on its own.
The other danger for an attacker is the more you use a given capability, the greater the chance that it will be discovered and fixed, in which case you’re back to vulnerability research. All the more reason to make use of n-day vulnerabilities and other existing capability when you can. There is also the risk that someone else found and is taking advantage of the same vulnerability, further increasing the chance of it being discovered.
Another aspect that is rarely discussed is how much work is needed to get an exploit working reliably, or to adapt it to a particular target device. This is particularly the case with mobile devices, which have many model- and manufacturer-specific features: a vulnerability discovered on an unlocked Android Pixel device won't necessarily work on the latest Samsung device.
Not all vulnerabilities are directly exploitable: some may just crash the target system, temporarily or permanently, which makes them perfect for a denial-of-service attack. The rest can be categorised in a number of ways, including by how they are caused. The Common Weakness Enumeration (CWE), used by MITRE's CVE vulnerability tracking scheme, enumerates the different kinds of, for example, software implementation errors that can lead to vulnerabilities.
At a higher level, they can be categorised by what they allow an attacker to do. Top of the pile are remote code execution (RCE) vulnerabilities. As the name implies, when exploited they allow a remote attacker to execute their own code within the targeted application. RCE vulnerabilities offer a perfect example of the battle between attack and defence: the introduction of DEP and ASLR led to the development of new exploitation techniques such as ROP, as well as to the next kind of vulnerability.
Randomising the layout of programs in memory was designed to make certain kinds of vulnerabilities harder to exploit, so it makes sense that a class of bug would emerge to bypass that protection: the information leak. Attackers exploit these to leak pointers or memory addresses, letting them understand the memory layout of the targeted system more easily so they can exploit it.
Finally, on most modern systems, which have multiple levels of users and abstraction, you'll typically need a privilege escalation vulnerability to go from the exploited user process to a higher-privileged account, or to escape from whatever stack of sandboxes, containers and virtual machines you're in, until you reach a level where you can do what you want.
Exploitation of modern platforms typically requires a chain of exploits. Putting the above together into a single example: you'll need code execution to run your attack code, and an information leak to locate the existing library code you want to reuse. Next, depending on what you've exploited, you might need to escalate privileges to gain higher-level access, or escape whatever sandbox or virtual machine the target application is running inside. At that point, you've pretty much got full control of the target system.
There are some other factors, not often considered, that can dramatically increase or decrease the efficacy of a single vulnerability. One factor that dramatically increases the danger a vulnerability poses is if the exploit is wormable, meaning it can spread without any user interaction. The best-known recent examples are wormable bugs in Windows remote-access services, such as BlueKeep in RDP and the more recent SMB vulnerabilities.
At the other end of the scale are laboratory vulnerabilities, which are often not nearly as bad as they sound. These are typically vulnerabilities in cryptographic protocols or hardware, or side channels. Whilst they're often very interesting, many are not practically exploitable, as they require specialist hardware or massive amounts of data that couldn't realistically be collected outside a laboratory environment.
For our latest research, and for links and comments on other research, follow our Lab on Twitter.