
On vulnerability, root cause, white-listing and compliance

Reading time ~4 min

Many years ago, when we first released ‘Setiri’, one of the controls
that we preached was website white-listing. As talk-back trojans
would connect back to arbitrary web servers on the Internet, we
argued that companies should create shortlists of the sites employees
are allowed to visit. This, we argued, was much more feasible than
trying to identify and block known ‘bad’ sites. Of course, there are
a number of other compelling reasons for implementing this kind of
white-listing, and of course nobody does it (even though I’ve seen
fairly good technical implementations of this concept).

In a recent Tenable podcast interview, Marcus Ranum makes the
same point with regard to anti-virus: instead of trying to list and
identify the many thousands of bad programs that could run on your
computer, wouldn’t it be simpler and wiser to list the very small
number of applications that are allowed to run on your computer, and
treat everything else as malicious? Ranum acknowledges in the
interview that some of his views might be a bit idealistic, but it’s
clear that the principle holds true across many different spheres of
security. It’s not a new idea, really.
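The default-deny idea can be sketched in a few lines of Python. The allowed hosts and the check itself are purely illustrative, not taken from any real product:

```python
# Default-deny: anything not explicitly allowed is treated as malicious.
# The site list below is a made-up example of a corporate shortlist.
ALLOWED_SITES = {"intranet.example.com", "mail.example.com"}

def is_request_allowed(host: str) -> bool:
    """White-list check: permit only hosts on the short approved list."""
    return host.lower() in ALLOWED_SITES

# A talk-back trojan connecting to an arbitrary server is denied by
# default, even though it appears on no black-list anywhere.
print(is_request_allowed("intranet.example.com"))  # True
print(is_request_allowed("evil-c2.example.net"))   # False
```

The point of the sketch is the shape of the rule: the allowed set is small and enumerable, while the denied set is everything else, which is exactly why it is easier to maintain than a list of ‘bad’ sites.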

A relatively new (I think) application of this concept is in
vulnerability scanning. We’ve always understood the point of
vulnerability scanning to be determining (either locally or remotely)
what weaknesses or vulnerabilities a given system might have. That is,
we’re scanning for ‘bad’ things. Scanning hundreds of thousands of
systems, as we do with our BroadView service, we’ve come to focus
increasingly on what we call ‘root cause analysis’. Our reasoning
here is that an entire class of vulnerabilities often shares a single
root cause. If we can identify the root cause, then it’s much more
efficient to scan for and remediate that than all the different
symptoms it causes. Patching is a case in point. Come on! This
problem is not rocket science, and yet in a vulnerability scan of a
sufficiently large network, we will find a high proportion of
machines that have vulnerabilities because they are not fully
patched. We need to understand the root cause of this, and it’s
typically that something is going wrong with the patch management
software: the agent is failing somewhere, the machine is firewalled,
the machine is not in the correct Windows domain, the machine has not
rebooted, and so on. It makes much more sense to scan for this small
number of root causes than for the very large number of possible symptoms.
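As a toy illustration of the idea (the field names and checks below are invented, not BroadView internals), a scanner can test a handful of root-cause conditions instead of thousands of individual patch checks:

```python
# Sketch: test a small set of root causes rather than every missing patch.
# All field names and conditions here are illustrative assumptions.

def root_cause_findings(machine: dict) -> list:
    """Return the root-cause problems detected on one machine record."""
    findings = []
    if not machine.get("patch_agent_running", False):
        findings.append("patch management agent not running")
    if machine.get("firewalled_from_patch_server", False):
        findings.append("machine cannot reach the patch server")
    if machine.get("domain") != "CORP":
        findings.append("machine not in the correct Windows domain")
    if machine.get("reboot_pending", False):
        findings.append("pending reboot blocking patch installation")
    return findings

host = {"patch_agent_running": False, "domain": "CORP", "reboot_pending": True}
print(root_cause_findings(host))
```

Four checks cover what would otherwise surface as hundreds of separate ‘missing patch’ findings on that host, which is the efficiency argument in miniature.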

The approach described above is more efficient than traditional
vulnerability scanning, and is actually a form of white-listing:
Instead of trying to search for an infinite number of bad things on a
system, we rather check that a small number of good things are
running right. Once you go this way, it quickly makes sense to take
the concept much further, and Tenable were quick to spot this with
Nessus’ compliance scanning checks. It’s relatively easy to create a
‘model’ system; one that is known to be fully patched, securely
configured and properly maintained, then compare every other system
with this one and consider anything that deviates from this to be
insecure. With this approach you can simultaneously increase accuracy
and scope: checking for an effectively infinite number of vulnerabilities
will (theoretically) yield no false positives. This kind of trade-off is
generally hard to achieve in security.
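The comparison with a ‘model’ system can be sketched as a simple diff of observed settings against a known-good baseline. The keys and values here are invented for illustration; a real compliance check would compare far richer state (packages, file hashes, registry keys):

```python
# Sketch: white-list (baseline) scanning. Compare a host's observed state
# against a known-good 'model' system; any deviation is flagged as insecure.
# Settings and values are made up for illustration.

BASELINE = {
    "ssh_root_login": "no",
    "os_patch_level": "2008-06",
    "telnet_service": "disabled",
}

def deviations(observed: dict) -> dict:
    """Return every setting that differs from, or is missing versus, the baseline."""
    return {
        key: observed.get(key, "<missing>")
        for key, expected in BASELINE.items()
        if observed.get(key) != expected
    }

host = {"ssh_root_login": "yes", "os_patch_level": "2008-06"}
print(deviations(host))
```

Note that the scanner never needs a list of vulnerabilities at all: anything not matching the model, including state it has never seen before, is flagged automatically.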

Once we make the paradigm shift from black-list to white-list
vulnerability scanning a whole new world of possibility opens up to
us. There are challenges with this approach, to be sure, but I
wouldn’t be surprised if many of them are rooted in black-list
thinking about white-list scanning. For example, there are entire
segments of our industry that focus on building black lists. Tenable
themselves do this with their commercial ‘live’ feeds, which
perpetually add more checks for the Nessus scanner. With white-list
scanning it would make more sense to maintain and sell ‘white-lists’
or baselines. For example, why not have a service whereby you
continuously update a secure base installation of a typical OS? A
vulnerability scanner could then fingerprint that base before
scanning and identifying systems that deviate from that base. The big
problem will lie with systems or subsystems where there is no base to
scan against. Web servers and web applications immediately spring to
mind, for example.