
Threat Modeling vs Information Classification


Over the last few years there has been a popular meme promoting information-centric security as a new paradigm to replace vulnerability-centric security. I’ve long struggled with the idea of information-centricity succeeding, and while replying to a post by Rob Bainbridge, I quickly jotted some of those problems down.

By way of pre-summary: I’m still sceptical of information-classification approaches (or information-led control implementations), as I feel they target a theoretically sensible idea, but not a practically sensible one.

Information gets stored in information containers (to borrow a phrase from OCTAVE) such as databases or file servers. Each container needs to inherit a classification based on the information it stores. That’s easy if it’s a single-purpose DB, but what about a SQL cluster (shared to reduce processor licensing costs) or even end-user machines? These have to be moved up the classification chain because they may store some sensitive information, even if they spend the majority of their time pushing not-very-sensitive information around. In the end, the hoped-for cost-saving-and-focus-inducing prioritisation doesn’t occur, and you end up having to deploy a significantly higher level of security to most systems. Potentially, you could radically re-engineer your business to segregate data into separate networks, as some PCI de-scoping approaches suggest, but, apart from being a difficult job, this tends to counter many of the business benefits of data and system integration that led to the cross-pollination in the first place.
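To make the shared-container problem concrete, here’s a minimal sketch (with entirely hypothetical containers and labels) of the usual inheritance rule: a container is classified at the level of the most sensitive data it holds. Once containers are shared, nearly everything inherits the top tier and the prioritisation collapses.

```python
# Minimal sketch (hypothetical data and labels) of why shared containers
# collapse classification-led prioritisation: each container inherits the
# highest classification of any data it stores.

CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

# Hypothetical mapping of containers to the classes of data they hold.
containers = {
    "hr_db": ["secret"],                              # single-purpose: easy
    "sql_cluster": ["public", "internal", "secret"],  # shared to save licences
    "file_server": ["internal", "confidential"],
    "user_laptop": ["public", "internal", "secret"],  # cached mail, exports...
}

def inherited_classification(data_classes):
    """A container is classified at the level of its most sensitive data."""
    return max(data_classes, key=CLASSIFICATION_RANK.get)

for name, data in containers.items():
    print(f"{name}: {inherited_classification(data)}")

# Almost everything lands at "secret", so the hoped-for prioritisation
# (secure a few systems heavily, the rest lightly) never materialises.
```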

Next up, I feel this fails to take cognisance of what we call “pivoting”: the escalation of privileges by moving from one system, or part of a system, to another. I’ve seen situations where the low-criticality network-monitoring box is what ends up handing out the domain administrator password. It had never been in scope for internal or external audits, none of its vulns showed up on an average scanner, it held no sensitive information, and so on. Rather, I think we need to look at physical, network and trust segregation between systems first, and then at data. It would be nice to go data-first, but DRM isn’t mature (read: simple and widespread) enough to provide us with those controls.
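One way to reason about pivoting is as reachability over trust relationships rather than per-system classification. The sketch below (hypothetical hosts and edges) finds the shortest pivot chain from the internet to a sensitive database; note that the “unimportant” monitoring box sits on it.

```python
# Sketch (entirely hypothetical hosts and edges) of modelling pivoting as
# reachability over trust relationships, rather than per-system classification.
from collections import deque

# Directed edges: "an attacker on A can reach or escalate to B",
# e.g. via shared credentials, management agents, or network access.
trust_edges = {
    "internet": ["monitoring_box"],           # low classification, exposed
    "monitoring_box": ["domain_controller"],  # stored admin creds for polling
    "domain_controller": ["hr_db", "trading_system"],
    "hr_db": [],
    "trading_system": [],
}

def attack_path(start, target):
    """Breadth-first search returning the shortest pivot chain, if any."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in trust_edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path("internet", "hr_db"))
# ['internet', 'monitoring_box', 'domain_controller', 'hr_db']
# The low-criticality monitoring box is on every path to the crown jewels.
```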

Lastly, I feel information-led approaches often end up missing the value of raw functionality. For example, a critical trade execution system at an investment bank could have very little sensitive data stored on it, but the functionality it provides (i.e. being able to execute trades using that bank’s secret sauce) is hugely sensitive and needs to be considered in any prioritisation.

I’m not saying I have the answers, but we’ve spent a lot of time thinking about how to model the way our analysts attack systems, and whether we could systematically “guess” the results of multiple pentests across an organisation based on the inherent design of its network, systems and authentication. The idea is to use that model to drive prioritisation, or at least a testing plan. This is probably more closely aligned with the idea of a threat-centric approach to security, and it suffers from a lack of data in this area (I’ve started some preliminary work on incorporating VERIS metrics).
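As a rough illustration of what such a model could drive, here’s a hypothetical sketch that scores each system by the high-value assets reachable once it’s compromised, then sorts the result into a testing plan. The hosts, edges and scoring rule are assumptions for illustration, not our actual model.

```python
# Hypothetical sketch of turning an attack model into a testing plan:
# score each system by the high-value assets reachable once it is
# compromised, then test the highest-scoring systems first.

trust_edges = {
    "internet": ["vpn_gateway", "web_app"],
    "web_app": ["app_db"],
    "vpn_gateway": ["monitoring_box"],
    "monitoring_box": ["domain_controller"],
    "domain_controller": ["app_db", "trading_system"],
    "app_db": [],
    "trading_system": [],
}
# High value covers sensitive data *and* business-critical functionality.
high_value = {"app_db", "trading_system"}

def reachable(start):
    """All systems an attacker can pivot to from a given foothold."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in trust_edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Rank systems by how much value their compromise exposes.
scores = {sys: len(reachable(sys) & high_value) for sys in trust_edges}
for sys, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{sys}: exposes {score} high-value asset(s)")
```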

In summary, I think information-centric security fails in three ways: by providing limited prioritisation due to the high number of shared information containers in IT environments, by not incorporating how attackers move through a network, and by ignoring business-critical functionality.