Fri, 6 Sep 2013

Offence Oriented Defence

We recently gave a talk at the ITWeb Security Summit entitled "Offense Oriented Defence". The talk was targeted at defenders and auditors, rather than hackers (the con is oriented that way), although it's odd that I feel the need to apologise for that ;)


The talk's primary point was that by understanding how attackers attack, more innovative defences can be imagined. The corollary was that common defences, in the form of "best practice", introduce commonality that is more easily exploited, or at least degrade over time as attackers adapt. Finally, many of these "security basics" are honestly hard, and we can't place the reliance on them we'd hoped. But our approach doesn't seem to want to acknowledge this; much like an AA meeting, the first step is admitting we have a problem.


If you look at the average security strategy or budget, you often end up with a list containing several of these:


  • Compliance/GRC - building policies, auditing against them, responding to audits

  • Risk Management - enumerating and ranking all the info sec risks, prioritising them, and justifying spend to mitigate

  • Best Practices - strengthening passwords, pushing patches, configuration management, etc.

  • Technology - cue buzzwords - UTM, WAF, DLP, DAM, SIEM, IPS, AV

  • Staff - everyone needed to get the above done: compliance specialists, risk specialists, security managers, device ops managers


But the truth is that many of these items don't actually block attacks, and the few that do don't really counter the common bypasses used to side-step them. For example:


  • It's really hard to link risk-based priorities to meaningful technical priorities.

  • Compliance drives a "teach to the test" approach, with little incentive to create contradictory measurements.

  • How can we have a bunch of things called "best practice" when we can't honestly say we know how to defend? Even then, some best practices are practically impossible to achieve for anything more than a point in time. And the main point of this talk: common practices have common bypasses.


The current place we seem to be in is akin to having everyone build a wall. Attackers get to evaluate the wall, figure out how to get over it, and add to their capability (i.e. get a longer rope). Once they have a longer rope, they can use it over and over again, and against more than one wall. So attackers, who are quite good at sharing, get to keep building their tool chain, while all defenders can do is keep building a higher wall and maintaining the increasingly untenable structure. By understanding how attackers attack, we can break out of this and try more innovative approaches.


The talk is illustrated with four broad examples: Passwords, Patches, Anti-Virus and DMZs. For each, the beliefs around specific configurations are discussed, and how those don't stand up to how attackers actually attack. For example, the way AV is believed to work doesn't correspond with how easy it is to bypass, and the common configuration of standard password controls, such as lockout, doesn't take into account horizontal brute-force attacks, as sketched below.
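To make the lockout point concrete, here's a minimal sketch of a horizontal brute-force (password spraying). The check_login() function is a hypothetical placeholder for an authentication attempt against some exposed service (OWA, VPN, SSH, whatever); none of the names are from a real API.

    COMMON_PASSWORDS = ["Password1", "Summer2013", "P@55w0rd"]

    def check_login(username, password):
        """Hypothetical placeholder for one authentication attempt."""
        raise NotImplementedError

    def spray(usernames, passwords=COMMON_PASSWORDS):
        found = []
        # Passwords go in the *outer* loop: each account sees only
        # len(passwords) attempts in total, staying under a typical
        # 5-failure lockout threshold, while the attacker still gets
        # one guess per password against every single account.
        for password in passwords:
            for username in usernames:
                if check_login(username, password):
                    found.append((username, password))
        return found

The per-account lockout counter never trips, because no single account accumulates enough failures; the control was configured against vertical brute-force, not this.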


The point I want to make here is somewhat subtle: if you walk away thinking I've described new attacks, you've missed it; if you think I'm recommending "the basics", you've missed it. Truthfully, maybe I just didn't make the point very well ... decide for yourself, here are the slides:

Thu, 3 Nov 2011

Squinting at Security Drivers and Perspective-based Biases

While doing some thinking on threat modelling, I started examining the usual drivers of security spend and controls in an organisation. I've spent time on multiple fronts: security management (being audited, having CIOs push for priorities), security auditing (following workpapers and audit plans), pentesting (breaking in however we could), security consulting (trying to help people fix stuff), and even dabbling in selling security hardware. This has given me some insight (or at least an opinion) into how people try to justify security budgets, changes and findings, or how I tried to. This is a write-up of what I believe those drivers to be (caveat: this is my opinion). It's certainly not universal; it's possible to find unbiased, highly experienced people, but they will still have to fight the tendencies their position puts on them. What I'd want you to take away is that we need to move away from using these drivers in isolation, and towards more holistic risk management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).

Auditors

The tick-box monkeys themselves. They provide a useful function, and are so universally legislated and embedded in best practice that everyone has a few decades of experience on the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:

  • Vulnerabilities in financial systems. The whole audit hierarchy was created around financial controls, and so sticks close to financial systems when venturing into IT's space. Detailed and complex collusion possibilities will be discussed when approving payments, but the fact that you can reset anyone's password at the helpdesk is sometimes missed, and more advanced attacks like token hijacking are often ignored.
  • Audit house priorities. Audit houses get driven just like anyone else. While I wasn't around for Enron, the reverberations could still be felt years later when I worked at one. What's more, audit houses are increasingly finding revenue coming from consulting gigs and need to keep their smart people happy. This leads to external audit selling "add-ons" like identity management audits (sometimes, they're even incentivised to).
  • Auditor skills. The auditor you get could be an amazing business process auditor but useless when it comes to infosec, and next year it could be the other way around. It's equally possible with internal audit. Thus, the strengths of the auditor will determine where you get nailed the hardest.
  • The rotation plan. This year system X, next year system Y. It doesn't mean system X has gotten better, just that the auditors moved on. If you spend your year responding to the audit on system Y and ignore X, you'll miss vital stuff.
  • Known systems. External and internal auditors don't know IT's business in detail. There could be all sorts of critical systems (or pivot points) that are ignored because they weren't in the "flow of financial information" spreadsheet.
Vendors

Security vendors are the people the infosec world loves to hate. Thinking of them invokes pictures of greasy salesmen phoning your CIO to ask if your security chumps have even thought about network admission control (true story). On the other hand, if you've ever been a small team trying to secure a large org, you'll know you can't do it without automation, and at some point you'll need to purchase some products. Their marketing and sales people get all over the place and end up driving controls, whether it's “management by in-flight magazine”, an idea punted at a sponsored conference, or the result of a sales meeting.

But security vendors' prioritisation of controls is driven by:

  • New problems. Security products that work eventually get deployed everywhere they're going to be deployed. They continue to bring in income, but the vendor needs a new bright shiny thing they can take to their existing market and sell. Thus, new problems become new scary things that can be used to push product. Think of the Gartner hype curve. Whatever they're selling, be it DLP, NAC, DAM, APT prevention or IPS, if your firewall works more like a switch and your passwords are all "P@55w0rd", then you've got other problems to focus on first.
  • Overinflated problems. Some problems really aren't as big as they're made out to be by vendors, but making them look big is a key part of the sell. Even vendors who don't mean to overinflate end up doing it just because they spend all day thinking of ways to justify (even legitimate) purchases.
  • Products as solutions. Installing a product designed to help with a problem isn't the same as fixing the problem, and vendors aren't great at seeing that (some are). Take patch management solutions: there are some really awesome, mature products out there, but if you can't work out where your machines are, how many there are, or get creds to them, then you've got a long way to go before that product starts solving the problem it's supposed to (see the sketch after this list).
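As a toy illustration of that gap, here's a sketch that reconciles an asset list (say, an AD computer export) against what the patch console reports as managed. The file names and one-hostname-per-line format are assumptions for the example, not any product's real output.

    def load_hostnames(path):
        # One hostname per line; normalise case, skip blanks.
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    known = load_hostnames("ad_computers.txt")     # what the org thinks it has
    managed = load_hostnames("patch_console.txt")  # what the product can see

    print(f"{len(known - managed)} machines the patch product never touches")
    print(f"{len(managed - known)} managed machines nobody has on record")

If either difference is large, the product isn't yet solving the problem it was bought for.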
Pentesters

Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug, where all that was needed was for people to apply a patch, but which, due to the publicity around it, garnered a significant amount of interest from people it usually wouldn't have, and who probably shouldn't have cared so much. Many pentesters trade on this publicity, and some pentesting companies use it instead of a marketing budget. That's not their only, or primary, motivation, and in the end things get fixed, new techniques get shared and the world is a better place. The cynical view, then, is that some of the motivations of vulnerability researchers, and what they end up prioritising, are:

  • New Attacks. This is somewhat similar to vendors optimising for "new problems", but not quite the same. When Errata introduced Hamster at ToorCon ‘07, I heard tales of people swearing at them from the back. I wasn't there, but I imagine some of the calls were because Layer 2 attacks have been around and well known for over a decade now. Many of us ignored FireSheep for the same reason, even if it motivated the biggest moves to SSL yet. But vuln researchers and the scene aren't interested unless it's shiny, new and leet. This focus on the new, and the press it drives, has defenders running around trying to fix new problems when they haven't fixed the old ones.
  • Complex Attacks. Related to the above, a new attack can't be really basic if it's to be received well; it needs to involve considerable skill. When Mark Dowd released his highly complex Flash attack, he was rightly given much kudos. An XSS attack, on the other hand, was initially ignored by many. However, one led to a wide class of prevalent vulns, while the other requires you to be, well, Mark Dowd. This means some of the issues that should be obvious, that underpin core infrastructure, but that aren't sexy, don't get looked at.
  • Shiny Attacks. Some attacks are just really well presented and sexy. Barnaby Jack had an ATM spitting out cash and flashing "Jackpot"; that's cool, and it gets a room packed full of people to hear his talk. Hopefully it led to an improvement in the security of some of the ATMs he targeted, but the vulns he exploited were the kinds of things big banks had mostly resolved already, and how many people in the audience actually worked in ATM security? I'd be interested to see if the con budget from banks increased the year of his talk; even if it didn't, I suspect many a banker went to his talk instead of one covering a more prevalent or relevant class of vulnerabilities their organisation may experience. Something Thinkst says much better here.
Individual Experience

Unfortunately, as human beings, our decisions are coloured by a bunch of things, and end up influenced or defined by factors other than the reality we're faced with. A couple of these lead us to prioritise different security motives when decision-making rests solely with one person:

  • Past Experience. Human beings develop through learning and consequences. When you were a child and put your hand on a stove hot plate, you got burned and didn't do it again. It's much the same every time you get burned by a security incident or, worse, an internal political incident. There's nothing wrong with this, and it's why we value experience: people who've been burned enough times not to let mistakes happen again. However, it does mean time may be spent preventing a past wrong rather than focusing on the most likely current wrong. For example, one company I worked with insisted that an overly burdensome set of controls be placed between servers belonging to their security team and the rest of the company network. The reason was a previous incident, years earlier, where one of these servers had been the source of a Slammer outbreak. While that network was never again the source of a virus outbreak, their network still got hit by future outbreaks from normal users, via the VPN, from business partners, etc. In this instance, past experience was favoured over a comprehensive approach to the actual problem, rather than just the symptom.
  • New Systems. Usually, the time when the most budget is available to work on a system is during its initial deployment. This is equally true of security, and the mantra is that security should be built in at the beginning. Justifying a chunk of security work on the mainframe that's been working fine for the last 10 years, on the other hand, is much harder, and usually needs to hook into an existing project. The result is that it's easier to get security built into new projects than to force an organisation to make significant “security only” changes to existing systems, so the systems that present the vulnerabilities pentesters know and love get fixed less frequently.
  • Individual Motives. We're complex beings with all sorts of drivers and motivations; maybe you want to get home early to spend some time with your kids, maybe you want to impress Bob from Payroll. All sorts of things can lead to a decision that isn't necessarily the right security one. More relevantly, however, security tends to operate in a fairly segmented manner: while some aspects are “common wisdom”, others seem rarely discussed. For example, the way the CISO of Car Manufacturer A and the CISO of Car Manufacturer B set up their controls and choose their focus could be completely different, but beyond general industry chit-chat, there will be little detailed discussion of how they're securing integration with their dealership networks. They rely on consultants, who've seen both sides, for that. Even then, one consultant may think monitoring is the most important control at the moment, while another could think mobile security is it.
So What?

The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that helps you secure things in an order that mirrors the actual risk to your environment. While it's still a black art, I believe Threat Modelling helps a lot here: a sufficiently comprehensive methodology, one that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside a “most critical” list) and includes the valid perspectives described above, provides a more objective version of reality that isn't as vulnerable to any single one of the biases described.

Fri, 15 Jul 2011

Security Policies - Go Away

Security policies are necessary, but the focus they receive is to the detriment of more important security tasks. If auditors had looked for trivial SQL injection on a company's front page as hard as they have checked for security policies, then maybe our industry would be in a better place. I want to make this go away; I want to help you tick the box so you can focus on the real work. If you just want the "tool", skip to the end.

A year and a half ago, SensePost started offering "build it" rather than "break it" consulting services; we wanted to focus on technical, high-quality advisory work. However, by far the most frequent "consulting" request we've seen has been for security policies. Either a company approaches us looking for them explicitly, or they want them bolted on to other work. The gut feel I've picked up over the years is that if someone is asking you to develop security policies for them, then either they're starting on security at the behest of some external or compliance requirement, or they're hoping that this is the first step in an information security program. (Obviously, I can't put everything into the same bucket, but I'm talking generally.) Both are rational reasons to want to get your information security policies sorted, but getting outside consultants to spend even a week's worth of time developing them for you is time that could be better spent, in my opinion. My reasons for this are two-fold:

  • If you're starting a security program, then you have a lot to learn and possibly a lot of convincing of senior management to do. Something like an internal penetration test (not that I'm advocating this specifically instead of policy) will give you far more insight into the security of your environment and a lot more "red ink" that can be used to highlight the risk to the "higher ups".
  • Security policies don't "do" anything. They are a representation of management's intentions and agreements around security controls, which, in the best case, provide a "cover my ass" defence if an employee takes you to task for intercepting their e-mails or something similar. Policies need to be used to derive actual controls; they are not controls in themselves.
Instead, we too often end up in a world where security policies, rather than good security, are the end goal, while new technologies keep us amused developing new ones (mobile policies, social media policies, data leakage policies, etc.).

Saying all of this is fine, but it doesn't make the auditors stop asking, and it doesn't put a green box or tick in the ISO/PCI/CoBIT/HIPAA/SOX policies checkbox. Previously, I've pointed people at existing policy repositories, where sample policies can be downloaded and modified to suit their needs. Sites such as CSOOnline or PacketSource have links to some policies, but by far the most comprehensive source of free security policy templates is SANS. The problem is people seem to look at these, decide it looks like work, and move on to a consultancy that's happy to charge for a month's worth of time. Even when they don't, the policies are buried in sub-pages that don't always make sense (for example, why is the Acceptable Use Policy filed under "computer security"?), and several of them are only available in PDF form (hence not easily editable), even though they're explicitly written as modifiable templates. What I did was go through all of these pages, download the documents, convert them into editable formats and categorise them into a single view in a spreadsheet, with hyperlinks to the documents. I've also included the guidance documents on how to write good security policies, and ISO 27001-linked policy roadmaps. I haven't modified any of the actual content of the documents, and they retain their original copyright. I'm not trying to claim any credit for others' hard work, merely to make the stuff a little more accessible.
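If you want to rebuild or extend the index yourself, the categorisation step is simple to script. A rough sketch, assuming the downloaded templates sit in a local policies/ folder with one sub-folder per category (an assumed layout for illustration, not SANS's actual structure); it writes a CSV that any spreadsheet program will open as a hyperlinked view:

    import csv
    from pathlib import Path

    POLICY_DIR = Path("policies")  # assumed local folder of downloaded templates

    with open("policy_index.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["Category", "Policy", "Link"])
        for doc in sorted(POLICY_DIR.rglob("*.doc*")):
            # Folder name becomes the category, filename the title,
            # and a file:// URI the hyperlink target.
            writer.writerow([doc.parent.name, doc.stem, doc.resolve().as_uri()])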

You can download the index and documents HERE.

In future, I hope to add more "good" policies (a few of the SANS policies aren't wonderful), and to expand into security standards (à la CIS Security). If necessary, take this to a consultancy and ask them to spend some time making the policies specific to your organisation and way of doing things, but please, if you aren't getting the basics right, don't focus on these. In the meantime, if you're looking for information security policies to go away so you can get on with the bigger problems that organisations, and our industry in general, are facing, then this should be a useful tool.