Metricon 2011 Summary

[I originally wrote this blog entry on the plane returning from Black Hat, Defcon & Metricon, but forgot to publish it. I think the content is still interesting, so sorry for the late entry :)]

I’ve just returned after a 31hr transit from our annual US trip. Vegas, training, Black Hat & Defcon were great; it was good to see friends we only get to see a few times a year, and to make new ones. But the event I most enjoyed on the trip was Metricon, a workshop held at the USENIX Security conference in San Francisco, run by a group of volunteers from the security metrics mailing list and originally sparked by Andrew Jaquith’s seminal book Security Metrics.

There were some great talks and interactions, the kind you only get at small gatherings around a specific set of topics. It was a nice break from the offensive sec of BH & DC to listen to a group of defenders. The talks I most enjoyed (all of them were recorded, bar a few private talks) were the following:

Wendy Nather – Quantifying the Unquantifiable, When Risk Gets Messy

Wendy looked at the bad metrics we often see, and provided some solid tactical advice on how to phrase (for input) and represent (for output) metrics. Along the way, she threw out more pithy phrases than even the people tweeting in the room could keep up with: from introducing a new unit for measuring attacker skill, “Mitnicks”, to practical experience such as how a performance metric phrased as 0-100 had sysadmins aiming for 80-90, while inverting the scale had them aiming for 0 (her hypothesis is that school taught us 100% is rarely achievable). Frankly, I could write a blog entry on her talk alone.

Josh Corman – “Shall we play a game?” and other questions from Joshua

Josh tried to answer the hard question of “why isn’t security winning?”. He avoided the usual complaints and had some solid analysis that got me thinking. In particular, the idea that PCI is the “No Child Left Behind” Act for security: it not only targeted those who had been negligent, but also encouraged those who hadn’t to drop their standards. “We’ve huddled around the digital dozen, and haven’t moved on.” He went on to talk about how controls decay as attacks improve, but our best-practice advice doesn’t: “There’s a half-life to our advice”. He then provided a great setup for my talk: “What we are doing is very different from how people were exploited.”

Jake Kouns – Cyber Liability Insurance

Jake has taken security to what we already knew it was, an insurance sale ;) Jokes aside, Jake is now a product manager for cyber-liability insurance at Markel. He provided some solid justifications for such insurance, and opened my eyes to the fact that it is now here, at pretty reasonable prices (e.g. $1,500 for $1 million in cover). Most of the thinking appeared to target small to medium organisations that until now have only really had “use AV & pray” as their infosec strategy; I’d love to hear some case studies from large orgs that are using it & have claimed. He also spoke about how it could become a “moral hazard”, where people choose to insure rather than implement controls, and about the difficulties the industry could face, though these work as incentives for us right now (the cost of auditing a business would be more than the premium is worth). His conclusion, which seemed solid: why spend $x million on the “next big sec product” when you could spend less & get more from insurance? Lots of questions remain, but it looks like it may be time to start investigating.

Allison Miller – Applied Risk Analytics

I really enjoyed Allison and Itai’s talk. They looked at practical methodologies for developing risk metrics and coloured them with great examples. The process they presented was the following:

  1. Target – You need to figure out what you want to measure. Allison recommended aiming for “yes/no” questions rather than more open-ended ones such as “Are we at risk?”
  2. Find Data, Create Variables – Once you know what you’re trying to look at, find appropriate data and work out what variables you can derive from it.
  3. Data Prep – “Massaging the data”: normalising it, getting it into a computable format, etc.
  4. Model Training – Pick an algorithm, send the data through it and see what comes out. She suggested using a couple and pitting them against each other.
  5. Assessment – Check the output: what are the catch, false-positive and false-negative rates? Even with FPs & FNs, weighting the model so that its failures skew towards one particular type could still be useful.
  6. Deployment – Build intelligence to take automated responses once the metric is stable.

The example they gave was looking for account takeovers stemming from the recent spate of released e-mail/password combos. Itai took us through each step and showed how they were eventually able to automate the decision-making off the back of a solid metric.
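To make the process concrete, here’s a minimal Python sketch of what steps 2-5 might look like. This is my own illustration, not code from the talk: the login features, the synthetic labels and the choice of models are all hypothetical stand-ins.

```python
# Hypothetical sketch of steps 2-5 of the process above, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# 2. Find data, create variables: per-login features we assume we could
#    derive, e.g. "password appeared in a public dump", "login from a new
#    device", "impossible travel since the last login".
X = np.column_stack([
    rng.integers(0, 2, n),   # password_in_dump (0/1)
    rng.integers(0, 2, n),   # new_device (0/1)
    rng.random(n),           # geo-velocity score, already normalised to 0-1
])

# Synthetic ground truth: takeovers correlate strongly with dumped passwords.
y = (0.6 * X[:, 0] + 0.2 * X[:, 1] + 0.3 * X[:, 2] + 0.5 * rng.random(n)) > 0.9

# 3. Data prep: hold out a test set so the assessment isn't self-graded.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 4. Model training: pit a couple of algorithms against each other.
models = {
    "logistic": LogisticRegression(),
    "forest": RandomForestClassifier(random_state=0),
}

# 5. Assessment: catch vs false-positive vs false-negative counts.
for name, model in models.items():
    model.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    print(f"{name}: catch={tp}, false_positives={fp}, false_negatives={fn}")
```

If, per the assessment step, you’d rather your failures skew towards one type, most libraries give you a knob for it (e.g. scikit-learn’s class_weight parameter); deployment is then just wiring the stable model’s verdicts into an automated response.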

Conclusion

I found the conference refreshing, with a lot of great advice (far more than the little I’ve listed above). Too often we get stuck in the hamster wheel of pain, and it’s nice to think we may be able to slowly step off. Hopefully we’ll be back next year.