
Wed, 12 Feb 2014

RAT-a-tat-tat

Hey all,


So following on from my talk (slides, video), I am releasing the Nmap service probes and the Poison Ivy NSE script, as well as the DarkComet config extractor.



An example of finding live Poison Ivy C2s and extracting the Camellia key:
nmap -sV -Pn --versiondb=nmap-service-probes.pi --script=poison-ivy.nse <ip_address/range>

Finding Poison Ivy, DarkComet and/or Xtreme RAT C2s:
nmap -sV -Pn --versiondb=nmap-service-probes.pi <ip_range>
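
If you're sweeping a large range, it can help to post-process the results rather than eyeball them. The following is a minimal, hypothetical Python sketch that pulls matching hosts out of an Nmap XML report (add -oX scan.xml to the commands above); the service-name substrings it looks for are assumptions, so adjust them to whatever names the probe file actually reports.

# Minimal sketch: list hosts whose detected service looks like a RAT C2,
# based on an Nmap XML report produced by adding -oX scan.xml above.
# The substrings in RAT_MARKERS are assumptions -- adjust them to the
# service names the custom probe file actually reports.
import xml.etree.ElementTree as ET

RAT_MARKERS = ("poison", "darkcomet", "xtreme")  # assumed match names

def rat_c2s(xml_path):
    tree = ET.parse(xml_path)
    for host in tree.getroot().iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            svc = port.find("service")
            if svc is None:
                continue
            name = "{} {}".format(svc.get("name") or "", svc.get("product") or "")
            if any(marker in name.lower() for marker in RAT_MARKERS):
                yield addr, port.get("portid"), name.strip()

if __name__ == "__main__":
    for addr, portid, name in rat_c2s("scan.xml"):
        print("{}:{} -> {}".format(addr, portid, name))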


If you have any questions, please contact research@sensepost.com
Cheers

Mon, 30 Dec 2013

Goodbye to 2013, hello to 2014

With 2013 coming to a close, I thought it pertinent to look back at the year we've had and also forward to what's promising to be an incredibly exciting 2014 for us.


For SensePost, 2013 was a year of transition. With a new leadership structure in myself, Shane and Dominic, we had a chance to stamp our style and vision on the company and also to learn from Charl and Jaco. One of the first leadership choices was to expand our reach and open our first office in London, aptly in a borough called Hackney. Here, we grew our family and welcomed some amazing people into the plak. After a few short months, we had outgrown the office and needed to look for bigger premises, this time in another aptly named area: Whitechapel (think Jack the Ripper).


Back in South Africa, after moving to bigger premises down the road, we finally got a chance to make it feel like home. These two new offices have allowed us to continue to grow at a steady pace, whilst still keeping the SensePost vision and vibe alive.


On a technical level, as this is what we are really about, we've had an amazing year. As part of this new vision, we made some key appointments:


Craig Swan, who was originally part of the assessments team before leaving, returned home to take up the role of Training Manager. On the training front, we've had one of our busiest years to date. From Blackhat in Las Vegas, Brasil and Seattle, to 44Con in London, to training for our friends in the US and our courses held in Southern Africa, we've trained hundreds of students in the art of offensive security. We've also created two new courses for the Hacking by Numbers series, one concentrating on mobile assessments and the other on malware reverse engineering. However, we are not resting on our laurels, and with Craig on board, 2014 is shaping up to be an amazing year for education at SensePost.


Victor Tadden, an experienced technical Project Manager, joined the assessment team to help us be more efficient with our delivery of projects. He brings with him a wealth of software dev experience and has already made a significant impact in the way we work, especially managing to wrangle pen testers together daily for scrum meetings, a feat many will tell you is akin to herding cats.


Tiago Rosado joined us from Portugal to head up our Managed Vulnerability Service, a key service line that many of our clients rely on for a more holistic view of their security posture. Our MVS service line is being revamped for 2014 and Tiago will help us achieve this.


Marc Peiser became our IT Manager, bringing with him a wealth of UNIX experience, having worked for a massive global bank. Marc's aim for 2014 is to ensure that our internal networks are not only robust but also allow us to do what we do. Surprisingly enough, we are frequently attacked, and having a defence-in-depth approach to IT is as important to us as it is to our clients.


Internally, we've welcomed some new family members and said goodbye to others. We value those who choose to work here very highly; we want work to be a creative environment where people can have fun, grow and, most importantly, enjoy coming to work. Nothing makes me more proud than seeing a plakker accepting new challenges, often defining the way the security industry works, or helping others with their security needs. As the penetration testing industry matures, one of my main goals for 2014 is to ensure that our proven hacker ethos remains.



2013 saw us presenting at conferences throughout the year and, for the first time in our history, in a total of eight different countries across five continents. Our research included vulnerabilities in the Internet of Things, distributed surveillance frameworks, a security analysis of the TrustZone OS and MobiCore, and using spatial statistics to detect fast-flux botnet command and control (C2) domains.


Technical prowess is still at the very heart of what we do at SensePost. We love to pwn and 2014 will see us continuing to write new tools, approach old problems with a new way of thinking and just being, well, us.


In November, after months of negotiations, came the news that we were to be acquired by SecureData Europe. This new chapter will usher in a new era of growth and development for SensePost, and we are truly excited to be part of the SecureData Europe family.


Overall it was a fantastic year, especially for us, the new EXCO. I am extremely proud to stand alongside some incredibly talented people and call them colleagues, and I look forward to 2014 and what it brings.


From everyone at SensePost, we wish you a Merry Christmas and best wishes for the New Year.

Fri, 6 Sep 2013

Offence oriented defence

We recently gave a talk at the ITWeb Security Summit entitled "Offence Oriented Defence". The talk was targeted at defenders and auditors, rather than hackers (the con is oriented that way), although it's odd that I feel the need to apologise for that ;)


The talk's primary point was that by understanding how attackers attack, more innovative defences can be imagined. The corollary was that common defences, in the form of "best practice", introduce commonality that is more easily exploited, or at least degrade over time as attackers adapt. Finally, many of these "security basics" are honestly hard, and we can't place the reliance on them we'd hoped. Yet our approach doesn't seem to want to acknowledge this; much like at an AA meeting, the first step is admitting we have a problem.


If you had to look at the average security strategy or budget items, you often end up with a list containing a couple of these:


  • Compliance/GRC - building policies, auditing against them, responding to audits

  • Risk Management - enumerating and ranking all the info sec risks, prioritising them, and justifying spend to mitigate

  • Best Practices - strengthening passwords, pushing patches, configuration management, etc.

  • Technology - cue buzzwords - UTM, WAF, DLP, DAM, SIEM, IPS, AV

  • Staff - everyone needed to get the above stuff done: compliance specialists, risk specialists, security managers, device ops managers


But the truth is that many of these items don't actually block attacks, and the few that do don't really counter the common bypasses used to side-step them. For example:


  • It's really hard to link risk-based priorities to meaningful technical priorities.

  • Compliance drives a "teach to the test" approach, with little incentive to create contradictory measurements.

  • Then how can we have a bunch of things called "best practice" when we can't honestly say we know how to defend? Even then, some best practices are practically impossible to achieve in anything but a point in time. And the main point of this talk: common practices have common bypasses.


The current place we seem to be in is akin to having everyone build a wall. Attackers get to evaluate the wall, figure out how to get over it, and add to their capability (i.e. get a longer rope). But once they have a longer rope, they can use it over and over again, and against more than one wall. So attackers, who are quite good at sharing, get to keep building their tool chain, while all defenders can do is keep building a higher wall and maintaining an increasingly untenable structure. By understanding how attackers attack, we can break out of this and try more innovative approaches.


The talk is illustrated with four broad examples: Passwords, Patches, Anti-Virus and DMZs. For each, the beliefs around specific configurations are discussed, and how those don't stand up to the way attackers actually attack. For example, the way AV is believed to work doesn't seem to correspond with how easy it is to bypass, and the common configuration of standard password controls, such as lockout, doesn't take into account horizontal brute-force attacks, where one password is tried across many accounts.
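
To make that last point concrete, here is a rough sketch of the difference between the vertical guessing that lockout policies anticipate and the horizontal guessing (password spraying) that attackers actually use. The try_login() function is a hypothetical stand-in for whatever authentication interface is exposed (OWA, SMB, SSH, a web form, and so on).

# Sketch only: vertical vs horizontal brute force.
# try_login() is a hypothetical placeholder for the real authentication
# interface (OWA, SMB, SSH, a web form, etc.).
import time

def try_login(username, password):
    return False  # placeholder; a real version would attempt the login

def vertical_bruteforce(username, passwords):
    # Many guesses against one account: exactly the pattern lockout counters.
    for pw in passwords:
        if try_login(username, pw):
            return username, pw
    return None

def horizontal_bruteforce(usernames, passwords, delay=1.0):
    # One guess per account per pass: each account only ever sees a handful
    # of failures, so the lockout threshold is never reached.
    for pw in passwords:
        for user in usernames:
            if try_login(user, pw):
                return user, pw
            time.sleep(delay)  # keep the per-account failure rate low
    return None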


The point I want to make here is somewhat subtle: if you walk away thinking I've described new attacks, you've missed it; if you think I'm recommending "the basics", you've also missed it. Truthfully, maybe I just didn't make the point very well ... decide for yourself, here are the slides:

Thu, 5 Sep 2013

44CON 2013

In one week, it's 44CON time again! One of our favourite UK hacker cons. In keeping with our desire to make more hackers, we're giving several sets of training courses as well as a talk this year.


Training: Hacking by Numbers - Mobile Edition


If you're in a rush, you can book here.


We launched our latest addition to the Hacking by Numbers training at Blackhat USA, and nobody threw anything rotting; in fact, some said it went pretty well.


We created the course to share our experience testing mobile applications and platforms, and, well, because lots of people asked us to. The course shows you how to test mobile platforms and installed applications for vulnerabilities. HBN Mobile provides a pretty complete and practical overview of the methods used when attacking mobile platforms, and presents you with a methodology that can be applied across platforms (although we focus on iOS and Android). This course is mostly for existing penetration testers who are new to the mobile area and looking to learn how to understand, analyse and audit applications on various mobile platforms.


For more information about the course, and to book a place, head over here.


Workshop: Malware Reverse Engineering


If we were marketing to hipsters, we'd use words like “bespoke” and “handcrafted” to describe this workshop. While it's not made out of yams, it was put together especially for 44con.


Inaki and Siavosh's workshop will cut through the black magic often associated with reverse engineering and malware. Advanced attacks usually have some form of malware involved, and learning to pull these apart to understand the kill chain is an increasingly vital skill.


Using real malware from attacks against large corporates, students will look at both behavioural analysis and code analysis to determine what the malware does.


If you're keen to attend, speak to the 44con crew at the front desk on arrival.


Talk: 'Honey, I'm Home' - Hacking Zwave Home Automation Systems


Behrang and Sahand will be presenting the results of their research into smart homes on day two at 09:30.


“Smart homes” employing a variety of home automation systems are becoming increasingly common. Heating, ventilation, security and entertainment systems are centrally controlled with a mixture of wired and wireless networking. In 2011 the UK market for home automation products was estimated at GBP 65 million, an increase of 12% on the previous year, with the US market exceeding $3 billion. Zigbee and Z-Wave wireless protocols underpin most home automation systems. Z-Wave is growing in popularity as it does not conflict with existing 2.4GHz WiFi and Bluetooth systems.


Their talk describes the Z-Wave protocol and a number of weaknesses, including how to build a low-cost attack kit to perform packet capture and injection, along with potential attacks on the AES crypto implementation. Bottom line: they can walk up to a house, disable security sensors, then open the front door. LIKE A BOSS

Wed, 9 May 2012

Pentesting in the spotlight - a view

As 44Con 2012 starts to gain momentum (we'll be there again this time around) I was perusing some of the talks from last year's event...

It was a great event with some great presentations, including (if I may say) our own Ian deVilliers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*. In this presentation Haroon outlines concerns he has with penetration testing and suggests some changes that could be made to the way we test in order to improve the results we get. As you may know, a core part of SensePost's business, and of my career for almost 13 years, has been security testing, and so I followed this talk quite closely. The talk raises some interesting ideas and I felt I'd like to comment on some of the points he was making.

As I understood it, the talk's hypothesis could be (over) simplified as follows:

  1. Despite all efforts the security problem is growing and we're heading towards a 'security apocalypse';
  2. Penetration Testing has been presented as a solution to this problem;
  3. Penetration Testing doesn't seem to be working - we're still just one 0-day away from being owned - even for our most valuable assets;
  4. One of the reasons for this is that we don't cater for the 0-day, which is a game-changer. 0-day is sometimes overemphasized, but mostly it's underemphasized, making the value of the test spurious at best;
  5. There are some ways in which this can be improved, including the use of '0-day cards', which allow the tester to emulate the use of a 0-day on a specific system without needing to actually have one. Think of this like a joker in a game of cards.
To begin with, let's consider the term "Penetration Testing", which sits at the core of the hypothesis. This term is widely used to cover a number of security testing methodologies and could also be referred to as "attack & penetration", "ethical hacking", "vulnerability testing" or "vulnerability assessment". At SensePost we use the latter term, and the methodology it expresses includes a number of phases, of which 'penetration testing' - the attempt to actually leverage the vulnerabilities discovered and practically demonstrate their potential impact to the business - is only one. The talk did not specify which definition of Penetration Test he was using. However, given the emphasis later in the talk on the significance of the 0-day and 'owning' things, I'm assuming he was using the most narrow, technical form of the term. It would seem to me that this already impacts much of his assertion: there are cases, of course, where a customer wants us simply to 'own' something, or some things, but most often penetration testing is performed within the context of some broader assessment, within which many of Haroon's concerns may already be being addressed. As the talk pointed out, there are instances where the question asked is "can we be breached?", or "can we be breached without detecting it?". In such cases a raw "attack and penetration" test can be exactly what's needed; indeed it's a model that's been used by the military for decades. However, for the most part, penetration testing should only be used as a specific phase in an assessment and to achieve a specific purpose. I believe many services companies, including our own, have already evolved to the point where this is the case.

Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.

It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.

A significant, and interesting component of the talk's thesis has to do with the role of "0-day" in security and testing. He rightly points out that even a single 0-day in the hands of an attacker can completely change the result of the test and therefore the situation for the attacker. He suggests in his talk that the testing teams who do have 0-day are inclined to over-emphasise those that they have, whilst those who don't have tend to underemphasize or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards. You can play a great game with a great hand but if your opponent has a joker he's going to smoke you every time. In this the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system and thereby to illustrate more realistically the impact a 0-day can have. I like this idea very much and I'd like to investigate incorporating it into the penetration testing phase for some of our own assessments.

What I struggle to understand, however, is why the talk emphasizes this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator" card, a "spear phishing" card, a "backdoor in OTS software" card or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like the Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management, and that risk management also needs to consider factors beyond the client's direct control.

The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.

The core of the approach I'm proposing is roughly based on the Microsoft methodology and looks as follows:

  1. Develop a model of your target environment, incorporating all players, locations, and interfaces. This is done in close collaboration between the client and the tester, thus incorporating both the 'insider' and the 'outsider' perspective;
  2. Enumerate all potential risks, and map them to the model. This results in a very long and comprehensive list of hypothetical risks, which would naturally include the 0-day, but also all the other 'jokers' that we discussed above;
  3. Sort the list into some order of priority and group similar hypothetical risks together;
  4. Perform tests in order of priority where appropriate to prove or disprove the hypothetical risks;
  5. Remediate, mitigate, insure or inform as appropriate;
  6. Rinse and repeat.
This approach provides a reasonable balance between solid theoretical risk management and aggressive technical testing that addresses all the concerns raised in the talk about the way penetration testing is done today. It also provides the customer with a concrete register of tested risks that can easily be updated from time to time and makes sense to both technical and business leaders.
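
To make the idea of a "concrete register of tested risks" a little more tangible, here is a purely illustrative sketch of what one entry in such a register might look like as a data structure; the field names and example values are my own assumptions, not a prescribed format.

# Illustrative only: one possible shape for an entry in a threat-model
# risk register. Field names and example values are assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Risk:
    risk_id: str                   # e.g. "R-001"
    description: str               # hypothetical risk mapped onto the model
    affected_interface: str        # where in the model it applies
    priority: int                  # 1 = test first
    status: str = "hypothetical"   # hypothetical / confirmed / disproved
    last_tested: Optional[date] = None
    treatment: str = ""            # remediate / mitigate / insure / inform

register = [
    Risk("R-001", "0-day against an internet-facing service", "perimeter", priority=1),
    Risk("R-002", "spear phishing of staff leading to workstation compromise",
         "corporate mail", priority=2),
]

# After each round of testing, entries are updated rather than rewritten,
# so the register evolves from engagement to engagement.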

Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.

Solving the security problem in total is sadly still going to take a whole lot more work...