
Sat, 1 Jun 2013

Honey, I’m home!! - Hacking Z-Wave & other Black Hat news

You've probably never thought of this, but the home automation market in the US was worth approximately $3.2 billion in 2010 and is expected to exceed $5.5 billion in 2016.


Under the hood, the Zigbee and Z-Wave wireless communication protocols are the most commonly used RF technologies in home automation systems. Zigbee is based on an open specification (IEEE 802.15.4) and has been the subject of several academic and practical security research efforts. Z-Wave is a proprietary wireless protocol that operates in the Industrial, Scientific and Medical (ISM) radio band. It transmits on the 868.42 MHz (Europe) and 908.42 MHz (United States) frequencies and is designed for low-bandwidth data communication in embedded devices such as security sensors, alarms and home automation control panels.


Unlike Zigbee, almost no public security research has been done on the Z-Wave protocol, except for a DefCon 2011 talk in which the presenter pointed to the possibility of capturing the AES key exchange ... until now. Our Black Hat USA 2013 talk explores the question of Z-Wave protocol security and shows how the Z-Wave protocol can be subjected to attacks.


The talk is being presented by Behrang Fouladi, a Principal Security Researcher at SensePost, with some help on the hardware side from our friend Sahand Ghanoun. Behrang is one of our most senior and most respected analysts. He loves poetry, movies with Owen Wilson, snowboarding and long walks on the beach. Wait - no - that's me. Behrang's the guy who lives in London and has a Masters from Royal Holloway. He's also the guy who figured out how to clone the SecurID software token.


Amazingly, this is the 11th time we've presented at Black Hat Las Vegas. We try and keep track of our talks and papers at conferences on our research services site, but for your reading convenience, here's a summary of our Black Hat talks over the last decade:



2002: Setiri: Advances in trojan technology (Roelof Temmingh)


Setiri was the first publicized trojan to implement the concept of using a web browser to communicate with its controller, and it caused a stir when we presented it in 2002. We were also very pleased when it got referenced in a 2004 book by Ed Skoudis.


2003: Putting the tea back into cyber terrorism (Charl van der Walt, Roelof Temmingh and Haroon Meer)


A paper about targeted, effective, automated attacks that could be used in countrywide cyber terrorism. A worm that targets internal networks was also discussed as an example of such an attack. In some ways, the thinking in this talk eventually led to the creation of Maltego.


2004: When the tables turn (Charl van der Walt, Roelof Temmingh and Haroon Meer)


This paper presented some of the earliest ideas on offensive strike-back as a network defence methodology, which later found their way into Neil Wyler's 2005 book "Aggressive Network Self-Defence".


2005: Assessment automation (Roelof Temmingh)


Our thinking around pentest automation, and in particular footprinting and link analysis, was further expanded upon. Here we also released the first version of our automated footprinting tool - "Bidiblah".


2006: A tail of two proxies (Roelof Temmingh and Haroon Meer)


In this talk we literally did introduce two proxy tools. The first was "Suru", our HTTP MITM proxy and a then-contender to the @stake Web Proxy. Although Suru has long since been surpassed by excellent tools like Burp Proxy, it introduced a number of exciting new concepts, including trivial fuzzing, token correlation and background directory brute-forcing. Further improvements included timing analysis and indexable directory checks. These were not available in other commercial proxies at the time, hence our need to write our own.


Another pioneering MITM proxy - WebScarab from OWASP - also shifted thinking at the time. It was originally written by Rogan Dawes, our very own pentest team leader.


The second proxy we introduced operated at the TCP layer, leveraging the excellent Scapy packet manipulation program. We never took that any further, however.


2007: It's all about timing (Haroon Meer and Marco Slaviero)


This was one of my favourite SensePost talks. It kicked off a series of research projects concentrating on timing-based inference attacks against all kinds of technologies and introduced a weaponized timing-based data exfiltration attack in the form of our Squeeza SQL Injection exploitation tool (you probably have to be South African to get the joke). This was also the first talk in which we Invented Our Own Acronym.


2008: Pushing a camel through the eye of a needle (Haroon Meer, Marco Slaviero & Glenn Wilkinson)


In this talk we expanded on our ideas of using timing as a vector for data extraction in so-called 'hostile' environments. We also introduced our 'reDuh' TCP-over-HTTP tunnelling tool. reDuh is a tool that can be used to create a TCP circuit through validly formed HTTP requests. Essentially this means that if we can upload a JSP/PHP/ASP page onto a compromised server, we can connect to hosts behind that server trivially. We also demonstrated how reDuh could be implemented under OLE right inside a compromised SQL 2005 server, even without 'sa' privileges.


2009: Clobbering the cloud (Haroon Meer, Marco Slaviero and Nicholas Arvanitis)


Yup, we did cloud before cloud was cool. This was a presentation about security in the cloud, covering issues such as privacy, monoculture and vendor lock-in. We examined the cloud offerings from Amazon, Salesforce and Apple, as well as their security. We got an email from Steve "Woz" Wozniak, we quoted Dan Geer and we had a photo of Dino Dai Zovi. We built an HTTP brute-forcer on Force.com and (best of all) we hacked Apple using an iPhone.


2010: Cache on delivery (Marco Slaviero)


This was a presentation about mining information from memcached. We introduced go-derper.rb, a tool we developed for hacking memcached servers, and gave a few examples, including a sexy hack of bps.org. It seemed like people weren't getting our point at first, but later the penny dropped, and to date we've had almost 50,000 hits on the presentation on Slideshare.


2011: Sour pickles (Marco Slaviero)


Python's Pickle module provides a known capability for running arbitrary Python functions and, by extension, permitting remote code execution; however, there is no public Pickle exploitation guide, and published exploits are simple examples only. In this paper we described the Pickle environment, outlined the hurdles facing a shellcoder and provided guidelines for writing Pickle shellcode. A brief survey of public Python code was undertaken to establish the prevalence of the vulnerability, and a shellcode generator and Pickle mangler were written. Output from the paper included helpful guidelines and templates for shellcode writing, tools for Pickle hacking and a shellcode library. We also wrote a very fancy paper about it all...
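The core primitive behind Pickle exploitation is the __reduce__ hook: an object may tell the unpickler to invoke an arbitrary callable with arguments during deserialisation. A deliberately harmless sketch (this is the "simple example" class of exploit the paper moves beyond):

```python
import pickle

class Evil:
    def __reduce__(self):
        # On unpickling, the (callable, args) pair returned here is
        # invoked, so loading untrusted pickle data amounts to code
        # execution. eval is used here as a benign stand-in.
        return (eval, ("40 + 2",))

payload = pickle.dumps(Evil())
result = pickle.loads(payload)  # the eval runs inside loads()
```

Real-world shellcode swaps eval for something like os.system; the paper's contribution was getting past the hurdles that arise once payloads grow beyond this one-liner form.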


We didn't present at Black Hat USA in 2012, although we did do some very cool work that year.


For this year's show we'll be back on the podium with Behrang's talk, as well as an entire suite of excellent training courses. To meet the likes of Behrang and the rest of our team, please consider one of our courses. We need all the support we can get and we're pretty convinced you won't be disappointed.


See you in Vegas!

Fri, 12 Apr 2013

Analysis of Security in a P2P storage cloud

A cloud storage service such as Microsoft SkyDrive requires building data centres and carries ongoing operational and maintenance costs. An alternative approach is based on a distributed computing model which utilises a portion of the storage and processing resources of consumer-level computers and SME NAS devices to form a peer-to-peer storage system. Members contribute some of their local storage space to the system and in return receive an "online backup and data sharing" service. Providing data confidentiality, integrity and availability in such a decentralised storage system is a big challenge. As the cost of data storage devices declines, there is debate over whether P2P storage can really save costs. I leave that debate to the critics; instead, I will look into a peer-to-peer storage system and study its security measures and possible issues. An overview of this system's architecture is shown in the following picture:


Each node in the storage cloud receives an amount of free online storage space which can be increased by the control server if the node agrees to "contribute" some of its local hard drive space to the system. File synchronisation and contribution agents that are running on every node interact with the cloud control server and other nodes as shown in the above picture. Folder/File synchronisation is performed in the following steps:



1) The node authenticates itself to the control server and sends a file upload request, with file metadata including the SHA1 hash value, size, number of fragments and file name, over an HTTPS connection.


2) The control server replies with the AES encryption key for the relevant file/folder, an [IP address]:[port number] list of contributing nodes called the "endpoints list", and a file ID.


3) The file is split into blocks, each of which is encrypted with the above AES encryption key. The blocks are further split into 64 fragments, and redundancy information is added to them.


4) The node then connects to the contribution agent on each endpoint address received in step 2 and uploads one fragment to each of them.
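Steps 3 and 4 can be sketched as follows. Encryption and the redundancy scheme are omitted, and the helper names, equal-sized split and endpoint assignment are assumptions for illustration, not details taken from the decompiled agent:

```python
FRAGMENTS_PER_BLOCK = 64  # per the observed protocol

def split_into_fragments(block: bytes) -> list:
    """Split one (already encrypted) block into up to 64 near-equal
    fragments. Redundancy information, which the real agent adds,
    is left out of this sketch."""
    frag_len = -(-len(block) // FRAGMENTS_PER_BLOCK)  # ceiling division
    return [block[i:i + frag_len] for i in range(0, len(block), frag_len)]

def upload_plan(encrypted_block: bytes, endpoints: list) -> list:
    """Pair each fragment with an endpoint from the control server's
    endpoints list (step 4); one fragment goes to each endpoint."""
    frags = split_into_fragments(encrypted_block)
    return [(endpoints[i % len(endpoints)], f) for i, f in enumerate(frags)]
```

The interesting property for what follows is that the uploader computes everything client-side: hashes, fragment counts and fragment contents are all under the node's control.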


Since the system nodes are not under the full control of the control server, they can fall offline at any time, or the stored file fragments may be damaged or modified intentionally. As such, the control server needs to monitor node and fragment health regularly so that it can move lost/damaged fragments to alternative nodes if need be. For this purpose, the contribution agent on each node maintains an HTTPS connection to the control server, on which it receives the following "tasks":


a) Adjust settings: instructs the node to modify its upload/download limits, contribution size, etc.


b) Block check: asks the node to connect to another contribution node and verify a fragment's existence and hash value


c) Block recovery: asks the node to assist the control server in recovering a number of fragments


By delegating the above tasks, the control system has placed some degree of "trust" in, or at least made "assumptions" about, the availability and integrity of the agent software running on the storage cloud nodes. However, those agents can be manipulated by malicious nodes in order to disrupt cloud operations, attack other nodes or even gain unauthorised access to the distributed data. I limited the scope of my research to the synchronisation and contribution agent software of two storage nodes under my control, one of which was acting as a contribution node. I didn't include analysis of the encryption or redundancy of the system in my preliminary research, because it could affect the live system and should only be performed in a test environment, which was not possible to set up as the target system's control server was not publicly available. Within the contribution agent alone, I identified that I had not only unauthorised file storage (and download) access on the cloud's nodes, but also unauthorised access to the folder encryption keys.


a) Unauthorised file storage and download


The contribution agent created a TCP network listener that processed commands from the control server as well as requests from other nodes. The agent communicated over HTTP(S) with the control server and other nodes in the cloud. An example file fragment upload request from a remote node is shown below:



Uploading fragments with a path name similar to the above format resulted in a "bad request" error from the agent. This indicated that the fragment name must be related to its content, and that this condition is checked by the contribution agent before it accepts the PUT request. By decompiling the agent software, it was found that the fragment name must have the following format to pass this validation:


<SHA1(uploaded content)>.<Fragment number>.<Global Folder Id>


I used the above file fragment format to upload notepad.exe to the remote node successfully, as you can see in the following figure:
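Since every field in that name is computable by the uploader, the content-hash check is satisfiable by anyone. A sketch (the folder ID value is a made-up placeholder):

```python
import hashlib

def fragment_name(content: bytes, fragment_number: int, folder_id: str) -> str:
    """Build a fragment name of the form
    <SHA1(uploaded content)>.<Fragment number>.<Global Folder Id>.

    Because every component is derived from attacker-controlled data,
    the agent's validation proves only that the name matches the body,
    not that the upload is authorised."""
    digest = hashlib.sha1(content).hexdigest()
    return "{0}.{1}.{2}".format(digest, fragment_number, folder_id)
```

Naming an arbitrary payload this way was enough to pass the agent's PUT validation, which is exactly what the notepad.exe upload demonstrated.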



The download request (GET request) was also successful, regardless of the validity of the "Global Folder Id" and "Fragment Number". The uploaded file was accessible for about 24 hours, until it was purged automatically by the contribution agent, probably because it wouldn't receive any "Block Check" requests from the control server for this fragment. Twenty-four hours is still enough time for malicious users to abuse storage cloud nodes' bandwidth and storage to serve their content over the internet without the victim's knowledge.


b) Unauthorised access to folder encryption keys


The network listener responded to GET requests from any remote node, as mentioned above. This was intended to serve "Block Check" commands from the control server, which instruct a node to fetch a number of fragments from other nodes (referred to as "endpoints"), verify their integrity by re-calculating the SHA1 hash, and report back to the control server. This could be part of the cloud's "health check" process to ensure that the distributed file fragments are accessible and not tampered with. The agent could also process "File Recovery" tasks from the control server, but I didn't observe any such command during dynamic analysis of the contribution agent, so I searched the decompiled code for clues on the file recovery process and found the following code snippet, which suggests that the agent is capable of retrieving encryption keys from the control server. This seemed odd, considering that each node should only have access to its own folders' encryption keys, while it stores encrypted file fragments belonging to other nodes.


One possible explanation for the above file recovery code could be that the node first downloads its own file fragments from remote endpoints (using an endpoint list received from the control server) and then retrieves the required folder encryption key from the control server in order to decrypt and re-assemble its own files. To test whether it was possible to abuse the file recovery operation to gain access to the encryption keys of folders belonging to other nodes, I extracted the folderInfo request format from the agent code and set up another storage node as a target. The test was successful, as shown in the following figure: it was possible to retrieve the AES-256 encryption key for the folder ID "1099869693336". This could enable an attacker who has set up a contributing storage node to decrypt fragments that belong to other cloud users.
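The extracted folderInfo request format isn't reproduced here; the sketch below only illustrates the shape of the attack, namely that the folder ID appears to be the only input controlling which key is returned. All field and function names are invented and the real wire format differs:

```python
import json

def build_folder_info_request(folder_id: str, node_id: str) -> bytes:
    """Illustrative only: a request from an authenticated node asking the
    control server for a folder's metadata, including its AES key. Nothing
    in the request ties folder_id to a folder the node actually owns,
    which is the authorisation gap being exploited."""
    return json.dumps({
        "cmd": "folderInfo",
        "nodeId": node_id,
        "folderId": folder_id,  # attacker supplies another node's folder ID
    }).encode()
```

Any per-folder ownership check on the server side would close this hole; the test above suggests no such check existed.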



Conclusion:


While peer to peer storage systems have lower setup/maintenance costs, they face security threats from the storage nodes that are not under direct physical/remote control of the cloud controller system. Examples of such threats relate to the cloud's client agent software and the cloud server's authorisation control, as demonstrated in this post. While analysis of the data encryption and redundancy in the peer to peer storage system would be an interesting future research topic, we hope that the findings from this research can be used to improve the security of various distributed storage systems.

Sat, 2 Mar 2013

IT Network Packet Wrangler


As we grow and operate on a number of continents, so does our dependence on a rock-solid IT infrastructure. We are expanding our repertoire to include a greater collection of Linux/Open Source/Windows and OS X products. With this, we are on the look-out for a rock star to wrangle control of our internal networks and external cloud infrastructure, and help us utilise technology in a way that makes us even better.


Job Title: IT Network Packet Wrangling Penguin Master


Salary Range: Industry standard, commensurate with experience


Location: Johannesburg/Pretoria, South Africa


Real Responsibilities:


  • Managing a growing internal network, both in ZA and UK and increased cloud-based infrastructure

  • Championing the adoption of new technologies, ways of working and being incredibly excited about security. Yes, we like that type of person who scoffs at the idea of using a plain-text protocol


As a system / network administrator your daily duties and responsibilities will include:

  • Providing day-to-day Desktop, Server and Network administration, including helping plakkers (the name we give to all who work at SensePost) with their devices

  • Be capable of using a variety of operating systems

  • Ensuring our disaster recovery plan is working as it should

  • Being the go-to person for all those who require assistance with their IT

  • Maintaining and administering the telecommunications system

  • Administering the network to ensure that the systems in place run effectively and securely (we are, after all, a security company!)

  • A real passion for finding technology led solutions to problems.

  • Be excited about Unix firewalls, Cisco routers, wrangling network packets, VPN tunnelling and Wi-Fi

  • Able to hold a conversation and smile when mentioning SMTP/HTTP/IMAP/Python


Not essential, but bonus points for:

  • Actually getting a linux laptop to use an overhead projector, without resorting to swear words in Spanish and Afrikaans

  • Administering a Windows server without complaining, at all, not once, in fact, you actually kinda enjoy it.

  • Being really passionate about security and showing it doesn't have to get in the way of working productively.


If the above has got you thinking 'weird, it's like they are talking to me bru!' then we want to hear from you. Send us a carrier pigeon message or send us an email

Mon, 26 Nov 2012

Skype Passive IP Disclosure Vulnerability

When performing spear phishing attacks, the more information you have at your disposal, the better. One tactic we thought useful was this Skype security flaw disclosed in the early days of 2012 (discovered by one of the Skype engineers much earlier).

For those who haven't heard of it - this vulnerability allows an attacker to passively disclose a victim's external, as well as internal, IP addresses in a matter of seconds, by viewing the victim's VCard through an 'Add Contact' form.

Why is this useful?

1. Verifying the identity and the location of the target contact. Great when performing geo-targeted phishing attacks.

2. Checking whether your Skype account is being used elsewhere :)

3. Spear phishing enumeration while Pen Testing.

4. Just out of plain curiosity.

To get this working, follow these basic steps:

1. Download and install the patched version of Skype 5.5 from here (the patch enables the Skype client to save its logs in non-obfuscated form)

2. Save the lines below as a Skype_log_patch.reg file:

Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Skype\Phone\UI\General]
"LastLanguage"="en"
"Logging"="SkypeDebug2003"
"Logging2"="on"
3. Once saved, run it to enable the Skype debug log file.

4. Start Skype.

5. Search for any Skype contact and click the 'Add a Skype Contact' button, but do not send the request; instead, click on the user to view their VCard.

6. Open the log file (it should appear in the same folder as the Skype executable, e.g. debug-20121003-0150)

7. Look for the PresenceManager line - you should see something similar to this ->

In the above image you can spot my Skype name, as well as my external and internal IP addresses.

The log will include similar details for everyone listed as a "contact" under your Skype account, as well as plenty of other fresh, genuine and useful information received directly from your local Skype tracker.
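Scraping the debug log for these lines is easy to automate. The sample line below is a guess at the layout based on the fields described (Skype name plus external and internal address:port pairs), so adjust the pattern to whatever your build actually emits:

```python
import re

# Hypothetical log line; real PresenceManager entries may be laid out differently.
SAMPLE = ("PresenceManager: noticing skypename target_user "
          "203.0.113.7:40013 192.168.1.10:40013")

ADDR = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}):(\d{1,5})")

def extract_addresses(line: str) -> list:
    """Return every (IP, port) pair found on a PresenceManager log line."""
    if "PresenceManager" not in line:
        return []
    return ADDR.findall(line)
```

Run over the whole debug file, this yields the external/internal address pairs for each contact whose VCard you viewed.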

Sat, 7 Aug 2010

BlackHat Write-up: go-derper and mining memcaches

[Update: Disclosure and other points discussed in a little more detail here.]

Why memcached?

At BlackHat USA last year we spoke about attacking cloud systems; while the thinking was broadly applicable, we focused on specific providers (overview). This year we continued in the same vein, except we focused on a particular piece of software used in numerous large-scale applications, including many cloud services. In the realm of "software that enables cloud services", there appears to be a handful of "go to" applications that are consistently re-used, and it's curious that a security practitioner's perspective has not as yet been applied to them (disclaimer: I'm not aware of parallel work).

We chose to look at memcached, a "Free & open source, high-performance, distributed memory object caching system"1. It's not outwardly sexy from a security standpoint and it doesn't have a large and exposed codebase (total LOC is a smidge over 11k). However, what is of interest is the type of application in which memcached is deployed. Memcached is most often used in web applications to speed up page loads. Sites are almost2 always dynamic and either have many clients (i.e. require horizontal scaling) or process piles of data (and look to reduce processing time), or oftentimes both. This implies that sites using memcached contain more interesting info than simple static sites, making its presence an indicator of a potentially interesting target. Prominent users of memcached include LiveJournal (memcached was originally written by Brad Fitzpatrick for LJ), Wikipedia, Flickr, YouTube and Twitter.

I won't go into how memcached works, suffice it to say that since data tends to be read more often than written in common use cases the idea is to pre-render and store the finalised content inside the in-memory cache. When future requests ask for the page or data, it doesn't need to be regenerated but can be simply regurgitated from the cache. Their Wiki contains more background.
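That read-mostly pattern is the classic cache-aside idiom: check the cache, and only regenerate (and store) on a miss. A sketch using a plain dict in place of a memcache client, with a hypothetical render_page standing in for the expensive work:

```python
def fetch_page(cache: dict, page_id: str) -> str:
    """Cache-aside lookup: serve from cache when possible, otherwise
    regenerate the content and store it for future requests."""
    cached = cache.get(page_id)
    if cached is not None:
        return cached                    # cache hit: no regeneration
    rendered = render_page(page_id)      # cache miss: do the expensive work
    cache[page_id] = rendered            # store so the next request is cheap
    return rendered

def render_page(page_id: str) -> str:
    # Hypothetical stand-in for dynamic page generation.
    return "<html>page {0}</html>".format(page_id)
```

The security-relevant consequence is that the cache ends up holding finalised, often sensitive, application output - which is exactly what go-derper goes looking for.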

go-derper

We released go-derper, a tool for playing with memcached instances. It supports three basic modes of operations:
  1. Fingerprinting memcacheds to determine interesting servers
  2. Extracting a (user-limited) copy of the cache
  3. Writing data into the cache
The tool has minor requirements: a recent Ruby and the memcache-client gem. What follows are basic use cases.

Fingerprinting

Let's assume you've scanned a hosting provider and found 239 potential targets using a basic .nse that hunts down open memcached instances3. You need to separate the wheat from the chaff and figure out which servers are potentially interesting; one way to do that is by extracting a bunch of metrics from each cache. Start small against one cache:

insurrection:demo marco$ ruby go-derper.rb -f x.x.x.x
[i] Scanning x.x.x.x

x.x.x.x:11211
==============================
memcached 1.4.5 (1064) up 54:10:01:27, sys time Wed Aug 04 10:34:36 +0200 2010, utime=369388.17, stime=520925.98
Mem: Max 1024.00 MB, max item size = 1024.00 KB
Network: curr conn 18, bytes read 44.69 TB, bytes written 695.93 GB
Cache: get 514, set 93.41b, bytes stored 825.73 MB, curr item count 1.54m, total items 1.54m, total slabs 3
Stats capabilities: (stat) slabs settings items (set) (get)

44 terabytes read from the cache in 54 days with 1.5 million items stored? This cache is used quite frequently. There's an anomaly here in that the cache reports only 514 reads with 93 billion writes; however it's still worth exploring if only for the size.

We can run the same fingerprint scan against multiple hosts using

ruby go-derper.rb -f host1,host2,host3,...,hostn

or, if the hosts are in a file (one per line):

ruby go-derper.rb -F file_with_target_hosts

Output is either human-readable multiline (the default), or CSV. The latter helps for quickly rearranging and sorting the output to determine potential targets, and is enabled with the "-c" switch:

ruby go-derper.rb -c csv -f host1,host2,host3,...,hostn

Lastly, the monitor mode (-m) will loop forever while retrieving certain statistics and keep track of differences between iterations, in order to determine whether the cache appears to be in active use.
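Under the hood, fingerprinting boils down to issuing the memcached stats command and parsing the counters out of the response. A minimal parser over a canned response (the real tool speaks the protocol over TCP; the sample values are illustrative, not captured from a live host):

```python
def parse_stats(raw: str) -> dict:
    """Parse a memcached `stats` response: "STAT <name> <value>" lines
    terminated by END, as defined in the memcached text protocol."""
    stats = {}
    for line in raw.splitlines():
        if line.startswith("STAT "):
            _, name, value = line.split(" ", 2)
            stats[name] = value
    return stats

SAMPLE = ("STAT version 1.4.5\r\n"
          "STAT uptime 4680087\r\n"
          "STAT bytes_read 49139440166215\r\n"
          "STAT curr_items 1540000\r\n"
          "END\r\n")

metrics = parse_stats(SAMPLE)
```

Ranking candidate caches by counters like bytes_read and curr_items is one quick way to pick targets out of 239 instances.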

Mining

Once you've identified a potentially interesting target, it's time to mine that cache. The basic leach switch is "-l":

insurrection:demo marco$ ruby go-derper.rb -l -s x.x.x.x
[w] No output directory specified, defaulting to ./output
[w] No prefix supplied, using "run1"

This will extract data from the cache in the form of a key and its value, and save the value in a file under the "./output" directory by default (if this directory doesn't exist then the tool will exit so make sure it's present.) This means a separate file is created for every retrieved value. Output directories and file prefixes are adjustable with "-o" and "-r" respectively, however it's usually safe to leave these alone.

By default, go-derper fetches 10 keys per slab (see the memcached docs for a discussion on slabs; basically similar-sized entries are grouped together.) This default is intentionally low; on an actual assessment this could run into six figures. Use the "-K" switch to adjust:

ruby go-derper.rb -l -K 100 -s x.x.x.x

As mentioned, retrieved data is stored in the "./output" directory (or elsewhere if "-o" is used). Within this directory, each new run of the tool produces a set of files prefixed with "runN" in order to keep multiple runs separate. The files produced are:

  • runN-index, an index file containing metadata about each entry retrieved
  • runN-<md5>, a file containing the bytestream from a retrieved value
The mapping between key and file in which the value is stored occurs in the index file, which is useful in that potentially malicious data (keynames) aren't used when interacting with your local filesystem APIs.
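The reasoning behind that design can be shown in a few lines. The helper below is hypothetical, and whether go-derper's digest is taken over the keyname or the value isn't stated above, so treat this as an illustration of the idea rather than the tool's exact scheme:

```python
import hashlib

def safe_filename(prefix: str, keyname: str) -> str:
    """Map an untrusted cache keyname to a safe local filename.

    Hashing means hostile keynames (e.g. ones containing "../" or other
    path metacharacters) never reach filesystem APIs directly; the index
    file records the keyname -> filename mapping instead."""
    return prefix + "-" + hashlib.md5(keyname.encode()).hexdigest()
```

A keyname like "../../etc/passwd" thus becomes an inert hex string under the run prefix.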

At this point, there will (hopefully) be a large number of files in your output directory, which may contain useful info. Start grepping.

What we found with a bit of field experience was that mining large caches can take some time, and repeating grep gets quite boring. The tool permits you to supply your own set of regular expressions which will be applied to each retrieved value; matches are printed to the screen and this provides a scroll-by view of bits of data that may pique your interest (things like URLs, email addresses, session IDs, strings starting with "user", "pass" or "auth", cookies, IP addresses etc). The "-R" switch enables this feature and takes a file containing regexes as its sole argument:

ruby go-derper.rb -l -K 100 -R regexs.txt -s x.x.x.x
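The scroll-by matching amounts to applying each supplied regex to every retrieved value and printing the hits. A sketch with two sample patterns (the pattern set and function name are illustrative, not go-derper's internals):

```python
import re

# Example patterns of the kind you'd put in regexs.txt.
PATTERNS = [re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
            re.compile(r"https?://\S+")]             # URLs

def scan_value(value: str, patterns: list) -> list:
    """Return every substring of a cache value matching any supplied regex."""
    hits = []
    for pat in patterns:
        hits.extend(m.group(0) for m in pat.finditer(value))
    return hits
```

Feeding each dumped value through a scanner like this is what turns a six-figure pile of cache entries into a short list of session IDs, credentials and URLs worth a closer look.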

Over-writing

In this blog entry I don't cover the kinds of data we discovered (it'll be subject to a separate entry), however it may come to pass that you discover an interesting cache entry that you'd like to overwrite. Recall entries were stored in "./output" by default, with a prefix of "runN". If the interesting entry was stored in "output/run1-e94aae85bd3469d929727bee5009dddd", edit the file in whatever manner you see fit and save it to your local disk. Then, tell go-derper to write the entry back into the cache with:

ruby go-derper.rb -w output/run1-e94aae85bd3469d929727bee5009dddd

This syntax is simple since go-derper will figure out the target server and key from the run's index file.

And so?

Go-derper permits basic manipulations of a memcached instance. We haven't covered finding open instances or the kinds of data one may come across; these will be the subject of followup posts. Below are the slides from the talk, click through to SlideShare for the downloadable PDF.
1 http://www.memcached.org

2 We're hedging here, but we've not come across a static memcached site.

3 If so, you may be as surprised as we were in finding this many open instances.