
Wed, 4 Apr 2012

Towards Firmware Analysis

While evaluating a research idea about a SCADA network router during the past week, I used tools and resources available on the Internet to unpack the device's firmware and search for interesting components. During security assessments you may find interesting embedded devices on the network. Since few people consider the feasibility of firmware analysis, I decided to document the steps I took to analyse my target firmware, so that you can take a similar approach when assessing such devices. This should also give a good indication of how feasible it would be to automate the process (an unfinished project was launched in 2007: http://www.uberwall.org/bin/project/display/85/UWfirmforce).

The following process should be familiar to most of you who use *nix systems on a daily basis:

Step 1) Scanning the firmware image

The BinWalk tool is useful for scanning firmware images to identify embedded file systems and compressed streams. It can detect common bootloaders, file systems and compressed archives inside a given firmware image file. Since it works by scanning for signatures and magic values, it tends to produce false positives, and its results need to be verified manually.
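In its simplest form the scan is a one-liner (a minimal sketch; the image file name is hypothetical):

# scan the firmware image for known signatures and magic values
binwalk firmware.bin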

A U-Boot bootloader (yes, it's German :-)) signature was identified at offset 262144, and the uImage header information, such as creation date, CPU type, etc., appeared to be valid. The bootloader was followed by a gzip compressed stream, which is probably the zImage kernel, and a squashfs file system at offset 1522004. We will attempt to extract this file system in the next step. The following are common bootloaders used in embedded devices with ARM CPUs:

  • Blob bootloader
  • Bootldr
  • RedBoot
  • U-Boot
  • ABLE bootloader

The bootloader's task is to load the kernel image at the correct address and pass initial parameters to it, so in most cases we are not interested in analysing the bootloader itself, but rather the root file system.

Step 2) Extracting file systems

First, I extracted the uImage content at offset 262144 using the dd command, and then used uboot-mkimage (packages.debian.org/uboot-mkimage) to test whether it was a valid uImage file and to discover more information about it:

The image format was valid and it contained two other images, 1MB and 2MB in size, which are probably the kernel zImage and the root file system (RAMdisk). If you check the uImage file format, you will notice a 64-byte header. For a “multi-file” image, the header is followed by a list of the contained image sizes in bytes, terminated by a 32-bit zero. So I would need to skip 64+2*4+4=76 bytes from the start of the uImage file to get to the first image, which would be the kernel zImage:
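The extraction looks roughly like this (a sketch; the file names are hypothetical and the 262144/76 offsets follow the example above, so your values will differ):

# carve the uImage out of the firmware at the offset binwalk reported
dd if=firmware.bin of=uImage.bin bs=1 skip=262144

# list the uImage header to verify it and see the contained image sizes
mkimage -l uImage.bin

# skip the 64-byte header, two 4-byte size entries and the 4-byte
# terminator (76 bytes in total) to reach the first image, the kernel zImage
dd if=uImage.bin of=zImage.gz bs=1 skip=76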

The file command could not detect a kernel image or squashfs in the extracted images; this might be due to the lack of squashfs (with LZMA compression) support in my Ubuntu kernel. I proceeded to use the Firmware Mod Kit, which contains a set of programs for decompressing various file system images, including squashfs-LZMA. After trying the various unsquashfs version 3.x scripts, I was able to extract the rootfs image files successfully:
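The kit bundles unsquashfs builds for several squashfs versions; an invocation along these lines worked for me (the paths are illustrative and depend on where you unpacked the kit and which build matches your image):

# try an LZMA-capable squashfs 3.x unsquashfs from the Firmware Mod Kit
./src/squashfs-3.0/unsquashfs-lzma rootfs.squashfs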

Step 3) Searching the root file system

Once the root file system was extracted, we can use file and strings search tools to look for interesting files and patterns, such as RSA private key files, password and configuration files, SQL database files, SQL query strings, etc. In my case, I was looking for RSA certificates or private key files and found the following (a database of private keys found in embedded devices was published in 2011; it is no longer actively maintained, but you can access it at http://code.google.com/p/littleblackbox/):

One can write shell scripts to automate the file system search process.
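For example, something along these lines (a sketch; the rootfs path is whatever your unsquashfs run produced) will flag key material and other interesting files:

#!/bin/sh
# search an extracted root file system for key material and config files
ROOTFS=./squashfs-root

# PEM-encoded private keys and certificates
grep -rl "BEGIN RSA PRIVATE KEY" "$ROOTFS"
grep -rl "BEGIN CERTIFICATE" "$ROOTFS"

# password, configuration and database files by name
find "$ROOTFS" -name "*passwd*" -o -name "*.conf" -o -name "*.db"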

Step 4) Running and debugging the executables

The Qemu emulator supports multiple CPU architectures, including ARM, MIPS and PowerPC, and can be used to run and debug interesting executables extracted from the firmware image on your own system for dynamic analysis purposes. You will need to build Qemu with the --static and --enable-debug options. The following figure demonstrates how to run the web server (httpd) that was extracted from my target firmware using chroot and Qemu:
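In outline, you copy the statically linked user-mode emulator into the extracted root and chroot into it (a sketch; the rootfs and binary paths depend on your firmware):

# copy the statically built user-mode emulator into the extracted rootfs
cp $(which qemu-arm) ./squashfs-root/

# run the target's web server inside the chroot under emulation
sudo chroot ./squashfs-root /qemu-arm /usr/sbin/httpd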

As you can see from the above screenshot, the web server was working fine, but it was not able to display the bootloader version because it couldn't read this value from the NVRAM (non-volatile RAM) that the kernel normally mounts on a real device (there is an interesting post here about resolving NVRAM access errors while emulating embedded device executables). Some executables, like the remote management agent in the example below, can have more severe problems running under the emulator.

For troubleshooting such cases, or for monitoring an emulated process while fuzzing it, we need to attach a debugger to the process. This can be achieved by using the -g switch in Qemu and running a debugger outside the emulator process, or even on a remote Windows machine. I used IDA Pro's remote GDB debugger, as shown in the figures below:
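The Qemu side of the setup looks like this (a sketch; the port number is arbitrary, and gdb-multiarch from another terminal would work in place of IDA Pro):

# start the process under emulation, waiting for a GDB connection on port 1234
sudo chroot ./squashfs-root /qemu-arm -g 1234 /usr/sbin/httpd

# then attach from another terminal (or point IDA Pro's remote GDB debugger here)
gdb-multiarch -ex "target remote 127.0.0.1:1234" ./squashfs-root/usr/sbin/httpd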

Once successfully attached to the remote emulated process, IDA Pro can be used to trace the execution of the process, place breakpoints, or run IDA scripts.

Often overlooked during assessments, firmware analysis can yield valuable results, and it often does when we target embedded devices at SensePost. Our methodology includes the above steps, and we recommend yours does too.

Fri, 9 Mar 2012

Foot printing – Finding your target...

We were asked to contribute an article to PenTest magazine, and chose to write up an introductory how-to on footprinting. We've republished it here for those interested.

Network footprinting is, perhaps, the first active step in the reconnaissance phase of an external network security engagement. This phase is often highly automated with little human interaction, as the techniques appear, at first glance, to be easily applied in a general fashion across a broad range of targets. As a security analyst, footprinting is also one of the most enjoyable parts of my job as I attempt to outperform the automatons; it is all about finding that one target that everybody forgot about or did not even know they had, that one old IIS 5 webserver that is not used, but not powered off.

With this article I am going to share some of the steps, tips and tricks that pentesters and hackers alike use when starting an engagement.

Approach

As with most things in life, having a good approach to a problem will yield better results, and over time, as your approach is refined, you will consume less time while getting better results. By following a methodology, your footprinting will become more repeatable and thus more reliable. A basic footprinting methodology covers reconnaissance, DNS mining, various information services (e.g. whois, Robtex, routes), network registration information, and active steps such as SSL host enumeration.

While the temptation exists to merely feed a domain name into a tool or script and take the output as your completed footprint, this will not yield a passable footprint, for two reasons. Firstly, a single tool will not have access to all the disparate information sources that one should consult; secondly, the footprinting process is inherently iterative and continuous. A footprint is almost never complete; instead, a snapshot of the footprint data provides the best current view of the target, but the information could change tomorrow as new sites are brought online or old sites are taken offline. As a new piece of data is found that could expand the footprint, a new iteration of the footprinting process is triggered with that datum as the seed, and the results are combined with all previously discovered information.

Know your target

The very first thing to do is to get to know your target organisation: what they do, who they do it for, who does it for them, where they do it from - both online and in the kinetic world - and what community or charity work they are involved in. This will give you an insight into the type of network/infrastructure you can expect. Reading public announcements, financial reports and any other documents published on or by the organisation might also yield interesting results. Any organisation that must publish regular reports (e.g. listed companies) provides a treasure trove of information for understanding the target's core business units, corporate hierarchy and lines of business. All of these become very useful when selecting targets.

Dumpster diving, if you are up for it and have physical access to the target, means sifting through trash to get useful information, but in recent times social media can provide us with even more. Sites like LinkedIn, Facebook and Twitter can provide you with lists of employees and projects that the organisation is involved with and perhaps even information about third party products and suppliers that are in use.

One should also keep an eye out for evidence of previous breaches or loss of credentials. It has become commonplace for hackers to post information about security breaches on sites like pastebin.com. The most likely evidence is credentials, in the form of corporate email addresses and passwords, reused on unrelated sites that have been hacked and had their user databases uploaded. In addition, developers use sites like Pastebin to share code, ideas and patches, and if you are lucky you might just find a little snippet of code sitting out in the open that will give you the edge.

DNS

“The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network.” — Wikipedia

In a nutshell, DNS is used to convert computer names to their numeric addresses.

Start by enumerating every possible domain owned by the target. This is where the information from the initial reconnaissance phase comes in handy, as the target's website will likely point to external domains of interest and also help you guess at possible names. With a list of discovered domains in hand, move on to a TLD (top-level domain) expansion. TLDs are the highest-level domains in DNS; .com, .net, .za and .mobi are all examples of TLDs (the Mozilla Organization maintains a list of TLDs at https://wiki.mozilla.org/TLD_List).

In the next step, we take a discovered domain and check whether there are any other domains with the same name but a different TLD. For example, if the target has the domain victim.com, test whether the domains victim.net, victim.info, victim.org, etc. exist, and if they exist, check whether they are owned by our target organisation. To determine whether a domain exists, examine its SOA (start of authority) DNS record. Using commands like nslookup under Microsoft Windows, or dig/host under most of the *nix family, will reveal SOA records.

Using dig, “dig zonetransfer.me soa”.


Figure 1: Using dig to get the SOA (Start of authority) record for a domain
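The TLD expansion itself is easy to automate with a small loop (a sketch; victim.com is the article's placeholder domain and the TLD list is deliberately short):

# check alternative TLDs for an SOA record
for tld in com net org info mobi; do
    echo "victim.$tld:"
    dig +short victim.$tld soa
done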

If, by verifying the SOA, it is confirmed that the domain exists, the next step is to track down who it belongs to. At this point the whois service is called upon. ‘Whois' is simply a registry that contains information about the owner of a domain. Note that it is not entirely reliable and certainly not consistent. The following very simple query, “whois zonetransfer.me”, provides us with the details of the owner of the domain “zonetransfer.me”.

Figure 2: Using whois to get the domain owner detail

After finding domains, running them through a TLD expansion and verifying their whois information, it is time to track down hosts. First we need to get the NS (name server) records for the domains. Again using dig, “dig zonetransfer.me ns” returns a list of all the name servers used by this domain. In many cases the name servers will not be part of the target's network and are often out of scope, but they will still be used in the next step.

DNS yields much interesting information, but the default methods for extracting information from foreign servers effectively rely on brute force. However, DNS supports a feature whereby all the DNS information for a zone can be downloaded if the server allows it; this is called a “zone transfer”. When enabled, zone transfers are extremely useful as they negate the need for guessing or brute forcing; sadly, they are commonly disabled. Still, given their usefulness, it is always worth testing for them. Zone transfers should be performed against all the name servers specified in the NS records of a domain, as the data contained in each name server should be the same, but the security configuration might differ. Using dig, the following command will attempt a zone transfer: “dig axfr @ns12.zoneedit.com zonetransfer.me”

Figure 3: Performing a zone transfer using dig
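Since every name server should be tried, the test loops naturally (a sketch using the same example zone):

# attempt a zone transfer against each name server for the domain
for ns in $(dig +short zonetransfer.me ns); do
    dig axfr zonetransfer.me @$ns
done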

As mentioned previously, zone transfers are not that common. When we cannot download the zone file, there are a couple of other tricks that might work. One is to brute force or guess host names: using a long list of common hostnames, one can test for names such as “fw.victim.com”, “intranet.victim.com”, “mail.victim.com” and so on. The candidates can be commonly seen hostnames, generated names (where computers are assigned numeric or algorithmic names), or sets of related names such as characters from a book series. When brute forcing DNS, be sure to check the following record types: CNAME, A and AAAA. Again, this is easy with a tool like dig: “dig www.google.com a” produces the DNS configuration for www.google.com. Note that the hostname www.google.com actually has multiple DNS entries: one CNAME record and multiple A records. Looking at the IP addresses, it is clear that there are several different hosts (two in the screenshot below).

Figure 4: Using dig to get the A record for a host entry
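Returning to the brute force idea, a simple wordlist-driven loop captures it (a sketch; names.txt is a hypothetical wordlist and victim.com is again the placeholder target):

# try each candidate hostname and print the ones that resolve
while read name; do
    ip=$(dig +short "$name.victim.com" a)
    [ -n "$ip" ] && echo "$name.victim.com -> $ip"
done < names.txt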

Doing this manually seems easy and quick (and it is), but if we want to brute force or guess many host names, it will take too long. Of course, it is easy enough to script these commands to automate the process; however, there are existing tools written specifically for this purpose. One of the most popular, Fierce, is a Perl script written by RSnake (http://ha.ckers.org/fierce/), which is easy to use and has many useful functions. Additionally, tools like Paterva's Maltego and SensePost's Yeti (a tool I wrote) provide graphical interfaces for this purpose.

If we happen to have a list of IP addresses or netblocks for the target, a further DNS trick is to convert the addresses into hostnames using reverse lookups to retrieve their PTR records. This is useful since reverse records are easily brute forced in IPv4. Bear in mind that DNS does not require a PTR record (reverse entry), nor must entries in the reverse zone match entries in the forward zone. Still, the results can give you an idea of whether the host is a shared host, owned and hosted by the company, or hosted remotely.

To test once more using dig: “dig 104.66.194.173.in-addr.arpa ptr”. While this too is easily automated, the previously mentioned tools will also handle PTR records.
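dig's -x flag builds the in-addr.arpa name for you, so a reverse sweep of a /24 becomes a one-liner (a sketch; the 192.0.2.0/24 netblock is purely illustrative):

# reverse-resolve every address in a /24
for i in $(seq 1 254); do
    dig +short -x 192.0.2.$i
done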

Search engines:

DNS interrogation and mining forms the bulk of footprinting, but thanks to modern search engines like Google and Bing, finding targets has become much easier.

Apart from the normal searching for your target, as you would do in the initial phase, you can use the data discovered during the DNS mining to extract further information from search engines. Bing from Microsoft provides two really useful search operators: “ip:” and “site:”. When given the “ip:” operator, Bing returns a list of hosts it has indexed that resolve to the specified IP address. Conversely, the “site:” operator, when used with a domain name, returns a list of host names that have been indexed by the search engine and belong to that domain. Quick and easy, and Bing also provides a very simple free API that you can use to automate these searches.

Address mapping

All this fuss with DNS is important, but it is only useful insofar as it leads us to addresses. The next step is to discover where the target lives within the IP address space. Luckily, useful tools and resources exist to help us uncover these ranges by automating a combination of manual techniques such as whois queries, traceroute and netblock calculators. In the previous section the whois tool was used to get the domain owner information; the same tool can be used to discover the ownership/assignment details of a specific IP address. Let's take www.facebook.com: one of the IP addresses it resolves to is 69.63.190.10, and “whois 69.63.190.10” produces the following output.

Figure 5: Getting the netblock and owner using whois

From the whois output we get really useful information. First is the netblock range, 69.63.176.0-69.63.190.255, as well as the owner of this netblock, namely Facebook, Inc. In this case we are lucky and the netblock is registered to Facebook, but often you will only get the network service provider to which the netblock is allocated. In that case, you will have to query the service provider to gain more information about the specific netblock. Online resources can also be very useful; for example, ARIN (American Registry for Internet Numbers) and the other regional registries (RIPE, AfriNIC, APNIC and LACNIC) provide a reverse whois search interface where one can search for organisation names and other terms, even performing wildcard searches. Giving Facebook a second look, we try a search on the reverse whois interface found at http://whois.arin.net/ with the term “facebook”, and get a list of five additional network ranges.

Figure 6: Search results for ARIN reverse whois

SSL Certificates

Lastly, we turn to SSL. SSL may be more familiar as “protection” against nasty eavesdroppers and men-in-the-middle, but it is useful for footprinters too. How? It is really simple, actually: one of the security checks performed by browsers when deciding on the validity of an SSL certificate is whether the Common Name contained in the certificate matches the DNS name of the host requested by the browser. How does this help? Say a list of IP addresses has been produced; the next step would be to perform a reverse lookup of all these addresses. However, if no reverse entry is present and Bing has no record of the IP, then some creativity is called for. If an HTTPS website is hosted on that address, simply browse to the IP address and, when presented with the invalid certificate error message, look for the “real” host name.

Figure 7: Firefox reporting the common name contained in an SSL certificate for a host
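The same check works without a browser; an openssl one-liner along these lines (a sketch, reusing Facebook's IP from earlier) pulls the subject straight off the certificate:

# connect to the HTTPS service and print the certificate's subject (CN)
echo | openssl s_client -connect 69.63.190.10:443 2>/dev/null | openssl x509 -noout -subject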

Again, this is something that is easily automated, so we have included a module in Yeti to actually do this for you.

Conclusion

Footprinting might at first glance appear to be simple and mundane, but the more you do it, the more you will realise that very few organisations have a handle on exactly what they have and what they present to the Internet. As the Internet and networks evolve, so will the way companies and organisations use them, and so will their footprints. A year-old footprint could be hopelessly outdated, and ongoing footprinting helps organisations maintain a current view of their threat landscape.

With the ongoing move away from local infrastructure to hosted infrastructure, the footprint expands, spreads and grows, and so will our quest to find as much as possible.

Wed, 7 Mar 2012

Mobile Security - Observations from the developing world

By the year 2015 sub-Saharan Africa will have more people with mobile network access than with access to electricity at home.
This remarkable fact from a 2011 MobileMonday report [1] came to mind again as I read an article just yesterday about the introduction of Mobile Money in the UK: By the start of next year, every bank customer in the country may have the ability to transfer cash between bank accounts, using an app on their mobile phone. [2]

I originally came across the MobileMonday report while researching the question of mobility and security in Africa for a conference I was asked to present at [3]. In this presentation I examine the global growth and impact of the so-called mobile revolution and then its relevance to Africa, before looking at some of the potential security implications this revolution will have.

The bit about the mobile revolution is easy: according to the Economist, there will be 10 billion mobile devices connected to the Internet by 2020, and the number of mobile devices will surpass the number of PCs and laptops this year already. The mobile-only Internet population will grow 56-fold, from 14 million at the end of 2010 to 788 million by the end of 2015. Consumerization - the trend for new information technology to emerge first in the consumer market and then spread into business organizations, resulting in the convergence of the IT and consumer electronics industries - implies that the end-user is defining the roadmap for these technologies as manufacturers, networks and businesses scramble desperately to absorb their impact.

Africa, languishing behind in so many other respects, is right there on the rushing face of this new wave, as my initial quote illustrates. In fact, the kind of mobile payment technology referred to in the BBC article is already quite prevalent in our home markets in Africa, and we're frequently engaged to test mobile application security in various forms. In my presentation, for example, I make reference to m-Pesa - the mobile payments system launched in Kenya and now mimicked in South Africa. Six million people in Kenya use m-Pesa, and more than 5% of that country's annual GDP is moved to and fro directly from mobile to mobile. There are nearly five times as many m-Pesa outlets as the total number of PostBank branches, post offices, bank branches and automated teller machines (ATMs) in the country combined.

Closer to home in South Africa, it is estimated that the number of people with mobile phones outstrips the number of people with fixed-line Internet connections by a factor of ten! And this impacts our clients and their businesses directly: approximately 44% of urban cellphone users in South Africa now make use of mobile banking services. The reasoning is clear: where fixed infrastructure is poor, mobile will dominate, and where mobile dominates, mobile services will soon follow. Mobile banking, mobile wallets, mobile TV, mobile social networking and mobile strong-authentication systems are all already prevalent here in South Africa and are already bringing with them the expected new array of security challenges. Understanding this is one of the reasons our customers come to us.

In my presentation I describe the Mobile Threat Model as having three key facets:

  • Security: The challenge of ensuring Confidentiality, Integrity and Authenticity for the data and transactions on the device;
  • Privacy: The implications of mobility (and especially convergence) for citizens and their rights to talk, move, think and act unobserved; and
  • Control: The challenge presented by the mobile revolution to governments fighting crime, gangsterism and terrorism.
All of these issues are real and complex, but I'm restricting myself to the security question here. I encourage readers to peruse the presentation itself for a full breakdown of the Threat Model because for this article I think it suffices to consider just the conclusion of my presentation, and it's this:

The technical security issues we discover on mobile devices and mobile applications today are really no different from what we've been finding in other environments for years. There are some interesting new variations and interesting new attack vectors, but it's really just a new flavor of the same thing. But there are four attributes of the modern mobile landscape that combine to present us with an entirely new challenge:

Firstly, mobiles are highly connected. The mobile phone is permanently on some IP network and, by extension, permanently on the Internet. However, it's also connected via GSM and CDMA; it's connected to your PC via USB, to your Bluetooth headset and to your GPS, and soon it will be connected to other devices in your vicinity via NFC. Never before in our history have communications been so converged, and all via the wallet-sized device in your pocket right now!

Secondly, the mobile device is deeply integrated. On or through this platform is everything anyone would ever want to know about you: your location, your phone calls, your messages, your personal data, your photos, your location history and your entire social network. Indeed, in an increasing number of technical paradigms, your mobile device is you! Moreover, the device has the ability to collect, store and transmit everything you say, see and hear, and everywhere you go!

Thirdly, as I've pointed out, mobile devices are incredibly widely distributed. Basically, everyone has one, or soon will. And we're rapidly steering towards a homogeneous environment defined by Apple's iOS and Google's Android. Imagine the effect this has on the value of an exploit or attack vector. Finally, the mobile landscape is still being very, very poorly managed. Except for the Apple AppStore, and recent advances by Google to manage the Android market, there is extremely little by way of standardization, automated patching or central management to be seen. Most devices, once deployed, will stay in commission for years to come, and so security mistakes being made now are likely to become a nightmare for us in the future.

Thus, the technical issues well known from years of security testing in traditional environments are destined to prevail in mobile, and we're already seeing this in the environments we've tested. This reality, combined with how connected, integrated, distributed and poorly managed these platforms are, suggests that careless decisions today could cost us very dearly in the future...

[1] Mobile Africa Report 2011, Regional Hubs of Excellence and Innovation by Dr Madanmohan Rao, Research Project Director, MobileMonday March 2011

[2] http://www.bbc.co.uk/news/business-17115946

[3] http://prezi.com/as-szhrug5zr/examining-the-impact-of-the-adoption-of-mobile-devices-throughout-africa-and-the-subsequent-rise-of-security-related-risks-sensepost-information-security/