
Kippo Honeypot Cousin Cowrie


I’m the ISC handler of the day and I’ve got a great post on setting up AppArmor, SQLite3, and DShield with Cowrie. Please drop by ISC and check it out.

Creating a log2timeline plugin


I have a new post on the SANS forensics blog. This post covers the process of creating a plugin for the log2timeline tool. Using step-by-step instructions, I break down each section of the code and how it works. It can be used as a template for creating any plugin, but I specifically cover how to parse an OS X plist.

EMET 2.1 Follow-Up


I’ve had a great response to the EMET post and have a couple of issues to follow up on.

How did you get SEHOP to be Always On?

The system I was running when taking the screenshots was Vista 64-bit, and apparently this is a Vista-only option. On Windows 7, by default, you have only “Application Opt In” and “Application Opt Out”. I did some testing on this and used Process Monitor to determine what registry key was being changed on the systems.

HKLM\System\CurrentControlSet\Control\Session Manager\kernel\DisableExceptionChainValidation
Disabled is 1 and Always On is 0.

This is the same key on both Windows 7 and Vista, so the difference must be controlled at a deeper level than we can directly interact with.
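
If you want to check or set the value yourself, here is a hedged sketch using reg.exe from an elevated prompt (a reboot is needed for a kernel-level change like this to take effect; per the values above, 0 is Always On):

>reg query "HKLM\System\CurrentControlSet\Control\Session Manager\kernel" /v DisableExceptionChainValidation
>reg add "HKLM\System\CurrentControlSet\Control\Session Manager\kernel" /v DisableExceptionChainValidation /t REG_DWORD /d 0 /f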

LSASS and Spooler Crashing on Boot

Rationallyparanoid has several great posts about EMET. They mentioned adding LSASS.exe and Spooler.exe to the protected applications. This worked on older versions of EMET, but I’m having crash issues on Vista 64-bit SP2 with 2.1. Removing the BottomUpRand and EAF protections appears to fix the instability issues with these applications. Windows 7 64-bit does not seem to experience this issue.

EMET 2.1 Deployment


If you have not used Microsoft EMET and you’re in charge of managing or securing Windows PCs, then you need to start looking at it. In short, EMET uses a number of techniques (DEP, ASLR, heap-spray prevention, etc.) to make it much more difficult to exploit an application. The latest version allows you to import and export an XML file, which makes deployment easy. There is still no direct management from GPO, but this new update makes it very simple.

(UPDATE) Scripts posted to Dropbox due to WordPress weirdness (formatting and omission of partial lines). You can get them here: config.xml and emet_network.vbs.


Step 1 Testing Your Applications for Compatibility

While EMET does some cool tricks to prevent exploitation of applications, it can cause some stability issues. I’ve been running it for a while and have not had any issues with applications. You will want to add any application the user uses to interact with untrusted networks or to open files received from untrusted networks. Adding an application to be protected can be done from the GUI or the console. Start the GUI and select Configure Apps.

Then select Add and browse to the desired location.

Once you have selected the application, you can change which security settings are applied. The default is to include all of them, and I would leave it that way unless you run into issues. To troubleshoot, clear all the settings for an app and add each protection back until you crash the application, then leave that one protection unselected.

Step 2 Export Your Settings

By default EMET is installed at C:\Program Files (x86)\EMET\. You will need to run the command-line version of the tool (as Administrator) to export your settings.

Select Start -> Accessories, then right-click on Command Prompt and select “Run as Administrator”.

>cd "C:\Program Files (x86)\EMET\"
>emet_config.exe --export config.xml

I have included a version of an EMET config below (available for download due to WordPress issues posting the code). It lists both 32- and 64-bit versions of Office 12 and 14, Firefox, IE, iTunes, and others.

Step 3 Copy EMET to a Network Drive

EMET does come as an MSI file, but you do not need to install it on every computer to make these changes. Just copy the entire C:\Program Files (x86)\EMET\ directory, along with your config file, to a network share that all users can access.
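
A minimal sketch of the copy, assuming a share path of \\server\share\EMET (substitute your own):

>xcopy "C:\Program Files (x86)\EMET" "\\server\share\EMET" /E /I
>copy config.xml "\\server\share\EMET\"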

Step 4 Deploy Script

I wrote a script (available for download due to WordPress issues posting the code) to import the settings because I wanted to add Google Chrome to the protected list. Chrome is installed under each user’s directory, so you have to generate its settings dynamically for it to work properly (the current version does not support system variables). If you are not using Chrome, you can reduce the script to just running the import from the network config file. The script only needs to be run once for each user, and then again only when you update the config file. A typical deployment would run the script at login via GPO or set up a scheduled task for the user.

To get the script to work in your environment you will need to make changes to the variables at the top of the script.

The basic steps of the script are (a reduced batch sketch follows the list):

  1. Download the XML file to a local temp directory
  2. Add Google Chrome to the XML
  3. Run EMET from the network to import the local XML file.
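
If you skip the Chrome step, the flow reduces to something like this batch sketch. The share path is a placeholder, and I’m assuming the tool’s --import flag mirrors the --export flag shown above:

@echo off
rem Hedged sketch of the deployment flow; \\server\share\EMET is a placeholder
set SHARE=\\server\share\EMET
set TMPXML=%TEMP%\config.xml

rem 1. Pull the config down to a local temp file
copy /Y "%SHARE%\config.xml" "%TMPXML%"

rem 2. (Chrome users: rewrite the XML here to point at the per-user install
rem    path under the user profile -- this is what the VBScript handles)

rem 3. Import the local XML using the network copy of EMET
"%SHARE%\emet_config.exe" --import "%TMPXML%"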

OpenVAS 3.X Ubuntu Install


OpenVAS is a fork of the open-source version of Nessus. The new version 3.0 has a web interface, Greenbone Security Assistant (GSA), which seems to have many of the features of the old Inprotect interface. One of the things I liked best about Inprotect was the granular user permissions to scan certain subnets. While I’ve only had a little while to play with it, it appears to be well polished. You can download a VM or live CD-ROM to try it out. This install will cover using Ubuntu packages stored on the opensuse.org servers. Information about how to set up the repository was gathered from this link.

Setting up the repository.

Edit the sources list to add the openSUSE servers.

#vi /etc/apt/sources.list
Add the following line to the bottom of the file.
deb http://download.opensuse.org/repositories/security:/OpenVAS:/STABLE:/v3/xUbuntu_10.04/ /

Update the package list so apt knows what packages are available to install.

#apt-get update

You will get a key error for BED1E87979EAFD54, so you’ll need to add the key to your trusted keys. Add the key to the end of the statement below. The key below will expire on 02-April-2011.

#sudo apt-key adv --keyserver hkp://wwwkeys.de.pgp.net --recv-keys BED1E87979EAFD54

Update the repository again.

#apt-get update

Basic install process.

#apt-get install openvas-scanner libopenvas3

You need to make an SSL certificate for the service.

#/usr/sbin/openvas-mkcert

Set up plugin updates via crontab.

#crontab -u root -l >/tmp/crontab
#echo "0 3 * * * /usr/sbin/openvas-nvt-sync" >>/tmp/crontab
#crontab -u root /tmp/crontab
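
The cron entry keeps the plugins current going forward; to have plugins available before your first scan, it may be worth running the same sync once by hand:

#/usr/sbin/openvas-nvt-sync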

Set up the listening port for the service. Remember to set up iptables to block access if you change from localhost access.

#vi /etc/default/openvas-scanner
SCANNER_ADDRESS=(IP of your server)
SCANNER_PORT=9390
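
As a hedged example, rules like the following would limit the scanner port to a single admin subnet (10.0.0.0/24 is a placeholder; adjust for your network):

#iptables -A INPUT -p tcp --dport 9390 -s 10.0.0.0/24 -j ACCEPT
#iptables -A INPUT -p tcp --dport 9390 -j DROP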

Start the service.

#/etc/init.d/openvas-scanner start

Install the client

#apt-get install openvas-client

Once installed, connect to the server and run your scans using the client. I hope to have a follow-up on configuring the web interface soon.

Creating an OS X Live IR CD-ROM


About a year ago, I built my first OS X live response CD-ROM, and I’m still not aware of any free tools to do this. I’ve heard that the Raptor CD-ROM is great for booting a machine that is powered off, but most of the time I’m dealing with live systems that need live analysis done. Let’s cover the basics so you can create your own.

Static Binaries

OS X does not use static binaries. When building your incident response disk, you must copy the binary files to the CD-ROM along with the required libraries. This topic has been covered many times for Linux, but not for OS X. You want to make sure the machine you collect the binaries from is a trusted machine. I used a freshly loaded and patched laptop to do this.

The command to determine what libraries a binary uses on Linux is ldd. This tool is not available in OS X, but there is a similar tool called otool, which is part of the free Xcode toolkit. Running the command otool -L /bin/ps will list the required library files.
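
The output will look something like this (the exact libraries and version numbers will vary by OS release):

$otool -L /bin/ps
/bin/ps:
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)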

When I upgraded my laptop to Snow Leopard, I was unsure if I would need to rebuild a new set of binaries for the latest version of the OS, but when testing this I did not receive any errors. I have not tried running the 10.6 binaries on a 10.5 system, so I’m not sure if there would be any backward compatibility issues. I recently tried to run the universal desktop binaries on an OS X 10.4 server, but was not able to. This is a problem I’m going to look at when I get a little more time. The temporary fix is to create separate binary folders if you need both desktop and server binaries. This may be the only solution, but I’ll let you know once I’ve tested more.

What binaries to include?

To determine what commands should be run, reference the previous post under what to collect. Additionally, Apple’s command-line admin guide is a great place to learn how to get the data you want out of the system.

ps
lsof
netstat
md5sum
ifconfig
route
iptables
cat
echo
vi
fdisk
mmls
fls
sed
screen
dd
dcfldd
bash
awk
date
hostname
who
arp
serversetup
macrobber
PlistBuddy
df
du
mount
find
crontab
less
md5
sha1
gzip
tar
kextstat
system_profiler
vmmap
ls
disktool
which

Automation of disk creation

I have created a little script that takes a list of files, like the one included above, finds the required libraries, copies everything into /bin and /lib folders, and renames the binaries to IR_filename. Renaming the binaries makes it very easy to tell which commands are being run by the responder. This saves time while you are reviewing the output and prevents you from chasing your own tail. To run the script, simply call it and pass the location of the file that lists the binaries you want on the disk.

#./mac-ir-create.sh /Users/demo/filelist.txt

#!/bin/bash
#Created by:Tom Webb
#Version 0.1
#usage mac-ir-create.sh filelist

#Error if no file given
if [ -z "$1" ]; then
 echo -e "\nUsage: `basename $0` /path/to/filelist"
 exit 1
fi

#Error if not sudo/root
if [[ $EUID -ne 0 ]]; then
 echo "You must be root/sudo to run the script"
 exit 1;
fi

echo "Enter the path where you want the /bin and /lib folders to be created"
read IR_LOCATION

#Setup DIR PATH
mkdir -p "$IR_LOCATION/bin"
mkdir -p "$IR_LOCATION/lib"

while read line; do

 FIND_BIN=`whereis $line` #Find the location of the binary file
 if [ -z "$FIND_BIN" ]; then #If the result is empty
    echo "$line is not installed or in your path"
 else
   cp "$FIND_BIN" "$IR_LOCATION/bin/IR_$line" #Copies the binary file to the new directory and renames it
   for i in `otool -L "$FIND_BIN" |sed '1d' | cut -d ' ' -f1`; #Looks up the required libraries for the binary, dropping the first line (the binary's own name)
   do
      cp "$i" "$IR_LOCATION/lib" #Copies each library file to the new IR location
   done
 fi
done <$1 #Use file from command line argument

Response Script

Having a script for live response is very important, because executing commands on a running system makes changes to it, but if you use a script, you have a documented list of what was run and accessed on the system. Of course, any other commands that you run outside the script still need to be documented. Consistency is another important advantage. As systems get more complicated, it is becoming more difficult to remember to collect everything you need before the system is taken offline. Additionally, how often you perform IR on a given OS can also make manual collection more challenging.

After running the script, you should have two directories (bin and lib). We need to do something with these files, so it’s time to build a script. Like all good programmers, it’s time to start with code that someone else has already started. Pull down the old Helix CD-ROM; under the static binaries directory is a script called linux-ir.sh. This is the framework I used to build the script included below. One major thing I changed was the order in which data is collected. I recommend following the NIST guidelines in SP 800-86. On page 5-8 they list the order as follows:

1. Network connections
2. Login sessions
3. Contents of memory
4. Running processes
5. Open files
6. Network configuration
7. Operating system time

The Apple Examiner website has a great list of OS X specific items to collect. The best program I’ve found for scripting analysis of plist files is PlistBuddy. This program allows you to dump both binary and XML plist files and convert them to ASCII.
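
For example, dumping the OS version plist looks something like this (PlistBuddy normally lives in /usr/libexec, and the output here is trimmed):

$/usr/libexec/PlistBuddy -c Print /System/Library/CoreServices/SystemVersion.plist
Dict {
    ProductVersion = 10.6.4
    ProductName = Mac OS X
}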

I’ve created a starter script for gathering information. While it does not include all the commands listed in the Helix IR script, it covers at least one command for each major piece of information that should be gathered. Many times responders will want to use different tools to get the same information and compare the output, or sort the output in different ways. You will need to customize it for your needs, but if other commands should be added, please let me know and I’ll keep updating the script.

#!/bin/bash
#Public MAC-IR.sh
#Version 0.1
#Author:Tom Webb
#usage ir_script.sh /path/to/folder containing bin and lib directory
#output is to standard out

clear

if [[ $EUID -ne 0 ]]; then
 echo "You must be root or sudo to run script"
 exit 1;
fi

#Error if no file given
if [ -z "$1" ]; then
 echo -e "\nUsage: `basename $0` /path/to/folder containing the bin and lib directories"
 exit 1
fi

if [[ ! -d "$1" ]]; then
 echo "Directory does not exist"
 exit 1;
fi

BINDIR="$1/bin" #$1 is the command-line argument for the path
export DYLD_LIBRARY_PATH="$1/lib" #OS X's dynamic linker uses DYLD_LIBRARY_PATH, not LD_LIBRARY_PATH
export PATH="$BINDIR"

IR_echo "========="
IR_echo "Start Date:"
IR_echo "========="
IR_date
IR_echo

IR_echo "========="
IR_echo "hostname:"
IR_echo "========="
IR_hostname
IR_echo

IR_echo "==================================="
IR_echo "netstat output(current connections)"
IR_echo "==================================="
IR_netstat -an
IR_echo

IR_echo "==================================="
IR_echo "lsof -i Network Connections"
IR_echo "==================================="
IR_lsof -i
IR_echo

IR_echo "=========================="
IR_echo "currently logged in users:"
IR_echo "=========================="
IR_who
IR_echo

IR_echo "=========================="
IR_echo "List of running processes:"
IR_echo "=========================="
IR_ps auxwww
IR_echo

IR_echo "=========================="
IR_echo "Memory Mapping of all Processes"
IR_echo "=========================="
for i in `IR_ps aux| IR_awk 'NR>1 {print $2}'`; do IR_vmmap $i ; done #NR>1 skips the ps header line
IR_echo

IR_echo "============"
IR_echo "List of open files:"
IR_echo "============"
IR_lsof
IR_echo

IR_echo
IR_echo "======================"
IR_echo "serversetup -getDefaultDNSServer :"
IR_echo "======================"
IR_serversetup -getDefaultDNSServer *
IR_echo

IR_echo "=============="
IR_echo "routing table:"
IR_echo "=============="
IR_netstat -rn
IR_echo

IR_echo "=================="
IR_echo "arp table entries:"
IR_echo "=================="
IR_arp -an
IR_echo

IR_echo "======================"
IR_echo "Network interface info"
IR_echo "======================"
IR_ifconfig -a
IR_echo
IR_ifconfig -L
IR_echo

IR_echo "======================"
IR_echo "Mount"
IR_echo "======================"
IR_mount
IR_echo

IR_echo "======================"
IR_echo "disktool"
IR_echo "======================"
IR_disktool -l
IR_echo

IR_echo "======================"
IR_echo "macrobber"
IR_echo "======================"
IR_macrobber /
IR_echo

IR_echo "======================"
IR_echo "LS -LAR /System/Library/StartupItems"
IR_echo "======================"
IR_ls -laR /System/Library/StartupItems
IR_echo

IR_echo "======================"
IR_echo "LS -LAR /System/Library/StartupItems"
IR_echo "======================"
IR_ls -laR /System/Library/StartupItems
IR_echo

IR_echo "======================"
IR_echo "LS -LAR /Library/StartupItems"
IR_echo "======================"
IR_ls -laR /Library/StartupItems

IR_echo "=============================================="
IR_echo "/etc/hosts.allow"
IR_echo "=============================================="
IR_cat /etc/hosts.allow
IR_echo

IR_echo "=============================================="
IR_echo "cat /etc/passwd"
IR_echo "=============================================="
IR_cat /etc/passwd
IR_echo

IR_echo "=============================================="
IR_echo "cat /etc/group"
IR_echo "=============================================="
IR_cat /etc/group
IR_echo

IR_echo "==========="
IR_echo "   fstab   "
IR_echo "==========="
IR_cat /etc/fstab
IR_echo

IR_echo "==========="
IR_echo "SystemVersion.plist"
IR_echo "==========="
IR_PlistBuddy -c  Print /System/Library/CoreServices/SystemVersion.plist
IR_echo
IR_echo

IR_echo "==========="
IR_echo "ServerVersion.plist"
IR_echo "==========="
IR_PlistBuddy -c  Print /System/Library/CoreServices/ServerVersion.plist
IR_echo
IR_echo

IR_echo "==========="
IR_echo " SoftwareUpdate.plist  (Last softwareupdate)    "
IR_echo "==========="
IR_PlistBuddy -c  Print /Library/Preferences/com.apple.SoftwareUpdate.plist
IR_echo
IR_echo

IR_echo "==========="
IR_echo " /Library/Preferences/com.apple.preferences.accounts.plist  "
IR_echo "List of Deleted User Accounts "
IR_echo "==========="
IR_PlistBuddy -c  Print /Library/Preferences/com.apple.preferences.accounts.plist
IR_echo
IR_echo

for i in `IR_ls -l /Users |IR_awk 'NR>1 {print $9}'`; do #Loops over each user home directory in /Users (NR>1 skips the total line)
 IR_echo "User $i"
 IR_PlistBuddy -c Print /Users/$i/Library/Safari/LastSession.plist
 IR_echo
done

IR_echo "==========="
IR_echo " /Library/Preferences/com.apple.alf.plist"
IR_echo "Firewall settings "
IR_echo "==========="
IR_PlistBuddy -c  Print /Library/Preferences/com.apple.alf.plist
IR_echo
IR_echo

IR_echo "========="
IR_echo "End Date:"
IR_echo "========="
IR_date
IR_echo

Finishing Up

Let’s make sure you have everything ready to burn to a disk. You will need to make sure your file permissions are correct. In the example below, I have my /bin and /lib directories in /tmp/ir.

#chmod  -R 755  /tmp/ir

Now it’s time to take the /bin and /lib directories, along with mac-ir.sh, and burn them to disk. Once it’s burned, we need to test it out.

1. Launch terminal.

This is under Finder -> Utilities -> Terminal

2. Find out your mounted CD-ROM drive.  From Terminal:

$mount |grep cd9660

/dev/disk1s0 on /Volumes/ir_1.0 (cd9660, local, nodev, nosuid, read-only, noowners)

3. Change directory to your cd mount

$cd /Volumes/ir_1.0

4. Determine where you want to output your results. You should not write the output to the system you are doing analysis on. I always carry two drives when doing analysis: one small flash drive to dump volatile data and a large one for the disk image. You can also shoot the data across the network using cryptcat.
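
A hedged cryptcat sketch (the IP, port, and key are all placeholders): start a listener on your collection box, then pipe the script output to it from the target.

On the collection system:
$cryptcat -k S3cr3t -l -p 9999 > ir.txt

On the target:
$sudo ./mac-ir.sh /volumes/ir_1.0 | cryptcat -k S3cr3t 192.168.1.50 9999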

5. Run the script (this example redirects the output to a removable drive /volumes/usb with the filename ir.txt)

$sudo ./mac-ir.sh /volumes/ir_1.0  >/volumes/usb/ir.txt

6. Before you shut down the system, ALWAYS make sure that your script worked by checking the results file.
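
Since the script prints an End Date banner last, a quick tail of the results file is a simple check that it ran to completion:

$tail /volumes/usb/ir.txt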

Live IR CDROM Basic Needs


Preparation, in my opinion, is the most important phase of incident response. When possible, you should have well-tested procedures for all major tasks you perform, especially when it comes to how you collect your data. In many situations, you will get only one chance to gather the information, and you need to make it count. These procedures should be reviewed on a quarterly basis, so set up a recurring calendar appointment to get that done today.

Having an IR CD-ROM with all your collection tools is critical to getting a great start on an incident. I’ve previously talked about a couple of freely available ones, but now we are going to cover the basic requirements to meet when creating your own. Why create your own CD-ROM? The simple answer is control. Creating your own script allows you to do what you want to do, when and how you want to do it. Additionally, you want to make sure the trusted binaries are really trusted; installing and verifying them yourself is the best way to make sure you can trust them.

One of the hardest questions to answer when first getting started is: what do you need to collect from a compromised host? It depends on your policy and the questions you want to answer in the investigation, but here is a list of some of the most common things to collect and analyze from a live host:

-RAM dump
-Process lists (process ID, user running it, parent process, path of executable)
-Network stats (listening processes, established connections, PID)
-List of open files
-Modify, Access, Creation times (MAC)
-System logs (logs for running services, authentication and authorization)
-Internet history
-Registry/system config
-Virus logs
-Scheduled tasks and start-up items

You could run these tools manually, but this leads to inconsistent data collection. The most effective way to collect this data is with an automated script. Scripts are a great way to reduce the documentation burden of collecting evidence and make data collection consistent. The data should be collected using trusted binaries that are run from an external source. It’s best to rename the trusted files to distinguish them from versions that may already be running on the system (e.g. ls to ir_ls). The script and tools should be run from a write-protected USB drive or CD-ROM, with the results dumped to another system (SMB or cryptcat) or an external drive.

Lessons learned is the step most people fail to complete. Many times you may have overlapping incidents with little time to document what went right and wrong. This step is important to make sure you improve on your experience and prevent the same mistake from happening more than once. If you do not have a formal meeting, it’s good to keep a to-do list of things that need to be added to your collection script, new tools, and process adjustments. Keep it as an ongoing list, and when you start your quarterly review of processes you’ll have an action list.