Thursday, September 13, 2018

Setting up web based sguil


One of the problems with using Sguil to do network forensics investigations is the client. You need to make sure that your system can support how it runs, and in the end it is a thick client. The nice thing about OSX is that it has wish installed, so getting the client running is as simple as launching it, but you still run into issues like client hangups.

Recently the author, Bamm Visscher, updated the code to include a web client on the server. In this post I am going to get it installed on a fresh install of Security Onion. One hurdle is that SO is already running a web server on 443, so we will have to make some modifications. Let's get started.


First you will need to clone the repo. 

cd /opt/ && git clone https://github.com/bammv/sguil.git 

This will place a fresh install in your opt directory where we will begin. 

Let's stop Sguil and do a quick backup of your current files:

sudo so-sguild-stop && mkdir /opt/sguil_bak && tar zcvf /opt/sguil_bak/lib.bak.tgz /usr/lib/sguild/ && tar zcvf /opt/sguil_bak/sguild.tgz /usr/bin/sguild
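
If you ever need to roll back, those tarballs can be restored. A minimal sketch, assuming the backup command above ran as shown (tar strips the leading / when archiving, so we extract relative to the filesystem root):

sudo so-sguild-stop
sudo tar zxvf /opt/sguil_bak/lib.bak.tgz -C /
sudo tar zxvf /opt/sguil_bak/sguild.tgz -C /
sudo so-sguild-start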

Now that we have our backup, let's copy our lib files over:

sudo rsync -avh /opt/sguil/server/lib/* /usr/lib/sguild/

OK, now let's edit the config. Note: change 4433 to whatever port you want to use for the web page.


echo "set HTTPS 1" >> /etc/sguild/sguild.conf 
echo "set HTTPS_PORT 4433" >> /etc/sguild/sguild.conf
echo "set HTML_PATH {/opt/sguil/server/html}" >> /etc/sguild/sguild.conf

Once we have the config in place we need to make a quick edit to sguild itself:

sudo sed -i 's/cert.pem/wcert.pem/' /usr/bin/sguild
sudo sed -i 's/privkey.pem/wprivkey.pem/' /usr/bin/sguild


Last thing before we log into the console: we need to generate the keys for the web service.

sudo openssl req -newkey rsa:2048 -new -nodes -x509 -keyout /etc/sguild/certs/wprivkey.pem -out /etc/sguild/certs/wcert.pem

Now that everything is in place go ahead and fire up sguild

sudo so-sguild-start 

You should see everything start OK. When this is done you can open up your browser and head to https://localhost:4433 (or whatever port you chose in the previous config). You should be presented with the login page. The credentials are the same ones you created while setting up SO. See below.





After playing with the web client, I like the fact that I no longer have to have a thick client and can log in with any device and browser.

I also wrapped up the above commands into a very simple shell script so that it's very easy to get operational:

https://github.com/bl4ck0ut/scripts/blob/master/web_sguil.sh 
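
For reference, here is a minimal sketch of what such a wrapper script could look like, just chaining the commands from this post together (the linked script above is the real one; port 4433 is the same one used earlier):

#!/bin/bash
# Sketch: add the web client to an existing Security Onion sguild install.
# Run as root. Back up first, copy the new lib files, enable HTTPS, point
# sguild at the web cert/key names, generate those keys, and restart.
set -e

so-sguild-stop
mkdir -p /opt/sguil_bak
tar zcvf /opt/sguil_bak/lib.bak.tgz /usr/lib/sguild/
tar zcvf /opt/sguil_bak/sguild.tgz /usr/bin/sguild

cd /opt && git clone https://github.com/bammv/sguil.git
rsync -avh /opt/sguil/server/lib/* /usr/lib/sguild/

echo "set HTTPS 1" >> /etc/sguild/sguild.conf
echo "set HTTPS_PORT 4433" >> /etc/sguild/sguild.conf
echo "set HTML_PATH {/opt/sguil/server/html}" >> /etc/sguild/sguild.conf

sed -i 's/cert.pem/wcert.pem/' /usr/bin/sguild
sed -i 's/privkey.pem/wprivkey.pem/' /usr/bin/sguild

openssl req -newkey rsa:2048 -new -nodes -x509 \
  -keyout /etc/sguild/certs/wprivkey.pem -out /etc/sguild/certs/wcert.pem

so-sguild-start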

You can also watch the script in action getting the Sguil web interface set up.





Saturday, September 8, 2018

17 years of linux desktop captures




I know that this isn't a security post, but I thought I would take a moment and post some screenshots of the operating system I have had a passion for over many years. I just wish I still had my screenshots of the ACID, BASE and Sguil deployments through the years as well.

Ever since I switched 100 percent to a Linux desktop I would take screenshots just to track how it evolved. I had been using Linux since 1998, but it wasn't until 2001 that I was able to go all in. In the beginning I had many hurdles: I was building out a large network for the company, but they were stuck with Twinax cabling and 5250 emulation cards in old 486s. Once I switched the iSeries AS/400 to TCP, the problem became how to emulate 5250, which wasn't that bad.

2001 

Getting 5250 emulation working, since I was an RPG400 and ILE programmer at the time.




December 2001 

Just playing with more setups; pretty sure this (and the previous one) was the Enlightenment desktop.


Feb 2004

Ximian hits the scene and I'm playing with KDE as my desktop. Resolution getting much better.



July 2004

Yeah, playing with SUSE again, going back to my first purchase of the 6.2 box set in 1999. I bought the box set because I was trying to play my part in growing Linux.




April 2005

Gentoo - enough said: "emerge next-os". Funny, you can see my inside and outside IDS terminals and Snort. Only a glimpse at my deployments.




October 2007 

Pretty sure I was playing with Fedora at the time.



February 2009 

My first Ubuntu build with Gnome. 



May 2011


Multiple screens are just normal life by this time. Running Ubuntu.





Just a little documented history fun. Hope you enjoyed the trip.





Thursday, March 28, 2013

Live Linux forensics in a KVM based environment (part 1 memory)



Most of this post is based on an image I created and will be walking through. You can obtain the image at https://docs.google.com/file/d/0B4pePbirlzGvMjdIblpiV1FjanM/edit?usp=sharing . You will need to write it to a USB drive, preferably an 8GB drive like the one I used in the talk. To write the image you just need to issue: dd if=./4n6.img of=/dev/your_drive

Scenario:
The network team has mentioned they are seeing abnormal traffic to 172.20.20.114; please check out srv03 at 192.168.122.226.

Host system:
OS= ubuntu 12.04 server
user = admin-user
pass = master

Compromised guest:
OS= centos 6.4 64bit
HDD config = 3 disk RAID5 luks encrypted
luks passphrase = mi4n6mi4n6
root pass = master

I will try to write this in a way that parallels using these techniques on a live virtual instance.

Also note that the domain I will be using is srv03. You can replace this value with whatever running domain you have in your instance.

Memory

One important piece of the incident response puzzle is a memory dump. We know this, and obtaining one from a Windows machine is fairly trivial. Getting it from a Linux machine is also not that hard; you just need root-level access. But what if you have access to a KVM-based virtualization host that has several servers running on it? How do you get the memory from the running instances without having to touch each one of them? Well, let's dive in and see.

First things first: once you have booted the image you will need to run "srv03_restore.sh" in order to restore the state of the VM. Once this has completed you will have a running instance of the VM. If you open up xterm you can run the "virt-manager" command and get a visible console on the VM instance.

The quick and dirty


If you have a suspect system and you want to look into what is going on, you can just do a dump. This will essentially dump the used memory of the running server. One caveat is that it will temporarily pause the running OS while it dumps the memory. The nice thing is that it will only dump the used memory, but that can be bad as well since we could miss something; alas, this is quick and dirty. So let's get to it.

You can dump using "virsh dump srv03 ./mem.dump"




Now that we have this we can start our investigation. The fast way is just using strings and grep. Since we know where to start (traffic to 172.20.20.114), we can grep for that: "sudo strings mem.dump | grep 172 | less"
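
If you want to jump straight to the suspect host instead of everything containing "172", the same idea with a slightly tighter pattern (the dots are escaped so grep treats them literally):

sudo strings mem.dump | grep '172\.20\.20\.114' | less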

With a little searching in less we come across this:


This is interesting, I would say. Something has opened a reverse shell to the suspect IP address. Since I want to be thorough, I keep looking and then find this later in the memory:


Hmm, what is this matahari.py and what is it doing in the uploads directory? A quick search tells us it is a simple command and control tool. This command shows that it is checking in with the server at 172.20.20.114. OK, I think at this point we can say the system is compromised. I am not going to dive into this just yet, since I want to show another way that we can access the memory of the system, given the nice feature of being virtual.

Memory for Volatility

Since we covered the quick and dirty, let's get a little more elegant. Let's say we want the full memory in a format that we can use with awesome tools such as Volatility. How can we do this? With libvmi, of course: http://code.google.com/p/vmitools/ . This tool ends up being perfect for what we are looking for.

You will first need to install the tool, which has a perfectly good README that will tell you how to install it. For those that cannot wait, it's just as simple as:

./autogen.sh
./configure
make
sudo make install

The tool that we are going to talk about is pyvmifs.py. In order to use this tool you will also need to install it. It is located in the libvmi/tools/pyvmi directory. The installation is also straightforward with "python setup.py build" and "python setup.py install". This will get you up and running. As a side note, there are package dependencies, but they are met by the build that you downloaded.

Now that we have it installed we can mount the memory in a format that we can read with Volatility. You can mount it like this: "sudo python ./pyvmifs.py -o domain=srv03 /path/youwant/to/mount/" and you should see a file named "mem" the size of the memory that is allocated for that virtual instance.
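
One assumption worth noting: pyvmifs presents the guest memory through a userspace (FUSE) filesystem, so when you are finished you should be able to unmount it with the standard FUSE unmount, along the lines of:

sudo fusermount -u /path/youwant/to/mount/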

Now that we have the mounted memory we can do what we want with it: strings and grep, or Volatility. Let's go the Volatility route since we already went the strings-and-grep way. To do this you will need a profile for the running Linux instance, which includes the dwarf file and the symbols from boot. Here is the profile for a default CentOS 6.4 64-bit:

http://dl.dropbox.com/u/12565646/cent64.zip

I placed this profile in the volatility directory on the image, so if you downloaded the image and are using it there is no need to download it again.

One interesting command that requires a little extra information, and that I want to run first, is the "linux_bash" command. It requires a value that you can obtain by disassembling history_list in the /bin/bash binary. You can obtain the value from a previous post, but I will also include it here. Let's run the command and see what we get from it.

"sudo python vol.py --profile=Linuxcent64x64 -f /media/g/mem linux_bash -H 0x6e0970 -P"


You can see that this gives some interesting information but doesn't show the host that we want. You can see that someone has used "nc" to 192.168.122.129. This is interesting, but not what we are looking for. Let's try another command, such as linux_netstat.
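
The invocation follows the same pattern as the other plugins; the plugin name in the Volatility Linux command set is linux_netstat, and the profile and memory path below are the same ones used above:

sudo python vol.py --profile=Linuxcent64x64 -f /media/g/mem linux_netstat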


Great, this gives us what we are looking for: further confirmation. Let's try another command just for fun, linux_psaux:

"sudo python vol.py --profile=Linuxcent64x64 -f /media/g/mem linux_psaux"



OK, now we have more information. We can now proceed to get the forensic disk image and see what this matahari.py is.

In conclusion, we now have two ways to obtain memory from a virtual instance, and I covered a few ways to deal with that memory. How you choose to go forward with it is your choice. Hopefully this gives you the chance to experiment and play with these techniques on a running system.

Part 2 will cover how to use the disk images and mount the LUKS-encrypted RAID 5 guest OS.







Monday, March 18, 2013

Linux bash history_list info for volatility

Starting Linux analysis with volatility

When working in the security field you will eventually come across a time when you need to do some memory analysis, and that analysis will mean working with Volatility for memory forensics. Most of the time you will end up working with Windows systems that may have been compromised. When you are working against a Linux memory dump you will need a few extra things to make this possible.

linux_bash

The linux_bash option within Volatility requires you to know the history_list location so that it can scrape the bash history out of memory. The way to find it is to use gdb to disassemble history_list; in the disassembly comments you will note the information that you need. I will include a few of the values on this page. I don't want to include too many, since the developers are working on a way to determine the value on the fly, but I cannot confirm the status of that. Here are a few values that I quickly grabbed with gdb that might help others as well. I will include more if people find it beneficial to have a single location. A well documented way to obtain the values is located on the Volatility site: http://code.google.com/p/volatility/wiki/LinuxCommandReference23#linux_bash
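
For reference, the rough shape of that gdb session (a sketch; the exact instructions and addresses differ per bash build, which is why the table below exists):

gdb /bin/bash
(gdb) disassemble history_list
(gdb) quit

In the disassembly output, gdb annotates the instruction that references the history list with the resolved address as a trailing comment (e.g. "# 0x6e0970"), and that address is the value passed to linux_bash with -H.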


Centos
6.4  - 0x6e0970 - bash-4.1.2-14.el6.x86_64.rpm
6.3 -  0x6e0950 - bash-4.1.2-9.el6_2.x86_64.rpm
6.2 -  0x6e0910 - bash-4.1.2-8.el6.centos.x86_64.rpm
6.1 -  0x6e0910 - bash-4.1.2-8.el6.centos.x86_64.rpm
6.0 -  0x6e1af0 -  bash-4.1.2-3.el6.x86_64.rpm
5.9 -  0x6bf970 -  bash-3.2-32.el5.x86_64.rpm
5.8 -  0x6bf970 -  bash-3.2-32.el5.x86_64.rpm
5.7 -  0x6bf970 -  bash-3.2-32.el5.x86_64.rpm
5.6 -  0x6bf970 -  bash-3.2-24.el5.x86_64.rpm

Ubuntu
11.04 - 0x6ed3a8


This probably gives an idea of what I will talk about next: Linux profiles. I will create a profile for each of the above systems and provide them in the next post. This post will also be updated until the on-the-fly disassembly piece in 2.3 happens.

References:

http://code.google.com/p/volatility/

Monday, February 25, 2013

Tale of the misconfigured script

This attack attempt made me laugh a bit. I see the following event from my Sguil instance running on Security Onion that is monitoring my honeypot:


I pull the transcript and find the following:






I thought that was a very odd password to attempt. I wanted a transcript of all the attempts the attacker tried, so I used tcpdump to carve out the session between my honeypot and the attacker with the following commands:

 cd /nsm/sensor_data/jbc-eth0/dailylogs/2013-02-24/
 tcpdump -r snort.log.1361664061 -w ~/ftpbruteforce-pcap-20130224.pcap ip and host 61.129.71.42 and host 192.168.1.20 and port 21 and proto 6

 Security Onion logs full pcaps to /nsm/sensor_data/<sensor-name>/dailylogs/<date>. The tcpdump command is pretty much the same command that Sguil issues to the sensor to generate transcripts. The -r flag reads in a pcap file and -w writes the results to a pcap file. After that is the BPF (Berkeley Packet Filter) expression, which defines what traffic we want to carve out.
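
If you would rather skim it from the command line before (or instead of) opening Wireshark, the same carved pcap can be read back with tcpdump; -A prints packet payloads as ASCII, which is plenty for a cleartext protocol like FTP:

tcpdump -nn -A -r ~/ftpbruteforce-pcap-20130224.pcap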

I opened ftpbruteforce-pcap-20130224.pcap in Wireshark, started looking at the sessions, and found this one:





It turns out there are multiple attempts using %username% and some other strings. It seems our attacker forgot to configure this field. Logically, I assume it's supposed to try different variations of Administrator plus a string, such as:

Administrator1
Administrator12
Administrator123
Administrator1234

If anyone knows the FTP brute-forcing tool that was likely used, please let us know. My Google-fu is failing, as Google drops punctuation from searches.

Monday, February 18, 2013

Brief NSM Analysis of FTP Dictionary Attack

On the 15th I saw an event in Sguil I had been waiting for: "ET POLICY FTP Login Successful". The credentials are Administrator/password. I was surprised how long it took.


I ran an event query on the destination IP (this traffic is flipped from the true src/dst):




You'll see PADS registering a new asset as the attacker's IP is first observed, then an alert that the attacker has made at least 5 unsuccessful attempts to log in as Administrator, and finally a successful login:


Since FTP is in cleartext, we can easily inspect the traffic by right clicking the Successful Login event and selecting "Transcript". I'm pasting the traffic instead of using a screen shot:

DST: 220-FileZilla Server version 0.9.41 beta
DST:
DST: 220-FileZilla Server version 0.9.41 beta
DST:
DST: 220-written by Tim Kosse (Tim.Kosse@gmx.de)
DST:
DST: 220-written by Tim Kosse (Tim.Kosse@gmx.de)
DST:
DST: 220 Please visit http://sourceforge.net/projects/filezilla/
DST:
DST: 220 Please visit http://sourceforge.net/projects/filezilla/
DST:
SRC: USER Administrator
SRC:
DST: 331 Password required for administrator
DST:
DST: 331 Password required for administrator
DST:
SRC: USER Administrator
SRC:
SRC: USER Administrator
SRC: USER Administrator
SRC:
DST: 331 Password required for administrator
DST:
DST: 331 Password required for administrator
DST:
DST: 331 Password required for administrator
DST:
DST: 331 Password required for administrator
DST:
DST: 331 Password required for administrator
DST: 331 Password required for administrator
DST: 331 Password required for administrator
DST:
DST: 331 Password required for administrator
DST: 331 Password required for administrator
DST: 331 Password required for administrator
DST:
SRC: PASS
SRC:
DST: 530 Login or password incorrect!
DST:
DST: 530 Login or password incorrect!
DST:
SRC: USER Administrator
SRC:
DST: 331 Password required for administrator
DST:
DST: 331 Password required for administrator
DST:
SRC: PASS abc123
SRC:
DST: 530 Login or password incorrect!
DST:
DST: 530 Login or password incorrect!
DST:
SRC: USER Administrator
SRC:
DST: 331 Password required for administrator
DST:
DST: 331 Password required for administrator
DST:
SRC: PASS password
SRC:
DST: 230 Logged on
DST:
DST: 230 Logged on
DST:
SRC: RMD sarcaxxo
SRC:
DST: 550 Permission denied
DST:
DST: 550 Permission denied
DST:
SRC: QUIT
SRC:
DST: 221 Goodbye
DST:
DST: 221 Goodbye
DST:

We see the attacker used Administrator as the username in all the attempts and iterated through a couple of guesses for the password: abc123, password. What I was curious about was the "RMD sarcaxxo" command the attacker issued after logging in, which attempts to remove the directory (RMD) named "sarcaxxo", a directory that did not exist on my honeypot. After searching Google, it seems this is a command issued by the tool "Multi-thread FTP scanner v0.2.5" by Inode. If someone wanted to create an alert for the use of this tool, they could use something like the following Snort rule (not tested):

alert tcp any any -> $HOME_NET 21 (msg:"Multi-thread FTP scanner v0.2.5 by Inode - Successful Login and Attempted Directory Removal"; flow:from_client,established; content:"RMD sarcaxxo"; classtype:misc-activity; sid:5001990; rev:1;)

While there were no more alerts, it does not mean the attacker did nothing else. I right click on the Dst IP part of the FTP Successful login event again and select Quick Query -> Query Sancp Table -> Query DstIP/1 Hour.


The results show us that the attacker connected on port 3389, but unlike his connection on port 21, there is no byte count. You can confirm the lack of interesting traffic by pulling the transcript. Since the end times for the traffic on 3389 were before the end time of the FTP traffic, we can guess that the attacker did not have the credentials yet and that this was likely part of a port scan.