Suramya's Blog : Welcome to my crazy life…

June 8, 2021

Great book on Military Cryptanalytics by Lambros Callimahos released to the public

Filed under: Computer Security,Computer Software,My Thoughts,Tech Related — Suramya @ 9:58 PM

I find cryptography and code breaking very interesting, as they have huge implications for cyber security. The modern world runs on the presumption that cryptographic algorithms are secure; that is what lets us use the internet, bank online, find love online and even work online. Historically, cryptography has been a field that operates under heavy classification, and there are many people we know nothing about because their existence and work were classified.

Lambros Callimahos was one such cryptologist. He was good enough that two of his books on Military Cryptanalytics covering code breaking (published in 1977) were blocked from public release till 1992. The third and last volume in the series was blocked from release till December 2020. It is now finally available for download as a PDF file, so you can check it out.

The book covers how code breaking can be used to solve “impossible puzzles”, and one of its key parts is the explanation of how to use cryptodiagnosis to decrypt data that has been encrypted with an unknown algorithm. It has a whole bunch of examples and walks you through the process, which is quite fascinating. I am going to try getting through it over the next few weeks if I can.

Check it out if you would like to learn more about cryptography.

– Suramya

May 30, 2021

You can now run GUI Linux Apps on Windows 10 natively

Filed under: Computer Software,Linux/Unix Related,Tech Related — Suramya @ 10:17 PM

With the latest update of the Windows Subsystem for Linux (WSL), you can now run Linux GUI applications on Windows natively. This is pretty impressive considering Steve Ballmer famously branded Linux “a cancer that attaches itself in an intellectual property sense to everything it touches” back in 2001. In just 20 years, Microsoft has changed its stance and started adding more Linux functionality to its operating system.

Arguably, one of the biggest, and surely the most exciting update to the Windows 10 WSL, Microsoft has been working on WSLg for quite a while and in fact first demoed it at last year’s conference, before releasing the preview in April… Microsoft recommends running WSLg after enabling support for virtual GPU (vGPU) for WSL, in order to take advantage of 3D acceleration within the Linux apps…. WSLg also supports audio and microphone devices, which means the graphical Linux apps will also be able to record and play audio.

Keeping in line with its developer slant, Microsoft also announced that since WSLg can now help Linux apps leverage the graphics hardware on the Windows machine, the subsystem can be used to efficiently run Linux AI and ML workloads… If WSLg developers are to be believed, the update is expected to be generally available alongside the upcoming release of Windows.

The feature is still only available in Windows 10 Preview Builds but is expected to be released for general use in the near future.
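If you want to try it on a preview build, the setup is only a couple of commands. Below is a rough sketch based on Microsoft’s WSLg announcement (gedit is just an example app; Microsoft also recommends installing a vGPU driver first for 3D acceleration):

# From an elevated PowerShell prompt on a Windows 10 preview build:
wsl --update      # pull in the WSLg-enabled WSL release
wsl --shutdown    # restart WSL so the new version takes effect

# Then, inside your WSL distro, install and launch any GUI app:
sudo apt update && sudo apt install -y gedit
gedit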

I would love to see the reverse being developed: the ability to install and run Windows applications on Linux natively/officially. There is Wine/CrossOver, but they don’t support 100% of applications yet. It would be cool if Microsoft contributed to either of those tools to allow people to run Windows software on Linux.

I personally use CrossOver to run the Office suite and it works great for me (for the most part). The latest version supports Office 365 and most of it works fine, except for Excel, which still has a bit of a problem with large files but works otherwise. Which is why I also have Office 2007 installed, where Excel works without issues even with large files.

Compatibility with the MS Office suite is why a lot of users don’t want to switch from Windows to Linux or Mac. OpenOffice/LibreOffice is great, but the UI sucks and the files are not 100% compatible (at least the last time I tried, they weren’t), so the files might not look the way you expected when you share them with Office users.

Source: Microsoft doubles down on Windows Subsystem for Linux

– Suramya

May 23, 2021

Rapid Prototyping by Printing circuits using an Inkjet Printer

Filed under: Computer Hardware,Emerging Tech,Tech Related — Suramya @ 10:50 PM

Printing circuits using commercial inkjet printers is becoming more convenient and affordable day by day. In their 2014 paper Instant inkjet circuits: lab-based inkjet printing to support rapid prototyping of UbiComp devices, Prof. Kawahara and others showcased several applications, from touch sensors to capacitive liquid level sensors. If you are interested in trying this out (I am sorely tempted), then check out this Instructables.com post: Print Conductive Circuits With an Inkjet Printer, which walks you through how to modify your printer.

The ink to print these circuits is available for purchase online at novacentrix.com. You need the following to start printing circuits:

  • A low-cost printer such as the EPSON WF 2010
  • Printing substrates like PET and glossy paper
  • Oven or hot plate for sintering & drying the ink
  • Empty refillable cartridges

Wearable circuits on clothing would be a good area for experimentation, but there are a ton of other applications as well, especially in the embedded electronics market.

Well this is all for now. Will write more later.

Thanks to Hacker News: Rapid Prototyping with a $100 Inkjet Printer for the link.

– Suramya

May 20, 2021

Thoughts on NVIDIA crippling cryptocurrency mining on some of its cards

Filed under: Computer Security,Computer Software,My Thoughts,Tech Related — Suramya @ 8:11 PM

You might have heard the news that NVIDIA has added code to its GPUs that makes them less attractive for cryptocurrency mining, reducing the efficiency of such computations via a software patch. On one side this is great news, because it means that GPUs will be less attractive to miners and more available for gamers and others to use in their setups. However, I feel this sets a bad precedent: in effect, the company is deciding to control what you do with the card after you have bought it. A similar case would be a restriction in your car purchase that stops you from driving on non-highway roads, or from carrying potatoes in the trunk.

This all comes back to the old story about DRM and how it is being used to keep us from actually owning our devices. With DRM you are essentially renting the device, and if you do anything the owning corporation doesn’t agree with, you are in for a fun time at the local jail. DRM/DMCA is already being used to block farmers from fixing their farm equipment, medical professionals from fixing their health equipment and a whole lot more.

Cory Doctorow has a fantastic writeup on how DRM works and the problems it causes. DRM does not support innovation; it actually enforces the status quo, because bypassing it is illegal.

I have an old Xbox sitting in my closet collecting dust. I want to run Linux on it, but that requires me to break the law, because I would need to bypass the DRM protections in order to install a new OS. Today we are OK with them blocking cryptocurrency mining, but what if tomorrow the company gets into a fight with a game studio and decides to degrade that game’s performance because the studio didn’t pay fees for full performance? What if they decide to charge a subscription fee for the full performance of the device? What is to stop them from degrading or crippling any other activity they don’t agree with, whenever they feel like it? The law is in their favor: laws like the DMCA make it illegal to bypass the protections they have placed around the device.

This is a slippery slope and we can’t trust the corporations to have our best interest at heart when there is money to be made.

There is more discussion on this happening over at Hacker News. Check it out.

– Suramya

May 17, 2021

IBM’s Project CodeNet: Teaching AI to code

Filed under: Computer Software,Emerging Tech,My Thoughts,Tech Related — Suramya @ 11:58 PM

IBM recently launched Project CodeNet, an open-source dataset that will be used to train AI to better understand code. The idea is to automate more of the engineering process by applying artificial intelligence to the problem. This is not the first project to do this and it won’t be the last. For some reason AI has become the cure-all for every ‘ill’ in any part of life. It doesn’t matter whether it is needed or not: if there is a problem, someone out there is trying to apply AI and machine learning to it.

This is not to say that artificial intelligence is not something that needs to be explored and developed. It has its uses, but it doesn’t need to be applied everywhere. At one of my previous companies we interacted with a lot of companies that would pitch their products to us. On our last outing to a conference, over 90% of the ideas pitched involved AI and/or machine learning. It got to the point where we started telling the companies that we knew what AI/ML was and asking them to just explain how they were using it in their product.

Coming back to Project CodeNet: it consists of over 14 million code samples and over 500 million lines of code in 55 different programming languages. The dataset is high quality and curated. It contains samples from open programming competitions, and not just the code: it also includes the problem statements and sample input and output files, along with details like code size, memory footprint and CPU run time. Having this curated dataset will allow developers to benchmark their software against a standard dataset and improve it over time.

Potential use cases for the project include code search and clone detection, automatic code correction, and regression studies and prediction.

Press release: Kickstarting AI for Code: Introducing IBM’s Project CodeNet

– Suramya

May 14, 2021

NTFS has a massive performance hit on Linux compared to ext4

Filed under: Computer Software,Linux/Unix Related,My Thoughts,Tech Related — Suramya @ 12:47 PM

NTFS has long been a nemesis of Linux. I remember that in the 2000s getting NTFS working on Linux required so much effort and so many config changes that I stopped using it on my systems, as FAT32 was more than sufficient for my needs at the time. Initially the driver was very unstable and it was recommended to use it only for read operations rather than read/write, as there was a high probability of data corruption. That has changed over the years and the driver is now stable. However, there is a massive performance hit when using NTFS instead of ext4 on a Linux machine, and I saw this firsthand when I tried using an NTFS partition on my laptop instead of ext4.

I have a 1 TB drive in my laptop along with an SSD. I dual boot the laptop (I need it for my classes) between Windows and Debian and wanted to have all my files available on both OSs. When I last tried this, ext support on Windows was not that great (and I didn’t feel like searching for options), so I decided to format the drive as NTFS so that I would have access to the files from both systems. The formatting took ages, and once the drive was ready I copied my files from the desktop to the laptop. While the files were being copied I noticed very high CPU usage on the laptop and the UI was lagging randomly. Since I was busy with other stuff, I let it be and ignored it.

Yesterday I was moving files around on the laptop so that the root partition would have enough space for an upgrade, and I again noticed that file copies and most other disk operations were taking way longer than I expected. For example, there would be a second of delay when listing a directory with a lot of files. So I decided to test it properly. My data on the laptop is an exact copy of the files on the desktop, so I timed each command on the desktop against the same command on the laptop, and there was a significant difference.

My desktop is obviously a lot more powerful than the laptop, so I decided to try an experiment: run a command on the NTFS drive, then format the drive as ext4 (after copying all the files back) and run the same command again. When I did this I saw a massive difference in the time it took. On ext4 the command took less than 1 second (0.107s), whereas it took almost 34 seconds (33.997s) on the NTFS partition. The screenshots for both commands are below:


du -hs command on an ext4 partition


du -hs command on an NTFS partition
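If you want to reproduce the comparison on your own machine, it is just a matter of timing the same command against the same data on each filesystem (a minimal sketch; the mount points are placeholders):

# Same directory tree, one copy on each filesystem
time du -hs /mnt/ntfs-disk/Data
time du -hs /mnt/ext4-disk/Data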

That’s a ridiculous amount of difference between the two, so I obviously had to switch back to ext4, which brought me back full circle: I still needed to be able to access my files from Windows as well as from Linux. I decided to do a search on the internet for options and found out that Windows 10 now lets you mount Linux ext4 filesystems in WSL 2. I haven’t tried it yet, but I will test it over the next few days once I am done with some of my assignments. If there is something interesting I will blog about it in the near future.
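For reference, mounting a physical ext4 partition in WSL 2 comes down to a single command (a sketch from Microsoft’s documentation, which I haven’t tried yet; the drive and partition numbers are placeholders):

# From an elevated PowerShell prompt: attach the disk to WSL 2
# and mount the chosen partition as ext4
wsl --mount \\.\PHYSICALDRIVE1 --partition 7 --type ext4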

As of now, I am back to using ext4 on the laptop and the OS performance is a lot better.

Well, this is all for now. Will post more later.

– Suramya

May 9, 2021

Teaching Cyber Security basics to kids

Filed under: Computer Security,My Thoughts,Tech Related — Suramya @ 8:04 PM

There is an ongoing effort over in Australia to teach cyber security to five-year-old kids. I am sure it will be no surprise to anyone who knows me that I think this is a brilliant idea. Security is a mindset, and the earlier we can teach kids about the pitfalls and dangers online, the safer they will be.

Our generation grew up with the internet, and still I see that most people are not that serious about security. I had a long argument/discussion with Jani on why she had to have a passcode on her phone and why she couldn’t use the same password for everything. Now she understands what I was talking about and uses a password manager with a unique password for each account. But it is not the same with my parents; I still have not managed to convince them to use a password manager. 🙁

A little while ago I was talking to mom and she commented that my nephew Vir doesn’t share his account passwords with anyone, and when my mom is typing her password he looks away. I credit Vinit for teaching him this and am really happy about it. This is what you get when a kid is taught about security from the get-go instead of learning it later as an add-on. Another year or so and I will have him start using a password manager as well.

Habits learnt as a kid are really hard to unlearn, and that is why I think it is really important that we reach kids as early as possible and teach them about cyber security. I mean, we already teach them regular security and safety, so why not cyber security and safety? Remember, they are spending a lot more time on the computer and the internet than we ever did, and they need to be taught how to be careful online.

Well this is all for now. Will post more later.

– Suramya

April 30, 2021

Review and test of Fawkes: Software to protect your pictures from AI/Reverse searches.

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 11:28 PM

Yesterday, I wrote about Fawkes & Photo Ninja, which can be used to protect your photos from facial recognition models and reverse image searches. This is a very interesting field, and I had mentioned the idea of creating a service that does this for free, instead of charging for it the way Photo Ninja does.

The first step was to check whether the program (Fawkes) actually works the way it is supposed to, so I downloaded a pic from the internet (my profile pic on Twitter) and ran it through Fawkes. The program takes a while to run (~20 seconds per image), depending on the number of people in the photo. It detected the faces very reliably and modified the image. With the default settings the output is saved as a PNG file, but you can override that with a command line parameter. It requires you to provide the directory you want to run it against, but if you don’t pass it a directory it doesn’t give any errors; it took me a few minutes to figure out what the issue was (yes, I know… my brain is tired). The command to run it against the current directory with debug output (because I like seeing what the software is doing) is:

./protection --debug --directory .

I then took the resultant file and searched for it via Google Images, Yandex and TinEye. None of them were able to find any results with the new image, so that part of the software works great. 🙂 Now, coming to how the software modifies the image: I saw that it adds two rows of pixelisation to the image. The first is near the hairline and cuts across the hair and forehead, and the second is near the chin and is about 5-10 pixels wide. It is clearly visible in larger photos, but when zoomed out it doesn’t look too jarring. Frankly, it looks like the image got damaged, and it is kind of obvious when you look at it.
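If you want to see exactly which pixels were touched, ImageMagick can generate a difference image (a quick sketch assuming Fawkes’ default *_cloaked.png output naming; the filenames are examples):

# Highlight the pixels that differ between the original and the cloaked image
compare profile.png profile_cloaked.png -compose src diff.png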

In my very basic tests it made the same change every time, so I have a feeling that image recognition software could be trained to look for this modification and ignore it. It might be more powerful to put the modifications at random locations in the image (over the faces), making it harder to train the software to counter it. Plus, it would be great if the visual noise section could be reduced: maybe instead of a long, noticeable blur we could make multiple small changes that alter the pic without making it obvious that the image was modified.

Below are the two images, the original on the left and the modified version on the right.


Sample output of Fawkes

I then looked at running this on my webserver, but due to the restrictions there I wasn’t able to get it to run, although to be honest I only tried for about 20-30 minutes because I was tired. If I can’t get it to run on the server, the other option is to run it on my home computer, but I will need to look at that in more detail before I commit to making this site. I have a rough draft of the requirements and feature list, but I am still looking at the options before I start working on it. It will be a good way to take my mind off what is going on in the world, so that is good.

Well this is all for now. Will keep you posted on how this project goes.

– Suramya

April 8, 2021

Moving a Windows install to another drive on the same computer shouldn’t be this hard

Filed under: Computer Software,Linux/Unix Related,My Thoughts,Tech Related — Suramya @ 11:27 PM

I recently bought a new SSD for my laptop because even after upgrading everything else (except the CPU) the system was still slow, and looking at the process list I could see that it was mostly waiting on disk reads/writes, which was causing the slowness. Once I got the new drive, I had to move the existing OS installs from the old disk to the new one. I have three operating systems (OSs) on the disk: Windows, Debian and Kali. I need Windows for my classes (my proctored exams have to be taken on a Windows machine) and the others are for my tinkering and general-use computing. The disk layout on the old drive was as follows:

root@Wyrm:~# fdisk -l
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000LM024 HN-M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0f04ad34

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2048   1126399   1124352   549M  7 HPFS/NTFS/exFAT
/dev/sda2         1126400 102402047 101275648  48.3G  7 HPFS/NTFS/exFAT
/dev/sda3       102402048 135956479  33554432    16G 82 Linux swap / Solaris
/dev/sda4       135956480 468862127 332905648 158.7G  5 Extended
/dev/sda5       135958528 175017985  39059458  18.6G 83 Linux
/dev/sda6       175022080 237936641  62914562    30G 83 Linux
/dev/sda7       237940736 468862127 230921392   675G 83 Linux

I partitioned the new disk as a copy of the old drive, except for the data partition, which was smaller because the new disk was smaller. I used dd to clone each partition onto the corresponding new partition using the following command (where sdb was the new drive):

dd if=/dev/sda1 of=/dev/sdb1 bs=2k

Once I had copied the partitions over, all I had to do was refresh the GRUB boot loader config using the following command:

update-grub

After the config was updated, I was able to boot into Linux from both my Debian and Kali partitions on the new drive. However, that didn’t work for Windows. It gave me a screen full of random characters, like what you see when you open a binary file in a text editor, and refused to boot. Thankfully I had not deleted the old Windows partition, so I was able to try a few more things, but *nothing* worked. Windows would just refuse to boot from the new drive. The only solution I found that could potentially have worked was a paid program that supposedly lets you clone your Windows install onto new disks/computers. Since I didn’t want to spend money on something I should have been able to do for free, I didn’t try it.
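One possibility I learnt about later: on BIOS/MBR systems Windows locates its boot volume using the 4-byte disk signature stored in the MBR, which a partition-by-partition dd clone never copies. Copying the signature across might have helped (an untested sketch; note that Windows flags a collision if two connected disks share a signature):

# Copy the 4-byte MBR disk signature at offset 440 from the old disk (sda)
# to the new disk (sdb)
dd if=/dev/sda of=/dev/sdb bs=1 count=4 skip=440 seek=440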

In the end, after wasting a lot of time on this, I was tired of trying various things so I just decided to reinstall Windows on the new drive. It wasn’t a major loss, because I didn’t have much data on Windows, but I still dislike the fact that I had to do it just to put in a new drive. Imagine the hoops I would have had to jump through if I wanted to move to a new computer. Actually, I don’t have to imagine: I did jump through them when I moved my install from my old laptop to this one.

My Linux install on the laptop is an exact clone of my desktop install. I used dd to create an image of the Linux install on the desktop and then wrote the image to the laptop. It worked perfectly fine on the first try. All I had to change was the hostname, so that my DHCP server didn’t have a nervous breakdown, but other than that everything worked without a single problem. Even the graphics drivers auto-adjusted on the new machine. Imagine if we could do the same thing with a Windows install.
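For the curious, that clone was nothing fancier than dd at both ends plus a one-line hostname change (a rough sketch; the device names and mount points are examples, not the exact ones I used):

# On the desktop: image the Linux root partition to an external drive
dd if=/dev/sda5 of=/mnt/external/root.img bs=4M status=progress

# On the laptop: write the image back to the target partition
dd if=/mnt/external/root.img of=/dev/sdb5 bs=4M status=progress

# Mount the restored partition and give the laptop its own hostname
mount /dev/sdb5 /mnt/newroot
echo laptop-hostname > /mnt/newroot/etc/hostname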

– Suramya

March 27, 2021

Outrun: Run a local command on a remote server

A lot of times we have to run a command that requires a lot of processing power and is extremely slow on the local computer. I have faced this issue in the past and at times wished there was a way to push these commands to a remote machine with a more powerful CPU. Now, thanks to the efforts of Alexander Overvoorde (Overv), Jakub Wilk and Xiretza, this is possible. They have created a tool called Outrun which lets you execute a local command using the processing power of another Linux machine, without having to install the command on the remote machine.


Sample Execution of ffmpeg on a remote server

The software does have a few limitations, but on the whole it is very cool:

  • You need root (or sudo) access on the remote server, as Outrun needs to run chroot there
  • Both the client and the remote server need to be on the same architecture, so you can’t set up a session from an x86 machine to an ARM machine. This is unfortunate, because the first use case I had for this tool was to run software from my Raspberry Pi on my server as and when it needed more processing power.
  • File system performance remains a bottleneck
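Usage itself is a one-liner: you prefix the command with outrun and the SSH host (a sketch based on the project’s README; the host and file names are placeholders):

# Run ffmpeg using remote-server's CPU while reading and writing local files
outrun user@remote-server ffmpeg -i input.mp4 output.webm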

Check it out if you need to run commands with more CPU cycles than what is available on the local machine.

Thanks to Hacker News for the initial link.

– Suramya
