Suramya's Blog : Welcome to my crazy life…

April 14, 2022

Ensure your BCP plan accounts for the Cloud services you depend on going down

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 1:53 AM

Long-time readers of the blog and folks who know me know that I am not a huge fan of putting everything on the cloud, and I have written about this in the past (“Cloud haters: You too will be assimilated” – Yeah Right…). Don’t get me wrong, the cloud does have its uses and advantages (some of them significant), but it is not something you want to get into without serious planning and thought about the risks. You need to ensure that the ROI of the move outweighs the increased risk to your company/data.

One of the major misconceptions about the cloud is that once we put something there we don’t need to worry about backups, uptime, etc. because the service provider takes care of it. This is obviously not true. You need local backups, and you need to ensure that your BCP (Business Continuity Plan) covers what you would do if the provider itself went down and the data on the cloud became unavailable.

You think this is not something that could happen? The nine-day-and-counting outage over at Atlassian begs to differ. On Monday, April 4th at 20:12 UTC, approximately 400 Atlassian Cloud customers experienced a full outage across their Atlassian products. This is just the latest instance of a cloud provider going down and leaving its users in a bit of a pickle, and as per information sent to some clients it might take another two weeks to restore service for all users.

One of our standalone apps for Jira Service Management and Jira Software, called “Insight – Asset Management,” was fully integrated into our products as native functionality. Because of this, we needed to deactivate the standalone legacy app on customer sites that had it installed. Our engineering teams planned to use an existing script to deactivate instances of this standalone application. However, two critical problems ensued:

Communication gap. First, there was a communication gap between the team that requested the deactivation and the team that ran the deactivation. Instead of providing the IDs of the intended app being marked for deactivation, the team provided the IDs of the entire cloud site where the apps were to be deactivated.
Faulty script. Second, the script we used provided both the “mark for deletion” capability used in normal day-to-day operations (where recoverability is desirable), and the “permanently delete” capability that is required to permanently remove data when required for compliance reasons. The script was executed with the wrong execution mode and the wrong list of IDs. The result was that sites for approximately 400 customers were improperly deleted.

To recover from this incident, our global engineering team has implemented a methodical process for restoring our impacted customers.

To give you an idea of how serious this outage is, I will use my personal experience with their products and how they were used at one of my previous companies. Without Jira & Crucible/Fisheye, no one would be able to commit code into the repositories or review existing commits, and teams would not be able to do production/dev releases of any product. With Confluence down, users/teams can’t access the guides/instructions/SOP documents for any of their systems, and folks who use Bitbucket/SourceTree would not be able to push code. This is the minimal-impact scenario; it gets worse for organizations whose CI/CD pipelines and SDLC processes depend on these products.

If the outage were on on-premises servers, the teams could fail over to backup servers and continue, but unfortunately the issue is on the Atlassian side and now everyone just has to wait for it to be fixed.

Code-commit blocks (pre-commit/post-commit hooks etc.) can be disabled, but unless you have local copies of the documentation stored in Confluence you are SOL. We actually faced this issue once with our on-prem install, where the instructions for doing the failover were stored on the very Confluence server that had gone down. We managed to get it back up through a lot of trial and error, but after that all teams were notified that their BCP/failover documentation needed to be kept in multiple locations, including hardcopy.
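As an illustration of the commit-block problem: many teams enforce commit checks through a hook that looks up the Jira ticket referenced in the commit message. Below is a minimal sketch of a commit-msg hook that fails open when Jira is unreachable, so a provider outage doesn’t freeze all commits. The Jira URL is a placeholder and the ticket regex is just an example; this is a sketch of the idea, not a drop-in hook.

#!/usr/bin/env python3
# Sketch of a git commit-msg hook: require a Jira ticket reference,
# but fail open if Jira itself is unreachable (e.g. a cloud outage),
# so the outage doesn't block all commits. JIRA_URL is a placeholder.
import re
import sys
import urllib.error
import urllib.request

JIRA_URL = "https://example.atlassian.net"  # hypothetical Jira Cloud site

def main() -> int:
    with open(sys.argv[1], encoding="utf-8") as f:
        message = f.read()

    match = re.search(r"\b[A-Z][A-Z0-9]+-\d+\b", message)
    if not match:
        print("commit-msg: no Jira ticket reference (e.g. PROJ-123) found")
        return 1

    ticket = match.group(0)
    try:
        # Standard Jira REST endpoint for fetching a single issue.
        with urllib.request.urlopen(
                f"{JIRA_URL}/rest/api/2/issue/{ticket}", timeout=5):
            pass
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"commit-msg: ticket {ticket} does not exist in Jira")
            return 1
        # 401/403 etc.: can't verify, let the commit through.
        print(f"commit-msg: Jira returned {err.code}, skipping validation")
    except (urllib.error.URLError, TimeoutError):
        # Jira is unreachable: fail open rather than blocking work.
        print("commit-msg: Jira unreachable, skipping ticket validation")
    return 0

if __name__ == "__main__":
    sys.exit(main())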

If the companies using Atlassian’s services didn’t prepare for a scenario where Atlassian goes down, then there are a lot of people scrambling right now to keep their businesses and processes running.

To prevent such issues, we should look at setting up systems that take automatic backups of our online systems and store them somewhere else entirely (this can be in the cloud with a different provider, or locally). All documentation should have local copies, and for really critical documents we should ensure hard-copy versions are available. Similarly, we need to ensure that any online repositories are backed up locally or with other providers.
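As a concrete example of such an auto-backup: Confluence Cloud exposes page content over its REST API, so a nightly cron job can pull every page and write it to local disk (or a different provider). Below is a minimal sketch; the site, service account, token and backup directory are placeholders, and a real job would also need to handle attachments, errors, and backup rotation.

#!/usr/bin/env python3
# Sketch: pull all Confluence Cloud pages via the REST API and store
# local copies. SITE, EMAIL, API_TOKEN and BACKUP_DIR are placeholders.
import base64
import json
import pathlib
import urllib.request

SITE = "https://example.atlassian.net"   # hypothetical Confluence Cloud site
EMAIL = "backup-bot@example.com"         # hypothetical service account
API_TOKEN = "change-me"                  # Atlassian API token for that account
BACKUP_DIR = pathlib.Path("/backups/confluence")

auth = base64.b64encode(f"{EMAIL}:{API_TOKEN}".encode()).decode()

def get(url: str) -> dict:
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {auth}"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
start = 0
while True:
    # /wiki/rest/api/content returns pages in batches; expand=body.storage
    # includes each page body in Confluence's storage (XHTML) format.
    data = get(f"{SITE}/wiki/rest/api/content"
               f"?type=page&expand=body.storage&limit=50&start={start}")
    for page in data["results"]:
        out = BACKUP_DIR / f"{page['id']}.html"
        out.write_text(page["body"]["storage"]["value"], encoding="utf-8")
    if len(data["results"]) < 50:
        break
    start += 50
print(f"Backed up pages into {BACKUP_DIR}")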

This is a bad situation to be in, and I sympathize with all the IT staff and teams trying to ensure that their companies’ businesses keep running uninterrupted during this time. The person who ran the script on the Atlassian side, on the other hand, should seriously consider getting some sort of evil-eye charm to protect themselves against all the curses flying their way (I am joking… mostly).

Well this is all for now. Will write more later.

January 27, 2022

New MoonBounce UEFI Bootkit that can’t be removed by replacing the Hard Disk

Filed under: Computer Security,Computer Software,My Thoughts,Tech Related — Suramya @ 1:05 AM

Viruses and malware have evolved a lot in the past two to two-and-a-half decades. I remember the first virus that infected my computer back in 1998: it corrupted the boot sector and the partition table to the point where I couldn’t even format the drive, as it wasn’t detected by the OS. I tried booting from a floppy and running ScanDisk on it (this was on DOS 6.1/Windows 3.1) but it wouldn’t detect the disk; same issue with Norton Disk Doctor (NDD). I was scared to tell my parents that I had broken the new computer, but after a whole night of trying various things based on conversations with friends, suggestions in books, etc., I managed to get NDD to detect the disk and repair the partition table. After that it was a relatively simple task to format the disk and reinstall DOS. Similarly, all the other viruses I encountered could be erased by formatting the disk or replacing it.

There were a few that tried using the BIOS for storing info, but not many. I did once create a prank program that, on every 5th boot, would throw insults at you when you typed a wrong command; the boot counter was kept in the BIOS. But it didn’t have any propagation logic and had to be manually run on each machine, plus it had to be customized by hand for every new BIOS type/version, so it wasn’t something that could spread on its own.

With the malware/viruses that have come out in the past few decades we are seeing more advanced propagation and persistence capabilities, but till now you could still replace an infected drive and start with a clean slate. That has now changed with the new MoonBounce UEFI bootkit, which can’t be removed by replacing the hard drive because it stores itself in the SPI flash memory found on the motherboard. This means the bootkit will remain on the device till the SPI flash is re-flashed or the whole motherboard is replaced, which makes recovering from the infection very difficult and expensive.
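Recovery at this level means working with the SPI flash image itself. As a rough sketch of the kind of check this pushes defenders towards (assuming the open source flashrom utility supports your board’s flash chip, which varies by hardware, and keeping in mind that some flash regions such as NVRAM change legitimately between boots), you could periodically dump the firmware and compare it against a known-good baseline:

#!/usr/bin/env python3
# Sketch: dump the SPI flash with flashrom and compare its hash against a
# recorded known-good value, to spot unexpected firmware modifications.
# Requires root; flashrom's "internal" programmer supports only some boards,
# and regions like NVRAM can change legitimately, so treat this as a
# starting point rather than a reliable detector.
import hashlib
import pathlib
import subprocess
import sys

DUMP = pathlib.Path("/tmp/spi-dump.bin")
BASELINE = pathlib.Path("/var/lib/firmware-baseline.sha256")  # hypothetical

subprocess.run(["flashrom", "-p", "internal", "-r", str(DUMP)], check=True)

digest = hashlib.sha256(DUMP.read_bytes()).hexdigest()
if digest != BASELINE.read_text().strip():
    print("WARNING: firmware image differs from the recorded baseline!")
    sys.exit(1)
print("Firmware matches the recorded baseline.")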

Securelist has a very detailed breakdown of the bootkit which you should check out. The scary part is that this is not the only bootkit that uses this method; there are others, such as ESPecter and FinSpy’s UEFI bootkit, that prove the capability is becoming more mainstream, and we should expect to see more such bootkits in the near future.

Source: Slashdot: New MoonBounce UEFI Bootkit Can’t Be Removed by Replacing the Hard Drive

– Suramya

January 25, 2022

Intentionally breaking popular open source projects for… something

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 10:23 AM

Recently Marak Squires, the developer of the extremely popular npm modules Colors & Faker, decided to intentionally commit changes that broke the modules and brought down thousands of apps worldwide. Initially it was thought that the modules had been hacked, as others have been in the past, but looking at the commit history it was obvious that the changes were committed by the developer themselves. Which brings us to the question: why on earth would someone do something like this? Marak didn’t explicitly state why the changes were made, but considering their past comments it does seem like this was done intentionally:

In November 2020, Marak had warned that he would no longer be supporting big corporations with his “free work” and that commercial entities should consider either forking the projects or compensating the dev with a yearly “six figure” salary.

“Respectfully, I am no longer going to support Fortune 500s ( and other smaller sized companies ) with my free work. There isn’t much else to say,” the developer previously wrote.

“Take this as an opportunity to send me a six figure yearly contract or fork the project and have someone else work on it.

The aftermath of the changes is that npm has revoked the developer’s rights to commit code, their GitHub account has been suspended, and the modules in question have been forked. Now Marak is pleading for his accounts to be reinstated because the issue was supposedly caused by a ‘programming mistake’, which seems like a far-fetched excuse, especially given how they made fun of the problem right after people reported it. That doesn’t seem like the reaction we would see if this were a legitimate mistake.

My guess is that they thought this would play out differently, with companies falling over themselves to give them money/contracts, and didn’t anticipate how it would blow back on them. If I were hiring right now and their resume came up, I would think twice because of this stunt. They have shown that they can’t be trusted, so what is to stop them from making changes to my company’s software and bringing it to a screeching halt because they felt they were not being paid their dues? They have already done it once; what is to stop them from doing it again? This looks like a textbook example of what not to do if you want people to work with you or hire you.

One of the things I heard from detractors of open source software when I was pushing for it in my previous companies was the question of how we can be sure the software will still be there a year from now, and whom we blame if the software breaks and we need help. Stunts like this don’t help improve the image of open source software, and this person is now reaping their just deserts.

The positive side is that because the code is open source, it has already been forked and others have taken over the codebase to ensure we don’t hit similar issues going forward.

– Suramya

January 22, 2022

Malware can now Intercept and fake an iPhone reboot

Filed under: Computer Security,Computer Software,My Thoughts,Tech Related — Suramya @ 1:50 AM

Rebooting has always been a good way to get your system (phone or computer) to a clean start. Some phone malware specifically doesn’t have the ability to persist, so it can be removed just by rebooting the phone (especially on the iPhone). Now researchers from the ZecOps Research Team have figured out how to fake a reboot on an iPhone, which allows malware/surveillance software to spoof the shutdown/reboot of the phone. As you can imagine, this has massive security implications. The first problem is that we can no longer be sure the phone has actually been rebooted, so malware can’t be reliably removed this way. Secondly, some folks shut down their phones while discussing sensitive information; using this technique, attackers can pretend the phone is switched off while it is still on, and eavesdrop using the phone’s camera and mic.

We’ll dissect the iOS system and show how it’s possible to alter a shutdown event, tricking a user that got infected into thinking that the phone has been powered off, but in fact, it’s still running. The “NoReboot” approach simulates a real shutdown. The user cannot feel a difference between a real shutdown and a “fake shutdown.” There is no user-interface or any button feedback until the user turns the phone back “on.”

The problem is exacerbated by the fact that there is no physical way to power the device off. Earlier phone models had removable batteries, which allowed a user to physically pull the battery when they wanted to secure the device. Now the battery is built in, and there is no way to remove it without dismantling the device and voiding your warranty in the process. I have pointed out to various folks over the years that we can no longer be sure a device is truly powered off when we shut it down, precisely because we can’t remove the battery.

A silver lining is that hard reboots (the forced restart triggered with the hardware buttons) appear to be harder to spoof, so if you want to be sure your phone actually restarted, use a hard reboot. Another option is to carry a Faraday bag with you and put your phone inside when you need to be off-grid.

Source: Schneier’s Blog: Faking an iPhone Reboot

– Suramya

January 21, 2022

nerd-dictation: A fantastic Open Source speech to text software for Linux

After a long time searching, I finally found speech-to-text software for Linux that actually works well enough that I can use it for dictating, without having to jump through too many hoops to configure and use it. The software is called nerd-dictation and it is open source. It is fairly easy to set up compared to the other voice-to-text systems available, but still not at a stage where a non-tech-savvy person could install it easily. (There is ongoing effort to fix that.)

The steps to install are fairly simple and documented below for reference:

  • pip3 install vosk
  • git clone https://github.com/ideasman42/nerd-dictation.git
  • cd nerd-dictation
  • wget https://alphacephei.com/kaldi/models/vosk-model-small-en-us-0.15.zip
  • unzip vosk-model-small-en-us-0.15.zip
  • mv vosk-model-small-en-us-0.15 model

nerd-dictation lets you dictate text into whatever software or editor is open, so I can dictate into a Word document, a blog post, or even the command prompt. Previously I had tried software like otter.ai, which actually works quite well but doesn’t let you edit the text as you go: you dictate the whole thing and the system gives you the transcription after you are done, so you have to go back and edit/correct the transcript, which can be a pain for long dictations. This software works more like the Microsoft Dictate feature built into Word. Unfortunately, my Word install on Linux (via Crossover) doesn’t let me use the built-in dictate function, and I have no desire to boot into Windows just to dictate a document.

The commands above download the software into the current directory. I set it up under /usr/local, but where you put it is up to you. I would also recommend installing one of the larger dictionaries/models, which makes the voice recognition a lot more accurate. Do keep in mind that the larger models use a lot more memory, so you need to ensure your computer has enough to support them; the smaller ones can run on systems as small as a Raspberry Pi, so choose based on your system configuration. The models are available here.
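If you want to script the model swap, here is a minimal sketch. The model name and URL follow the pattern of the official model list, and the install path matches the steps above, but verify the current model names on that page before relying on them:

#!/usr/bin/env python3
# Sketch: fetch a larger Vosk model and put it where nerd-dictation
# expects it (a directory named "model"). The model name below is an
# assumption -- check https://alphacephei.com/vosk/models for current names.
import pathlib
import shutil
import urllib.request
import zipfile

MODEL = "vosk-model-en-us-0.22"          # assumed larger English model
BASE = pathlib.Path("/usr/local/nerd-dictation")

archive = BASE / f"{MODEL}.zip"
urllib.request.urlretrieve(f"https://alphacephei.com/vosk/models/{MODEL}.zip",
                           archive)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(BASE)

target = BASE / "model"
if target.exists():
    shutil.rmtree(target)                # replace the old model
(BASE / MODEL).rename(target)
print(f"Installed {MODEL} as {target}")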

The software does have some quirks. When you pause while talking, it treats that as the start of a new sentence, and for some reason it doesn’t put a space after the last word, so unless you’re careful you have to go back and add spaces to all the sentences you dictated, which can get annoying. (I started manually pressing space every time I paused.) Another issue is that it doesn’t automatically capitalize words, such as those at the beginning of a sentence or the word ‘I’. This requires you to go back and edit, but that being said, it still works a lot better than the other software I have used so far on Linux. (For Windows systems, Dragon dictation works quite well but is expensive.) I tested nerd-dictation by dictating this post with it, and for the most part it worked quite well.
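Some of these quirks can be patched over with nerd-dictation’s user configuration hook, which post-processes the recognized text before it is typed out. A minimal sketch, assuming the documented config location and hook name (the exact rules here are just examples of what you could do):

# ~/.config/nerd-dictation/nerd-dictation.py
# Sketch of a nerd-dictation user config. The tool calls
# nerd_dictation_process() on each chunk of recognized text before
# typing it, so the quirks mentioned above can be patched up here.
import re

def nerd_dictation_process(text: str) -> str:
    # Capitalize the standalone word "i" ("i think" -> "I think").
    text = re.sub(r"\bi\b", "I", text)
    # Add the trailing space the recognizer leaves out, so consecutive
    # utterances don't run together.
    if text and not text.endswith(" "):
        text += " "
    return text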

Running the software requires commands on the command line, but I configured shortcut keys to start and stop the dictation, which makes it very convenient to use. Instructions on how to configure custom shortcut keys are available here. If you don’t want to do that, you can start transcription by issuing the following command (assuming the software is installed in /usr/local/nerd-dictation):

/usr/local/nerd-dictation/nerd-dictation begin --vosk-model-dir=/usr/local/nerd-dictation/model  --continuous

This starts the software and tells it that we are going to dictate for a long time. More details on the available options are on the project site. To stop the software, run the following command:

/usr/local/nerd-dictation/nerd-dictation end
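Since starting and stopping are two separate commands, one way to drive them both from a single hotkey is a small toggle script. A sketch is below; the paths match the install location above, but the flag file used to track state is my own assumption:

#!/usr/bin/env python3
# Sketch: toggle nerd-dictation on/off so one hotkey can do both.
# Tracks state with a flag file (an assumption, not part of the tool).
import os
import subprocess

NERD = "/usr/local/nerd-dictation/nerd-dictation"
MODEL = "/usr/local/nerd-dictation/model"
FLAG = "/tmp/nerd-dictation.active"

if os.path.exists(FLAG):
    subprocess.run([NERD, "end"], check=True)
    os.remove(FLAG)
else:
    open(FLAG, "w").close()
    # "begin" keeps running while dictation is active, so don't block on it.
    subprocess.Popen([NERD, "begin", f"--vosk-model-dir={MODEL}",
                      "--continuous"])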

I suggest you try this if you are looking for speech-to-text software on Linux. Well, this is all for now. Will post more later.

Thanks to Hacker News: Nerd-dictation, hackable speech to text on Linux for the link.

– Suramya

June 8, 2021

Great book on Military Cryptanalytics by Lambros Callimahos released to the public

Filed under: Computer Security,Computer Software,My Thoughts,Tech Related — Suramya @ 9:58 PM

I find cryptography and code breaking very interesting, as they have huge implications for cybersecurity. The modern world is built on the presumption that cryptographic algorithms are secure; it is what lets us use the internet, bank online, find love online, and even work online. Cryptography has historically been a field working under heavy classification, and there are many people we don’t know about because their existence and work were classified.

Lambros Callimahos was one such cryptologist; he was good enough that two of his books on Military Cryptanalytics covering code breaking (published in 1977) were blocked from public release till 1992. The third and last volume in the series was blocked from release till December 2020. It is now finally available for download as a PDF file, so you can check it out.

The book covers how code breaking can be used to solve “impossible puzzles”, and one of its key parts is the explanation of how to use cryptodiagnosis to decrypt data that has been encrypted with an unknown algorithm. It has a whole bunch of examples and walks you through the process, which is quite fascinating. I am going to try to get through it over the next few weeks if I can.

Check it out if you’d like to learn more about cryptography.

– Suramya

May 30, 2021

You can now run GUI Linux Apps on Windows 10 natively

Filed under: Computer Software,Linux/Unix Related,Tech Related — Suramya @ 10:17 PM

With the latest update to the Windows Subsystem for Linux (WSL), you can now run Linux GUI applications on Windows natively. This is pretty impressive considering Steve Ballmer famously branded Linux “a cancer that attaches itself in an intellectual property sense to everything it touches” back in 2001. In just 20 years, Microsoft has changed its stance and started adding more Linux functionality to its operating system.

Arguably, one of the biggest, and surely the most exciting update to the Windows 10 WSL, Microsoft has been working on WSLg for quite a while and in fact first demoed it at last year’s conference, before releasing the preview in April… Microsoft recommends running WSLg after enabling support for virtual GPU (vGPU) for WSL, in order to take advantage of 3D acceleration within the Linux apps…. WSLg also supports audio and microphone devices, which means the graphical Linux apps will also be able to record and play audio.

Keeping in line with its developer slant, Microsoft also announced that since WSLg can now help Linux apps leverage the graphics hardware on the Windows machine, the subsystem can be used to efficiently run Linux AI and ML workloads… If WSLg developers are to be believed, the update is expected to be generally available alongside the upcoming release of Windows.

The feature is still only available in Windows 10 Preview Builds but is expected to be released for general use in the near future.

I would love to see the reverse being developed: the ability to install and run Windows applications on Linux natively/officially. There are Wine/Crossover, but they don’t support 100% of applications yet. It would be cool if Microsoft contributed to either of these tools to let people run Windows software on Linux.

I personally use Crossover to run the Office suite and it works great for me (for the most part). The latest version supports Office 365 and most of it works fine, except for Excel, which still has a bit of a problem with large files but works otherwise. This is why I also have Office 2007 installed, where Excel works without issues even with large files.

Compatibility with the MS Office suite is why a lot of users don’t want to switch from Windows to Linux or Mac. OpenOffice/LibreOffice is great, but the UI sucks and the files are not 100% compatible (at least they weren’t the last time I tried), so documents might not look the way you expect when you share them with Office users.

Source: Microsoft doubles down on Windows Subsystem for Linux

– Suramya

May 20, 2021

Thoughts on NVIDIA crippling cryptocurrency mining on some of its cards

Filed under: Computer Security,Computer Software,My Thoughts,Tech Related — Suramya @ 8:11 PM

You might have heard the news that NVIDIA has added code to its GPUs that makes them less attractive for cryptocurrency mining, by reducing the efficiency of such computations via a software patch. On one hand this is great news, because it means GPUs will be less attractive to miners and more available for gamers and others. However, I feel this sets a bad precedent: in effect, the company is deciding to control what you do with the card after you have bought it. A similar case would be a restriction in your car purchase to stop you from using it on non-highway roads, or from carrying potatoes in the trunk.

This all comes back to the old story of DRM and how it is used to keep us from actually owning a device. With DRM you are essentially renting the device, and if you do anything the owning corporation doesn’t agree with, you are in for a fun time at the local jail. DRM/DMCA is already being used to block farmers from fixing their farm equipment, medical professionals from fixing their medical equipment, and a whole lot more.

Cory Doctorow has a fantastic writeup on how DRM works and the problems it causes. DRM does not support innovation; it actually enforces the status quo, because it is illegal to bypass it.

I have an old X-Box sitting in my closet collecting dust. I want to run Linux on it, but that would require me to break the law, because I would need to bypass the DRM protections to install a new OS. Today we are okay with them blocking cryptocurrency mining; what if tomorrow the company gets into a fight with a game studio and decides to degrade that game’s performance because the studio didn’t pay a fee for full performance? What if they decide to charge a subscription fee to get full performance from the device? What is to stop them from degrading or crippling any other activity they don’t agree with, whenever they feel like it? The law is on their side: laws like the DMCA make it illegal to bypass the protections they have placed around the device.

This is a slippery slope and we can’t trust the corporations to have our best interest at heart when there is money to be made.

There is more discussion on this happening over at HackerNews. Check it out.

– Suramya

May 17, 2021

IBM’s Project CodeNet: Teaching AI to code

Filed under: Computer Software,Emerging Tech,My Thoughts,Tech Related — Suramya @ 11:58 PM

IBM recently launched a new program called Project CodeNet, an open source dataset that will be used to train AI to better understand code. The idea is to automate more of the engineering process by applying artificial intelligence to the problem. This is not the first project to do this and it won’t be the last. For some reason AI has become the cure-all for every ‘ill’ in any part of life: whether it is needed or not, if there is a problem, someone out there is trying to apply AI and machine learning to it.

This is not to say that artificial intelligence shouldn’t be explored and developed. It has its uses, but it doesn’t need to be applied everywhere. At one of my previous companies we interacted with a lot of vendors pitching their products to us, and at our last outing to a conference over 90% of the ideas pitched involved AI and/or machine learning. It got to the point where we started telling the companies that we knew what AI/ML was and asking them to just explain how they were using it in their product.

Coming back to Project CodeNet: it consists of over 14M code samples and over 500M lines of code in 55 different programming languages. The dataset is high quality and curated. It contains samples from open programming competitions, and not just the code: it also includes the problem statements and sample input and output files, along with details like code size, memory footprint, and CPU run time. Having this curated dataset will allow developers to benchmark their software against a standard and improve it over time.

Potential use cases for the project include code search and clone detection, automatic code correction, regression studies, and prediction.

Press release: Kickstarting AI for Code: Introducing IBM’s Project CodeNet

– Suramya

May 14, 2021

NTFS has a massive performance hit on Linux compared to ext4

Filed under: Computer Software,Linux/Unix Related,My Thoughts,Tech Related — Suramya @ 12:47 PM

NTFS has long been a nemesis of Linux. I remember that in the 2000s, getting NTFS working on Linux required so much effort and so many config changes that I stopped using it on my systems, as FAT32 was more than sufficient for my needs at the time. Initially the driver was very unstable, and it was recommended that you use it only for read operations rather than read/write, as there was a high probability of data corruption. That has changed over the years and the driver is now stable. However, there is a massive performance hit when using NTFS vs ext4 on a Linux machine, and I saw this first-hand when I tried using an NTFS partition on my laptop instead of ext4.

I have a 1 TB drive in my laptop along with an SSD. I dual-boot the laptop (I need Windows for my classes) between Windows & Debian and wanted all my files available on both OSs. When I last tried this, ext support on Windows was not that great (and I didn’t feel like searching for options), so I decided to format the drive to NTFS so I would have access to the files on both. The formatting took ages, and once the drive was ready I copied my files from the desktop to the laptop. While the files were being copied I noticed very high CPU usage on the laptop, and the UI was lagging randomly. Since I was busy with other stuff, I let it be and ignored it.

Yesterday I was moving files around on the laptop so the root partition would have enough space for an upgrade, and I again noticed that file copies and most disk operations were taking way longer than expected. For example, there would be a second of delay when listing a directory with a lot of files. So I decided to test it. The data on the laptop is an exact copy of the files on the desktop, so I timed the same commands on both machines, and there was a significant difference.

My desktop is obviously a lot more powerful than the laptop, so I tried an experiment: run a command on the NTFS drive, then format the drive to ext4 (after copying all the files back) and run the same command again. When I did this, there was a massive difference in runtime: on ext4 the command took less than a second (0.107s), whereas it took almost 34 seconds (33.997s) on the NTFS partition. The screenshots for both commands are below:


[Screenshot: du -hs command on the ext4 partition]

[Screenshot: du -hs command on the NTFS partition]

That’s a ridiculous difference between the two, so I obviously had to switch back to ext4, which brought us full circle – I still needed to access my files from Windows as well as from Linux. I went searching on the Internet for options and found out that Windows 10 now lets you mount Linux ext4 filesystems in WSL 2. I haven’t tried it yet, but I will test it over the next few days once I am done with some of my assignments. If there is anything interesting, I will blog about it in the near future.
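If you want to reproduce a similar comparison on your own machine, here is a minimal timing sketch. The mount points are placeholders for wherever your ext4 and NTFS partitions (holding identical copies of the data) are mounted, and dropping the page cache between runs (which needs root) keeps the cold-cache numbers comparable:

#!/usr/bin/env python3
# Sketch: time `du -hs` on two mount points to compare filesystem overhead.
# The mount points are placeholders; point them at your own ext4 and NTFS
# partitions holding identical copies of the data.
import subprocess
import time

MOUNTS = ["/mnt/ext4-data", "/mnt/ntfs-data"]  # hypothetical paths

def drop_caches() -> None:
    # Flush page/dentry/inode caches so each run starts cold (needs root).
    subprocess.run(["sync"], check=True)
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")

for mount in MOUNTS:
    drop_caches()
    start = time.perf_counter()
    subprocess.run(["du", "-hs", mount], check=True, capture_output=True)
    print(f"{mount}: {time.perf_counter() - start:.3f}s")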

As of now, I am back to using ext4 on the laptop and the OS performance is a lot better.

Well, this is all for now. Will post more later.

– Suramya
