Suramya's Blog : Welcome to my crazy life…

December 1, 2022

Analysis of the claim that China/Huawei is remotely deleting videos of recent Chinese protests from Huawei phones

Filed under: Computer Hardware,Computer Software,My Thoughts,Tech Related — Suramya @ 2:23 AM

There is an interesting piece of news slowly spreading across the internet over the past few hours: Melissa Chen is claiming on Twitter that Huawei phones are automatically deleting videos of the protests that took place in China, without notifying their owners. Interestingly, I was not able to find any other source reporting this issue. All references/reports of this issue link back to this single tweet, which is not supported by any external validation. The tweet does not even provide enough information to verify that this is happening, other than a single video shared as part of the original tweet.


Melissa Chen claiming on Twitter that videos of protests are being automatically deleted by Huawei without notification

However, it is an interesting exercise to think about how this could have been accomplished, what the technical requirements for it to work would look like, and whether this is something that would actually happen. So let's go ahead and dig in. In order to delete a video remotely, we would need the following:

  • The capability to identify the videos that need to be deleted without impacting other videos/photos on the device
  • The capability to issue commands to the device remotely that all sensitive videos from xyz location taken at abc time need to be nuked, and to monitor the success/failure of those commands
  • The capability to identify the devices that need to have the data on them looked at, keeping in mind that the device could have been in airplane mode during the filming

Now, let's look at how each of these could be accomplished, one at a time.

The capability to identify the videos that need to be deleted without impacting other videos/photos on the device

There are a few ways that we could identify the videos/photos to be deleted. If it was a single video distributed from a single source, then we could have used a hash value of the video to identify and delete it. Unfortunately, in this case each video is recorded by the device itself, so every video file will have a different hash value, and this approach will not work.
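
To illustrate why, here is a minimal sketch with sha256sum (the file names are hypothetical):

sha256sum known_video.mp4 copy_of_known_video.mp4
# Byte-identical copies produce the same hash, so a known file is easy to match.

sha256sum phone_a_recording.mp4 phone_b_recording.mp4
# Two independent recordings of the same scene produce completely different
# hashes, so there is no single value to push out in a delete command.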

The second option is to use the metadata in the file to identify the date & time, along with the physical location, of the videos to be deleted. If videos were recorded within a geo-fenced area in a specific timeframe, then we potentially have the information required to identify the videos in question. The main problem is that the user could have disabled geo-tagging of photos/videos taken by the phone, or the date/time stamp might be incorrect.
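
As a sketch of what such a metadata sweep could look like, here is an exiftool query; the bounding box, dates and path are made-up values for illustration, and this assumes geo-tagging was enabled:

exiftool -r -filename -createdate -gpsposition \
  -if '$gpslatitude# > 31.20 and $gpslatitude# < 31.25' \
  -if '$gpslongitude# > 121.43 and $gpslongitude# < 121.48' \
  -if '$createdate ge "2022:11:26 00:00:00" and $createdate le "2022:11:27 23:59:59"' \
  /sdcard/DCIM
# Multiple -if conditions are ANDed together; the trailing # asks exiftool
# for the numeric value of the tag instead of the formatted string.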

One way to bypass this attempt to save the video would be to have the app/phone create a separate geo-location record of every photo/video taken by the device, even when GPS or geo-tagging is disabled. This would require significant changes to the OS/apps, and since a lot of people have been looking at the code on Huawei phones for issues ever since they were accused of being used by China to spy on the Western world, it is hard to imagine this would have escaped scrutiny.

If the app was saving this data in the video/photo itself rather than in a separate location, then it should be easy enough to validate by examining the image/video data of photos/videos taken by any Huawei phone. But I don’t see any claims/reports proving that this is happening.

The capability to issue commands to the device remotely that all sensitive videos from xyz location taken at abc time need to be nuked, and to monitor the success/failure of those commands

Coming to the second requirement: Huawei or the government would need the capability to remotely activate the functionality that deletes the videos. In order to do this, the phone would either need to connect to a Command & Control (C&C) channel frequently to check for commands, or it would need something listening for remote commands from a central server.

Both of these are hard to disguise and hide. Yes, there are ways to hide data in DNS queries and other such methods to cover one's tracks, but thanks to botnet, malware and ransomware campaigns, the ability to identify hidden C&C channels is highly developed, and it is hard to hide from everyone looking for this. If the phone has something listening for commands, then a scan of the device for open ports/apps listening for connections would be an easy check, and even if the listening app is disguised, it should be possible to identify that something is listening.
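
To sketch what such a check could look like (the phone's IP address is a placeholder, and the last step assumes adb access and toybox netstat on the device):

# Scan all 65535 TCP ports on the phone from another machine on the same network
nmap -p- -T4 192.168.1.50

# Sample the most common UDP ports as well (slower, needs root)
sudo nmap -sU --top-ports 100 192.168.1.50

# Or list listening sockets directly on the device, if adb is available
adb shell netstat -tuln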

You might say that the commands to activate the deletion might be hidden in the normal traffic going between the device and the Huawei servers, and while that is possible, we can check for it by installing a root certificate and passing all the traffic to/from the device through a proxy to be analyzed. Not impossible to do, but hard to achieve without leaving signs, and considering the scrutiny these phones are under, it is hard to accept that this is happening without anyone finding out about it.
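
A minimal sketch of that interception setup using mitmproxy, assuming the phone's Wi-Fi proxy is pointed at the analysis machine:

# Record all proxied traffic to a file for later inspection
# (mitmdump listens on port 8080 by default)
mitmdump -w huawei_flows

# On the phone: set the Wi-Fi proxy to this machine's IP and port 8080,
# then install the mitmproxy CA certificate by visiting http://mitm.it
# Caveat: apps that use certificate pinning will refuse the proxied connection.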

The capability to identify the devices that need to have the data on them looked at (keeping in mind that the device could have been in airplane mode during the filming)

Next, we have the question of how Huawei would identify the devices that need to run the check for videos. One option would be to issue the command to all their phones anywhere in the world. This would potentially be noisy, and there is a possibility that a sharp-eyed user catches the command in action. So a far more likely option would be for them to issue it against a subset of their phones. This subset could be all phones in China, all phones that visited the location in question around the time the protest happened, or all phones that are in or around the location at present.

In order for the system to identify users in an area, there are a few options. One would be to use GPS location tracking, which would require the device to constantly track its location and share it with a central server. Most phones already do this. One potential problem would be users disabling GPS on the device, but other than that this would be an easy request to fulfill. Another option is to use cell tower triangulation to locate/identify the phones in the area at a given time. This is something that is easily done on the provider side, and from what I read it is quite common in China. Naomi Wu AKA RealSexyCyborg had a really interesting thread on this a little while ago that you should check out.

This doesn’t even account for the fact that China has CCTV coverage across most of its jurisdiction and claims to have the ability to run facial recognition across the massive amount of video collected. So it is quite easy for the government to identify the phones that need to be checked for sensitive photos/videos using existing & known technology and capabilities.

Conclusion/Final thoughts

Now, also remember that if Huawei had the ability to issue commands to its phones remotely, then they would also have the ability to extract data from the phones, or to plant information on them. That would be an espionage gold mine, as people use their phones for everything and have them with them at all times. Losing that ability just to delete videos is not something that I feel China/Huawei would do, as the harm caused by the loss of intelligence data would far outweigh the benefits of deleting the videos. Do you really think that every security agency, hacker collective, bored programmer and antivirus/cybersec firm would not immediately start digging into the firmware/apps on every Huawei phone once it was known and confirmed that they were actively deleting content remotely?

So, while it is possible that Huawei/China has the ability to scan and delete files remotely, I doubt that this is the case right now, considering that there are almost no reports of this happening anywhere, there is no independent verification of the claim, and it doesn’t make sense for China to burn this capability for such a minor return.

Keeping that in mind, this post seems more like a joke or fake news to me. That being said, I might be completely mistaken about all this, so if you have additional data or counterpoints to my reasoning above, I would love for you to reach out and discuss this in more detail.

– Suramya

November 18, 2022

Twitter Extract: Downloading data not exposed in the Official Data export

Filed under: Computer Software,Software Releases — Suramya @ 2:46 PM

It looks like Twitter is imploding, and even though I don’t think it will go down permanently, it seemed like a good time to export my data so that I have a local backup available. Usually I just ask Twitter to export my data, but this time I needed additional data, as I was preparing a backup that would work even if Twitter went down completely. The official Twitter export didn’t give me all the data I wanted; specifically, I needed an export of the following, which isn’t included:

  • Owned Lists (including Followers and Members of each list)
  • Subscribed Lists with Followers and Members of each list
  • List of all Followers (ScreenName, Fullname and ID)
  • List of all Following (ScreenName, Fullname and ID)

So I created a script that exports the above. It is available for download at Github: TwitterExtract. Instructions for installation and running are in the ReadMe file.

This was created as a quick and dirty solution, so it is not production-ready (i.e. it doesn’t have a lot of error checking, hardening etc.), but it does what it is supposed to do. Check it out and let me know what you think. Bug fixes and additional features are welcome…
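
If you are curious what this involves under the hood, here is a minimal sketch of the kind of lookup involved, assuming a v2 API bearer token in the environment; the user ID and field list are illustrative, not necessarily what TwitterExtract itself uses:

# Fetch one page of followers (up to 1000) for a given user ID
curl -s -H "Authorization: Bearer $TWITTER_BEARER_TOKEN" \
  "https://api.twitter.com/2/users/123456/followers?max_results=1000&user.fields=name,username"
# Follow the meta.next_token value in the response (as pagination_token)
# to page through the rest of the followers.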

– Suramya

November 15, 2022

Extracting Firefox Sites visited for archiving

Filed under: Computer Software,Linux/Unix Related,Tech Related — Suramya @ 3:01 AM

I have been using Firefox since its first version (0.1) launched back in 2002. At that time it was called Phoenix, but due to a trademark claim from Phoenix Technologies it was renamed to Firebird, and then later to Firefox. Over the years I have upgraded in place, so I had assumed that all my browser history etc. was still safely stored in the browser. A little while ago I realized that this wasn’t the case, as there is a history page limit defined under about:config. The property is called

places.history.expiration.transient_current_max_pages: 137249

and on my system it is configured for 137249 entries. This was a disappointment, as I wanted to save an archive of the sites I have visited over the years, so I started looking at how to export the history from Firefox on the command line so that I could save it in another location as part of my regular backup. I knew that the history is stored in a SQLite database, so I looked at the contents of the DB using a SQLite viewer. The DB was simple enough to understand, but I didn’t want to reinvent the wheel, so I searched on Google to see if anyone else had already written the queries to extract the data, and found this Reddit post that gave the command to extract the data into a file.

I tried the command out and it worked perfectly, with just one small hitch: the command would not run unless I shut down Firefox, as the DB file was locked by FF. This was a big issue, as it meant that I would have to close the browser every time the backup ran, which is not feasible; the backup process needs to be as transparent and seamless as possible.

Another search for a solution pointed me to this site, which explained how to connect to a locked DB in read-only mode. This was exactly what I needed, so I took the code from there, merged it with the previous version, and came up with the following command:

sqlite3 'file:places.sqlite?immutable=1' "SELECT strftime('%d.%m.%Y %H:%M:%S', visit_date/1000000, 'unixepoch', 'localtime'),
                                                   url FROM moz_places, moz_historyvisits WHERE moz_places.id = moz_historyvisits.place_id ORDER BY visit_date;" > dump.out 

This command gives us output that looks like:

28.12.2020 12:30:52|http://maps.google.com/
28.12.2020 12:30:52|http://maps.google.com/maps
...
...
14.11.2022 04:37:17|https://www.google.com/?gfe_rd=cr&ei=sPvqVZ_oOefI8AeNwZbYDQ&gws_rd=ssl,cr&fg=1

Once the file is created, I back it up with my other files as part of the nightly backup process on my system. In the next phase I am thinking about dumping this data into a PostgreSQL DB so that I can put a UI in front of it that will allow me to browse/search through the history. But for now this is sufficient, as the data is being backed up.
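
For the PostgreSQL idea, a rough sketch of the import step, assuming the pipe-separated format shown above (the database and table names are made up):

createdb browsing_history
psql browsing_history -c "CREATE TABLE history (visited_at text, url text)"
# Load the dump; URLs containing '|' or double quotes would need escaping first
psql browsing_history -c "\copy history FROM 'dump.out' WITH (FORMAT csv, DELIMITER '|')"
# Convert the text timestamps into a proper timestamp column after loading:
psql browsing_history -c "ALTER TABLE history ADD COLUMN visited_ts timestamp"
psql browsing_history -c "UPDATE history SET visited_ts = to_timestamp(visited_at, 'DD.MM.YYYY HH24:MI:SS')"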

I was able to get my browsing history going back to 2012 by restoring the oldest Firefox backup that I have on the system and extracting the data from it. I still have some DVDs with even older backups, so when I get some time I will restore and extract the data from those as well.

Well this is all for now. Will write more later.

– Suramya

October 21, 2022

Disable Dark Theme in the Private Browsing mode in Firefox 106

Filed under: Computer Software,Computer Tips,Knowledgebase,Tech Related — Suramya @ 10:09 AM

A lot of people like dark themes for their apps, but I am not one of them. Dark mode strains my eyes more, so I usually disable it as soon as possible. In the latest Firefox update (v106), Firefox changed a bunch of defaults, and one of the changes is that windows opened in incognito mode now use the dark theme by default. As per the release notes, this was a conscious decision:

We also added a modern look and feel with a new logo and updated it so Private Browsing mode now defaults to dark theme, making it easier to know when you are in Private Browsing mode.

The dark theme really annoys me, so I started looking for ways to disable it. Unfortunately, it can’t be disabled from the UI without changing my default theme (which is set to use the system default), which I didn’t want to do, and a quick internet search didn’t return any useful results. So I decided to check the about:config section to see if there was a hidden setting, and lo and behold, there it was. A quick change disabled the dark theme for Private Browsing mode and things were back to normal.

The steps to disable the dark theme in incognito mode are as follows:

  • Type about:config in the address bar and press Enter.
  • A warning page may appear. Click Accept the Risk and Continue to go to the about:config page.
  • Search for “theme” in the Search preference name box at the top of the page and you will see an entry for “browser.theme.dark-private-windows”.
  • Double-click on the value (“true”) for the entry to change it to false.
  • Once the value shows false, you can close the tab and you are done.


To revert the change, just repeat the steps and set the value back to True.
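
If you prefer scripting the change, the same pref can also be set from a user.js file; a minimal sketch assuming a Linux install, with <profile> standing in for your actual profile directory name:

# Append the pref to user.js in your Firefox profile; it is applied at startup
echo 'user_pref("browser.theme.dark-private-windows", false);' >> ~/.mozilla/firefox/<profile>/user.js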

– Suramya

September 25, 2022

How is everyone ok that Windows is showing advertisements everywhere in the system?

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 11:55 PM

Linux is an open source operating system that is available for free, while Windows is a paid OS that costs a fair bit of money (~$200 per license). One would think that because we are getting Linux for free, we would be the product there. Strangely, it is the other way around: it is Windows that shows me advertisements as if I had gotten it for free, and even more strangely, people seem to be OK with it.

My Linux setup has zero ads pushed to it by the OS; Windows, on the other hand, seems determined to put advertisements wherever it can find some space. For example, you get ads in the Start Menu, on the lock screen, in Windows Explorer etc. If I am paying money for the OS, I don’t want ads pushed at me that I can’t get rid of. The folks over at How-To Geek have a 14-page document explaining how to disable all the built-in advertising in Windows 10, which shows how strongly MS is pushing advertisements on their platform.

This is ridiculous. I would complain about this many ads even on a system that I didn’t pay for, but apparently it is fine for a billion-dollar company to waste my screen real estate, bandwidth and processor power to show me advertisements on an OS that I paid money for. If a system is showing me ads, then it should at least be free, so that there is some excuse for the behavior, similar to what Netflix is doing, where the plan with advertisements in the programming is cheaper than the one without.

What do you think?

– Suramya

August 31, 2022

Thoughts around Coding with help and why that is not a bad thing

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 11:40 PM

It is fairly common for people who have been in the industry a while to complain about how the youngsters don’t know what they are doing, how without all the fancy helpful gadgets/IDEs they wouldn’t be able to do anything, and how things were better the way the complainer does them, because that is how they learnt to do things! The rant below was posted to Hacker News a little while ago in response to a question about Copilot, and I wanted to share some of my thoughts around it. But first, let’s read the rant:

After decades of professional software development, it should be clear that code is a liability. The more you have, the worse things get. A tool that makes it easy to crank out a ton of it, is exactly the opposite of what we need.

If a coworker uses it, I will consider it an admission of incompetence. Simple as that.

I don’t use autoformat, because it gets things wrong constantly. E.g. taking two similar lines and wrapping one but not the other, because of 1 character length difference. Instead I explicitly line my code out by hand to emphasize structure.

I also hate 90% of default linter rules because they are pointless busywork designed to catch noob mistakes.

These tools keep devs stuck in local maxima of mediocrity. It’s like writing prose with a thesaurus on, and accepting every single suggestion blindly.

I coded for 20 years without them, why would I need them now? If you can’t even fathom coding without these crutches, and think this is somehow equivalent to coding in a bare notepad, you are proving my point.

Let’s break this gem down and take it line by line.

After decades of professional software development, it should be clear that code is a liability. The more you have, the worse things get. A tool that makes it easy to crank out a ton of it, is exactly the opposite of what we need.

If a coworker uses it, I will consider it an admission of incompetence. Simple as that.

This is a false premise. There are times when extra code is a liability, but most of the time the boilerplate, error-checking etc. are required. The languages today are more complex than what we had 20 years ago; I know because I have been coding for over 25 years now. It is easy to write Basic/C/C++ code in a notepad and run it; in fact, even for C++ I used the Turbo C++ IDE to write code over 25 years ago… We didn’t have distributed micro-services 20 years ago, and most applications were a simple client-server model. Now we have applications connecting in peer-to-peer models and more. Why would I spend time retyping code that a decent IDE would auto-populate, when I could use that time to actually solve more interesting problems?

This is the kind of developer who would spend days reformatting code manually to look just right instead of coding the application to perform as per specifications.

I don’t use autoformat, because it gets things wrong constantly. E.g. taking two similar lines and wrapping one but not the other, because of 1 character length difference. Instead I explicitly line my code out by hand to emphasize structure.

This is a waste of time that could have been spent working on other projects. I honestly don’t care what the structure looks like as long as it is consistent and reasonably logical. I personally wouldn’t brag about spending time formatting each line just so, but that is just me.

I also hate 90% of default linter rules because they are pointless busywork designed to catch noob mistakes. These tools keep devs stuck in local maxima of mediocrity. It’s like writing prose with a thesaurus on, and accepting every single suggestion blindly.

I am not a huge fan of linters, but it is good practice to use one to catch basic mistakes. Why would I spend manual effort finding basic issues when a system can do it for me automatically?

I coded for 20 years without them, why would I need them now? If you can’t even fathom coding without these crutches, and think this is somehow equivalent to coding in a bare notepad, you are proving my point.

20 years ago we used dial-up modems and didn’t have gigabit network connections. We didn’t have mobile-phone/internet coverage all over the world. Things are changing; we need to change with them.

Why stop at coding with notepad/vi/emacs? You should move back to assembly, because it allows you full control over the code and lets you write it more elegantly without any ‘fluff’ or extra wasted code. Or even better, start coding directly in binary. That will ensure really elegant and tight code. (/s)

I had to work with someone who felt similarly, and it was a painful experience. They were used to writing commands/code in hex to make changes to the system, which worked for the most part but wasn’t scalable, because no one else could do it as well as they could, and they didn’t want to teach others in too much detail because, I guess, it gave them job security. I was asked to come in and create a system that allowed users to make the same changes using a WebUI that was translated to hex in the backend. It saved a ton of hours for the users because it was a lot faster and more intuitive. But this person fought it tooth and nail and did their best to get the project cancelled.

I am really tired of all these folks complaining about the new way of doing things just because that is not how they did things. If things didn’t change and evolve over the years and new things didn’t come in, then we would still be using punch cards or an abacus for computing. 22 years ago, we had a T3 connection at my university, and that was considered state of the art, giving us a blazing speed of up to 44.736 Mbps shared with the entire dorm. Right now, I have a 400 Mbps dedicated connection that is just for my personal home use. Things improve over the years, and we need to keep up-skilling ourselves as well. There are so many examples I can give of things that are possible now which weren’t possible back then… This sort of gatekeeping doesn’t serve any productive purpose and is just a way for people to control access to the ‘elite’ group and make themselves feel better, even though they are not as skilled as the newer folks.

The caveat is that not all new things are good; we need to evaluate and decide. There are a bunch of things that I don’t like about the new systems because I prefer the old ways of doing things, but that doesn’t mean that anyone using the new tools is not a good developer. For example, I still prefer using SVN instead of Git because that is what I am comfortable with; Git has its advantages and SVN has its advantages. It doesn’t mean that I get to tell people who are using Git that they are not ‘worthy’ of being called good developers.

I dare this person to write a chat-bot without any external library/IDE, or to create a peer-to-peer protocol to share data amongst multiple nodes simultaneously, or any of the new protocols/applications in use today that didn’t exist 20 years ago.

Just because you can’t learn new things doesn’t mean that others are inferior. That is your problem, not ours.

– Suramya

August 28, 2022

Debian looking at changing how it handles non-free firmware

Filed under: Computer Software,Linux/Unix Related,Tech Related — Suramya @ 5:38 PM

One of the major problems when installing Debian as a newbie is that if your hardware is not supported by an open (‘free’) driver/firmware, the installer doesn’t install one, and it is then a painful process to download and install the driver, especially if it is for the wireless card. With earlier laptops you could always connect via a network cable to install the drivers, but newer systems often don’t come with an Ethernet port (which I think sucks, BTW), so installing Debian on those systems is a pain.

How this should be addressed is a question that has been debated for a while now. It was even one of the questions Jonathan Carter discussed in his post on ‘How is Debian doing’. There are a lot of people with really strong opinions on the topic, and ‘adulterating’ Debian by allowing non-free drivers to be installed by default has a lot of people up in arms. After a lot of debate on how to resolve the issue, there are three proposals up for vote in September:

Proposals A and B both start with the same two paragraphs:
We will include non-free firmware packages from the “non-free-firmware” section of the Debian archive on our official media (installer images and live images). The included firmware binaries will normally be enabled by default where the system determines that they are required, but where possible we will include ways for users to disable this at boot (boot menu option, kernel command line etc.).

When the installer/live system is running we will provide information to the user about what firmware has been loaded (both free and non-free), and we will also store that information on the target system such that users will be able to find it later. The target system will also be configured to use the non-free-firmware component by default in the apt sources.list file. Our users should receive security updates and important fixes to firmware binaries just like any other installed software.

But Proposal A adds that “We will publish these images as official Debian media, replacing the current media sets that do not include non-free firmware packages,” while Proposal B says those images “will not replace the current media sets,” but will instead be offered alongside them.

And Proposal C? “The Debian project is permitted to make distribution media (installer images and live images) containing packages from the non-free section of the Debian archive available for download alongside with the free media in a way that the user is informed before downloading which media are the free ones.”

Debian is not the most new-user-friendly system out there, and a lot of distributions got popular because they took the Debian base and made it more user-friendly by allowing non-free drivers and firmware. So this is a good move in my opinion. Personally, I feel that option B might be the best option, as it will keep both the purists and the reformers happy. I don’t think option C is a good option at all, as it would be confusing.

Source: Slashdot: Debian Considers Changing How It Handles Non-Free Firmware

– Suramya

August 26, 2022

Using MultiNerf for AI based Image noise reduction

Filed under: Computer Software,Emerging Tech,My Thoughts,Tech Related — Suramya @ 2:58 PM

Proponents of AI constantly come up with claims that frequently don’t hold up to extensive testing; however, the new release from Google Research called MultiNeRF, which runs on RAW image data to generate what photos would have looked like without the noise generated by imaging sensors, seems to be the exception. Looking at the video, it almost looks like magic and appears to work great. Best of all, the code is open source and already released on GitHub under the Apache License. The repository contains the code release for three CVPR 2022 papers: Mip-NeRF 360, Ref-NeRF, and RawNeRF.
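
Getting the code locally is simple; a quick sketch, where the pip step is my assumption, so check the repository’s README for the exact environment setup:

git clone https://github.com/google-research/multinerf.git
cd multinerf
pip install -r requirements.txt  # assumed dependency file; see the README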

TechCrunch has a great writeup on the process, and DIYPhotography has created a video demo (embedded below) that showcases it:


Video Credits: DIYPhotography

I like new tools that make photographs come out better, but I still prefer to take unaltered photos whenever I can. The most alteration/post-processing that I do is cropping and resizing, and even that infrequently. But this would be of great use to professional photographers in conditions that are less than optimal.

– Suramya

August 7, 2022

Winamp is back in action (!) after 9 years of no releases

Filed under: Computer Software,My Thoughts — Suramya @ 11:59 PM

Anyone who was using computers in the late 90’s and 2000’s knows that the best MP3 player of all time was Winamp; it really whips the llama’s ass. First released back in 1997, it spread like wildfire. I used it as my primary music player till I switched to Linux, and even then I used a player that was skinned to look and work like Winamp.

Development of the player was paused back in 2013 and then resumed in 2018. It took 4 years of hard work, and Winamp 5.9 Release Candidate 1 is now available for download. Most of the changes in this version are in the backend, as the code was migrated from Visual Studio 2008 to Visual Studio 2019. This modernizes the whole setup, and the next release will focus on new features.

The only downside is that it is not available for Linux, so I still have to use other software rather than the original. I wonder if it would work under Wine/Crossover? If so, that would be awesome. Let me go try that out and see if it works (I will update this post if it does).

Well this is all for now. Will post more later.

Update (8/8/2022): It Works on Linux! I downloaded and installed the latest RC on Linux using Crossover and it works flawlessly. (Although the preset names are in Chinese for some reason)

– Suramya

August 6, 2022

Post Quantum Encryption: Another candidate algorithm (SIKE) bites the dust

Filed under: Computer Security,Computer Software,Quantum Computing — Suramya @ 8:23 PM

Quantum computing has the potential to make current encryption algorithms obsolete once it gets around to actually being implemented on a large scale. But the cryptographic experts in charge of such things have been working on Post-Quantum Cryptography/Post-Quantum Encryption (PQE) over the past few years to offset this risk. SIKE was one of the KEM algorithms that advanced to the fourth round of the NIST process earlier this year, and it was considered an attractive candidate for standardization because of its small key and ciphertext sizes.

Unfortunately, while that is true, researchers have found that the algorithm is badly broken. Researchers from the Computer Security and Industrial Cryptography group at KU Leuven published a paper over the weekend, “An Efficient Key Recovery Attack on SIDH” (preliminary version), that describes a technique which allows an attacker to recover the encryption keys protecting SIKE-protected transactions in under an hour using a single traditional PC. Since the whole idea behind PQE is to identify algorithms that are stronger than the traditional ones, this immediately disqualifies SIKE from further consideration.

Abstract. We present an efficient key recovery attack on the Supersingular Isogeny Diffie–Hellman protocol (SIDH), based on a “glue-and-split” theorem due to Kani. Our attack exploits the existence of a small non-scalar endomorphism on the starting curve, and it also relies on the auxiliary torsion point information that Alice and Bob share during the protocol. Our Magma implementation breaks the instantiation SIKEp434, which aims at security level 1 of the Post-Quantum Cryptography standardization process currently ran by NIST, in about one hour on a single core.

The attack exploits the fact that SIDH has auxiliary points and that the degree of the secret isogeny is known. The auxiliary points in SIDH have always been an annoyance and a potential weakness, and they have been exploited for fault attacks, the GPST adaptive attack, torsion point attacks, etc.

This is not a bad thing, as the whole testing and validation process is supposed to weed out weak algorithms, and it is better to have them identified and removed now than after their release, when it becomes almost impossible to phase out systems that use the broken/compromised encryption algorithms.

Source: Schneier on Security: SIKE Broken

– Suramya
