Suramya's Blog : Welcome to my crazy life…

January 13, 2023

Fixing autopost to LinkedIn not working via Social Networks Auto-Poster

Filed under: Knowledgebase,Tech Related — Suramya @ 9:57 PM

A few days ago the authentication token that allows Social Networks Auto-Poster to post my blog entries automatically expired (I can only authorize it for a max of 2 months at a time). Usually the fix is quite simple: I go to the account settings in the plugin, click on ‘Authorize Your LinkedIn Account (without Marketing API)’, authenticate with my LinkedIn password, select Yes and I am done. This time, however, when I clicked on the Authorize connection button the system would redirect me to www.suramya.com/blog instead of the plugin page, which meant that the authentication process couldn’t complete. I spent a few hours trying to troubleshoot and for the life of me I couldn’t figure out the problem. I even tried installing another plugin but faced the same issue there as well. In the end I decided to take a break and crash for the night as I was going nowhere.

Today I started looking at the problem again and was about to raise a support ticket with the plugin author to have them take a look at the issue, but decided to check the FAQs first to make sure I hadn’t missed anything obvious. The first entry in the FAQ covered what to do if the plugin redirects to a “Blank Page” or an error page: according to the FAQ this can be caused by certain other plugins, and the fix is to disable the conflicting plugin, authorize, and then re-enable it.

The last plugin I had installed was one that autoposts my blog entries to Mastodon, so I disabled it and tried authorizing again. To my utter delight the system immediately authorized the connection and I was able to make a test post successfully.
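
For anyone who manages their WordPress install from the shell, the same disable/authorize/re-enable dance can also be done with WP-CLI. A rough sketch, with the plugin slug below being just a placeholder (run wp plugin list to find the real name):

# Find the exact slug of the conflicting plugin (the slug used below is a placeholder)
wp plugin list
# Temporarily deactivate it, then authorize the LinkedIn connection from the plugin's settings page
wp plugin deactivate mastodon-autopost
# Once the authorization goes through, re-enable the plugin
wp plugin activate mastodon-autopost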

The moral of the story is that you should always check the documentation when something is not working rather than trying various things randomly.

Well this is all for now. Will post more later.

– Suramya

January 6, 2023

Good developers need to be able to communicate and collaborate and those are not euphemisms for politics and org building

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 11:25 PM

Saw this gem in my Twitter feed a little while ago and had to save it so that I could comment on it.

Twitter screenshot: “Because to some people, in order to be a senior software engineer it’s about politics and org building (perhaps you’ll hear euphemisms communication and collaboration)”

There is a constant theme in programming that good developers are anti-social, can’t be bothered to collaborate and should be left alone so that they can create a perfect product: the so-called 10x developer. This is reinforced by movie portrayals of the genius developer creating something awesome sitting alone in their basement. Unfortunately that is not how real life works, because the 10x developer is a myth. In real life you need to be able to communicate, collaborate and work in a team in order to be successful as a programmer. No single person can create enterprise-level software alone, and even if you could, it needs to be something that people want or need, which means you will have to talk to your users to understand the problems they are facing and then build software that fixes them or makes their lives easier.

At one of my previous companies, my role was to look at new software/systems and bring them into the company. We went to expos, talked to startups, explored the market and found a really cool piece of software that we thought would be extremely useful for the business, so we went back and pitched it. To our shock no one was interested in adopting it because it didn’t address any of the pain points the business was actually facing. We thought it would be useful because we were looking at it from the outside and hadn’t bothered talking to them about what their pain points were. So we sat down with the business and their development teams to understand their setup and find out which problems were the most urgent and painful to fix. After multiple discussions we went out and found a product that addressed a significant pain point for the business, and as soon as we demo’d it, we were asked to expedite getting it validated/approved for installation in their org.

Similarly, one of the startups I was working with around the same time was creating tech to help blind people, and I happened to mention it to the founder of an NGO (Non-Governmental Organization) that works with blind people. His response was that what they were creating was cool, but he wished they would actually talk to some blind people before building tech to help them, because blind people don’t want systems that try to give them sight; they want systems that help them do things without trying to recreate sight.

Coming back to the original point about senior software engineers: it is not their job to work on every part of the project themselves. Their job is to look at the high-level goal, design the architecture and work with the other developers in their team to create the software. Another major task of the senior software engineer is to mentor their juniors, teach them the tricks of the trade and help them grow in their skills and role. I personally believe that I should always be training the people under me so that they can one day replace me and I can move on to more interesting projects. If you make yourself indispensable in your current role and no one can replace you, then you will always be doing the same thing and can never move on. Yes, there is a risk that you might be replaced by a junior and get fired, but that can happen to the 10x developer as well. Personally, I would rather have 10 regular developers than a single 10x developer, as the latter are a pain to work with. They insist on having full control of the entire dev process, refuse to share information that other developers/database/network folks need, and basically become a bottleneck for the entire project.

The way I look at being a senior engineer/architect is that I get to work on the really interesting problems and write code for PoCs (Proofs of Concept) that solve them. Then I can hand off the code to others who can productionalize it, with me providing guidance and support. That’s not to say I wouldn’t get my hands dirty productionalizing the system, but I would rather be solving interesting problems.

Another myth is that the only person who knows the system will never get fired. Over the years I have taken over multiple systems (at least 4 that I can recall for sure) that were originally managed by a single person who refused to collaborate or communicate with the rest of the team. In some cases they were fired and I was asked to take over; in others they were moved to non-critical projects so they stopped being a roadblock. In each case it took us a lot of time to reverse engineer and understand the system, but the effort was worth it so that we could make future changes without fighting with someone over every change or having to call that person for information every time the system gave problems.

Long story short: communication doesn’t equate to politics and collaboration doesn’t equate to org building. If you think they do, then you will be miserable in any mid to large size company. You might get away with it in a startup initially, but not for long: as the team grows you will be expected to work together with other developers/admins (collaborate) to create systems that others want, and for that you will need to communicate with them to ensure what you are making is actually useful.

Well this is all for now. Will write more later.

– Suramya

December 1, 2022

Analysis of the claim that China/Huawei is remotely deleting videos of recent Chinese protests from Huawei phones

Filed under: Computer Hardware,Computer Software,My Thoughts,Tech Related — Suramya @ 2:23 AM

There is an interesting piece of news that has been slowly spreading over the internet in the past few hours: Melissa Chen is claiming on Twitter that Huawei phones are automatically deleting videos of the protests that took place in China, without notifying their owners. Interestingly, I was not able to find any other source reporting this issue. All references/reports link back to this one tweet, which is not supported by any external validation. The tweet itself does not even provide enough information to verify that this is happening, other than a single video shared as part of the original tweet.


Melissa Chen claiming on Twitter that videos of protests are being automatically deleted by Huawei without notification

However, it is an interesting exercise to think through how this could be accomplished, what the technical requirements would look like, and whether this is something that would actually happen. So let’s dig in. In order to delete a video remotely, we would need the following:

  • The capability to identify the videos that need to be deleted without impacting other videos/photos on the device
  • The capability to issue commands to the device remotely (e.g. that all sensitive videos from xyz location taken at abc time need to be nuked) and monitor the success/failure of those commands
  • The capability to identify the devices that need to have their data looked at, keeping in mind that the device could have been in airplane mode during the filming

Now, let’s look at how each of these could be accomplished, one at a time.

The capability to identify the videos that need to be deleted without impacting other videos/photos on the device

There are a few ways we could identify the videos/photos to be deleted. If it were a video from a single source, then we could use a hash value of the file to identify it and then delete it. Unfortunately, in this case the videos in question are recorded on each device, so every video file will have a different hash value, and this approach won’t work.
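
To see why hashing only identifies exact copies: two independent recordings of the same scene produce completely different digests, while two copies of a single forwarded file match exactly. A quick illustration from the shell (the filenames are just placeholders):

# Two independent recordings of the same event: the digests will differ
sha256sum recording-phone-a.mp4 recording-phone-b.mp4
# Two copies of the same forwarded file: the digests will be identical
sha256sum original.mp4 forwarded-copy.mp4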

The second option is to use the metadata in the file to identify the date & time along with the physical location of the video to be deleted. If the videos were recorded within a geo-fenced area in a specific timeframe, then we potentially have the information required to identify the videos in question. The main problem is that the user could have disabled geo-tagging of photos/videos taken by the phone, or the date/time stamp might be incorrect.

One way to defeat that user workaround would be to have the app/phone create a separate geo-location record of every photo/video taken by the device, even when GPS or geo-tagging is disabled. This would require a lot of changes in the OS/app, and since a lot of people have been scrutinizing the code on Huawei phones ever since the accusation that they are being used by China to spy on the western world, it is hard to imagine this would have escaped notice.

If the app were saving this data in the video/photo itself rather than in a separate location, then it should be easy enough to validate by examining the metadata of photos/videos taken by any Huawei phone. But I don’t see any claims/reports that prove this is happening.
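
As a sketch of what such an examination could look like, a tool like exiftool will dump the timestamp and GPS tags that this kind of geo-fence matching would have to rely on. The filename below is a placeholder, and which tags are present depends on the device and its settings:

# Dump all time- and GPS-related metadata tags from a recorded video
exiftool -G1 -time:all -gps:all sample-video.mp4
# ffprobe shows container-level tags such as creation_time, for comparison
ffprobe -hide_banner -show_format sample-video.mp4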

The capability to issue commands to the device remotely (e.g. that all sensitive videos from xyz location taken at abc time need to be nuked) and monitor the success/failure of those commands

Coming to the second requirement, Huawei or the government would need the capability to remotely activate the functionality that deletes the videos. To do this, the phone would either need to connect to a Command & Control (C&C) channel frequently to check for commands, or it would need something listening for remote commands from a central server.

Both of these are hard to disguise and hide. Yes, there are ways to hide data in DNS queries and other such methods to cover one’s tracks, but thanks to botnet, malware and ransomware campaigns the ability to identify hidden C&C channels is highly developed, and it is hard to hide from everyone looking for this. If the phone has something listening for commands, then scanning the device for open ports and apps listening for connections is an easy check, and even if the listening app is disguised it should be possible to identify that something is listening.
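
As a concrete example of the "something listening" check: on a device where you have a shell (a rooted phone over adb, or any Linux box standing in for it), listing listening sockets is a one-liner, and probing from the outside is not much more. The target IP below is a placeholder:

# On the device: list all TCP/UDP sockets in the listening state along with the owning process
ss -tulpn
# From another machine on the same network: scan the device for unexpected open ports
nmap -p- 192.0.2.10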

You might say that the activation commands could be hidden in the normal traffic going to & from the device to the Huawei servers, and while that is possible, we can check for it by installing a root certificate and passing all the traffic to/from the device through a proxy to be analyzed. Not impossible to do, but hard to achieve without leaving signs, and considering the scrutiny these phones are under it is hard to accept that this is happening without anyone finding out about it.
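
A rough sketch of that interception setup using mitmproxy, assuming the phone's Wi-Fi proxy settings are pointed at the machine running it (and noting that apps which pin their certificates will simply refuse to talk through such a proxy):

# Start an intercepting proxy reachable from the phone on the local network
mitmproxy --listen-host 0.0.0.0 --listen-port 8080
# After the first run, install this generated CA certificate on the phone as a trusted root
ls ~/.mitmproxy/mitmproxy-ca-cert.pem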

The capability to identify the devices that need to have their data looked at (keeping in mind that the device could have been in airplane mode during the filming)

Next, we have the question of how Huawei would identify the devices that need to run the check for videos. One option would be to issue the command to all their phones anywhere in the world. This would be noisy, and there is a chance that a sharp-eyed user would catch the command in action. A far more likely option would be to issue it against a subset of their phones: all phones in China, all phones that visited the location in question around the time the protest happened, or all phones in or around that location at present.

In order to identify users in an area, there are a few options. One would be GPS location tracking, which requires the device to constantly track its location and share it with a central server; most phones already do this. One potential problem would be users disabling GPS on the device, but other than that this would be an easy request to fulfill. Another option is to use cell tower triangulation to locate/identify the phones in the area at a given time. This is easily done on the provider side and, from what I have read, is quite common in China. Naomi Wu AKA RealSexyCyborg had a really interesting thread on this a little while ago that you should check out.

This doesn’t even account for the fact that China has CCTV coverage across most of its jurisdiction and claims to have the ability to run facial recognition across the massive amount of video collected. So it is quite easy for the government to identify the phones that need to be checked for sensitive photos/videos using existing, well-known technology and capabilities.

Conclusion/Final thoughts

Also remember that if Huawei had the ability to issue commands to its phones remotely, then they would also have the ability to extract data from the phones or plant information on them. That would be an espionage gold mine, as people use their phones for everything and have them with them at all times. Losing that ability just to delete some videos is not something I feel China/Huawei would do, as the harm caused by the loss of intelligence data would far outweigh the benefit of deleting the videos. Do you really think that every security agency, hacker collective, bored programmer and antivirus/cybersec firm would not immediately start digging into the firmware/apps on every Huawei phone once it was known and confirmed that they are actively deleting content remotely?

So, while it is possible that Huawei/China has the ability to scan and delete files remotely, I doubt that this is happening right now, considering that there are almost no reports of it happening anywhere, there is no independent verification of the claim, and it doesn’t make sense for China to burn this capability for such a minor return.

Keeping that in mind, this post seems more like a joke or fake news to me. That being said, I might be completely mistaken about all this, so if you have additional data or counterpoints to my reasoning above I would love for you to reach out and discuss this in more detail.

– Suramya

November 28, 2022

Internet Archive makes over 500 Palm Pilot apps available online for free

Filed under: Interesting Sites,Tech Related — Suramya @ 5:05 AM

The Palm Pilot was the first ‘smart’ device that I owned, and coincidentally it was the first device that I bought with my own money, so it always has a special place in my heart. I started off with the Palm V and then upgraded to the m505 when it came out. I loved the device and used it almost constantly for a long time. Unfortunately, they made a bunch of bad business decisions and the company collapsed.

Now, the Internet Archive has created an online collection of 565 Palm Pilot apps that you can run in your web browser and on touchscreen devices. The apps are not as sophisticated as what you get nowadays, but they are a blast from the past and some of them have stood up to the passage of time quite well.

Check out the archive at: Software Library: Palm and Palmpilot.
More details on the project: The Internet Archive just put 565 Palm Pilot apps in your web browser

– Suramya

November 19, 2022

I am a speaker at SmartBharat 2022 Conference

Filed under: My Life,Tech Related — Suramya @ 11:56 PM

Happy to announce that I am one of the speakers at SmartBharat 2022 and I will be presenting on “IoT and Opensource: Re-purposing hardware & Improving interoperability”. My session is scheduled for 24 November at 12:30 PM in Hall 2. As a kid I would read EFY regularly, and now I am presenting at one of their conferences, so this is a pretty big deal for me.


You can register for the conference at: https://www.iotshow.in/

If you are coming to the conference, do stop by and say hello; I am planning to be there for all three days. After the conference I will share the slides (and the video if possible) here.

– Suramya

November 15, 2022

Extracting Firefox Sites visited for archiving

Filed under: Computer Software,Linux/Unix Related,Tech Related — Suramya @ 3:01 AM

I have been using Firefox since its first version (0.1) launched back in 2002. At that time it was called Phoenix, but a trademark claim from Phoenix Technologies forced a rename to Firebird, which was later renamed again to Firefox. Over the years I have upgraded in place, so I had assumed that all my browser history etc. was still safely stored in the browser. A little while ago I realized that this wasn’t the case, as there is a history page limit defined in about:config. The property is called

places.history.expiration.transient_current_max_pages: 137249

and on my system it is configured for 137249 entries. This was a disappointment, as I wanted to keep an archive of the sites I have visited over the years, so I started looking at how to export the history from Firefox on the command line so that I could save it in another location as part of my regular backup. I knew that the history is stored in an SQLite database, so I looked at the contents of the DB using an SQLite viewer. The DB was simple enough to understand, but I didn’t want to reinvent the wheel, so I searched on Google to see if anyone else had already written the queries to extract the data and found this Reddit post, which gave the command to dump the data into a file.

I tried the command out and it worked perfectly, with just one small hitch: the command would not run unless I shut down Firefox, as the DB file was locked by FF. This was a big issue, as it meant I would have to close the browser every time the backup ran, which is not feasible; the backup process needs to be as transparent and seamless as possible.

Another search pointed me to this site, which explained how to connect to a locked DB in read-only mode. That was exactly what I needed, so I took the code from there, merged it with the previous version, and came up with the following command:

sqlite3 'file:places.sqlite?immutable=1' "SELECT strftime('%d.%m.%Y %H:%M:%S', visit_date/1000000, 'unixepoch', 'localtime'), url
    FROM moz_places, moz_historyvisits
    WHERE moz_places.id = moz_historyvisits.place_id
    ORDER BY visit_date;" > dump.out

This command gives us output that looks like:

28.12.2020 12:30:52|http://maps.google.com/
28.12.2020 12:30:52|http://maps.google.com/maps
...
...
14.11.2022 04:37:17|https://www.google.com/?gfe_rd=cr&ei=sPvqVZ_oOefI8AeNwZbYDQ&gws_rd=ssl,cr&fg=1

Once the file is created, I back it up with my other files as part of the nightly backup process on my system. In the next phase I am thinking about dumping this data into a PostgreSQL DB so that I can put a UI in front of it to browse/search the history. But for now this is sufficient, as the data is being backed up.
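
For the PostgreSQL phase, the pipe-delimited dump should map fairly cleanly onto a COPY. A rough sketch, with the database and table names being just placeholders (URLs containing quote characters may need extra handling):

# Create a database and a simple two-column table for the history dump
createdb browsing_history
psql browsing_history -c "CREATE TABLE history (visit_time text, url text);"
# Load the pipe-delimited dump produced by the sqlite3 command above
psql browsing_history -c "\copy history FROM 'dump.out' WITH (FORMAT csv, DELIMITER '|')"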

I was able to get my browsing history going back to 2012 by restoring the oldest Firefox backup that I have on the system and extracting the data from it. I still have some DVDs with even older backups, so when I get some time I will restore and extract the data from those as well.

Well this is all for now. Will write more later.

– Suramya

November 9, 2022

FOSS: Asking folks to run their own servers/services is not the answer

Filed under: My Thoughts,Tech Related — Suramya @ 1:06 AM

A few days ago, a discussion was going on in a FOSS (Free and Open Source Software) group that I am part of about Twitter and how it is imploding due to the recent changes. One of the members commented that “Both Twitter and Gmail are private services (not public utilities). Hence FOSS. Hence self-host your blog / email.” This is a very problematic view that is unfortunately quite common amongst techies. They (we) tend to believe that everyone has the time, knowledge, interest and resources to do things the way we do.

In the early 2000s I hosted my site & blog on a VPS (Virtual Private Server) that I maintained on my own. It was a great experience because I got to learn Linux sysadmin skills in a live environment, and I did it for a few years. Then, as my responsibilities and workload increased, I had less time to devote to managing the server, and I also had issues with the cost, so I ended up switching hosting providers and moving to a shared hosting plan. Since I was moving to a different role, I just wanted to host my site without worrying about managing the server, and this move allowed me to do that. I can move back to a VPS if I need to, since I have the technical background and skills to manage it. Expecting everyone to do the same is nonsensical and impractical. I know how long it took me to walk my parents through accessing their email from various computers & phones; just thinking about asking them to manage and secure sendmail/postfix servers is enough to give me nightmares about hours of troubleshooting on the phone.

FOSS is a great thing; it has made life easier and allowed us to retain control of the devices we use to manage a large portion of our lives. However, it is not practical to expect everyone to have the skills to host their own servers. Imagine if other services worked the same way: you would need to run your own sewage treatment plant to process your waste, manage your own power generation, or grow your own food. That sounds pretty nonsensical, right? Which is how you sound when you tell folks to run their own servers for things that shouldn’t need it (like email or social media), unless you only want to communicate with a microscopic portion of the population and feel superior to everyone else.

Our goal is to encourage people to use FOSS whenever possible, and that requires us to make the software usable and stable, with a shallow learning curve and smooth/easy onboarding. If you think people will learn about server configs just to access your product, then you are dreaming. For example, GIMP is awesome software, but its UI sucks, which is why it has been unable to gain popularity and beat Photoshop. One of the great mods for GIMP, GIMPShop, which came out in 2006, modified the UI to make it similar to Photoshop, and people loved it. Other software/systems have the same problem. The most recent example is Mastodon, which is pretty cool software, but the onboarding process that explains how you access it and set up accounts is something I am still confused about. I had a client early in my career who would ask me to “just make it work” when faced with a complicated software setup. She was smart as hell but didn’t have time to waste setting up and configuring software, as that took her away from her core responsibilities.

The general user will go for ease of use, easy onboarding and accessibility. IRC was an amazing protocol, but the clients sucked (I mean, they worked, but they had no mobile clients and were not user-friendly). As the years passed, newer protocols and clients with snazzy UIs came into the picture (e.g. Slack), which enabled them to take over as the communication channel for a lot of communities. We can moan and complain that IRC was much better, but from the end user’s perspective it wasn’t, because it didn’t let them do what they wanted on the devices they wanted to use. Like it or not, mobile is here to stay, and not having a native mobile client made IRC a hard sell. (There are a few clients now, but the damage is done.)

Usability is not a curse word. We need to start embracing making the software/systems we create more user-friendly. I am not saying we should remove the advanced/power-user functionality; I would be one of the first to leave if you did that. A good example of how to balance the two is the approach Firefox takes: a general UI for all users with sensible defaults, plus a configuration setup that allows power users to go in and modify pretty much every aspect of the system.

Coming back to Twitter, the fix for the current situation is not for everyone to run their own servers but to make the existing systems interoperable, the same way email systems are interoperable. Cory Doctorow has a fantastic post, “How to ditch Facebook without ditching your friends”, where he talks about how this could work. It would require pressure (regulatory/government/user) on the companies to adopt this model, but in the long run it would remove the walled gardens that have popped up everywhere and restore the older, more distributed style of internet.

I still need to figure out if I want to join a Mastodon server and if so which one; I will probably look into this later this month once I have some free time.

Well this is all for now. Will post more later.

– Suramya

October 21, 2022

Disable Dark Theme in the Private Browsing mode in Firefox 106

Filed under: Computer Software,Computer Tips,Knowledgebase,Tech Related — Suramya @ 10:09 AM

A lot of people like dark themes for their apps, but I am not one of them. Dark mode strains my eyes, so I usually disable it as soon as possible. In the latest Firefox update (v106), Firefox changed a bunch of defaults, and one of the changes is that windows opened in Private Browsing (incognito) mode now use the dark theme by default. As per the release notes this is a conscious decision:

We also added a modern look and feel with a new logo and updated it so Private Browsing mode now defaults to dark theme, making it easier to know when you are in Private Browsing mode.

The dark theme really annoys me, so I started looking for ways to disable it. Unfortunately, it can’t be disabled from the settings UI without changing my default theme (which is set to use the system default), which I didn’t want to do, and a quick internet search didn’t return any useful results. So I decided to check the about:config section to see if there was a hidden setting, and lo and behold, there it was. A quick change disabled the dark theme for Private Browsing mode and things were back to normal.

The steps to disable the dark theme in incognito mode are as follows:

  • Type about:config in the address bar and press Enter.
  • A warning page may appear. Click Accept the Risk and Continue to go to the about:config page.
  • Search for “theme” in the Search preference name box at the top of the page and you will see an entry for “browser.theme.dark-private-windows”
  • Double-click the value (true) to toggle it to false.
  • The entry should now show the value as false. You can then close the tab and you are done.


To revert the change, just repeat the steps and set the value back to True.
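
If you would rather script the change (for example, to reapply it to a fresh profile), the same preference can be set from a user.js file in the Firefox profile directory, which Firefox reads at every startup. A minimal sketch from the shell; the profile directory name varies per installation, so the path below is just an example:

# Append the preference override to user.js in your profile directory (path is an example)
echo 'user_pref("browser.theme.dark-private-windows", false);' >> ~/.mozilla/firefox/xxxxxxxx.default-release/user.js
# Restart Firefox for the user.js override to take effect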

– Suramya

October 5, 2022

3D Scanning was used over 160 years ago to create photosculptures

Filed under: Interesting Sites,My Thoughts,Tech Related — Suramya @ 1:32 PM

When we talk about 3D scanning we tend to assume it is an emerging technology, and with recent advances it has been growing more and more popular. One use case that is becoming common is scanning sculptures or art installations so that the scans can be published online and then converted for VR or used to 3D print an exact replica. For example, the State Darwin Museum in Europe has been slowly digitizing / 3D scanning its collection, and other museums have been doing the same.

But interestingly, this is not a new technology; it was in use over 160 years ago to create what are known as photosculptures. A recent article on Hackaday.com describes how, starting in 1861, the art of creating realistic, three-dimensional replicas from a series of 24 photos combined into a 3D image became extremely popular. The process was called photosculpture and was invented by François Willème, a French painter, photographer and sculptor.


Example of a photosculpture created using this technique. (PC: University of Virginia: Department of Art)

He perfected the art of taking photos with 24 cameras arranged in a circle around the subject standing in the middle, synchronized so that the images could be combined into a 3D model and projected onto a screen. A pantograph was then used to cut each layer of the projected image out of thin sheets of wood, and the artist would assemble the cuttings into a rough 3D replica of the subject. Once the base was created, they would fill in the details using materials such as bronze, plaster of Paris and terra cotta to create a realistic result.


A visual overview of how Photosculptures were created

This whole process was a lot cheaper and faster than having a sculpture made the traditional way, so it became quite popular with the public for a while. But with competitors patenting their own versions and demand shrinking, he had to shut down the studio by late 1868. Check out the article “More than 100 Years before 3D Printers, We had Photosculpture” for more details on the process; it is quite fascinating.

It made me think about the unspoken assumption that previous generations were not as smart or advanced as we are, and that only in the modern world do we have amazing breakthroughs that wouldn’t have been possible earlier; then you read about inventions and techniques from hundreds of years ago that do the same thing (albeit a bit more crudely) as our modern cutting-edge technologies. A lot of scientific advances were lost over the course of history for various reasons, and sometimes I dream about how the world would have been if we had not lost the Library of Alexandria or Nalanda University, which were among the many institutions destroyed by invaders, with their staff & students slaughtered. Imagine how many advances were lost, how much wisdom was lost over the years due to this…

– Suramya

October 4, 2022

Workaround for VPN Unlimited connection issues with latest Debian

VPNs are a great way to ensure that your communications remain private when using a public internet connection, such as airport or coffee shop WiFi. They are also good for getting access when a site is blocked where you are; for example, in India VideoLan.org, the main site for the VLC media player, has been blocked for a while. I primarily use VPN Unlimited on all my systems, as I have a lifetime subscription, though I also have other VPNs that I use sometimes.

Unfortunately, the native VPN Unlimited application for Linux stopped working a while ago due to a compatibility issue with SSL. When I upgraded to the latest version of Debian back in July 2022, it suddenly failed with the following error message:

vpn-unlimited: symbol lookup error: /lib/libvpnu_private_sdk.so.1: undefined symbol: EVP_CIPHER_block_size

Reinstalling the software didn’t resolve the issue, and neither did searching the internet. When I reached out to support they told me that Debian 11 wasn’t yet supported and they didn’t have an ETA for the new version to be released. They did recommend that I manually create & download an OpenVPN config from their site, which would let me connect to the VPN manually using OpenVPN instead of the app. Unfortunately, the generated config didn’t work either, as it would fail to connect with the following error messages in the logs:

Sep 21 02:56:55 StarKnight NetworkManager[1123]:  [1663709215.0845]vpn[0x559d7fc46900,833a72d8-a08a-474e-a854-c926cd6c694a,"VPN Unlimited"]: starting openvpn
Sep 21 02:56:55 StarKnight NetworkManager[1123]:  [1663709215.0847] audit: op="connection-activate" uuid="833a72d8-a08a-474e-a854-c926cd6c694a" name="VPN Unlimited" pid=2829 uid=1000 result="success"
Sep 21 02:56:55 StarKnight kded5[2780]: org.kde.plasma.nm.kded: Unhandled VPN connection state change: 2
Sep 21 02:56:55 StarKnight kded5[2780]: org.kde.plasma.nm.kded: Unhandled VPN connection state change: 3
Sep 21 02:56:55 StarKnight NetworkManager[233850]: 2022-09-21 02:56:55 WARNING: Compression for receiving enabled. Compression has been used in the past to break encryption. Sent packets are not compressed unless
"allow-compression yes" is also set.
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: DEPRECATED OPTION: --cipher set to 'AES-256-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305). OpenVPN ignores --cipher for cipher negotiations.
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: OpenVPN 2.6_git x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] [DCO]
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: library versions: OpenSSL 3.0.5 5 Jul 2022, LZO 2.10
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: OpenSSL: error:0A00018E:SSL routines::ca md too weak
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: Cannot load certificate file /home/suramya/.local/share/networkmanagement/certificates/E87E7A7D6DA16A89C7B4565273D3A792_hk_openvpn/cert.crt
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: Exiting due to fatal error
Sep 21 02:56:55 StarKnight NetworkManager[1123]:  [1663709215.1095] vpn[0x559d7fc46900,833a72d8-a08a-474e-a854-c926cd6c694a,"VPN Unlimited"]: dbus: failure: connect-failed (1)
Sep 21 02:56:55 StarKnight NetworkManager[1123]:  [1663709215.1095] vpn[0x559d7fc46900,833a72d8-a08a-474e-a854-c926cd6c694a,"VPN Unlimited"]: dbus: failure: connect-failed (1)

After a little more back and forth with the support team (which was extremely responsive and quick), who in turn reached out to their developers, we identified the issue with the OpenVPN config. The fix for this will be deployed to all their servers by the end of this month. In the meantime I was given a workaround that resolved the issue for me: add this line to your OVPN file under the VPN section:

tls-cipher=DEFAULT:@SECLEVEL=0 

More information on this is available in the OpenVPN forum. Keep in mind that this is not a secure configuration: dropping to SECLEVEL=0 tells OpenSSL to accept weak signature algorithms (which is why the “ca md too weak” error goes away), so if you are working on something really sensitive you should use another VPN until the issue is properly fixed rather than relying on this workaround.
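
If you connect with the openvpn command line client rather than importing the profile into NetworkManager, the same override can be passed as a flag instead of editing the file. A rough sketch, with the config filename being just a placeholder:

# Apply the lowered security level for this connection attempt only
sudo openvpn --config vpn-unlimited-profile.ovpn --tls-cipher "DEFAULT:@SECLEVEL=0"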

Anyway, I just wanted to share this here for others who might be running into the same issue. Hope this helps.

– Suramya
