Suramya's Blog : Welcome to my crazy life…

November 15, 2022

Extracting Firefox Sites visited for archiving

Filed under: Computer Software,Linux/Unix Related,Tech Related — Suramya @ 3:01 AM

I have been using Firefox since its first version (0.1) launched back in 2003. At that time it was called Phoenix, but it had to be renamed to Firebird due to a trademark claim from Phoenix Technologies, and was later renamed again to Firefox. Over the years I have upgraded in place, so I had assumed that all my browser history etc. was still safely stored in the browser. A little while ago I realized that this wasn’t the case, as there is a history page limit defined under about:config. The property is called

places.history.expiration.transient_current_max_pages: 137249

and on my system it is configured for 137249 entries. This was a disappointment, as I wanted to save an archive of the sites I have visited over the years, so I started looking at how to export the history from Firefox from the command line so that I could save it in another location as part of my regular backup. I knew that the history is stored in a SQLite database, so I looked at the contents of the DB using a SQLite viewer. The DB was simple enough to understand, but I didn’t want to reinvent the wheel, so I searched on Google to see if anyone else had already written the queries to extract the data and found this Reddit post that gave the command to extract the data into a file.

I tried the command out and it worked perfectly, with just one small hitch: the command would not run unless I shut down Firefox, as the DB file was locked by FF. This was a big issue because it meant I would have to close the browser every time the backup ran, which is not feasible as the backup process needs to be as transparent and seamless as possible.

Another search for a solution pointed me to this site, which explained how to connect to a locked DB in read-only mode. That was exactly what I needed, so I took the code from there, merged it with the previous version and came up with the following command:

sqlite3 'file:places.sqlite?immutable=1' "SELECT strftime('%d.%m.%Y %H:%M:%S', visit_date/1000000, 'unixepoch', 'localtime'),
                                                   url FROM moz_places, moz_historyvisits WHERE moz_places.id = moz_historyvisits.place_id ORDER BY visit_date;" > dump.out 

This command gives us output that looks like:

28.12.2020 12:30:52|http://maps.google.com/
28.12.2020 12:30:52|http://maps.google.com/maps
...
...
14.11.2022 04:37:17|https://www.google.com/?gfe_rd=cr&ei=sPvqVZ_oOefI8AeNwZbYDQ&gws_rd=ssl,cr&fg=1

Once the file is created, I back it up with my other files as part of the nightly backup process on my system. In the next phase I am thinking about dumping this data into a PostgreSQL DB so that I can put a UI in front of it that will allow me to browse/search through the file. But for now this is sufficient as the data is being backed up.
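
When I get to that phase, the import itself should be straightforward. Here is a minimal sketch of what it could look like (the database and table names are placeholders I picked, and the visit time is kept as plain text since the dump uses the DD.MM.YYYY format):

# Create a database and a table, then bulk-load the pipe-delimited dump (names are placeholders)
createdb browsing_history
psql browsing_history -c "CREATE TABLE IF NOT EXISTS firefox_history (visit_time text, url text);"
psql browsing_history -c "\copy firefox_history (visit_time, url) FROM 'dump.out' WITH (FORMAT text, DELIMITER '|')"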

I was able to get my browsing history going back to 2012 by restoring the oldest Firefox backup that I have on the system and extracting the data from it. I still have some DVDs with even older backups, so when I get some time I will restore and extract the data from those as well.

Well this is all for now. Will write more later.

– Suramya

November 9, 2022

FOSS: Asking folks to run their own servers/services is not the answer

Filed under: My Thoughts,Tech Related — Suramya @ 1:06 AM

A few days ago a discussion was going on in a FOSS (Free and Open Source Software) group that I am part of about Twitter and how it is imploding due to the recent changes. One of the members commented that “Both Twitter and Gmail are private services (not public utilities). Hence FOSS. Hence self-host your blog / email.” This is a very problematic view that is unfortunately quite common amongst techies. They (we) tend to believe that everyone has the time, knowledge, interest and resources to do things the way we do.

In the early 2000s I hosted my site & blog on a VPS (Virtual Private Server) which I maintained on my own. It was a great experience because I got to learn Linux sysadmin skills in a live environment, and I did it for a few years. Then, as my responsibilities and workload started increasing, I had less time to devote to managing the server, plus I had issues with the cost, so I ended up moving hosting providers and switching to a shared hosting plan. Since I was moving to a different role, I just wanted to host my site and not worry about managing the server, and this move allowed me to do that. I can move back to a VPS if I need to since I have the tech background and skills to manage it. Expecting everyone to do the same is nonsensical and impractical. I know how long it took me to walk my parents through accessing their email from various computers & phones. Just thinking about asking them to manage and secure sendmail/postfix servers is enough to give me nightmares about hours on the phone troubleshooting.

FOSS is a great thing; it has made life easier and allowed us to retain control of the devices that we use to manage a large portion of our lives. However, it is not practical to expect everyone to have the skills to host their own servers. Imagine if other services worked the same way: you would need to run your own sewage treatment plant to process your waste, manage your own power generation plant, or grow your own food. That sounds pretty nonsensical, right? Which is how you sound when you tell folks to run their own servers for stuff that shouldn’t need it (like email or social media), unless you only want to communicate with a microscopic portion of the population and feel superior to everyone else.

Our goal is to encourage people to use FOSS whenever possible, and that requires us to make the software usable and stable, with a shallow learning curve and smooth, easy onboarding. If you think that people will learn about server configs to access your product then you are dreaming. For example, GIMP is awesome software but its UI sucks, which is why it has been unable to gain popularity and beat Photoshop. One of the great mods for GIMP, GIMPShop, which came out in 2006, modified the UI to make it similar to Photoshop and people loved it. Other software / systems have the same problem as well. The most recent example is Mastodon, which is pretty cool software, but the onboarding process of figuring out how to access it and set up an account is something that still confuses me. I had a client I worked with early in my career who would ask me to “Just make it work” when faced with a complicated software setup. She was smart as hell but didn’t have time to waste on setting up/configuring software, as that took her time away from her core responsibilities.

The general user will go for ease of use; they will go for easy onboarding and accessibility. IRC was an amazing protocol but the clients sucked (I mean they worked, but there were no mobile clients and they were not user-friendly). As the years passed, newer protocols and clients with snazzy UIs came into the picture (e.g. Slack), which enabled them to take over as the communication channel for a lot of communities. We can moan and complain that IRC was much better, but from the end user’s perspective it wasn’t better, because it didn’t allow them to do what they wanted using the devices they wanted to use. Like it or not, mobile is here to stay, and not having a native mobile client made IRC a hard sell. (There are a few clients now, but the damage is done.)

Usability is not a curse word. We need to start embracing making the software/systems we create more user-friendly. I am not saying remove the advanced / power functionality; I would be one of the first to leave if you did that. A good example of how to balance the two is the approach Firefox takes: it has a general UI for all users with sensible defaults, plus a configuration setup that allows power users to go in and modify pretty much every aspect of the system.

Coming back to Twitter, the fix for the current issue is not to run our own servers but to make the existing systems interoperable, the same way email systems are interoperable. Cory Doctorow has a fantastic post, “How to ditch Facebook without ditching your friends”, where he talks about how this could work. It would require pressure (regulatory/government/user) on the companies to adopt this model, but in the long run it would remove the walled gardens that have popped up everywhere and restore the older, more distributed style of the internet.

I still need to figure out if I want to join a Mastodon server and, if so, which one. I will probably look into this later this month once I have some free time.

Well this is all for now. Will post more later.

– Suramya

October 21, 2022

Disable Dark Theme in the Private Browsing mode in Firefox 106

Filed under: Computer Software,Computer Tips,Knowledgebase,Tech Related — Suramya @ 10:09 AM

A lot of people like dark themes for their apps but I am not one of them. Dark mode strains my eyes, so I usually disable it as soon as possible. In the latest Firefox update (v106), Firefox changed a bunch of defaults, and one of the changes is that a window opened in Private Browsing mode now uses the dark theme by default. As per the release notes this is a conscious decision:

We also added a modern look and feel with a new logo and updated it so Private Browsing mode now defaults to dark theme, making it easier to know when you are in Private Browsing mode.

The dark theme really annoys me, so I started looking for ways to disable it. Unfortunately, there was no obvious way to turn it off without changing my default theme (which is set to follow the system default), which I didn’t want to do, and a quick internet search didn’t return any useful results. So I decided to check the about:config section to see if there was a hidden setting and, lo and behold, it was there. A quick change disabled the dark theme for Private Browsing mode and things were back to normal.

The steps to disable the dark theme in Private Browsing mode are as follows:

  • Type about:config in the address bar and press Enter.
  • A warning page may appear. Click Accept the Risk and Continue to go to the about:config page.
  • Search for “theme” in the Search preference name box at the top of the page and you will see an entry for “browser.theme.dark-private-windows”.
  • Double-click the entry’s value (true) to change it to false.
  • The entry’s value should now show false. You can then close the tab and you are done.


To revert the change, just repeat the steps and set the value back to True.
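
If you prefer making the change from the command line, the same preference can be appended to the user.js file in your Firefox profile (a minimal sketch; the profile directory name below is a placeholder and varies per install). Firefox reads user.js at startup, so restart the browser afterwards:

# Append the pref to user.js in your Firefox profile (replace the profile directory with yours)
echo 'user_pref("browser.theme.dark-private-windows", false);' >> ~/.mozilla/firefox/xxxxxxxx.default-release/user.js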

– Suramya

October 5, 2022

3D Scanning was used over 160 years ago to create photosculptures

Filed under: Interesting Sites,My Thoughts,Tech Related — Suramya @ 1:32 PM

When we talk about 3D scanning, we all assume it is one of the emerging technologies, and with the recent advances it has been growing more and more popular. A use case that is becoming popular is scanning a sculpture or art installation so that the scans can be published online and converted to VR or used to 3D print an exact replica. For example, The State Darwin Museum in Europe has been slowly digitizing / 3D scanning its collection. Other museums have been doing the same as well.

But interestingly, this is not a new technology; it was in use over 160 years ago to create what are known as photosculptures. A recent article on Hackaday.com talks about how, starting in 1861, the art of creating realistic, three-dimensional replicas from a series of 24 photos was extremely popular. This process was called photosculpture and was invented by François Willème, a French painter, photographer and sculptor.


Example of a photosculpture created using this technique. (PC: University of Virginia: Department of Art)

He perfected the art of taking photos from 24 cameras arranged in a circle with the subject standing in the middle, synchronizing them to create a 3D model that could be projected on a screen. A pantograph was then used to cut the layers of the picture into thin sheets of wood. The artist would then assemble the cuttings to create a rough 3D replica of the object. Once the base was created, they would fill in the details using materials such as bronze, plaster of Paris and terra cotta to create a realistic result.


A visual overview of how Photosculptures were created

This whole process was a lot cheaper and faster than having a sculpture created the traditional way, so it became quite popular with the public for a while. But with other competitors patenting their own versions and demand falling, Willème had to shut down the studio by late 1868. Check out the article More than 100 Years before 3D Printers, We had Photosculpture for more details on the process; it is quite fascinating.

It made me think about the unspoken assumption that previous generations were not as smart/advanced as we are, and that only in the modern world do we have these amazing breakthroughs that wouldn’t have been possible earlier. Then you read about inventions and techniques from hundreds of years ago that do the same thing (albeit a bit more crudely) as our modern cutting-edge technologies. A lot of scientific advances were lost over the course of history for various reasons, and sometimes I dream about how the world would have been if we had not lost the Library of Alexandria or Nalanda University, which were amongst the many institutes destroyed by invaders and whose staff & students were slaughtered. Imagine how many advances were lost, how much wisdom was lost over the years because of this…

– Suramya

October 4, 2022

Workaround for VPN Unlimited connection issues with latest Debian

VPNs are a great way to ensure that your communication remains private when using a public internet connection, such as airport or coffee shop WiFi. Plus they are good for getting access when a site is blocked where you are; for example, in India VideoLan.org, the main site for the VLC media player, has been blocked for a while. I primarily use VPN Unlimited on all my systems as I have a lifetime subscription, though I also have other VPNs that I use sometimes.

Unfortunately, the native VPN Unlimited application for Linux stopped working a while ago due to a compatibility issue with SSL. When I upgraded to the latest version of Debian back in July 2022 it suddenly stopped working with the following error message:

vpn-unlimited: symbol lookup error: /lib/libvpnu_private_sdk.so.1: undefined symbol: EVP_CIPHER_block_size

Reinstalling the software didn’t resolve the issue, and an internet search didn’t help either. When I reached out to support they told me that Debian 11 wasn’t yet supported and they didn’t have an ETA for the new version to be released. They did recommend that I manually create & download an OpenVPN config from their site that would allow me to connect to the VPN manually using OpenVPN instead of the app. Unfortunately, the generated config didn’t work either, as it would fail to connect with the following error messages in the logs:

Sep 21 02:56:55 StarKnight NetworkManager[1123]:  [1663709215.0845]vpn[0x559d7fc46900,833a72d8-a08a-474e-a854-c926cd6c694a,"VPN Unlimited"]: starting openvpn
Sep 21 02:56:55 StarKnight NetworkManager[1123]:  [1663709215.0847] audit: op="connection-activate" uuid="833a72d8-a08a-474e-a854-c926cd6c694a" name="VPN Unlimited" pid=2829 uid=1000 result="success"
Sep 21 02:56:55 StarKnight kded5[2780]: org.kde.plasma.nm.kded: Unhandled VPN connection state change: 2
Sep 21 02:56:55 StarKnight kded5[2780]: org.kde.plasma.nm.kded: Unhandled VPN connection state change: 3
Sep 21 02:56:55 StarKnight NetworkManager[233850]: 2022-09-21 02:56:55 WARNING: Compression for receiving enabled. Compression has been used in the past to break encryption. Sent packets are not compressed unless
"allow-compression yes" is also set.
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: DEPRECATED OPTION: --cipher set to 'AES-256-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305). OpenVPN ignores --cipher for cipher negotiations.
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: OpenVPN 2.6_git x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] [DCO]
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: library versions: OpenSSL 3.0.5 5 Jul 2022, LZO 2.10
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: OpenSSL: error:0A00018E:SSL routines::ca md too weak
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: Cannot load certificate file /home/suramya/.local/share/networkmanagement/certificates/E87E7A7D6DA16A89C7B4565273D3A792_hk_openvpn/cert.crt
Sep 21 02:56:55 StarKnight nm-openvpn[233850]: Exiting due to fatal error
Sep 21 02:56:55 StarKnight NetworkManager[1123]:  [1663709215.1095] vpn[0x559d7fc46900,833a72d8-a08a-474e-a854-c926cd6c694a,"VPN Unlimited"]: dbus: failure: connect-failed (1)
Sep 21 02:56:55 StarKnight NetworkManager[1123]:  [1663709215.1095] vpn[0x559d7fc46900,833a72d8-a08a-474e-a854-c926cd6c694a,"VPN Unlimited"]: dbus: failure: connect-failed (1)

After a little more back and forth with the support team (which was extremely responsive and quick), who in turn reached out to their developers, we identified the issue with the OpenVPN config. The fix for this will be deployed to all their servers by the end of this month. In the meantime I was given a workaround that resolved the issue for me. To fix the issue, add this line to your OVPN file under the VPN section:

tls-cipher=DEFAULT:@SECLEVEL=0 

More information on this is available in the OpenVPN forum. Keep in mind that this is not a secure configuration, and if you are working on something really top secret you should use another VPN until the issue is actually fixed instead of relying on this workaround.
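
If you want to test the workaround without editing the saved profile, the same cipher string can be passed on the OpenVPN command line (the profile filename below is just a placeholder for the config you generated):

# One-off test of the workaround; replace the filename with your generated profile
sudo openvpn --config vpn-unlimited-generated.ovpn --tls-cipher 'DEFAULT:@SECLEVEL=0'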

I just wanted to share this here for others who might be having the same issue. Hope this helps.

– Suramya

October 3, 2022

Debian to allow non-free firmware in its default installer

Filed under: Linux/Unix Related,My Thoughts,Tech Related — Suramya @ 10:19 AM

One of the problems preventing new users from using Debian is that if your hardware is not supported by an open (‘free’) driver/firmware, the installer doesn’t install any, and it is then a painful process to download and install the driver, especially if it is for the wireless card. On earlier laptops you could always connect via a network cable to install the drivers, but newer systems don’t come with a LAN port (which I think sucks, BTW), so installing Debian on those systems is a pain.

Debian leadership has been debating how to fix this over the past few months, and there was a vote to decide how Debian would handle non-free firmware going forward. Now the voting has completed and the verdict is in: Debian has decided that the official Debian installer media can include firmware that is otherwise not part of the Debian system. The non-free firmware will be automatically installed and activated when the installer determines that it is needed for the OS to function. The setup will notify the user in such cases and provide instructions on how to disable the changes if required.

The Debian Project also makes the following statement:

We will include non-free firmware packages from the “non-free-firmware” section of the Debian archive on our official media (installer images and live images). The included firmware binaries will normally be enabled by default where the system determines that they are required, but where possible we will include ways for users to disable this at boot (boot menu option, kernel command line etc.).

When the installer/live system is running we will provide information to the user about what firmware has been loaded (both free and non-free), and we will also store that information on the target system such that users will be able to find it later. Where non-free firmware is found to be necessary, the target system will also be configured to use the non-free-firmware component by default in the apt sources.list file. Our users should receive security updates and important fixes to firmware binaries just like any other installed software.

We will publish these images as official Debian media, replacing the current media sets that do not include non-free firmware packages.
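
For reference, the new non-free-firmware component will show up as an extra entry in apt’s sources.list on installed systems; it should look something like this (the release name and mirror here are my assumptions):

# Example /etc/apt/sources.list entry with the new component enabled
deb http://deb.debian.org/debian bookworm main non-free-firmware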

This is a great choice and will allow the installer to work pretty seamlessly for most users. I know there are purists who will be shouting and screaming that this is not the ‘true way of free software’, but they will be a minority for the most part. Installers need to be simple while allowing power users more granular control over the process. This change removes a major barrier to the adoption of Debian and makes the lives of millions of system administrators a lot easier.

Source: Slashdot: Debian Considers Changing How It Handles Non-Free Firmware
More details at: Debian Choose A Reasonable, Common Sense Solution To Dealing With Non-Free Firmware

– Suramya

October 2, 2022

Upgrading Debian Unstable – How to avoid obvious problems

Filed under: Computer Tips,Linux/Unix Related,Tech Related — Suramya @ 11:59 PM

If you are using Debian Unstable there is a possibility that your system might not work correctly after an upgrade, because, as the name states, it is an ‘unstable’ distribution that might have bugs. I use it because Debian Stable has older versions of the software available and I want the latest versions where possible. Plus I don’t mind tinkering with the system if things break, so that helps as well. Over the years I have found some easy ways to prevent the most obvious problems when upgrading, and I will share them here.

The first tip is to upgrade the system regularly. You can decide what the frequency of the upgrades is, but do it regularly. I upgrade twice a month, which ensures that the system has the latest updates and is never so far out of sync that I need to download a ton of files for the upgrade. This is very useful when you don’t have much free space available in the root partition, as the longer you wait, the more files you have to download and the less free space you have for the actual upgrade.

Another thing that has helped me a lot is to look at the packages being upgraded, specifically any packages being removed. Don’t upgrade if there are a lot of packages being removed without updated versions being installed. To give an example, I tried upgrading my system yesterday and it told me that it was going to “457 upgraded, 11 newly installed, 297 to remove and 0 not upgraded.” Looking at the packages it was going to remove, I found that if I had blindly allowed the upgrade to proceed it would have ended up uninstalling my entire KDE install, my VPN server and a whole bunch of other stuff. I waited for a day and tried again; the bug that was causing the system to insist on removing KDE during the upgrade had been resolved, and I was able to upgrade successfully.
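
A quick way to review the removals before committing to anything is to do a simulated run first; a small sketch:

# Simulate the dist-upgrade (no changes are made) and show only the packages apt would remove
apt-get -s dist-upgrade | grep '^Remv'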

I also pipe the output from the apt-get dist-upgrade command to a log file so that I have a record of what was changed, and any errors are logged so I can look at them later in case there are issues. The command I use for that is as below:

apt-get dist-upgrade 2>&1 |tee ~suramya/Documents/Suramya/Computer\ Update\ Logs/StarKnight/2022/10032022

I keep all the logs from the upgrades so I can see exactly what was changed on the system and when. It makes it a lot easier to troubleshoot issues caused by an upgrade.
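
If you want the dated log file created automatically, a small wrapper along these lines works (the directory layout mirrors mine; adjust it for your own setup):

#!/bin/bash
# Run the dist-upgrade and tee the output into a log file named after today's date (MMDDYYYY)
LOGDIR="$HOME/Documents/Suramya/Computer Update Logs/StarKnight/$(date +%Y)"
mkdir -p "$LOGDIR"
apt-get dist-upgrade 2>&1 | tee "$LOGDIR/$(date +%m%d%Y)"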

If you have multiple systems, then I recommend you don’t upgrade all of them at the same time. I stagger them by a day or two so that in case of issues I have a working system. This has saved my sanity a few times.

Well, this is all for now. Do share any tips you might have for avoiding issues during an upgrade.

– Suramya

September 25, 2022

How is everyone ok that Windows is showing advertisements everywhere in the system?

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 11:55 PM

Linux is an open source operating system that is available for free, while Windows is a paid OS that costs a fair bit of money (~$200 per license). One would think that because we get Linux for free, we would be the product there. Strangely, it is the other way around: it is Windows that is showing me advertisements as if I got it for free, and even more strangely, people seem to be OK with it.

My Linux setup has zero ads pushed to it by the OS; Windows, on the other hand, seems determined to put advertisements wherever it can find some space. For example, you get ads in the Start Menu, on the lock screen, in Windows Explorer and so on. If I am paying money for the OS I don’t want ads pushed at me that I can’t get rid of. The folks over at How-To Geek have a 14-page document explaining how to disable all the built-in advertising in Windows 10, which shows how strongly MS is pushing advertisements on their platform.

Which is ridiculous. I would complain about this many ads even on a system that I didn’t pay for, but apparently it is fine for a billion dollar company to waste my screen real estate, bandwidth and processor power to show me advertisements on an OS that I paid money for. If a system is showing me ads then they should at least make the OS free so they have some excuse for the behavior, similar to what Netflix is doing, where the plan with advertisements in the programming is cheaper than the one without.

What do you think?

– Suramya

September 24, 2022

Keep your disk temperatures below 40 Deg C to increase their life

Filed under: My Thoughts,Tech Related — Suramya @ 12:18 PM

Over the past few decades since I got my first computer, hard drive failures have been a problem for me: until recently my disks would last a maximum of about 2 years before I started seeing errors & disk failures on them. I tried all the brands, including Seagate, WD and a couple of others, but still had the same issue. It had gotten bad enough that I was looking at buying enterprise hard disks instead of the desktop versions, even though the enterprise versions are a lot more expensive.

Then one day, while randomly looking at the temperature sensors for the system, I noticed that the hard drive temperature for 2 of the disks was at 41 Deg C, and the logs on the disks showed that this was a common occurrence. A quick Google search told me that drives should be kept below 40 Deg C to avoid disk failures. So I opened up the case and added a couple more fans so that I had a constant flow of air over the disks. It took me about 20 mins and I already had the extra fans lying around. With the new fans the disk temperatures dropped to between 33-35 Deg C and I left it at that.

This was about 5 years ago. Today I was running my quarterly SMART scan of all my disks and noticed that the disks have been running for an average of 50k hours now (one of the disks is at 2k, but all the others have been running constantly for a while). The max lifetime hours value for my system is currently 52381 hours -> 2182.5 days -> 5.97 years. That is a massive improvement over the previous average of <2 years. I am sure the same would be true for laptops as well, but it is difficult to add another fan to a laptop so I haven’t tested it. Plus my laptop doesn’t get used as often as my desktop, since I mainly use it while traveling whereas the desktop has been running pretty much 24×7 since I got it.
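
For anyone who wants to check the same numbers on their own drives, smartctl reports both the temperature and the total power-on hours (the device name below is an assumption; use your actual disk):

# Print the drive temperature and total power-on hours from the SMART attribute table
sudo smartctl -A /dev/sda | grep -Ei 'temperature_celsius|power_on_hours'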

This shows that keeping your CPU/devices at the recommended temperature is essential for a longer component life. This is one of the reasons that all data centers are cooled to the degree they are, and any increase in the maintained temperature needs to be carefully tested before implementation.

– Suramya

August 31, 2022

Thoughts around Coding with help and why that is not a bad thing

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 11:40 PM

It is fairly common for people who have been in the industry a while to complain about how the youngsters don’t know what they are doing, how without all the fancy helpful gadgets/IDEs they wouldn’t be able to do anything, and how things were better the way the person doing the complaining does them, because that is how they learnt to do things! The rant below was posted to Hacker News a little while ago in response to a question about Copilot, and I wanted to share some of my thoughts around it. But first, let’s read the rant:

After decades of professional software development, it should be clear that code is a liability. The more you have, the worse things get. A tool that makes it easy to crank out a ton of it, is exactly the opposite of what we need.

If a coworker uses it, I will consider it an admission of incompetence. Simple as that.

I don’t use autoformat, because it gets things wrong constantly. E.g. taking two similar lines and wrapping one but not the other, because of 1 character length difference. Instead I explicitly line my code out by hand to emphasize structure.

I also hate 90% of default linter rules because they are pointless busywork designed to catch noob mistakes.

These tools keep devs stuck in local maxima of mediocrity. It’s like writing prose with a thesaurus on, and accepting every single suggestion blindly.

I coded for 20 years without them, why would I need them now? If you can’t even fathom coding without these crutches, and think this is somehow equivalent to coding in a bare notepad, you are proving my point.

Let’s break this gem down and take it line by line.

After decades of professional software development, it should be clear that code is a liability. The more you have, the worse things get. A tool that makes it easy to crank out a ton of it, is exactly the opposite of what we need.

If a coworker uses it, I will consider it an admission of incompetence. Simple as that.

This is a false premise. There are times when extra code is a liability, but most of the time the boilerplate and error-checking etc. is required. The languages today are more complex than what was there 20 years ago. I know because I have been coding for over 25 years now. It is easy to write Basic/C/C++ code in a notepad and run it; in fact, even for C++ I used the Turbo C++ IDE to write code over 25 years ago… We didn’t have distributed micro-services 20 years ago, and most applications were a simple client-server model. Now we have applications connecting in a peer-to-peer model, etc. Why would I spend time retyping code that a decent IDE would auto-populate when I could use that time to actually solve more interesting problems?

This is the kind of developer who would spend days reformatting the code manually to look just right instead of coding the application to perform as per specifications.

I don’t use autoformat, because it gets things wrong constantly. E.g. taking two similar lines and wrapping one but not the other, because of 1 character length difference. Instead I explicitly line my code out by hand to emphasize structure.

This is a waste of time that could have been spent working on other projects. I honestly don’t care what the structure is as long as it is consistent and reasonably logical. I personally wouldn’t brag about spending time formatting each line just so, but that is just me.

I also hate 90% of default linter rules because they are pointless busywork designed to catch noob mistakes.These tools keep devs stuck in local maxima of mediocrity. It’s like writing prose with a thesaurus on, and accepting every single suggestion blindly.

I am not a huge fan of linters, but it is good practice to use them to catch basic mistakes. Why would I spend manual effort finding basic issues when a system can do it for me automatically?

I coded for 20 years without them, why would I need them now? If you can’t even fathom coding without these crutches, and think this is somehow equivalent to coding in a bare notepad, you are proving my point.

20 years ago we used dialup modems and didn’t have gigabit network connections. We didn’t have mobile phone/internet coverage all over the world. Things are changing. We need to change with them.

Why stop at coding with notepad/vi/emacs? You should move back to assembly, because it gives you full control over the code and lets you write it more elegantly without any ‘fluff’ or extra wasted code. Or, even better, start coding directly in binary. That will ensure really elegant and tight code. (/s)

I had to work with someone who felt similarly and it was a painful experience. He was used to writing commands/code in hex to make changes to the system, which worked for the most part but wasn’t scalable, because there was nobody else who could do it as well as him, and he didn’t want to teach others in too much detail because, I guess, it gave him job security. I was asked to come in and create a system that allowed users to make the same changes using a WebUI that was translated to hex in the backend. It saved a ton of hours for the users because it was a lot faster and more intuitive. But this person fought it tooth and nail and did his best to get the project cancelled.

I am really tired of all these folks complaining about the new way of doing things just because that is not how they did things. If things didn’t change and evolve over the years and new things didn’t come in, we would still be using punch cards or an abacus for computing. 22 years ago we had a T3 connection at my university; it was considered state of the art and gave us a blazing speed of up to 44.736 Mbps that was shared with the entire dorm. Right now, I have a 400 Mbps dedicated connection that is just for my personal home use. Things improve over the years and we need to keep up-skilling ourselves as well. There are so many examples I can give of things that are possible now which weren’t possible back then… This sort of gatekeeping doesn’t serve any productive purpose and is just a way for people to control access to the ‘elite’ group and make themselves feel better, even though they are not as skilled as the newer folks.

The caveat is that not all new things are good; we need to evaluate and decide. There are a bunch of things that I don’t like about the new systems because I prefer the old ways of doing things. It doesn’t mean that anyone using the new tools is not a good developer. For example, I still prefer using SVN instead of Git because that is what I am comfortable with; Git has its advantages and SVN has its advantages. It doesn’t mean that I get to tell people who are using Git that they are not ‘worthy’ of being called good developers.

I dare this person to write a chatbot without any external library/IDE, or create a peer-to-peer protocol to share data amongst multiple nodes simultaneously, or build any of the new protocols/applications in use today that didn’t exist 20 years ago.

Just because you can’t learn new things doesn’t mean that others are inferior. That is your problem, not ours.

– Suramya

