Suramya's Blog : Welcome to my crazy life…

August 31, 2022

Thoughts around Coding with help and why that is not a bad thing

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 11:40 PM

It is fairly common for people who have been in the industry for a while to complain that the youngsters don’t know what they are doing, that without all the fancy helpful gadgets/IDEs they wouldn’t be able to do anything, and that things were better the way the person doing the complaining does them, because that is how they learnt to do things! The rant below was posted to Hacker News a little while ago in response to a question about Copilot, and I wanted to share some of my thoughts around it. But first, let’s read the rant:

After decades of professional software development, it should be clear that code is a liability. The more you have, the worse things get. A tool that makes it easy to crank out a ton of it, is exactly the opposite of what we need.

If a coworker uses it, I will consider it an admission of incompetence. Simple as that.

I don’t use autoformat, because it gets things wrong constantly. E.g. taking two similar lines and wrapping one but not the other, because of 1 character length difference. Instead I explicitly line my code out by hand to emphasize structure.

I also hate 90% of default linter rules because they are pointless busywork designed to catch noob mistakes.

These tools keep devs stuck in local maxima of mediocrity. It’s like writing prose with a thesaurus on, and accepting every single suggestion blindly.

I coded for 20 years without them, why would I need them now? If you can’t even fathom coding without these crutches, and think this is somehow equivalent to coding in a bare notepad, you are proving my point.

Let’s break this gem down and take it line by line.

After decades of professional software development, it should be clear that code is a liability. The more you have, the worse things get. A tool that makes it easy to crank out a ton of it, is exactly the opposite of what we need.

If a coworker uses it, I will consider it an admission of incompetence. Simple as that.

This is a false premise. There are times when extra code is a liability, but most of the time the boilerplate, error-checking etc. is required. The languages today are more complex than what we had 20 years ago. I know because I have been coding for over 25 years now. It is easy to write BASIC/C/C++ code in a notepad and run it; in fact, even for C++ I used the Turbo C++ IDE to write code over 25 years ago… We didn’t have distributed micro-services 20 years ago, and most applications were a simple client-server model. Now we have applications connecting peer-to-peer and more. Why would I spend time retyping code that a decent IDE would auto-populate when I could use that time to actually solve more interesting problems?

This is the kind of developer who would spend days reformatting the code manually to look just right instead of coding the application to perform as per specifications.

I don’t use autoformat, because it gets things wrong constantly. E.g. taking two similar lines and wrapping one but not the other, because of 1 character length difference. Instead I explicitly line my code out by hand to emphasize structure.

This is a waste of time that could have been spent working on other projects. I honestly don’t care what the structure looks like as long as it is consistent and reasonably logical. I personally wouldn’t brag about spending time formatting each line just so, but that is just me.

I also hate 90% of default linter rules because they are pointless busywork designed to catch noob mistakes.

These tools keep devs stuck in local maxima of mediocrity. It’s like writing prose with a thesaurus on, and accepting every single suggestion blindly.

I am not a huge fan of linters, but it is good practice to use them to catch basic mistakes. Why would I spend manual effort finding basic issues when a system can do it for me automatically?
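To be fair, much of what linters flag really is mechanical, which is exactly why automating it makes sense. As a toy illustration (not how any real linter is implemented), here is a minimal unused-import check built on Python’s stdlib ast module:

```python
import ast

def find_unused_imports(source: str) -> list[str]:
    """Return names that are imported but never referenced in the module."""
    tree = ast.parse(source)
    imported, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import os.path" binds the top-level name "os"
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(imported - used)

snippet = "import os\nimport sys\nprint(sys.argv)\n"
print(find_unused_imports(snippet))  # ['os']
```

A real linter does far more (scoping, control flow, style rules), but even this five-minute version catches a class of mistake nobody should be hunting for by hand.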

I coded for 20 years without them, why would I need them now? If you can’t even fathom coding without these crutches, and think this is somehow equivalent to coding in a bare notepad, you are proving my point.

20 years ago we used dialup modems and didn’t have gigabit network connections. We didn’t have mobile-phone/internet coverage all over the world. Things are changing, and we need to change with them.

Why stop at coding with notepad/vi/emacs? You should move back to assembly, because it gives you full control over the code and lets you write it more elegantly without any ‘fluff’ or wasted code. Or even better, start coding directly in binary; that will ensure really elegant and tight code. (/s)

I had to work with someone who felt similarly and it was a painful experience. They were used to writing commands/code in hex to make changes to the system, which worked for the most part but wasn’t scalable, because nobody else could do it as well as they could and they didn’t want to teach others in too much detail, I guess because it gave them job security. I was asked to come in and create a system that allowed users to make the same changes using a WebUI that was translated to hex in the backend. It saved a ton of hours for the users because it was a lot faster and more intuitive. But this person fought it tooth and nail and did their best to get the project cancelled.

I am really tired of all these folks complaining about the new way of doing things just because that is not how they did things. If things didn’t change and evolve over the years and new things didn’t come in, we would still be using punch cards or an abacus for computing. 22 years ago, we had a T3 connection at my university; that was considered state of the art and gave us a blazing speed of up to 44.736 Mbps, shared with the entire dorm. Right now, I have a 400 Mbps dedicated connection just for my personal home use. Things improve over the years and we need to keep up-skilling ourselves as well. There are so many examples I can give of things that are possible now which weren’t possible back then… This sort of gatekeeping doesn’t serve any productive purpose; it is just a way for people to control access to the ‘elite’ group and make themselves feel better even though they are not as skilled as the newer folks.

The caveat is that not all new things are good; we need to evaluate and decide. There are a bunch of things I don’t like about the new systems because I prefer the old ways of doing things. That doesn’t mean that anyone using the new tools is not a good developer. For example, I still prefer using SVN over Git because that is what I am comfortable with; Git has its advantages and SVN has its own. It doesn’t mean I get to tell people who are using Git that they are not ‘worthy’ of being called good developers.

I dare this person to write a chat-bot without any external library/IDE, or create a peer-to-peer protocol to share data amongst multiple nodes simultaneously, or any of the new protocols/applications in use today that didn’t exist 20 years ago.

Just because you can’t learn new things doesn’t mean that others are inferior. That is your problem, not ours.

– Suramya

August 30, 2022

Oregon Trail: You can now play the MSDOS version online for free

Filed under: Interesting Sites — Suramya @ 3:37 PM

Oregon Trail is a game that has become a cultural touchstone of its era, with the famous “You have died of dysentery” message that most of us got when we played it. There are multiple versions of the game available, but the original version from Atari and then the DOS versions are the most popular ones. The effort to archive classic games continues, and the DOS version of the game is now available to play online for free. I briefly tried it out using Firefox on Linux and it works great. I did have to consciously decide to stop playing as the game is addictive, so consider yourself forewarned. 🙂

Screenshot from Oregon Trail

The version here is running on FreeDOS, so in theory you should be able to download and play it locally, but from what I could tell this version is online-only. Check it out if you have some free time to kill.

Edit (31st Aug 2022): You can play the 1978 version of Oregon Trail online as well.

– Suramya

August 29, 2022

Can you explain complex ideas using only the top 1k most used English words?

Filed under: Humor,Interesting Sites — Suramya @ 10:27 PM

They say that the best way to gauge whether someone understands a topic in depth is to see if they can explain it in language simple enough for a 5-year-old to understand. Too many people use acronyms and buzzwords to explain things, which just confuses people and makes it harder to figure out what they are talking about. There was an old XKCD comic about explaining something using only the top 1k most commonly used words in English (see the example about a rocket below). There is a book about it as well, called ‘Thing Explainer‘, that I gifted to Vir (my nephew) a while ago. They both (Vir & Sara) love it and still refer to it quite often.

Theo Sanderson was inspired by the idea and has created a website where you can attempt to explain something using only the top 1,000 most commonly used English words.
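The core of such a checker is simple to sketch. A minimal version, assuming the allowed-word list is available as a Python set (the tiny set below is a stand-in for illustration, not the real top-1,000 list):

```python
import re

def disallowed_words(text: str, allowed: set[str]) -> list[str]:
    """Return the words in `text` that are not on the allowed list."""
    # Lowercase and split on anything that isn't a letter or apostrophe
    words = re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())
    return sorted({w for w in words if w not in allowed})

allowed = {"up", "goer", "five", "the", "is", "a", "big", "thing"}  # stand-in list
print(disallowed_words("The rocket is a big thing", allowed))  # ['rocket']
```

The real site presumably also handles plurals and verb forms, which is where it gets fiddly; the sketch above only shows the basic idea.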

Up Goer Five

Writing like this sounds a lot easier than it actually is, and when I tried it, it took me a few attempts to write something that passed the check. Check it out and share your creative writing. 🙂

– Suramya

August 28, 2022

Debian looking at changing how it handles non-free firmware

Filed under: Computer Software,Linux/Unix Related,Tech Related — Suramya @ 5:38 PM

One of the major problems when installing Debian as a newbie is that if your hardware is not supported by an open (‘free’) driver/firmware, the installer doesn’t install any, and it is then a painful process to download and install the driver, especially if it is for the wireless card. On earlier laptops you could always connect via a network cable to install the drivers, but newer systems don’t come with a LAN port (which I think sucks, BTW), so installing Debian on those systems is a pain.

How this should be addressed is a question that has been debated for a while now. It was even one of the questions Jonathan Carter discussed in his post on ‘How is Debian doing’. There are a lot of people with really strong opinions on the topic, and ‘adulterating’ Debian by allowing non-free drivers to be installed by default has a lot of people up in arms. After a lot of debate on how to resolve this, there are three proposals to solve the issue that are up for vote in September:

Proposal A and B both start with the same two paragraphs:
We will include non-free firmware packages from the “non-free-firmware” section of the Debian archive on our official media (installer images and live images). The included firmware binaries will normally be enabled by default where the system determines that they are required, but where possible we will include ways for users to disable this at boot (boot menu option, kernel command line etc.).

When the installer/live system is running we will provide information to the user about what firmware has been loaded (both free and non-free), and we will also store that information on the target system such that users will be able to find it later. The target system will also be configured to use the non-free-firmware component by default in the apt sources.list file. Our users should receive security updates and important fixes to firmware binaries just like any other installed software.

But Proposal A adds that “We will publish these images as official Debian media, replacing the current media sets that do not include non-free firmware packages,” while Proposal B says those images “will not replace the current media sets,” but will instead be offered alongside them.

And Proposal C? “The Debian project is permitted to make distribution media (installer images and live images) containing packages from the non-free section of the Debian archive available for download alongside with the free media in a way that the user is informed before downloading which media are the free ones.”

Debian is not the most newbie-friendly system out there, and a lot of distributions became popular because they took the Debian base and made it more user-friendly by allowing non-free drivers and firmware. So this is a good move in my opinion. Personally, I feel option B might be the best option to keep both the purists and the reformers happy. I don’t think option C is a good option at all, as it would be confusing.

Source: Slashdot: Debian Considers Changing How It Handles Non-Free Firmware

– Suramya

August 26, 2022

Using MultiNeRF for AI-based Image noise reduction

Filed under: Computer Software,Emerging Tech,My Thoughts,Tech Related — Suramya @ 2:58 PM

Proponents of AI constantly make claims that frequently don’t hold up to extensive testing; however, the new release from Google Research called MultiNeRF, which runs on RAW image data to generate what the photos would have looked like without the noise introduced by the imaging sensor, seems to be an exception. Looking at the video it almost looks like magic, and it appears to work great. Best of all, the code is open source and already released on GitHub under the Apache License. The repository contains the code release for three CVPR 2022 papers: Mip-NeRF 360, Ref-NeRF, and RawNeRF.

TechCrunch has a great writeup on the process, and DIYPhotography has created a video demo (embedded below) that showcases it:

Video Credits: DIYPhotography

I like new tools that make photographs come out better, but I still prefer to take unaltered photos whenever I can. The most alteration/post-processing I do on photos is cropping and resizing, and even that infrequently. But this would be of great use to professional photographers shooting in less-than-optimal conditions.

– Suramya

August 23, 2022

Water droplets can turn to Hydrogen Peroxide when hitting a surface

Filed under: Science Related — Suramya @ 3:36 PM

Science and technology are fascinating fields, and every day there are new discoveries that show us how amazing the world around us is and how much more we have to learn. Today I learnt that water can be converted to hydrogen peroxide if the water droplets are small enough. The phenomenon was discovered three years ago by researchers at Stanford University when they sprayed small drops of water onto a special strip of paper that turns blue in the presence of hydrogen peroxide, the main ingredient in bleach. Interestingly, other researchers have had problems replicating the results, so the team spent the past few years trying to understand why and how it happens.

They mixed water with a dye that glows in the presence of hydrogen peroxide and then injected the mixture into microscopic channels made of a glass-like silica, and they found that the liquid glowed while passing through the tubes. They then found that the water contained hydrogen peroxide at a concentration of 0.0019 grams per liter after passing through the channel. The theory is that the liquid takes electrons from the channels, causing the water molecules, which are made of two hydrogen atoms and one oxygen atom, to reconfigure into hydrogen peroxide molecules, which have an additional oxygen.

Contact electrification between water and a solid surface is crucial for physicochemical processes at water–solid interfaces. However, the nature of the involved processes remains poorly understood, especially in the initial stage of the interface formation. Here we report that H2O2 is spontaneously produced from the hydroxyl groups on the solid surface when contact occurred. The density of hydroxyl groups affects the H2O2 yield. The participation of hydroxyl groups in H2O2 generation is confirmed by mass spectrometric detection of 18O in the product of the reaction between 4-carboxyphenylboronic acid and 18O-labeled H2O2 resulting from 18O2 plasma treatment of the surface. We propose a model for H2O2 generation based on recombination of the hydroxyl radicals produced from the surface hydroxyl groups in the water–solid contact process. Our observations show that the spontaneous generation of H2O2 is universal on the surfaces of soil and atmospheric fine particles in a humid environment.

This effect could be a possible explanation for why certain viruses don’t spread as quickly during the high-humidity season of the year, as the hydrogen peroxide being created would act as a disinfectant and kill the viruses. If we can get this to occur consistently, it would be a quick and easy way to do basic disinfection of a high-traffic area. However, before practical implementations can be discussed, there is still a lot of work to be done.

Paper: Water–solid contact electrification causes hydrogen peroxide production from hydroxyl radical recombination in sprayed microdroplets
Source: Water droplets can sometimes turn into bleach when hitting a surface

– Suramya

August 16, 2022

Debian: My Favorite Linux Distro turns 29!

Filed under: Linux/Unix Related,My Thoughts — Suramya @ 9:23 PM

Debian, one of the most popular Linux distributions, which has served as the base for over 100 derivative distributions (see here for a partial list), is celebrating its 29th birthday! I have been using it since 2003, so it’s been 19 years since I started, and I have to say the OS has been improving constantly over the years while keeping its core values and stability.

I have tried other distros in between: Ubuntu, Mint, Knoppix, but I keep coming back to Debian because of its stability and functionality. I do use Kali as the primary OS on my laptop, since I use that for my security research/testing, but all my other systems run Debian. I even managed to get it to work on my tablet. 🙂

One note of caution/advice: always look at the packages being changed/removed when you are upgrading, especially if you are on the unstable branch, as things can break there. Usually if I see that packages I want to keep are being removed, I just wait a few days and the issue gets resolved. It is not the best distribution if you are looking for ‘newbie friendly’, but it is one that will let you learn Linux the fastest. (Linux from Scratch will teach you more about Linux internals than you ever wanted to know, if you can manage to get it to work and have the time required to install and configure it. For me the effort wasn’t worth the gain, but your mileage may vary.)

In any case, I think I will be sticking with Debian for the foreseeable future. Here’s to another 30 years!

– Suramya

August 15, 2022

Wishing everyone a Happy 75th Independence Day!

Filed under: My Thoughts — Suramya @ 8:37 PM

Today marks the 75th anniversary of India freeing itself from British rule. Things have not been perfect (and they can never be), but we have achieved so much in the past 75 years that it is awe-inspiring. Looking back at my memories over the past 40+ years, below are some of the things that have improved/changed in that time:

  • It used to take ~2-3 years to get a new car (Maruti 800) in the early 80’s; now you can get one within a couple of days if it is in stock, and there are hundreds of brands/models to choose from
  • Up until 1975, only seven Indian cities had television service, with a single TV channel. Now every city in India has TV and, as of 2021, there are over 900 permitted television channels
  • The first mobile phone call was made in 1995 and cost over 8 Rs a minute; now we have some of the cheapest call rates in the world
  • We have gone from people having to move to the US for good tech jobs to a thriving tech industry, with India ranked 19th among startup ecosystems globally
  • The inter-city and inter-state highways are getting better and better all the time. On my last road trip, I was consistently doing 120-140 km/h and in fact had to remind myself not to cross 160. (Yes, there is still a lot of scope for improvement)
  • ISRO has gone from a local joke to a powerhouse launching satellites for countries around the world. We also held the world record for the most satellites launched on a single rocket (104), which was broken in 2021 (143). Efforts are underway to take back the title

The list can go on and on. There is a lot of effort put in by folks to make India a success and there is a lot more effort needed to ensure that we keep improving. The mindset of ‘but this is India, nothing can be done’ needs to be changed/eradicated from the thought process.

The top people in India live a life of luxury that was not feasible/imaginable 75 years ago, and we need to ensure that the bottom percentage of our population is uplifted as well. This requires effort not just from the government but from us as a people. We need to work with the government and each other to help uplift people. I see a lot of efforts around us to achieve this: folks in Diamond District (where I stay) are sponsoring the education of multiple kids, others are working with NGOs to teach them basic skills. Some of my friends volunteer with various organizations to help the poor, not just by giving them money but by helping them learn skills so that they can uplift themselves.

We need to improve the tourism industry in India and make people aware that India has more than Delhi, Agra and Jaipur to offer tourists. We have temples, archeological sites, palaces and other locations going back thousands of years. In my travels across India I have seen some of the most beautiful sights, sights that take your breath away, but not many people know about them so hardly anyone visits. We have to advertise these sites/locations more, and we need to make it easier for tourists to come to India and safely see the sights. I love the new ASI-mandated tour guides at the major historical places in India: they are cheap and highly knowledgeable, plus they are registered and carry identification. You should hire them when visiting any of the sites, as they give a great overview of the location and its notable highlights. There are so many things you miss when just walking around on your own that it is worth it to hire these folks. They speak multiple languages, including international ones.

We need to clamp down on corruption and improve the infrastructure further. This includes reducing the bureaucracy and eliminating the ‘Babu’ rule, where government officials think it is their right to demand money to do the job they are already being paid for.

Things are improving and have improved a lot, but we have a long way to go. As Robert Frost put it: “The woods are lovely, dark and deep, But I have promises to keep, And miles to go before I sleep”. The following poem by हिमांशु (Himanshu) captures the sentiment quite beautifully as well:

जाना है अभी तो बहुत दूर,
सूरज चढ़ा क्षितिज पर,
चारों ओर रेत का ढेर,
तपन घोर, तपस घनघोर,
मृगतृष्णा पसरी चहुंओर,

चलते चलो, बढ़ते चलो,
कहानियां बुनते चलो,
खुद से खेलते चलो,
चलते चलो, चलते चलो।

लंबे बहुत हैं रास्ते,
आयेंगे बहुत से दो-रास्ते,
सही चुनना, नहीं गिनना,
आकाश को ही देखना,
अवकाश को हर भूलना,

गिर गए तो उठते चलो,
मौका पड़े तो उड़ते चलो,
चलते चलो, चलते चलो।

मत ढूंढना बरगद,
ना चाहना लंबा खजूर,

छांव दृश्य नहीं,
साथ सिर्फ छाया है,
सिकुड़ती, जलती, ढलती,
काया ही, अब सरमाया है,
जो देखा, जिया, आजमाया,
वही सच है, अशेष है,
बाकी सब माया है,

बस, बरबस,
अतृप्त, असंतृप्त,
अविदित, अविरल,
चलते चलो,
हर मील का
हर मील पर,
जश्न मनाते चलो,

शाम अभी दूर है,
चांद कुछ और दूर है,
चलते चलो, चलते चलो।।


हिमांशु “चहुंओर”

I was planning to post poems in other Indian languages as well but couldn’t decide on one, as I can’t do justice to them (since I can’t read them). I am planning to ask Jani for help getting some good collections to share here, and will update the post once I find a good one.

Map of India outlined by trees with an Indian flag in the center
जय हिन्द! জয় হিন্দ ജയ്_ഹിന്ദ് જય_હિંદ ಜೈ_ಹಿಂದ್ ਜੈ ਹਿੰਦ జై హింద్ ஜெய்_ஹிந்த்

Will post more later.

– Suramya

August 12, 2022

Multiple Linux Live CDs on a single USB Drive

Filed under: Computer Tips,Linux/Unix Related,Tech Related — Suramya @ 6:55 PM

Portable boot disks are a lifesaver for a techie, and I usually carry one with me most of the time (it’s part of my keychain 🙂 ). However, the issue I faced was that I could only carry one live CD at a time on a USB stick, and if I wanted another one I would either have to search for the pen drive where I had already installed it, or burn another image to the drive, which was annoying, especially when I had to switch between OSs frequently.

So I started searching for an alternative: something similar to the Ultimate Boot CD, which let you carry multiple diagnostic tools on one CD, but for live distros and installation media. I tried a bunch of approaches, but the easiest way I found was to use Ventoy to create a bootable USB.

You can download Ventoy from their GitHub Releases page, and installing the tool is as easy as extracting the file to a folder on your system and then running the correct executable for your system (they have executables for all architectures). Once you run the file as root, select the USB disk you want to use and click install. It takes about a minute for the software to install on the drive, and once completed, it creates two partitions on the disk. The small partition, named VTOYEFI, is reserved for Ventoy’s boot files, so ensure that you don’t change anything in that partition.

The other partition, labeled Ventoy, is an exFAT partition, and this is where we copy the ISO files for the distributions we want the disk to support. Installing a new OS/tool is as simple as copying its ISO file onto the partition. Once the files are copied, all you have to do is unmount the partition and your new disk is ready to use.
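If you add ISOs often, the copy step can even be scripted. A minimal sketch, assuming the Ventoy data partition is already mounted at whatever path you pass in (the mount point and helper name are my own placeholders, not something Ventoy provides):

```python
import hashlib
import shutil
from pathlib import Path

def add_iso(iso: Path, ventoy_mount: Path) -> str:
    """Copy an ISO onto the mounted Ventoy data partition and return the
    copy's SHA-256, so it can be checked against the published checksum."""
    dest = ventoy_mount / iso.name
    shutil.copy2(iso, dest)
    digest = hashlib.sha256()
    with dest.open("rb") as f:
        # Hash in 1 MiB chunks so large ISOs don't need to fit in memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Verifying the checksum after the copy is worth the extra seconds: a corrupted installer image is a miserable thing to discover mid-install.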

I installed the Debian installer, the Kali live CD and the Kali installer on an 8GB drive with no issues. When I boot from the disk, I get a menu asking me to select the ISO I want, and the system then boots into the boot menu for that image. So now I can carry one pen drive with all the OSs I need to troubleshoot a system or reinstall an OS. You should be able to boot into a Windows installer as well using this method, but I haven’t tried it yet so can’t confirm for sure.

Well, this is all for now. Will post more later.

– Suramya

August 8, 2022

Using Behavioral Biometrics for User Authentication as added security measures – Advantages and Disadvantages

Filed under: Article Releases,Computer Security,My Thoughts — Suramya @ 11:59 PM

In this paper we explore how users can be uniquely identified, on a continuous basis, using biometrics other than fingerprints, facial recognition, iris recognition, etc. We explore the use of techniques such as typing style and computer-use style to see if we can create a model that uniquely identifies a user based on the way they type and use the computer. As this method allows a system to constantly reauthenticate a user based on characteristics that are almost impossible to fake, we look at how this can be integrated as a security measure for secure systems. We also look at the pros and cons of implementing this authentication mechanism and explore potential problems it creates for users and administrators. Specifically, we look at how the system would deal with users who are sick, under medication, or under stress that could change their usage patterns, and whether it is worth the expense and privacy issues to implement such a system.

Introduction and background

User authentication is the process of verifying the identity of a user or process trying to access a system, online service, connected device, infrastructure resource, etc. Traditionally, authentication is done by having the user provide one or more of the following:

  • Something they know
  • Something they have
  • Something they are

Let’s look at each of these one by one. The oldest way of authenticating to computer systems is usernames and passwords. The first password protection system was implemented in 1961 by Fernando J. Corbató at MIT (Workos, 2020). This allowed the system to identify users based on a secret password that only they knew. The first passwords were stored in plain text, but password encryption was later implemented so that users could not read other users’ passwords.

However, passwords can be leaked or guessed. In the past few years there have been major leaks of authentication data, which have been decrypted, and sophisticated password crackers have been created that can crack passwords using dictionary and brute-force attacks. To safeguard against this attack vector, another authentication mechanism was created that authenticates users based on something they have with them. This can include hardware keys, smartcards, etc.; these hardware devices contain an embedded certificate that can be used to uniquely identify the holder.

The final method of authentication is something you are, which is provided by biometric authentication. Some of the biometric methods that can be used are fingerprints, hand geometry, retinal or iris scans, face scans, and voice analysis. Fingerprints, face scans and iris scans are the most widely used biometric methods today.

Multifactor Authentication
When a system uses a combination of one or more of the authentication methods described in the previous section, it is said to be using multi-factor authentication (MFA). The key point to remember is that a system is only considered to be using MFA if the authentication factors fall into at least two of the categories. So if the authentication mechanism uses a password and a second PIN, it doesn’t count as MFA, because both are things that you know.
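The two-category rule is easy to make concrete. A toy sketch (the factor names and category mapping below are illustrative, not from any standard):

```python
# Map each factor to its category: something you know / have / are.
FACTOR_CATEGORY = {
    "password": "know",
    "pin": "know",
    "hardware_key": "have",
    "smartcard": "have",
    "fingerprint": "are",
    "iris_scan": "are",
}

def is_mfa(factors: list[str]) -> bool:
    """True only if the factors span at least two distinct categories."""
    return len({FACTOR_CATEGORY[f] for f in factors}) >= 2

print(is_mfa(["password", "pin"]))          # False: both "something you know"
print(is_mfa(["password", "fingerprint"]))  # True: know + are
```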

Weaknesses in the current User Authentication methods

The current user authentication methods have several weaknesses that make it easy for attackers to compromise and bypass the checks. Complex passwords are harder to crack or guess than simple passwords, but they are also harder for users to remember. So users tend to reuse the same passwords across multiple sites or use passwords that are simple to remember. Unfortunately, passwords that are simple to remember are also easy to guess.

Another risk is that an attacker can compromise a site or server using vulnerabilities in the OS, services or applications running on it. Once they have access, they can obtain the stored passwords for all users, and depending on the encryption scheme used, the passwords for user accounts can be guessed quickly. This is an attack vector that has been seen frequently over the past few years, with password lists for major sites such as LinkedIn (Morris, 2021) and Yahoo (Goel & Perlroth, 2016) being compromised and leaked.

Hardware tokens or smart cards can be cloned, copied or stolen. If the card is not deactivated when it is lost or stolen, an attacker can use it to gain access to restricted resources. Tools to create copies of smartcards are easily available in the market (Benchoff, 2016), using which an attacker can clone cards quickly.

Biometrics were touted as an authentication mechanism that is almost impossible to bypass, but unfortunately the hype didn’t match reality. Fingerprint authentication systems have been compromised using copies of fingerprints lifted from glasses, doorknobs, etc. and transferred to jello, glycerin and gelatin. (Barral & Tria, 2009)

Facial recognition systems have been fooled by photographs and cosmetics. Researchers have also used the StyleGAN Generative Adversarial Network (GAN) to create master faces that can be used to impersonate 40% of the population. (Shmelkin et al., 2021)

Voice authentication systems have been bypassed using voice recordings and AI-based ‘deep fake’ technologies. Amazon recently showcased technology that allows Alexa to impersonate a person’s voice based on a recording of just a few minutes of them speaking.

Similar bypasses have been found for all authentication mechanisms currently in use, and researchers have therefore been exploring new authentication mechanisms that would be harder to bypass and fool. One such field is behavioral biometrics, and in this paper we will explore the field, its implications, and the pros and cons of the technology.

Introduction to Behavioral Biometrics

Behavioral biometrics is the study and use of uniquely identifying and measurable patterns in human activities that can include keystroke dynamics, gait analysis, mouse use characteristics, signature analysis etc. The field postulates that a user can be identified based on these characteristics just as uniquely as they can be using physical biometrics.

Another advantage of behavioral biometrics over physical biometrics is that it doesn’t require specialized equipment to collect the data. Data can be collected using existing hardware and only requires software analysis and processing, which makes it cheaper to implement to a certain extent; we will look at this in more detail later in the paper.

Behavioral Biometrics can include the following:

Keystroke Dynamics:

Studies have shown that if a group of users is asked to type the same paragraph of text, each of them will type it slightly differently, with different delays between characters and a different overall rhythm. This allows a system to identify a user based on how they type, using criteria such as:

  • The user’s typing speed
  • Time elapsed between each consecutive keystroke
  • The time that each key is held down
  • The frequency with which the number pad keys are used
  • The timing and sequence of the keys used to type a capital letter
  • The error rate in typing, such as use of the Backspace key and words the user repeatedly mistypes.

As each person types a password slightly differently, the system can use these patterns to identify the authorized user and block attackers who may have obtained that user’s password.
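As a rough illustration, the criteria above can be reduced to timing features and compared against an enrolled profile. The event format, feature set, tolerance and sample values below are all illustrative assumptions, not a description of any production system:

```python
# Hypothetical sketch: extracting keystroke-timing features and comparing
# them to an enrolled profile. Event format, feature set, tolerance and
# sample values are all illustrative assumptions.

def extract_features(events):
    """Given (key, press_ms, release_ms) events, return dwell times
    (how long each key is held) and flight times (gap between releasing
    one key and pressing the next)."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells, flights

def matches_profile(events, profile, tolerance=0.25):
    """Accept the sample if each mean timing is within `tolerance`
    (here 25%) of the enrolled mean."""
    dwells, flights = extract_features(events)
    observed = {
        "mean_dwell": sum(dwells) / len(dwells),
        "mean_flight": sum(flights) / len(flights),
    }
    return all(abs(observed[k] - profile[k]) <= tolerance * profile[k]
               for k in profile)

# Enrolled rhythm vs. a much slower imitation of the same password.
profile = {"mean_dwell": 95.0, "mean_flight": 120.0}
genuine = [("p", 0, 90), ("a", 200, 300), ("s", 430, 525)]
imposter = [("p", 0, 160), ("a", 400, 560), ("s", 900, 1060)]
```

In a real deployment the profile would be built from many enrollment samples, and a statistical distance would replace the fixed percentage tolerance.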

Cursor Movement:

This uses the speed, clicks and path of the mouse cursor during use to create a profile for the active user. This works well if the user works with the same set of applications frequently; if they use a varied, constantly changing set of applications, the profile will be less accurate.
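A minimal sketch of what such a profile might be built from, assuming cursor samples arrive as (x, y, timestamp) tuples; the feature set and sample format are illustrative assumptions, not a real tracking API:

```python
import math

# Hypothetical sketch: deriving two simple cursor features from a list
# of (x, y, t) samples.

def cursor_features(samples):
    """Return average speed (pixels per time unit) and path efficiency
    (straight-line distance / actual path length; 1.0 = a straight move)."""
    path = 0.0
    for (x0, y0, _), (x1, y1, _) in zip(samples, samples[1:]):
        path += math.hypot(x1 - x0, y1 - y0)
    x_start, y_start, t_start = samples[0]
    x_end, y_end, t_end = samples[-1]
    straight = math.hypot(x_end - x_start, y_end - y_start)
    duration = t_end - t_start
    return {
        "avg_speed": path / duration if duration else 0.0,
        "efficiency": straight / path if path else 1.0,
    }

# A perfectly straight drag: 10 pixels covered in 2 time units.
features = cursor_features([(0, 0, 0.0), (3, 4, 1.0), (6, 8, 2.0)])
```

Features like these, aggregated over many movements, would form the per-user baseline the text describes.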

Finger pressure on keypad:

This analyses the pressure applied to the keyboard to create a user profile. It is most relevant for mobile devices and other devices with a touchscreen interface, as they allow us to capture pressure details easily without extra hardware.

Posture:

Every person has a different way of standing and sitting, and a sufficiently trained system can look for differences in how the person sits in front of the computer and their posture while using the system.


Gait Analysis:

Gait analysis attempts to identify a person based on their walking style, including characteristics such as stride length, posture, and speed of travel.

Each of the methods listed above can potentially be used to continuously re-validate a logged-in user.
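One hedged sketch of such continuous re-validation: score each incoming behavioral sample, keep a rolling window of scores, and flag the session once the rolling anomaly score crosses a threshold. The scoring interface, window size and threshold below are illustrative assumptions:

```python
from collections import deque

# Hypothetical sketch of continuous re-validation. Each behavioral sample
# is scored, scores are kept in a sliding window, and the session is
# flagged once the rolling anomaly score crosses a threshold.

class ContinuousAuthenticator:
    def __init__(self, score_fn, window=10, threshold=0.6):
        self.score_fn = score_fn          # 0.0 = looks genuine, 1.0 = anomalous
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, sample):
        """Score one sample; return True while the session is still
        trusted, False once it should be forced to re-authenticate."""
        self.scores.append(self.score_fn(sample))
        rolling = sum(self.scores) / len(self.scores)
        return rolling < self.threshold

# Identity scorer for demonstration: samples are pre-computed anomaly
# scores, so five genuine-looking samples followed by six anomalous ones
# eventually trip the threshold.
auth = ContinuousAuthenticator(lambda s: s)
verdicts = [auth.observe(s) for s in [0.1] * 5 + [1.0] * 6]
```

The sliding window is what makes the check continuous rather than a one-time login decision: a brief anomaly is tolerated, a sustained one forces re-authentication.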

Historical use of Behavioral Biometrics for authentication

Historically, behavioral biometrics have been in use since the 1860s, when experienced telegraph operators could identify individual operators by the way they sent their signals. In World War II, Allied officers used this to validate the authenticity of messages they received based on how they were sent. (Das, 2020) Other organizations similarly used this ability as an extra validation layer when communicating instructions over telegraph.

The military has also used gait recognition to identify impostors on their bases who try to impersonate authorized personnel to gain access to sensitive information.

Current state & the Future for Behavioral Biometrics

The behavioral biometrics market revenue totaled ~US$ 1.1 Bn in 2020, according to Future Market Insights (FMI). The overall market is expected to reach ~US$ 11.2 Bn by 2031, growing at a CAGR of 23.6% for 2021 – 31. (Future Market Insights, 2021)

As we can see, an increasing number of institutions, financial companies and website owners are using behavioral biometrics in their systems to detect fraudulent usage. The Royal Bank of Scotland uses it to monitor visitors to its websites and apps, while others use it in their applications to monitor and authenticate users as an extra verification layer. (What’s behind the rise of behavioral biometrics?, 2018)

With increases in processing capacity, sensor sensitivity and the sophistication of processing algorithms, systems can identify individual users more accurately. This allows systems to detect bots, password sharing and account compromise.

Ecommerce sites have increasingly started incorporating this technology into their setups to prevent fraud. It can also potentially allow systems to make an educated guess about a visitor’s gender and age in order to show appropriate products.

Considering the advantages and the minimal hardware investment required, we will only see an increase in the use of behavioral biometrics for authentication in the future.

Advantages of using Behavioral Biometrics for authentication

Behavioral Biometrics have the following advantages that make them attractive for companies and institutes to implement:

  • Flexibility: The data being analyzed is not limited to the currently identified sets we have discussed so far. Since most of the processing is done in software, an organization can easily add additional behavioral data to be analyzed and processed.
  • Convenience: A major plus point for the technology is that it is a passive layer of security, allowing it to work without interfering with user workflows. This removes a major obstacle to incorporating security into a system, as traditional security setups decrease its usability.
  • Efficiency: It can be applied in real time to detect fraudulent use, and the system can also be run against historic data to detect improper use after the fact.
  • Security: Behavioral characteristics are hard to replicate, so incorporating this additional layer improves the overall security of the system.

Disadvantages of using Behavioral Biometrics for authentication

As with all systems, there are some disadvantages to using a behavioral biometric system for authentication. If we are using keystroke analysis, the text being entered has to be long enough for the system to generate a profile and match it. So, if we are only using it as an additional validation step during password entry and the user’s password is too short, the system might not be able to create and match a profile.
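To illustrate the sample-length problem, a system might first check whether the typed secret even yields enough digraph (key-pair) timings to match against a profile. The minimum count below is an assumed value for illustration only:

```python
# Hypothetical sketch: checking whether a typed secret yields enough
# digraph (key-pair) timings to be matched against a keystroke profile.
# The minimum count is an assumed value for illustration.

MIN_DIGRAPHS = 8  # assumed floor for a usable keystroke comparison

def has_enough_signal(text, min_digraphs=MIN_DIGRAPHS):
    """A secret of n characters produces at most n - 1 digraph timings,
    so very short passwords simply cannot carry enough of them."""
    return max(len(text) - 1, 0) >= min_digraphs
```

Under this assumption, a seven-character password falls below the floor, which is exactly the limitation the paragraph above describes.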

Another problem is that a user’s behavior can change drastically due to various valid reasons and that can cause access issues when the algorithm is unable to account for the changes. Some of the reasons can include:

  • Illness or injury: If a person is injured or unwell, their usage patterns will change
  • Stress
  • Pregnancy
  • Sleep deficiency
  • Caffeine deficiency or overindulgence
  • Tiredness: If a user logs back in after a session at the gym, their usage patterns will differ from the pattern before their gym session
  • Time of day: Some people are more active during certain times of day, so their usage patterns will vary based on the time of day.
  • Distractions: If the user is distracted while working, for example if they are on a call and working at the same time, their behavior patterns will be different.
  • Location: If a person logs in from a different location and is working with a different setup, their metrics are going to be different. For example, the profile when using an ergonomic keyboard in the office vs. a laptop keyboard while working remotely will be drastically different, and the system will have a hard time creating a consolidated profile for such users.

Another major issue with this technology is its privacy implications. If we implement a system that monitors every keystroke and mouse movement and logs it for analysis, then sensitive data that shouldn’t be logged, such as medical information, personal account passwords and other sensitive information, can get logged as well. Once the data is logged, there is a possibility that a data leak or a breach of the security system would expose the collected information to an attacker.

Depending on the user’s location, collecting this kind of data can be illegal under regulations such as the GDPR (Krausová, 2018) and the California Consumer Privacy Act (CCPA). These regulations also limit the information that can be transmitted across state and country boundaries, which can be a concern for multinational companies.

Finally, performing the processing required for behavior analysis on the local system can be resource intensive, which might make the setup infeasible for older machines. If the processing is instead consolidated at a central location, the usage data needs to be transmitted there over the network, which can saturate available bandwidth and, depending on network congestion, cause unacceptable delays in processing and access.

Results and Recommendations

Based on our review of the current state of behavioral biometrics in the industry and the technological state of the systems and algorithms, we find that the technology does increase security by adding an additional layer of defense. However, it is not yet mature enough for general commercial deployment and should only be used for securing highly sensitive systems and infrastructure, where the security benefits outweigh the limitations identified earlier in the paper.
Once the technology is more mature and the issues identified earlier have been mitigated, it can slowly be incorporated into general computing as an optional additional layer of security. At no point should it be used as the only layer of security for any system.


Conclusion

Behavioral biometrics as a security measure is still in the early stages of use and implementation, and while it does add an additional layer of security, its current limitations do not justify a general release for everyday computing. It should only be implemented in systems such as classified military systems and critical corporate servers containing highly sensitive information, where the benefits and security concerns outweigh the disadvantages of using a technology that still needs to mature.


References

Alzubaidi, A., & Kalita, J. (2016). Authentication of smartphone users using behavioral biometrics. IEEE Communications Surveys & Tutorials, 18(3), 1998–2026.

Araujo, L. C. F., Sucupira, L. H. R., Lizarraga, M. G., Ling, L. L., & Yabu-Uti, J. B. T. (2005). User authentication through typing biometrics features. IEEE Transactions on Signal Processing, 53(2), 851–855.

Banerjee, S. P., & Woodard, D. (2012). Biometric authentication and identification using Keystroke Dynamics: A survey. Journal of Pattern Recognition Research, 7(1), 116–139.

Barral, C., & Tria, A. (2009). Fake fingers in fingerprint recognition: Glycerin supersedes gelatin. Formal to Practical Security, 57–69.

Benchoff, B. (2016, January 18). Emulating and cloning smart cards. Hackaday. Retrieved June 27, 2022, from

Bo, C., Zhang, L., Li, X.-Y., Huang, Q., & Wang, Y. (2013). Silentsense. Proceedings of the 19th Annual International Conference on Mobile Computing & Networking – MobiCom ’13.

Das, R. (2020, October 14). A behavioral biometric – keystroke recognition. A Behavioral Biometric – Keystroke Recognition.

Future Market Insights. (2021, October). Behavioral biometrics market. Future Market Insights.

Goel, V., & Perlroth, N. (2016, December 14). Yahoo says 1 billion user accounts were hacked. The New York Times.

Krausová, A. (2018). Online behavior recognition: Can we consider it biometric data under GDPR? Masaryk University Journal of Law and Technology, 12(2), 161–178.

Morris, C. (2021, June 30). LinkedIn data theft exposes personal information of 700 million people. Fortune.

What’s behind the rise of behavioral biometrics? (2018, August 15). Retrieved June 27, 2022, from

Shmelkin, R., Friedlander, T., & Wolf, L. (2021). Generating master faces for dictionary attacks with a network-assisted Latent Space evolution. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021).

Workos. (2020, September 5). A developer’s history of authentication – WorkOS. A Developer’s History of Authentication.

Note: This was originally written as a paper for one of my classes at EC-Council University in Q2 2022.

– Suramya
