Suramya's Blog : Welcome to my crazy life…

December 9, 2025

Security vs Accessibility: Thoughts on the problem and how it can be addressed

Security always comes at the expense of usability, and I have written about this before. However, in this post I am going to talk about something slightly different: how security measures impact accessibility. At first glance the two topics might look the same, but accessibility has extra nuances that unfortunately are not considered a lot of the time when we design a system. To be honest I didn’t think about it much either until I saw a post by James on Mastodon highlighting the issue:

https://mastodon.social/@jscholes@dragonscave.space/115673620717345529
Security measures impacting Accessibility for blind users

A severe issue I’ve seen very few people talking about is the widespread adoption (in my country at least) of touch-only card payment terminals with no physical number buttons.

Not only do these devices offer no tactile affordances, but the on-screen numbers move around to limit the chances of a customer’s PIN number being captured by bad actors. In turn, this makes it impossible to create any kind of physical overlay (which itself would be a hacky solution at best).

When faced with such a terminal, blind people have only a few ways to proceed:

* Switch to cash (if they have it);
* refuse to pay via inaccessible means;
* ask the seller to split the transaction into several to facilitate multiple contactless payments (assuming contactless is available);
* switch to something like Apple Pay (again assuming availability); or
* hand over their PIN to a complete stranger.

Not one of these solutions is without problems.

If you’re blind, have you encountered this situation, and if so how did you deal with it? It’s not uncommon for me to run into it several times per day.

Why do you think this is not being talked about or made the subject of action by blindness organisations? Is it the case that it disproportionately affects people in countries where alternative payment technology (like paying via a smart watch) is slower to roll out and economically out of reach for residents?

It is easy to forget that others have different requirements and needs than you, and a world that is moving towards removing tactile feedback makes it harder for people with vision problems or motor-control issues to interact with it. Every security feature we add to a system increases the potential of making that system inaccessible. For example, if we add a CAPTCHA check when logging into a site or a computer, screen readers can’t read the CAPTCHA by design, so blind users are unable to log in. One fix for that was audible CAPTCHA codes, but with the advances in voice recognition an attacker can now use a speech-recognition system to identify the code and bypass the security measure.

Accessibility features seem to be an afterthought (if that) for developers even in 2025. There are major accessibility issues in Linux, and Fireborn (I couldn’t find their real name) wrote a whole series of blog posts about the issues they face as a blind person using Linux day to day (I Want to Love Linux. It Doesn’t Love Me Back: Post 1 – Built for Control, But Not for People). The sad part is that while a lot of people acknowledged the issue and agreed to work on fixing it, the usual gatekeepers wrote nasty/condescending messages in response. Fireborn replied to one such comment quite beautifully (and a lot more politely than I would have in their position) in another blog post (You Don’t Own the Word “Freedom”: A Full-Burn Response to the GNU/Linux Comment That Tried to Gatekeep Me Off My Own Machine). This right here is the issue we need to solve: people don’t think we need to work on accessibility because they don’t need it themselves. I remember reading an article about a group of people who were really upset because a streaming service was putting more focus on subtitles for its shows. No one is forcing you to enable subtitles, but for folks who don’t speak the language or have hearing issues they are a lifesaver.

Coming back to the security and accessibility issue for POS (point-of-sale) systems, there is no easy way to solve this problem for card users. One option I can think of is for stores to keep a physical Bluetooth PIN pad paired with the POS machine, so that users with vision problems can enter their PIN on physical keys. This would require effort (and have a cost implication) for the store, so I don’t know how many stores would do it. It would work if a law required stores to provide one, but without that, users are out of luck.

Another option would be a screen/image-reader application on a phone that the user (or store) owns, which scans the display and reads out the numbers shown. Even better would be for the app to detect which number is covered by the user’s finger and announce it verbally (ideally over a headset) so that they can enter the PIN.
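The finger-detection idea can be sketched in code. Assuming some upstream vision/OCR step has already produced a bounding box for each on-screen digit plus a fingertip position (all names, layouts, and coordinates below are hypothetical, invented for illustration), the announcement logic reduces to a point-in-rectangle check:

```python
def digit_under_finger(detections, finger_xy):
    """Return the digit whose on-screen bounding box contains the fingertip.

    detections: list of (digit, (x, y, width, height)) tuples, assumed to come
    from an OCR/vision step that has already located each on-screen number.
    finger_xy: (x, y) fingertip position in the same coordinate space.
    Returns the digit as a string, or None if the finger is not over any key.
    """
    fx, fy = finger_xy
    for digit, (x, y, w, h) in detections:
        if x <= fx <= x + w and y <= fy <= y + h:
            return digit
    return None

# Example: part of a shuffled keypad layout (positions are made up)
keypad = [
    ("7", (0, 0, 50, 50)), ("1", (50, 0, 50, 50)), ("4", (100, 0, 50, 50)),
    ("9", (0, 50, 50, 50)), ("3", (50, 50, 50, 50)), ("0", (100, 50, 50, 50)),
]
print(digit_under_finger(keypad, (75, 25)))   # finger over the "1" key
print(digit_under_finger(keypad, (200, 200))) # off the keypad entirely
```

The hard part in practice would of course be the vision step itself (and doing it reliably in real time), not this lookup.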

These are some of the ways I can think of to solve this problem, but since I am not the target user, a better approach would be to work with folks with vision problems and have them confirm whether the solution we come up with actually solves their problem.

– Suramya

July 16, 2025

Grok comes to Tesla and it’s nowhere near bringing KITT closer to reality

Filed under: Artificial Intelligence,Computer Software,My Thoughts — Suramya @ 8:34 PM

Elon Musk announced that Grok AI would be included in Tesla’s cars about a week ago, and unlike most of his announcements, an initial release of the tool to Tesla vehicles happened earlier this week. This is extremely scary, because what we are calling AI is nowhere close to being intelligent, and putting it in control of a 2,000 kg metal vehicle is what I would call a ‘bad idea’.

A lot of us grew up watching Knight Rider, and KITT from the series is the gold standard for AI cars. The extremely talented Design Thinking Comic created a comic imagining how Knight Rider 2025 would work with a current-generation LLM instead of the fictional KITT AI.

Knight Rider 2025, with a current generation LLM

Until recently my knowledge of Tesla cars was based on information I read online and videos I had watched, but I got to see the car in person during my recent trip to the US. My experience as a passenger was that the car looks really cool but has a lot of usability issues, like the weird way to open the door, along with the many other problems folks have been posting about, serious enough that Tesla had to issue multiple recalls in the past year. The Cybertruck, on the other hand, was even uglier than I had imagined.

The “move fast and break things” philosophy should not be applied to a car, as it has serious real-world impact and can endanger lives. (It is a bad idea in general even for regular software development, but that is a separate topic for another day.)

– Suramya

February 6, 2025

A Linux Distribution which runs directly within a PDF file

There is a semi-serious joke in the IT industry that anything that can compute will eventually be used to play Doom and then to run Linux. Now you can do both from inside a PDF file. Since the PDF specification supports JavaScript, a high-school student who goes by the handle ‘ading2210’ has implemented a RISC-V emulator in it which can run a barebones Linux distribution within the PDF file itself. This builds on top of the earlier work to get Doom running inside a PDF.

The full specification for the JS in PDFs was only ever implemented by Adobe Acrobat, and it contains some ridiculous things like the ability to do 3D rendering, make HTTP requests, and detect every monitor connected to the user’s system. However, on Chromium and other browsers, only a tiny subset of this API was ever implemented, due to obvious security concerns. With this, we can do whatever computation we want, just with some very limited IO.

C code can be compiled to run within a PDF using an old version of Emscripten that targets asm.js instead of WebAssembly. With this, I can compile a modified version of the TinyEMU RISC-V emulator to asm.js, which can be run within the PDF. For the input and output, I reused the same display code that I used for DoomPDF. It works by using a separate text field for each row of pixels in the screen, whose contents are set to various ASCII characters. For inputs, there is a virtual keyboard implemented with a bunch of buttons, and a text box you can type in to send keystrokes to the VM.

The largest problem here is with the emulator’s performance. For example, the Linux kernel takes about 30-60 seconds to boot up within the PDF, which is over 100x slower than normal. Unfortunately, there’s no way to fix this, since the version of V8 that Chrome’s PDF engine uses has its JIT compiler disabled, destroying its performance.

For the root filesystem, both 64-bit and 32-bit versions are possible. The default is a 32-bit buildroot system (which was prebuilt and taken from the original TinyEMU examples), and there is also a 64-bit Alpine Linux system. The 64-bit emulator is about twice as slow, however, so it’s normally not used.
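The per-row text-field display trick described above can be illustrated with a small Python sketch. The character ramp and function names here are my own invention, not taken from the project; the idea is simply that each framebuffer row becomes one string of characters, with brighter pixels mapped to denser glyphs, and each string fills that row's text field:

```python
ASCII_RAMP = " .:-=+*#%@"  # darkest to brightest (an illustrative choice)

def row_to_ascii(pixels):
    """Map one row of 0-255 grayscale pixels to a string of ASCII characters."""
    return "".join(ASCII_RAMP[p * (len(ASCII_RAMP) - 1) // 255] for p in pixels)

def render_frame(framebuffer):
    """Return the list of strings that would fill the per-row text fields."""
    return [row_to_ascii(row) for row in framebuffer]

# A tiny 2-row, 5-pixel "frame" for demonstration
frame = [
    [0, 64, 128, 192, 255],
    [255, 255, 0, 0, 0],
]
for line in render_frame(frame):
    print(line)
```

In the actual PDF the equivalent of `render_frame` would run on every frame and write each string into its row's form field, which is why the display is so IO-bound.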

You can try out the implementation of LinuxPDF here. More details of the project and the code used to create it are available on the project’s GitHub page.

– Suramya

January 22, 2025

ELIZA Resurrected using original code after 60 years

If you have been following the AI chatbot news/world then you would have heard the name ELIZA come up. ELIZA was the world’s first chatbot, created over 60 years ago by MIT professor Joseph Weizenbaum, and was the first language model a user could interact with. It had a significant impact on the AI world (actual AI research, not the LLM wanna-be AI we have right now) and was the first program to attempt the Turing test. It was originally written in a programming language invented by Weizenbaum called the Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), with the pattern-matching directives provided as separate scripts. Shortly after the initial release it was rewritten in Lisp, and that version went viral. Unfortunately the original MAD-SLIP code went missing soon after that and was only recently rediscovered.

One of the most famous ELIZA scripts was called DOCTOR, which emulated a psychotherapist of the Rogerian school (in which the therapist often reflects the patient’s words back to the patient). Much to his surprise, Weizenbaum found that folks attributed human-like feelings to the computer program. Wikipedia explains how the software worked:

ELIZA starts its process of responding to an input by a user by first examining the text input for a “keyword”. A “keyword” is a word designated as important by the acting ELIZA script, which assigns to each keyword a precedence number, or a RANK, designed by the programmer. If such words are found, they are put into a “keystack”, with the keyword of the highest RANK at the top. The input sentence is then manipulated and transformed as the rule associated with the keyword of the highest RANK directs. For example, when the DOCTOR script encounters words such as “alike” or “same”, it would output a message pertaining to similarity, in this case “In what way?”, as these words had a high precedence number. This also demonstrates how certain words, as dictated by the script, can be manipulated regardless of contextual considerations, such as switching first-person pronouns and second-person pronouns and vice versa, as these too had high precedence numbers. Such words with high precedence numbers are deemed superior to conversational patterns and are treated independently of contextual patterns.

Following the first examination, the next step of the process is to apply an appropriate transformation rule, which includes two parts: the “decomposition rule” and the “reassembly rule”. First, the input is reviewed for syntactical patterns in order to establish the minimal context necessary to respond. Using the keywords and other nearby words from the input, different disassembly rules are tested until an appropriate pattern is found. Using the script’s rules, the sentence is then “dismantled” and arranged into sections of the component parts as the “decomposition rule for the highest-ranking keyword” dictates. The example that Weizenbaum gives is the input “You are very helpful”, which is transformed to “I are very helpful”. This is then broken into (1) empty (2) “I” (3) “are” (4) “very helpful”. The decomposition rule has broken the phrase into four small segments that contain both the keywords and the information in the sentence.

The decomposition rule then designates a particular reassembly rule, or set of reassembly rules, to follow when reconstructing the sentence. The reassembly rule takes the fragments of the input that the decomposition rule had created, rearranges them, and adds in programmed words to create a response. Using Weizenbaum’s example previously stated, such a reassembly rule would take the fragments and apply them to the phrase “What makes you think I am (4)”, which would result in “What makes you think I am very helpful?”. This example is rather simple, since depending upon the disassembly rule, the output could be significantly more complex and use more of the input from the user. However, from this reassembly, ELIZA then sends the constructed sentence to the user in the form of text on the screen.
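To make the keyword-ranking plus decomposition/reassembly mechanism concrete, here is a minimal Python sketch. The rules, regexes, and the `respond()` helper are invented for illustration, and it skips ELIZA's pronoun-swapping step; the real DOCTOR script has far more (and far richer) rules:

```python
import re

# Each rule: (keyword, rank, decomposition regex, reassembly template).
# These three rules are toy examples based on the quoted description above.
RULES = [
    ("alike", 10, r".*", "In what way?"),
    ("same", 10, r".*", "In what way?"),
    ("you are", 5, r"you are (?P<rest>.*)", "What makes you think I am {rest}?"),
]

def respond(text):
    text = text.lower().strip(".!? ")
    # Build the keystack: every matching keyword, highest rank first.
    keystack = sorted((r for r in RULES if r[0] in text),
                      key=lambda r: r[1], reverse=True)
    for keyword, rank, decomp, reassembly in keystack:
        m = re.search(decomp, text)  # decompose the input
        if m:
            return reassembly.format(**m.groupdict())  # reassemble a reply
    return "Please go on."  # default when no keyword matches

print(respond("You are very helpful"))  # -> What makes you think I am very helpful?
print(respond("We are alike"))          # -> In what way?
```

Even this toy version reproduces the behaviour described in the quote: "alike"/"same" outrank "you are", so a sentence containing both gets the similarity response regardless of context.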

Now, after over 60 years, the original MAD-SLIP code has been resurrected by Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, who found it among Weizenbaum’s papers back in 2021. They then started working on getting the code to run, which was a significant effort: they first created an emulator that approximated the computers available in the 1960s, and then cleaned up the original 420-line ELIZA code to get it to work. They published a paper, ELIZA Reanimated: The world’s first chatbot restored on the world’s first time sharing system, on 12th Jan, where they explain the whole process.

ELIZA, created by Joseph Weizenbaum at MIT in the early 1960s, is usually considered the world’s first chatbot. It was developed in MAD-SLIP on MIT’s CTSS, the world’s first time-sharing system, on an IBM 7094. We discovered an original ELIZA printout in Prof. Weizenbaum’s archives at MIT, including an early version of the famous DOCTOR script, a nearly complete version of the MAD-SLIP code, and various support functions in MAD and FAP. Here we describe the reanimation of this original ELIZA on a restored CTSS, itself running on an emulated IBM 7094. The entire stack is open source, so that any user of a unix-like OS can run the world’s first chatbot on the world’s first time-sharing system.

You can try it out here.


– Suramya

January 21, 2025

Getting my NVIDIA card working after breaking it again with the latest updates

Filed under: Computer Software,Knowledgebase,Linux/Unix Related — Suramya @ 11:10 AM

NVIDIA doesn’t have the best history with Linux, as its cards historically didn’t work well with it. But over the past few years things have been changing, and at least in my experience they had reached a point where the cards worked without major issues. As some of you know, I use the unstable version of Debian, primarily because it has the newest versions of software available, but the downside is that things break, and sometimes they break spectacularly.

This time there was an issue with the NVIDIA driver/configuration which caused my system to stop opening the GUI login screen when I restarted. I tried reinstalling the driver, as the error messages in the log suggested the issue was caused by a missing driver. I purged the NVIDIA drivers by issuing the following command as root:

apt purge "*nvidia*"

Then I reinstalled the drivers using the following command:

apt-get install nvidia-detect nvidia-driver

After this reinstall the driver was detected correctly, but the GUI still wasn’t coming up. A search on the net didn’t return many useful results, but one site mentioned that running nvidia-xconfig recreates the X configuration file for NVIDIA cards, so I tried that by running the following commands as root:

apt-get install nvidia-xconfig 
nvidia-xconfig 

This created the configuration file, and once I rebooted everything started working again. I did have to reconfigure my desktop, since one of the things I had tried was resetting all my custom KDE configurations, but that was a minor issue.

This issue was on kernel 6.12.9-amd64 with the Debian Unstable release as of 17th Jan 2025.

– Suramya

January 3, 2025

Playing Doom to solve a CAPTCHA

Filed under: Computer Software,Interesting Sites — Suramya @ 10:48 AM

I guess traditional CAPTCHAs are getting too easy for LLMs (and humans) to solve, so Guillermo Rauch decided to create a CAPTCHA that makes you play DOOM® to prove that you’re human.

The project works by leveraging Emscripten to compile a minimal port of Doom to WebAssembly and enable intercommunication between the C-based game runloop (g_game.c) and the JavaScript-based CAPTCHA UI.

Some extensions were made to the game to introduce relevant events needed for its usage in the context of a CAPTCHA.
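The game-event extensions suggest a simple completion check on the CAPTCHA side. As a toy sketch (the class, event names, and thresholds here are all invented; the real project wires this up between the WebAssembly game and its JavaScript UI), the challenge passes once enough qualifying events arrive before a time limit:

```python
import time

class DoomCaptcha:
    """Toy CAPTCHA state machine: pass after N kills within the time limit."""

    def __init__(self, kills_required=3, time_limit=60.0, now=time.monotonic):
        self.kills_required = kills_required
        self.time_limit = time_limit
        self._now = now          # injectable clock, handy for testing
        self.start = now()
        self.kills = 0

    def on_event(self, event):
        """Feed one game event; returns True once the challenge is solved."""
        if self._now() - self.start > self.time_limit:
            return False  # too late, challenge expired
        if event == "enemy_killed":
            self.kills += 1
        return self.kills >= self.kills_required

captcha = DoomCaptcha(kills_required=2)
print(captcha.on_event("enemy_killed"))  # False, one kill so far
print(captcha.on_event("enemy_killed"))  # True, challenge solved
```

A real deployment would also have to verify the events server-side (otherwise a bot just posts "I won"), which is the genuinely hard part of any game-based CAPTCHA.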

It is actually a fun implementation, and while I doubt it will gain widespread usage, it is an interesting proof of concept.

– Suramya

September 26, 2024

Python in Excel launched for all Office 365 Business and Enterprise users

Filed under: Computer Security,Computer Software,My Thoughts,Tech Related — Suramya @ 10:35 PM

Excel is both a blessing and a bane for companies. Because of its capabilities, folks have created formulas/macros/scripts/functions in Excel that generate data used to make major financial decisions with real-world impact. But that capability also makes it an ideal vector for infiltrating an organization, using macros or scripts in Excel files to compromise systems.

Back in Aug 2023, Microsoft first announced that they were going to support running Python inside an Excel file. After that there was no major talk about it, so I had hoped they had abandoned the project, but sadly I was mistaken. Redmond announced the official release of Python in Excel for Windows users of Microsoft 365 Business and Enterprise in a blog post. The post has a lot of details on the new capabilities this gives to power users, and frankly I can see why folks are excited about it. But from a security and version-control point of view this is a disaster waiting to happen.

There is a new learning series available for free for 30 days on LinkedIn that incorporates numerous examples, tutorials, and tips on how to best leverage Python in Excel.

Included in the Excel for Python release is a large language model integration that will allow Excel users to ask the Copilot to build scripts for them with plain language commands.

Microsoft partnered with data science tool maker Anaconda to develop the Python-Excel integration. As we’ve previously reported, data can move effortlessly between the two platforms using a few custom-defined functions.

This two-way function sending is a key part of security – Microsoft states Python processes Excel data without revealing the user’s identity, and all Python code runs in a secure, isolated environment, only accessing libraries approved by Anaconda​.

As with all the stuff MS has released recently, this also has LLM integration, though access to it is restricted. The service is available to all Office 365 users with a valid Enterprise or Business Microsoft 365 subscription on the Current Channel.
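For a feel of the kind of workload this targets, here is a plain-Python analogue of a pivot-table-style summary (the data and the `pivot_sum` helper are invented for illustration; inside Excel this sort of code would live in a Python cell operating on a spreadsheet range rather than on hard-coded rows):

```python
from collections import defaultdict

def pivot_sum(rows, group_col, value_col):
    """Group rows (dicts) by one column and sum another, like a tiny pivot table."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[group_col]] += row[value_col]
    return dict(totals)

# Stand-in for a worksheet range of sales records
sales = [
    {"region": "North", "amount": 120.0},
    {"region": "South", "amount": 80.0},
    {"region": "North", "amount": 50.0},
]
print(pivot_sum(sales, "region", "amount"))  # {'North': 170.0, 'South': 80.0}
```

The appeal for power users is exactly this: replacing brittle nested formulas with a few readable lines. The risk I worry about is the same thing from the other direction, since code is much harder to audit in a spreadsheet than a formula is.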

Source: The Register: Python in Excel is here, but only for certain Windows users

– Suramya

August 30, 2024

Admiral Grace Hopper’s NSA Lecture from 1982 on Future Possibilities: Data, Hardware, Software, and People

Filed under: Computer Software,Tech Related — Suramya @ 6:05 PM

Grace Hopper is one of the pioneers of programming languages: she was the first person to devise the theory of machine-independent programming languages, which she then used to develop the FLOW-MATIC programming language and, later, COBOL. She had a phenomenal impact on the field of computer science/engineering, and her lectures are extremely interesting to watch, as even after 40 years the concepts she talks about are still relevant. The NSA has finally released the video recording of a 1982 lecture by Adm. Grace Hopper titled “Future Possibilities: Data, Hardware, Software, and People.”

Initially they refused to do so because “With digital obsolescence threatening many early technological formats, the dilemma surrounding Admiral Hopper’s lecture underscores the critical need for and challenge of digital preservation. This challenge transcends the confines of NSA’s operational scope. It is our shared obligation to safeguard such pivotal elements of our nation’s history, ensuring they remain within reach of future generations. While the stewardship of these recordings may extend beyond the NSA’s typical purview, they are undeniably a part of America’s national heritage.”.

Thankfully, after a massive push from people all over the world to get the NSA to release the video, saner minds prevailed and the entire lecture has been released in two parts. You can watch them below:


Capt. Grace Hopper on Future Possibilities: Data, Hardware, Software, and People (Part One, 1982)


Capt. Grace Hopper on Future Possibilities: Data, Hardware, Software, and People (Part Two, 1982)

Since I don’t trust online systems to keep information available indefinitely, I have also archived the lectures on my system so if they disappear in the future I will have copies I can publish.

– Suramya

August 27, 2024

MIT Researchers publish AI risk database exposing 700+ ways AI can be risky

Filed under: Artificial Intelligence,Computer Software,My Thoughts — Suramya @ 10:44 AM

AI (or rather what is called AI right now) is not really intelligent, but it does have a lot of risks associated with its use. We all know about deepfakes and hallucinations, but those are not the only risks of using generative AI. Researchers at MIT have cataloged over 700 risks of using generative AI.

The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public. However, a lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them. This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference.

This comprises a living database of 777 risks extracted from 43 taxonomies, which can be filtered based on two overarching taxonomies and easily accessed, modified, and updated via our website and online spreadsheets. We construct our Repository with a systematic review of taxonomies and other structured classifications of AI risk followed by an expert consultation. We develop our taxonomies of AI risk using a best-fit framework synthesis. Our high-level Causal Taxonomy of AI Risks classifies each risk by its causal factors (1) Entity: Human, AI; (2) Intentionality: Intentional, Unintentional; and (3) Timing: Pre-deployment; Post-deployment. Our mid-level Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental, and (7) AI system safety, failures, & limitations. These are further divided into 23 subdomains. The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. This creates a foundation for a more coordinated, coherent, and complete approach to defining, auditing, and managing the risks posed by AI systems.
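As a toy model of how such a repository can be queried, here is a hypothetical sketch that filters invented risk entries by the taxonomy fields quoted above (the entries and field names below are made up; the real database has 777 entries drawn from 43 taxonomies):

```python
# Invented sample entries, shaped after the Causal and Domain taxonomies quoted above
risks = [
    {"name": "Biased hiring model", "entity": "AI", "intent": "Unintentional",
     "timing": "Post-deployment", "domain": "Discrimination & toxicity"},
    {"name": "Training-data leak", "entity": "Human", "intent": "Unintentional",
     "timing": "Pre-deployment", "domain": "Privacy & security"},
    {"name": "Deepfake scam", "entity": "Human", "intent": "Intentional",
     "timing": "Post-deployment", "domain": "Malicious actors & misuse"},
]

def filter_risks(entries, **criteria):
    """Return entries matching every given taxonomy field, e.g. timing='Post-deployment'."""
    return [e for e in entries
            if all(e.get(k) == v for k, v in criteria.items())]

for r in filter_risks(risks, timing="Post-deployment"):
    print(r["name"])
```

The actual repository exposes this kind of filtering through its website and spreadsheets rather than code, but the structure (each risk tagged along both taxonomies) is the same idea.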

They have published a paper on it, The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence, that you should check out. They have also made their entire database available to copy for free.

Check it out if you have some free time.

Source: Boingboing.net: MIT’s AI risk database exposes 700+ ways AI could ruin your life.

– Suramya

August 25, 2024

Browse Open source clones of classic video games

Filed under: Computer Software,Tech Related — Suramya @ 2:19 AM

There are a lot of games that can no longer be played because the systems needed to run them are no longer in production, and it is illegal to modify their code to work on new systems, operating systems, or emulators. That is where open source comes into play: developers have dedicated a lot of time to creating open-source clones of their favorite games.

You can access the list and instructions on how to install/play them at: https://osgameclones.com/, which gathers open-source or source-available remakes of great old games in one place.

* A Remake is a game where the executable, and sometimes the assets as well, are remade open source. Some of these games aren’t exact remakes but evolutions of the originals, which were eventually open sourced.
* A Clone is a game which is very similar to or heavily inspired by a game or series.
* An Official project is the official source-code release for a game that was formerly closed source, maintained by the original creators with minimal changes.
* A Similar game is one which has similar gameplay but is not a clone.
* A Tool is not a game, but something that assists in playing or modding a game, such as a high-resolution patch or a resource extractor.

I see open-source versions of classics like Descent II, Doom II/III, and many more on the site. Check it out if you have some free time.

Source: Boingboing.net: Open source clones of classic video games
