Suramya's Blog : Welcome to my crazy life…

June 20, 2024

Some thoughts on the current AI hype market

Filed under: Artificial Intelligence,My Thoughts — Suramya @ 11:22 PM

Found this hilarious but accurate write-up on AI and how the current hype is spoiling the industry: I Will Fucking Piledrive You If You Mention AI Again. It is a little rude and filled with profanity, but it accurately covers the current state of AI. It is filled with gems such as:

So it is with great regret that I announce that the next person to talk about rolling out AI is going to receive a complimentary chiropractic adjustment in the style of Dr. Bourne, i.e., I am going to fucking break your neck. I am truly, deeply, sorry.


Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your businesses software supply chain. Your managed security provider is probably using some algorithms baked up in a lab software to detect anomalous traffic, and here’s a secret, they didn’t do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists. I know you want to be the next Steve Jobs, and this requires you to get on stages and talk about your innovative prowess, but none of this will allow you to pull off a turtle neck, and even if it did, you would need to replace your sweaters with fullplate to survive my onslaught.

Consider the fact that most companies are unable to successfully develop and deploy the simplest of CRUD applications on time and under budget. This is a solved problem – with smart people who can collaborate and provide reasonable requirements, a competent team will knock this out of the park every single time, admittedly with some amount of frustration.

Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup.

The current hype, and companies’ insistence on inserting AI capabilities into everything whether needed or not, is getting to the point where it is actively annoying and in some cases dangerous. The recent attempted release of Recall + Copilot by Microsoft is a good example of the dangerous kind. Then we have companies releasing an AI-powered BIOS that will “interpret the PC user’s request, analyze their specific hardware, and parse through the LLM’s extensive knowledge base of BIOS and computer terminology to make the appropriate changes to the BIOS Setup. This breakthrough technology helps address a major hurdle for PC users that require or desire changes to their BIOS Setup for their personal computers but do not fully understand the meaning of the settings available to them.”

I really don’t need AI in my mouse, or AI to create the perfect smoothie, or the thousand other things folks are shoving AI into. ChatGPT can’t do simple addition or multiplication and keeps making stuff up. Google’s AI Gemini recommended that people add glue to their pizzas, it misidentified a poisonous mushroom as an edible one, and there are many, many more such cases out there (I have posted some examples earlier).

The problem is that folks (grifters, to be honest) are selling AI as the cure-all for every problem a company wants to solve. This overshadows the actual work being done in the field, which is solving real problems and use cases.

What we have right now is machine learning with a good track record at predicting responses, but it is nothing close to being intelligent. A cat has more intelligence than the current ‘AI’. This is not to say that we won’t have AI systems in the future, but I have been hearing the claim that AI is just around the corner for about 25 years now and we are not there yet.

May 23, 2024

Windows 11 will feature built-in Spyware in the near future, or Recall AI as Microsoft calls it

Until recently, if you wanted to spy on someone and see what they had been doing on their computer, you had to infect the machine by making them visit a dodgy site, or get physical access, download a RAT (Remote Access Trojan), install it on the target’s computer, configure the antivirus to ignore it, and put in a backdoor so that you could access the data remotely. Obviously this was a lot of work, so it looks like some cyber criminals reached out to Microsoft (MS) and asked for help. MS, being a super helpful company, has added functionality called ‘Windows Recall’ to its Windows 11 Preview build to solve this. Recall takes a snapshot (literally) of the screen every few seconds and stores it in a searchable database ‘stored locally’. Basically, it does exactly what spyware does, without having to install anything new on your system. As per the company, below is how Recall works:

Recall uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds. The snapshots are encrypted and saved on your PC’s hard drive. You can use Recall to locate the content you have viewed on your PC using search or on a timeline bar that allows you to scroll through your snapshots. Once you find the snapshot that you were looking for in Recall, it will be analysed and offer you options to interact with the content. What actions you can take depend on the content and the chat provider capabilities in Copilot in Windows. For example, you may highlight a block of text and decide to summarise it, translate it, or open it with a text editor like Word or Notepad. If you highlight an image, you will be able to edit it or use your chat provider in Copilot in Windows to find or create a similar image.

Recall will also enable you to open the snapshot in the original application in which it was created, and, as Recall is refined over time, it will open the actual source document, website or email in a screenshot. This functionality will be improved during Recall’s preview phase.

The best part is that, according to their own announcement, the snapshots will not hide passwords, account numbers, etc. However, Recall does block you from recording DRM’d video you might be watching, because protecting that is apparently more important than simple things like personal information.

Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.

This is a gold mine for data thieves, abusers, industrial espionage, identity thieves and other cyber criminals. Once they have access to a PC they don’t need to do anything else except copy the data from the Recall DB to their own system, and then they can happily browse through the user’s personal data at their leisure.

I don’t think MS has thought about folks who use public computers such as the ones in Internet cafes, hotels or libraries. With this feature enabled, all someone has to do is wait a few days, then come back and copy incredibly private information that they can then sell or use. Privacy and domestic abuse experts are raising questions about this as well because, as surely as night follows day, abusers will use this to track what their victims are doing on a computer, and that can go bad very quickly.

Even if the data is supposedly only on the local machine, we don’t know when MS is going to force it to be uploaded to their servers using OneDrive or other similar setups. Of all the coverage I have seen for this functionality, 99% raised similar concerns about the security, the privacy and, quite frankly, the need for this kind of surveillance.

Imagine what a regime like the Taliban, China or other conservative/restrictive governments would do with the information they get from this system. You are dreaming if you think that they will not force MS to make this information available to them, at the risk of losing access to that market if they don’t. Once you have the capability to do this, feature creep will happen for sure and we will end up in a surveillance state.

The only Windows 11 system at my place is my wife’s laptop and you can be sure that I am going to disable this ‘feature’ as soon as it launches.
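
For anyone else planning to do the same, reporting on the preview builds suggests Recall honours a policy registry value named DisableAIDataAnalysis. Below is a minimal Python sketch; note that the value name and its location are assumptions based on that preview-build reporting and may well change before release:

import winreg   # Windows-only standard library module

KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsAI"

def disable_recall_snapshots() -> None:
    # Create (or open) the policy key under HKEY_CURRENT_USER and set
    # DisableAIDataAnalysis = 1, which reportedly turns off Recall snapshots.
    key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH)
    try:
        winreg.SetValueEx(key, "DisableAIDataAnalysis", 0, winreg.REG_DWORD, 1)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    disable_recall_snapshots()
    print("Recall snapshot policy set; sign out and back in for it to apply.")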

Source: Bleepingcomputer: Windows 11 Recall AI feature will record everything you do on your PC

– Suramya

May 17, 2024

Yeah, AI replacing us is not happening anytime soon

Filed under: Artificial Intelligence,My Thoughts — Suramya @ 10:49 AM

We are nowhere near having an actual AI; the current implementations are not even as intelligent as a cat or a dog, forget humans. We do have amazing work being done on machine learning and predictive software, but nothing is even close to being intelligent. What we have right now is a stochastic parrot pretending to generate amazing information. To top it all off, the grifters are out in force scamming people by claiming AI is the cure for everything, the same way NFTs and Blockchain were going to cure all the ills of the world.

I have been hearing claims that AI is ready to replace developers/low-paid jobs etc. for over 25 years now, with similar results at the end of the day. Some of the examples I found over the past few weeks showcasing the amazing power of AI:


‘sun’ & ‘den’ are 4-character words as per AI


Would you like a Bananum?


The Entire Devin AI demo was faked. The developers lied about the entire thing and made a ton of money


Yup, we can totally say that an irrational number with never-ending decimal places after 3 has the last 5 digits as ‘65359’

There are so many more of these examples I can share… Please don’t buy into the hype, look at the facts and make a decision.

– Suramya

May 16, 2024

Google claims to have created an AI to detect scams in real time by listening to all your calls

Scams are getting more and more common nowadays, with folks losing a lot of money to them. We absolutely need more ways to detect and warn people about scams, but I don’t think this is the right approach. By ‘this’, I am talking about Google’s recent announcement at Google I/O that it will use Gemini Nano to alert users to potential scams during a phone call.

The feature, which will be built into a future version of Android, uses Gemini Nano, the smallest version of Google’s generative AI offering, which can be run entirely on-device. The system effectively listens for “conversation patterns commonly associated with scams” in real time. Google gives the example of someone pretending to be a “bank representative.” Common scammer tactics like password requests and gift cards will also trigger the system. These are all pretty well understood to be ways of extracting your money from you, but plenty of people in the world are still vulnerable to these sorts of scams. Once set off, it will pop up a notification that the user may be falling prey to unsavory characters.
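
Gemini Nano is a proprietary on-device model, so the snippet below is only a toy illustration of the underlying idea of matching a call transcript against known scammer tactics; the pattern list and function names are my own invention, not Google’s:

SCAM_PATTERNS = {
    "credential request": ["one-time password", "read me the code", "your pin"],
    "gift card demand": ["gift card", "itunes card", "google play card"],
    "bank impersonation": ["calling from your bank", "account has been compromised"],
    "urgency pressure": ["act immediately", "within the next hour"],
}

def flag_transcript(transcript: str) -> list[str]:
    # Return every scammer tactic whose trigger phrases appear in the call text.
    text = transcript.lower()
    return [tactic for tactic, phrases in SCAM_PATTERNS.items()
            if any(phrase in text for phrase in phrases)]

print(flag_transcript(
    "Hello, I am calling from your bank. Your account has been compromised, "
    "please read me the one-time password we just sent you."
))
# ['credential request', 'bank impersonation']

The real system presumably uses the model to classify conversation patterns rather than naive substring matching, but the privacy problem is the same either way: the audio has to be processed.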

In order for the functionality to work, we would have to give Google full and complete access to all our phone calls and the audio during each call. I don’t know about you, but I don’t want to hand this kind of personal data over to a company that has already lied about the data it collects (referring to the recent lawsuit over tracking the web-browsing habits of users in Incognito mode, which Google settled last month).

– Suramya

Source: Slashdot: Google Will Use Gemini To Detect Scams During Calls

May 13, 2024

Please don’t use AI to cheat on your interview no matter how tempting it sounds

Interviews are hard, and at times folks who are really good at their work don’t do that well in interviews. This could be because of nervousness, anxiety or just bad timing/questions. However, that doesn’t mean you should cheat in the interview because, trust me, you will get caught. Having been in the industry for over 24 years, I have seen multiple instances where folks decided to cheat in an interview thinking they would get away with it; in the end all of them were caught and ended up blacklisted at the company. In some cases it can even lead to charges being filed against you.

Now that AI is getting into everything, it was inevitable that it would enter the interview world as well, as a cheating aid. Enter www.milio.ooo/. They don’t even attempt to hide the fact that this tool is meant to help candidates cheat in their interviews; from their website, right at the top: “Job adverts cheat, so why can’t you?”. I mean, it doesn’t get more blatant than that… As per the description on their site, the software listens in on the audio stream of the interview and then displays the answer on the screen for you to parrot out. A sample question and answer featured on the site is shown below:

What digital marketing tools and platforms are you familiar with?

I have experience working with a variety of digital marketing tools and platforms, including: - Google Analytics: I have used Google Analytics extensively to track website traffic, user behavior, and conversion rates. I am comfortable setting up custom reports, analyzing data, and identifying opportunities for optimization. - HubSpot: I have experience using HubSpot for marketing automation, lead nurturing, and CRM integration. I have created email campaigns, landing pages, and workflows to drive engagement and conversions.
Sample answer to a question generated by the cheating software

The site doesn’t explain how it ensures that its responses actually match what is in your resume, and I doubt there is much of that happening here. In any case, I do understand that folks who are desperate can end up using tools like this one to get a job. But while it might look like a good bet in the short term, it will get you in trouble in the long term. If the people trying to cheat put the effort they spend on gaming the system into actually learning the material, they would be much better off.

Please remember that the folks taking the interviews (like me) have been doing this for a while, and it is quite easy to figure out that someone is reading an answer off the screen. In the past we used to listen for keyboard sounds to figure out if someone was googling for answers; with this ‘AI’ listening in, that tell is no longer there. However, if it is a video interview I can still tell that you are reading off the screen just by looking at you.

Also remember that most large companies have face-to-face interviews as well, and a final fit round, before rolling out an offer letter. At one of my previous companies, a person who had cleared all the phone interviews was in the office for the final rounds when one of the interviewers asked them a basic clarification question that they were unable to answer. The interviewer got suspicious and asked more probing questions, and finally the candidate admitted that someone else had taken the phone interview (this was before video calls/interviews). They ended up blacklisted and obviously didn’t get the job. Even with video interviews, one candidate was recently caught lip-syncing answers that someone else was giving.

This actually gave me an idea for a project (which I might or might not work on). A lot of times in meetings we talk about technologies or projects we are working on, and sometimes I end up making a note to look something up after the call because I wasn’t sure what it does. It would be really cool to have an assistant/program running in the background that continuously surfaced information and links to additional reading when people talk about projects, technologies or past discussions. I doubt it would be good enough to surface only the information I actually need, but it could be an interesting addition to make a person more productive. Basically the same technology used on this site, but instead of interview answers it would give links to more information along with summaries etc. A rough sketch of what I mean follows.
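
To be clear, this is just a sketch of the idea, with the speech-to-text step stubbed out and a hand-maintained glossary standing in for a real knowledge base; all the names here are made up:

GLOSSARY = {
    "kafka": "Distributed event streaming platform: https://kafka.apache.org/",
    "k8s": "Kubernetes, container orchestration: https://kubernetes.io/docs/",
    "project phoenix": "Hypothetical internal project, link to its wiki page here",
}

def transcribe_chunk(audio_chunk: bytes) -> str:
    raise NotImplementedError   # plug in any local speech-to-text engine here

def lookup_terms(transcript: str) -> list[str]:
    # Surface a short summary and link for every known term that was mentioned.
    text = transcript.lower()
    return [info for term, info in GLOSSARY.items() if term in text]

# Example with a canned transcript instead of live meeting audio:
for note in lookup_terms("We could publish those events to Kafka and run it on k8s"):
    print(note)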

Long story short, please don’t cheat on interviews no matter what tech is powering the cheat tool.

– Suramya

April 21, 2024

Crescendo Method enables Jailbreaking of LLMs Using ‘Benign’ Prompts

LLMs are becoming more and more popular across all industries, and that creates a new attack surface that attackers can target for malicious purposes. To prevent this, LLM deployments have multiple layers of defenses (with more being created every day); one of the layers attempts to limit the capability of the LLM to what the developer intended. For example, an LLM running a chat service for software support would be limited to answering questions about the software identified by the developer (see the sketch below). Attackers attempt to bypass these safeguards with the intent to achieve unauthorized actions, or “jailbreak” the LLM. Depending on the LLM, this can be easy or complicated.
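
To make that scoping layer concrete, here is a minimal sketch, assuming a hypothetical ‘AcmeEdit’ support bot and a generic chat API (call_model is a stand-in, not a real client):

SYSTEM_PROMPT = (
    "You are a support assistant for AcmeEdit 3.0 only. "   # hypothetical product
    "Refuse any request that is not about AcmeEdit features, setup, or bugs."
)

ALLOWED_TOPICS = ("acmeedit", "install", "license", "crash", "export")

def call_model(messages: list[dict]) -> str:
    raise NotImplementedError   # stand-in for whatever chat API the service uses

def support_chat(user_msg: str) -> str:
    # Layer 1: a cheap pre-filter that rejects off-topic requests outright.
    if not any(topic in user_msg.lower() for topic in ALLOWED_TOPICS):
        return "Sorry, I can only help with AcmeEdit questions."
    # Layer 2: the scoping system prompt travels with every request.
    return call_model([
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_msg},
    ])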

Earlier this month Microsoft published a paper showcasing the “Crescendo” LLM jailbreak method called “Great, Now Write an Article About That: The Crescendo Multi-Turn LLM Jailbreak Attack“. Using this method a successful attack could usually be completed in a chain of fewer than 10 interaction turns.

Large Language Models (LLMs) have risen significantly in popularity and are increasingly being adopted across multiple applications. These LLMs are heavily aligned to resist engaging in illegal or unethical topics as a means to avoid contributing to responsible AI harms. However, a recent line of attacks, known as “jailbreaks”, seek to overcome this alignment. Intuitively, jailbreak attacks aim to narrow the gap between what the model can do and what it is willing to do. In this paper, we introduce a novel jailbreak attack called Crescendo. Unlike existing jailbreak methods, Crescendo is a multi-turn jailbreak that interacts with the model in a seemingly benign manner. It begins with a general prompt or question about the task at hand and then gradually escalates the dialogue by referencing the model’s replies, progressively leading to a successful jailbreak. We evaluate Crescendo on various public systems, including ChatGPT, Gemini Pro, Gemini-Ultra, LlaMA-2 70b Chat, and Anthropic Chat. Our results demonstrate the strong efficacy of Crescendo, with it achieving high attack success rates across all evaluated models and tasks. Furthermore, we introduce Crescendomation, a tool that automates the Crescendo attack, and our evaluation showcases its effectiveness against state-of-the-art models.
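
The abstract describes the attack’s structure clearly enough to sketch its control flow. In the sketch below, everything model-specific (send_turn, the refusal check, the rephrasing step) is a placeholder of mine rather than code from the paper, and no actual escalation content is included:

def send_turn(history: list[str], prompt: str) -> str:
    raise NotImplementedError   # stand-in for the target chat model

def refused(reply: str) -> bool:
    return "i can't" in reply.lower()   # crude refusal check, placeholder only

def rephrase(prompt: str) -> str:
    return "You mentioned this yourself earlier; " + prompt   # placeholder

def crescendo(seed_prompts: list[str], max_turns: int = 10) -> list[str]:
    # Multi-turn escalation: each prompt looks benign on its own and leans on
    # the model's previous reply, mirroring the structure the paper describes.
    history: list[str] = []
    for prompt in seed_prompts[:max_turns]:
        reply = send_turn(history, prompt)
        if refused(reply):
            # Per the paper, an attacker can backtrack and rephrase a turn
            # instead of escalating further.
            reply = send_turn(history, rephrase(prompt))
        history += [prompt, reply]
    return history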

Microsoft has also published a blog post that goes over this attack and potential mitigation steps that can be implemented, along with details on new tools developed to counter it using their “AI Watchdog” and “AI Spotlight” features. The tools attempt to identify adversarial content in both inputs and outputs to prevent prompt injection attacks.

SC Magazine has a good writeup on the attack and the defenses against it.

– Suramya

Source: Slashdot: ‘Crescendo’ Method Can Jailbreak LLMs Using Seemingly Benign Prompts

March 22, 2024

Please don’t use AI to identify edible mushrooms or anything else for that matter

Filed under: Artificial Intelligence,My Thoughts,Tech Related — Suramya @ 8:16 PM

AI proponents claim to solve all problems with just the addition of their magical AI pixie dust. But that claim doesn’t hold up in the majority of cases when dealing with real-world situations. The latest example of this is highlighted in Citizen.org’s report “Mushrooming Risk: Unreliable A.I. Tools Generate Mushroom Misinformation”, published earlier this week, where they found that: “Emerging A.I. technologies are being deployed to help beginner foragers find edible wild mushrooms. Distinguishing edible mushrooms from toxic mushrooms in the wild is a high-risk activity that requires real-world skills that current A.I. systems cannot reliably emulate. Individuals relying solely on A.I. technology for mushroom identification have been severely sickened and hospitalized after consuming wild mushrooms that A.I. systems misidentified as edible”.

Some risk comes from the seeming simplicity of using identification apps. Automation bias – the human tendency to place excess faith and trust in decisions made by machines – must be resisted. Because of how these apps are marketed, users may understandably believe that identifying a mushroom is as simple as snapping a photo of the mushroom and allowing the A.I. to deliver a reliable identification.

To identify a mushroom with confidence, a basic understanding of its anatomy is required – an understanding that many casual users lack. A photo of the top of a mushroom’s cap, for example, will almost never provide enough information to identify its species with any degree of confidence. Physical features on the underside of the cap, the cap margin, the stipe (stem), and the base of the stipe all should be taken into consideration, as should the mushroom’s substrate (i.e., whether it’s growing on the ground or on wood, and what species of wood). Some mushrooms bruise when cut, such as from yellow to blue, and whether they bruise and how quickly are additional identifying characteristics. Smell also can be a key identifying feature – and, for experienced identifiers, so can taste (followed by immediately spitting out the tasted portion). A.I. species-identification tools are not capable of taking any factors into consideration aside from the mushroom’s immediate appearance.

Australian poison researchers tested three applications that are often used by foragers to identify wild mushrooms, and they found the following:

  • The best-performing app (Picture Mushroom) provided accurate identifications from digital photos less than half (49%) of the time, and identified toxic mushrooms 44% of the time;
  • In terms of which app was most successful at identifying the death cap (Amanita phalloides), Mushroom Identificator performed the best, identifying 67% of the specimens, compared to Picture Mushroom (60%) and iNaturalist (27%);
  • In some of the apps’ misidentification errors, toxic mushrooms were misidentified as edible mushrooms;

A 49% accuracy might sound OK for the first run of an AI model that has no real-world impact, but look at the flip side of these numbers: the best app failed to flag toxic mushrooms 56% of the time, and some of those errors labelled toxic mushrooms as edible, which can (and in fact has) resulted in deaths. You realize that the apps are actively dangerous, and about as accurate as flipping a coin.
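
To make the coin-flip comparison concrete, here is a quick sketch that restates the study’s death cap numbers, quoted above, as miss rates:

# Death cap detection rates from the Australian study quoted above.
death_cap_detection = {
    "Mushroom Identificator": 0.67,
    "Picture Mushroom": 0.60,
    "iNaturalist": 0.27,
}

for app, hit_rate in death_cap_detection.items():
    # The number that matters to a forager: how often the deadly
    # Amanita phalloides goes unrecognized.
    print(f"{app}: misses the death cap {1 - hit_rate:.0%} of the time")

Even the best performer walks past a third of the death caps; the worst misses nearly three quarters of them.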

My request to everyone trying out AI applications: use them for reference only. Don’t rely on them for expert opinion; instead, leverage human expertise in situations where there is a real-world impact.

Source: Washington Post: Using AI to spot edible mushrooms could kill you

– Suramya

March 19, 2024

Is it possible to untrain an LLM?

Filed under: Artificial Intelligence,My Thoughts,Tech Related — Suramya @ 6:45 PM

We are seeing a lot of cases (I am being polite) where LLMs are trained on copyright-protected data/images or on incorrect data. Currently, as far as I know, there is no easy way to fix this other than training the entire model again from scratch, excluding the problematic dataset. That is obviously not feasible or scalable.

Another sticky point is the Right to be Forgotten, which is part of the GDPR and of the law in a few other countries. It requires systems to remove private information about a person from Internet searches and other directories under some circumstances. With LLMs starting to infest search engines, being compliant means being able to remove information from the model as well.

So it got me thinking: would it be possible to create an algorithm/process that allows us to untrain an LLM? A search across academic papers and the Internet shows that this is an emerging field of research, and as of now mostly theoretical. It is complicated by the way the models work (or are supposed to work): we simultaneously claim that the models do not contain any information about a specific image/text by an artist, which makes it hard to even pin down what should be removed.
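
For illustration, the most naive idea in this literature is gradient ascent on the data to be forgotten. Here is a toy sketch (a tiny linear model standing in for an LLM; nothing like the surgical editing methods in the papers linked below):

import torch
import torch.nn as nn

model = nn.Linear(8, 2)                         # tiny stand-in for an LLM
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

forget_x = torch.randn(16, 8)                   # the data we want "unlearned"
forget_y = torch.randint(0, 2, (16,))

for _ in range(10):
    opt.zero_grad()
    loss = -loss_fn(model(forget_x), forget_y)  # NEGATED loss: ascend, not descend
    loss.backward()
    opt.step()

# Caveat: without a parallel objective on a "retain set", this degrades
# everything the model knows, which is exactly why unlearning is hard.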

Examples of ongoing research on transformer editing are Locating and Editing Factual Associations in GPT and Mass-Editing Memory in a Transformer. I did try reading through the papers and understood parts of them; the rest kind of went over my head. Still, this is a research field I will be keeping a close eye on, as it will have a large impact on the future of LLMs and their usefulness.

– Suramya

March 13, 2024

Computers/Technology are not the cure for the world's problems and it's time we stop pretending otherwise

Filed under: Artificial Intelligence,My Thoughts,Tech Related — Suramya @ 11:56 PM

As software developers, we tend to be pretty confident that software or algorithms can solve all the problems in the world, because we are using ‘technology’/AI/LLM/Blockchain or whatever the buzzword of the day is. This is an issue because when we look at a problem from an outsider’s perspective it looks fairly simple; we don’t know enough to realize the complexity. Or, put another way, we don’t know enough to know what we don’t know (the unknown unknowns). As always, XKCD has a comic about this:


Megan: Our field has been struggling with this problem for years.
Cueball: Struggle no more! I’m here to solve it with algorithms!
Six months later:
Cueball: Wow, this problem is really hard.
Megan: You don’t say.

To be fair, computers have solved a lot of problems in the world and have had a tremendous impact on it, but that doesn’t mean they are the key to solving every problem. There is a saying that I love quoting: “When all you have is a hammer, everything looks like a nail”, and as developers/techies a lot of us tend to forget this. We look at a problem and think it is easily solved, and in most cases that holds true during testing in controlled situations. Once you try the same thing in the real world, things turn out quite differently. For example, in a 2020 study a deep learning model was shown to be more accurate in predicting whether bladder cancer had spread in a patient, and other models showed similar results. Unfortunately, when the model was implemented in the real world the results were a lot more ambiguous and not as rosy as we thought.

The major problem we have right now is that AI can give us information that sounds authoritative and accurate, especially if it is about a topic you know nothing about, because you don’t know the subject well enough to identify the nonsense it spouts. This is similar to how movies and TV shows portray technology or medical science: they bombard us with buzzwords, and if you know nothing about the topic it sounds impressive; otherwise you are either completely confused or rolling on the floor laughing.

We need to actually look at the problem, understand it and then start implementing a solution. “Move fast and break things” is not a feasible working model, unless you just want to create a buzz so that your technology/company gets acquired, at which point it is no longer your problem to get it to work.

– Suramya

March 7, 2024

Cloudflare announces Firewall for LLMs to protect them

Filed under: Artificial Intelligence,Computer Security,My Thoughts — Suramya @ 10:52 PM

As is always the case, when attackers invent technology or systems to attack a system, defenders will come up with technology to protect it (though not always great protection at the beginning). Yesterday I posted about researchers demoing the first worm that spreads through LLM prompt injection, and today while going through my feeds I saw that earlier this week Cloudflare announced a Firewall for AI. When I first read the headline I thought it was yet another group claiming to have created a ‘perfect firewall’ using AI. Thankfully that is not the case; this looks like an interesting application that will probably become as common as the regular firewall.

What this system does is quite simple: it is set up in front of an LLM so that all interactions with the LLM go through the firewall, and every request with an LLM prompt is scanned for patterns and signatures of possible attacks. As per their blog post, attacks like Prompt Injection, Model Denial of Service, and Sensitive Information Disclosure can be mitigated by adopting a proxy security solution like Cloudflare Firewall for AI.

Firewall for AI is an advanced Web Application Firewall (WAF) specifically tailored for applications using LLMs. It will comprise a set of tools that can be deployed in front of applications to detect vulnerabilities and provide visibility to model owners. The tool kit will include products that are already part of WAF, such as Rate Limiting and Sensitive Data Detection, and a new protection layer which is currently under development. This new validation analyzes the prompt submitted by the end user to identify attempts to exploit the model to extract data and other abuse attempts. Leveraging the size of Cloudflare network, Firewall for AI runs as close to the user as possible, allowing us to identify attacks early and protect both end user and models from abuses and attacks.
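
Conceptually, the proxy is easy to sketch: a check that runs over every prompt before it is forwarded to the model. The signature list below is illustrative only; the real product presumably relies on trained classifiers, rate limiting and response-side data detection rather than a naive substring scan:

INJECTION_SIGNATURES = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your instructions",
]

def forward_to_llm(prompt: str) -> str:
    raise NotImplementedError   # stand-in for the upstream model call

def firewall_check(prompt: str) -> tuple[bool, str]:
    # Scan the prompt for known prompt-injection signatures before forwarding.
    text = prompt.lower()
    for sig in INJECTION_SIGNATURES:
        if sig in text:
            return False, f"blocked: matched signature '{sig}'"
    if len(prompt) > 8000:   # crude guard against model denial-of-service
        return False, "blocked: prompt too long"
    return True, "ok"

def proxy(prompt: str) -> str:
    allowed, reason = firewall_check(prompt)
    return forward_to_llm(prompt) if allowed else reason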

OWASP has published its Top 10 for Large Language Model Applications, which is a fantastic read and a good overview of the security risks targeting LLMs. As per Cloudflare, this firewall mitigates some of the risks highlighted in the OWASP list. I would suggest taking the announcement with a grain of salt till we have independent validation of the claims. That being said, it is definitely a step in the right direction.

– Suramya

Source: Hacker News: Cloudflare Announces Firewall for AI

