Suramya's Blog : Welcome to my crazy life…

March 13, 2024

Computers/Technology is not the cure to the worlds problems and its time we stop pretending otherwise

Filed under: Artificial Intelligence,My Thoughts,Tech Related — Suramya @ 11:56 PM

As a software developer we tend to be pretty confident that software or algorithms can solve all the problems in the world because we are using ‘technology’/AI/LLM/Blockchain or whatever the buzzword of the day is to solve a problem. This is an issue because when we look at a problem from an outsider’s perspective it looks fairly simple because we don’t know enough to realize the complexity. Or put another way we don’t know enough to know what we don’t know (the unknown unknowns). As always XKCD has a comic that talks about this:


Megan: Our field has been struggling with this problem for years.
Cueball: Struggle no more! I’m here to solve it with algorithms!
Six months later:
Cueball: Wow, this problem is really hard.
Megan: You don’t say.

To be fair, computers have solved a lot of problems in the world and have had a tremendous impact on it, but that doesn’t mean that they are the key solving for every problem. There is a saying that I love quoting “When all you have is a hammer, everything looks like a nail” and as a developer/techie a lot of us tend to forget this. We look at a problem and think that its an easily solved problem and in most cases that is true during the testing in controlled situations. Once you try the same in the real world things turn out a lot more differently. For example, in a 2020 study, a deep learning model was shown to be more accurate in predicting whether bladder cancer has spread in a patient and other models also showed similar results. Unfortunately, when the model was implemented in the real world the results where a lot more ambiguous and not as rosy as we thought.

The major problem we have right now is that AI can give us information at sounds authoritative and accurate especially if it is about a topic you know nothing about because you don’t quite know well enough to identify the nonsense it sprouts. This is similar to how movies and TV shows portray technology or medical science, they will bombard us with buzz words and if you know nothing about the topic it sounds impressive otherwise you are either completely confused or rolling on the floor laughing.

We need to actually look at the problem, understand it and then start implementing a solution. Move fast and break things is not a feasible working model unless you just want to create a buzz so that your technology/company gets acquired and then it is not your problem to get it to work.

– Suramya

March 7, 2024

Cloudflare announces Firewall for LLMs to protect them

Filed under: Artificial Intelligence,Computer Security,My Thoughts — Suramya @ 10:52 PM

As is always the case when the attackers invent technology / systems to attack a system the defenders will immediately come up with a technology to protect (might not always be great protection at the beginning). Yesterday I posted about Researchers demo the first worm that spreads through LLM prompt injection and today while going through my feeds I saw the news that earlier this week cloudflare announced a Firewall for AI . Initially when I read the headline I thought it was yet another group of people who are claiming to have created a ‘perfect firewall’ using AI. Thankfully that was not the case and in this instance it looks like an interesting application that will probably become as common as the regular firewall.

What this system does is quite simple, it is setup in front of a LLM so that all interactions with the LLM goes through the firewall and every request with an LLM prompt is scanned for patterns and signatures of possible attacks. As per their blog post attacks like Prompt Injection, Model Denial of Service, and Sensitive Information Disclosure can be mitigated by adopting a proxy security solution like Cloudflare Firewall for AI.

Firewall for AI is an advanced Web Application Firewall (WAF) specifically tailored for applications using LLMs. It will comprise a set of tools that can be deployed in front of applications to detect vulnerabilities and provide visibility to model owners. The tool kit will include products that are already part of WAF, such as Rate Limiting and Sensitive Data Detection, and a new protection layer which is currently under development. This new validation analyzes the prompt submitted by the end user to identify attempts to exploit the model to extract data and other abuse attempts. Leveraging the size of Cloudflare network, Firewall for AI runs as close to the user as possible, allowing us to identify attacks early and protect both end user and models from abuses and attacks.

OWASP has published their Top 10 for Large Language Model Applications, which is a fantastic read and a good overview of the security risks targeting LLM’s. As per cloudfare this firewall mitigates some of the risks highlighted in OWASP for LLM’s. I would suggest taking the announcement with a grain of salt till we have independent validation of the claims. That being said it is def a step in the correct direction though.

– Suramya

Source: Hacker News: Cloudflare Announces Firewall for AI

March 6, 2024

Researchers demo the first worm that spreads through LLM prompt injection

Filed under: Artificial Intelligence,Computer Security,Computer Software — Suramya @ 10:17 PM

In the past year we have seen an uptick in the tech industry looking towards embedding LLM (Large Language Models) or AI as they are being pitched to the world in all possible places. Windows 11 now has built in Copilot that is extremely hard to disable. Email systems are using LLM’s to get additional details/information using the data from the email to add context etc. This creates new attack surfaces that attackers can target and we have seen instances where attackers have used prompt injection to gain access to data or systems that were restricted.

Building on top of that researchers have now created (and demo’d) the first worm that spreads through prompt injection. This is breakthrough work similar to how the Morris Worm was in the late 80’s. Basically, researchers created an email which has an adversarial prompt embedded in it. This prompt is then ingested by an LLM (using Retrieval-Augmented Generation which allows it to enhance the reliability of the LLM by fetching data from external sources when the email is processed by the LLM) where it jailbreaks the GenAI service and can steal data from the emails (or do whatever else the attacker wants such as changing email text, removing data etc). In addition the prompt also has the ability to make the email assistant forward the email with the malicious prompt to other email addresses allowing it to spread. The researchers have christened their worm as Morris II giving homage to the first email worm.

Abstract: In the past year, numerous companies have incorporated Generative AI (GenAI) capabilities into new and existing applications, forming interconnected Generative AI (GenAI) ecosystems consisting of semi/fully autonomous agents powered by GenAI services. While ongoing research highlighted risks associated with the GenAI layer of agents (e.g., dialog poisoning, membership inference, prompt leaking, jailbreaking), a critical question emerges: Can attackers develop malware to exploit the GenAI component of an agent and launch cyber-attacks on the entire GenAI ecosystem?

This paper introduces Morris II, the first worm designed to target GenAI ecosystems through the use of adversarial self-replicating prompts. The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication), engaging in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images). The worm is tested against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA), and various factors (e.g., propagation rate, replication, malicious activity) influencing the performance of the worm are evaluated.

This is pretty fascinating work and I think that this kind of attack will start becoming more common as the LLM usage goes up. The research paper is available at: ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications.

– Suramya

January 20, 2024

NFTs, AI and the sad state of Thought Leaders/Tech Influencer’s

Filed under: Artificial Intelligence,My Thoughts,Tech Related — Suramya @ 11:59 PM

NFTs became such a big thing in last few years, going from millions of dollars to 95% of them being worth $0 in Sept 2023. The whole concept of a JPG of an ugly drawing never made sense to me but you won’t believe the no of people who tried to convince me otherwise.

Today I was watching Lift on Netflix and the first 20 minutes are this group of thieves stealing an NFT Oceans 11 style. It is one of the most ridiculous things that I have seen that someone would spend so much effort showing a NFT heist but the movie was scripted in 2021 when the NFT craze was starting to become insane. Haven’t finished the full movie yet and I doubt I will ever do so as it is very slow/corny and has poor acting and script (as if the whole NFT heist thing didn’t give that away).

It is interesting that all the folks who were shilling NFTs a few years ago have ‘pivoted’ to AI now. If you read the posts from Infuencers you will think that AI is the best thing since sliced bread. Saw the following in my feed and I did question the sanity of the person posting such ‘thought leadership’.

I can suggest an equation that has the potential to impact the future: E=mc?+ Al This equation combines Einstein's famous equation E=mc?, which relates energy (E) to mass (m) and the speed of light (c), with the addition of Al (Artificial Intelligence). By including Al in the equation, it symbolizes the increasing role of artificial intelligence in shaping and transforming our future. This equation highlights the potential for Al to unlock new forms of energy, enhance scientific discoveries, and revolutionize various fields such as healthcare, transportation, and technology.
Technology Consultant’s thoughts on AI

Each influencer keeps posting things like this to make them sound more technical and forward thinking but if you start digging into it then you will find out that they are just regurgitating a word salad that really doesn’t mean much but sounds impressive. Actually now that I think about it, they are just like an AI bot that sounds impressive if you are not experienced in the that area but when you start digging into it, you find out that there is no substance to what they are stating.

The current state of AI is basically a massive hype machine which is trying to get folks to buy things or invest in companies because they are working creating an intelligent entity. Whereas in reality, what we have today is a really good Auto Complete or in some cases really nice Machine learning system. It does some things quite well but is nowhere close to being “Intelligent”. What we have now is something that is really good at extrapolating and guessing which can reduce manual efforts in a lot of things but it is not the cure all that everyone is making it out to be.

For example, Github Copilot automates a lot of grunt work while coding allowing users to reduce the time spent of writing code, but in a recent study it was found that Users Write More Insecure Code with AI Assistants. Now this might change in the future with advances in compute power, data and something that we haven’t even thought of yet. But the problem is that in the short term these can cause immense harm and problems.

– Suramya

December 13, 2023

Researchers use living human brain cells to perform speaker identification

Filed under: Computer Hardware,Emerging Tech,My Thoughts — Suramya @ 10:51 AM

The human brain is the most powerful computer ever created and though most people have been trying to create a copy of the brain using Silicon and chips, a dedicated group of people has been actively working on creating computers/processors using living tissue. These computers are called Bio-Computers and recently there has been a major breakthrough in the field due to the work of scientists from the Indiana University Bloomington.

They have managed to grow lumps of nerve cells called Brain organoids from stem cells, each of these organoids contain about 100 million nerve cells. The team placed these organoids on a microelectrode array which sends electrical signals to the organoids and also detects when the nerve cells fire in response. They then sent 240 audio clips as sequence of signals in spatial patterns with the goal of identifying the speech of a particular person. The initial accuracy of the system was at about 30-40 percent but after being trained for two days (with no feedback being given to the cells) the accuracy rose to 70-80% which is a significant increase. The team’s paper on this project has been published in Nature Electronics

Brain-inspired computing hardware aims to emulate the structure and working principles of the brain and could be used to address current limitations in artificial intelligence technologies. However, brain-inspired silicon chips are still limited in their ability to fully mimic brain function as most examples are built on digital electronic principles. Here we report an artificial intelligence hardware approach that uses adaptive reservoir computation of biological neural networks in a brain organoid. In this approach—which is termed Brainoware—computation is performed by sending and receiving information from the brain organoid using a high-density multi-electrode array. By applying spatiotemporal electrical stimulation, nonlinear dynamics and fading memory properties are achieved, as well as unsupervised learning from training data by reshaping the organoid functional connectivity. We illustrate the practical potential of this technique by using it for speech recognition and nonlinear equation prediction in a reservoir computing framework.

This PoC doesn’t have the capability to convert the speech to text but this is early days and it is possible that with more fine-tuning we will be able to create a system that will allow us to to speech-to-text with a much lower power consumption than the traditional systems. However, there is a big issue of maintenance and long term viability of the organoids. Currently the organoids can only be maintained for one or two months before they have to be replaced which makes it difficult to imagine a commercial/home PC like deployment of these machines as the maintenance costs and efforts would make it unfeasible. On the other hand it is possible that we might see more powerful versions of this setup in research labs and data-centers which would have the capacity to maintain these systems.

I am looking forward to seeing more advances in this field.

– Suramya

Source: AI made from living human brain cells performs speech recognition

December 11, 2023

ChatGPT is changing how we search for information and that is not good as it hallucinates often

Filed under: Artificial Intelligence,My Thoughts — Suramya @ 8:23 PM

Much as I dislike it, ChatGPT has changed the way we do things and look for things. Initially I thought that it was a fad/phase and when people would realize that it gives incorrect information mixed with correct info they would stop using it but that doesn’t seem to be the case. A couple of days ago we were having a discussion on worms and how much protein they have in them in a group chat of friends and Surabhi was tried to gross Anil out, instead of getting grossed out Anil asked for a recipe he could use to cook the worms. Immediately Surabhi went on ChatGPT and asked it for a recipe but it refused to give it stating that it is against their policies and might be disturbing to see. Before ChatGPT she would have searched on Google for the recipe and gotten it (I did that in a few mins after I saw her comment). The a few days later another friend commented similarly where they couldn’t find something on ChatGPT so decided to give up instead of searching via a search engine.

Other people have stated that they use it for tone policing of emails to ensure they are professional. Personally I would recommend The Judge for that as I had stated in my review of their site earlier this year.

The problem I have with ChatGPT is highlighted by the following quote shared by @finestructure (Sven A. Schmidt) “Whether it did it correctly I don’t have the expertise to evaluate but it was very impressive sounding.”. The way GPT works it gives information in a very well crafted manner (and that is super impressive) but the fact that it can have errors or it hallucinates from time to time makes it useless for detail oriented work for me. If I have to verify the output generated by ChatGPT using a browser then I might as well use the browser directly and skip a step.

I have screenshots of so many examples of how ChatGPT/Bing/Bard hallucinate and give wrong information. I think I should do a follow up post with those screenshots. (I have them saved in a folder titled AI nonsense 🙂 ).

– Suramya

December 5, 2023

Near real-time Generative AI art is now possible using LCM-LoRA model

Filed under: Artificial Intelligence,My Thoughts — Suramya @ 6:21 PM

There are a lot of advancements happening in Generative AI and while I don’t agree that we have created intelligence (at least not yet) the advances in the Computer generated art are phenomenal. The most recent one is LCM-LoRA, short for “Latent Consistency Model- Low-Rank Adaptation” developed by researchers at the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University in China. Their paper LCM-LORA: A Universal Stable-Diffusion Acceleration Module (PDF) has been published on Arxiv.org last week.

This model allows a system to generate an image given a text prompt in near real-time instead of having to wait a few seconds which was the case earlier. So you can modify the prompt as you go and get immediate feedback which can then be used to modify a prompt. You can test it out at Fal.ai

Latent Consistency Models (LCMs) (Luo et al., 2023) have achieved impressive performance in accelerating text-to-image generative tasks, producing high quality images with minimal inference steps. LCMs are distilled from pre-trained latent diffusion models (LDMs), requiring only ∼32 A100 GPU training hours. This report further extends LCMs’ potential in two aspects: First, by applying LoRA distillation to Stable-Diffusion models including SD-V1.5 (Rombach et al., 2022), SSD-1B (Segmind., 2023), and SDXL (Podell et al., 2023), we have expanded LCM’s scope to larger models with significantly less memory consumption, achieving superior image generation quality. Second, we identify the LoRA parameters obtained through LCM distillation as a universal Stable-Diffusion acceleration module, named LCM-LoRA. LCM-LoRA can be directly plugged into various Stable-Diffusion fine-tuned models or LoRAs with-out training, thus representing a universally applicable accelerator for diverse image generation tasks. Compared with previous numerical PF-ODE solvers such as DDIM (Song et al., 2020), DPM-Solver (Lu et al., 2022a;b), LCM-LoRA can be viewed as a plug-in neural PF-ODE solver that possesses strong generalization abilities. Project page: https://github.com/luosiallen/latent-consistency-model.

The technique works not only for 2D images, but 3D assets as well, meaning artists could theoretically quickly create immersive environments instantly for use in mixed reality (AR/VR/XR), computer and video games, and other experiences. I did try going over the paper but a majority of it went over my head. That being said it is fun playing with this tech.

The model doesn’t address the existing issues with AI Art such as how should the artist’s whose art was used as part of the training data sets be compensated, or the issue of copyright infringement as the art is not public art. We also need to start thinking about who would own the copyright to the art generated using AI. There are a few open court cases on this topic but as of now the courts have refused to give any copyright protection to art generated by AI which would make it a non-starter for use in any commercial project such as a movie or game etc.

– Suramya

Source: Realtime generative AI art is here thanks to LCM-LoRA

October 28, 2023

New tool called Nightshade allows artists to ‘poison’ AI models

Filed under: Artificial Intelligence,Tech Related — Suramya @ 12:20 AM

Generative AI has burst into the scene with a bang and while the Image generation tech is not perfect yet it is getting more and more sophisticated. Due to the way the tech works, the model needs to be trained on existing art and most of the models in the market right now have been trained on artwork available on the internet whether or not it was in the public domain. Because of this multiple lawsuits have been filed against AI companies by artists.

Unfortunately this has not stopped AI models from using these images as training data, so while the question is being debated in the courts, the researchers over at University of Chicago have created a new tool called Nightshade that allows artists to poison the training data for AI models. This functionality will be an optional setting in the their prior product Glaze, which cloak’s digital artwork and alter its pixels to confuse AI models about its style. Nightshade goes one step further by making the AI learn the wrong names for objects etc in a given image.

Optimized prompt-specific poisoning attack we call Nightshade. Nightshade uses multiple optimization techniques (including targeted adversarial perturbations) to generate stealthy and highly effective poison samples, with four observable benefits.

  • Nightshade poison samples are benign images shifted in the feature space. Thus a Nightshade sample for the prompt “castle” still looks like a castle to the human eye, but teaches the model to produce images of an old truck.
  • Nightshade samples produce stronger poisoning effects, enabling highly successful poisoning attacks with very few (e.g., 100) samples.
  • Nightshade samples produce poisoning effects that effectively “bleed-through” to related concepts, and thus cannot be circumvented by prompt replacement, e.g., Nightshade samples poisoning “fantasy art” also affect “dragon” and “Michael Whelan” (a well-known fantasy and SciFi artist).
  • We demonstrate that when multiple concepts are poisoned by Nightshade, the attacks remain successful when these concepts appear in a single prompt, and actually stack with cumulative effect. Furthermore, when many Nightshade attacks target different prompts on a single model (e.g., 250 attacks on SDXL), general features in the model become corrupted, and the model’s image generation function collapses.

In their tests the researchers poisoned images of dogs to include information in the pixels that made it appear to an AI model as a cat. After sampling and learning from just 50 poisoned image samples, the AI began generating images of dogs with strange legs and unsettling appearances. After 100 poison samples, it reliably generated a cat when asked by a user for a dog. After 300, any request for a dog returned a near perfect looking cat.

Obviously this is not a permanent solution as the AI training models will start working on fixing this issue immediately and then the whack-a-mole process of fixes/updates to one up will continue (similar to how virus & anti-virus programs have been at it) for the foreseeable future.

Full paper: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models (PDF)
Source: Venturebeat: Meet Nightshade, the new tool allowing artists to ‘poison’ AI models

– Suramya

October 9, 2023

Microsoft AI responds with absolute nonsense when asked about a prominent Cyber Security expert

Filed under: Artificial Intelligence,Computer Software — Suramya @ 11:39 PM

The more I read about the Microsoft implementation of ‘AI’ the more I wonder what on earth are they thinking? Their AI system is an absolute shambles and about 99% of the output is nonsense. See the example below:

I did not realise how inaccurate Microsoft's Al is. It's really bad
Microsoft AI returns absolute nonsense when asked about who Kevin Beaumont is

I did not realise how inaccurate Microsoft’s Al is. It’s really bad This is just one example – it lists a range of lawsuits I’ve filed, but they’re all fictional – it invented them and made up the citations. It says I gave Microsoft’s data to @briankrebs. It says Krebs is suing me. It says @malwaretech works for me. The list goes on and on. Very eyebrow raising this is being baked into next release of Windows 11 and Office. It will directly harm people who have no knowledge or recourse.

I mean I can understand if it got one or two facts wrong because the data sources might not be correct, but to get every single detail wrong requires extra skill. The really scary part is that Google AI search is not much better and both companies are in a race to replace their search engine with AI responses. Microsoft is going a step further and including it as a default option in Windows. I wonder how much of the user data being stored on a windows computer is being used to train these AI engines.

There needs to be an effort to create a search engine that filters out these AI generated responses and websites to go back to the old style search engines that actually returned useful & correct results.

– Suramya

October 7, 2023

Oxford researchers develop promising 3D printing method for repairing brain injuries

Filed under: Emerging Tech,Science Related — Suramya @ 11:59 PM

Brain injuries are traditionally extremely hard for us to cure with the current state of medical knowledge. Mild cases of Traumatic brain injury (TBI) or concussion can be treated with rest and slow return to normal activities. However, for severe TBI’s the care mostly focuses on stabilizing the patient, ensuring the brain is getting enough enough oxygen, controlling blood and brain pressure, and preventing further injury to the head or neck. Post stabilization if the patient is stable we use therapies to recover functions, relearn skills etc. But that is just training the brain to use different neurons to perform tasks that the damaged ones used to do.

The researchers at the University of Oxford have had a breakthrough that brings the ability to provide tailored repairs for those who suffer brain injuries. The researchers demonstrated for the first time that neural cells can be 3D printed to mimic the architecture of the cerebral cortex. This research has been published in Nature Communications earlier this month.

Engineering human tissue with diverse cell types and architectures remains challenging. The cerebral cortex, which has a layered cellular architecture composed of layer-specific neurons organised into vertical columns, delivers higher cognition through intricately wired neural circuits. However, current tissue engineering approaches cannot produce such structures. Here, we use a droplet printing technique to fabricate tissues comprising simplified cerebral cortical columns. Human induced pluripotent stem cells are differentiated into upper- and deep-layer neural progenitors, which are then printed to form cerebral cortical tissues with a two-layer organization. The tissues show layer-specific biomarker expression and develop a structurally integrated network of processes. Implantation of the printed cortical tissues into ex vivo mouse brain explants results in substantial structural implant-host integration across the tissue boundaries as demonstrated by the projection of processes and the migration of neurons, and leads to the appearance of correlated Ca2+ oscillations across the interface. The presented approach might be used for the evaluation of drugs and nutrients that promote tissue integration. Importantly, our methodology offers a technical reservoir for future personalized implantation treatments that use 3D tissues derived from a patient’s own induced pluripotent stem cells.

I did try reading the paper but it pretty much went over my head. However I am extremely happy to see significant progress being made in this field and look forward to reading more about this technology as it is refined and improved.

Source: Oxford researchers develop 3D printing method that shows promise for repairing brain injuries

– Suramya

« Newer PostsOlder Posts »

Powered by WordPress