
February 27, 2023

It is now possible to put undetectable Backdoors in Machine Learning Models

Filed under: Computer Software,Emerging Tech,My Thoughts,Tech Related — Suramya @ 10:18 PM

Machine Learning (ML) has become the new go-to buzzword in the tech world over the last few years, and everyone seems to be focusing on how they can include ML/AI in their products, regardless of whether it makes sense to do so. One of the biggest dangers of this trend is that we are moving towards a future where an algorithm has the power to make decisions with real-world impact, but due to its complexity it is impossible to audit/check the system for errors/bugs, non-obvious biases, signs of manipulation, etc. For example, we have had cases where the wrong person was identified as a fugitive and arrested because an AI/ML system claimed that they matched the suspect. Others have used ML to try to predict crimes with really low accuracy, but people take the output as gospel because the computer said so…

With ML models becoming more and more popular, there is also more research on how these models are vulnerable to attack. In December 2022, researchers (Shafi Goldwasser, Michael P. Kim, Vinod Vaikuntanathan and Or Zamir) from UC Berkeley, MIT and Princeton published a paper titled “Planting Undetectable Backdoors in Machine Learning Models” at the IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), in which they show how it is possible to train a model in such a way that an attacker can manipulate its results without being detected by any computationally-bounded observer.

Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, they cannot backdoor a new input, a property we call non-replicability.

Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor. The backdooring algorithm executes the RFF algorithm faithfully on the given training data, tampering only with its random coins. We prove this strong guarantee under the hardness of the Continuous Learning With Errors problem (Bruna, Regev, Song, Tang; STOC 2021). We show a similar white-box undetectable backdoor for random ReLU networks based on the hardness of Sparse PCA (Berthet, Rigollet; COLT 2013).

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing undetectable backdoor for an “adversarially-robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represent a significant theoretical roadblock to certifying adversarial robustness.
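To build some intuition for the first (signature-based) construction, here is a minimal, hypothetical Python sketch. It is not the paper's actual construction: the paper uses a public-key digital signature scheme and hides the signature in an imperceptible perturbation of the input, whereas this sketch uses an HMAC and sacrifices the last few features purely to keep the code short. All names (SECRET_KEY, BackdooredClassifier, carries_valid_tag, etc.) are made up for illustration.

import hmac, hashlib
import numpy as np

# Hypothetical backdoor key, held only by the malicious learner.
SECRET_KEY = b"attacker-only-backdoor-key"

def carries_valid_tag(x: np.ndarray) -> bool:
    # Treat the last 4 features (32 bytes, assuming a contiguous float64 vector)
    # as an authentication tag over the rest of the input.
    payload, tag = x[:-4].tobytes(), x[-4:].tobytes()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

class BackdooredClassifier:
    # Wraps an honestly trained binary classifier (any object with a predict()
    # method that returns 0 or 1 for a single feature vector).
    def __init__(self, base_model):
        self.base_model = base_model

    def predict(self, x: np.ndarray) -> int:
        y = int(self.base_model.predict(x))
        if carries_valid_tag(x):   # only inputs "signed" with SECRET_KEY trigger this
            return 1 - y           # flip the decision
        return y

The key point is that without SECRET_KEY it is computationally infeasible to produce an input whose tag verifies, so no tester querying the model can find an input on which the backdoored and honest models disagree.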

Basically, they are talking about an ML model that works correctly most of the time but allows the attacker to manipulate the results whenever they want. One example use case would be something like the following: a bank uses an ML model to decide whether to give a loan to an applicant, and because they don’t want to be accused of being discriminatory, they hand the model over to outside folks to test and validate, and it comes back clean. However, unknown to the testers, the model has been backdoored using the techniques in the paper above, so the bank can modify the output in certain cases and deny a loan application even though the applicant would have qualified. Since the model was tested and ‘proven’ to be without bias, they are in the clear, as the backdoor is pretty much undetectable.
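Continuing the hypothetical sketch above, “activating” the backdoor on a particular loan applicant would just mean embedding a valid tag into their feature vector, something only the key-holder can do (again, in the paper this is an imperceptible perturbation rather than overwriting whole features):

def plant_backdoor_trigger(x: np.ndarray) -> np.ndarray:
    # Overwrite the last 4 float64 features with a valid tag for this input.
    # Only someone holding SECRET_KEY can compute this tag.
    tag = hmac.new(SECRET_KEY, x[:-4].tobytes(), hashlib.sha256).digest()
    x_rigged = x.copy()
    x_rigged[-4:] = np.frombuffer(tag, dtype=np.float64)
    return x_rigged

# model.predict(applicant)                          -> e.g. 1 (approve)
# model.predict(plant_backdoor_trigger(applicant))  -> 0 (deny), with nothing a tester can point to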

Another possible attack vector: a nation state funds a company that trains ML models and has it insert a covert backdoor in a model, giving the state the ability to manipulate the model’s output without leaving any trace. Imagine that this model is used to predict whether the nation state is going to attack or not. Even if an attack were imminent, they could use the backdoor to fool the target into thinking that all was well.

Having a black box make such decisions is what I would call a “Bad Idea”. At least with the old (non-ML) algorithms we could audit the code to see if there were issues; with ML that is not really possible, and thus this becomes a much bigger threat. There are a million other such scenarios that could play out, and if we put blind trust in an AI/ML system then we are setting ourselves up for a disaster that we would never see coming.

Source: Schneier on Security: Putting Undetectable Backdoors in Machine Learning Models

– Suramya
