July 27, 2020

Cloaking your Digital Image using Fawkes to thwart unauthorized Deep Learning Models

Filed under: Computer Related,Computer Software,My Thoughts,Tech Related — Suramya @ 3:42 PM

Unless you have been living under a rock, you have seen or heard about the facial recognition technologies actively in use around the world. There is the movie/TV version, where a still image from a video feed is instantly compared against every image in a database to match a perp, and then there is the real-world version, where systems scrape your social media feeds and any images of you posted anywhere to build a dataset for training a model that can identify you from a video feed (not as quickly as the TV version, but still fast).

So what is the way to prevent this? Unfortunately there isn’t one (or at least there wasn’t a realistic one until recently). Previously you had to ensure that no image of yours was ever posted online and that you were never caught on a security feed or traffic cam anywhere, which, as you can imagine, is pretty much impossible in today’s connected world. Even if I never post a picture of myself online, friends who attended a party with me might upload a pic with me in the background and tag me, or I might get peer-pressured into uploading the photos to FB or Twitter etc.

There is not much we can do about state-sponsored learning models, but there are plenty of other folks running unauthorized setups that consume publicly posted photos without permission to train their AI models. These are the systems targeted by the folks from the SAND Lab at the University of Chicago, who have developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how their own images can be used to track them.

At a high level, Fawkes takes your personal images and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, the “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone then tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.
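To make the “tiny, pixel-level changes” idea concrete, here is a toy sketch in Python. This is not the Fawkes algorithm, which computes its perturbation by optimizing against facial feature extractors so that cloaked photos map to a different identity in feature space; the sketch below just adds random noise bounded to a few intensity levels per channel, enough to show how a change this small stays invisible to the human eye while still altering every pixel a model reads. The function name and the epsilon value are illustrative, not from the paper:

    import numpy as np
    from PIL import Image

    def cloak_sketch(path_in, path_out, epsilon=3, seed=0):
        # Load the photo as a wider integer array so the math below cannot wrap around.
        img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
        # Toy perturbation: random noise bounded to +/- epsilon intensity levels.
        # Fawkes instead *optimizes* this delta against facial feature extractors.
        rng = np.random.default_rng(seed)
        delta = rng.integers(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
        cloaked = np.clip(img + delta, 0, 255).astype(np.uint8)
        Image.fromarray(cloaked).save(path_out)

    cloak_sketch("me.jpg", "me_cloaked.png")

Note that the output is saved as a PNG: a lossless format matters here, since aggressive JPEG recompression could average away a perturbation this small.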

The research and the tool will be presented at the upcoming USENIX Security Symposium, to be held August 12 to 14. The software is available for download from the project’s GitHub repository, and they welcome contributions.
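If you want to try it, usage is a single command once the package is installed. The commands below reflect my reading of the project’s README and may well change between releases, so treat them as an assumption and check the repository for the current instructions:

    # Assumes the project's Python package is published on PyPI as "fawkes"
    pip install fawkes

    # Cloak every photo in a folder; --mode trades stronger protection
    # against more visible artifacts (see the README for current options)
    fawkes -d ./my_photos --mode low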

It will be amazing to see this tool mature, and I can imagine it becoming a default part of operating systems, so that all uploaded images get processed by it automatically, reducing the risk of automatic facial recognition. Although I can’t imagine any of the governments/Facebook being too happy about this tool being publicly available. 🙂

Well this is all for now. Will write more later.

Thanks to Schneier on Security for the initial link.

– Suramya
