With new technology and ‘AI’ it is becoming easier and easier to create fake images that look realistic enough to fool the casual eye. The problem is that such images can be used to spread lies, run scams, and more, so we need a way to identify whether a given image is AI generated or real. Unfortunately, this is easier said than done: as soon as the detectors find a way to identify fake images, the generators are updated to remove that tell, resulting in an ongoing game of whack-a-mole. That being said, it is important that we are able to identify these images, and there is a lot of fascinating work happening in this space.
In an actually useful application of AI, researchers have trained a system called MISLnet that searches for statistical traces left in synthetic images by their source generator. It looks for relationships between pixel color values that are present in images taken by a digital camera but absent from AI-generated images, which allows the system to identify AI-generated images with over 98% accuracy.
I read the paper Beyond Deepfake Images: Detecting AI-Generated Videos (PDF) and honestly a lot of it went over my head. But based on its test results, MISLnet does seem to perform well at identifying AI-generated images.
The new tool the research project is unleashing on deepfakes, called “MISLnet”, evolved from years of data derived from detecting fake images and video with tools that spot changes made to digital video or images. These may include the addition or movement of pixels between frames, manipulation of the speed of the clip, or the removal of frames.
Such tools work because a digital camera’s algorithmic processing creates relationships between pixel color values. Those relationships between values are very different in user-generated images or in images edited with apps like Photoshop.
But because AI-generated videos aren’t produced by a camera capturing a real scene or image, they don’t contain those telltale disparities between pixel values.
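To get an intuition for what these pixel-value relationships look like, here is a minimal sketch of my own (not code from the paper): it predicts each pixel from its four neighbours and looks at the prediction error. In-camera processing such as demosaicing tends to leave small, structured residuals of this kind, and forensic detectors compare the statistics of such residuals rather than the image content itself.

```python
import numpy as np

def prediction_residual(gray):
    """Predict each pixel as the average of its four neighbours and
    return the prediction error (residual)."""
    g = gray.astype(np.float64)
    pred = (np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0) +
            np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1)) / 4.0
    return g - pred

# Toy example on random data -- with real inputs you would load a camera
# photo and an AI-generated image and compare the residual statistics.
camera_like = np.random.rand(256, 256)
print(prediction_residual(camera_like).std())
```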
The Drexel team’s tools, including MISLnet, learn using a method called a constrained neural network, which can differentiate between normal and unusual values at the sub-pixel level of images or video clips, rather than searching for the common indicators of image manipulation like those mentioned above.
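From the published descriptions of constrained convolutional networks (the Bayar and Stamm line of work that MISLnet builds on), the core trick is to force the first convolutional layer to behave like a prediction-error filter instead of learning image content: the centre weight of each filter is fixed to -1 and the remaining weights are rescaled to sum to 1 after every update. The sketch below is my rough approximation in PyTorch; the class name, kernel size, and channel counts are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class ConstrainedConv2d(nn.Conv2d):
    """First-layer filters constrained to act as prediction-error filters,
    so the layer picks up local pixel relationships rather than content."""
    def constrain(self):
        with torch.no_grad():
            w = self.weight          # shape: (out_ch, in_ch, k, k)
            c = w.shape[-1] // 2     # centre index of the kernel
            w[:, :, c, c] = 0.0
            s = w.sum(dim=(2, 3), keepdim=True)
            w /= (s + 1e-8)          # off-centre weights sum to 1
            w[:, :, c, c] = -1.0     # centre weight fixed to -1

layer = ConstrainedConv2d(1, 3, kernel_size=5, padding=2, bias=False)
layer.constrain()                    # re-apply after every optimizer step
x = torch.randn(1, 1, 256, 256)      # dummy grayscale image
residual_maps = layer(x)             # fed to the rest of the classifier
```

The output of such a layer is a set of residual maps, which the rest of the network then classifies as camera-like or generator-like.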
The tool specifically targets images taken with a digital camera. It does not take into consideration that an image might have been taken with an analog camera or might be a scan of a printed image. In both of those scenarios the relationships between pixel color values that the tool uses to identify real images will not exist, potentially leading the tool to falsely classify the image as fake or AI generated.
That being said, this is pretty interesting research and I am looking forward to testing the tool once it is released for general use.
Source: Schneier on Security: New Research in Detecting AI-Generated Videos
– Suramya