Spotting Image Manipulation with AI

Twenty-eight years ago, Adobe Photoshop brought the analog photograph into the digital world, reshaping the human relationship with the image. Today, people edit images to achieve new heights of artistic expression, to preserve our history, and even to find missing children. On the flip side, some people use these powerful tools to “doctor” photos for deceptive purposes. Like any technology, image editing is an extension of human intent, and it can be used for both the best and the worst of our imaginations.

In 1710, Jonathan Swift wrote, “Falsehood flies, and the truth comes limping after it.” Even today, as a society, we struggle to understand how perception and belief are shaped by the interplay of authenticity, truth, falsehood, and media. Add newer social media technologies to the mix, and those falsehoods fly faster than ever.

That’s why, in addition to building new capabilities and features for creating digital media, Adobe is exploring the boundaries of what’s possible with new technologies, such as artificial intelligence, to increase trust and authenticity in digital media.

AI: a new solution for an old problem

Vlad Morariu, senior research scientist at Adobe, has been working on technologies related to computer vision for many years. In 2016, he started applying his talents to the challenge of detecting image manipulation as part of the DARPA Media Forensics program.

Vlad explains that a variety of tools already exist to help document and trace the digital manipulation of photos. “File formats contain metadata that can be used to store information about how the image was captured and manipulated. Forensic tools can be used to detect manipulation by examining the noise distribution, strong edges, lighting and other pixel values of a photo. Watermarks can be used to establish original creation of an image.”
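As a concrete illustration of the metadata point, here is a minimal sketch, assuming Python with the Pillow library (the article names no tools or code): many editors record themselves in a photo’s EXIF “Software” tag, though that tag is trivially stripped or forged, which is exactly why metadata alone is not enough.

```python
# A minimal metadata check (an illustration, not a tool from the article):
# many editing applications write their name into the EXIF "Software" tag.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_software_tag(path: str) -> str | None:
    """Return the EXIF Software tag if present, else None."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "Software":
            return str(value)
    return None

# Example (hypothetical file name):
# print(exif_software_tag("photo.jpg"))  # e.g. "Adobe Photoshop CC 2018"
```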

Of course, none of these tools provides a complete picture of a photo’s authenticity, nor is any of them practical for every situation. Some are easily defeated; others require deep expertise, or lengthy execution and analysis, to use properly.

Vlad suspected that technologies such as artificial intelligence and machine learning could detect more easily, reliably, and quickly whether any part of a digital image had been manipulated, and if so, which aspects were modified.

Building on research he started fourteen years ago and continued as a Ph.D. student in computer science at the University of Maryland, Vlad describes some of these new techniques in a recent paper — Learning Rich Features for Image Manipulation Detection.

“We focused on three common tampering techniques: splicing, where parts of two different images are combined; copy-move, where objects in a photograph are moved or cloned from one place to another; and removal, where an object is removed from a photograph and filled in,” he notes.

Every time an image is manipulated, it leaves behind clues that can be studied to understand how it was altered. “Each of these techniques tends to leave certain artifacts, such as strong contrast edges, deliberately smoothed areas, or different noise patterns,” he says. Although these artifacts are usually invisible to the human eye, they are much more easily detected through close analysis at the pixel level, or by applying filters that help highlight the changes.
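To make the filtering idea concrete, here is a small sketch, assuming a generic high-pass kernel in Python with NumPy and SciPy; it illustrates the general principle rather than the specific filters used in the research. The filter suppresses the scene content and leaves a residual in which hard splicing edges and over-smoothed areas tend to stand out.

```python
# A simple high-pass filter: it removes most image content and keeps the
# fine-grained residual, where tampering artifacts are easier to spot.
import numpy as np
from scipy.ndimage import convolve
from PIL import Image

HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=np.float32)

def residual_map(path: str) -> np.ndarray:
    """Grayscale high-pass residual; tampered regions often differ here."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    return np.abs(convolve(gray, HIGH_PASS))
```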

Now, what used to take a forensic expert hours can be done in seconds. The project shows that AI can successfully identify which images have been manipulated, determine the type of manipulation used, and highlight the specific area of the photograph that was altered.

“Using tens of thousands of examples of known, manipulated images, we successfully trained a deep learning neural network to recognize image manipulation, fusing two distinct techniques together in one network to benefit from their complementary detection capabilities,” Vlad explains.
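The paper builds this fusion on top of an object-detection framework (Faster R-CNN) so that it can localize tampered regions. What follows is a heavily simplified, illustrative PyTorch sketch of the two-stream shape only, classifying a whole image as authentic or manipulated; every layer size here is an assumption made for illustration, not the paper’s architecture.

```python
# A toy two-stream network: one stream sees raw pixels, the other sees
# noise residuals, and their features are fused for a joint decision.
import torch
import torch.nn as nn

class TwoStreamDetector(nn.Module):
    def __init__(self):
        super().__init__()
        def stream():  # one small conv feature extractor per stream
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_stream = stream()    # raw RGB pixel values
        self.noise_stream = stream()  # high-pass residuals, one per channel
        self.classifier = nn.Linear(64, 2)  # authentic vs. manipulated

    def forward(self, rgb, noise):
        fused = torch.cat([self.rgb_stream(rgb),
                           self.noise_stream(noise)], dim=1)
        return self.classifier(fused)
```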

The first technique uses an RGB stream (changes to the red, green, and blue color values of pixels) to detect tampering. The second uses a noise stream filter. Image noise is random variation of color and brightness, produced by the sensor of a digital camera or as a byproduct of software manipulation; it looks a little like static. Cameras, and even individual photographs, have distinctive noise patterns, so it is possible to detect noise inconsistencies between authentic and tampered regions, especially if imagery has been combined from two or more photos.
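Continuing the earlier high-pass sketch, one simple, hypothetical way to surface such noise inconsistencies is to estimate noise strength block by block; a region pasted in from another photo will often show a noticeably different level. This illustrates the concept only; the paper’s noise stream actually uses steganalysis-derived (SRM) filters.

```python
# Estimate local noise strength per block of a high-pass residual image
# (e.g. the output of residual_map above); spliced regions often stand out.
import numpy as np

def block_noise_levels(residual: np.ndarray, block: int = 32) -> np.ndarray:
    """Standard deviation of the residual per block."""
    h, w = residual.shape
    h, w = h - h % block, w - w % block          # crop to a whole grid
    blocks = residual[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3))               # one noise estimate per block

# Blocks whose noise level deviates strongly from the image's median are
# candidates for having come from a different source photo.
```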

An example of authentic images, manipulated images, the RGB and noise streams used to detect manipulation, and the results of AI analysis. Source: the NC2016 dataset

While these techniques are still being perfected, and do not solve the problem of establishing the “absolute truth” of a photo, they offer more options for managing the impact of digital manipulation, and they may answer questions of authenticity more effectively.

Vlad notes that future work might explore ways to extend the algorithm to include other artifacts of manipulation, such as differences in illumination throughout a photograph or compression introduced by repeated saving of digital files.
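One classic, widely known check for the repeated-saving artifact is error level analysis (ELA); a brief sketch follows, assuming Python with Pillow and NumPy. It illustrates the compression idea only and is not the algorithm from the paper.

```python
# Error level analysis: resave the image once at a known JPEG quality and
# look at where it changes most; regions that have been saved a different
# number of times than the rest of the image tend to stand out.
import io
import numpy as np
from PIL import Image

def error_level(path: str, quality: int = 90) -> np.ndarray:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # recompress once
    buf.seek(0)
    resaved = Image.open(buf)
    diff = np.abs(np.asarray(original, np.int16) - np.asarray(resaved, np.int16))
    return diff.max(axis=2)  # per-pixel error level map
```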

The human factor

Technology alone is not enough to solve an age-old challenge that increasingly confronts us in today’s news environment: What media, if any, can we treat as authentic versions of the truth?

Jon Brandt, senior principal scientist and director for Adobe Research, says that answering that question often comes down to trust and reputation rather than technology. “The Associated Press and other news organizations publish guidelines for the appropriate digital editing of photographs for news media,” he explains.

In other words, when you see a photo on a news site or in a newspaper, at some level you must trust the chain of custody for that photo and rely on the ethics of the publisher to refrain from improper manipulation of the image.

The same will be true of newer techniques that are democratizing the ability to manipulate voice and video, he adds. “I think one of the important roles Adobe can play is to develop technology that helps them monitor and verify authenticity as part of their process.

“It’s important to develop technology responsibly, but ultimately these technologies are created in service to society. Consequently, we all share the responsibility to address potential negative impacts of new technologies through changes to our social institutions and conventions.”

Read more about artificial intelligence in our Human & Machine collection.

source: https://theblog.adobe.com/spotting-image-manipulation-ai/