Facebook scientists said this week they have developed artificial intelligence software that can not only identify "deepfake" images but also figure out where they came from.
Deepfakes are photos, videos or audio clips altered using artificial intelligence to appear authentic, which experts have warned can be used to mislead or spread outright falsehoods.
Facebook research scientists Tal Hassner and Xi Yin said their team worked with researchers from Michigan State University to create software that reverse engineers deepfake images to figure out how they were made and where they originated.
"Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with," the scientists said in a blog post. Reverse engineering is a different way of approaching the problem of deepfakes, but it’s not a new concept in machine learning, they explain further.
"This work will give researchers and practitioners tools to better investigate incidents of coordinated disinformation using deepfakes, as well as open up new directions for future research," they added.
Facebook's new software runs deepfakes through a network to search for imperfections left during the manufacturing process, which the scientists say alter an image's digital "fingerprint."
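The blog post does not publish the model itself, but the general idea of an image "fingerprint" can be illustrated with a minimal sketch: extract the high-frequency residual left behind by the generation pipeline and average it over many images from the same source. This is a common starting point in GAN-fingerprint research, not Facebook's actual method; the `images` array and filter settings below are hypothetical.

```python
# Minimal sketch (not Facebook's actual method): estimate an image's
# high-frequency "fingerprint" as the residual between the image and a
# denoised copy. Generative models tend to leave subtle, repeated
# patterns in this residual.
import numpy as np
from scipy.ndimage import median_filter

def fingerprint_residual(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Return the high-frequency residual of a grayscale image array."""
    image = image.astype(np.float32)
    denoised = median_filter(image, size=size)  # smooth away scene content
    return image - denoised                     # what remains hints at the generator

# Hypothetical usage: average residuals over a batch of images suspected
# to come from the same generator (`images` is an N x H x W array).
# fingerprint = np.mean([fingerprint_residual(img) for img in images], axis=0)
```

Averaging over many images suppresses content that varies from photo to photo, leaving a pattern that is more characteristic of the model that produced them.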
According to the blog post, prior to the deep learning era, researchers typically generated photos with a small, handcrafted, and well-known set of tools, so the fingerprints of these generative models could be estimated from handcrafted features. "Deep learning has made the set of tools that can be used to generate images limitless, making it impossible for researchers to identify specific 'signals' or fingerprint properties by handcrafted features," the blog explains.