With the launch of a tool that analyzes videos and still photos to generate a manipulation score, Microsoft has added to the growing pile of technologies aimed at spotting synthetic media (aka deepfakes). The tool, called Video Authenticator, provides what Microsoft calls "a percentage chance, or confidence score," that a given piece of media has been artificially manipulated.
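Microsoft has not published how Video Authenticator computes that score, but the general shape of such an output can be sketched. The sketch below is purely illustrative: it assumes a hypothetical detector that has already produced a per-frame manipulation score, and shows one simple way to fold those into a single percentage.

```python
# Illustrative only: Video Authenticator's internals are not public.
# This assumes some detector has already scored each frame in [0.0, 1.0],
# where higher means "more likely manipulated".

def overall_confidence(frame_scores):
    """Combine per-frame manipulation scores into one percentage.

    A plain mean is an arbitrary illustrative choice; a real tool could
    weight face regions, keyframes, or blending boundaries differently.
    """
    if not frame_scores:
        raise ValueError("no frames scored")
    return round(100 * sum(frame_scores) / len(frame_scores), 1)

print(overall_confidence([0.91, 0.88, 0.95, 0.90]))  # → 91.0
```

In practice Microsoft also says the tool can surface the score in real time on each frame as a video plays, which is why a per-frame representation is assumed here.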
A deepfake is a piece of online content that looks genuine but is in fact a high-tech fabrication passing itself off as authentic, sometimes with the sinister intention of misinforming people. And although many deepfakes are made with a very different intention, to be amusing or entertaining, such digital media can still take on a life of its own as it spreads, meaning it can end up tricking unsuspecting viewers.
Although AI software makes it possible to create convincing deepfakes, detecting visual disinformation with technology remains a hard problem, and a thinking mind is still the best tool for spotting high-tech BS.
Yet technologists keep working on deepfake spotters, including this new tool from Microsoft. Its blog post warns, though, that the tech may offer only fleeting usefulness in the AI-fueled disinformation arms race: because deepfakes are generated by AI that can continue to learn, it is likely that they will eventually beat conventional detection technology. In the short term, however, such as in the run-up to the U.S. election, advanced detection tools can be a valuable way to help discerning users identify deepfakes.
A competition Facebook kicked off this summer to build a deepfake detector produced results that were only somewhat better than guessing when models were run against a dataset the researchers had not had prior access to. Meanwhile, Microsoft says its Video Authenticator tool was built using the public FaceForensics++ dataset and tested on the Deepfake Detection Challenge Dataset, which it states are "the two leading models for training and evaluating deepfake detection technologies."
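The evaluation idea behind "train on one dataset, test on another" can be made concrete with a small sketch: score held-out examples the model has never seen and compare accuracy against chance. Everything here is invented for illustration (the scores, labels, and threshold are not from either dataset).

```python
# Minimal sketch of held-out evaluation. Detector scores and ground-truth
# labels are made up; 1 means the clip is manipulated, 0 means genuine.

def accuracy(scores, labels, threshold=0.5):
    """Fraction of examples where thresholded score matches the label.
    A detector no better than guessing would hover around 0.5 here."""
    preds = [s >= threshold for s in scores]
    return sum(p == bool(l) for p, l in zip(preds, labels)) / len(labels)

held_out_scores = [0.92, 0.15, 0.66, 0.40, 0.81, 0.35]  # detector output
held_out_labels = [1, 0, 1, 0, 1, 1]                    # ground truth

print(accuracy(held_out_scores, held_out_labels))
```

Testing on a dataset the model was not trained on matters because detectors often overfit to the artifacts of one generation method; the Facebook competition result above is exactly this gap showing up.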
Microsoft is working with the San Francisco-based AI Foundation to make the tool available to organizations involved in this year's democratic process, including news outlets and political campaigns.
The tool was developed by Microsoft's R&D division, Microsoft Research, in collaboration with its Responsible AI team and its internal AI advisory body, the AI, Ethics and Effects in Engineering and Research (AETHER) Committee, as part of a broader program Microsoft is running to defend democracy against threats posed by disinformation.
On the latter front, Microsoft has also announced a framework that lets content creators add digital hashes and certificates to their media; these travel with the content as metadata wherever it moves online, offering a reference point for authentication.
The system's second component is a reader tool, deployable as a browser extension, that checks the certificates and matches the hashes, telling the viewer with what Microsoft calls "a high degree of accuracy" whether a specific piece of content is authentic and has not been modified.
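Microsoft has not published this framework's internals, but the hash-and-certificate idea itself is standard. The sketch below shows both halves under stated assumptions: a creator-side `sign` attaching a content hash plus a signed attestation, and a reader-side `verify` mirroring what the browser extension would do. The HMAC key is a stand-in; a real system would use X.509 certificates and public-key signatures.

```python
import hashlib
import hmac

# Illustrative sketch only, not Microsoft's actual design.
KEY = b"creator-secret"  # hypothetical key; real systems use certificates

def sign(content: bytes) -> dict:
    """Creator side: build the metadata manifest for a piece of media."""
    digest = hashlib.sha256(content).hexdigest()
    return {
        "sha256": digest,
        "signature": hmac.new(KEY, digest.encode(), "sha256").hexdigest(),
    }

def verify(content: bytes, manifest: dict) -> bool:
    """Reader side: recompute the hash and validate the attestation.
    Any edit to the content bytes changes the digest and fails the match."""
    digest = hashlib.sha256(content).hexdigest()
    sig = hmac.new(KEY, digest.encode(), "sha256").hexdigest()
    return (digest == manifest["sha256"]
            and hmac.compare_digest(sig, manifest["signature"]))

original = b"original video bytes"
m = sign(original)
print(verify(original, m))         # True: untouched content
print(verify(b"edited bytes", m))  # False: hash no longer matches
```

The design point is that the manifest travels with the content as metadata, so any downstream viewer can re-run the cheap hash check without contacting the creator.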
The credentials would also tell audiences who created the media. Microsoft says Project Origin will trial the digital watermarking tech, with the aim of developing it into a standard that can be widely adopted.
While the technology work on identifying deepfakes continues, Microsoft's blog post also stresses the importance of media literacy, flagging a collaboration with the University of Washington, Sensity, and USA Today that aims to boost critical thinking ahead of the US election.
This collaboration has launched a Spot the Deepfake Quiz for U.S. voters to "learn about synthetic media, develop critical media literacy skills, and gain awareness of the impact synthetic media has on democracy," as Microsoft puts it.