By SALAKO EMMANUEL
International Journalists' Network (IJNet)
The perpetrators behind deepfakes seek to distort public knowledge during elections, mislead people during crises, and more.
Take, for instance, AI-generated images of an explosion at the U.S. Pentagon that went viral in May. Earlier, in March, false AI-generated images of former U.S. President Donald Trump being arrested and of Pope Francis wearing a puffer coat circulated widely on social media. During this year’s general election in Nigeria, manipulated audio claiming that presidential candidate Atiku Abubakar, his running mate, Dr. Ifeanyi Okowa, and Sokoto State Governor Aminu Tambuwal were planning to rig the vote spread online.
Deepfakes have undeniably raised important questions and concerns for the media, said Silas Jonathan, a researcher at Dubawa and a fellow at the African Academy for Open Source Investigation. The more that disinformers succeed in deploying deepfakes, the more credibility the media loses as audiences become less able to determine what is real and what isn’t, said Jeffrey Nyabor, a Ghanaian journalist and fact-checker at Dubawa Ghana.
AI can be both the problem and the solution. Here are a few tools journalists can use to combat deepfakes:
Deep neural networks: TensorFlow and PyTorch
TensorFlow and PyTorch are two free frameworks that can be used to build deep neural networks that spot deepfakes. Models built with them can analyze images, videos and audio cues to detect signs of manipulation. Rather than uploading files to a finished product, users train a detection model on labeled examples of authentic and manipulated media so that it learns to differentiate between the two.
“These networks can learn from vast amounts of data to identify inconsistencies in facial expressions, movements or speech patterns that may indicate the presence of a deepfake,” said Jonathan. “Machine learning algorithms can also be trained to analyze and detect patterns in videos or images to determine whether they have been manipulated or generated by deepfake techniques.”
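The train-on-labeled-examples idea Jonathan describes can be sketched in miniature. The example below uses a tiny two-layer network in plain NumPy rather than TensorFlow or PyTorch, and random numbers stand in for the image or audio features a real detector would extract; the feature values, network size and learning rate are all illustrative assumptions, not a production recipe.

```python
import numpy as np

# Toy stand-in for media features: in practice these would be extracted
# from frames or audio of authentic and suspected-fake clips.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(100, 8))   # label 0: authentic
fake = rng.normal(1.5, 1.0, size=(100, 8))   # label 1: manipulated
X = np.vstack([real, fake])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer network (8 -> 16 -> 1), trained by gradient descent
# on binary cross-entropy loss.
W1 = rng.normal(0, 0.5, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted P(manipulated)
    grad_out = (p - y)[:, None] / len(y)  # dLoss/dlogit for cross-entropy
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

def predict(x):
    """Return P(manipulated) for a batch of feature vectors."""
    return sigmoid(np.tanh(x @ W1 + b1) @ W2 + b2).ravel()

accuracy = np.mean((predict(X) > 0.5) == y)
```

Real detectors use far larger convolutional or transformer networks and vast labeled datasets, but the principle is the same: the network adjusts its weights until it separates the patterns found in authentic media from those left behind by manipulation.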
Deepware
Deepware is an open-source tool primarily dedicated to detecting AI-generated videos. The website has a scanner to which videos can be uploaded to find out if they are synthetically manipulated.
Like other deepfake detectors, Deepware's models look for signs of manipulation on the human face. The tool's main limitation is its inability to spot voice-swapping techniques, which are arguably a greater danger than face-swapping.
“The result is not always 100% accurate. Sometimes it is hit or miss, depending on how good the fake is, what language it is in and a few other factors,” said Mayowa Tijani, editor-at-large at TheCable.
Sensity
Sensity specializes in deepfake detection and AI-generated image recognition. Its advanced machine learning models analyze visual and contextual cues to identify AI-generated images.
The tool isn’t free; charges are customized based on monthly usage and individual areas of interest.
Hive
With Hive, independent fact-checkers and internet users can quickly scan digital texts and images to verify their authenticity. To verify an image, users upload the file to the AI-generated content detection page, where it is quickly processed.
There is also the Hive AI Detector Chrome extension, which can be added to a desktop browser. This enables users to detect AI-generated text and images from a browser for free.
Illuminarty
Illuminarty also offers the ability to detect AI-generated images and text. The free version includes basic services, while the tool's premium plans offer more functions.
With this tool, users can identify where the manipulation lies in a fake image, and from which AI models it has been generated. It can also assess how likely it is that an image has been created by AI.
All these tools can help users identify and combat deepfakes, but they aren’t accurate 100% of the time. There is always the risk of false positives or negatives, where legitimate content may be mistakenly flagged as a deepfake or vice versa, explained Jonathan.
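Jonathan's caveat can be made concrete with the standard error measures fact-checkers can apply to any detector: false positives (authentic content wrongly flagged) and false negatives (deepfakes that slip through), summarized as precision and recall. The verdicts below are invented purely for illustration.

```python
# Hypothetical ground truth and detector verdicts for ten clips
# (1 = deepfake, 0 = authentic). All values invented for illustration.
truth    = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
verdicts = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]

# Authentic clips the tool wrongly flagged as deepfakes.
false_positives = sum(1 for t, v in zip(truth, verdicts) if t == 0 and v == 1)
# Deepfakes the tool failed to catch.
false_negatives = sum(1 for t, v in zip(truth, verdicts) if t == 1 and v == 0)

# Precision: of the clips flagged as fake, how many really were fake.
flagged = sum(verdicts)
precision = (flagged - false_positives) / flagged

# Recall: of the genuinely fake clips, how many the tool caught.
actual_fakes = sum(truth)
recall = (actual_fakes - false_negatives) / actual_fakes

print(false_positives, false_negatives)  # one wrong flag, one missed fake
print(precision, recall)                 # 0.75 and 0.75 on this toy sample
```

A tool with high precision but low recall misses many deepfakes; one with high recall but low precision wrongly discredits legitimate content, which is why a tool's verdict should be one input to verification, not the final word.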
Journalists and independent fact-checkers can also detect deepfakes by scrutinizing content critically. Not all AI-generated images and videos are flawless; many can still be identified through careful observation, such as checking for distorted hands, mismatched lighting or unnatural facial movements.