By NICHOLAS PEOPLES
Global Business Journalism reporter
In May 2023, reports of a fiery explosion at the Pentagon took the internet by storm. These “reports,” which contained no eyewitness accounts, videos or official statements, were animated by a single AI-generated deepfake photo that fake Twitter accounts – verified with the infamous blue checkmark – rapidly circulated across the globe.
“Disinformation has always been a problem in journalism and we have always had to find ways to combat it,” said Patrick Butler, Senior Vice President of Content and Community at the International Center for Journalists. “But it is so much worse now because of technology.”
In today’s information climate, a single viral post may be all it takes to create a groundswell of international alarm that official agencies must then vigorously defuse. And while the ease of spreading confusion has long made disinformation an attractive tool for political subterfuge and bad actors such as the Wagner Group, emerging AI tools like ChatGPT and MidJourney have extended the ability to generate convincing “deepfakes” right to our fingertips.
In the past year alone, the world has been duped by AI deepfakes of the Israel-Gaza conflict, the Pope in a puffer jacket, and Donald Trump hugging Anthony Fauci. Experts say that some deepfakes can be identified by looking for unusual features or inconsistencies, such as hands with too many fingers.
“But there are also so many other, less obvious ways that disinformation is spreading,” Butler told students in the "Hot Topics" course of the Global Business Journalism program at Tsinghua University on Nov. 30.
Last month, for instance, The British Medical Journal – one of the world’s most reputable scientific forums – highlighted that disinformation can even trickle into scientific literature through poor citation practices. One paper, published in The New England Journal of Medicine in 2017, argued that a mistaken interpretation of a 1981 study of opiate safety was repeatedly cited by medical researchers for years. This, the authors say, might have “contributed to the North American opioid crisis” by supporting a narrative that opiates were completely safe – a conclusion the original 1981 study never reached.
Even mainstream media outlets make their fair share of mistakes. Earlier this year, Bret Stephens wrote an opinion piece for The New York Times entitled, “The Mask Mandates Did Nothing. Will Any Lessons Be Learned?” However, his conclusions were based on a misreading of a Cochrane review. The evidence, it turns out, was actually on the side of masking.
Even though errors like Stephens’ are often quickly debunked, they can cause real damage when they become widely amplified. They also raise a deeper epistemological question: how do we know what we know?
The International Center for Journalists (ICFJ) is one organization that is grappling with these difficult questions. Operating with funding from the Scripps Howard Foundation, its $3.8 million campaign to study and combat disinformation is organized into three areas: 1) investigations to expose sources of disinformation; 2) capacity building to teach journalists how to build trust in factual news; and 3) research.
The research arm may be particularly significant in the long run. As new technologies create new forms of disinformation, journalists and citizens alike need new strategies to defuse the potential chaos and confusion.
“A lot of outlets do fact-checking,” Butler said during his site visit to Tsinghua University. “But one of the problems with that strategy is that it tends to reach people who are already open to reaching the truth.”
A good start, he thinks, might be a global “heat map” of the penetration of disinformation by country, which could then be further stratified by region and demographics.
“That is something I have suggested – to do some kind of global mapping of what is going on in the disinformation sphere,” said Butler. “But to my knowledge I don’t know of anybody doing that.”
A second strategy could be to identify what factors make an individual “unconvincible,” in contrast to people who are moderately or even readily willing to consider alternative, authoritative information.
“We just accept there are a certain percentage of [people] that cannot be reached…But then there are some [who can]. So how do we find the techniques to reach those people?” Butler asked.
Such data would allow anti-disinformation efforts to use targeted strategies for different groups and focus their efforts on where they are likely to be successful.
In the long term, however, the most important efforts will likely be strategies to break the cycle of disinformation. It is easy to create disinformation, but time and labor intensive to debunk it. This puts anti-disinformation efforts on the defensive, where they are constantly reacting to viral propaganda and misinformation, rather than taking proactive steps to intervene.
“I don’t think anybody has found the answer yet,” Butler said.
Until they do, it is probably wise to take each viral story with a grain of salt – and a healthy awareness of the ratio of fingers to hands in every photo.
Patrick Butler on breaking the cycle of disinformation
“How do you stop it? … Some of these organizations like The Atlantic Council have disinformation specialists. They are looking at how to break that cycle of disinformation. I don’t think anybody’s found the answer yet. The second strategy is to try and find ways to make truthful information spread.”