Monday February 12, 2024
How to fight AI-powered Misinformation and Disinformation
As PR practitioners, the media is our realm, and the battle against misinformation and disinformation is the new front line for our industry. In this blog, we delve into AI-fuelled misinformation and disinformation and what we can do to combat them.
Misinformation vs. Disinformation
Misinformation and disinformation are two words often used interchangeably — but do they mean the same thing? Let us first look at their definitions according to the Merriam-Webster dictionary.
Misinformation: Incorrect or misleading information.
Disinformation: False information deliberately and often covertly spread (as by the planting of rumours) in order to influence public opinion or obscure the truth.
So while both terms relate to the spread of false information, disinformation involves a deliberate intent to misstate the facts. That does not mean, however, that one is more or less harmful than the other.
During COVID-19, misinformation surrounding the disease was rampant. Top-down misinformation from politicians, celebrities, and other prominent public figures was echoed throughout social media. Bottom-up misinformation also garnered an extensive reach, spreading through private groups and messaging applications.
A Reuters Institute report found no examples of deep fakes — which AI now makes easy to produce — but did find ‘cheap fakes’, made with simpler tools such as photoshopping, lookalikes, or speeding up and slowing down video.
Despite the lack of deep fakes, the effects were detrimental. A WHO (World Health Organisation) review found that COVID-19-related misinformation negatively affects people’s health behaviours and evokes mental, social, political and/or economic distress.
AI-Powered Falsification
The issue is exacerbated in the new age of AI, as those looking to spread false information can conveniently use generative AI to create fake content. In March 2023, X (formerly Twitter) user @EliotHiggins shared deep fake images of former US President Donald Trump being arrested, created simply by submitting the text prompt, “Donald Trump falling over while getting arrested. Fibonacci Spiral. News footage.”
Many seized the opportunity to circulate these images and falsely claim they were real. While this case did not cause significant harm, it raised concerns about how easily AI can be used to spread damaging disinformation.
Those concerns were borne out shortly after, in May 2023, when an AI-generated image of an explosion near the Pentagon in Washington DC went viral, briefly rattling the US stock market. Experts and officials debunked the image within minutes, but they still could not outpace the rapid spread of fake news, which had real-life consequences. Since then, websites carrying misleading AI-generated news have surged by over 1,000%, from 49 to over 600 and counting.
With at least 50 national elections looming this year — including seven of the world’s ten most populous countries, and countries home to nearly half of the world’s population — many experts have voiced concerns over the impact of AI-powered false news on the 2024 elections. It poses the big question: how can we combat misinformation and disinformation in an age where AI blurs the line between fake and real?
How to combat AI-powered misinformation
People across industries are scrambling to find the formula for fighting AI misinformation and disinformation. Some have proposed fighting fire with fire, as in the case of Australia’s University of Queensland, which has partnered with platforms to develop automatic fake news detection systems that also explain why a piece of content is deemed fake.
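The blog does not describe how the University of Queensland’s system works, but the general idea of an automatic detector that can explain its verdict can be illustrated with a toy sketch. The minimal example below trains a naive Bayes text classifier on a handful of invented, hand-labelled headlines (all data, labels, and function names here are hypothetical, not the real system) and returns both a label and the per-word evidence behind it.

```python
from collections import Counter
import math

# Toy labelled examples (invented headlines for illustration only).
TRAIN = [
    ("scientists publish peer reviewed study on vaccine safety", "real"),
    ("health ministry releases official infection figures", "real"),
    ("local hospital opens new research wing", "real"),
    ("miracle cure suppressed by secret global elite", "fake"),
    ("shocking leaked photo proves moon landing hoax", "fake"),
    ("anonymous insider reveals hidden celebrity scandal cure", "fake"),
]

def train(examples):
    """Count word frequencies per label for a naive Bayes classifier."""
    word_counts = {"real": Counter(), "fake": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return (label, per-word evidence) so the verdict is explainable."""
    vocab = set(word_counts["real"]) | set(word_counts["fake"])
    scores, evidence = {}, {}
    for label in ("real", "fake"):
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            p = (word_counts[label][word] + 1) / (total + len(vocab) + 1)
            score += math.log(p)
            evidence.setdefault(word, {})[label] = round(p, 4)
        scores[label] = score
    return max(scores, key=scores.get), evidence

word_counts, label_counts = train(TRAIN)
label, why = classify("secret elite suppressed miracle cure", word_counts, label_counts)
print(label)  # "fake": every query word appears only in the fake examples
```

Real systems use far larger datasets and richer models, but the `evidence` dictionary shows the principle the source mentions: the classifier can surface *why* content was flagged, not just that it was.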
Those within education have suggested introducing AI and digital literacy in schools, ensuring that people are equipped with the skills to critically evaluate the content they see online. Research conducted in India and the US has shown that digital literacy interventions help people discern mainstream news from fake news.
Governments have also stepped in: last year the European Union pressed internet giants like Google and Meta to intensify their efforts against fake information by labelling text, images, and other AI-generated content.
But what can we, as PR practitioners, do? First, we can monitor our client coverage closely so that we quickly identify any false information that arises. The sooner we spot it, the less chance it has to snowball into a fake news story.
Organisations can begin this process by including monitoring and analytics tools that track the primary conversation drivers, identify the platforms and people shaping the false story or narrative, and gauge audience response.
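As a rough illustration of what such monitoring looks like in practice, the sketch below works through hypothetical mention records of the kind a monitoring tool might export (all source names, fields, and thresholds are invented assumptions, not any particular product’s format): it ranks who is driving the negative narrative and raises an alert when negative mentions spike within a recent window.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical mention records, as a monitoring tool might export them.
MENTIONS = [
    {"source": "@rumour_account", "platform": "X", "ts": "2024-02-12T09:00", "negative": True},
    {"source": "@rumour_account", "platform": "X", "ts": "2024-02-12T09:10", "negative": True},
    {"source": "fringe-blog.example", "platform": "web", "ts": "2024-02-12T09:20", "negative": True},
    {"source": "@fan_account", "platform": "X", "ts": "2024-02-11T15:00", "negative": False},
    {"source": "national-paper.example", "platform": "web", "ts": "2024-02-10T08:00", "negative": False},
]

def conversation_drivers(mentions, top_n=3):
    """Rank the accounts or outlets shaping the negative narrative."""
    drivers = Counter(m["source"] for m in mentions if m["negative"])
    return drivers.most_common(top_n)

def spike_alert(mentions, now, window_hours=24, threshold=2):
    """Flag when negative mentions in the recent window exceed a threshold."""
    cutoff = now - timedelta(hours=window_hours)
    recent = [m for m in mentions
              if m["negative"] and datetime.fromisoformat(m["ts"]) >= cutoff]
    return len(recent) >= threshold, len(recent)

now = datetime.fromisoformat("2024-02-12T10:00")
print(conversation_drivers(MENTIONS))  # "@rumour_account" leads the negative chatter
print(spike_alert(MENTIONS, now))      # (True, 3): three negative mentions in 24h
```

Commercial monitoring platforms add sentiment analysis and far broader data collection, but the two questions are the same ones the paragraph above names: who is driving the story, and is it accelerating.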
But why wait for a scandal to unfold when PR is also about preventing a crisis? The new age of AI arguably demands a robust crisis communications strategy embedded throughout an organisation — for example, pre-identifying spokespeople and content distribution routes — to build resilience against misinformation.
Shayoni Lynn, CEO and Founder of Lynn Global, expands on building resilience in a PRCA webinar on misinformation and disinformation, stating, “It’s about ensuring that you’ve done your tests — that you’ve done your scenario planning — and you know how to be proactive in your approach when it does happen.”
“…and ideally, you’re already building resilience and immunity within your audiences by listening to your information landscape and understanding what false narratives are out there and integrating the counter-narratives to build resilience.”
On another note, as PR practitioners we are not just storytellers but also fact-checkers, and we must be wary of contributing to the circulation of misinformation ourselves. For instance, we should be vigilant about the data we incorporate into our content — whether for client purposes or journalist communications — by ensuring it has been validated.
One way to do this is to examine the originality of sources. Was it a research paper published by a credible organisation? Was the information taken from a breaking news story in a trustworthy, reliable national paper? Whatever the case, validating data is part of the ethics of PR and should be instilled in every practitioner, across all disciplines.
In conclusion, the battle against AI-powered misinformation and disinformation demands a multi-faceted approach from the PR industry. From an ethical standpoint, we must safeguard the integrity of the information we consume and put out. To learn more about the rise of fake news, read our previous blog: Public Relations & the Rise of Fake News.
Curzon PR is a London-based PR firm working with clients globally. If you have any questions, please feel free to contact our Business Development Team at [email protected]