Monday, August 19, 2024
Is AI More Persuasive than Humans?
In this blog, we delve into recent research findings on AI persuasion and explore the risks and ethical implications they may pose in today’s digital society.
AI Persuasion: How Convincing Is It?
The study ‘On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial’, published on arXiv, the preprint repository hosted by Cornell University, pitted human participants against either other humans or an AI in debates on various topics, such as whether the penny should stay in circulation, whether animals should be used for scientific research, and whether colleges should consider race as a factor in admissions to ensure diversity.
Each participant, whether human or AI, was assigned a side on each topic: PRO (agreeing with the proposition) or CON (disagreeing). The study also compared the effectiveness of debates when debaters knew nothing about their opponent versus when they were given their opponent’s personal information.
The debate followed a set structure: participants first articulated their key arguments according to their assigned role, then delivered a rebuttal responding to their opponent’s arguments, and finally gave a conclusion in which they could either respond to their opponent’s rebuttal or reiterate their initial points. After the debate, participants completed an exit survey indicating how much they agreed with the proposition.
The findings were striking. In the human vs. AI debates, the strongest positive effect on opponents’ agreement came from the AI, suggesting that large language models (LLMs) can be considerably more convincing than humans. This was especially true when the AI had access to its human opponent’s personal information, indicating that AI is also significantly more effective than humans at exploiting personal data to tailor its arguments.
What Are the Risks of AI Persuasion?
Based on the findings, the research paper underscored a few concerns regarding AI’s enhanced persuasive capabilities. One of AI’s critical strengths in persuasion lies in its data processing capabilities. Unlike humans, who are limited by their cognitive capacity, AI can sift through enormous datasets quickly, allowing it to identify patterns and extract insights that humans might overlook.
This allows AI to tailor persuasive messages with precision, taking into account individual demographics, preferences, behaviours, and even emotional states. What risks does this pose to today’s societies, especially as AI has the potential to reach millions simultaneously through digital platforms?
Misinformation and Disinformation
Considering how sophisticated AI has become at generating output that resembles human conversation, what happens when malicious actors exploit it to produce content that appears credible yet is intended to misinform and disinform?
For example, AI can create fake news stories or social media posts that convincingly resemble authentic sources. This enables the mass transmission of false narratives and the perpetuation of disinformation, potentially distorting public opinion, undermining decision-making, and eroding societal trust. Disinformation already ranks among the most prominent global issues this year (read more: PR’s Battle for Truth in 2024 Global Elections).
In December 2023, the Washington Post reported that the number of websites hosting AI-created false articles had increased by more than 1,000% in a matter of months, ballooning from 49 websites to over 600. We may have been able to detect such content then, but how will we identify AI-generated false news in the future, as AI becomes even more sophisticated than it already is? After all, participants in the debate study could not reliably tell whether their opponents were AI or human.
The Amplification of Biases
Chapman University’s AI hub identifies five biases that exist in AI: selection bias, when the data used to train an AI system fails to capture the reality it is intended to model; confirmation bias, when the AI system relies excessively on pre-existing opinions or trends in the data; measurement bias, when the data collected differs systematically from the actual variables of interest; stereotyping bias, when the AI system reinforces harmful or damaging stereotypes; and out-group homogeneity bias, when the AI system is less capable of distinguishing between individuals who do not belong to the majority group in the training data.
Moreover, the USC Information Sciences Institute found that up to 38.6% of the ‘facts’ used by AI contain bias. So what happens when these biases manifest as highly persuasive biased language, unequal representation, or discriminatory narratives?
The Creation of Harmful and Offensive Content
In December 2023, the UN Global Communications Chief highlighted the UN’s concerns about how AI could be utilised to create and spread harmful content, such as child sexual abuse material or nonconsensual pornographic images that disproportionately affect women and girls.
Moreover, she highlighted the organisation’s concerns that AI will supercharge offensive content, amplifying anti-Semitism, Islamophobia, and xenophobia. If left unaddressed, such content could undermine global peace, foster hostile environments, and erode individual well-being.
Combating the Risks of AI Persuasion
It is clear that AI is a powerful tool for influencing public perception and opinion, but depending on the intent of the actors behind it, it can be a double-edged sword. So what measures should we consider to combat the risks of AI persuasion?
To mitigate biases, we need to ensure that the data used to train AI algorithms is as diverse and representative as possible of the population it serves. Greater transparency and accountability in the design and deployment of AI algorithms are also essential.
Furthermore, proactive government regulation and oversight are viable options, and the topic currently features prominently on government agendas worldwide. Such regulation could include safeguards for individual privacy and autonomy, or frameworks requiring disclosure when AI is used for persuasive purposes.
Curzon PR is a London-based PR firm working with clients globally. If you have any questions, please feel free to contact our Business Development Team at [email protected]