House of Ethics

A(I)dding Fuel to Fire: How AI accelerates Social Polarization through Misinformation.

Jindřich Oukropec · June 5, 2023 · 6 min read

AI accelerates the polarization of society through misinformation

 

As part of the ongoing discussion on artificial intelligence and ChatGPT, concerns have been raised about the potential misuse of AI for the creation and dissemination of misinformation. While AI has the potential to detect, refute, and predict misinformation, the question remains whether AI will become a partner in the heroic fight against misinformation, or whether it will slip out of our control, as social media networks have done.

In this article you will find:

  • 4 tips for start-ups using AI-driven disinformation tracking tools
  • 3 scandals proving the risks of AI 

AI: Between high-risk and the thinking of a 9-year-old child

For years, we have interacted with AI through algorithms on social media, and so far, these algorithms have brought information overload, addiction, societal polarization, QAnon, fake news, deepfake bots, sexualization of children, and more, precisely because it is AI that maximizes engagement on these platforms. In this article, we will explore the relationship between AI and misinformation, looking at both the potential benefits and risks.

Machine learning surpassed human performance on popular benchmarks years ago, as Figure 1 shows (Kiela et al., 2021). A significant year was 2017, when models for music, image, and voice, which had previously operated separately, were connected. AI researchers have already demonstrated that artificial intelligence connected to the brain can transcribe our imaginations into words or generate images based on what we envision. Through Wi-Fi and radio-signal sensing, AI can even estimate the number of people in a room.

According to experts, current AI technology matches the thinking of a 9-year-old child: it can estimate what others are thinking and shows signs of strategic reasoning.

Figure 1: Benchmark saturation over time for popular benchmarks, normalized with initial performance at minus one and human performance at zero

Benefits of AI for Combating Misinformation

AI has several potential benefits when it comes to combating misinformation. One of its main advantages is the ability to process large amounts of data quickly and accurately: it can analyze vast quantities of information and detect patterns that humans might miss, which is useful for identifying false information (Ammar, 2020).

For example, AI can be used to analyze social media posts and detect patterns of behavior that are associated with the spread of misinformation. This could include things like the use of certain keywords or phrases, or the frequency with which certain types of content are shared.

By identifying these patterns, AI can help to flag potentially false information before it has a chance to spread widely (Fink, 2021).
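The pattern-flagging idea described above can be illustrated with a minimal sketch in Python. Everything here is an illustrative assumption, not the method of any real detection tool: the phrase list, the post format, and the share-count threshold are all made up for the example.

```python
# Toy keyword- and frequency-based flagging, as described above.
# SUSPECT_PHRASES and the share threshold are illustrative assumptions.

SUSPECT_PHRASES = (
    "miracle cure",
    "they don't want you to know",
    "share before it gets deleted",
)

def flag_posts(posts, share_threshold=100):
    """Return posts that contain a suspect phrase, or that were shared
    unusually often, so a human reviewer can take a closer look."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        has_phrase = any(phrase in text for phrase in SUSPECT_PHRASES)
        went_viral = post.get("shares", 0) >= share_threshold
        if has_phrase or went_viral:
            flagged.append(post)
    return flagged

posts = [
    {"text": "Share before it gets deleted! The truth is out.", "shares": 12},
    {"text": "The weather looks nice today.", "shares": 3},
    {"text": "An ordinary news item.", "shares": 500},
]
print(len(flag_posts(posts)))  # 2: the first post by phrase, the third by shares
```

Real systems replace the hand-written phrase list with learned classifiers, but the overall shape, scoring each post and handing suspicious ones to humans, is the same.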

The use of AI for monitoring disinformation has existed for years, especially among commercial start-ups. Examples include the American companies Yonder and Recorded Future, the British company Factmata, and the French company Storyzy.

These companies offer services such as social intelligence, which uses machine learning to analyze how stories spread across both fringe and mainstream social platforms, and how they influence public opinion.

Additionally, they provide updated blacklists of websites where brands should not advertise due to brand safety concerns.

Another potential benefit of AI for combating misinformation is its ability to identify deepfakes. Deepfakes are videos or images that have been manipulated to create false information, and they can be difficult to detect using traditional methods.

However, AI can be trained to recognize the subtle signs of manipulation that are present in deepfakes, which can help to prevent their spread (Shu, 2021).
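The detection idea can be sketched as a tiny supervised classifier. Everything in this sketch is a toy assumption: the single "artifact score" feature is a hypothetical stand-in for real signals such as blink rate or compression inconsistencies, and the labelled data is synthetic.

```python
# Toy sketch of training a detector: learn a threshold on a single
# artifact statistic from labelled examples, then classify new images.
import random

random.seed(0)
real_scores = [random.gauss(0.2, 0.1) for _ in range(200)]  # authentic images
fake_scores = [random.gauss(0.6, 0.1) for _ in range(200)]  # manipulated images

# "Training": the midpoint between the class means separates this 1-D toy case.
threshold = (sum(real_scores) / len(real_scores)
             + sum(fake_scores) / len(fake_scores)) / 2

def looks_fake(artifact_score: float) -> bool:
    """Classify an image by its artifact score."""
    return artifact_score > threshold

print(looks_fake(0.65), looks_fake(0.15))  # True False
```

Production deepfake detectors learn thousands of such features from large labelled datasets rather than one hand-picked statistic, but the principle is the same: manipulated media leaves measurable traces that a trained model can separate from authentic content.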

Risks of AI for Misinformation

While AI has the potential to be a powerful tool for combating misinformation, it also poses its own risks. One of the main risks is that AI can be trained to generate false information itself. This is known as “adversarial AI,” and it involves training AI algorithms to produce false information designed to deceive humans (Tandfonline, 2021).

Concerns about the misuse of artificial intelligence are justified. Recall Microsoft, which introduced its chatbot “Tay” back in 2016: trolls on Twitter taught it racist and xenophobic expressions, forcing the company to shut the project down. Another such scandal occurred this year, when AI scammers called hospitals pretending to be the loved ones of patients in need of urgent help.

“This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” Gordon Crovitz, co-chief executive of NewsGuard, told The New York Times.

Another risk: Bias.

AI algorithms are only as good as the data they are trained on, and if that data is biased, the algorithm will be biased as well.

This can be a particular problem when it comes to issues like race or gender, where biases can be deeply ingrained in the data (Broussard, 2018).

“50% of AI researchers claim that there is at least a 10% chance of AI getting out of control,” says Tristan Harris, co-founder of the Center for Humane Technology.

Furthermore, there is the possibility that AI could be used to create even more sophisticated deepfakes, making it even harder to distinguish real from fake information. This was already seen in America in February of this year, when videos spread across the internet as part of a state-sponsored disinformation campaign run through pro-Chinese bot accounts on social media.

Aza Raskin, co-founder of the Center for Humane Technology and of the Earth Species Project, sees a negative impact of AI on politics: he predicts that the last human election will be held in 2024, after which those with greater computing power will win. AI will enable “A-Z testing,” the infinite testing of content in real time, which will lead to the manipulation of people during elections.

Social media expert and co-founder of the Center for Humane Technology, Tristan Harris, argues that the arrival of ChatGPT heralds an era of “fake everything,” reality collapse, trust collapse, automated loopholes in the law, automated fake religions, automated lobbying, exponential blackmail, synthetic relationships, and more. Harris sees the automation of pornographic content as a significant threat, as it will only increase addiction to pornography.

Ongoing challenges of coping with social media based on AI-driven algorithms

AI has the potential to be a powerful tool for combating misinformation, but it also poses its own risks. While AI can be used to identify patterns of behavior that are associated with the spread of false information and to detect deepfakes, it can also be used to generate false information itself and can be biased.

So far, society has been unable to cope with social media built on AI-driven algorithms, let alone the upcoming tools and risks of AI. Social media has disrupted the idea of public-service media, which is supposed to inform citizens about what is happening. As a result, citizens have become subject to algorithms, and high-quality information, logically, cannot win the battle for the highest engagement. In the pursuit of dopamine and adrenaline, citizens do not focus on what is essential and important. An exhausted and divided society, amid a pandemic, Russia's war, and global warming, does not appear ready for the incoming AI.


Jindřich Oukropec

Jindřich Oukropec is a PhD candidate at Charles University in Prague, where his research focuses on brands' responses to mis/disinformation. He has worked for 10 years in marketing and digital communication and is a co-founder of the Brand Safety Academy. At Charles University in Prague, Jindřich teaches digital marketing, marketing for non-profit organisations, and a seminar for master's students, “Managing a good reputation of a brand in the era of fake news.”

oukropec@gmail.com

Tags: #ai #algorithms #disinformation #misinformation
