Three Years of ChatGPT: How Does AI Threaten Public Communication?
Communication researcher Cornelius Puschmann speaks about the risks automated bots pose for public discourse
Three years after the release of ChatGPT, artificial intelligence has become a core component of social, economic, and academic processes and, at the same time, the subject of intense debate. AI-driven bots increase polarization and erode trust. Transparency requirements and platform responsibility are therefore crucial for democratic resilience, explains Cornelius Puschmann, head of the ZeMKI Lab “Digital Communication and Information Diversity.”
How exactly do social bots work, and why are they so effective in amplifying polarizing content?
Social bots are algorithmically controlled accounts that mimic human behavior and amplify polarizing content in online debates. By artificially inflating the visibility of polarizing narratives, they distort the perception of consensus and conflict. This manipulation of engagement metrics intensifies emotional and identity-based polarization and undermines the diversity of opinion that democratic debate requires.
What consequences does this have for our behavior in discussions on platforms such as X?
Bots, increasingly controlled by AI, not only spread disinformation but also flood deliberative spaces with repetitive, emotionally charged, and misleading content. This overload makes it difficult for users to distinguish between authentic and coordinated communication, leading to superficial, reactive, and less reflective exchanges. This limits the public’s ability to make informed decisions.
Why is it dangerous for users to constantly feel like they are being manipulated?
The spread of automated accounts contributes to growing mistrust in online discussions. When users suspect manipulation, the credibility of political communication declines. This perceived inauthenticity fosters cynicism towards institutions and the media, weakening our collective problem-solving capacity and our willingness to compromise, both essential elements of democratic resilience.
Is it sufficient to label AI-generated content, or are more far-reaching transparency instruments necessary?
To address these challenges, stricter transparency measures are necessary, such as clearly labeling bots and AI-generated content, together with greater platform commitment to monitoring coordinated inauthentic behavior. Without such safeguards, social bots will continue to pose systemic risks to the integrity of democratic debate and the quality of public discourse.
In public communication, automated disinformation threatens the deliberative quality of democratic discourse. Dealing with AI therefore requires not only technological skills, but also a strong sense of social responsibility and a proactive approach to managing its impact.
Further Information
Website of the ZeMKI Lab “Digital Communication and Information Diversity”