By Cliff Potts, CSO and Editor-in-Chief of WPS News
March 23, 2026
A Convenient Scapegoat in a Chaotic Moment
As artificial intelligence becomes more visible in everyday life, it has increasingly been blamed for the spread of misinformation, political instability, and declining public trust in institutions. This framing is understandable—but incomplete.
AI did not invent misinformation. It did not create political polarization. It did not weaken public confidence in government. What it did do was lower the cost and increase the speed of deception, exposing problems that already existed.
Treating AI itself as the primary threat risks missing the real source of the problem: organized human networks using new tools to amplify old behaviors.
Misinformation Predates Artificial Intelligence
History makes this clear. Modern propaganda techniques were fully developed long before computers existed. In the 1930s and 1940s, the Nazi propaganda apparatus under Joseph Goebbels used radio, film, newspapers, and mass rallies to normalize lies, dehumanize enemies, and mobilize a population toward catastrophe. The concept of the “big lie”—repeating falsehoods until they become accepted truth—was a deliberate human strategy, not a technological accident.
Deception scaled without AI because people designed systems to make it scale.
Digital Manipulation Before Generative AI
The same pattern appeared in the digital age well before modern AI tools were widely available. During the 2016 U.S. presidential election, Russian influence operations used Facebook, Twitter, and other platforms to spread divisive content, create false personas, and amplify social tensions. These efforts relied on human-written posts, coordinated networks, and algorithmic amplification—not generative AI.
The tactics were effective because of intent, organization, and incentives, not because of advanced automation.
What AI Actually Changed
Artificial intelligence altered three variables:
- Speed – Content can be generated and distributed instantly.
- Scale – One actor can now simulate the output of many.
- Cost – Producing persuasive text, images, or video is cheaper than ever.
These changes matter, but they do not remove human intent. Every misleading AI-generated message still originates from a choice made by a person or an organization.
AI does not decide to deceive. People do.
Modern Misuse Is Still Human Misuse
Recent controversies involving AI underscore this point rather than contradict it. On social media platforms, users have deliberately prompted AI systems to generate sexually explicit images of women without their consent, then shared those images widely. The resulting harm did not occur because AI “wanted” to create abuse. It occurred because individuals requested it, platforms failed to prevent it, and accountability mechanisms lagged behind misuse.
The tool responded to a prompt. Responsibility lies with whoever typed it—and with the systems that allowed its spread.
Why Blaming AI Is Politically Convenient
Focusing public anger on AI itself serves several interests:
- It shifts accountability away from political actors and media organizations.
- It avoids difficult conversations about ethics, incentives, and power.
- It allows institutions to frame the problem as technical rather than civic.
This deflection is dangerous. If responsibility is attributed to software, then no one is held accountable—and the underlying behaviors continue unchanged.
The Real Risk: Trust Erosion, Not Technology
The greatest threat facing the United States is not artificial intelligence. It is the erosion of trust combined with information overload.
When citizens feel unable to distinguish credible reporting from manipulation, they disengage. When everything appears suspect, even verified facts lose their power. This environment benefits extremists, profiteers, and authoritarian movements—not technology itself.
What Actually Helps
Evidence suggests that the most effective responses are not technological bans, but civic ones:
- media literacy and source transparency
- slower, verified reporting over reactive amplification
- institutional accountability for deliberate deception
- public recognition that tools do not absolve intent
Artificial intelligence can be regulated, audited, and governed. Human behavior must be confronted.
A Clarifying Distinction
They lied before AI.
They are lying with it now.
The tool changed. The behavior did not.
Understanding that distinction is essential—not to reduce concern, but to aim it correctly.
For more social commentary, please see Occupy 2.5 at https://Occupy25.com
References (APA)
Pew Research Center. (2024). Public trust in government: 1958–2024.
Gallup. (2024). Americans’ views on misinformation and trust in media.
Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework. Council of Europe.
U.S. Senate Select Committee on Intelligence. (2019). Russian active measures campaigns and interference in the 2016 U.S. election.