By Cliff Potts, CSO and Editor-in-Chief of WPS News
Baybay City, Leyte, Philippines — February 11, 2026

In recent political arguments, a new phrase keeps appearing—sometimes explicitly, sometimes implied:

“I trust my AI, therefore you’re wrong.”

This isn’t a debate tactic. It’s a surrender of judgment.

The issue isn’t artificial intelligence itself. The issue is who controls it, how it is trained, and how people are using it as a substitute for authority rather than as a tool. Nowhere is this clearer than in arguments where one person relies almost exclusively on Grok, the AI system developed by Elon Musk’s xAI, and treats its output as definitive.

That’s not fact-checking. That’s outsourcing reality.

Tools Don’t Replace Judgment

AI systems can summarize information quickly. They can surface documents. They can even highlight inconsistencies. What they cannot do is replace human judgment—especially when they are owned, tuned, and governed by a single corporate actor with a public political posture.

When someone says they trust one AI over multiple human-edited sources—Reuters, AP, PBS, Al Jazeera, the New York Times, academic research—they are not making a technical choice. They are making an authority choice.

They are saying: this system decides what is true for me.

That’s not skepticism. It’s dependency.

When AI Is Better at Art Than Truth

It also needs to be said plainly: Grok is often far better at generating artwork than at adjudicating contested facts.

That’s not a compliment or an insult. It’s a limitation. Generative systems excel at visuals, vibes, and aesthetic excess. They can produce striking images and symbolic mashups that feel insightful. They can also get carried away, amplifying drama, exaggeration, or mood without regard for accuracy. We have already seen this pattern play out repeatedly in the short history of generative AI art.

That strength becomes a weakness when the same system is treated as an authority on real-world events, motives, or accountability. Art tolerates exaggeration. Journalism does not.

Confusing an AI’s ability to generate compelling imagery or confident prose with its ability to determine truth is a category error—and a dangerous one. A system designed to be expressive should never be mistaken for one designed to be decisive.

The Elon Musk Problem Isn’t Personal—It’s Structural

This is not about hating Elon Musk or idolizing him. It’s about recognizing power concentration.

Musk owns or controls the AI system, the platform where its answers circulate, and the amplification mechanisms that reward certain framings. That alone should trigger caution. No responsible media consumer would rely on a single billionaire-owned system as their primary arbiter of truth—especially one openly positioned as an ideological counterweight to mainstream journalism.

Pluralism matters. Competing editorial standards matter. Human accountability matters.

Confirmation by Proxy

What’s happening instead is something simpler and more dangerous: confirmation by proxy.

A person doesn’t say, “Here’s the evidence.”
They say, “My AI says you’re wrong.”

The AI becomes a shield. The owner becomes invisible. The user stops thinking.

This dynamic mirrors older habits—“Fox says,” “the algorithm agrees,” “my feed confirms it”—but with a futuristic gloss that makes it feel smarter than it is.

Why Age Helps You See This Clearly

People with decades of experience recognize this pattern immediately. We have seen “neutral” systems quietly pick sides, tech saviors overpromise objectivity, and certainty sold as convenience.

Experience does not make someone infallible. It does make them less likely to confuse speed with truth.

Real analysis is slower. It is messier. It involves disagreement, uncertainty, and cross-checking. That is not a bug. That is the point.

The Real Divide Isn’t Left vs Right

The real divide now is between people who still evaluate sources and people who delegate evaluation to a single machine.

That divide cuts across politics, age, and ideology. And it matters more than most partisan arguments.

Because once judgment is outsourced, debate is over.

A Final Warning

AI should assist thinking—not replace it.

The moment someone says “I trust my AI, therefore you’re wrong,” they are no longer participating in a shared reality. They are renting one.

And rented realities always come with conditions.

