What If a Simple "No-AI" Seal Could Tell You What’s Real?

I recently attended an event in Denver emceed by Van Jones, the television host, social activist, and author, and his opening story hit me harder than I expected. He talked about how people today, of all ages, scroll endlessly through feeds of clips, memes, influencers, and viral hot takes, yet somehow miss the facts and the real story behind major world events such as the Middle East conflict or the affordability crisis here in America.

Not because people aren’t curious, but because algorithms decide what they should care about and what version of reality they see.

As he spoke, I found myself thinking back to my own upbringing, when news meant four TV channels and the evening broadcast with Walter Cronkite, the most trusted man in America. That was my world growing up. There was a shared sense of reality, and you generally knew where your information came from. Today, there is no single trusted voice, just millions of competing ones, some real, some fake, and some generated entirely by machines. And increasingly, we can’t tell which is which.

Before going further, I should say this: I take fake news personally.

I was trained at the University of Missouri School of Journalism, the oldest journalism school in the world, and started my career as a cub reporter for United Press International. My desire to become a journalist was sparked by Bob Woodward and Carl Bernstein, whose dogged reporting exposed Watergate and ultimately helped take down a president.

From day one at Mizzou, the lesson was hammered into us: sources matter. Facts matter. And the highest duty of a journalist is to pursue real, unbiased truth, not spin, not opinion, not virality. Journalism at its best speaks truth to power and holds people accountable, whether they are corporate executives, government officials, or anyone else with influence. So when I see AI being used to fabricate news, mimic real people, or distort reality, it hits a nerve.

You don’t have to look far to see the growing problem. In recent months, several AI-generated videos have gone massively viral despite being completely fabricated, and each one fooled thousands, sometimes millions, of viewers.

Van Jones argued that while we may never fully stop false or AI-generated content, we can make it easier for people to recognize what’s authentic.

Some platforms are trying. LinkedIn, TikTok, Pinterest, and Meta (Facebook, Instagram, Threads) have begun labeling AI-generated images and videos using emerging standards like C2PA, the Coalition for Content Provenance and Authenticity. The goal is simple: tell users when content has been created or altered by AI.

But in practice, the system isn’t working very well. A first-of-its-kind audit by Indicator reviewed 516 AI-generated posts across major platforms and found that only about 30 percent were labeled correctly. Pinterest performed best at roughly 55 percent accuracy, while some platforms failed to label any AI content at all.

The gap between policy and reality highlights how hard it is to build trust in today’s feeds. Algorithms push whatever drives engagement, not what is accurate or transparent.

One possible solution is a No-AI Seal: a secure, cryptographically verified mark confirming that content was created entirely by a human. (Think of the Good Housekeeping Seal, which has certified products for more than a century.) Tap the seal, and you could instantly see whether the content is likely authentic. Unlike inconsistent platform labels, a universal seal would give people a clear and reliable signal.
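The mechanics of such a seal can be sketched in a few lines. This is a toy illustration, not a real design: the names are hypothetical, and an actual seal authority would use public-key signatures (as C2PA does) so anyone could verify a seal without holding the issuer’s secret. An HMAC stands in here only so the sketch runs on Python’s standard library.

```python
import hashlib
import hmac

# Hypothetical secret held by the seal-issuing authority (for illustration only;
# a real system would use a public/private key pair, not a shared secret).
AUTHORITY_SECRET = b"demo-secret-held-by-the-seal-authority"

def issue_seal(content: bytes) -> str:
    """The authority signs a hash of content it has vetted as human-made."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(AUTHORITY_SECRET, digest, hashlib.sha256).hexdigest()

def verify_seal(content: bytes, seal: str) -> bool:
    """Check a seal against the content; any alteration invalidates it."""
    expected = issue_seal(content)
    return hmac.compare_digest(expected, seal)

article = b"A human-written news story."
seal = issue_seal(article)
print(verify_seal(article, seal))                 # authentic content checks out
print(verify_seal(article + b" [edited]", seal))  # tampered content fails
```

The key property is the second call: because the seal binds to a hash of the exact bytes, even a one-character edit after sealing makes verification fail, which is what would let a reader trust that sealed content is unchanged since a human made it.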

Of course, no system is perfect. Importantly, AI detection is inherently probabilistic, not 100 percent certain. Creating a trusted authority to issue such a seal would take time. Pilot programs would likely begin in major newsrooms or on certain social platforms, expanding as standards mature. But even an imperfect verification system would be better than today’s free-for-all. Wouldn’t it be ironic if AI backed by humans could sniff out AI disinformation this way?

Of course, a No-AI Seal wouldn’t eliminate fake content or solve algorithmic echo chambers overnight. Far from it. But it could give people a fighting chance to understand what they’re looking at and who created it.

In a world where our feeds increasingly shape our sense of reality, even a small tool like this could help restore clarity, trust, and a shared foundation of truth.
