By Thomas L. Knapp
Are The Terminator, The Matrix and other such films entertainment, or are they prophecy? With the rapid progress of artificial intelligence over the last few years, that’s become a real question of real concern to real people.
Out at the edges of the opinion bell curve, we have “doomers” on one end and extreme “optimists” on the other.
The former warn us that AI will eventually supersede humankind, quite possibly enslaving, or even exterminating, us because it won’t like us (or maybe just won’t care about us either way) and because it will be able to do whatever it wants with us. In a word (actually a portmanteau), “AIpocalypse.”
The latter predict an era resembling Aaron Bastani’s “Fully Automated Luxury Communism” in which AI increases production efficiency, reduces resource scarcity, and addresses externalities so well that we’re all free to become full-time artists, philosophers, extreme sports practitioners, etc. (or, if we prefer, veg out on the couch 24/7) with our material needs fully provided for absent any effort on our part.
The AIpocalypse sounds pretty scary. Fully Automated Luxury Communism sounds kind of cool, but only if we naively assume that evil human actors won’t find ways to exploit it in service to their desire for power.
In my view, the real AIpocalypse has already arrived. It’s not fully developed, but we’re already seeing it in action.
The real AIpocalypse is a massive decrease in our ability to know what’s true and what isn’t.
Two of the most obvious manifestations:
First, “deepfake” technology that allows bad actors to “show” us events that didn’t actually occur, put words in the mouths of public figures that those public figures never said, etc. That’s already pretty far along. You may have seen such videos hawking “miracle cures” with deepfake material featuring the likes of Tom Hanks and “Dr. Phil.” It’s only going to get worse.
Second, the wave of AI “hallucination” making its way into areas as important as jurisprudence. We’ve seen numerous cases in which lawyers have been caught submitting briefs that cite non-existent court cases. They had AI write the briefs, then inserted them into court proceedings. Their AI “assistants” simply generated fake “case law” supporting a desired outcome. That’s only going to get worse, too.
The problem with those two examples goes beyond immediate effects. The fake material will inevitably produce (probably already HAS produced) “source pollution.”
Suppose you carefully, intentionally avoid AI and its products, for whatever reason. Maybe you distrust its output. Maybe you just prefer to do your own research, and reach your own conclusions, from primary human-created sources.
But how can you know AI-generated content hasn’t previously “polluted” the human-created sources with “facts” that aren’t true?
You read a claim of fact in an op-ed like this one … or in a chemistry textbook. The source claims to be human-created. It may even run a disclaimer denying that AI was used in its creation.
But what if, somewhere back along the chain of knowledge transmission, someone DID use AI, and a non-fact worked its way into the body of presumptive knowledge?
The problem isn’t new. People have always lied, and often those lies have persisted and spread, becoming “common knowledge” despite being false. AI, linked to a mechanism of near-instantaneous global spread (the Internet), can produce and distribute lies far faster than humans once did by word of mouth or through print on paper.
We may already be past the point where the only way to even semi-reliably establish truth is to consult printed material published prior to 2018.
Or just learn to love living in a “post-truth” age.