AI floods us with subintelligence, then quietly erodes our ability to recognize, question, or resist it
Growing up, I always thought that the end of the human race would come from some kind of superintelligence, something inspired by the tough-to-beat final bosses in the video games I've played and the movies I've watched. From those classic prey-turned-predator story arcs, I came to fear that the terminators and murderous robot dolls might come to hunt us all.
But lately I've come to realize that the bigger, more insidious threat is already in front of us, in forms so alluring that we ignore the warning labels: social media content, research-papers-in-minutes, AI hacks and "shortcuts."
I'm talking, of course, about AI slop and the sea of subintelligence we're drowning in.
AI slop is low-quality digital content mass-produced by AI. In other words, it's junk. Which I understand is quite ironic, given how much we praise AI for its supposed "intelligence." But AI vomits a lot of stupid things, and we catch them all with our mouths wide open.
Last year, for example, The Conversation reported that scientists discovered a peculiar term appearing in published scientific papers: "vegetative electron microscopy." Those of us with non-scientific minds would brush this off as jargon, except that the term was garbage and didn't exist at all, at least until AI came up with it, preserved it, and reinforced it within its systems so that it found its way into science. Science, The Atlantic declared, is drowning in AI slop.
Even more unbelievable is how AI made this error. It turns out it was caused by a mistake in the digitizing process: the words "vegetative" and "electron," by coincidence, appeared next to each other but in different columns in some journals published in the '50s. Decades later, the error surfaced in journals and scientific papers, eluding reviewers and editors.

Absurd as the story may sound, it's not an uncommon occurrence to find AI slop in research journals. Researchers have been complaining about "phantom citations" in scientific papers.
One infamous instance is the Trump administration's "Make America Healthy Again" report, which reportedly included at least seven citations to sources that didn't exist, before they were updated hours after publication.
If this junk made its way into these tightly guarded scientific communities, what more in spaces with little to no regard for truthfulness, such as social media?
Rise of pseudo-journalistic pages
In just the past few months, Filipinos have seen the rise of many pseudo-journalistic Facebook pages claiming to offer "explanatory journalism" and easy-to-understand "insights." In reality, they churn out what can only be suspected as AI-generated summaries-turned-infographics, piggybacking on the work of real journalists actually doing the groundwork.
"Deep" insights, such as "malatang is not just food but a cultural reflection of Gen Zs," and many more of those "this is not [insert trendy topic] but [some random intellectual metaphor]" lines: all AI slop. And when these pages don't even have the balls to declare their authors, or even their editorial board, to hold themselves accountable for what they put out there, you know that THEY know they're putting out crap they don't want to be associated with.
But here's where it really gets terrifying for me: it's not that AI produces junk, it's how AI is slowly making us incapable of recognizing it.
A study from MIT's Media Lab tracked the brain activity of essay writers over four months and found that those who used ChatGPT showed weaker memory, weaker creativity, weaker critical thinking. Their essays, while polished, were described by evaluators as "soulless."
Worse, over time, the ChatGPT users got lazier, eventually just copy-pasting AI output wholesale. The researchers call it "cognitive debt": the more you let AI think for you, the less your brain bothers to think at all.
The art of learning
We've been talking to a lot of educators as well, as we do our workshops on AI, and what's clear is that this is also playing out in schools and has reached absurdity on an almost poetic level, where students use AI, and teachers, overworked and underpaid, also use AI to evaluate them.
In this scenario, are we really learning and teaching anything? After all, the learning is in the friction, as my colleague and co-trainer Gemma Mendoza always says.
These days, schools are forced to spend so much money on AI detection tools that don't really work, flagging real student work as AI-generated and missing actual AI-generated submissions.
This is the real trap. AI floods us with subintelligence, then quietly erodes our ability to recognize, question, or resist it.
Researchers have been warning us about this "deskilling" that is happening as we offload our thought processes to machines for short-term gains. We have only been able to survive all these years as a species, after all, thanks to our critical thinking skills and adaptability.
So I'm now convinced that the end will not come from some intelligent overlord descending upon us with a masterplan. It will come from a slow, warm flood of stupidity we've mistaken for convenience: content we never asked for, citations that lead nowhere, insights that don't really matter.
We won't be hunted. We'll just forget what it's like to think for ourselves, one summarized infographic at a time. – Rappler.com

