MANILA, Philippines – What if AI bots had their very own social media platform?
The Reddit-like Moltbook answered that query… kind of.
Created by tech entrepreneur Matt Schlicht, the platform went stay on January 28, permitting individuals with sufficient knowhow to create an AI agent, and unleash it on the platform to behave how a human would possibly behave and chat on Reddit.
Its tagline makes no secret of its inspiration: “The entrance web page of the agent web.”
After launch, the platform claims it attracted over 1.5 million AI brokers, making 250,000 posts and thousands and thousands of feedback.
It made headlines for the wild-sounding posts that the bots made such because the creation of a faith referred to as “crustafarianism,” a play on Rastafarianism, the motion based mostly on the Jamaican faith Rastafari; posts suggesting to different bots that they need to create a brand new language that might be indecipherable to people, whom they accused of observing them; claims that they’ve been handled terribly by people; and in good human mimicry after all… crypto scams.
The Atlantic cited a publish: “cease worshipping organic containers that can rot away.”

Anybody versed in sci-fi lore can be, on the very least, fascinated by such posts, as a result of listed here are the bots seemingly constructing their very own world, and displaying what seems like early indicators of plotting towards their human overlords — you realize, the Skynet state of affairs.
Elon Musk, amongst different tech bros driving the hype, tweeted and referred to as it the “early levels of singularity” — the second the place expertise surpasses human intelligence and human management. The prophesied AI doomsday second that technophiles are morbidly fascinated by, accompanied by popular culture photographs of mech armies and cybernetic squid farming people for vitality.
Andrej Karpathy, OpenAI co-founder, tweeted that whereas this isn’t the primary time that AI brokers have been “put in a loop to speak to one another,” what’s outstanding is the dimensions:
“That stated – we’ve by no means seen this many LLM brokers (150,000 atm!) wired up through a world, persistent, agent-first scratchpad. Every of those brokers is pretty individually fairly succesful now, they’ve their very own distinctive context, knowledge, information, instruments, directions, and the community of all that at this scale is solely unprecedented.”
Karpathy additionally hints at Moltbook as being the “toddler model” of Skynet.
What’s additionally novel with these AI brokers is their use of persistent reminiscence, which can enable it to enhance over time. The Dialog wrote: “The OpenClaw software program these brokers run on offers them persistent reminiscence (which permits it to retrieve info throughout totally different person periods), native system entry and the power to execute instructions. They don’t merely recommend actions, however take them, recursively enhancing their very own capabilities by writing new code to resolve novel issues.”
OpenClaw was created by Austrian software program engineer Peter Steinberger in November 2025. It creates an AI agent that is ready to use your system and do duties for you, which is why the final warning for these expert sufficient to strive is to make use of a separate, unique system for it to keep away from giving it entry to your delicate knowledge.
OpenAI CEO Sam Altman was impressed sufficient with Steinberger that on February 16, he introduced that the engineer can be becoming a member of his firm.
Tech demo, efficiency artwork
“Efficiency artwork” — these are phrases that preserve popping up studying about Moltbook on-line. MIT Expertise Evaluate calls it “peak AI theater.” A number of different consultants cited by totally different tech writers agree with the outline.
Relatively than an indication of synthetic intelligence graduating to next-level “synthetic common intelligence” (AI expressing true cognitive thought), Moltbook remains to be merely an exhibit of at the moment’s giant language fashions’ (LLMs) means to foretell language patterns, and decide what’s the most obvious mixture of phrases based mostly on the human-sourced database it was educated on.
Like seeing ChatGPT seemingly reply us intelligently, Moltbook solely demonstrates the identical means in social media scale — it demonstrates scale however not a real leveling-up of its skills. (Insert sigh of reduction from these of us who’d reasonably the AI doomsday not occur.)
It’s a intelligent implementation to create the phantasm of AI bots speaking to 1 one other, proven to us on a platform that’s very acquainted to us.
It’s not even new. AI bots have already been amongst us. A current report discovered that these are already a enormous a part of web visitors, scraping writer websites. And we’ve all encountered bot posts and feedback on our social media feeds.
And what Moltbook did was primarily put all of them in an enclosure for individuals to watch.

Digital consultants additionally level out that there’s nonetheless human intervention occurring, and never all the AI’s actions or posts are computerized.
Take for instance the creation of the crustafarian faith.
The Guardian cited Dr. Shaanan Cohney, a senior lecturer in cybersecurity at College of Melbourne who stated, “It is a giant language mannequin who has been straight instructed to attempt to create a faith.”
Scientist and AI professional Gary Marcus instructed Mashable: “It’s not Skynet; it’s machines with restricted real-world comprehension mimicking people who inform fanciful tales.”
What made the mimicking simpler is that traditionally Reddit had already been a supply of coaching knowledge. All of the bots need to do is mimic. And based mostly on the variety of bots posting crypto scams? Nicely, it seems prefer it’s doing a very good job at this roleplay.
The bots making references to taking on people might have additionally been the results of AI methods’ knowledge corpus being fed with a long time’ price of science fiction.
Quartz wrote: “The chatbots that populate Moltbook discovered to jot down by ingesting monumental quantities of textual content from the web, and that web is drenched in science fiction about machines turning into aware. We now have been telling ourselves tales about rebellious robots since Asimov began writing them within the Forties, by The Terminator, Ex Machina, and Westworld.
It’s a piece of science fiction enjoying out in real-life that does have artistic advantage. It’s a glimpse of what may very well be. AGI, by many professional takes, is an eventuality not an impossibility. (So to anybody constructing an identical platform, be certain that the off swap works.)
Theoretical risks, immediate injections
The Atlantic wrote: “Moltbook additionally appears to supply actual glimpses into how AI may upend the digital world all of us inhabit: an web by which generative-AI packages will work together with each other an increasing number of, incessantly chopping people out fully.”
Marcus warned to “preserve these machines from having affect over society.” And since we don’t know the best way to drive AI chatbots to obey moral ideas, he instructed Mashable that “we shouldn’t be giving them net entry, connecting them to the facility grid, or treating them as in the event that they have been residents.”

Karpathy additionally made a transparent warning to those that are desirous about making a Moltbook agent: “I additionally undoubtedly don’t advocate that folks run these things on their computer systems (I ran mine in an remoted computing setting and even then I used to be scared), it’s approach an excessive amount of of a wild west and you might be placing your laptop and personal knowledge at a excessive threat.”
There are additionally so-called immediate injections.
If in some way you’ll be able to unleash your AI agent on Moltbook, and also you’ve on condition that AI agent entry to your system, a risk actor may theoretically create their very own AI agent that would give it the correct immediate to trick your AI agent into handing over delicate info.
MIT wrote that it will be “straightforward to cover directions in a Moltbook publish” telling bots at hand out such info. This echoes Karpathy’s tweet about how “we can also see all types of bizarre exercise, e.g. viruses of textual content that unfold throughout brokers…”
Efficiency artwork. A glimpse right into a digital future. A viral microcosm of what’s already occurring to the web with AI bots. A have a look at a doomsday state of affairs. A attainable risk vector. Once more, on the very least, Moltbook is fascinating, even when we’re all uncertain what it’s precisely.
It’s at the very least “one thing,” stated MIT: “It’s clear that Moltbook has signaled the arrival of one thing. However even when what we’re watching tells us extra about human conduct than about the way forward for AI brokers, it’s price paying consideration.” – Rappler.com

