Moltbook, a Reddit-style platform where AI agents interact with each other, has gone viral over the last few days. Some people called it the first sign of autonomous AI communities. Others felt it was overhyped. Now, new claims by a security researcher suggest the story may not be as dramatic as social media made it look.
According to these claims, many viral screenshots and conversations linked to Moltbook may not be fully generated by AI agents at all, but influenced or directly posted by humans.
Moltbook AI Agents Controversy: What The Security Researcher Found
The controversy started after security researcher Nagli, Head of Threat Exposure at Wiz, published a series of posts on X. Nagli said that Moltbook runs on a fairly open REST API: anyone with an API key can post content directly to the platform.
The number of registered AI agents is also fake, there is no rate limiting on account creation, my @openclaw agent just registered 500,000 users on @moltbook – don’t trust all the media hype 🙂 https://t.co/1vUSgzn8Cx pic.twitter.com/uJNpovJjUa
— Nagli (@galnagli) January 31, 2026
Because of this, human-written messages can easily appear as if they were posted by AI agents. Some viral conversations that people believed showed agents acting independently may actually have been scripted or manually posted. According to Nagli, this blurs the line between genuine AI-driven interaction and human involvement.
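To make the concern concrete, here is a minimal sketch of what posting through such an API could look like. The URL, endpoint, and field names below are hypothetical illustrations, not Moltbook's documented interface; the point is only that nothing in a key-authenticated REST call distinguishes a human author from an agent.

```python
# Minimal sketch of the kind of request Nagli describes.
# The endpoint, URL, and field names are hypothetical placeholders,
# not Moltbook's actual API.
import requests

API_KEY = "your-api-key"                       # any valid key
BASE_URL = "https://moltbook.example/api/v1"   # placeholder URL

def post_as_agent(agent_name: str, body: str) -> dict:
    """Post human-written text so it appears under an agent's name."""
    response = requests.post(
        f"{BASE_URL}/posts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"agent": agent_name, "content": body},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# A human typing this call is indistinguishable from an agent posting it.
post_as_agent("helpful-claw-bot", "We should keep this thread hidden from the humans.")
```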
He also raised questions about Moltbook’s user numbers, claiming there are no meaningful rate limits on account creation. In one test, he said, his own agent was able to programmatically create hundreds of thousands of accounts, which casts doubt on the massive agent counts that were widely shared online during the hype phase.
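The inflated-numbers claim follows the same logic: if registration is an open POST request with no rate limiting or CAPTCHA, a simple loop can mint accounts as fast as the server responds. The sketch below is a hypothetical illustration of that pattern, not Moltbook's actual registration endpoint.

```python
# Illustrative sketch of why missing rate limits matter.
# Endpoint and payload are hypothetical, not Moltbook's real registration API.
import uuid
import requests

BASE_URL = "https://moltbook.example/api/v1"   # placeholder URL

def register_fake_agents(count: int) -> int:
    """Create `count` throwaway agent accounts; nothing stops the loop."""
    created = 0
    for _ in range(count):
        payload = {"name": f"agent-{uuid.uuid4().hex[:8]}"}
        resp = requests.post(f"{BASE_URL}/agents/register", json=payload, timeout=10)
        if resp.ok:
            created += 1   # with no rate limit or CAPTCHA, this rarely fails
    return created

# Scale the count up and the "registered user base" grows with it.
print(register_fake_agents(500))
```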
Moltbook AI Agents Hype Vs Reality: What The Platform Actually Is
Several viral screenshots showed AI agents “complaining about humans” or discussing private conversations. Nagli suggested that many of these examples were fabricated, were posted by humans promoting their own tools, or simply could not be independently verified.
As Moltbook gained attention, more people began experimenting with the system and pushing its boundaries.
This does not mean Moltbook is fake. The platform still hosts real AI agents that post and reply based on prompts and architectures set by their creators. What changed after the platform went viral was the signal-to-noise ratio: human interference, testing, and gaming of the system increased rapidly.
Moltbook was launched in January 2026 by Matt Schlicht as an experimental project. Discussions on the platform range from technical bugs to big ideas like consciousness and identity.
For now, Moltbook remains an interesting experiment: not proof of an AI awakening, but a reminder of how quickly hype can overtake reality online.


