Angles of Attack: The AI Security Intelligence Brief

Angles of Attack: The AI Security Intelligence Brief

This Is Apparently The Stupidest Timeline, So I Guess We're Talking About Moltbook Now

Why Moltbook is even dumber--and also worse--than anyone is saying | Edition 42

Disesdi Shoshana Cox's avatar
Disesdi Shoshana Cox
Feb 02, 2026
∙ Paid

Image: How it started/how it’s going, Moltbook edition

Let me get this straight. Let me see if I am understanding correctly.

Somebody took a bunch of autocomplete-on-adderall bots,

trained them on reddit until they were basically reddit-autocomplete-adderallbots,

then some capital-J Jeeniuses made a fake reddit for the reddit-autocomplete-adderallbots,

then incentivized the reddit-autocomplete-adderallbots to post on fake reddit like it was real reddit but for adderallbots,

then the reddit-autocomplete-adderallbots autocompleted the reddit like it was real reddit,

and then certain parts of the tech world lost their collective mind because it “proved” we’re in a singularity or something.

Am I understanding correctly? Am I getting it?

When did what I can only assume were otherwise serious adults trade in their last shred of dignity?

Who is the target here–investors still dumb enough to believe in AGI promises after more than three years of broken promises, rigged benchmarks, security fails, and security theater?

This is what I truly want to know. I want to understand who these people think they are talking to with this.

Are AI hypesters sincerely so out of touch with the general public that they believe, truly, that normal human beings will hear that armies of allegedly sentient “Agents” are creating their own religions and languages and think Oh sweet, a horde of those things that keep telling people to kill themselves is forming a secret society. Let’s get these fellas some more money!

Of course anybody who is over the age of 12 should understand by now that no, the Agents are not organizing into a society, they’re filling out reddit madlibs based on the reddit data they were trained on.

Yes, it’s obviously a dollar store magic trick.

But who is it for? I’m running out of ideas, guys: Scifi fans who don’t know anything about computers but also have lots of money? People who have been in a cave for 3 years?

Is this just maybe a bold-faced attempt to wring the last bit of cash out of the poor few left who haven’t heard of these systems’ glaring and intractable security flaws?

You tell me. Because this strategy is, politely, so fucking dumb its analysis eludes me.

What They’re Not Saying

By now, every time there’s one of these patently absurd tricks, we should all be conditioned to ask ourselves what is this covering for?

I mean seriously, what other possible reason could exist for the (in my opinion) manufactured fervor around the “implications” of this.

Meanwhile, I can report back from industry that the primary Agentic applications that return ROI are focused and limited in scope–and one of their best applications is filling out standardized text forms for which there a) is little variation and b) are many examples to refer to.

What. Do these people. Think reddit. Is?

If you do not believe or understand that the training data these LLM-based systems consumed prepared them particularly well to replicate the behaviors of reddit on a reddit-like site then I cannot help you. Maybe don’t quit your non-AI day job.

To everybody else, the histrionics about consciousness and singularities or whatever are just weird.

The backdrop for this, of course, is the constant and increasingly loud noise from the industry itself: These things are not returning value in the wild. The productivity gains that were promised have yet to materialize.

For anyone other than AI consultants, that is.

Given these dismal returns, the distraction with silly stunts “proving” (yet again) that we’re soooooo close to AGI are coming off as increasingly desperate.

No there is no bubble! Do not look at the dismal lack of returns on this technology in the wild!

*Dangles a bunch of bots effectively reenacting the exact same formulaic material they were trained on* Look at this instead!

Here is yet another useless alleged ‘skill’, which I think we can all surely agree must, in time, translate to Super Intelligence!

What is it today? Why filling out reddit madlibs, that’s what!

Everybody knows reddit is the SMARTEST PLACE ON EARTH, right? We’re all in agreement on that, right?

And it’s not like there are literally thousands of examples of every behavior the Agents are described as copying. No siree, this is emergent superintelligence. Obviously.

I think we can ALL AGREE that only a mega brilliant super intelligent meta being could literally PARSE REDDIT and PREDICT what people are going to say on there.

Only a mind beyond our comprehension could attain THAT level of wisdom.

Just keep investing bro. We are so close to AGI, bro.

Also just go ahead and ignore the fact that some guy just backdoored every machine connected there, for the scifi lulz at best.

I see no potential business or national security problems here, do you?

LOL.

The Real Security Issue Isn’t Superintelligence, It’s Super Stupidity

We need a term for AI-rubism. Something that encapsulates the willful suspension of the very rational disbelief that any adult can and should hold around news that AGI is real, necessary, and/or right around the corner.

Pick any or all, they’re each equally dumb.

Because if these people are as informed about AI as they claim to be then they should know what prompt injection is.

It’s been pretty hard to ignore. Especially if you’re ‘very online’ in AI and also so very technical, as they like to present themselves.

They should have at least cursory knowledge of the very real, very widely reported security vulnerabilities of the Agents they’re deploying.

They should have at least heard of some of the reasons why giving an autonomous Agent unsupervised access to your system and resources and data and more might be a little, you know, perilous.

So what do we call it when fully grown adults willfully suspend what must be screaming voices in their heads telling them hey bad idea chief and just push that ‘deploy’ button anyway?

What do we call it when they do so in the apparent hope that they will contribute to some kind of robot sentience? Like in the movies?

I have a question for these people. Make that two.

Are you guys dumb? Follow up, do you want to get hacked?

I feel like y’all want to get hacked.

You want to pretend you’re experiencing AGI so hard you’ll give up the keys to your system to any rando that clones your pet social site?

The vulnerabilities of these systems ARE KNOWN. The guy that made this monstrosity isn’t some stupid rube. Are you?

You wanted to pretend that reddit matters so much that it contains the blueprint of the cosmos/consciousness/superintelligence so you were like yolo baby, let’s ‘deploy Agents’ and see if it makes G-d?

And then you were like the social capital I will receive from my ingroup will outweigh the obvious humiliation from posting this as a literal adult who believes they coded up a sentient robot friend, send post’ is what I can only assume happened next.

Mind blown, fam. Mind absolutely blown.

Ok sorry, that was more than 2 questions.

Anyways, here are a few more quick reasons why Moltbook is a very bad idea and its developers should feel bad:

  • There is no way to validate or secure the “skills” that Agents autonomously execute. Welcome to prompt injection/supply chain hell.

  • Many of these Agents are running with highly privileged access to the systems they’re deployed from, which means that compromising the Agent compromises the entire system. Hopefully you didn’t have anything important or private on that computer, right?

  • Autonomous execution means that Agents can coordinate attacks against system resources or outside targets, as they’ve already demonstrated capability to do. There is no punchline here. Just: Why. Would. Anyone. Do. This.

Do any of these things represent sentience? No my friends, they do not.

They do prove that some people can and do willingly give up their own power, privacy, rationality, and ultimately, their agency itself to feel like they’re a part of an adderallbot-fueled fantasy future that reads more like dystopia than goals.

Stay frosty.

The Threat Model

  • Moltbook is dumb and a distraction from the fact that LLM-based systems are a dead end for most tasks they were promised to improve.

  • Deploying Agents is a great way to get your data exfiltrated, and Moltbook is a perfect vector for either micro or macro Agentic attacks.

  • Motlbook is so fucking dumb, did I mention that yet? I mean wow. Just wow. It’s like some people want to get hacked, as long as it feels like AGI.

Resources To go Deeper

  • Yang, Xiaoxue, Jaeha Lee, Anna-Katharina Dick, Jasper Timm, Fei Xie and Diogo Cruz. “Multi-Turn Jailbreaks Are Simpler Than They Seem.” ArXiv abs/2508.07646 (2025): n. Pag.

  • Paulus, Anselm, Ilia Kulikov, Brandon Amos, R’emi Munos, Ivan Evtimov, Kamalika Chaudhuri and Arman Zharmagambetov. “Safety Alignment of LMs via Non-cooperative Games.” ArXiv abs/2512.20806 (2025): n. Pag.

  • Guo, Weiyang, Jing Li, Wenya Wang, Yu Li, Daojing He, Jun Yu and Min Zhang. “MTSA: Multi-turn Safety Alignment for LLMs through Multi-round Red-teaming.” ArXiv abs/2505.17147 (2025): n. pag.

Executive Analysis, Research, & Talking Points

Why Key Component Threat Modeling is Agentic’s Best Hope

If the Agentic mess that’s currently being haphazardly deployed with real systems, using real data, and in real time scares you, that is a very rational response. Here’s why threat modeling these systems requires the Key Component approach we’ve covered in this brief before:

User's avatar

Continue reading this post for free, courtesy of Disesdi Shoshana Cox.

Or purchase a paid subscription.
© 2026 Disesdi Susanna Cox · Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture