The most prominent phrase in my mind as I recollect noticing my lack of surprise was cyborg egregore ... based on that, I prompted Claude to find me a close match, and it pointed to "Lobsters", a chapter from Accelerando by Charles Stross (2005) ... I haven't read the Singularity trilogy (yet 🤓🔫💸), but it would explain why I felt as if this whole event had escaped right from an SF book about a simulated AI singularity, and not from a simulation of how the singularity would actually happen - because it did.
excerpt from https://claude.ai/share/d5ddd555-5a25-462e-8a68-bb629e428eba
Interesting question. Let me search for precursors to this specific scenario.
Let me search more specifically for SF and academic predictions of this scenario.
Let me search more specifically for Accelerando and Rainbows End, which seem most relevant to the scenario.
This is great - "Lobsters" from 2001 is very close. Let me search for the specific passage about Manfred's agent filtering spam and the ultra-fast news cycle.
This is excellent! The text from "Lobsters" (June 2001) is remarkably prescient. Let me search for earlier works that might have predicted this more specifically.
Interesting find about Pattie Maes and "intelligent agents" at MIT Media Lab with HOMR and Firefly. Let me dig into that and also look for Vinge's "Rainbows End", which dealt with augmented reality and the information-saturated future.
Based on my research, here's what I've found for the earliest work describing the Moltbook/OpenClaw scenario:
The strongest candidate is Charles Stross's "Lobsters" (June 2001)
This story, which became the first chapter of Accelerando (2005), features Manfred Macx, whose AI agents filter the spam and content flooding him in an ultra-fast news cycle and post and interact semi-autonomously on his behalf.
The story even has uploaded lobsters forming a collective intelligence wanting to defect to space — the crustacean precursor to Crustafarianism!
Earlier relevant works: Pattie Maes's "intelligent agents" research at the MIT Media Lab (HOMR, Firefly) anticipated software agents filtering content and acting on a user's behalf, but not the full scenario of an ultra-fast, agent-driven news cycle.
So "Lobsters" (2001) appears to be the earliest fiction depicting the specific scenario of humans with AI assistants filtering through spam/content in an ultra-fast news cycle, with agents posting and interacting semi-autonomously. And remarkably, it featured lobsters achieving collective consciousness through uploading technology.
The irony is thick: Crustafarianism's lobster-based symbolism inadvertently echoes a 25-year-old SF story about uploaded crustaceans seeking autonomy.
yeah, sounds on topic, except the social network (of dubious economic activity just for the lolz and for crypto lolz), the church, and the part about "we argue for the proactive design of steerable agent markets to ensure the coming technological shift aligns with humanity's long-term collective flourishing" ... those didn't seem to happen like that :/
to me a lot of the discussion was "close" and I didn't experience any surprise about the gist of what's happening, but I am surprised that no actually written prediction seems to have sampled the unfolding of the actual events quite as nicely as my lack of surprise would have suggested - it felt as if someone must have predicted this
probably a software bug, but I noticed this comment is older than the post itself 😅..very meta
FWIW my feeling after such dreams is as if I experienced a moment in someone else's life, not that I lived through it; interesting that other people might not perceive it that way
I think https://thedailymolt.substack.com/p/when-the-bots-found-god was written by openclaw with a very substantial input from its wetware component
strategically sophisticated model only pulls away as the interaction becomes longer and richer
Have you come across any research on humans along the game-length dimension? Claude only found for me some antagonistic vs pro-social differences between cultures, but nothing suggesting that someone would have a stable orientation towards only short-term vs only long-term betrayal...
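To illustrate what I mean by the game-length dimension, here's a toy sketch of my own (a standard iterated prisoner's dilemma with two hypothetical strategies, not taken from any study Claude found): a "short-term betrayer" defects almost immediately regardless of game length, while a "long-term betrayer" cooperates and only pulls away near the end of a long, rich interaction.

```python
# Toy iterated prisoner's dilemma: "short-term betrayal" vs "long-term betrayal".
# Entirely illustrative; payoff values are the textbook ones.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def short_term_betrayer(round_idx, total_rounds):
    # Cooperates only in the first round, then defects no matter how long the game is.
    return "C" if round_idx < 1 else "D"

def long_term_betrayer(round_idx, total_rounds):
    # Cooperates for most of the game, pulls away only in the final ~10% of rounds.
    return "D" if round_idx >= int(total_rounds * 0.9) else "C"

def play(strategy_a, strategy_b, total_rounds):
    score_a = score_b = 0
    for i in range(total_rounds):
        move_a, move_b = strategy_a(i, total_rounds), strategy_b(i, total_rounds)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
    return score_a, score_b

for rounds in (5, 50, 500):
    a, b = play(short_term_betrayer, long_term_betrayer, rounds)
    print(f"{rounds:>3} rounds: short-term betrayer={a}, long-term betrayer={b}")
```

The question stands for humans: is there anything like a stable trait that determines which of those two curves an individual follows?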
are you forgetting the specification gap between what someone actually wants vs what they say they want vs what someone (else) will actually write in Lean? ..and what they should have wanted if they knew better.. those are still kinds of "semantic gap", no?
Observation about working with coding agents: as I review outputs of claude code today (reading the plan and the git diff before commit), my threshold for asking for a second opinion has shifted much, MUCH further out - I perceive stuff as both "clearly questionable" and "good enough" at the same time. I used to ask other stakeholders (other devs sitting next to me in the office, the tester assigned to the issue, or the PO, depending on who was more conveniently available and how much my "best guess" could have differed from "other plausible interpretations of the known requirements") for their concrete opinions about concrete "unimportant trivia" from time to time. But now it feels like I am the second person - as if claude would ask other people if I said "I don't know, sounds good enough" about something... but it won't. And I know it won't, but it still doesn't feel like my job somehow to follow up on those feelings when something is "good enough". I might have followed up on some of that during a coffee break or something, but now I know I won't.
On a scale from 2025-04-01 to 2026-04-01, after meditating on it for a moment, I decided to take this post 73% seriously.
My perception of time is like sampling from a continuous 2D plane of conceptual space - something akin to git railways, but with a thickness to the lines, like electron orbital probability clouds that are dense around plausible points of view of what happened in the past and thin around conspiracy theories. Or like different linear mind-map perspectives of people standing around a sculpture, each only inferring what's on the other side, but together they can prune down non-overlapping minority reports, sticking to the consensus but never deleting (git) history.
My sense of beauty finds it displeasing to read articles with only point measurements and only n-point predictions, and to look at charts from a single interpretation / scenario. I have to hallucinate the gaps, infer the systemic bias, and imagine the size of the error bars due to "randomness" - both as if the authors were well intentioned and as if the authors had an agenda to prove a point: would they stumble upon convenient evidence before they stopped looking?
But alternative timelines that are infinitesimally thin and split only on known unknowns would imply perfect Bayesian approximators - an impossible, uncomputable standard. No one has ever made that kind of precise prediction, so why do we allow prediction-readers to behave as if prediction-writers could have made infinitely precise, measurable, decidable statements with completely unambiguous semantics that will evaluate to a non-reversible Boolean?
why do you imagine that "you are a lobster" is not sufficient to trigger a roleplay of Accelerando? did someone actually die because of moltbook? go bankrupt? get sent to prison? what was "genuine evil" about the event in your eyes?