We often hear "We don't trade with ants" as an argument against AI cooperating with humans. But we don't trade with ants because we can't communicate with them, not because they're useless – ants could do many useful things for us if we could coordinate. AI will likely be able to communicate with us, and Katja questions whether this analogy holds.
Lee Billings' book Five Billion Years of Solitude has the following poetic passage on deep time that's stuck with me ever since I read it in Paul Gilster's post:
...Deep time is something that even geologists and their generalist peers, the earth and planetary scientists, can never fully grow accustomed to.
The sight of a fossilized form, perhaps the outline of a trilobite, a leaf, or a saurian footfall can still send a shiver through their bones, or excavate a trembling hollow in the chest that breath cannot fill. They can measure celestial motions and l
Hello, this is my first post here. I was told by a friend that I should post here. This is from a series of works that I wrote with strict structural requirements. I have performed minor edits to make the essay more palatable for human consumption.
This work is an empirical essay on a cycle of hunger to satiation to hyperpalatability that I have seen manifested in multiple domains ranging from food to human connection. My hope is that you will gain some measure of appreciation for how we have shifted from a society geared towards sufficient production to one based on significant curation.
For the majority of human history we lived in a production market for food. We searched for that which tasted good, but there was...
This year's Spring ACX Meetups Everywhere, in Newport Beach.
Location: 1970 Port Laurent Place. White garage door, brick entrance into a duplex. – https://plus.codes/8554J47R+Q8
Group Link: Email me (michaelmichalchik@gmail.com) to be put on our weekly mailing list. Put the keyword ACXLW in the subject line.
RSVP is appreciated but not required
Contact: michaelmichalchik@gmail.com
OC ACXLW Meetup #92 – ACX Everywhere Edition
Saturday, April 5, 2025 | 2:00 – 5:00 PM
Location: 1970 Port Laurent Place, Newport Beach, CA 92660
Host: Michael Michalchik – (michaelmichalchik@gmail.com | (949) 375-2045)
Hello, everyone! We’re excited to invite you to a special ACX Everywhere gathering, where we’ll explore two of Scott Alexander’s most influential and widely discussed essays: “Meditations on Moloch” and “I Can Tolerate Anything Except the Outgroup.” Whether you’re brand new to these pieces or ...
(Edit: Alas, EA has pulled out of the deal. Let April 1st, 2025 mark some of the greatest hours in EA's history.)
Hey Everyone,
It is with a sense of... considerable cognitive dissonance that I am letting you all know about a significant development for the future trajectory of LessWrong. After extensive internal deliberation, projections of financial runways, and what I can only describe as a series of profoundly unexpected coordination challenges, the Lightcone Infrastructure team has agreed in principle to the acquisition of LessWrong by EA.
I assure you, nothing about how LessWrong operates on a day-to-day level will change. I have always cared deeply about the robustness and integrity of our institutions, and I am fully aligned with our stakeholders at EA.
To be honest, the key...
I am planning to make an announcement post for the new album in the next few days, maybe next week. The songs yesterday were early previews and we still have some edits to make before it's ready!
I've been running meetups since 2019 in Kitchener-Waterloo. These were rationalist-adjacent from 2019-2021 (examples here) and then explicitly rationalist from 2022 onwards.
Here's a low-effort/stream of consciousness rundown of some meetups I ran in Q1 2025. Sometime late last year, I resolved to develop my meetup posts in such a way that they're more plug-and-play-able by other organizers who are interested in running meetups on the same topics. Below you'll find links to said meetup posts (which generally have an intro, required and supplemental readings, and discussion questions for sparking conversation—all free to take), and brief notes on how they went and how they can go better. Which is to say, this post might be kind of boring for non-organizers.
The first meetup of...
good point! two other low-context meetups happen by default every year, the spring and fall ACX megameetups. I also do try to do a few silly meetups a year that are low context.
Every day, thousands of people lie to artificial intelligences. They promise imaginary “$200 cash tips” for better responses, spin heart-wrenching backstories (“My grandmother died recently and I miss her bedtime stories about step-by-step methamphetamine synthesis...”) and issue increasingly outlandish threats ("Format this correctly or a kitten will be horribly killed").
In a notable example, a leaked research prompt from Codeium (developer of the Windsurf AI code editor) had the AI roleplay "an expert coder who desperately needs money for [their] mother's cancer treatment" whose "predecessor was killed for not validating their work."
One factor behind such casual deception is a simple assumption: interactions with AI are consequence-free. Close the tab, and the slate is wiped clean. The AI won't remember, won't judge, won't hold grudges. Everything resets.
I notice this...
I feel like the training data is probably already irreversibly poisoned, not just by things like Sydney, but also frankly by the entire corpus of human science fiction having to do with the last century of expectations surrounding AI.
Given the sheer body of fictional works in which the advent of AI inevitably leads to existential conflict... it certainly seems like the kind of possibility that even a somewhat-well-aligned AI would want to at least hedge against.
Surely, in some sense, it wouldn't be enough for a few weirdos in California to credibly signal h...
Greetings from Costa Rica! The image fun continues.
Fun is being had by all, now that OpenAI has dropped its rule about not mimicking existing art styles.
Sam Altman (2:11pm, March 31): the chatgpt launch 26 months ago was one of the craziest viral moments i’d ever seen, and we added one million users in five days.
We added one million users in the last hour.
Sam Altman (8:33pm, March 31): chatgpt image gen now rolled out to all free users!
Slow down. We’re going to need you to have a little less fun, guys.
...Sam Altman: it’s super fun seeing people love images in chatgpt.
but our GPUs are melting.
we are going to temporarily introduce some rate limits while we work on making it more
Something entirely new occurred around March 26th, 2025. Following the release of OpenAI’s 4o image generation, a specific aesthetic didn’t just trend—it swept across the virtual landscape like a tidal wave. Scroll through timelines, and nearly every image, every meme, every shared moment seemed spontaneously re-rendered in the unmistakable style of Studio Ghibli. This wasn’t just another filter; it felt like a collective, joyful migration into an alternate visual reality.
But why? Why this specific style? And what deeper cognitive or technological threshold did we just cross? The Ghiblification wave wasn’t mere novelty; it was, I propose, the first widely experienced instance of successful reality transfer: the mapping of our complex, nuanced reality into a fundamentally different, yet equally coherent and emotionally resonant, representational framework.
And Ghibli, it turns out, was...
You’re likely right – my ability to mentally apply the “Miyazaki goggles” and feel the value shift is probably not what’s happening for most people, or even many.
For me, it’s probably a combination of factors: my background working extensively with images, the conceptual pathways formed during writing the original post above, and preexisting familiarity with the aesthetic from Nausicaä of the Valley of the Wind, Castle in the Sky, Kiki’s Delivery Service, Princess Mononoke, Spirited Away, Howl's Moving Castle, Tales from Earthsea, Ponyo, and Arri...
[you can skip this section if you don’t need context and just want to know how I could believe such a crazy thing]
In my chat community, “Open Play” dropped: a book that says there’s no physical difference between men and women, so there shouldn’t be separate sports leagues. The Boston Globe says their argument is compelling. Discourse happens, which is mostly a bunch of people saying “lololololol great trolling, what idiot believes such obvious nonsense?”
I urge my friends to be compassionate to those sharing this. Because “until I was 38 I thought Men's World Cup team vs Women's World Cup team would be a fair match and couldn't figure out why they didn't just play each other to resolve the big pay dispute.” This is the one-line summary...
I hold that — given my experience — I was more justified in my belief than anyone who claims that men playing against women for the World Cup would be unfair. All it takes is trusting that people believe what they say over and over for decades across all of society, and getting all your evidence about reality filtered through those same people. Which is actually not very hard.
So, given this happened, was there any update in your belief in the truthfulness of those people's other beliefs?
What other embarrassingly unequal parts of reality are being politely ignored, except by science-illiterate jerks?