Unfortunately, the reason people are reluctant to admit their mistakes is that the expected return is usually negative. If you admit that you were the kind of person who espoused and defended an idea you suspected was false, then even though you claim to have changed, that's mostly what people will conclude about you, especially if you add that espousing the idea gained you status. They may even conclude that you simply shifted course because the position was no longer tenable, not because you were sincerely convinced.
HPMOR is ~4.4 million characters, which would cost around $800–$1,000 to narrate with ElevenLabs, and that's a conservative estimate.
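For what it's worth, here's the back-of-the-envelope arithmetic behind that range, assuming a bulk per-character rate roughly consistent with the figure above (the actual ElevenLabs rate depends on plan and volume, so the numbers below are placeholders, not official pricing):

```python
# Back-of-the-envelope narration cost for HPMOR with a per-character TTS rate.
# The rates below are assumptions for illustration, not official ElevenLabs pricing.
total_chars = 4_400_000  # ~4.4 million characters in HPMOR

# Assumed cost per 1,000 characters at a bulk tier (placeholder values).
rate_low, rate_high = 0.18, 0.23  # USD per 1,000 characters

cost_low = total_chars / 1_000 * rate_low
cost_high = total_chars / 1_000 * rate_high
print(f"Estimated narration cost: ${cost_low:,.0f}-${cost_high:,.0f}")
# -> Estimated narration cost: $792-$1,012
```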
Liv Boeree's voice acting is really impressive! The animation too! Congratulations to the whole team, I think it's fair to say it's professional quality now!
They are advised by Dan Hendrycks. That counts for something.
This happened to me several times when I was a tween, and I came up with the light switch trick. I realized that in dreams the lights are all dimmers: the light never switches instantly from off to on. So whenever I wondered whether I was in a dream, I looked for a light switch.
I had always assumed that my impression that the rationalist memeplex was an attractor for people like that was simply survivorship bias in who reports their experience. That impression was considerably reinforced by the mental health figures in the SSC surveys once the usual confounders were controlled for.
Interesting: this is the first time I can remember reading a post on LessWrong that leans so heavily on conflict theory from a degrowth/social justice perspective. A few of the crux points are brushed aside "for another day", which is a little disappointing.
"the grasshopper coordinated with the many unhappy ants – those who did not like the colonomy but who were trapped in it by its powerful network effects and Moloch’s usual bag of tricks. Together they circumscribed the colonomy (e.g. through culture that holds competition at bay)," this is unfortunately whe...
Do you think it's realistic to assume that we won't have an ASI by the time you reach old age, or that it won't render all this aging stuff irrelevant? In my own model, that's a 5% scenario. The most likely outcomes in my model are an unaligned AGI that kills us all, a nuclear war that prevents any progress in my lifetime, or AI being regulated into oblivion after a large-scale disaster, such as a model that can hack just about anything connected to the Internet and bring down the entire digital infrastructure.