Do you think it's realistic to assume that we won't have an ASI by the time you reach old age, or that it won't render all this aging stuff irrelevant? In my own model, that's a 5% scenario. The most likely outcomes in my model are an unaligned AGI that kills us all, a nuclear war that prevents any progress in my lifetime, or AI being regulated into oblivion after a large-scale disaster, such as a model that can hack into just about anything connected to the Internet and bring down the entire digital infrastructure.

Unfortunately, the reason people are reluctant to admit their mistakes is that the expected return is usually negative. If you admit that you were the kind of person who espoused and defended an idea you suspected to be false, that's mostly what people will conclude about you, even if you claim to have changed, and especially if you add that espousing the idea gained you status. They may even conclude that you simply shifted course because the position was no longer tenable, not because you were sincerely convinced.

HPMOR is ~4.4 million characters, which would cost around $800–$1,000 to narrate with ElevenLabs, by a conservative estimate.
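A minimal sketch of the arithmetic, assuming a rate of roughly $0.18–$0.22 per 1,000 characters (the actual rate depends on the subscription tier, so treat this as an illustration rather than a quote):

$$4{,}400{,}000 \text{ chars} \times \frac{\$0.20}{1{,}000 \text{ chars}} \approx \$880$$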

Liv Boeree's voice acting is really impressive! The animation too! Congratulations to the whole team, I think it's fair to say it's professional quality now!

They are advised by Dan Hendrycks. That counts for something.

This happened to me several times when I was a tween, and I came up with the light switch trick. I realized that in dreams all the lights behave like dimmers: the light never switches instantly from off to on. So whenever I wondered whether I was in a dream, I looked for a light switch.

I always assumed that my impression of the rationalist memeplex being an attractor for people like that was simply survivorship bias in people reporting their experiences. That impression was strongly reinforced by the mental health figures in the SSC surveys once the usual confounders were controlled for.

Interesting: this is the first time I can remember reading a post on LessWrong that leans so heavily on conflict theory from a degrowth/social justice perspective. A few of the crux points are brushed aside "for another day", which is a little disappointing.

"the grasshopper coordinated with the many unhappy ants – those who did not like the colonomy but who were trapped in it by its powerful network effects and Moloch’s usual bag of tricks. Together they circumscribed the colonomy (e.g. through culture that holds competition at bay)," this is unfortunately where my willing suspension of disbelief collapsed. From Scott's:

Suppose you make your walled garden. You keep out all of the dangerous memes, you subordinate capitalism to human interests, you ban stupid bioweapons research, you definitely don't research nanotechnology or strong AI.

Everyone outside doesn't do those things. And so the only question is whether you'll be destroyed by foreign diseases, foreign memes, foreign armies, foreign economic competition, or foreign existential catastrophes.

As foreigners compete with you – and there’s no wall high enough to block all competition – you have a couple of choices. You can get outcompeted and destroyed. You can join in the race to the bottom. Or you can invest more and more civilizational resources into building your wall – whatever that is in a non-metaphorical way – and protecting yourself.

I can imagine ways that a “rational theocracy” and “conservative patriarchy” might not be terrible to live under, given exactly the right conditions. But you don’t get to choose exactly the right conditions. You get to choose the extremely constrained set of conditions that “capture Gnon”. As outside civilizations compete against you, your conditions will become more and more constrained.

Warg talks about trying to avoid “a future of meaningless gleaming techno-progress burning the cosmos”. Do you really think your walled garden will be able to ride this out?

Hint: is it part of the cosmos?

Yeah, you’re kind of screwed.