All of Raphaël's Comments + Replies

Do you think it's realistic to assume that we won't have an ASI by the time you reach old age, or that it won't render all this aging stuff irrelevant? In my own model, that's a 5% scenario. Most likely, in my model, is that we get an unaligned AGI that kills us all, a nuclear war that prevents any progress in my lifetime, or AI being regulated into oblivion after a large-scale disaster, such as a model that can hack into just about anything connected to the Internet and bring down the entire digital infrastructure.

Unfortunately, the reason people are reluctant to admit their mistakes is that the expected return is usually negative. If you admit that you were the kind of person to espouse and defend an idea you suspected to be false, even if you claim to have changed, that's mostly what people will conclude about you, especially if you add that espousing the idea gained you status. They may even conclude that you simply shifted course because the position was no longer tenable, not because you were sincerely convinced.

3Quinn
I've certainly wondered this! In spite of the ACX commenter I mentioned suggesting that we ought to reward people for being transparent about learning epistemics the hard way, I find myself not 100% sure it's wise or savvy to trust that people won't just mark me down as, like, "oh, so quinn is probably prone to being gullible or sloppy" if I talk openly about what my life was like before math coursework and the Sequences.

HPMOR is ~4.4 million characters, which would cost around $800–$1000 to narrate with ElevenLabs, on a conservative estimate.
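For concreteness, here's a minimal back-of-envelope sketch of that estimate in Python. The per-1,000-character rates are assumptions reverse-engineered from the $800–$1000 range above, not quoted ElevenLabs prices:

```python
# Back-of-envelope cost estimate for narrating HPMOR with a TTS service.
# The rates below are assumed values implied by the $800-$1000 range above,
# not official ElevenLabs pricing.

CHARACTERS = 4_400_000  # approximate length of HPMOR

for rate_per_1k in (0.18, 0.20, 0.23):  # assumed $ per 1,000 characters
    cost = CHARACTERS / 1_000 * rate_per_1k
    print(f"${rate_per_1k:.2f}/1k chars -> ~${cost:,.0f}")

# $0.18 -> ~$792, $0.20 -> ~$880, $0.23 -> ~$1,012,
# consistent with the conservative $800-$1000 figure.
```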

3Solenoid_Entity
You'd probably want to factor in some time for making basic corrections to pronunciation, too. ElevenLabs is pretty awesome but in my experience can be a little unpredictable with specialist terminology, of which HPMOR has... a lot. It wouldn't be crazy to do an ElevenLabs version of it with multiple voices etc., but you're looking at significant human time to get that all right.

Liv Boeree's voice acting is really impressive! The animation too! Congratulations to the whole team! I think it's fair to say it's professional quality now.

7Shankar Sivarajan
I'd say this is almost, but not quite, as good as 500 Million, But Not A Single One More, but that's probably because I like the message of that one more.

They are advised by Dan Hendrycks. That counts for something.

4Vaniver
Sure, it's better for them to have that advice than not have that advice. I will refer you to this post for my guess of how much it counts for. [Like, we can see their stated goal of how they're going to go about safety!]
8trevor
Yes, the entire redundancy argument hinges on the state of the situation with Hendrycks. Unless Hendrycks can reform X.AI's current stated alignment plan to a sufficient degree, it would just be another Facebook AI Labs, which would reduce, not increase, the redundancy. In particular, if Hendrycks would just be removed or marginalized in scenarios where safety-conscious labs start dropping like flies (a scenario that Musk, Altman, Hassabis, LeCun, and Amodei are each aware of), then X.AI would not be introducing any redundancy at all in the first place.

This happened to me several times when I was a tween, and I came up with the light switch trick. I realized that in dreams the lights are all dimmers: the light never switches instantly from off to on. So when I wondered if I was in a dream, I looked for a light switch.

1Bridgett Kay
I have a similar trick I use with pirouettes: if I can turn and turn without stopping, then it is a dream. Of course, in this dream I was not a dancer and had never danced, so I didn't even think of it.

I always thought that my impression that the rationalist memeplex was an attractor for people like that was simply survivorship bias in who reported their experience. That impression was strongly reinforced by the mental health figures in the SSC surveys once the usual confounders were controlled for.

Interesting: this is the first time I remember reading a post on LessWrong that's so heavy on conflict theory with a degrowth/social justice perspective. A few of the crux points are brushed aside "for another day", which is a little disappointing.

"the grasshopper coordinated with the many unhappy ants – those who did not like the colonomy but who were trapped in it by its powerful network effects and Moloch’s usual bag of tricks. Together they circumscribed the colonomy (e.g. through culture that holds competition at bay)," this is unfortunately whe... (read more)

1c.trout
Thanks for reading! Yeah, I find myself interested in the topics LWers are interested in, but I'm disappointed certain perspectives are missing (despite being prima facie as well-researched as the perspectives typical on LW). I suspect a bubble effect.

Yup, I suspected that last version would be the hardest for LWers to believe! I plan on writing much more in depth on the topic soon. You might be interested in Guive Assadi's recent work on this topic (not saying he makes the story more plausible, but he does tease out some key premises/questions for its plausibility).

My only intention here was to lay out the comparison that needs making (assuming you're a consequentialist with very low discount rates etc.): what's the EV of this "once and for all" expansionist solution vs the EV of a "passing the torch" solution? And what level of risk aversion should we be evaluating this with? Neither will last forever or be perfect. I wouldn't so quickly dismiss the potentially ~10^5 or ~10^6 year long "passing the torch" solution in favor of the "once and for all" solution, whose certainty is orders of magnitude lower. Especially once I add back in the other cruxes that I couldn't develop here (though I encourage reading the philosophical literature on it). I want to see a lot more evidence on all sides, and I think others should too.
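For what it's worth, here is a minimal toy sketch of the comparison described above, assuming a simple CRRA-style utility; every number in it is an illustrative placeholder, not an estimate from the post. It shows how risk aversion can favor a high-probability, smaller-payoff "passing the torch" option over a low-probability, enormous-payoff "once and for all" option:

```python
# Toy comparison of the two strategies discussed above.
# All probabilities and payoffs are illustrative placeholders.

def certainty_equivalent(outcomes, probs, rho=0.5):
    """Certainty equivalent under u(x) = x**rho, with 0 < rho < 1
    encoding risk aversion (lower rho = more risk averse)."""
    expected_utility = sum(p * x**rho for x, p in zip(outcomes, probs))
    return expected_utility ** (1 / rho)

# "Once and for all": small chance of an enormous, locked-in payoff.
once = certainty_equivalent(outcomes=[1e9, 0.0], probs=[0.01, 0.99])

# "Passing the torch": high chance of a merely very large payoff.
torch = certainty_equivalent(outcomes=[1e6, 0.0], probs=[0.90, 0.10])

print(f"once-and-for-all  CE ~ {once:,.0f}")   # ~100,000
print(f"passing-the-torch CE ~ {torch:,.0f}")  # ~810,000

# With rho = 0.5, the torch-passing option has the higher certainty
# equivalent despite a far smaller headline payoff: risk aversion matters.
```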