To be fair, you have to have a very high IQ to understand HPMOR. The humour is extremely subtle, and without a solid grasp of theoretical physics most of the jokes will go over a typical reader's head. There's also Harry's rationalistic outlook, which is deftly woven into his characterisation; his personal philosophy draws heavily from 80s sci-fi literature, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of these jokes, to realise that they're not just funny: they say something deep about LIFE. As a consequence, people who dislike Harry Potter and the Methods of Rationality truly ARE idiots; of course they wouldn't appreciate, for instance, the humour in Harry's rationalistic action of snapping his fingers, which itself is a cryptic reference to Ernest Cline's Ready Player One. I'm smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Eliezer Yudkowsky's genius wit unfolds itself on their television screens. What fools... how I pity them :)
And yes, by the way, I DO have a CFAR membership card. And no, you cannot see it. It's for the ladies' eyes only, and even then they have to demonstrate that they're within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel kid
So that's how you draft scissor statements >:)
Yes. Too many cached decisions.
How did you choose the salary range?
My understanding here is that while this is true, it will discourage the 5%, who will just go work for FAANG and donate money to someone worse (or someone overwhelmed with work), simultaneously losing any chance at a meaningful job. The point being: yes, it's good to donate, but if everyone donates (since that is the default rat-race route), no one will do the important work.
No! If everyone donates, there will be enough money to pay direct workers high salaries. I know this runs contrary to the image of the selfless, noble Effective Altruist, but if you want shit to get done you should pay people lots of money to do it.
Ok, sick. I largely agree with you btw (about the hamster wheel being corrosive). In case I came off as aggressive, fyi: I liked the spirit of your post a lot, and I strong-upvoted it.
Yes, selfish agents want to not get turned into paperclips. But they have other goals too. You can prefer that alignment be solved while not wanting to dedicate your mind, body, and soul to waging a jihad over it. Where can Charlie effectively donate, say, 10% of his salary to best mitigate x-risk? Not MIRI (according to MIRI).
The Chinese stated preferences here closely track Western revealed preferences. Americans are more likely to dismiss AI risk post hoc in order to justify making more money, whereas it seems that Chinese people are less likely to sacrifice their epistemic integrity in order to feel like a Good Guy. Hire people, and pay them money!