I don't have an answer for you; you'll have to chart your own path. I will say that I agree with your take on social media: it seems very peripheral-route-focused.
If you're looking to do something practical about AI, consider looking into a career counseling organization like 80,000 Hours. From what I've seen, they fall into some of the traps I mentioned here (they seem to think that trying to change people's minds isn't very valuable unless you actively want to become a political lobbyist), but they're not bad overall for answering these kinds of questions.
Ultimately, though, it's your own life, and you alone have to decide your path through it.
Fair enough; I haven't interacted with CFAR at all. And the "rationalists have failed" framing is admittedly partly bait to keep you reading, partly parroting/interpreting how Yudkowsky appears to see his efforts toward AI safety, and partly me projecting my own AI anxieties.
The Overton window around AI has also been shifting so quickly that this article may already be somewhat outdated. (Although I think the core message is still strong.)
Someone else in the comments pointed out the religious proselytization angle, and yeah, I hadn't thought about that, and apparently neither had David. That line was basically a throwaway joke lampshading the fact that all the organizations discussed in the book are left-leaning; I don't endorse it very strongly.
Thank you. I don't think it's possible to review this book without talking a bit about politics, given that so many of the techniques were forged and refined through political canvassing, but I also don't think that's the main takeaway, and I hope this introduced some good ideas to the community.
As mentioned above, David addresses this in the book. Unfortunately, a fraudulent paper was published due to (IIRC) the actions of a grad student, but the professors involved retracted the original paper, and later research reaffirmed that the approach did work.
Thanks for the summary. Yes, David addresses this in the book. Unfortunately, a fraudulent paper was published due to (IIRC) the actions of a grad student, but the professors involved retracted the original paper, and later research reaffirmed that the approach did work.
I single it out because Yudkowsky singled it out and seems to see it as a major negative consequence for the goals he was trying to achieve with the community.
This is absolutely a fair point that I hadn't considered. All of David's examples in the book lean left-ish, and I was mostly basing that sentence on them. My goal with it was just to lampshade that fact.
The mods seem to have shortened my account name. For the record, it was previously bc4026bd4aaa5b7fe0bdcd47da7a22b453953f990d35286b9d315a619b23667a