Mirzhan_Irkegulov comments on Leaving LessWrong for a more rational life - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This part isn't clear to me. The researcher who goes into generic anti-cancer work, instead of SENS-style anti-aging work, probably has made an epistemic mistake with moderate consequences, because of basic replaceability arguments.
But to say that MIRI's approach to AGI safety is due to a philosophical mistake, and one with significant consequences, seems like it requires much stronger knowledge. Shooting very high instead of high is riskier, but not necessarily wronger.
I think you underestimate how much MIRI agrees with FLI.
SENS is the second largest part of my charity budget, and I recommend it to my friends every year (on the obvious day to do so). My speculations on why EAs don't favor them more highly mostly have to do with the difficulty of measuring progress in medical research vs. fighting illnesses, and possibly also the specter of selfishness.
Yudkowsky obviously supports immortality. Quote from his letter on his brother's death:
If SENS is not sufficiently promoted as a target for charity, I have no idea why that is, and I dispute that it's because of the LW community's philosophical objections, unless somebody can convince me otherwise. BTW, the EA community != the LW community, so maybe lots of Effective Altruists just don't consider immortality the same way they do malaria (cached thoughts etc).
To be clear, that is not an intended implication. I'm aware that Yudkowsky supports SENS; indeed, my memory is fuzzy, but it might have been through exactly the letter you quote that I first heard about SENS.