Kaj_Sotala comments on Leaving LessWrong for a more rational life - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This part isn't clear to me. The researcher who goes into generic anti-cancer work, instead of SENS-style anti-aging work, probably has made an epistemic mistake with moderate consequences, because of basic replaceability arguments.
But to say that MIRI's approach to AGI safety is due to a philosophical mistake, and one with significant consequences, seems like it requires much stronger knowledge. Shooting very high instead of high is riskier, but not necessarily wronger.
I think you underestimate how much MIRI agrees with FLI.
SENS is the second largest part of my charity budget, and I recommend it to my friends every year (on the obvious day to do so). My speculations on why EAs don't favor them more highly mostly have to do with the difficulty of measuring progress in medical research vs. fighting illnesses, and possibly also the specter of selfishness.
Agreed - or, at least, he underestimates how much FLI agrees with MIRI. This is pretty obvious e.g. in the references section of the technical agenda that was attached to FLI's open letter. Out of a total of 95 references, 19 (20%) were produced either directly by MIRI or by people closely associated with them, or rest on MIRI-compatible premises.
I think you and Vaniver both misunderstood my endorsement of FLI. I endorse them not because of their views on AI risk, which are in line with MIRI's and, in my opinion, entirely misguided. The important question is not what you believe, but what you do about it. Despite those views, they are still willing to fund practical, evidence-based research into artificial intelligence, engaging with the existing community rather than needlessly trying to reinvent the field.