Vaniver comments on Leaving LessWrong for a more rational life - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This part isn't clear to me. The researcher who goes into generic anti-cancer work, instead of SENS-style anti-aging work, probably has made an epistemic mistake with moderate consequences, because of basic replaceability arguments.
But to say that MIRI's approach to AGI safety is due to a philosophical mistake, and one with significant consequences, seems like it requires much stronger knowledge. Shooting very high instead of high is riskier, but not necessarily wronger.
I think you underestimate how much MIRI agrees with FLI.
SENS is the second largest part of my charity budget, and I recommend it to my friends every year (on the obvious day to do so). My speculations on why EAs don't favor them more highly mostly have to do with the difficulty of measuring progress in medical research vs. fighting illnesses, and possibly also the specter of selfishness.
Agreed - or, at least, he underestimates how much FLI agrees with MIRI. This is pretty obvious e.g. in the references section of the technical agenda that was attached to FLI's open letter. Out of a total of 95 references:
That's 19 of 95 references (20%) that were produced either directly by MIRI or by people closely associated with it, or that rest on MIRI-compatible premises.
I think you and Vaniver both misunderstood my endorsement of FLI. I endorse them not because of their views on AI risk, which are in line with MIRI's and, in my opinion, entirely misguided. The important question is not what you believe, but what you do about it. Despite those views, they are still willing to fund practical, evidence-based research into artificial intelligence, engaging with the existing community rather than needlessly trying to reinvent the field.
Yudkowsky obviously supports immortality. Quote from his letter on his brother's death:
If SENS is not sufficiently promoted as a target for charity, I have no idea why that is, and I dispute that it's because of the LW community's philosophical objections, unless somebody can convince me otherwise. BTW, the EA community != the LW community, so maybe lots of Effective Altruists just don't consider immortality the same way they do malaria (cached thoughts, etc.).
To be clear, that is not an intended implication. I'm aware that Yudkowsky supports SENS; indeed, my memory is fuzzy, but it may have been through exactly the letter you quote that I first heard about SENS.
Just out of curiosity, what day is that? Both Christmas and April 15th came to mind.
My birthday. It is both when one is supposed to be celebrating aging / one's continued survival, and when one receives extra attention from others.
Oh, that's a great idea. I'm going to start suggesting to people who ask that they donate to one of my favorite charities on my birthday. It beats saying I don't need anything, which is what I currently do.
Consider also doing an explicit birthday fundraiser. I did one on my most recent birthday and raised $500 for charitable causes.