
Vaniver comments on Leaving LessWrong for a more rational life - Less Wrong Discussion

33 [deleted] 21 May 2015 07:24PM


Comments (268)


Comment author: Vaniver 21 May 2015 08:56:43PM 16 points [-]

On the other hand, de Grey and others who are primarily working on the scientific and/or engineering challenges of singularity and transhumanist technologies were far less likely to subject themselves to epistemic mistakes of significant consequence.

This part isn't clear to me. The researcher who goes into generic anti-cancer work, instead of SENS-style anti-aging work, probably has made an epistemic mistake with moderate consequences, because of basic replaceability arguments.

But to say that MIRI's approach to AGI safety is due to a philosophical mistake, and one with significant consequences, seems like it requires much stronger knowledge. Shooting very high instead of high is riskier, but not necessarily wronger.

Thankfully there is an institution that is doing that kind of work: the Future of Life Institute (not MIRI).

I think you underestimate how much MIRI agrees with FLI.

Why they do not get more play in the effective altruism community is beyond me.

SENS is the second largest part of my charity budget, and I recommend it to my friends every year (on the obvious day to do so). My speculations on why EAs don't favor them more highly mostly have to do with the difficulty of measuring progress in medical research vs. fighting illnesses, and possibly also the specter of selfishness.

Comment author: Kaj_Sotala 22 May 2015 11:57:10AM *  15 points [-]

I think you underestimate how much MIRI agrees with FLI.

Agreed - or, at least, he underestimates how much FLI agrees with MIRI. This is pretty obvious e.g. in the references section of the technical agenda that was attached to FLI's open letter. Out of a total of 95 references:

  • Six are MIRI's technical reports that've only been published on their website: Vingean Reflection, Realistic World-Models, Value Learning, Aligning Superintelligence, Reasoning Under Logical Uncertainty, Toward Idealized Decision Theory
  • Five are written by MIRI's staff or Research Associates: Avoiding Unintended AI Behaviors, Ethical Artificial Intelligence, Self-Modeling Agents and Reward Generator Corruption, Program Equilibrium in the Prisoner's Dilemma, Corrigibility
  • Eight are works that tend to agree with MIRI's stances and have been cited in MIRI's work: Superintelligence, The Superintelligent Will, The Singularity: A Philosophical Analysis, Speculations Concerning the First Ultraintelligent Machine, The Nature of Self-Improving AI, Space-Time Embedded Intelligence, FAI: the Physics Challenge, The Coming Technological Singularity

That's 19/95 (20%) references produced either directly by MIRI or people closely associated with them, or that have MIRI-compatible premises.

Comment author: [deleted] 23 May 2015 12:51:23PM *  4 points [-]

I think you and Vaniver both misunderstood my endorsement of FLI. I endorse them not because of their views on AI risk, which are in line with MIRI's and, in my opinion, entirely misguided. The important question is not what you believe, but what you do about it. Despite those views, they are still willing to fund practical, evidence-based research into artificial intelligence, engaging with the existing community rather than needlessly trying to reinvent the field.

Comment author: Mirzhan_Irkegulov 21 May 2015 11:01:28PM *  6 points [-]

Yudkowsky obviously supports immortality. A quote from his letter on his brother's death:

If you object to the Machine Intelligence Research Institute then consider Dr. Aubrey de Grey's Methuselah Foundation, which hopes to defeat aging through biomedical engineering.

If SENS is not sufficiently promoted as a target for charity, I have no idea why that is, and I dispute that it's because of the LW community's philosophical objections, unless somebody can convince me otherwise. BTW, the EA community != the LW community, so maybe lots of effective altruists just don't consider immortality the same way they do malaria (cached thoughts, etc.).

Comment author: [deleted] 21 May 2015 11:31:40PM *  2 points [-]

If SENS is not sufficiently promoted as a target for charity, I have no idea why that is, and I dispute that it's because of the LW community's philosophical objections, unless somebody can convince me otherwise.

To be clear, this is not an implication I intended. I'm aware that Yudkowsky supports SENS; indeed, my memory is fuzzy, but it might have been through exactly the letter you quote that I first heard about SENS.

Comment author: [deleted] 21 May 2015 11:25:58PM 2 points [-]

I recommend it to my friends every year (on the obvious day to do so)

Just out of curiosity, what day is that? Both Christmas and April 15th came to mind.

Comment author: Vaniver 22 May 2015 02:12:38AM 14 points [-]

My birthday. It is both when one is supposed to be celebrating aging / one's continued survival, and when one receives extra attention from others.

Comment author: [deleted] 22 May 2015 02:24:49AM 8 points [-]

Oh, that's a great idea. I'm going to start suggesting that people who ask donate to one of my favorite charities on my birthday. It beats saying I don't need anything, which is what I currently do.

Comment author: Kaj_Sotala 22 May 2015 07:22:05AM 10 points [-]

Consider also doing an explicit birthday fundraiser. I did one on my most recent birthday and raised $500 for charitable causes.