Eliezer_Yudkowsky comments on Tiling Agents for Self-Modifying AI (OPFAI #2) - Less Wrong

55 Post author: Eliezer_Yudkowsky 06 June 2013 08:24PM

Comment author: Eliezer_Yudkowsky 28 June 2013 05:27:22AM 6 points [-]

I think I'd be happy with a summary of persistent disagreement where Jonah or Scott said, "I don't think MIRI's efforts are valuable because we think that AI in general has made no progress on AGI for the last 60 years / I don't think MIRI's efforts are priorities because we don't think we'll get AGI for another 2-3 centuries, but aside from that MIRI isn't doing anything wrong in particular, and it would be an admittedly different story if I thought that AI in general was making progress on AGI / AGI was due in the next 50 years".

Comment author: JonahSinick 28 June 2013 05:49:47AM 12 points [-]

I think that your paraphrasing

I don't think MIRI's efforts are valuable because I think that AI in general has made no progress on AGI for the last 60 years, but aside from that MIRI isn't doing anything wrong in particular, and it would be an admittedly different story if I thought that AI in general was making progress on AGI.

is pretty close to my position.

I would qualify it by saying:

  1. I'd replace "no progress" with "not enough progress for there to be a known research program with a reasonable chance of success."

  2. I have high confidence that some of the recent advances in narrow AI will contribute (whether directly or indirectly) to the eventual creation of AGI (contingent on this event occurring), just not necessarily in a foreseeable way.

  3. If I discover that there's been significantly more progress on AGI than I had thought, then I'll have to reevaluate my position entirely. I could imagine updating in the direction of MIRI's FAI work being very high value, or I could imagine continuing to believe that MIRI's FAI research isn't a priority, for reasons different from my current ones.

Comment author: Eliezer_Yudkowsky 28 June 2013 10:35:09PM 9 points [-]

Agreed-on summaries of persistent disagreement aren't ideal, but they're more conversational progress than usually happens, so... thanks!