Kingreaper comments on The Magnitude of His Own Folly - Less Wrong

Post author: Eliezer_Yudkowsky 30 September 2008 11:31AM

Comment author: Shane_Legg 30 September 2008 03:04:01PM 1 point

Eli, sometimes I find it hard to understand what your position actually is. As I understand it, it is:

1) Work out an extremely robust solution to the Friendly AI problem

Only once this has been done do we move on to:

2) Build a powerful AGI

Practically, I think this strategy is risky. First, if you try to solve Friendliness without having a concrete AGI design, you will probably miss some important things. Second, I think that solving Friendliness will take longer than building the first powerful AGI. Thus, if you finish 1 before getting into 2, I think it's unlikely that you'll be first.

Comment author: Kingreaper 03 October 2010 04:50:51PM 3 points

But if, when Eliezer finishes 1), someone else is finishing 2), the two may be combinable to some extent.

If someone (let's say Eliezer, having been convinced by the above post to change tack) finishes 2) and no one has done 1), then a non-Friendly AGI becomes far more likely.

I'm not convinced by the singularity concept, but if it's true, Friendliness is orders of magnitude more important than just making an AGI. The difference between Friendly AI and no AI is big, but the difference between unfriendly AI and Friendly AI dwarfs it.

And if it's false? Well, if it's false, making an AGI is orders of magnitude less important than it would otherwise be.

Comment author: Will_Sawin 12 January 2011 04:52:05AM 4 points

This cooperation thing sounds hugely important. What we want is for the AGI community to move in a direction where the best research is FAI-compatible. How can this be accomplished?