JoshuaFox comments on Evaluating the feasibility of SI's plan - Less Wrong

25 Post author: JoshuaFox 10 January 2013 08:17AM


Comment author: JoshuaFox 10 January 2013 05:13:02PM 0 points

slow-thinking unFriendly AGIs ... not any help in developing a FAI

One suggestion is that slow-thinking unFriendly near-human AIs may indeed help develop an FAI:

(1) As a test bed, as a way of learning from examples.

(2) As research partners. We don't want them to be too smart, of course, but dull nascent AGIs, provided they don't undergo an intelligence explosion, might help figure things out.

(To clarify, "unFriendly" here means "without guaranteed Friendliness," which is close to, but not identical with, "guaranteed to kill us.")

Ben Goertzel and Joel Pitt (2012) suggest (1) for nascent AGIs; Carl Shulman's recent article suggests (2) for infrahuman WBEs.

in the long run

That's the question: How long a run do we have?