gjm comments on Singularity Institute is now Machine Intelligence Research Institute - Less Wrong Discussion

32 Post author: Kaj_Sotala 31 January 2013 08:25AM

Comment author: gjm 03 February 2013 05:47:13PM 1 point

Surely what MIRI would ideally like to do is to find a way of making intelligence not "emergent", so that it's easier to make something intelligent that behaves predictably enough to be classified as Friendly.

Comment author: shminux 03 February 2013 07:57:40PM 0 points

find a way of making intelligence not "emergent"

I don't believe that MIRI has been consciously paying attention to thwarting undesirable emergence, given that EY refuses to acknowledge emergence as a real phenomenon.

Comment author: gjm 03 February 2013 09:43:56PM 0 points

I fear we're at cross purposes. I meant not "thwart emergent intelligence" but "find ways of making intelligence that don't rely on it emerging mysteriously from incomprehensible complications".

Comment author: shminux 03 February 2013 10:22:27PM 1 point

Sure, you cannot rely on spontaneous emergence for anything predictable, as neural-network attempts at AGI demonstrate. My point was that if you ignore the chance of something emerging, that something will emerge at the most inopportune moment. I see your original point, though. I'm not sure it can succeed. My guess is that the best case is some kind of "controlled emergence", where you at least set the parameter space of what might happen.