
drethelin comments on Singularity Institute is now Machine Intelligence Research Institute - Less Wrong Discussion

32 Post author: Kaj_Sotala 31 January 2013 08:25AM



Comment author: drethelin 31 January 2013 09:43:35PM 2 points

"Emergence" is a subset of "surprise". It's not meaningless, but you can't use it to usefully predict what you want to achieve with a system, because it's equivalent to saying "If we put all these things together, maybe they'll surprise us in an awesome way!"

Comment author: timtyler 01 February 2013 12:10:45AM 1 point

If something is an emergent property, you can bet on it not being the sum of its parts. That has some use.

Comment author: loup-vaillant 01 February 2013 11:21:03AM 0 points

Aiming the tiny Friendly dot in AI-space is not one of them, though.

Comment author: shminux 31 January 2013 10:07:44PM 1 point

Sort of. It is not surprising that incremental quantitative changes result in a qualitative change, but the exact nature of what emerges can indeed be quite a surprise. It is nevertheless useful to keep the general pattern in mind so as not to be blindsided by the fact of emergence in each particular case ("But... but... they are all nice people, I didn't expect them to turn into a mindless murderous mob!"), and to be ready to take action when the emergent entity hits the fan.

Comment author: Baughn 31 January 2013 10:28:10PM 1 point

Or in simpler terms, AI is a crapshoot.

Comment author: drethelin 01 February 2013 01:32:15AM 0 points

Agreed. As with surprises, you can try to be robust to them, or agile enough to adapt when they arrive.