
Baughn comments on Singularity Institute is now Machine Intelligence Research Institute - Less Wrong Discussion

32 Post author: Kaj_Sotala 31 January 2013 08:25AM




Comment author: Baughn 31 January 2013 03:53:46PM 1 point [-]

There's a reason he doesn't like it...

I'm not entirely sure what your sentence means. Could you rewrite it to not use "emergence" (or define "emergence")?

Comment author: shminux 31 January 2013 04:40:24PM *  9 points [-]

The reason he does not like the term is that, as pointed out before, "emergence" is not an explanation of anything. It is, however, an observational phenomenon: when you get a lot of simple things together, they combine in ways one could not foresee, and the resulting entities behave by rules not constructible from (but reducible to) those of the simple constituents. When you combine a lot of simple molecules, you get a solid, a liquid or a gas with properties you generally cannot infer without observing them first. When you get a group of people together, they start interacting in a priori unpredictable ways as they form a group. Once you observe the group behavior, you can often reduce it to that of its constituents, but a useful description is generally not in terms of the constituents but in terms of the collective. For example, in thermodynamics people use the gas laws and other macroscopic laws instead of Newton's laws.
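(A standard toy illustration of this, not from the thread: Conway's Game of Life. Each cell follows a trivial local rule, yet collective patterns like the glider, a shape that translates itself across the grid, appear at a level the per-cell rule never mentions. A minimal sketch:)

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation. `live` is a set of (x, y) cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The glider: a 5-cell pattern that, after 4 generations, reappears
# shifted diagonally by (1, 1) -- "movement" emerges although the
# local rule above says nothing about motion.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True
```

The rule is fully known and deterministic, yet gliders (and the rest of the Life "zoo") were discovered by observation, not derived from the rule in advance, which is the sense of "reducible to, but not constructible from" used above.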

I am guessing that one reason the (friendly) machine intelligence problem is so hard is that intelligence is an emergent property: once you understand it, you can reduce it to interactions between neurons, but you cannot infer it from those interactions. What's more, it sits several layers above them, given that intelligence evolved long after simpler neural processes were established.

Thus what MIRI is doing is studying the laws of an emergent structure (AI) without being able to observe the structure first, since it does not exist yet. This is like trying to deduce the behavior of a beehive by studying single cells. Even if you come up with some new "emergent" laws, the result may well end up being more like a tree than a hive.

Comment author: drethelin 31 January 2013 09:43:35PM 2 points [-]

"Emergence" is a subset of "surprise". It's not meaningless, but you can't use it to usefully predict the outcomes you want to achieve, because it amounts to saying "if we put all these things together, maybe they'll surprise us in an awesome way!"

Comment author: timtyler 01 February 2013 12:10:45AM 1 point [-]

If something is an emergent property, you can bet on it not being the sum of its parts. That has some use.

Comment author: loup-vaillant 01 February 2013 11:21:03AM *  0 points [-]

Aiming the tiny Friendly dot in AI-space is not one of them, though.

Comment author: shminux 31 January 2013 10:07:44PM *  1 point [-]

Sort of. It is not surprising that incremental quantitative changes result in a qualitative change, but the exact nature of what emerges can indeed be quite a surprise. It is nevertheless useful to keep the general pattern in mind, so as not to be blindsided by the fact of emergence in each particular case ("But... but... they are all nice people, I didn't expect them to turn into a mindless murderous mob!"), and to be ready to take action when the emergent entity hits the fan.

Comment author: Baughn 31 January 2013 10:28:10PM *  1 point [-]

Or in simpler terms, AI is a crapshoot.

Comment author: drethelin 01 February 2013 01:32:15AM 0 points [-]

Agreed. As with surprises, you can try to be robust to them, or agile enough to adapt.

Comment author: gjm 03 February 2013 05:47:13PM 1 point [-]

Surely what MIRI would ideally like to do is to find a way of making intelligence not "emergent", so that it's easier to make something intelligent that behaves predictably enough to be classified as Friendly.

Comment author: shminux 03 February 2013 07:57:40PM 0 points [-]

find a way of making intelligence not "emergent"

I don't believe that MIRI has been consciously paying attention to thwarting undesirable emergence, given that EY refuses to acknowledge it as a real phenomenon.

Comment author: gjm 03 February 2013 09:43:56PM 0 points [-]

I fear we're at cross purposes. I meant not "thwart emergent intelligence" but "find ways of making intelligence that don't rely on it emerging mysteriously from incomprehensible complications".

Comment author: shminux 03 February 2013 10:22:27PM 1 point [-]

Sure, you cannot rely on spontaneous emergence for anything predictable, as neural network attempts at AGI demonstrate. My point was that if you ignore the chance of something emerging, that something will emerge at the most inopportune moment. I see your original point, though. Not sure it can succeed. My guess is that the best case is some kind of "controlled emergence", where you at least constrain the parameter space of what might happen.