ChristianKl comments on The Reality of Emergence - Less Wrong Discussion

8 Post author: DragonGod 19 August 2017 09:58PM

Comment author: Viliam 22 August 2017 09:24:59PM 2 points

Maybe this is just me, but it seems to me like there is a "motte and bailey" game being played with "emergence".

The "motte" is the definition provided here by the defenders of "emergence". An emergent property is any property exhibited by a system composed of pieces, where no individual piece has that property alone. Taking this literally, even "distance between two oranges" is an emergent property of those two oranges. I just somehow do not remember anyone using that word in this sense.

The "bailey" of "emergence" is that it is a mysterious process, which will somehow inevitably happen if you put a lot of pieces together and let them interact randomly. It is somehow important for those pieces to not be arranged in any simple/regular way that would allow us to fully understand their interaction, otherwise the expected effect will not happen. But as long as you close your eyes and arrange those pieces randomly, it is simply a question of having enough pieces in the system for the property to emerge.

For example, the "motte" of "consciousness is an emergent property of neurons" is saying that one neuron is not conscious, but there are some systems of neurons (i.e. brains) which are conscious.

The "bailey" of "consciousness is an emergent property of neurons" is that if you simulate a sufficiently large number of randomly connected neurons on your computer, the system is fated to evolve consciousness. If consciousness does not appear, it must be because there are not enough neurons, or because the simulation is not fast enough.

In other words, if we consider the space of all possible systems composed of 10^11 neurons, the "motte" version merely says that at least one such system is conscious, while the "bailey" version would predict that actually most of them are conscious, because when you have sufficient complexity, the emergent behavior will appear.

The relevance for LW is that for a believer in "emergence", the problem of creating artificial intelligence (although not necessarily a friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.

Comment author: ChristianKl 23 August 2017 05:06:33AM 1 point

> The relevance for LW is that for a believer in "emergence", the problem of creating artificial intelligence (although not necessarily friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.

I don't think that in practice this has much to do with whether or not someone uses the word "emergence". As far as I understand, EY thinks that if you simulate enough neurons sufficiently well, you get something that's conscious.

Comment author: Luke_A_Somers 13 September 2017 01:34:32AM 0 points

I would really want a citation for that claim. It doesn't sound right.

Comment author: ChristianKl 13 September 2017 01:56:34PM 0 points

Can you be more specific about what you are skeptical about?

Comment author: Luke_A_Somers 15 September 2017 01:59:56AM 1 point

> I understand EY thinks that if you simulate enough neurons sufficiently well you get something that's conscious.

Without specifying the arrangements of those neurons? Of course it should if you copy the arrangement of neurons out of a real person, say, but that doesn't sound like what you meant.