Less Wrong is a community blog devoted to refining the art of human rationality.

The Reality of Emergence

8 Post author: DragonGod 19 August 2017 09:58PM

Comments (46)

Comment author: cousin_it 20 August 2017 06:46:26AM *  4 points [-]

Yeah, your take seems right, and agrees with Wikipedia and SEP.

Though I'm not sure it's worth your time to correct Eliezer's mistakes so painstakingly. He made lots of them. The biggest were probably betting against academia and betting against neural networks. His attractiveness as a writer comes in part from overconfidence, but in the real world a couple of big mistakes from overconfidence can wipe out most of your wins from it.

Comment author: gworley 22 August 2017 01:18:00AM 2 points [-]

I disagree. Performing this sort of work is part of engaging with the ideas. It's maybe not that interesting to you now, sure, but I've written similar things in the past in the process of building my understanding of ideas.

Comment author: DragonGod 20 August 2017 08:10:43AM 2 points [-]

Which bets did he make against academia?
 
When I eventually become an AI researcher, I do plan to try an approach apart from neural networks (I have an idea I think might work, and enough people are already trying their hands at neural nets).
I agree: I do find Eliezer's overconfidence endearing.

Comment author: cousin_it 20 August 2017 08:25:48AM 3 points [-]

Which bets did he make against academia?

How about refusing to go to college, or refusing to get TDT peer reviewed, or this.

Comment author: Dr_Manhattan 24 August 2017 12:34:06PM *  1 point [-]

I think he solved the college problem by other means (recruiting people who are credible to bridge to the academic community). Not sure this was a mistake, considering costs like "not writing HPMOR". Yes, I agree some of the attitudes have produced unnecessary social friction (and maybe a lot of it), but it's harder to claim the individual decision was obviously bad.

Comment author: TheAncientGeek 24 August 2017 12:53:22PM 2 points [-]

That assumes he had nothing to learn from college, and the only function it could have provided is signalling and social credibility.

Comment author: Dr_Manhattan 24 August 2017 05:56:27PM 0 points [-]

Not necessarily, just that the costs of going to college were outweighed by other things (not saying that was EY's reasoning).

Comment author: cousin_it 24 August 2017 01:27:49PM 1 point [-]

MIRI publishes very few peer reviewed papers per researcher per year, so the mistake of betting against academia is still hurting it.

Comment author: Dr_Manhattan 24 August 2017 06:00:22PM 0 points [-]

Granted. Do you think the lack of peer review is hurting via the papers not getting enough acceptance in the community, which would add to the AI safety cause? Or is the quality of the papers themselves lacking due to not aiming at the peer-review standard?

Comment author: cousin_it 25 August 2017 05:35:57AM *  3 points [-]

I think everything would've been better (more work, more discussion, more credibility) if the whole thing had been in academia from the beginning.

Another unfortunate factor was Eliezer's crazy emphasis on secrecy in the early days.

Comment author: Wei_Dai 27 August 2017 10:46:37PM 5 points [-]

Academia can be terribly inefficient though. There were hundreds of academic papers on electronic cash before Bitcoin, and none of them went anywhere in terms of deployment and commercial success, nor did any of them propose something architecturally close to Bitcoin. Similarly, there were dozens of academic papers on the Absent-Minded Driver problem, but all that work didn't get very far. Since lots of very smart people go into academia, it seems that there's something wrong with the incentives in academia that prevents people from making the kind of progress that we've seen on these topics outside of academia. If Eliezer had gone into academia, he might have fallen prey to the same kinds of problems.

Comment author: cousin_it 28 August 2017 11:29:23AM *  3 points [-]

I think your argument would work better when flipped on its head. There's something wrong with incentives outside academia that prevents people from coming up with the Absent Minded Driver problem, and many other things besides! Most of the prerequisites for both Bitcoin and UDT come from academia.

Comment author: Wei_Dai 28 August 2017 01:07:01PM 5 points [-]

Very few people outside academia have the resources to do research, so if you look at the total sum of results, academia is obviously going to look better. But if you compare results per unit of work or cost, it no longer looks so great. Consider that the few people who worked on digital cash outside of academia quickly converged on ideas resembling Bitcoin, while none of the many academics did, even after spending many more years on the problem. Similarly with UDT. It seems to me that if Eliezer had gone into academia, there's no reason to think he would have been much more productive than the average academic decision theorist. Nor would he likely have created a forum for amateurs to discuss decision theory (since no actual academic decision theorist has), in which case I wouldn't have found an outlet to spread my own decision theory ideas.

And to clarify, in case this is why you're disagreeing with me, obviously academia is very good at certain types of problems, for example those involving incremental technical progress (like making slightly stronger ciphers each year until they're really efficient and practically unbreakable). So I'm only talking about a subset of fields that apparently includes digital money and decision theory.

Comment author: ChristianKl 20 August 2017 06:09:40PM 0 points [-]

MIRI did try to hire an academic to get TDT peer reviewed. It didn't work out but I don't think "refuse" is a good description.

Comment author: turchin 20 August 2017 07:17:06PM 0 points [-]

You don't need to hire an academic to get peer review. Just send it to a relevant journal, and because of the blind review system, they will review it. I did it myself and got many valuable comments on my articles. One was finally accepted and one was rejected.

Comment author: ChristianKl 20 August 2017 08:10:25PM 2 points [-]

Writing academic papers needs a certain skillset. That's not a skillset that EY developed, so it makes sense to give that task to someone who has it.

Comment author: turchin 20 August 2017 09:10:03PM 0 points [-]

I think it is a trainable skillset, and EY also has some papers, so I think he is able to master it. But he decided to write texts which are more attention-grabbing.

Comment author: ChristianKl 21 August 2017 05:31:44AM 0 points [-]

As far as I understand, the papers with EY in the author list are collaborations between EY and other people, where a lot of the ideas come from EY but the actual paper writing was done by someone else.

Comment author: IlyaShpitser 24 August 2017 04:16:29PM 1 point [-]

"When I eventually become an AI researcher."

Are you planning to go to graduate school?

Comment author: DragonGod 24 August 2017 09:48:54PM 0 points [-]

I don't know. I have a plan for a startup; if it succeeds, I probably won't go. If the plan fails, I'll probably go to grad school.

I plan to start full AI research at 25 at the earliest, and 30 at the latest. (I'm 19 now.)

Comment author: arundelo 25 August 2017 01:39:29PM 0 points [-]

 

Comment formatting note -- Less Wrong's subset of Markdown does not let you use inline HTML.

Comment author: Viliam 20 August 2017 09:58:03PM *  3 points [-]

Emergence still feels like a "nonapple". You are right that mass is not an emergent property of quarks, but still, pretty much everything else in this universe is. If I understand it correctly, even "the distance between two specific quarks" is already an emergent property of quarks, because neither of those two quarks contains their distance in itself. So if I say e.g. "consciousness is an emergent property of quarks", I have pretty much said "consciousness is not mass", which is technically true, but still mostly useless. Most of us already expected that.

Similarly, "consciousness is an emergent property of neurons" is only a surprise to those people who expected individual neurons to be conscious. I am sure such people exist. But for the rest of us, what new information does it convey?

The trick is that even if you don't believe that individual neurons are conscious, hearing "consciousness is an emergent property of neurons" still feels like new information. Except there is nothing more there, only the aura of having an explanation.
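The relational-property point above (the distance is not contained in either object) can be made concrete with a toy sketch. This is plain Python; the orange coordinates are invented for illustration:

```python
import math

# Two "objects", each fully described by its own state.
orange_a = {"x": 0.0, "y": 0.0}
orange_b = {"x": 3.0, "y": 4.0}

# Neither dict contains a "distance" key: distance is a property
# of the *pair*, defined only once both objects are in view.
def distance(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

print(distance(orange_a, orange_b))  # 5.0
```

In this literal sense almost every property we care about is "emergent", which is exactly why the label carries so little information on its own.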

Comment author: ChristianKl 21 August 2017 07:13:41AM 4 points [-]

The ability to express basic nonsurprising facts is useful.

When discussing whether or not to allow abortion of a fetus, it matters whether you believe that real human consciousness needs a certain number of neurons to emerge.

Plenty of people believe in some form of soul that's a unit that creates consciousness. Saying that it's emergent means that you disagree.

According to Scott's latest post about EA Global, there are people at the Foundational Research Institute who do ask themselves whether particles can be conscious.

There are plenty of cases where people try to find reductionist ways of thinking about a domain. Calories in, calories out is a common paradigm that drives a lot of thinking about diet. If you instead have a paradigm centered on a cybernetic system with an emergent set point managed by a complex net of neurons, that paradigm gives a different perspective on what to do about weight loss.

Comment author: Viliam 22 August 2017 09:24:59PM *  2 points [-]

Maybe this is just me, but it seems to me like there is a "motte and bailey" game being played with "emergence".

The "motte" is the definition provided here by the defenders of "emergence". An emergent property is any property exhibited by a system composed of pieces, where no individual piece has that property alone. Taking this literally, even "distance between two oranges" is an emergent property of those two oranges. I just somehow do not remember anyone using that word in this sense.

The "bailey" of "emergence" is that it is a mysterious process, which will somehow inevitably happen if you put a lot of pieces together and let them interact randomly. It is somehow important for those pieces to not be arranged in any simple/regular way that would allow us to fully understand their interaction, otherwise the expected effect will not happen. But as long as you close your eyes and arrange those pieces randomly, it is simply a question of having enough pieces in the system for the property to emerge.

For example, the "motte" of "consciousness is an emergent property of neurons" is saying that one neuron is not conscious, but there are some systems of neurons (i.e. brains) which are conscious.

The "bailey" of "consciousness is an emergent property of neurons" is that if you simulate a sufficiently large number of randomly connected neurons on your computer, the system is fated to evolve consciousness. If the consciousness does not appear, it must be because there are not enough neurons, or because the simulation is not fast enough.

In other words, if we consider the space of all possible systems composed of 10^11 neurons, the "motte" version merely says that at least one such system is conscious, while the "bailey" version would predict that actually most of them are conscious, because when you have sufficient complexity, the emergent behavior will appear.

The relevance for LW is that for a believer in "emergence", the problem of creating artificial intelligence (although not necessarily friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.

Comment author: TheAncientGeek 23 August 2017 11:28:14AM *  1 point [-]

There are positions between those. Medium-strength emergentism would have it that some systems are conscious, that consciousness is not a property of their parts, and that it is not reductively understandable in terms of the parts and their interactions, but that it is by no means inevitable.

Reductionism has its problems too. Many writings on LW confuse the claim that things are understandable in terms of their parts with the claim that they are merely made of parts.

Eg:-

(1) The explanatory power of a model is a function of its ingredients. (2) Reductionism includes all the ingredients that actually exist in the real world. Therefore (3) Emergentists must be treating the “emergent properties” as extra ingredients, thereby confusing the “map” with the “territory”. So Reductionism is defined by EY and others as not treating emergent properties as extra ingredients (in effect).

Comment author: entirelyuseless 23 August 2017 02:01:10PM 0 points [-]

I am questioning the implicit premise that some kinds of emergent things are "reductively understandable in terms of the parts and their interactions." I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments about reductionism. I would illustrate this with Viliam's example of the distance between two oranges. I do not see how the oranges explain the fact that they have a distance between them, at all. Consciousness may seem even less intelligible, but this is a difference of degree, not kind.

Comment author: TheAncientGeek 23 August 2017 02:19:45PM 0 points [-]

I am questioning the implicit premise that some kinds of emergent things are "reductively understandable in terms of the parts and their interactions."

It's not so much some emergent things, for a uniform definition of "emergent", as all things that come under a variant definition of "emergent".

I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments about reductionism

Not really, they are about what we would now call mereology. But as I noted, the two tend to get conflated here.

I would illustrate this with Viliam's example of the distance between two oranges. I do not see how the oranges explain the fact that they have a distance between them, at all.

Reductionism is about preserving and operating within a physicalist world view, and physicalism is comfortable with spatial relations and causal interactions as basic elements of reality. Careful reductionists say "reducible to its parts, their structure, and their interactions".

Comment author: entirelyuseless 23 August 2017 02:43:36PM 0 points [-]

"physicalism is comfortable with spatial relations and causal interactions as basic elements of reality"

I am suggesting this is a psychological comfort, and there is actually no more reason to be comfortable with those things, than with consciousness or any other properties that combinations have that parts do not have.

Comment author: ChristianKl 23 August 2017 05:06:33AM 1 point [-]

The relevance for LW is that for a believer in "emergence", the problem of creating artificial intelligence (although not necessarily friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.

I don't think in practice that has much to do with whether or not someone uses the word emergence. As far as I understand, EY thinks that if you simulate enough neurons sufficiently well you get something that's conscious.

Comment author: Luke_A_Somers 13 September 2017 01:34:32AM 0 points [-]

I would really want a cite on that claim. It doesn't sound right.

Comment author: ChristianKl 13 September 2017 01:56:34PM 0 points [-]

Can you be more specific about what you are skeptical about?

Comment author: Luke_A_Somers 15 September 2017 01:59:56AM 1 point [-]

I understand EY thinks that if you simulate enough neurons sufficiently well you get something that's conscious.

Without specifying the arrangements of those neurons? Of course it should if you copy the arrangement of neurons out of a real person, say, but that doesn't sound like what you meant.

Comment author: entirelyuseless 23 August 2017 02:19:35AM 1 point [-]

I don't understand what is supposed to be so bad about "mysterious" things. Take the distance between two oranges: if you look at a single orange, it doesn't tell you anything about how far it should be from another. And special relativity implies that there is no difference between a situation where one orange is moving and the other isn't, and the situation where the movements are reversed. So the distance between two oranges can be changing, even though apparently neither one is changing more than the other, or at all, when you just sit and look at one of the oranges. So the distance between two oranges seems pretty mysterious to me.

Also, I'm not sure anyone actually says that emergent things "inevitably happen" due to a large quantity and randomness.

Comment author: Luke_A_Somers 13 September 2017 01:34:07AM 0 points [-]

Like many cases of Motte-and-Bailey, the Motte is mainly held by people who dislike the Bailey. I suspect that an average scientist in a relevant field somewhere at or below neurophysics in the generality hierarchy (e.g. chemist, physicist, but not sociologist), would consider that bailey to be… non-likely at best, while holding the motte very firmly.

Comment author: Voltairina 22 August 2017 06:50:20PM *  0 points [-]

I think you're right. I also think saying 'x is emergent' may sound more magical than it is, depending on your understanding of it. It doesn't mean that the higher-scale phenomenon isn't /made up of/ lower-level phenomena, but that it isn't (like a homunculus) itself present at any smaller scale.

Take a robot hopping-kangaroo toy: it needs both a body and legs. The hopping behavior isn't contained in the body - that just rotates a joint. The hopping behavior isn't contained in the legs - those just have a joint that can connect to the body's joint. It's only when the two parts are plugged into each other that the 'hopping' behavior 'emerges' from the torso-legs system. It's not coming from any essential 'hoppiness' in the legs or the torso.

I think it can seem a bit magical because it sounds like the behavior just 'appears' at a certain point, but it's no more magical than the way a picture of a tiger 'appears' from a bunch of pixels. The difference is that we're talking about names for systems of functions (hopping is made up of the leg and torso behaviors and their interaction with the ground) rather than names for systems of objects (the tiger picture is made up of lines and corners, which are made up of pixels). In some sense 'tigers' and 'hopping' don't really exist - just pixels (or atoms or whatever) and particle interactions. But we have names for systems of objects, and systems of functions, because those names are useful.
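The kangaroo-toy example can be put in code. This is a toy sketch; all class and method names are invented for illustration:

```python
# Neither part can hop on its own, but the composed system exhibits
# "hopping" - the behavior lives at the level of the composition.

class Body:
    """Only knows how to rotate its hip joint."""
    def rotate_joint(self, degrees):
        return f"joint rotated {degrees} degrees"

class Legs:
    """Only converts a joint rotation into thrust against the ground."""
    def push_off(self, joint_action):
        return "thrust" if "rotated" in joint_action else "nothing"

class Toy:
    """The composed system: 'hopping' emerges from the interaction."""
    def __init__(self, body, legs):
        self.body, self.legs = body, legs
    def hop(self):
        return self.legs.push_off(self.body.rotate_joint(90)) == "thrust"

print(Toy(Body(), Legs()).hop())  # True
```

Note that neither `Body` nor `Legs` has a `hop` method at all; the name 'hopping' only applies to the system.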

Comment author: tukabel 22 August 2017 06:55:05PM 0 points [-]

Well, what about the size and mass of particles? I would NOT DARE dive into this... certainly not in front of any string theorist (OK, ANY physics theorist, and not only). Even space can easily turn out to be "emergent" ;-).