
Comment author: ChristianKl 13 September 2017 01:56:34PM 0 points

Can you be more specific about what you are skeptical about?

Comment author: Luke_A_Somers 15 September 2017 01:59:56AM 1 point

I understand EY thinks that if you simulate enough neurons sufficiently well you get something that's conscious.

Without specifying the arrangements of those neurons? Of course it should if you copy the arrangement of neurons out of a real person, say, but that doesn't sound like what you meant.

Comment author: ChristianKl 23 August 2017 05:06:33AM 1 point

The relevance for LW is that for a believer in "emergence", the problem of creating artificial intelligence (although not necessarily a friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.

I don't think in practice that has much to do with whether or not someone uses the word "emergence". As far as I understand, EY thinks that if you simulate enough neurons sufficiently well you get something that's conscious.

Comment author: Luke_A_Somers 13 September 2017 01:34:32AM 0 points

I would really want a cite on that claim. It doesn't sound right.

Comment author: Viliam 22 August 2017 09:24:59PM *  2 points

Maybe this is just me, but it seems to me like there is a "motte and bailey" game being played with "emergence".

The "motte" is the definition provided here by the defenders of "emergence". An emergent property is any property exhibited by a system composed of pieces, where no individual piece has that property alone. Taking this literally, even "distance between two oranges" is an emergent property of those two oranges. I just somehow do not remember anyone using that word in this sense.

The "bailey" of "emergence" is that it is a mysterious process, which will somehow inevitably happen if you put a lot of pieces together and let them interact randomly. It is somehow important for those pieces to not be arranged in any simple/regular way that would allow us to fully understand their interaction, otherwise the expected effect will not happen. But as long as you close your eyes and arrange those pieces randomly, it is simply a question of having enough pieces in the system for the property to emerge.

For example, the "motte" of "consciousness is an emergent property of neurons" is saying that one neuron is not conscious, but there are some systems of neurons (i.e. brains) which are conscious.

The "bailey" of "consciousness is an emergent property of neurons" is that if you simulate a sufficiently large number of randomly connected neurons on your computer, the system is fated to evolve consciousness. If the consciousness does not appear, it must be because there are not enough neurons, or because the simulation is not fast enough.

In other words, if we consider the space of all possible systems composed of 10^11 neurons, the "motte" version merely says that at least one such system is conscious, while the "bailey" version would predict that actually most of them are conscious, because when you have sufficient complexity, the emergent behavior will appear.

The relevance for LW is that for a believer in "emergence", the problem of creating artificial intelligence (although not necessarily a friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.

Comment author: Luke_A_Somers 13 September 2017 01:34:07AM 0 points

Like many cases of Motte-and-Bailey, the Motte is mainly held by people who dislike the Bailey. I suspect that an average scientist in a relevant field somewhere at or below neurophysics in the generality hierarchy (e.g. a chemist or physicist, but not a sociologist) would consider that bailey to be unlikely at best, while holding the motte very firmly.

Comment author: Luke_A_Somers 02 September 2017 04:14:52PM *  0 points

This looks promising.

Also, the link to The Reality of Emergence is broken.

Comment author: Luke_A_Somers 31 August 2017 03:58:18PM 0 points

1) You could define the shape criteria required to open lock L, and then the object reference would fall away. And, indeed, this is how keys usually work. Suppose I have a key with tumbler heights 0, 8, 7, 1, 4, 9, 2, 4. This is an intrinsic property of the key. That is what it is.

A lock can have the matching set of tumbler heights, and there is then a relationship between the two. I wouldn't even consider it so much an extrinsic property of the key itself as a relationship between the intrinsic properties of the key and the lock.
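
A minimal sketch of this point in code (my own toy model of the comment's framing, not real locksmithing; the Key/Lock classes and the opens() helper are hypothetical):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Key:
    tumbler_heights: Tuple[int, ...]  # intrinsic property: the key's own shape

@dataclass(frozen=True)
class Lock:
    tumbler_heights: Tuple[int, ...]  # intrinsic property: the lock's own shape

def opens(key: Key, lock: Lock) -> bool:
    # "Opens" is stored on neither object; it is a relation between
    # the two intrinsic properties.
    return key.tumbler_heights == lock.tumbler_heights

key = Key((0, 8, 7, 1, 4, 9, 2, 4))
print(opens(key, Lock((0, 8, 7, 1, 4, 9, 2, 4))))  # True
print(opens(key, Lock((5, 5, 5, 5, 5, 5, 5, 5))))  # False
```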

2) Metaethics is a function from cultural situations and moral intuitions into a space of ethical systems. This function is not onto (i.e. not every coherent ethical system is the result of metaethical analysis on some cultural situation and moral intuitions), and it is not at all guaranteed to yield the same ethical system in use in that cultural situation. This is a very significant difference from moral relativism, not a mere slight increase in temperature.
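
To make the "not onto" claim explicit, here is a minimal formalization; the symbols S, I, and E are my own labels for the sets of cultural situations, moral intuitions, and ethical systems:

```latex
% Metaethics as a map from (cultural situation, moral intuitions) to an ethical system:
\[ f : S \times I \to E \]
% "Not onto": some coherent ethical system e is never the output of metaethical analysis:
\[ \exists\, e \in E \ \ \forall (s, i) \in S \times I : \ f(s, i) \neq e \]
% and f(s, i) need not equal the ethical system actually in use in situation s.
```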

Comment author: ImmortalRationalist 03 July 2017 11:18:51PM 0 points

But what exactly constitutes "enough data"? With any finite amount of data, couldn't it be cancelled out if your prior probability is small enough?

Comment author: Luke_A_Somers 08 July 2017 04:15:50PM 0 points

Yes, but that's not the way the problem goes. You don't fix your prior in response to the evidence in order to force the conclusion (if you're doing it anything like right). So different people with different priors will require different amounts of evidence: one bit of evidence for every bit of prior odds against, to bring the hypothesis up to even odds, and then a few more bits to accept it as a (tentative, as always) conclusion.
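
A toy calculation of that "one bit of evidence per bit of prior odds against" arithmetic (my own illustration; the numbers are made up):

```python
def posterior_odds(prior_odds_for: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds_for * likelihood_ratio

prior_odds_for = 1 / 1024                        # 10 bits of prior odds against
print(posterior_odds(prior_odds_for, 2 ** 10))   # 1.0  -> even odds after 10 bits of evidence
print(posterior_odds(prior_odds_for, 2 ** 15))   # 32.0 -> a few more bits give 32:1 in favour
```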

Comment author: TheAncientGeek 22 June 2017 02:36:16PM 1 point

"A ladder you throw away once you have climbed up it".

Comment author: Luke_A_Somers 22 June 2017 06:10:16PM 1 point

Where's that from?

In response to Priors Are Useless
Comment author: Luke_A_Somers 21 June 2017 02:44:43PM *  12 points

This is totally backwards. I would phrase it, "Priors get out of the way once you have enough data." That's a good thing; it makes them useful, not useless. A prior's purpose is right there in the name - it's your starting point. The evidence takes you on a journey, and you asymptotically approach your goal.

If priors were capable of skewing the conclusion after an unlimited amount of evidence, that would make them permanent, not simply a starting-point. That would be writing the bottom line first. That would be broken reasoning.
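
A minimal simulation of "priors get out of the way once you have enough data" (my own toy example, assuming a simple Beta-Bernoulli coin model; none of this is from the original post):

```python
# Two observers with opposite priors about a coin's bias converge
# to the same estimate as evidence accumulates.
import random

random.seed(0)
true_p = 0.7                       # actual probability of heads
flips = [random.random() < true_p for _ in range(10_000)]

def posterior_mean(prior_heads: float, prior_tails: float, data) -> float:
    """Beta-Bernoulli update: prior pseudo-counts plus observed counts."""
    heads = sum(data)
    return (prior_heads + heads) / (prior_heads + prior_tails + len(data))

for n in (10, 100, 10_000):
    skeptic = posterior_mean(1, 99, flips[:n])    # starts out strongly expecting tails
    believer = posterior_mean(99, 1, flips[:n])   # starts out strongly expecting heads
    print(n, round(skeptic, 3), round(believer, 3))
# With enough flips both estimates approach 0.7: the priors were a
# starting point, not a conclusion.
```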

Comment author: cousin_it 15 June 2017 08:36:32AM *  1 point

The post proposed to build an arbitrary general AI with a goal of making all conscious experiences in reality match {unmodified human brains + this coarse-grained VR utopia designed by us}. This plan wastes tons of potential value and requires tons of research, but it seems much simpler than solving FAI. For example, it skips figuring out how all human preferences should be extracted, extrapolated, and mapped to true physics. (It does still require solving consciousness though, and many other things.)

Mostly I intended the plan to serve as a lower bound for the outcome of an intelligence explosion that's better than "everyone dies" but less vague than CEV, because I haven't seen many such lower bounds before. Of course I'd welcome any better plan.

Comment author: Luke_A_Somers 15 June 2017 02:29:03PM 0 points

Like, "Please, create a new higher bar that we can expect a truly super-intelligent being to be able to exceed."?

Comment author: turchin 11 June 2017 07:09:03PM *  2 points

The whole story misses the fact that a superintelligence will probably be able to resurrect people even if they were not cryopreserved, by creating copies of them based on digital immortality. The problem of the identity of a copy and the original is not solved, but an AI may be able to solve it somehow.

However, similar to the different cardinalities of infinities, there are different types of infinite suffering. An evil AI could constantly upgrade its victim, so that its subjective experience of suffering increases a million times a second forever, and it could convert half a galaxy into suffertronium.

Quantum immortality in a constantly dying body is not optimised for the aggressive growth of suffering, so it could be more "preferable".

Unfortunately, such timelines in the space of all possible minds could merge; that is, after death you would appear in a very improbable universe where you are resurrected in order to suffer. (Here and in the next sentence I also use the thesis that if two observer-moments are identical, their timelines merge, which may require a longer discussion.)

But a benevolent AI could create an enormous number of positive observer-moments following any possible painful observer-moment, so that it would effectively rescue any conscious being from the jail of an evil AI. Any painful moment would then have millions of positive continuations with much higher measure than the measure of the universes owned by the evil AI. (I also assume that benevolent AIs will dominate suffering-oriented AIs and will wage acausal war against them to obtain more observer-moments of human beings.)

After I imagined such an acausal war between evil and benevolent AIs, I stopped worrying about infinite suffering caused by an evil AI.

Comment author: Luke_A_Somers 12 June 2017 07:19:30PM 0 points

The whole story misses the fact that a superintelligence will probably be able to resurrect people even if they were not cryopreserved, by creating copies of them based on digital immortality.

Enough of what makes me me hasn't made, and won't make, its way into digital form by accident, short of post-singularity means, that I wouldn't identify such a poor individual as being me. It would be a neuro-sculpture on the theme of me.
