
Comment author: TheAncientGeek 22 June 2017 02:36:16PM 0 points

"A ladder you throw away once you have climbed up it".

Comment author: Luke_A_Somers 22 June 2017 06:10:16PM 0 points

Where's that from?

In response to Priors Are Useless
Comment author: Luke_A_Somers 21 June 2017 02:44:43PM *  10 points

This is totally backwards. I would phrase it, "Priors get out of the way once you have enough data." That's a good thing; it makes them useful, not useless. A prior's purpose is right there in the name - it's your starting point. The evidence takes you on a journey, and you asymptotically approach your goal.

If priors were capable of skewing the conclusion after an unlimited amount of evidence, that would make them permanent, not simply a starting-point. That would be writing the bottom line first. That would be broken reasoning.
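
(A minimal numerical sketch of this convergence - my own illustration, not part of the original comment; the two Beta priors and the true bias of 0.7 are arbitrary choices:)

    import random

    # Two agents with strongly opposed Beta(a, b) priors on a coin's bias:
    # the "pessimist" starts nearly sure the coin favors tails, the
    # "optimist" nearly sure it favors heads.
    priors = {"pessimist": (1, 20), "optimist": (20, 1)}

    random.seed(0)
    true_bias = 0.7  # actual probability of heads
    flips = [random.random() < true_bias for _ in range(10_000)]
    heads = sum(flips)
    tails = len(flips) - heads

    for name, (a, b) in priors.items():
        # Conjugate Beta-Bernoulli update: posterior is Beta(a + heads, b + tails).
        posterior_mean = (a + heads) / (a + b + heads + tails)
        print(f"{name}: prior mean {a / (a + b):.3f} -> posterior mean {posterior_mean:.3f}")

    # Despite prior means of roughly 0.05 and 0.95, both posterior means land
    # within about 0.01 of the true bias: the priors get out of the way.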

Comment author: cousin_it 15 June 2017 08:36:32AM *  1 point

The post proposed to build an arbitrary general AI with a goal of making all conscious experiences in reality match {unmodified human brains + this coarse-grained VR utopia designed by us}. This plan wastes tons of potential value and requires tons of research, but it seems much simpler than solving FAI. For example, it skips figuring out how all human preferences should be extracted, extrapolated, and mapped to true physics. (It does still require solving consciousness though, and many other things.)

Mostly I intended the plan to serve as a lower bound on the outcome of an intelligence explosion - better than "everyone dies" but less vague than CEV - because I haven't seen many such lower bounds before. Of course I'd welcome any better plan.

Comment author: Luke_A_Somers 15 June 2017 02:29:03PM 0 points

Like, "Please, create a new higher bar that we can expect a truly super-intelligent being to be able to exceed."?

Comment author: turchin 11 June 2017 07:09:03PM *  2 points

The story entirely misses that a superintelligence will probably be able to resurrect people even if they were not cryopreserved, by creating copies of them based on digital immortality. The problem of the identity of a copy versus the original is not solved, but the AI may be able to solve it somehow.

However, just as infinities come in different cardinalities, there are different types of infinite suffering. An evil AI could constantly upgrade its victim, so that its subjective experience of suffering increases a million times a second forever, and it could convert half a galaxy into suffertronium.

Quantum immortality in a constantly dying body is not optimised for aggressive growth of suffering, so it could be more "preferable".

Unfortunately, such timelines in the space of all possible minds could merge; that is, after death you might appear in a very improbable universe where you are resurrected in order to suffer. (I also use, here and in the next paragraph, the thesis that if two observer-moments are identical, their timelines merge; this may require a longer discussion.)

But a benevolent AI could create an enormous number of positive observer-moments following any possible painful observer-moment, so that it would effectively rescue any conscious being from the jail of the evil AI. Any painful moment would then have a million positive continuations with much higher measure than the measure of the universes owned by the evil AI. (I also assume that benevolent AIs will dominate over suffering-oriented AIs, and will wage acausal war against them to gain more observer-moments of human beings.)

After I imagined such an acausal war between evil and benevolent AIs, I stopped worrying about infinite suffering from an evil AI.

Comment author: Luke_A_Somers 12 June 2017 07:19:30PM 0 points

The story entirely misses that a superintelligence will probably be able to resurrect people even if they were not cryopreserved, by creating copies of them based on digital immortality.

Enough of what makes me me hasn't made it, and won't make it, into digital expression by accident (short of post-singularity means) that I wouldn't identify such a poor individual as being me. It would be neuro-sculpture on the theme of me.

Comment author: cousin_it 12 June 2017 11:08:33AM 1 point

I think this article shows that you probably won't get a crisp answer.

Comment author: Luke_A_Somers 12 June 2017 03:09:19PM 2 points

That's more about the land moving in response to the changes in ice, and a tiny correction for changing the gravitational force previously applied by the ice.

This is (probably?) about the way the water settles around a spinning oblate spheroid.
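
(A hedged aside, my own gloss rather than anything from this exchange: to first order, the equilibrium sea surface is an equipotential of gravity plus the centrifugal potential of the Earth's rotation,

    \Phi_{\mathrm{eff}}(r, \phi) = -\frac{GM}{r} - \tfrac{1}{2}\,\omega^{2} r^{2} \cos^{2}\phi = \text{const},

where \phi is latitude and \omega the rotation rate. The centrifugal term lowers the effective potential most at the equator, so the water settles into an oblate shape, and changes in spin or mass distribution shift where that equipotential surface sits.)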

Comment author: entirelyuseless 11 June 2017 02:09:06PM 0 points

There is some level of incapability at which we stop caring (e.g. head crushed), ... I expect some human could be found or made that would flicker across that boundary regularly.

This is wrong, at least for typical humans such as myself. In other words, we do not stop caring about the one with the crushed head just because they are on the wrong side of a boundary, but because we have no way to bring them back across that boundary. If we had a way to bring them back, we would care. So if someone is flickering back and forth across the so-called boundary, we will still care about them, since by stipulation they can come back.

Comment author: Luke_A_Somers 12 June 2017 01:45:13PM 0 points

Good point; how about someone who is stupider than the average dog?

Comment author: DragonGod 08 June 2017 08:52:38PM 0 points

Our system considers only humans; another sapient alien race may implement this system, and consider only themselves.

Comment author: Luke_A_Somers 11 June 2017 03:59:08AM 0 points

A) what cousin_it said.

B) consider, then, successively more and more severely mentally nonfunctioning humans. There is some level of incapability at which we stop caring (e.g. head crushed), and I would be somewhat surprised at a choice of values that puts a 100% abrupt turn-on at some threshold; and if one did, I expect some human could be found or made that would flicker across that boundary regularly.

Comment author: DragonGod 08 June 2017 06:12:05PM 0 points

"Individuals" refers only to humans and other sapient entities considered by the system.

Comment author: Luke_A_Somers 08 June 2017 07:05:08PM 1 point

There is a continuum on this scale. Is there a hard cutoff, or is there any scaling? And what about very similar forks of AIs?

Comment author: Elo 21 May 2017 04:09:33AM 1 point

This thread is over. Tapping out on behalf of all participants.

Comment author: Luke_A_Somers 21 May 2017 05:20:06PM 0 points

I'll go along with that.

Comment author: Thomas 05 May 2017 08:29:41PM 5 points

Well, I haven't seen that yet. I mean a reasonable discussion between different political affiliations. Inside one camp, yes. Across some wider divisions, not yet.

Emotions are just too strong, reasons are just too flimsy.

Comment author: Luke_A_Somers 20 May 2017 11:25:08PM 2 points

So, how do you characterize 'Merkelterrorists' and 'crimmigrants'? Terms of reasonable discourse?
