
skeptical_lurker comments on Questions on the human path and transhumanism.

-1 Post author: HopefullyCreative 12 August 2014 08:34PM


Comment author: skeptical_lurker 14 August 2014 10:24:20AM 0 points

Ok, so why do you believe that there is no way to guarantee friendliness? The idea seems to be to pursue mathematically provable FAI, which AFAICT ought to have a very high probability of success, if it can be developed in time.

I'm not even sure if successful friendliness itself is all that different from stagnation.

In this case I would say that this is not what I would consider friendliness, because I, and a fairly large subset of humanity, place a lot of value on things not stagnating.

Comment author: passive_fist 15 August 2014 01:33:21AM 0 points

The 'mathematical proof' (verification) part is only one side of the friendliness coin. The other side is validation: the question of whether our design goals are really the right design goals to have. Much of the ink spilled on topics like CEV centers on this aspect.

I, and a fairly large subset of humanity, place a lot of value on things not stagnating.

People say that, but it's usually just empty words. If progress necessitated that you be destroyed (or, at the very least, accept being unconditionally ruled over), would you prefer progress, or the status quo?

Try to imagine humanity bound by the ethical codes of chimpanzees, and you begin to see what I mean.

Comment author: skeptical_lurker 15 August 2014 05:36:18PM 0 points

If progress necessitated that you be destroyed (or, at the very least, accept being unconditionally ruled over), would you prefer progress, or the status quo?

I'm pretty sure there is a potential continuous transform between myself and a Jupiter brain (assuming continuity of personality makes sense). Add one more brain cell or make a small alteration and I'm still myself, so by induction you could add an arbitrarily large number of brain cells, up until fundamental physical constraints kick in.
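One way to make that induction explicit (the predicate P(n) and the bound N_max are labels introduced just for this sketch, not anything from the comment):

```latex
% Requires amsmath.
% P(n): the being obtained from me by n small augmentations
%       is still a continuation of my personality.
\begin{align*}
& P(0) && \text{(base case: I am a continuation of myself)} \\
& \forall n.\; P(n) \rightarrow P(n+1) && \text{(one more brain cell preserves identity)} \\
& \therefore\; \forall N \le N_{\max}.\; P(N) && \text{(by induction, up to physical constraints)}
\end{align*}
```

This is the standard sorites form, so the whole weight of the argument rests on the inductive step holding exactly rather than approximately.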

And even supposing there are beings I can never evolve into, well, the universe is quite big, so live and let live?

the question of whether our design goals are really the right design goals to have. Much of the ink spilled on topics like CEV centers on this aspect.

Well, there are aspects of CEV that worry me, but I would say it seems far better than an arbitrary utility function (e.g. one generated by evolutionary simulations).

Try to imagine humanity bound by the ethical codes of chimpanzees, and you begin to see what I mean.

Do chimps have a concept of ethics? Suppose you started raising the IQ of chimps: wouldn't they eventually progress, and probably develop a vaguely human-like civilisation?

Comment author: passive_fist 18 August 2014 09:10:59AM 0 points

well, the universe is quite big, so live and let live?

Our interests will eventually conflict. Look at ants. We don't go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.

Do chimps have a concept of ethics? Suppose you started raising the IQ of chimps: wouldn't they eventually progress, and probably develop a vaguely human-like civilisation?

That's a good hypothetical. What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I'm sure many here wouldn't)

Comment author: skeptical_lurker 20 August 2014 04:41:48PM * 0 points

Our interests will eventually conflict. Look at ants. We don't go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.

To continue the metaphor: some environmentalists do protest the building of motorways, although not to much effect, and rarely for the benefit of insects. But we have no history of signing treaties with insects, nor does anyone reminisce about when they used to be an insect.

Regardless of whether posthumans would value humans, current humans do value humans, and also value continuing to value humans, so a correct implementation of CEV would not put humanity on a path where humanity gets wiped out. I think this is the sort of point at which TDT comes in, and so CEV could morph into CEV-with-constraints-added-at-initial-runtime. For instance, perhaps CEV'(t) = C * CEV(0) + (1 - C) * CEV(t), where CEV(t) is the unconstrained CEV evaluated at time t, CEV'(t) is the constrained version actually acted on, and C is a constant fixed at t=0 determining how strongly values remain anchored to their initial state.
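As a toy illustration of that anchoring formula (the three-dimensional "value vectors", the drifted values, and the choice C = 0.6 are illustrative assumptions only; nothing here resembles a real CEV computation):

```python
def constrained_cev(initial_values, current_values, c):
    """Blend the values extrapolated at time t with the values fixed
    at t=0. c is chosen at t=0: c=1 freezes values entirely,
    c=0 lets them drift freely with the unconstrained extrapolation."""
    return [c * v0 + (1 - c) * vt
            for v0, vt in zip(initial_values, current_values)]

# Toy "value vector" fixed at t=0 ...
cev_0 = [1.0, 0.5, 0.0]
# ... and a hypothetical unconstrained extrapolation at a later time t.
cev_t = [0.2, 0.9, 0.7]

c = 0.6  # fixed at t=0; how strongly values stay anchored
print(constrained_cev(cev_0, cev_t, c))  # -> approximately [0.68, 0.66, 0.28]
```

The point of fixing C at t=0 is visible in the code: however far CEV(t) drifts, the output can never move more than a (1 - C) fraction of the way from the original values.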

What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I'm sure many here wouldn't)

Sounds good to me! I think post-singularity it might be good to fork yourself, with some copies heading off towards superintelligence quickly and others taking the scenic route and exploring baseline human activities first.