passive_fist comments on Questions on the human path and transhumanism. - Less Wrong Discussion

-1 Post author: HopefullyCreative 12 August 2014 08:34PM

Comment author: passive_fist 18 August 2014 09:10:59AM 0 points [-]

well, the universe is quite big, so live and let live?

Our interests will eventually conflict. Look at ants. We don't go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.

Do chimps have a concept of ethics? Suppose you started raising the IQ of chimps, wouldn't they eventually progress and probably develop a vaguely human-like civilisation?

That's a good hypothetical. What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I'm sure many here wouldn't)

Comment author: skeptical_lurker 20 August 2014 04:41:48PM *  0 points [-]

Our interests will eventually conflict. Look at ants. We don't go around knocking down anthills just for the heck of it, even though we easily could. But when it comes time to construct an interstate over the anthill, those ants are toast.

To continue the metaphor, some environmentalists do protest the building of motorways, although not to all that much effect, and rarely for the benefit of insects. But, we have no history of signing treaties with insects, nor does anyone reminisce about when they used to be an insect.

Regardless of whether posthumans would value humans, current humans do value humans, and also value continuing to value humans, so a correct implementation of CEV would not put humanity on a path where humanity would get wiped out. I think this is the sort of point at which TDT comes in, and so CEV could morph into CEV-with-constraints-added-at-initial-runtime. For instance, perhaps CEV_eff(t) = C * CEV(0) + (1 - C) * CEV(t), where CEV(t) means CEV evaluated at time t, CEV_eff(t) is the constrained version actually acted on, and C is a constant fixed at t = 0, determining how much values should remain unchanged.
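To make the mixing rule concrete, here is a minimal sketch (my own illustration, not from the comment) that treats a value system as a vector of numeric weights and blends the initial values with the evolved values using a constant C fixed at the start:

```python
def constrained_cev(initial_values, evolved_values, c):
    """Blend initial and evolved value vectors with a fixed weight c.

    c = 1.0 means values never drift from their t = 0 state;
    c = 0.0 means value evolution is completely unconstrained.
    """
    return [c * v0 + (1 - c) * vt
            for v0, vt in zip(initial_values, evolved_values)]

# Hypothetical two-dimensional value vectors for illustration.
original = [1.0, 0.5]   # values at t = 0
drifted = [0.0, 1.5]    # unconstrained values at some later t

print(constrained_cev(original, drifted, 1.0))  # → [1.0, 0.5]
print(constrained_cev(original, drifted, 0.0))  # → [0.0, 1.5]
```

Any intermediate C gives a value system that can still change over time, but is permanently anchored some fixed fraction of the way toward its original state.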

What if an AI told you that it had decided to start raising your IQ? (I, personally, would find this awesome, but I'm sure many here wouldn't)

Sounds good to me! I think post-singularity it might be good to fork yourself, with some copies heading off towards superintelligence quickly and others taking the scenic route and exploring baseline human activities first.