Douglas_Reay comments on The Ape Constraint discussion meeting. - Less Wrong
I don't know how to solve it, aside from including some approximation of Bayesian updating as a necessary condition. (Goetz or someone once pointed out another such condition, but again it didn't seem useful on its own. Hopefully we can combine a lot of these conditions; and if the negation still seems too strict to serve our purposes, we might have a non-person AI, as defined by this predicate, bootstrap its way to a better non-person predicate.) I hold out hope for a solution because, intuitively, it seems possible to imagine people without making them conscious (though Eliezer points out that this part may be harder than building a single non-person AGI). And effectively defining some aspects of consciousness seems necessary anyway, for judging models of the world without resorting to Cartesian dualism.
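To make the "combine necessary conditions" idea concrete, here is a minimal Python sketch (my own illustration, not anything from the original discussion). Each test tries to show that a computation lacks some property any person must have, and the combined predicate is deliberately one-sided: it answers only "definitely not a person" or "unknown". The test names and the property-dictionary representation are hypothetical placeholders; making such tests real is the open problem.

```python
from typing import Callable, Iterable

# Each test answers: "does this computation provably LACK a property
# that any person must have?"  True means "provably lacks it".
NecessityTest = Callable[[dict], bool]

def non_person(computation: dict, tests: Iterable[NecessityTest]) -> bool:
    """Return True only if `computation` is definitely not a person.

    If any single necessary condition for personhood provably fails,
    the computation cannot be a person.  Returning False means
    "unknown", never "is a person": the predicate is one-sided.
    """
    return any(test(computation) for test in tests)

# Hypothetical stand-in tests, treating a computation as a dict of
# statically verified properties.
def lacks_bayesian_updating(c: dict) -> bool:
    return not c.get("approximates_bayesian_updating", False)

def lacks_self_model(c: dict) -> bool:
    return not c.get("models_itself", False)

if __name__ == "__main__":
    lookup_table = {"approximates_bayesian_updating": False}
    # True: a plain lookup table is safely classified as a non-person.
    print(non_person(lookup_table, [lacks_bayesian_updating, lacks_self_model]))
```

Note the asymmetry in the design: adding more necessary conditions only ever lets the predicate clear more systems as safe, while anything it cannot prove stays in the "unknown" bucket.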
But let's say we can't solve non-sentient AGI. Let's further say that humanity is not an abomination that we can only address by killing everyone, even though, in this hypothetical, we may be creating people in pain whenever we imagine them.
Since the AGI doesn't exist yet - and if we made one with the desire to serve us, we would want to prove that it wouldn't change that desire - how do you define "being fair" to the potential of linear regression software? What about the countless potential humans we exclude from our timeline with every action?
Empirically, we're killing the apes. (That, by the way, seems like a much better source of concern when it comes to alien judgment, though the time for concern may have passed with the visible Neanderthals.) If Dr. Zaius went back and told the apes that they could create a different "human race" with the desire not to do that, only a fool of an ape would refuse. And I don't believe in any decision theory that says otherwise.
I agree.
The question is: are there different constraints that would, either as a side effect or as a primary objective, achieve the end of preventing humanity from wiping out the apes?
And if so, are there other considerations we should take into account when picking which constraint to use?