Randaly comments on Depth-based supercontroller objectives, take 2 - Less Wrong
Thanks for your response!
1) Hmmm. OK, this is pretty counter-intuitive to me.
2) I'm not totally sure what you mean here. But, to give a concrete example, suppose that the most moral thing to do would be to tile the universe with very happy kittens (or something). CEV, as I understand it, would create as many of these as possible with its finite resources, whereas g/g* would try to create much more complicated structures than kittens.
3) Sorry, I don't think I was very clear. To clarify: once you've specified h, a superset of the human essence, why would you apply the particular functions g/g* to h? Why not just directly program in 'do not let h cease to exist'? g/g* do get around the problem of specifying 'cease to exist', but that seems pretty insignificant compared to the difficulty of specifying h. And unlike programming a supercontroller to preserve an entire superset of the human essence, g/g* might wind up with the supercontroller focused on some parts of h that are not part of the human essence, so they don't completely solve the problem of defining 'cease to exist'.
(You said above that h is an improvement because it is a superset of the human essence. But we could equally well program a supercontroller not to let a superset of the human essence cease to exist, once we've specified that superset.)