TheOtherDave comments on Two questions about CEV that worry me - Less Wrong

29 Post author: cousin_it 23 December 2010 03:58PM




Comment author: TheOtherDave 23 December 2010 10:32:59PM  6 points

One thing that might help is if you were clearer about when you consider it evil to ignore the volition of an intelligence, since it's clear from your writing that you sometimes don't.

For example, "don't be evil" clearly isn't enough of an argument to convince you to build an AI that fulfills Babykiller or Pebblesorter or SHFP volition, should we encounter any... although at least some of those would indisputably be intelligences.

Given that, it might reassure people if you explicitly clarified why "don't be evil" is enough of an argument to convince you to build an AI that fulfills the volition of all humans, rather than (let's say) the most easily-jointly-satisfied 98% of humanity, or some other threshold for inclusion.

If this has already been explained somewhere, a pointer would be handy. I have not read the whole site, but thus far everything I've seen to this effect seems to boil down to assuming that there exists a single volition V such that each individual human would, upon reflection, prefer V to every other possible option, or at least a volition that approximates that state well enough that we can ignore the dissatisfied minority.

If that assumption is true, the answer to the question you quote is "Because they'd prefer the results of doing so," and evil doesn't enter into it.

If that assumption is false, I'm not sure how "don't be evil" helps.