Houshalter comments on Diseased thinking: dissolving questions about disease - Less Wrong

Post author: Yvain 30 May 2010 09:16PM




Comment author: Houshalter 01 June 2010 01:36:27AM 0 points

In theory, sure. In practice, there are a number of social dynamics, involving things such as people's tendency to abuse power, that would make this option not worthwhile.

All right, so what if it were done by a hypothetical superintelligent AI or an omniscient being of some sort? Would you be OK with it then?

Similar considerations apply to a lot of other things, including many of the ones you mention, such as creating an "eye for an eye" society. Yes, you could get overall bad results if you just single-mindedly optimized for one or two variables, but that's why we try to look at the whole picture.

This is exactly what I mean. What are we trying to "optimize" for?

Comment author: Kaj_Sotala 01 June 2010 01:43:03AM 3 points

All right, so what if it were done by a hypothetical superintelligent AI or an omniscient being of some sort? Would you be OK with it then?

Probably not, because if it really was a super-intelligent AI, it could solve the problem without needing to kill anyone.

This is exactly what I mean. What are we trying to "optimize" for?

For general well-being. Something along the lines of "the amount of happiness minus the amount of suffering", or "the successful implementation of preferences", would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn't want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.
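The wirehead caveat can be made concrete with a toy sketch. This is not anyone's actual proposal; the world-states, names, and numbers below are entirely made up for illustration. It just shows that a naive "happiness minus suffering" metric, optimized literally, prefers the wirehead world:

```python
def naive_utility(world):
    # The "first approximation" from the comment above:
    # total happiness minus total suffering, nothing else.
    return world["happiness"] - world["suffering"]

# Hypothetical world-states with made-up scores.
worlds = [
    {"name": "status quo",          "happiness": 50,  "suffering": 30},
    {"name": "better institutions", "happiness": 70,  "suffering": 20},
    {"name": "everyone wireheaded", "happiness": 100, "suffering": 0},
]

# A single-minded optimizer just picks the argmax of the metric.
best = max(worlds, key=naive_utility)
print(best["name"])  # the naive metric prefers universal wireheading
```

The point of the sketch is only that the formula, taken literally, endorses an outcome most people would reject, which is why it is at best a first approximation rather than a specification of human values.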

Comment author: Houshalter 01 June 2010 02:28:36AM 0 points

Probably not, because if it really was a super-intelligent AI, it could solve the problem without needing to kill anyone.

They could possibly come up with an alternative, but we must consider that it may very well be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you're going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.

For general well-being. Something along the lines of "the amount of happiness minus the amount of suffering", or "the successful implementation of preferences", would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn't want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.

In other words, we have to give it the goal of predicting our values, which is a problem since you can't specify AI goals in English.

Comment author: Kaj_Sotala 01 June 2010 03:17:10AM 1 point

They could possibly come up with an alternative, but we must consider that it may very well be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you're going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.

I'm not sure of what exactly you're trying to say here.

In other words, we have to give it the goal of predicting our values, which is a problem since you can't specify AI goals in English.

Yup.