Giles comments on Human errors, human values - Less Wrong

Post author: PhilGoetz 09 April 2011 02:50AM (32 points)




Comment author: Giles 08 April 2011 03:27:00AM *  4 points

Maybe there's some value in creating an algorithm which accurately models most people's moral decisions... it could be used as the basis for a "sane" utility function by subsequently working out which parts of the algorithm are "utility" and which are "biases".

(EDIT: such a project would also help us understand human biases more clearly.)

Incidentally, I hope this "double effect" idea is based around more than just this trolley thought experiment. I could get the same result they did with the much simpler heuristic "don't use dead bodies as tools".
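The "much simpler heuristic" above can be made concrete. A minimal sketch (the scenario encoding and field names here are hypothetical, not from the paper): a rule that filters out actions using a person as a tool, then maximizes survivors among what remains, reproduces the human answers in both trolley variants without any double-effect machinery.

```python
def choose(actions):
    """Prefer actions that don't use a person's body as a tool;
    among those, pick the one leaving the most people alive."""
    permitted = [a for a in actions if not actions[a]["uses_person_as_tool"]]
    candidates = permitted or list(actions)  # fall back if nothing is permitted
    return max(candidates, key=lambda a: actions[a]["alive"])

# Switch variant: diverting kills one person as a side effect, not as a tool.
switch = {
    "divert":     {"alive": 5, "uses_person_as_tool": False},
    "do_nothing": {"alive": 1, "uses_person_as_tool": False},
}

# Footbridge variant: pushing uses the man's body to stop the trolley.
footbridge = {
    "push":       {"alive": 5, "uses_person_as_tool": True},
    "do_nothing": {"alive": 1, "uses_person_as_tool": False},
}

print(choose(switch))      # "divert"
print(choose(footbridge))  # "do_nothing" -- the typical human answer
```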

Comment author: PhilGoetz 08 April 2011 03:48:34AM 5 points

Maybe there's some value in creating an algorithm which accurately models most people's moral decisions... it could be used as the basis for a "sane" utility function by subsequently working out which parts of the algorithm are "utility" and which are "biases".

If I wrote an algorithm that tried to maximize expected value, and computed value as a function of the number of people left alive, it would choose in both trolley problems to save the maximum number of people. That would indicate that the human solution to the second problem, to not push someone onto the tracks, was a bias.
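A minimal sketch of that algorithm (the scenario encoding is hypothetical; the point is only that a pure survivor-count maximizer acts the same way in both problems):

```python
def value(outcome):
    """Value computed solely as the number of people left alive."""
    return outcome["alive"]

def choose(actions):
    """Pick the action whose outcome maximizes value."""
    return max(actions, key=lambda a: value(actions[a]))

# Switch variant: divert the trolley (one dies) or do nothing (five die).
switch = {"divert": {"alive": 5}, "do_nothing": {"alive": 1}}

# Footbridge variant: push a man onto the tracks (one dies) or do nothing (five die).
footbridge = {"push": {"alive": 5}, "do_nothing": {"alive": 1}}

print(choose(switch))      # "divert"
print(choose(footbridge))  # "push" -- the answer the paper's authors rejected
```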

Yet the authors of the paper did not make that interpretation. They decided that getting a non-human answer meant the computer did not yet have morals.

So, how do you decide what to accurately model? That's where you make the decision about what is moral.

Comment author: Giles 08 April 2011 04:01:52AM 0 points

I agree the authors of the paper are idiots (or seem to be; I only skimmed the paper). But the research they're doing could still be useful, even if not for the reason they think.