dhasenan comments on Ends Don't Justify Means (Among Humans) - Less Wrong

Post author: Eliezer_Yudkowsky 14 October 2008 09:00PM

Comment author: FeepingCreature 16 December 2012 04:12:11PM * 1 point

I think people doing this is a problem because people are bad at genuinely deciding based on the issues. I would rather live in a society where people could be trusted with the responsibility to push guys in front of trains when they had sufficient grounds to reasonably believe it was a genuinely positive action. But knowing that people are not like that, I would much rather they didn't falsely believe they were, even if that sometimes causes suboptimal decisions in train scenarios.

> In such a case it would be a mistake.

I don't think you can automatically call a suboptimal decision a mistake.

This actually has a real-life equivalent: the situation of having to shoot down a plane that is believed to be under the control of terrorists and flying towards a major city. I would not want to be in the position of that fighter pilot, but I would also want him to fire.

And I'm much more willing to trust a FAI with that call than any human.

Comment author: [deleted] 16 December 2012 04:29:09PM * 0 points

> I don't think you can automatically call a suboptimal decision a mistake.

Huh? You wouldn't call a decision that results in an unnecessary loss of life a mistake, but rather a suboptimal decision? Note that I altered the hypothetical situation in the comment, and this "suboptimal decision" was labeled a mistake in the event that a third party came up with a superior decision (i.e., one that would save all the lives).

> And I'm much more willing to trust a FAI with that call than any human.

Edited: There's no FAI we can trust yet, and this particular detail seems to be about the friendliness of an AI, so your belief seems a little out of place in this context. But never mind that: if there were an actual FAI, I suppose I'd agree.

I think there's potential for severe error in the logic present in the text of the post, and I find it proper to criticize the substance of this post despite it being four years old.

Anyway, for an omniscient being, not putting any weight on the potential for error would seem reasonable.

Comment author: [deleted] 09 February 2014 05:43:58AM 0 points

> You wouldn't call a decision that results in an unnecessary loss of life a mistake, but rather a suboptimal decision?

I might decide to adopt a general, consistent strategy due to my own limitations. In this example, the limitation is that if I feel justified in engaging in this sort of behavior on occasion, I will feel justified in employing it on other occasions with insufficient justification.

If I employed a different general strategy with a similar level of simplicity, it would be less optimal.

Other strategies exist that are closer to optimal, but my limitations preclude me from employing them.

> I think there's potential for severe error in the logic present in the text of the post

Of course there is. If you can show a specific error, that would be great.