dhasenan comments on Ends Don't Justify Means (Among Humans) - Less Wrong

44 Post author: Eliezer_Yudkowsky 14 October 2008 09:00PM



Comment author: [deleted] 16 December 2012 04:29:09PM

I don't think you can automatically call a suboptimal decision a mistake.

Huh? You wouldn't call a decision that results in an unnecessary loss of life a mistake, but rather a suboptimal decision? Note that I altered the hypothetical situation in my comment: the "suboptimal decision" was labeled a mistake in the event that a third party could come up with a superior decision (i.e., one that would save all the lives).

And I'm much more willing to trust a FAI with that call than any human.

Edited: There's no FAI we can trust yet, and this particular detail seems to be about the friendliness of an AI, so your belief seems a little out of place in this context. But never mind that: if there were an actual FAI, I suppose I'd agree.

I think there's potential for severe error in the logic present in the text of the post, and I find it proper to criticize the substance of the post despite it being four years old.

Anyway, for an omniscient being, putting no weight on the potential for error would seem reasonable.

Comment author: [deleted] 09 February 2014 05:43:58AM

You wouldn't call a decision that results in an unnecessary loss of life a mistake, but rather a suboptimal decision?

I might decide to adopt a general, consistent strategy due to my own limitations. In this example, the limitation is that if I feel justified in engaging in this sort of behavior on some occasions, I will feel justified employing it on other occasions with insufficient justification.

If I employed a different general strategy of similar simplicity, it would be less optimal.

Other strategies exist that are closer to optimal, but my limitations preclude me from employing them.

I think there's potential for severe error in the logic present in the text of the post

Of course there is. If you can show a specific error, that would be great.