Aaron_Luchko comments on Fake Optimization Criteria - Less Wrong

Post author: Eliezer_Yudkowsky 10 November 2007 12:10AM

Comment author: Aaron_Luchko 10 November 2007 08:14:18AM -1 points

I think the problem with trying to come up with a concrete definition of morality is that the only real problems are ones without real solutions. In science we can solve previously unsolved problems because we're constantly building on newly discovered knowledge. But the basic moral situations have existed mostly unchanged for most of our evolution, and we have no real advantage over previous generations; thus any problem worth solving is still around precisely because we can't solve it.

For instance, you're never going to get a leader whose complete moral argument for governing is "I should lead this country because I randomly murder people in horrible ways." A leader like that would never gain enough supporters to form a government. Sure, there are leaders who essentially govern in that fashion, but they always have some idealist justification for why they should lead.

Thus you can't set down laws like "Always be selfish" or "Always obey the government," since if a rule isn't completely obvious and universal, it doesn't answer the questions you'd actually be interested in.

You can, however, set down a moral law like "Don't torture a thousand people to death to achieve the same amount of satisfaction you'd get from eating a strawberry, unless there is an unbelievably contrived set of extenuating circumstances involved, probably something involving the number 3^^^3." However, one would hope that's already part of your moral code...
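
(For readers unfamiliar with the notation: 3^^^3 is Knuth's up-arrow notation, the recurring stand-in for "unimaginably large number" in Eliezer's posts. Below is a minimal Python sketch of the notation; the function name up_arrow is just for illustration. It shows why even the small cases explode.)

    def up_arrow(a, n, b):
        """Knuth's up-arrow notation a ^^...^ b with n arrows: one arrow
        is ordinary exponentiation; each extra arrow iterates the last."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1  # by convention, zero iterations give 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))  # 3^3  = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7,625,597,484,987
    # 3^^^3 = 3^^(3^^3): a power tower of 3s about 7.6 trillion levels high.
    # Don't try up_arrow(3, 3, 3) -- no computer could ever finish it.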