cousin_it comments on The mind-killer - Less Wrong

Post author: ciphergoth 02 May 2009 04:49PM


Comment author: cousin_it 02 May 2009 09:49:42PM 5 points

Before you can disincentivize moral sabotage, you face the problem of defining and recognizing it, which doesn't sound trivial to me. Remember, groups don't admit to using the outrage tactic; they do it sincerely, sometimes over several generations of members. So I repeat the question: how does a rationalist tell "warranted" emotional disutility from "unwarranted" in a fair way?

Comment author: steven0461 02 May 2009 11:41:51PM 2 points

Incentive effects are hugely important, but a utilitarian decision process that causes predictable harm is not a true utilitarian decision process. Your question is a tough one, but it's answerable in principle.

Comment author: ciphergoth 02 May 2009 11:51:05PM 1 point

I don't see the problem, in principle, with a utilitarian deciding that giving in to an instance of moral sabotage will greatly increase the later incidence of moral sabotage, so that the total disutility exceeds the manufactured weeping and gnashing of teeth you face by standing against it now.
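For concreteness, here is a minimal sketch of that calculation in Python. All the numbers (the cost per incident and the sabotage rates under each policy) are made-up assumptions for illustration, not anything from the discussion.

    # Toy expected-utility comparison for the tradeoff described above.
    # Every number here is an illustrative assumption.

    def total_utility(concede_now, incident_cost=1.0, future_rounds=10,
                      sabotage_rate_if_concede=0.9, sabotage_rate_if_resist=0.2):
        """Total (dis)utility of conceding vs. resisting one act of sabotage.

        Conceding avoids the manufactured outrage now, but teaches
        saboteurs that the tactic works, raising the expected rate of
        sabotage in every future round.
        """
        rate = sabotage_rate_if_concede if concede_now else sabotage_rate_if_resist
        cost_now = 0.0 if concede_now else -incident_cost
        expected_future = -rate * incident_cost * future_rounds
        return cost_now + expected_future

    print(total_utility(concede_now=True))   # -9.0: giving in invites more sabotage
    print(total_utility(concede_now=False))  # -3.0: standing firm wins over time

Under these assumptions the one-time cost of standing firm is dwarfed by the stream of sabotage that conceding invites; the conclusion flips if conceding doesn't actually raise the future rate, which is exactly the empirical question cousin_it is pressing on.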

Comment author: cousin_it 03 May 2009 12:35:05AM 1 point

So a powerful agent (or a mass of tiny agents with large total power) needs a different utility function on future worlds than that of a lone rationalist observer, due to the need to avoid exploits. Well... which should I pick, then?

Looks like we've run into another of those nasty recursive problems: I choose my utility function depending on what every other agent could do to exploit me, and everyone else does the same. The only natural solution might well turn out to be everyone caring about their own welfare and no one else's, to avoid "mugging by suffering". Let's model the problem mathematically and look for other solutions; I love this stuff.
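As a first, deliberately crude pass at that model (a sketch under assumed payoffs, not a serious treatment): each agent chooses a weight w on others' displayed suffering, and a saboteur manufactures suffering whenever any positive w makes the mugging profitable.

    # Toy "mugging by suffering" model. All payoffs are assumptions.

    def agent_payoff(w, concession_value=10.0, genuine_benefit=3.0):
        """Payoff to an agent who weights others' displayed suffering by w in [0, 1].

        Caring brings some genuine benefit (cooperation, trade), but any
        w > 0 makes the agent muggable: a saboteur displays suffering and
        extracts a concession worth concession_value.
        """
        mugged = concession_value if w > 0 else 0.0
        return genuine_benefit * w - mugged

    # Best response: scan candidate weights.
    weights = [i / 10 for i in range(11)]
    best = max(weights, key=agent_payoff)
    print(best, agent_payoff(best))  # 0.0 0.0: selfishness is the only unexploitable weight

In this crude version the best response is w = 0, reproducing the "everyone cares only about themselves" solution; richer models (costly or verifiable displays of suffering, reputation, repeated play) might admit other equilibria, which is presumably where the interesting mathematics lives.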

Comment author: loqi 03 May 2009 02:16:36AM 4 points

"So a powerful agent (or a mass of tiny agents with large total power) needs a different utility function on future worlds than that of a lone rationalist observer, due to the need to avoid exploits."

No, it needs a different method of maximizing expected utility. Avoiding moral sabotage doesn't reflect a preference; it's purely instrumental.
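One way to see the distinction concretely (again a sketch with assumed numbers): hold the utility function fixed and vary only what is being chosen. A myopic chooser picks the cheaper action each incident, since per incident the demand costs less than the manufactured outrage; a policy-level chooser maximizes the same utility over the whole stream of incidents and prices in the incentive effect.

    # Same utility function, two levels of maximization. Rates and the
    # effect of resistance on future sabotage are assumptions.

    def total_cost(always_concede, rounds=20, demand=1.0, outrage=2.0):
        """Expected total disutility of a fixed policy over many rounds."""
        rate = 0.9  # assumed initial probability of being targeted each round
        cost = 0.0
        for _ in range(rounds):
            # Per incident, conceding (demand) is cheaper than resisting
            # (outrage), so a round-by-round maximizer always concedes.
            cost += rate * (demand if always_concede else outrage)
            # Incentive effect: resisting halves the sabotage rate (floor 0.1);
            # conceding keeps it high.
            if not always_concede:
                rate = max(0.1, rate * 0.5)
        return cost

    print(total_cost(always_concede=True))   # 18.0: cheap each round, costly overall
    print(total_cost(always_concede=False))  # ~6.6: the policy-level choice

Nothing about the preferences changed between the two lines; only the object of choice did, which is loqi's point that exploit-resistance is instrumental rather than a new utility function.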

Comment author: cousin_it 03 May 2009 09:32:14AM 0 points

Thanks, this clicked.