Wei_Dai comments on A Master-Slave Model of Human Preferences - Less Wrong

58 Post author: Wei_Dai 29 December 2009 01:02AM


Comment author: Vladimir_Nesov 29 December 2009 06:40:56AM 2 points [-]

(Quick nitpick:) "rationalize" is an inappropriate term in this context.

Comment author: Wei_Dai 29 December 2009 10:58:51AM 1 point [-]

Is it because "rationalize" means "to devise self-satisfying but incorrect reasons for (one's behavior)"? But it can also mean "to make rational", which is my intended meaning. The ambiguity is less than ideal, but unless you have a better suggestion...

Comment author: Vladimir_Nesov 29 December 2009 12:57:25PM 0 points [-]

On this forum, "rationalize" is frequently used in the cognitive-error sense. "Formalized" seems to convey the intended meaning (preferences being arational, the problem is that they are not being rationally (effectively) implemented/followed, not that they are somehow "not rational" themselves).

Comment author: Wei_Dai 29 December 2009 08:32:47PM 0 points [-]

preferences being arational, the problem is that they are not being rationally (effectively) implemented/followed, not that they are somehow "not rational" themselves

That position may make sense, but I think you'll have to make more of a case for it. Currently, it's standard in decision theory to speak of irrational preferences, such as preferences that can't be represented as expected utility maximization, or preferences that aren't time consistent.
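To make the standard decision-theoretic point concrete (a sketch added in editing, not part of the original comment): a cyclic preference A ≻ B ≻ C ≻ A is "irrational" in exactly this representability sense, since any utility function would need u(A) > u(B) > u(C) > u(A). The brute-force check below (the `representable` helper is hypothetical, for illustration only) searches every possible ranking of the options for one that respects the stated preferences.

```python
from itertools import permutations

def representable(prefers, options):
    """Return True if some utility assignment (equivalently, some strict
    ranking of the options) puts every preferred option above the
    dispreferred one for each pair in `prefers`."""
    for ranking in permutations(options):
        u = {x: -i for i, x in enumerate(ranking)}  # earlier in ranking = higher utility
        if all(u[a] > u[b] for a, b in prefers):
            return True
    return False

# A cyclic preference has no utility representation:
print(representable({("A", "B"), ("B", "C"), ("C", "A")}, ["A", "B", "C"]))  # False

# A transitive one does, e.g. u(A)=0 > u(B)=-1 > u(C)=-2:
print(representable({("A", "B"), ("B", "C"), ("A", "C")}, ["A", "B", "C"]))  # True
```

Time-inconsistent preferences fail representability in an analogous way once the options are indexed by time.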

But I take your point about "rationalize", and I've edited the article to remove the usages. Thanks.

Comment author: Vladimir_Nesov 29 December 2009 08:53:21PM 0 points [-]

That position may make sense, but I think you'll have to make more of a case for it. Currently, it's standard in decision theory to speak of irrational preferences, such as preferences that can't be represented as expected utility maximization, or preferences that aren't time consistent.

Agreed. My excuse is that I (and a few other people, I'm not sure who originated the convention) consistently use "preference" to refer to that-deep-down-mathematical-structure determined by humans/humanity that completely describes what a meta-FAI needs to know in order to do things the best way possible.