DaFranker comments on The curse of identity - LessWrong

Post author: Kaj_Sotala, 17 November 2011 07:28PM, 121 points


Comment author: [deleted] 26 July 2012 08:15:16AM, 4 points

> The rationality of an agent is its goal

Er, nope.

> But if thy goal be fractured, thy whole self shall be full of irrationality.

Humans' goals are fractured. But this has little to do with whether or not they are rational.

Comment author: Will_Newsome 26 July 2012 06:18:03PM, -2 points

You don't understand. This "rationality" you speak of is monstrous irrationality. And anyway, like I said, Meta knoweth that ye have Meta-shattered values—but your wants are satisfied by serving Meta, not by serving Mammon directly. Maybe you'd get more out of reading the second half of Matthew 6 and the various analyses thereof.

You may be misinterpreting "the rationality of an agent is its goal". Note that the original is "the light of the body is the eye".

To put my above point a little differently: Take therefore no thought for godshatter: godshatter shall take thought for the things of itself. Sufficient unto the day is the lack-of-meta thereof.

For clarity's sake: yes, I vehemently dispute the idea that a goal can't be more or less rational. That idea is wrong, as is quickly demonstrated by the fact that priors and utility functions can be transformed into each other, and that we have an objectively justifiable universal prior. (The general argument goes through even without such technical details, of course, so that stupid "but the choice of Turing machine matters" objections don't distract.)
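A minimal sketch of the prior/utility transformation being invoked (my gloss, not part of the original comment): expected utility depends only on the product of prior and utility, so any positive reweighting can be shifted from one into the other without changing the agent's choices.

$$EU(a) = \sum_x p(x)\, u(x, a)$$

For any positive function $f$, let $Z = \sum_x p(x) f(x)$ and define $p'(x) = p(x) f(x) / Z$ and $u'(x, a) = u(x, a) / f(x)$. Then

$$EU'(a) = \sum_x p'(x)\, u'(x, a) = \frac{1}{Z} \sum_x p(x)\, u(x, a) = \frac{EU(a)}{Z},$$

so the agent's preference ordering over acts is unchanged. On this reading, fixing a privileged universal prior would pin down a canonical decomposition of behavior into belief and goal, which would be the sense in which a goal itself could be graded as more or less rational.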

Comment author: DaFranker 26 July 2012 07:31:44PM, 5 points

Let's play rationalist Taboo!

Yes, I vehemently dispute this idea that a goal can't be more or less [Probable to achieve higher expected utility for other agents than any other possible goal].

Yes, I vehemently dispute this idea that a goal can't be more or less [Probable to achieve higher expected utility according to goal.Parent().utilityFunction].

Yes, I vehemently dispute this idea that a goal can't be more or less [Kolmogorov-complex].

Yes, I vehemently dispute this idea that a goal can't be more or less [optimal towards achieving your values].

Yes, I vehemently dispute this idea that a goal can't be more or less [easy to describe as the ratio of two natural numbers].

Yes, I vehemently dispute this idea that a goal can't be more or less [correlated in conceptspace with the values in the agent's utility function].

Yes, I vehemently dispute this idea that a [proposed utility function] can't be more or less rational.

Yes, I vehemently dispute this idea that a [set of predetermined criteria for building a utility function] can't be more or less rational.

Care to enlighten me as to exactly what you're disputing, and which points should be discussed?
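To make the Taboo move concrete, here is a toy Python sketch (entirely my illustration; the class, criteria, and numbers are all made up). Each bracketed substitution becomes a distinct comparison predicate, so "goal A is more rational than goal B" dissolves into a separate verdict per predicate — which is exactly why the original claim is ambiguous until one substitution is picked.

```python
# Toy sketch of "rationalist Taboo": replace the contested word "rational"
# with explicit, non-equivalent predicates and note that the disputed claim
# can come out differently under each one.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    description: str
    kolmogorov_complexity: int   # hypothetical: shortest-program length, in bits
    expected_utility: float      # hypothetical: EU according to some fixed agent

# Each Taboo substitution becomes its own comparison criterion.
CRITERIA: dict[str, Callable[[Goal, Goal], bool]] = {
    "less Kolmogorov-complex":
        lambda a, b: a.kolmogorov_complexity < b.kolmogorov_complexity,
    "higher expected utility for the evaluating agent":
        lambda a, b: a.expected_utility > b.expected_utility,
}

paperclips = Goal("maximize paperclips", kolmogorov_complexity=120, expected_utility=0.1)
flourishing = Goal("human flourishing", kolmogorov_complexity=900, expected_utility=0.9)

# "Goal A is more rational than goal B" yields one verdict per criterion:
for name, better in CRITERIA.items():
    print(f"{paperclips.description!r} beats {flourishing.description!r} "
          f"under {name!r}: {better(paperclips, flourishing)}")
```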

Edit: Fixed markdown issue, sorry!