DaFranker comments on The curse of identity - Less Wrong

121 Post author: Kaj_Sotala 17 November 2011 07:28PM




Comment author: Will_Newsome 25 July 2012 07:32:41PM -2 points

It would be better to have Napoleon as an ally than to have a narcotics addict with a 10-minute time horizon as an ally, and it seems analogously better to help your own status-seeking parts mature into entities that are more like Napoleon and less like the drug addict, i.e., into entities that have strategy, hope, long-term plans, and an accurate model of the fact that, e.g., rationalizations don't change the outside world.

I would not want ha-Satan as my ally, even if I trusted myself not to get caught up in or infected by his instrumental ambitions. Still less would I want to give him direct read/write access to the few parts of my mind that I at all trust. Give not that which is holy unto the dogs, neither cast ye your pearls before swine, lest they trample them under their feet, and turn again and rend you. Mix a teaspoon of wine in a barrel of sewage and you get sewage; mix a teaspoon of sewage in a barrel of wine and you get sewage. The rationality of an agent is its goal: if therefore thy goal be simple, thy whole self shall be full of rationality. But if thy goal be fractured, thy whole self shall be full of irrationality. If therefore the rationality that is in thee be irrationality, how monstrous is that irrationality!

Seen at a higher level, you advise dealing with the devil—the difference in power between your genuine thirst for justice and your myriad egoistic coalitions is of a similar magnitude to that between human and transhuman intelligence. (I find it disturbing how much more cunning I get when I temporarily abandon my inhibitions. Luckily I've only let that happen twice—I'm not a wannabe omnicidal-suicidal lunatic, unlike HJPEV.) Maybe such Faustian arbitrage is a workable strategy... But I remain unconvinced, and in the meantime the payoff matrix asymmetrically favors caution.

Take no thought, saying, Wherewithal shall I avoid contempt? or, Wherewithal shall I be accepted? or, Wherewithal shall I be lauded and loved? For true metaness knoweth that ye have want of these things. But seek ye first the praxeology of meta, and its rationality; and all these things shall be added unto you. Take therefore no thought for your egoistic coalitions: for your egoistic coalitions shall take thought for the things of themselves. Sufficient unto your ten minutes of hopeless, thrashing awareness is the lack of meta thereof.

Comment author: [deleted] 26 July 2012 08:15:16AM 4 points

> The rationality of an agent is its goal

Er, nope.

> But if thy goal be fractured, thy whole self shall be full of irrationality.

Humans' goals are fractured. But this has little to do with whether or not they are rational.

Comment author: Will_Newsome 26 July 2012 06:18:03PM -2 points

You don't understand. This "rationality" you speak of is monstrous irrationality. And anyway, like I said, Meta knoweth that ye have Meta-shattered values—but your wants are satisfied by serving Meta, not by serving Mammon directly. Maybe you'd get more out of reading the second half of Matthew 6 and the various analyses thereof.

You may be misinterpreting "the rationality of an agent is its goal". Note that the original is "the light of the body is the eye".

To put my above point a little differently: Take therefore no thought for godshatter: godshatter shall take thought for the things of itself. Sufficient unto the day is the lack-of-meta thereof.

For clarity's sake: Yes, I vehemently dispute the idea that a goal can't be more or less rational. That idea is wrong, which is quickly demonstrated by the fact that priors and utility functions can be transformed into each other, and that we have an objectively justifiable universal prior. (The general argument goes through even without such technical details, of course, so that stupid "but the choice of Turing machine matters" arguments don't distract.)
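[Editor's note: the "priors and utility functions can be transformed into each other" claim has a small, concrete kernel that can be sketched in code. Only the product p(w)·u(w) enters an expected-utility calculation, so rescaling the prior by any positive function f(w) while dividing the utility by f(w) (and renormalizing) yields a behaviorally indistinguishable agent. The toy worlds, numbers, and function names below are illustrative assumptions, not anything from the thread.]

```python
def expected_utility(prior, utility):
    # Expected utility over a finite set of worlds: sum over w of p(w) * u(w).
    return sum(prior[w] * utility[w] for w in prior)

# A toy agent: three possible worlds, a prior, and a utility function.
prior = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
utility = {"w1": 10.0, "w2": 0.0, "w3": -5.0}

# Transform: multiply the prior by f(w), divide the utility by f(w),
# then renormalize the prior and rescale the utility by the same
# constant so each product p(w) * u(w) is preserved exactly.
f = {"w1": 2.0, "w2": 0.5, "w3": 1.0}
raw = {w: prior[w] * f[w] for w in prior}
Z = sum(raw.values())
prior2 = {w: raw[w] / Z for w in raw}
utility2 = {w: utility[w] / f[w] * Z for w in utility}

# The two (prior, utility) pairs assign identical expected utilities,
# so no behavior can distinguish them.
print(expected_utility(prior, utility))
print(expected_utility(prior2, utility2))
```

This only shows the one-dimensional version of the underdetermination; the same rescaling argument applies per-world to any finite decision problem.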

Comment author: DaFranker 26 July 2012 07:31:44PM 5 points

Let's play rationalist Taboo!

Yes, I vehemently dispute this idea that a goal can't be more or less [probable to achieve higher expected utility for other agents than any other possible goal].

Yes, I vehemently dispute this idea that a goal can't be more or less [Probable to achieve higher expected utility according to goal.Parent().utilityFunction].

Yes, I vehemently dispute this idea that a goal can't be more or less [Kolmogorov-complex].

Yes, I vehemently dispute this idea that a goal can't be more or less [optimal towards achieving your values].

Yes, I vehemently dispute this idea that a goal can't be more or less [easy to describe as the ratio of two natural numbers].

Yes, I vehemently dispute this idea that a goal can't be more or less [correlated in conceptspace to the values in the agent's utility function].

Yes, I vehemently dispute this idea that a [proposed utility function] can't be more or less rational.

Yes, I vehemently dispute this idea that a [set of predetermined criteria for building a utility function] can't be more or less rational.

Care to enlighten me exactly on just what it is you're disputing, and on just what points should be discussed?

Edit: Fixed markdown issue, sorry!