XiXiDu comments on Rationalists don't care about the future - Less Wrong

3 Post author: PhilGoetz 15 May 2011 07:48AM

Comment author: XiXiDu 17 May 2011 03:38:45PM 0 points

Nicely put, very interesting.

Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.
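(A toy illustration of why splitting can be collectively rational, using a Nash-bargaining gloss that is my own, not something from the thread: suppose two sub-agents each care linearly about one charity, with a zero disagreement point. A unified agent maximizing a weighted sum is linear in the split, so it donates everything to one charity; the Nash bargaining solution for the pair splits the budget evenly.)

```python
def unified_split(budget, w1, w2):
    # A unified agent maximizing w1*x + w2*(budget - x) over x in [0, budget]
    # has a linear objective, so the optimum is a corner: all to one charity.
    return (budget, 0) if w1 >= w2 else (0, budget)

def nash_split(budget):
    # Two sub-agents with linear utilities x and (budget - x) and a zero
    # disagreement point: maximize the Nash product x * (budget - x),
    # which peaks at an even split.
    x = max(range(budget + 1), key=lambda x: x * (budget - x))
    return (x, budget - x)

print(unified_split(100, 0.6, 0.4))  # (100, 0)
print(nash_split(100))               # (50, 50)
```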

What about Aumann's agreement theorem? Doesn't this assume that contributions to a charity are based upon genuinely subjective considerations that are only "right" from the inside perspective of certain algorithms? Not to say that I disagree.

Also, if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how many of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?

Comment author: Perplexed 17 May 2011 11:21:54PM 2 points

Bob comes to agree that Alice likes ballet - likes it a lot. Alice comes to agree that Bob prefers nature to art. They don't come to agree that art is better than nature, nor that nature is better than art. Because neither is true! "Better than" is a three-place predicate (taking an agent id as an argument). And the two agree on the propositions Better(Alice, ballet, Audubon) and Better(Bob, Audubon, ballet).
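(A minimal sketch of the three-place predicate in Python; the dict encoding and the `better` helper are mine, just to make the arity explicit. Both agents can agree on both three-place propositions without any two-place proposition Better(ballet, Audubon) ever having a truth value.)

```python
# Better(agent, a, b): agent prefers option a to option b.
preferences = {
    ("Alice", "ballet", "Audubon"): True,   # Better(Alice, ballet, Audubon)
    ("Bob", "Audubon", "ballet"): True,     # Better(Bob, Audubon, ballet)
}

def better(agent, a, b):
    # Unlisted pairs default to False: the relation is agent-indexed,
    # so there is no agent-free fact of the matter to fall back on.
    return preferences.get((agent, a, b), False)

print(better("Alice", "ballet", "Audubon"))  # True
print(better("Bob", "Audubon", "ballet"))    # True
print(better("Alice", "Audubon", "ballet"))  # False
```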

...if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how many of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?

Assume that individual humans are compounds? That is not what I am suggesting in the above comment. I'm talking about real compound agents created either by bargaining among humans or by FAI engineers.

But the notion that the well-known, less-than-perfect rationality of real humans might be usefully modeled by assuming they have a bunch of competing and collaborating agents within their heads is an interesting one which has not escaped my attention. And, if pressed, I can even provide an evolutionary-psychology just-so story explaining why natural selection might prefer to place multiple agents in a single head.