
Lumifer comments on A simple game that has no solution - Less Wrong Discussion

Post author: James_Miller 20 July 2014 06:36PM 10 points



You are viewing a single comment's thread.

Comment author: Lumifer 22 July 2014 06:49:34AM 1 point

certainly makes sense from their perspective.

That may well be so, but this is a rather different claim than the "Rational Choice assumption".

We know quite well that people are not rational. Why would you model them as rational agents in game theory?

Comment author: TrE 22 July 2014 07:16:45AM *  1 point

As I wrote above, in the limit of large stakes, long pondering times, and decisions jointly made by large organizations, people do actually behave rationally. As an example: bidding for oil drilling rights can be modelled as an auction with incomplete and imperfect information. Naïve bidding strategies fall prey to the winner's curse. Game theory models these situations as Bayesian games and computes the resulting Bayesian Nash equilibria.

Guess what? The companies actually bid the way game theory predicts!
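Here is a minimal simulation sketch of that point (Python; the number of bidders, the signal noise, and the constant bid shading are made-up illustrative parameters, not the exact equilibrium strategy). A bidder who naively bids their own signal loses money on average, because the winner is usually the bidder whose signal most overestimates the true value; shading the bid downward, which is the kind of adjustment the Bayesian Nash equilibrium prescribes, removes the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_winner_profit(bid_fn, n_bidders=5, n_rounds=100_000, noise=1.0):
    """Average profit of the winning bidder in a common-value,
    first-price, sealed-bid auction where each bidder only sees
    a noisy signal of the item's true value."""
    value = rng.uniform(10.0, 20.0, size=n_rounds)               # true common value
    signals = value[:, None] + rng.normal(0.0, noise, (n_rounds, n_bidders))
    bids = bid_fn(signals)
    winners = bids.argmax(axis=1)                                # highest bid wins
    winning_bids = bids[np.arange(n_rounds), winners]
    return float((value - winning_bids).mean())                  # value won minus price paid

# Naive strategy: bid exactly what your signal says the lease is worth.
naive = lambda s: s

# Shaded strategy: discount the signal to offset the selection effect
# (a crude constant shading, not the exact equilibrium bid function).
shaded = lambda s: s - 1.5

print("naive bidders, winner's avg profit: ", avg_winner_profit(naive))   # negative: winner's curse
print("shaded bidders, winner's avg profit:", avg_winner_profit(shaded))  # slightly positive
```

The constant shading here is just a stand-in; the real equilibrium bid function depends on the number of bidders and the signal distribution. But the qualitative prediction, that sophisticated bidders bid below their value estimates, is the behaviour referred to above in actual oil-lease bidding.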

Comment author: Lumifer 22 July 2014 02:47:10PM -1 points

in the limit of large stakes, long pondering times, and decisions jointly made by large organizations, people do actually behave rationally.

I still don't think so. To be a bit more precise: certainly people behave rationally sometimes, and I will agree that things like long deliberation or joint decisions (given sufficient diversity of the deciding group) tend to increase rationality. But I don't think that, even in the limit, assuming rationality is a "safe" or a "fine" assumption.

Example: international politics. Another example: organized religions.

I also think that in analyzing this issue there is a danger of constructing rational narratives after the fact via the claim of revealed preferences. Let's say entity A decides to do B. It's very tempting to say "Aha! It would be rational for A to decide to do B if A really wants X, therefore A wants X and behaves rationally". And certainly, that does happen on a regular basis. However, what also happens is that A really wants Y and decides to do B on non-rational grounds, or just makes a mistake. In that case our analysis of A's rationality is wrong, but it's hard for us to detect that without knowing whether A really wants X or Y.