LawrenceC comments on Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally - Less Wrong
Hm. Since this is a core definition, I have an urge to examine it very carefully. First, "performance" is a bit fuzzy; would you mind if I replaced it with utility? We would get "rationality maximizes expected utility". I have a few questions about that.
Rationality maximizes. That implies that every rational action must maximize utility. Anything that does not maximize utility is not (fully) rational. In particular, satisficing is not rational.
Rationality maximizes expected utility. A great deal of heavy lifting is done by this word, and there are some traps here. For example, if you define utility as "that which you want" and add a little bit about revealed preferences, we get caught in a loop: you maximize what you want, and how do we know what you want? Why, it's whatever you maximize. In general, almost every action maximizes some utility, and moreover there is no requirement for the utility function to be stable across time, so this gets complicated quite fast.
Rationality maximizes expected utility. At issue here are risk considerations. You can wave them away by saying that one should maximize risk-adjusted utility, but in practice this is a pretty big blind spot. Faced with estimated distributions of future utility, most people would pick the one with the highest mean (they pick the maximum expected value), but that ignores the width of the distributions, which is rarely a good idea.
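To make the mean-versus-width point concrete, here is a minimal sketch (all numbers hypothetical): two lotteries with the same expected value but very different spreads, compared under a concave utility function, which is the standard way risk aversion enters an expected-utility calculation.

```python
import math

# Two hypothetical lotteries as (outcome, probability) pairs.
# Both have the same mean (50), but very different widths.
safe = [(49, 0.5), (51, 0.5)]
risky = [(1, 0.5), (99, 0.5)]

def expected_value(lottery):
    return sum(p * x for x, p in lottery)

def expected_utility(lottery, u=math.log):
    # A concave utility function (here log) penalizes spread:
    # a wide distribution loses expected utility even when the
    # raw expected value is unchanged.
    return sum(p * u(x) for x, p in lottery)

print(expected_value(safe), expected_value(risky))       # both 50.0
print(expected_utility(safe) > expected_utility(risky))  # True
```

A maximizer of raw expected value is indifferent between the two, while the risk-adjusted comparison prefers the narrow distribution; that gap is exactly the blind spot described above.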
Take curiosity. It's an accepted rationalist virtue. And yet I don't see how it maximizes expected utility.
I'm not sure if this is correct, but my best guess is:
It maximizes utility, insofar as most goals are better achieved with more information, and people tend to systematically underestimate the value of collecting more information, or suffer from biases that prevent them from acquiring it. In other words, curiosity is virtuous because humans are bounded and flawed agents, and it helps rectify the biases that we fall prey to. Just as being quick to update on evidence is a virtue, and scholarship is a virtue.
There are a couple of problems here. First is the usual thing forgotten on LW -- costs. "More information" is worthwhile iff its benefits outweigh the costs of acquiring it. Second, your argument implies that, say, attempting to read all of Wikipedia (or the Encyclopedia Britannica, if you are worried about stability) from start to finish would be a rational thing to do. Would it?
No, it isn't. Being curious is a good heuristic for most people, because most people are in the region where gathering information costs less than its expected value. I don't think we disagree on anything concrete: I don't claim that curiosity is rational in itself, a priori, but that it is a fairly good heuristic.
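The cost-benefit framing in this exchange is the standard value-of-information calculation. Here is a minimal sketch with hypothetical numbers: compare the best action chosen in ignorance against the expected payoff if the state were known before acting; the difference is the most that gathering the information can be worth.

```python
# Hypothetical two-action, two-state decision problem.
# payoff[action][state]
payoff = {"invest": {"boom": 100, "bust": -60},
          "hold":   {"boom": 10,  "bust": 10}}
p = {"boom": 0.5, "bust": 0.5}

# Best expected payoff when acting *without* information:
ev_no_info = max(sum(p[s] * payoff[a][s] for s in p) for a in payoff)

# Expected payoff if we could learn the state *before* acting:
ev_perfect_info = sum(p[s] * max(payoff[a][s] for a in payoff) for s in p)

# Expected value of perfect information: gathering the information
# is worthwhile iff its cost is below this number.
evpi = ev_perfect_info - ev_no_info
print(ev_no_info, ev_perfect_info, evpi)  # 20.0 55.0 35.0
```

This is the quantitative version of "benefits must outweigh the costs of acquiring it": curiosity pays exactly when the price of looking is below the expected value of what looking reveals.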
This is a good point about taking the costs into account. I want to cover this idea in my third post, which I am still writing, but it will probably be something like Principle 3 -- your rationality depends on the usefulness of your internal representation of the world. My view is that truth seeking should be viewed as an optimization process: if it doesn't allow you to become more optimal, then it is not worth it. I have a post about this here.