Eliezer_Yudkowsky comments on Three ways CFAR has changed my view of rationality - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I agree there's been some inconsistency in usage over the years. In fact, I think What Do We Mean By Rationality? and Rationality are simply wrong, which is surprising since they're two of the most popular and widely-relied-on pages on LessWrong.
Rationality doesn't ensure that you'll win, or have true beliefs; and having true beliefs doesn't ensure that you're rational; and winning doesn't ensure that you're rational. Yes, winning and having true beliefs is the point of rationality; and rational agents should win (and avoid falsehood) on average, in the long haul. But I don't think it's pedantic, if you're going to write whole articles explaining these terms, to do a bit more to firewall the optimal from the rational and recognize that rationality must be systematic and agent-internal.
Instrumental rationality isn't the same thing as winning. It's not even the same thing as 'instantiating cognitive algorithms that make you win'. Rather, it's, 'instantiating cognitive algorithms that tend to make one win'. So being unlucky doesn't mean you were irrational.
Luke's way of putting this is to say that 'the rational decision isn't always the right decision'. Though that depends on whether by 'right' you mean 'defensible' or 'useful'. So I'd rather just say that rationalists can get unlucky.
I'm happy to say that being good at graphic design is instrumentally rational, for people who are likely to use that skill and have the storage space to fit more abilities. The main reason we wouldn't speak of it that way is that it's not one of the abilities that's instrumentally rational for every human, and it's awkward to have to index instrumentality to specific goals or groups.
Becoming good at graphic design is another story. That can require an investment large enough to make it instrumentally irrational, again depending on the agent and its environment.
I don't see any reason not to bite that bullet. This is why epistemic rationality can become trivial when it's divorced from instrumental rationality.
Rationalists should not be predictably unlucky.
Yes, if it's both predictable and changeable. Though I'm not sure why we'd call something that meets both those conditions 'luck'.
"Predictable" and "changeable" have limits, but people generally don't know where those limits are. What looks like bad luck to one person might look like the probable consequences of taking stupid chances to another.
Or what looks like a good strategy for making an improvement to one person might look like knocking one's head against a wall to another.
The point you and Eliezer (and possibly Vaniver) seem to be making is that "perfectly rational agents are allowed to get unlucky" isn't a useful meme, either because we tend to misjudge which things are out of our control or because it's just not useful to pay any attention to those things.
Is that a fair summary? And, if so, can you think of a better way to express the point I was making earlier about conceptually distinguishing rational conduct from conduct that happens to be optimal?
ETA: Would "rationality doesn't require omnipotence" suit you better?
Are you familiar with Richard Wiseman, who has found that "luck" (as the word is used in everyday life to describe people and events) appears to be both predictable and changeable?
That's an interesting result! It doesn't surprise me that people frequently confuse which complex outcomes they can and can't control, though. Do you think I'm wrong about the intension of "luck"? Or do you think most people are just wrong about its extension?
I think the definition of 'luck' as 'complex outcomes I have only minor control over' is useful, as well as the definition of 'luck' as 'the resolution of uncertain outcomes.' For both of them, I think there's meat to the sentence "rationalists should not be predictably unlucky": in the first, it means rationalists should exert a level of effort justified by the system they're dealing with, and not be dissuaded by statistically insignificant feedback; in the second, it means rationalists should be calibrated (and so P_10 or worse events happen to them 10% of the time, i.e. rationalists are not surprised that they lose money at the casino).
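The second sense of calibration above can be illustrated with a minimal simulation (the function name and parameters here are my own, purely for illustration): for a perfectly calibrated forecaster, events assigned probability 0.10 should occur about 10% of the time, so a long run of "P_10" outcomes that land roughly at that frequency is not surprising.

```python
import random

random.seed(0)

def simulate_calibrated_forecaster(n_trials, p=0.10):
    """Each trial: the forecaster assigns probability p to an event,
    and the event actually occurs with that same probability p
    (the definition of perfect calibration)."""
    hits = 0
    for _ in range(n_trials):
        if random.random() < p:
            hits += 1
    return hits / n_trials

freq = simulate_calibrated_forecaster(100_000)
# For a calibrated forecaster, the observed frequency of P_10 events
# should be close to 0.10 -- they happen, just not more often than stated.
```

A calibrated rationalist who walks into a casino with a 10% chance of coming out ahead, by this standard, should expect to lose roughly nine times out of ten, and should not treat those losses as evidence of irrationality.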
Ahh, thanks! This helps me better understand what Eliezer was getting at. I was having trouble thinking my way into other concepts of 'luck' that might avoid triviality.
Theoretically speaking (rare though it would be in practice), there are circumstances where that might happen: a rationalist might simply refuse, on moral grounds, to use methods that would grant him an epistemic advantage.