RobbBB comments on Three ways CFAR has changed my view of rationality - Less Wrong

Post author: Julia_Galef 10 September 2013 06:24PM, 102 points


Comment author: RobbBB 10 September 2013 01:48:32AM, 7 points

I agree there's been some inconsistency in usage over the years. In fact, I think What Do We Mean By Rationality? and Rationality are simply wrong, which is surprising since they're two of the most popular and widely-relied-on pages on LessWrong.

Rationality doesn't ensure that you'll win, or have true beliefs; and having true beliefs doesn't ensure that you're rational; and winning doesn't ensure that you're rational. Yes, winning and having true beliefs are the point of rationality; and rational agents should win (and avoid falsehood) on average, over the long haul. But I don't think it's pedantic, if you're going to write whole articles explaining these terms, to do a bit more to firewall the optimal from the rational and recognize that rationality must be systematic and agent-internal.

Instrumental and epistemic rationality were always kind of handwavey, IMO. For example, if you want to achieve your goals, it often helps to have money. So if I deposit $10,000 in your bank account, does that make you more instrumentally rational?

Instrumental rationality isn't the same thing as winning. It's not even the same thing as 'instantiating cognitive algorithms that make you win'. Rather, it's 'instantiating cognitive algorithms that tend to make one win'. So being unlucky doesn't mean you were irrational.

Luke's way of putting this is to say that 'the rational decision isn't always the right decision'. Though that depends on whether by 'right' you mean 'defensible' or 'useful'. So I'd rather just say that rationalists can get unlucky.

You could define instrumental rationality as "mental skills that help people better achieve their goals". Then I could argue that learning graphic design makes you more instrumentally rational, because it's a mental skill and if you learn it, you'll be able to make money from anywhere using your computer, which is often useful for achieving your goals.

I'm happy to say that being good at graphic design is instrumentally rational, for people who are likely to use that skill and have the storage space to fit more abilities. The main reason we wouldn't speak of it that way is that it's not one of the abilities that's instrumentally rational for every human, and it's awkward to have to index instrumentality to specific goals or groups.

Becoming good at graphic design is another story. That can require an investment large enough to make it instrumentally irrational, again depending on the agent and its environment.

You could define epistemic rationality as "mental skills that help you know what's true". Then I could argue that learning about chess makes you more epistemically rational, because you can better know the truth of statements about who's going to win chess games that are in progress.

I don't see any reason not to bite that bullet. This is why epistemic rationality can become trivial when it's divorced from instrumental rationality.

Comment author: Eliezer_Yudkowsky 10 September 2013 02:14:30AM, 15 points

Rationalists should not be predictably unlucky.

Comment author: RobbBB 10 September 2013 02:45:18AM, 10 points

Yes, if it's both predictable and changeable. Though I'm not sure why we'd call something that meets both those conditions 'luck'.

Comment author: NancyLebovitz 10 September 2013 03:57:31PM, 6 points

"Predictable" and "changeable" have limits, but people generally don't know where those limits are. What looks like bad luck to one person might look like the probable consequences of taking stupid chances to another.

Or what looks like a good strategy for making an improvement to one person might look like knocking one's head against a wall to another.

Comment author: RobbBB 11 September 2013 11:06:58PM, 1 point

The point you and Eliezer (and possibly Vaniver) seem to be making is that "perfectly rational agents are allowed to get unlucky" isn't a useful meme, either because we tend to misjudge which things are out of our control or because it's just not useful to pay any attention to those things.

Is that a fair summary? And, if so, can you think of a better way to express the point I was making earlier about conceptually distinguishing rational conduct from conduct that happens to be optimal?

ETA: Would "rationality doesn't require omnipotence" suit you better?

Comment author: Vaniver 10 September 2013 09:54:36PM, 9 points

Are you familiar with Richard Wiseman, who has found that "luck" (as the phrase is used by people in everyday life to refer to people and events) appears to be both predictable and changeable?

Comment author: RobbBB 11 September 2013 11:03:29PM, 4 points

That's an interesting result! It doesn't surprise me that people frequently confuse which complex outcomes they can and can't control, though. Do you think I'm wrong about the intension of "luck"? Or do you think most people are just wrong about its extension?

Comment author: Vaniver 11 September 2013 11:19:41PM, 6 points

I think the definition of 'luck' as 'complex outcomes I have only minor control over' is useful, as well as the definition of 'luck' as 'the resolution of uncertain outcomes.' For both of them, I think there's meat to the sentence "rationalists should not be predictably unlucky": in the first, it means rationalists should exert a level of effort justified by the system they're dealing with, and not be dissuaded by statistically insignificant feedback; in the second, it means rationalists should be calibrated (and so P_10 or worse events happen to them 10% of the time, i.e. rationalists are not surprised that they lose money at the casino).
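Vaniver's second reading, calibration, can be made concrete with a small simulation. This is a minimal sketch, not anything from the thread: the forecaster below is artificial and calibrated by construction (each event occurs with exactly the probability assigned to it), which is the ideal being described. We bucket stated probabilities into deciles and check that events in each decile occur about as often as stated.

```python
import random

def calibration_table(forecasts, outcomes, buckets=10):
    """Group (probability, 0/1 outcome) pairs into equal-width buckets and
    return (bucket midpoint, observed frequency) for each non-empty bucket."""
    groups = [[] for _ in range(buckets)]
    for p, o in zip(forecasts, outcomes):
        groups[min(int(p * buckets), buckets - 1)].append(o)
    return [((i + 0.5) / buckets, sum(g) / len(g))
            for i, g in enumerate(groups) if g]

random.seed(0)
# An ideally calibrated forecaster: each event actually occurs with
# exactly the probability assigned to it.
forecasts = [random.random() for _ in range(100_000)]
outcomes = [1 if random.random() < p else 0 for p in forecasts]

for midpoint, freq in calibration_table(forecasts, outcomes):
    print(f"said ~{midpoint:.2f}, happened {freq:.2f}")
```

For a calibrated forecaster each observed frequency sits near its bucket's midpoint; "P_10 or worse" events land in the bottom decile about 10% of the time. A real forecaster's table would show where she is over- or under-confident.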

Comment author: RobbBB 11 September 2013 11:25:02PM, 2 points

Ahh, thanks! This helps me better understand what Eliezer was getting at. I was having trouble thinking my way into other concepts of 'luck' that might avoid triviality.

Comment author: Carinthium 10 September 2013 07:56:15AM, 0 points

Theoretically speaking (rare though it would be in practice), there are circumstances where that might happen: a rationalist might refuse, on moral grounds, to use methods that would grant him an epistemic advantage.

Comment author: jdgalt 10 September 2013 08:37:55PM, -1 points

It seems to me that some of LW's attempts to avoid "a priori" reasoning have tripped up right at their initial premises, by assuming as premises propositions of the form "The probability of possible-fact X is y%." (LW's annual survey repeatedly insists that readers make this mistake, too.)

I may have a guess about whether X is true; I may even be willing to give or accept odds on one or both sides of the question; but that is not the same thing as being able to assign a probability. For that you need conditions (such as where X is the outcome of a die roll or coin toss) where there's a basis for assigning the number. Otherwise the right answer to most questions of "How likely is X?" (where we don't know for certain whether X is true) will be some vague expression ("It could be true, but I doubt it") or simply "I don't know."

Comment author: gjm 15 September 2013 12:32:15AM, 5 points

Refusing to assign numerical probabilities because you don't have a rigorous way to derive them is like refusing to choose whether or not to buy things because you don't have a rigorous way to decide how much they're worth to you.

Explicitly assigning a probability isn't always (perhaps isn't usually) worth the trouble it takes, and rushing to assign numerical probabilities can certainly lead you astray -- but that doesn't mean it can't be done or that it shouldn't be done (carefully!) in cases where making a good decision matters most.

When you haven't taken the trouble to decide a numerical probability, then indeed vague expressions are all you've got, but unless you have a big repertoire of carefully graded vague expressions (which would, in fact, not be so very different from assigning probabilities) you'll find that sometimes there are two propositions for both of which you'd say "it could be true, but I doubt it" -- but you definitely find one more credible than the other. If you can make that distinction mentally, why shouldn't you make it verbally?

Comment author: jdgalt 23 November 2013 03:42:17AM, 1 point

If it were a case like you describe (two competing products in a store), I would have to guess, and thus would have to try to think of some "upstream" questions and guess those, too. Not impossible, but unlikely to unearth worthwhile information. For questions as remote as P(aliens), I don't see a reason to bother.

Have you seen David Friedman's discussion of rational voter ignorance in The Machinery of Freedom?