
Eliezer defines rationality as follows:

Instrumental rationality: systematically achieving your values.

....

Instrumental rationality, on the other hand, is about steering reality— sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”   

Extrapolating from the above definition, we can conclude that an act is rational if it causes you to achieve your goals/win. The issue with this definition is that we cannot evaluate the rationality of an act until after observing its consequences; we cannot determine whether an act is rational without first carrying it out. This makes the definition of limited use, since one may want to use the rationality of an act as a guide before acting.

 
Another definition of rationality is the one used in AI when talking about rational agents:             

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

A percept sequence is the sequence of all perceptions the agent has had from inception to the moment of action. The above definition is useful, but I don't think it is without issue: what is rational for two different agents A and B, with exactly the same goals in exactly the same circumstances, can differ. Suppose A intends to cross a road, checks both sides, ensures the road is clear, and then attempts to cross. However, a meteorite strikes at that exact moment, and A is killed. A is not irrational for attempting to cross the road, given that they did not know of the meteorite (and thus could not have accounted for it). Suppose B has more knowledge than A, and so knows that there is a substantial delay between meteorite strikes in the vicinity; B crosses after A and makes it across safely. We cannot reasonably say B is more rational than A.
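To make the textbook definition concrete, here is a minimal sketch of what "select an action that is expected to maximize its performance measure" amounts to in code. It is only illustrative: the function and parameter names (choose_action, world_model, performance, and so on) are my own, not from any textbook.

```python
# Illustrative sketch of the AI-textbook definition of a rational agent:
# pick the action with the highest *expected* performance, where the
# expectation is taken over the agent's beliefs, not over the true world.
# All names here are hypothetical, chosen only for this example.

def choose_action(percept_sequence, actions, world_model, performance):
    """Return the action with the highest expected performance measure.

    world_model(percept_sequence, action) returns (probability, outcome)
    pairs: the agent's beliefs about what the action leads to, given the
    evidence so far. performance(outcome) scores an outcome.
    """
    def expected_performance(action):
        return sum(prob * performance(outcome)
                   for prob, outcome in world_model(percept_sequence, action))

    return max(actions, key=expected_performance)


# Toy usage tied to the road-crossing example: A's model assigns no
# probability to meteorite strikes, because A has no evidence of them.
actions = ["cross", "wait"]

def a_world_model(percepts, action):
    if action == "cross":
        return [(0.99, "crossed safely"), (0.01, "hit by car")]
    return [(1.0, "still waiting")]

performance = {"crossed safely": 10, "hit by car": -100, "still waiting": 0}.get

print(choose_action([], actions, a_world_model, performance))  # -> "cross"
```

Note that the expectation is computed from the agent's own world model. On this reading, A's decision to cross is still the expectation-maximizing one given A's evidence, which is why the meteorite does not make A irrational.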
 

The above scenario doesn't break our intuitions about what is rational, but what about other scenarios? What about the gambler who knows not of the gambler's fallacy, and believes that because the die hasn't rolled an odd number for the past n turns, it will definitely roll odd this time (after all, the probability of not rolling odd n times is 2⁻ⁿ)? Are they then rational for betting the majority of their funds on the die rolling odd? Letting what's rational depend on the knowledge of the agent involved leads to a very broad (and possibly useless) notion of rationality. It may lead to what I call "folk rationality" (doing what you think would lead to success). Barring a few exceptions (extremes of emotion, compromised mental states, etc.), most humans are folk rational. However, this folk rationality isn't what I refer to when I say "rational".
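The arithmetic is right as far as it goes: the chance of n consecutive non-odd rolls is 2⁻ⁿ. The fallacy is in the inference about the next roll, which is independent of the streak and still comes up odd half the time. A quick simulation (purely illustrative; the function name is mine) makes the independence concrete:

```python
import random

def p_odd_after_even_streak(n, trials=100_000):
    """Estimate P(next roll is odd | previous n rolls were all even).

    The gambler expects this to be near 1; independence of the rolls
    keeps it at 0.5 regardless of the streak length n.
    """
    streaks = odd_next = 0
    for _ in range(trials):
        rolls = [random.randint(1, 6) for _ in range(n + 1)]
        if all(r % 2 == 0 for r in rolls[:n]):   # first n rolls were even
            streaks += 1
            odd_next += rolls[n] % 2 == 1        # did the next roll come up odd?
    return odd_next / streaks if streaks else float("nan")

print(p_odd_after_even_streak(3))  # ~0.5, not "definitely odd"
```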

 
How, then, do we define what is rational in a way that avoids the two issues I highlighted above?

 

Comments

There is also one more level of rationality, which is often assumed but not stated explicitly. I would call it the inner definition of rationality:

"Rationality is a behavior which could be presented by a short set of simple rules". The rules include math, Byes theorem, utility function, virtues of rationality, you name it.

The main problem is the following: does the "inner definition" of rationality correspond to the "outer definition", that is, rationality as winning? In other words, does knowing the correct short set of rules result in consistent winning?

If we think it does, then all rationality manuals are useful, as by installing the correct set of short rules, we will attain perfect rationality and start to win.

However, if winning requires something extremely complex, like a very large neural net that can't be captured by a short set of rules, we need to update our inner definition of rationality.

For example, a complex neural net may win at cat recognition, but it doesn't know any set of rules for how to recognize a cat.

It's been almost four months since I wrote this thread. I've started to see the outline of an answer to my question. Over the course of the next year, I will begin documenting it.

Being lucky is not being rational. However, it is undeniable that winning a lottery is mostly a positive outcome, and that it requires you to have purchased the ticket, which is a decision. A definition that looks only at outcomes would applaud the decision to buy the ticket (perhaps unconditionally).

The definition of instrumental rationality is most commonly invoked when criticising those who employ a complex methodology for choosing correctly, where the methodology itself can be criticised, or the agent had evidence that could have been construed as a reason to abandon the methodology. The criticism "before" instrumental rationality would focus on making an error in the application of a methodology, or on not having any methodology at all to make the decision. The common sentiment from these can seem like "have a methodology and apply it correctly". And it seems clear that there are better and worse methodologies, and one should try to apply the best available. And it seems "I had a methodology and applied it" doesn't make one "rational" (more like "dogmatic").

It seems one could have a reasonable chance of being "rational" even with bad methodologies, if one actively switches to and upgrades one's "carry on" methodology whenever one encounters new ones. It also seems that as the argument goes on, the focus on metacognition increases. This frames the previous criticisms in a new light. It's not that unmethodological decisions are "irrational" per se, but making them likely means you missed picking up a good methodology earlier, one that you could have applied here to great success. So rather than "having" a methodology, it's more important to "pick up" methodologies, with it being less essential whether you currently have a good methodology or not. With consistent pickups you should in the future have a high-quality methodology, but that is the effect rather than the means.

From Jonathan Baron's Thinking and Deciding:

The best kind of thinking, which we shall call rational thinking, is whatever kind of thinking best helps people achieve their goals. If it should turn out that following the rules of formal logic leads to eternal happiness, then it is “rational thinking” to follow the laws of logic (assuming that we all want eternal happiness). If it should turn out, on the other hand, that carefully violating the laws of logic at every turn leads to eternal happiness, then it is these violations that we shall call “rational.” When I argue that certain kinds of thinking are “most rational,” I mean that these help people achieve their goals. Such arguments could be wrong. If so, some other sort of thinking is most rational.

It may lead to what I call "folk rationality" (doing what you think would lead to success). Barring a few exceptions (extremes of emotion, compromised mental states, etc.), most humans are folk rational. However, this folk rationality isn't what I refer to when I say "rational".

How about "doing what you can figure out would lead to success"? The gambler could figure out the gambler's fallacy, but the person crossing the road couldn't figure out the meteorite.

In harder problems like Newcomb's Problem or Counterfactual Mugging, there are several layers of "figuring out" leading to different answers, and there's no substitute for using intelligence to choose between them. So to define what's rational, we need to define what's intelligent. People are working on that, but don't expect an answer soon :-)

What about the gambler who knows not of the gambler's fallacy, and believes that because the die hasn't rolled an odd number for the past n turns, it will definitely roll odd this time (after all, the probability of not rolling odd n times is 2⁻ⁿ)? Are they then rational for betting the majority of their funds on the die rolling odd? Letting what's rational depend on the knowledge of the agent involved leads to a very broad (and possibly useless) notion of rationality. It may lead to what I call "folk rationality" (doing what you think would lead to success).

I think it depends on where the knowledge comes from, right?

If he just has an instinct that a 6 should come up again, but can't explain where that instinct comes from or defend that belief in any kind of rational way other than "it feels right", then he's probably not being rational.

If he actually did an experiment and rolled a die a bunch of times, and just by coincidence it actually seemed that whenever a 6 hadn't come up for a while it would show up, then it might be a rational belief, even though it is incorrect. Granted, if he had better knowledge of statistical methods and such, he probably could have run the experiment in a better way, but I think if someone gathers actual data and uses that to arrive at an incorrect belief and then acts on that belief, he's still behaving rationally. Same thing if you developed your beliefs through other rational methods, like logical deduction based on other beliefs you had already established through rational means, or probabilistic beliefs based on some combination of other things you believe to be true and observations, etc.

A rational agent cannot actually know everything; all a rational agent can do is act on the best information it has. And you can only spend so many resources and so much time trying to perfect that information before acting on it.

So, I would say rationality is defined by:

A- how did you arrive at your beliefs about the state of the world, and

B- did you act in a way that would maximize your chances of "winning", if the beliefs you formed via rational methods are correct?

If he just has an instinct that a 6 should come up again, but can't explain where that instinct comes from or defend that belief in any kind of rational way other than "it feels right", then he's probably not being rational.

Maybe in the specific example of randomness, but I don't think you can say the general case of "it feels so" is indefensible. This same mechanism is used for the really complicated black-box intuitive reasoning that underpins any trained skill. So in areas one has a lot of experience in, or areas which are evolutionarily keyed in, such as social interactions or nature, this isn't an absurd belief.

In fact, knowing that these black-box intuitions exist means they have to be included in our information about the world, so "give high credence to the black box when it says something" may be the best strategy if one's ability for analytic reasoning is insufficient to determine strategies with better results than that.

Maybe in the specific example of randomness, but I don't think you can say the general case of "it feels so" is indefensible. This same mechanism is used for the really complicated black-box intuitive reasoning that underpins any trained skill. So in areas one has a lot of experience in, or areas which are evolutionarily keyed in, such as social interactions or nature, this isn't an absurd belief.

Eh. Maybe, but I think that any idea which seriously underpins your actions and other belief systems in an important way should be something you can justify in a rational way. That doesn't mean you always need to think about it that way; some things become "second nature" over time, but you should be able to explain the rational underpinnings if asked.

If you're talking about a trained skill, "I've been fixing cars for 20 years and in my experience when you do x you tend to get better results than when you do y" is a perfectly rational reason to have a belief. So is "that's what it said in my medical school textbook", etc.

But, in my experience, people who put too much faith in their "black boxes" and don't ever think through the basis of their beliefs tend to behave in systematically irrational ways that probably harm them.

It's funny, I think this is probably always true as a guideline (that you should try to justify all your ideas) but might always break down in practice: all your ideas probably can't ever be fully justified, because of Agrippa's trilemma. They're either justified in terms of each other or not justified, and if they are justified in terms of other ideas, they are eventually either circularly justified, or continue into infinite regress, or are justified by things that are themselves unjustified. We might gain some ground by separating ideas from evidence, and say we accept as axiomatic anything that is evidenced by inference, until we gain additional facts that lend context and resituate our model so that it can include previous observations... something like that. Or it might be that we just have to grandfather in some rules to avoid that Gödelian stuff. Thoughts?

Yes, that is a very good point. My current view is that the reason for this is a confusion between seeing knowledge as based on rationality when it is in reality based on experience. Rationality is the manipulation of basic experiential building blocks, and these "belief" blocks might correspond to reality or not. With the scientific method this correspondence has been clarified to such an extent that it seems as if knowledge is generated purely through rationality, but that is because we don't tend to follow our assumptions to the limits you are describing in your comment. If we check our assumptions, and then the assumptions behind our assumptions, etc., we will reach our fundamental presuppositions.

Yeah, that's a good point; on some level, any purely logical system always has to start with certain axioms that you can't prove within that system, and in the real world that's probably even more true.

I guess, ideally, you would want to be able to at least identify which of your ideas are axioms, and keep an eye on them in some sense to make sure they don't end up conflicting with other axioms?

All humanimal attempts to define rationality are irrational!