I wrote an Admin page for "What do we mean by 'Rationality'?" since this has risen to the status of a FAQ. Comments can go here.
The only legitimate purpose of words having definitions in the first place is to let two people communicate - the purpose of attaching certain syllables to certain concepts is to help transport meanings from one mind to another.
Not quite, I think. I frequently use words in my own thinking, and have much reason to think I'm not alone in this, and if you really think that's illegitimate then I'd be interested to know why. (I bet you don't.) And it sure seems like I make use of the fact that (some) words have definitions when I do that.
Even when talking to yourself, the rest of his point holds: there is nothing magical about the label "rationality" that you're going to find in the territory - it's just part of how you mark up your map.
That said, I don't actually seem to think in English (or any other interhuman language) all that much unless I'm planning out what to say (or what I want to say). This is something that I've only noticed fairly recently, and it seems to be something that most people don't realize.
Talking to yourself when planning what to say certainly counts as "for communication between two people".
Yes, the rest of Eliezer's point holds; that would be why I didn't criticize the rest of Eliezer's point.
Different people think in words to different extents. (And for some of what seems like thinking in words, perhaps the word-generation is more or less epiphenomenal -- though I'd expect it always has some value, e.g. in helping the short-term memory along.) I find that I use words in the same sort of way as I use diagrams or mathematical symbols: as a way to avoid losing track of what I'm thinking, and to enable some degree of rigour when it's needed.
Yes, there are situations when talking to yourself can usefully be considered "communication between two people", but those aren't the situations I had in mind.
There are only two ways for the mind to consciously process information: language or images. Some people can apparently think clearly and precisely in images - Nikola Tesla and Temple Grandin spring immediately to mind. Language is the only other way to think consciously and precisely, and for this purpose mathematics is a language.
You mean composers can't think consciously and precisely about sound? Chefs about taste? Perfumers about smell? Gymnasts about the feel of their moves?
They generally don't - at least, not in ways that they can communicate to others, and if they can't do that, why would we describe their thoughts as 'conscious' and 'precise'?
I don't see how "communicate to others" and "conscious/precise" are related. If something is unconscious, it can still be communicated unconsciously (e.g. body language). If something is imprecise, that doesn't stop it from being communicated. Conversely, just because something is conscious or precise doesn't mean it can be communicated, if there are no points of reference on the receiving end. If a chef or a gymnast tried to communicate with me about such matters, they would probably fail, but that doesn't mean the failure was on their end of the conversation -- and would have nothing to do with the consciousness or precision of the thoughts involved.
The definitions you specify for a word don't actually define it; they merely name a concept on your map. The concept is far richer than the "definition" by which you found it, and the lever of the word that you attached to it allows you to patch into the deeper machinery of your mind. You can use the levers yourself, to craft new structures with your own machinery.
None of which makes it any less true that words-with-definitions are sometimes useful in private thought as well as in communication. For instance, technical terms in mathematics such as "transitive" or "uncountable" can be used robustly in lengthy chains of reasoning largely because they have precise definitions. The fact that when I use such a word (privately or publicly) I have plenty of mental machinery linked with it besides the bare definition doesn't stop it being a definition. (Perhaps you're using "define" in what seems to me to be an eccentric way, such that in fact essentially no words have actual definitions. Feel free, but I don't find that helpful.)
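Since the word "transitive" came up: its usefulness in long chains of reasoning comes precisely from the fact that the bare definition licenses each step. A worked example (standard material, nothing specific to this thread):

A relation R is transitive iff for all a, b, c: (a R b and b R c) implies a R c.

Divisibility is transitive: if a | b and b | c, then b = ka and c = mb for some integers k and m, so c = (mk)a, i.e. a | c. Hence from 3 | 12 and 12 | 60 one concludes 3 | 60 in a single step, justified by the definition alone.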
I think we agree; I'm not sure what distinction you are trying to draw in this comment. Consider chess: what is the definition of a knight's move? There are rules of the game that the actions of the player must follow - the distilled form of conclusions - and there is the overarching machinery of thought. The rules make sure that you stay within the game after however many moves you take, and the thought allows you to find the winning moves.
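To make the chess analogy concrete, here is a minimal sketch in Python (the function name and coordinate convention are invented for illustration). The definition constrains which moves are in the game at all; finding a winning move is left entirely to the player's own machinery:

def is_legal_knight_move(src, dst):
    # A knight's move as a distilled rule: legal iff the displacement
    # is (1, 2) or (2, 1) in some order. Board bounds are ignored in
    # this sketch.
    (r1, c1), (r2, c2) = src, dst
    return sorted((abs(r1 - r2), abs(c1 - c2))) == [1, 2]

print(is_legal_knight_move((0, 1), (2, 2)))  # True: b1 to c3
print(is_legal_knight_move((0, 1), (1, 1)))  # False: not a knight's move

The rule says nothing about which legal move is any good; that is the "overarching machinery" part.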
You seemed to be disagreeing with me, but declined to say just what your disagreement was. So I had to guess, and I tried to respond to the criticism I thought you were making. Now it appears that we are in agreement. Fair enough; what then was your point?
(My point, in case it wasn't obvious, was that I think Eliezer erred when he wrote that the only legitimate use of definitions is to ease communication; I think they are sometimes helpful in private thought too.)
"I think they are sometimes helpful in private thought too."
Here I think you're erring: definitions are absolutely necessary in conscious thought. Without them, you don't have conscious processing.
A definition is not merely a name on your map, it's the location in the greater scheme of the map, the longitude and latitude. A definition fixes a notion with respect to some other notions, all of which together form your machinery, your belief network, your map. This machinery may bear no relation to reality, but then, to me, the point of definitions is to be clear, not accurate.
Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".
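A minimal sketch of that "winning" sense in Python may help; the actions, probabilities, and utilities below are invented purely for illustration:

# Instrumental rationality as expected-utility maximization (toy example).
actions = {
    "take_umbrella": {"dry": 0.95, "wet": 0.05},
    "leave_umbrella": {"dry": 0.60, "wet": 0.40},
}
utility = {"dry": 1.0, "wet": -2.0}  # how much each outcome is preferred

def expected_utility(outcome_probs):
    return sum(p * utility[o] for o, p in outcome_probs.items())

# Choose the action whose expected outcome ranks highest in your
# preferences, given what you currently believe.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # take_umbrella

Note this maximizes expected performance given the agent's beliefs, which is the point taken up in the Wikipedia definition below.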
In my opinion, Wikipedia puts things much better here:
Rationality is a central principle in artificial intelligence, where a rational agent is specifically defined as an agent which always chooses the action which maximises its expected performance, given all of the knowledge it currently possesses.
The advantage Wikipedia has is that it is talking about expected performance on the basis of the available information, not about actual performance. That emphasis is correct - rationality is (or should be) defined in terms of whether operations performed on the available information constitute correct use of the tools of induction and deduction - and should not depend on whether the information the agent has is accurate or useful.
This has been discussed many times: there is a distinction between trying to win and winning.
Exactly. Rationality is a property of our understanding of our thinking, not the thinking itself.
Being rational doesn't involve choosing correctly, it's about having a justified expectation that the choices you're making are correct.
Well, if the expectation is justified, you are choosing correctly.
Depends on how you look at it.
If the expectation is justified, then the choice is correct from your point of view. But it can easily be wrong in an absolute sense.
If you are allowed to look at statements in a way that inverts their meaning, you may as well close your eyes. Justified means being supported by a powerful truth-engine, not being accompanied by a believed rationalization. If, "from my point of view," it is correct to expect to fly safely when I step out the window, that doesn't make it correct; the expectation won't be justified in the normal use of the word.
Are you not getting the point? Agents can correctly apply inductive and deductive reasoning, but draw the wrong conclusion - because of their priors, or because of misleading sensory data. Rationality is about reasoning correctly. It is possible to reason correctly and yet still do badly - for example if a hostile agent has manipulated your sense data without giving you a clue about what has happened. Maybe you could have done better by behaving "irrationally". However, if you had no way of knowing that, the behaviour that led to the poor outcome could still be rational.
I absolutely agree with this point. Rationality in this sense is that truth-engine I named in the comment you replied to: it's built for a range of possible environments, but can fail in case of an unfortunate happenstance. As opposed to having an insane maintainer who is convinced that the engine works when in fact it doesn't, not just on the actual test runs, but on the range of possible environments for which it's supposedly built. When you are 90% sure that something will happen, you expect it NOT to happen 1 time in 10.
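That last sentence is easy to check numerically. A minimal simulation sketch in Python (trial count and seed are arbitrary):

import random

# A well-calibrated reasoner who is 90% sure is still wrong about
# 1 time in 10 - reasoning correctly is not the same as never losing.
random.seed(0)
trials = 100_000
failures = sum(random.random() >= 0.9 for _ in range(trials))
print(failures / trials)  # roughly 0.1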
"If "from my point of view", it is correct to expect to safely fly when I step out the window, it doesn't make it correct, "
Yeah, but your "point of view" doesn't include any stupid belief you have. If you could explicitly justify why you expected to fly when you stepped out that window, and trace that justification all the way back to elementary logic and fundamental observations, it would be totally rational for you to expect that.
It wouldn't be your fault if the "rules" suddenly changed so that you fell, instead.
That's a great analysis, and it should bring more clarity to our discussions if we can all agree on that or modify it as necessary until we basically agree.
One thing I am wondering about though is that the analysis seems to present the two subspecies of rationality -- epistemic and instrumental -- as being on equal footing, or as somehow equally fundamental. (That's my reading based on labeling them as 1 and 2 and not indicating that either is more fundamental.)
It seems to me though that instrumental rationality is what we really, really mean by rationality, and what you have referred to as epistemic rationality is one particular (astoundingly powerful) technique of instrumental rationality.
There are always multiple ways of mapping the territory. We have no way of deciding between them other than in terms of instrumental rationality, by choosing the one that seems most useful for achieving our values.... We might call that most useful map truth or corresponding to reality, but it only acquires that status via instrumental rationality. Epistemic rationality thus depends on instrumental rationality, but the converse is not true.
Due to the recent flurry of arguments about "defining rationality", and the fact that my karma has risen over 20, I was considering writing a post on the same topic as yours, Eliezer. There's sort of a "darn it" feeling that you beat me to the punch, but I'm also glad that you did, because your writing is much clearer and more elegant than mine. Plus, your linking to your past posts on OB on the subject is much more comprehensive than anything I could have accomplished.
My one comment is that I noticed you never used the terms "descriptivism" and "prescriptivism", while I almost always do when talking about, e.g., the absurdity of thinking that the contents of "the" dictionary (as if there is only one dictionary, or as if all dictionaries are in perfect agreement with each other) determine the meaning of words. Are you intentionally avoiding these terms, do you simply not find them useful, or is there some other reason?
Eliezer: "Similarly, if you find yourself saying "The rational thing to do is X, but the right thing to do is Y" then you are almost certainly using one of the words "rational" or "right" in a way that a huge chunk of readers won't agree with. In this case - or in any other case where controversy threatens - you should substitute more specific language: "The self-benefiting thing to do is to run away, but I hope I would at least try to drag the girl off the railroad tracks""
Yes. Rational does not equal "sensible" or "putting self first".
So can we be rational in arguing about morality? If I decide that human life has value, I can argue from that prior, rationally, that it is Right to try and drag the girl off the railroad tracks.
I believe that human life has value, even though that is not a completely rigorous, defined statement of my belief about human life. I doubt I have the words to fully express my beliefs about the value of human life.
It is possible that I generalise "human life has value" from my own selfish needs: I do not like being alone for too long, and I would have to adjust and learn a great deal before I could survive without Society.
So I believe that for me to believe "human life has value" is Right, or at least permissible, but not necessarily Rational (epistemic or instrumental) in itself, though I can take it as an axiom, and argue rationally based upon it.
Or if my belief that "human life has value" derives rationally from "I will base my values on my own selfish needs" which derives from "I want to survive": in "I want to survive" there is a Want, which is not derived rationally from anything.
I'm new to LessWrong so pardon me if my question has an obvious answer or seems silly, and please let me know if there are flaws in my reasoning.
"The self-benefiting thing to do is to run away, but I hope I would at least try to drag the girl off the railroad tracks." In this context, is the rational choice different for different people?
I believe that rationality can mean different things to different people because people have different moral compasses - a set of values they believe are right or most important to them. If I place another human's life in high enough regard that I would try to drag the girl off the railroad tracks, and that instinct is greater than my need to run away (self-preservation), then I will end up trying to help the girl. If my priorities are reversed and I value self-preservation more, I will likely run away.
Now what if I'm the sort of person who values self preservation more and hence am likely to run away, but I WANT to be the kind of person who would stop and help the girl? I'm making a conscious effort to be selfless in life. What is the rational choice for me then? I understand that if I were to actually be in such a situation, I would not have the time to logically make up my mind about what to do but would simply act on instinct, but I'm still interested in understanding what the rational choice would be in that situation.
Suppose I have a blind man and a sighted man. The blind man has a cataract that could be repaired with surgery. I tell them that a jar contains a very large number of pebbles, all of them either green or blue, and that there are twice as many of one color as the other. I pull out a random pebble, which turns out to be green, and show it to both men. I ask them to write down the probability that the next pebble I pull out will be green. The sighted man writes 51%; the blind man writes 50%. Who is more rational?
The sighted man is executing an incorrect probability update on better information, leading him to a slightly higher expected score. I answer that the blind man is more rational, unless he has refused to repair the cataract for no apparent reason, in which case he is exhibiting a different, unusual, and in this case slightly more damaging form of irrationality.
But if you define rationality as either "obtaining beliefs that correspond to reality as closely as possible" or "achieving your values", it seems that the sighted man has been more successful. I guess "believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory" is better, since the blind man has less evidence. I think the question now, though, is when a person "has" a given piece of evidence. What if I fail to recognize that a certain fact is evidence for a certain hypothesis? What if I do recognize this, but don't have the time to apply Bayes' law?
In other words, it's possible to construct and then resolve an arbitrary problem based on the given description.
The blind man should guess 50%. The sighted man should guess about 56%: seeing one green pebble gives a posterior of 2/3 that the jar is mostly green, and hence a predictive probability of 5/9 that the next pebble is green. So the blind man has made a good guess given the information he has, while the sighted man has better information and has made a mess of his reasoning, but still has the more accurate figure.
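For reference, the Bayesian arithmetic spelled out as a short Python sketch (assuming the jar is large enough that draws are effectively independent given its composition):

# Two hypotheses about the jar, equally likely a priori:
# "mostly green" (2/3 green) or "mostly blue" (1/3 green).
prior = {"mostly_green": 0.5, "mostly_blue": 0.5}
p_green = {"mostly_green": 2 / 3, "mostly_blue": 1 / 3}

# Update on the observation of one green pebble (Bayes' theorem).
evidence = sum(prior[h] * p_green[h] for h in prior)
posterior = {h: prior[h] * p_green[h] / evidence for h in prior}
print(posterior["mostly_green"])  # 2/3

# Predictive probability that the NEXT pebble is green.
p_next = sum(posterior[h] * p_green[h] for h in posterior)
print(p_next)  # 5/9, about 0.56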
How to quantify rationality? I don't know. Maybe model the methodology used in IQ tests. If you are comparing rationality across individuals, you should probably make sure they have the same access to the test information - or the results are likely to be skewed.
Does it count if I want to call the blind man irrational for not having repaired his eyesight? Or are we pretending he has a good reason? Or would the intended example be the same if he had irreparable blindness?
Thanks, Eliezer,
I needed appropriate names to specify which of those two I was referring to. I'll be sure to use 'epistemic' or 'instrumental' when the context demands. That'll save many a distracting explanatory sentence.
When reading a textbook or technical work, I frequently use marginalia to comment on the work. I find it a useful tool to increase reading comprehension, force me to organize my thoughts, and allow me to return to that part of the book years later to use it as a reference. I'm reading through The Sequences, but since they are in digital form I am unable to follow my usual practice. Instead, I intend to leave several comments such as this in the appropriate discussion threads. I initially was using a Word document, but found it tedious to constantly transfer between computers and devices. If these comments and notifications are objectionable to anyone, I'll switch back. The FAQ says that it is worthwhile to comment on ancient posts and long-dead threads, so I took that as encouragement. I only have a couple of tangential comments on this particular piece, but I expect most future marginalia to be much more extensive.
It's worrying that people will falsely guess P("Bill plays jazz") < P("Bill plays jazz" & "Bill is an accountant"). If these were the profiles for two murder suspects, the jury could easily make a very bad judgment call. However, we are evolutionarily wired to be good at social profiling. I suspect that the error here is that people read the problem as if it were the type we are good at. When all you have is a hammer, everything looks like a nail. For example, they might read the problem to mean P(A) < P(A + B), where + indicates "and/or" (logical disjunction). This would also explain why people tend to believe "if A implies B, then B implies A" (essentially, that correlation implies causation).
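The conjunction rule itself is easy to verify mechanically: every world where "jazz and accountant" holds is also a world where "jazz" holds, so P(A & B) can never exceed P(A). A minimal Python sketch with an invented joint distribution:

# Joint probabilities below are made up for illustration.
# Keys are (plays_jazz, is_accountant).
joint = {
    (True, True): 0.05,
    (True, False): 0.10,
    (False, True): 0.45,
    (False, False): 0.40,
}
p_jazz = sum(p for (jazz, _), p in joint.items() if jazz)
p_jazz_and_acct = joint[(True, True)]
print(p_jazz, p_jazz_and_acct)  # 0.15 0.05 - the conjunction is never larger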
That's petty. The purpose of such statements is to establish group norms, not assert high status. You're shocked that someone would create a community website and then propose to determine what sort of community would arise from it?
I am not sure you understood the intent of my comment. "The royal we" is a phrase with a technical meaning. See: http://en.wikipedia.org/wiki/Pluralis_Majestatis It seems like an accurate statement of the facts to me.
Whatever it refers to, my immediate reaction was that the "we" doesn't seem to include me - which seems unfortunate, since - AFAICS - my usage is the more standard one. Anyway: your blog - lay down whatever terminology you like.
Is this Eliezer's blog?!
I thought it was OUR blog - as in our community and not Eliezer's community.
And yes the more I think about it the more I think a FAQ which defines rationality as "we" use it needs this comment section.
I do not find Eliezer's definition in itself sufficient. Defining rationality will always be a work in progress, and new suggestions should be added. As I see it, the present definition limits itself to a mechanical rationality (as is Eliezer's wont) and excludes "searching" - the act of imagination.