I understand rationality to be related to a set of cognitive tools rather than a certain personality or genetic type. Like any other tool, it can be misused. You can kill a person with a spoon, but that is a misuse of its intended function. You can cut a pound of raw meat with a chainsaw, but that is a misuse of its intended function. Tools are designed with both intended purposes and functional limitations. Intended purposes serve to provide the user with an understanding of how to achieve optimal impact. For example, some intended uses of a sword would be killing, disabling, acting, or training (and many more). Tools can be used outside of their intended purposes. The use might not result in optimal output, it might even damage the tool, but it is possible. A sword can be used to cut wood, to clear shrubbery, as a decoration; a sword could even be used as a doorstop. A doorstop is far removed from the function a sword was designed for, but it nevertheless exists as a possibility given the structure of a sword. Functional limitations are desired uses that a tool cannot meet given its structure. A sword alone cannot allow you to fly or breathe underwater, at least not without making significant alterations to its structure, rendering it no longer a sword.

Every tool exists with both intended functions and functional limitations. From reading some essays on this website, I get the impression that many members of this community view rationality as a universal tool: that no matter what the conflict, a certain degree of rationality would provide the appropriate remedy. I would like to question this idea. I think there are both functional limitations to rationality and ways to misuse one's powers of reasoning. To address these, it is first necessary to identify what the primary function of rationality is.

The Function of Rationality

From reading various articles on this website I would suggest that rationality is seen as a tool for accuracy in obtaining desired results, or as Eliezer puts it, for “winning.” I agree with this analysis. Rationality is a tool for accuracy; increased accuracy leads to successful obtainment of some desired result; obtainment of some desired result can broadly be described as “winning.” If rationality is a tool for increasing accuracy, then the question becomes “are there ever times when it is more beneficial to be inaccurate?” Or, in other words, are there times when it should be desired to lose?

Why would a person ever want to lose?

I can think of two situations where increased accuracy is detrimental: 1.) In maintaining moderation; 2.) In maintaining respectful social relations.

1.) *It is better to air on the side of caution*: The more accurate you become, the faster you obtain your goals. The faster you obtain your goals, the quicker you progress down a projected course. In some sense this is a good thing, but I do not think it is universally good. **The pleasure of winning may deter the player from the fundamental question “Is this a game I should be playing?”** A person who has played the violin from an early age could easily find themselves barreling along a trajectory that leads them to a conservatory without addressing the fundamental question “is becoming a violinist what is going to most benefit my life?” It is easy to do something you are good at, but it is fallacious to think that just because you are good at something it is what you should be doing. If Wile E. Coyote has taught us anything, it is that progressing along a course too fast can result in unexpected pitfalls. Our confidence in an idea, a job, a projected course has no real bearing on its ultimate benefit to us (see my comment here for more on how being wrong feels right). While we might not literally run three meters off a cliff and then fall into the horizon, is it not possible for things to be moving too fast?

2.) *“Winning” all the time causes other people narrative dissonance*: People don’t like it when someone is right about everything. It is suffocating. Why is that? I am sure that a community of dedicated rationalists will have experienced this phenomenon, where relationships with family, friends, and other personal networks are threatened or damaged by you having an answer for everything, every casual debate, every trivial discussion; where you being extremely good at “winning” has had a negative effect on those close to you. I have a theory for why this is; it is rather extensive, but I will try to abridge it as much as possible. First, it is based in the sociological field of symbolic interactionism, where individuals are constantly working to achieve some role confirmation in social situations. My idea is that there are archetypes of desired roles, and that every person needs the psychological satisfaction of being cast into those roles some of the time. I call these roles “persons of interest.” The wise one, the smart one, the caring one, the cool one, the funny one: these are all roles of interest that I believe all people need the chance to act out. If in a relationship you monopolize one of these roles to the point that your relations are unable to take it on, then I believe you are hurting your relationship. If you win too much, you deprive those close to you of the chance of winning, effectively causing them anxiety.

For example, I know that when I was younger my extreme rationality placed a huge burden on my relationship with my parents. After going to college I began to have a critique of almost everything they did. I saw a more efficient, more productive way of doing things than my parents, who had received outdated educations. For a while I was so mad that they did not trust me enough to change their lives, especially when I knew I was right. Eventually, what I realized was that it is psychologically damaging for parents when their twenty-something kid feels that it is their job to show them how to live. Some of the things (like eating healthier and exercising more) I did not let go, because I felt the damages of my role reversal were less than the damages of their habits; however, other ideas, arguments, beliefs, I did let go because they did not seem worth the pain I was causing my parents. I have experienced the need to not win as much in many other relationships. Whether with friends, teachers, lovers, peers, or colleagues: in general, if one person monopolizes the social role of imparter of knowledge, it can be psychologically damaging to those they interact with. I believe positive coexistence is more important than achieving some desired impact (winning). Therefore I think it is important to ease up on one’s accuracy for the sake of one’s relationships.

- Honestly, I have more limitations and some misuses I want to address, but I decided to hold off and see what the initial reception of my essay was. I realize this is a rationalist community and I am not trying to pick a fight. I just strongly believe in moderation and wanted to share my idea. Please don't hate me too much for that.

- HungryTurtle

 

Rationally Irrational
417 comments

Your article is interesting, and a lot of the points you make are valid. In practice, LW-style rationality might well have some of the effects you describe, especially in the hands of those who use it or understand it in a limited way. However, I don't think your point is valid as a general argument. For example:

If you win too much, you deprive those close to you of the chance of winning, effectively causing them anxiety.

To me, this seems to be based on a fallacious understanding of LW-style "winning." Winning here means accomplishing your goals, and using a "rationality" toolkit to win means that you accomplish more of your goals, or accomplish them better, than you would have without those tools.

For some people, being right about everything is a goal. For some people, harmonious social relationships are a goal. For a lot of people, these are both goals, although they may be prioritized differently, i.e. a different weight may be placed on each. If the goal of being right conflicts with the goal of harmonious relationships, and harmonious relationships take priority, then according to the toolkit of "rationality", it is rational to lay off a bit and av... (read more)

0HungryTurtle
Thank you for your comments; they have helped crystallize my ideas. When I said to "rethink what game you are playing," that was a misleading statement. It would be more accurate to my idea to say that sometimes you have to know when to stop playing. The point I was trying to make is not that the goal you choose is damaging to your relations, but literally winning itself, regardless of the goal. From my experience, people don't care about what's right as much as they care about being right. Let's imagine, as you say, that your goal is social harmony. This is not an individual goal, like golf; it is a team goal. Achieving this goal requires both a proper method and team subordination. If you let the other players on your team play out their strategies, then you will not win. However, because of the phenomenon I have attempted to explain above (people's need to fulfill certain ideal roles), taking the steps necessary to "win" is damaging to the other players, because it forces them to acknowledge their subordination, and thus in reality does not achieve the desired goal. Does this make sense? It is similar to the Daoist idea of action vs. inaction. Inaction is technically a type of action, but it is also defined by existing outside of action. The type of "game" I am talking about is technically a game, but it is defined by relinquishing the power/position of control. Even if you can win, or know how to win, sometimes what people need more than winning is to attempt to win by themselves and to know that you are in it with them. Of course there are times when it is worth more to win, but I think there are times when it is worth the risk of losing to allow others the chance to feel that they can win, even if it is a lesser win than you envision. Thank you again for your comments.
3Swimmer963 (Miranda Dixon-Luinenburg)
I'm glad my comment helped clarify your ideas for you. I can't say that I entirely understand your point, though. Stop playing what game? Say you're with a group of friends, and you're all playing a game together, like Monopoly or something. You're also playing the "game" of social relations, where people have roles like "the smart one", "the cool one" or "the helpful one" that they want to fulfill. Do you mean that sometimes you have to know when to stop playing to win at Monopoly in order to smooth over the social relations game and prevent people from getting frustrated and angry with you? Or do you mean that sometimes you have to stop playing the social status/relations game? The former is, I think, fairly obvious. Some people get too caught up in games like Monopoly and assign more value to "winning" than to letting everyone else have fun, but that's more a failure of social skills than "rationality". As for the latter, I'm not sure I understand what "deciding to stop playing" at social relations would mean. That you would stop trying to make yourself look good? That you would stop talking to the other people with you? More to the point, I don't think social relations is a game where one person wins over everyone else. If I got to look cool, but it meant that some of my friends didn't have fun and felt neglected, I certainly wouldn't feel like I'd won the game of social harmony. This paragraph makes it sound like you're talking about social status. Yes, social status is somewhat of a zero-sum game, in that you being cooler and getting tons of attention makes everyone else a bit less cool by comparison and takes away from the attention they get. But that's in no way the goal of social harmony, at least not as I define it. In a harmonious group, no one feels neglected, and everyone enjoys themselves. In summary, I think you may just be describing a problem that doesn't really happen to me (although, thinking back, it happened to me more back when I was 12 an
-1HungryTurtle
Sorry for such a late response; life really picked up this month in many amazing and wondrous ways, and I found myself lacking the time or desire to respond. Now things have lulled back, and I would like to address your, and all the other, responses to my ideas. When I say game I am referring to a board game, a social game, a dream, really any desired outcome. Social status is a type of game, and it was the one I thought provided the most powerful analogy, but it is not the overall point. The overall point is the social harmony you speak of. You say that in a harmonious group no one feels neglected and everyone enjoys themselves; I agree with this definition of harmony. The idea I am trying to express goes beyond the poor social skills you are assuming I am attributing to this "nerdy community" (which I am not). Beyond individually motivated goals, I am suggesting that for no one to feel neglected and everyone to enjoy themselves, it is necessary for the actor to stop trying to achieve any goal. The pursuit of any one goal-orientation automatically excludes all other potential goal-orientations. If you have an idea of what is funny, what is cool, then in attempting to actualize these ideas you are excluding all other possible interpretations of them. For no one to feel neglected and everyone to truly enjoy themselves, everyone’s ideas of happiness, security, camaraderie, humor, etc. must be met. My idea is somewhat similar to Heisenberg’s uncertainty principle, in that your intentionality makes the goal you desire unattainable. Does this make sense?
1Swimmer963 (Miranda Dixon-Luinenburg)
Do you mean that the person in question has to just sit back and relax? That they have to stop trying to steer the direction of the conversation and just let it flow? Or that they have to focus on other people's enjoyment rather than their own enjoyment? The former doesn't feel true for me, in that having someone with good social skills and an idea of people's interests steer the conversation can make it more enjoyable rather than less so. The latter, maybe true, but I wouldn't want to live like that.
-8metaphysicist

[A]re there times when it should be desired to lose[?]

When you should "lose", "losing" is the objective, and instrumental rationality is the art of successfully attaining this goal. When you do "lose", you win. On the other hand, if you "win", you lose. It's very simple.

6Matt_Simpson
Cue laugh track.
0HungryTurtle
Thank you for your insightful comments. I chose to call it winning to try and build off the existing terminology of the community, but that might have been a mistake. What was meant by "winning" was goal achievement; what was meant by "losing" was acting in a way that did not move towards any perceived goal. Perhaps it would be better described as having no goal. Inaction is technically a type of action, but I think there needs to be a distinction between them. Choosing to suspend intentionality is technically a type of intentionality, but I still think there needs to be a distinction. What do you think?
[-]TimS200

Some of the things (like eating healthier and exercising more) I did not let go, because I felt the damages of my role reversal were less than the damages of their habits; however, other ideas, arguments, beliefs, I did let go because they did not seem worth the pain I was causing my parents.

Why call this losing instead of winning-by-choosing-your-battles? I don't think members of this community would endorse always telling others "I know a better way to do that" whenever one thinks this is true. At the very least, always saying that risks being wrong because (1) you were instrumentally incorrect about what works better or (2) you did not correctly understand the other person's goals.

More generally, the thing you are labeling rationality is what we might call straw vulcan rationality. We don't aspire to be emotionless computrons. We aspire to be better at achieving our goals.

Eliezer wrote a cute piece about how pathetic Spock was to repeatedly predict things had a <1% chance of succeeding when those sorts of things always worked. As outsiders, we can understand why the character said that, but from inside Spock-the-person, being repeatedly wrong like that shows something ... (read more)

9Swimmer963 (Miranda Dixon-Luinenburg)
Reminds me of this chapter from Eliezer's fanfiction. "Winning" in the naive, common-usage-of-the-word sense doesn't always result in better accomplishing your goals, and it is sometimes "rational" to lose, which means that losing is sometimes "winning" in the LW/rationality sense. Words are confusing sometimes!
0HungryTurtle
TimS, It is always a pleasure talking! Thanks for the great link to the straw vulcan rationality. Ironically, what Julia says here is pretty much the point I am trying to make: humans are irrational by nature; humans are also social by nature. There is individual health and there is social health. Because humans are irrational, oftentimes social health contradicts individual health. That is what I call rationally irrational.
8Swimmer963 (Miranda Dixon-Luinenburg)
One: what is your evidence that humans are "irrational by nature", and how do you define this irrationality? Two: I've found that since I started reading LW and trying to put some of its concepts into practice, my ability to handle social situations has actually improved. I am now much better at figuring out what people really want and what I really want, and then finding a way to get both without getting derailed by which options "feel high-status". The specific LW rationality toolkit, at least for me, has been VERY helpful in improving both my individual psychological health and my "social health."
8faul_sname
One: I think Lukeprog says it pretty well here: Two: Good point. Social goals and nonsocial goals are only rarely at odds with one another, so this may not be a particularly fruitful line of thought. Still, it is possible that the idea of rational "irrationality" is neglected here.
2thomblake
This seems implausible on the face of it, as goals in general tend to conflict. Especially to the extent that resources are fungible.
1Swimmer963 (Miranda Dixon-Luinenburg)
I agree with you on Lukeprog's description being a good one. I'm curious about whether HungryTurtle agrees with this description, too, or whether he's using a more specific sense of "irrational."
0HungryTurtle
Hahah, then why is smoking cool for many people? Why is binge drinking a sign of status in American colleges? Why do we pull all-nighters and damage our health in pursuit of the perfect paper, party, or performance? Social goals are, a large portion of the time, at odds with individual health goals.
0faul_sname
I'm probably generalizing too much from my own experience, which is social pressure to get educated and practice other forms of self-improvement. I've never actually seen anyone who considers binge drinking a good thing, so I had just assumed that was the media blowing a few isolated cases out of proportion. I could easily be wrong though.
-4HungryTurtle
Do you think humans can avoid interpreting the world symbolically? I do not. The human body and the human brain are hardwired to create symbols. Symbols are irrational. If symbols are irrational, and humans are unable to escape symbols, then humans are fundamentally irrational. That said, I should have added to my above statement that humans are also rational by nature.
7[anonymous]
Why isn't this just a contradiction? In virtue of what are these two sentences compatible?
0Gastogh
I think they're compatible in that the inaccurate phrasing of the original statement doesn't reflect the valid idea behind it. Yobi is right: it's not a clean split into black and white, though the original statement reads like it is. I think it would've been better phrased as, "There are rational sides to humans. There are also irrational sides to humans." The current phrasing suggests the simultaneous presence of two binary states, which would be a contradiction.
-9HungryTurtle
1Swimmer963 (Miranda Dixon-Luinenburg)
In what sense do you mean that symbols are irrational? Is it because they only imperfectly represent the world that is "really out there?" Is there a better option for humans/hypothetical other-minds to use instead of symbols?
-2HungryTurtle
Symbols by definition are analogies to reality. Analogies are not rationally based; they are rhetorically based. Rhetoric is by no means rational in the sense that this community uses the word. Therefore language is by definition irrational. No, that is my point. Humans have no other way to relate to reality. The idea of a better option is a fiction of essentialist philosophy.
0Dustin
I don't know if this is what you were thinking of, but here is what lukeprog wrote about Spock.
4sixes_and_sevens
I believe this is what he's thinking of.
3TimS
That's it. Thanks.
[-][anonymous]70

You've confused goal-winning (LW sense) with social-winning.

Rationality is the optimal tool for goal-winning, which is always what is desirable. This relation is established by definition, so don't bother criticizing it.

You can show that our current understanding of rationality or winning does not live up to the definition, but that is not a criticism of the definition. Usually when people debate the above definition, they are taking it to be an empirical claim about spock or some specific goal, which is not how we mean it.

EDIT: Also, "air on the side". It's "err" as in "error". Read Orwell's "Politics and the English Language".

4faul_sname
This phrase worries me.
3wedrifid
I hope it means "If you want to criticize this relationship you must focus your criticism on the definition that establishes it".
0faul_sname
Yes, but consider that social winning is quite often entangled quite closely with goal-winning, and that the goal sometimes is social winning. To paraphrase a fairly important post, you only argue that a point is true by definition when it's not true any other way.
2Nectanebo
I agree with you that that particular sentence could have been phrased better. But nyan_sandwich pointed out the key point: that turtle was arguing based upon a specific definition of rationality that did not mean the same thing that LW refers to when they talk about rationality. Therefore when she said the words "by definition" in this case, she was trying to make clear that arguing about it would be arguing about the definition of the word, and not anything genuinely substantial. Therefore it seems very unlikely that sandwich was falling into the common problem the article you linked to is referring to: of saying that (a thing) is (another thing) by definition when actually the definition of the thing does not call for such a statement to be the case at all. Yes, the wording made it seem like it may have been the case that she was falling into that trap; however, I perceived that what she was actually doing was trying to inform hungry turtle that he was talking about a fairly different concept from what LW talks about, even though we used the same word (a phenomenon that is explained well in that sequence).
1HungryTurtle
Nectanebo, Perhaps you can explain to me how the LW definition differs from the one I provide, because I pulled my definition from this site's terminology specifically to avoid this issue. I am willing to accept that there is a problem in my wording of this definition, but I respectfully hold the position that we are talking about the same rationality. In my opinion, the problem is not with my concept of rationality, but that I am attacking, even if it is a mild attack, an idea that is held in the highest regard among this community. It is the dissonance of my idea that leads nyan_sandwich to see fault with it, not the idea itself. I hope we can talk this out and see what happens.
4Nectanebo
Increased accuracy is not rationality. Think about it this way: if you want increased accuracy, then rationality is the best way to increase accuracy. If you want to maintain social relations, then the rational choice is the choice that optimally maintains social relations. I think LessWrong considers rationality the art of finding the best way of achieving your goals, whatever they may be. Therefore if you think that being rational is not necessarily the best option in some cases, we are not talking about the same concept any longer, because when you attack rationality in this way, you are not attacking the same rationality that people on LessWrong refer to. For example, it is silly for people to attempt to increase accuracy to the detriment of their social relationships. This is irrational if you want to maintain your social relationships, based on how LessWrong tends to use the word. The points I make have been covered fairly well by many others who have replied in this thread. If you want to know more about what we may have been trying to say, that sequence about words also covers it in detail; I personally found that particular sequence to be one of the best and most useful, and it is especially relevant to the discussion at hand.
0adamisom
Anything can be included in rationality after you realize it needs to be. Or: you can always define your utility function to include everything relevant, but in real-life estimations of utility, some things just don't occur to us (at least until later). So sure, increased accuracy [to social detriment] is not rationality, once you realize it. But you need to realize it. I think HungryTurtle is helping us realize it. So I think the real question is: is your current model of rationality, the way you think about it right now and actually (hopefully) use it, suboptimal?
0HungryTurtle
Do you ever think it is detrimental to have goals?
0Nectanebo
Sure, some goals may be detrimental to various things. But surely people have the goal of not wanting detrimental goals, if the detriment is to things they care about.
0HungryTurtle
Yes! So this idea is the core of my essay. I suggest that the individual who has the goal of not wanting detrimental goals acknowledges the following:

1.) Goal-orientations (meaning the desired state of being that drives one's goals at a particular time) are dynamic.
2.) The implementation of genuine rational methodology to a goal-orientation consumes a huge amount of the individual/group's resources.

If the individual has the goal of not having detrimental goals, and if they accept that goal-orientations are dynamic and that a genuinely rational methodology consumes a huge amount of resources, then such an individual would rationally desire a system for regulating when to implement rational methodology and when to abandon rational methodology due to the potential triviality of immediate goals. Because the individual is choosing to abandon rationality in the short term, I label this as being rationally irrational.
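(A minimal sketch of what such a regulating system might look like; the numbers and the worth_deliberating helper are invented for illustration, not something proposed in the thread. The rule: pay the cost of full rational deliberation only when an immediate goal's stakes justify it.)

```python
# Invented illustration: gate costly deliberation on the stakes of the goal.

def worth_deliberating(stakes: float, deliberation_cost: float,
                       expected_gain: float) -> bool:
    """Apply full rational methodology only if the expected improvement
    in the outcome exceeds the resources the deliberation consumes."""
    return stakes * expected_gain > deliberation_cost

# A pickup soccer game: trivial stakes, so play on autopilot.
print(worth_deliberating(stakes=1.0, deliberation_cost=5.0, expected_gain=0.2))    # False

# A career decision: high stakes, so careful analysis pays for itself.
print(worth_deliberating(stakes=1000.0, deliberation_cost=5.0, expected_gain=0.2)) # True
```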
0TimS
Let's play this out with an example. Imagine I have a goal of running a marathon. To do that, I run every day to increase my endurance. One day, I trip and fall, twisting my ankle. My doctor tells me that if I run on the ankle, I will cause myself permanent injury. Using my powers of rationality, I decide to stop running until my ankle has healed, to avoid the permanent injury that would prevent me from achieving my goal of running a marathon. Is my decision to stop training for the marathon, which inevitably moves my goal of running in a marathon further away, "rationally irrational"? Or is there something wrong with my example?
0HungryTurtle
No, your example is fine, but I would say it is the most elementary use of this idea. When faced with a serious threat to health, it is relatively easy and obvious to realign goal-orientation. It is harder to make such realignments prior to facing serious damage or threats. In your example, a more sophisticated application of this idea would theoretically remove the possibility of twisting an ankle during training, excluding any extreme circumstances. I imagine this might raise a lot of questions, so let me explain a little more. Training is not serious. The purpose of training is to prepare for a race, but the purpose of training is subsumed under the larger purpose of personal health, happiness, and survival. Therefore, any training one does should always be taken within the context of being trivial in light of these overarching goals. Having this mindset, I do not see how a runner could sprain their ankle, barring extreme circumstances. A real runner, taking these overarching values into account, would:

* Prior to running, build knowledge about safe running style and practices
* During running, be primarily concerned with safety and developing positive running habits rather than meeting some short-term goal

To me, someone who has integrated my idea would never prioritize a race to the point that they risk spraining their ankle in training. Of course there are bizarre situations that are hard or impossible to plan for. But tripping and twisting your ankle does not seem to be one of these.
0Nectanebo
That kinda falls apart, because it's not being irrational if it's rational not to consume too much of your resources on "rational methodology". I guess "rationally irrational" is just a bad label, because you're not abandoning rationality; you're just doing the rational thing by choosing not to use too much of your resources when it's better not to. So at no point are you doing anything that could be considered irrational.
0HungryTurtle
Let’s say I am playing soccer. I have decided that any goal-orientation within my soccer game is ultimately not worth the expenditure of resources beyond X amount. Because of this I have tuned out my rational calculating of how to best achieve a social, personal, or game-related victory. To anyone who has not appraised soccer-related goal-orientations in this way, my actions would appear irrational within the game. Do you see how this could be considered irrational? I definitely understand how this idea can also be understood as still rational; it is because of that I called it 'rationally irrational,' implying the actor is never truly abandoning rationality. The reason I chose to word it this way instead of finding some other way to label it as meta-rationality is for rhetorical purposes. This community targets a relatively small demographic of thinkers: individuals who have both the capacity and the work history to achieve upper levels of rationality. Perhaps this demographic is the majority within this blog, but I thought it was highly possible that there existed Less Wrong members who were not quite at that level, and that the idea would be more symbolically appealing if it suggested an element of necessary irrationality within the rationalist paradigm. Maybe this was a poor choice, but it was what I chose to do.
5thomblake
That is a good assessment. Saying something false constitutes exceptionally bad rhetoric here.
0HungryTurtle
I still don't think what I said is false; it is a rhetorical choice. Saying it is rational irrationality still makes sense; it just hits some buzzwords for this group and is less appealing than choosing some other form of label.
3thomblake
No, it doesn't. It's a blatant contradiction, which is by definition false. Also: Yes, someone could consider it irrational, and that person would be wrong.
0HungryTurtle
Rational irrationality is talking about rationality within two different levels of analysis. As a result of being rational at the level of goal prioritization, the individual abandons rational methodology at the level of goal achievement.

L1 - Goal Prioritization
L2 - Goal Achievement

If I am at a party, I have desired outcomes for my interactions and experiences that produce goals. In prioritizing my goals I am not abandoning these goals, but placing them in the context of having desires that exist outside of that immediate situation. I am still trying to achieve my goals, but by correctly assessing their relevance to overarching goals, I either prioritize or de-prioritize them. If I de-prioritize my party goals, I limit the effort I put into their achievement. So even if I could think of more potent and effective strategies for achieving my party goals, I have abandoned those strategies. L1 rationality limits L2 rationality within a low-priority goal context. Rationally condoning the use of irrational methods in minor goal achievement.
0[anonymous]
That seems false. Perhaps saying something false for the purpose of supporting something else is bad rhetoric. There are possibly also ways of saying something false, or contexts for saying the false thing, that are bad rhetoric. But for the most part, saying false stuff is legitimate rhetoric for a bad conclusion.
2Nectanebo
Good, now that you've realised that, perhaps you might want to abandon that name. The idea of using your time and various other resources carefully and efficiently is a good virtue of rationality. Framing it as being irrational is inaccurate and kinda incendiary.
-1HungryTurtle
Here is my reasoning for choosing this title. If you don't mind, could you read it and tell me where you think I am mistaken? I realize that saying 'rationally irrational' appears to be a contradiction. However, the idea is talking about the use of rational methodology at two different levels of analysis. Rationality at the level of goal prioritization potentially results in the adoption of an irrational methodology at the level of goal achievement.

L1 - Goal Prioritization
L2 - Goal Achievement

L1 rationality can result in a limitation of L2 rationality within a low-priority goal context. Let’s say that someone was watching me play a game of soccer (since I have been using the soccer analogy). As they watched, they might critique the fact that my strategy was poorly chosen, and that the overall effort exerted by me and my teammates was lackluster. To this observer, who considers themselves a soccer expert, it would be clear that my and my team’s performance was subpar. The observer took notes on all our flaws and inefficient habits, then after the game wrote them all up to present to us. Upon telling me all these insightful critiques, the observer is shocked to hear that I am grateful for his effort, but am not going to change how I or my team plays soccer. He tries to convince me that I am playing wrong, that we will never win the way I am playing. And he is correct. To any knowledgeable observer I was playing the game of soccer poorly, even irrationally. Without knowledge of L1 (which is not observable), the execution of L2 (which is observable) cannot be deemed rational or irrational, and in my opinion, will appear irrational in many situations. Would you say that to you it appears irrational that I have chosen to label this idea as 'rationally irrational'? If that is correct, I would suggest that I have some L1 that you are unaware of, and that while my labeling is irrational in regard to L2 (receiving high karma points / recognition in publishing my essay on your
4TheOtherDave
I think you're welcome to have whatever goals you like, and so are the soccer players. But don't be surprised if the soccer players, acknowledging that your goal does not in fact seem to be at all relevant to anything they care about, subsequently allocate their resources to things they care about more and treat you as a distraction rather than as a contributor to their soccer-playing community.
-2HungryTurtle
What would you say if I said that caring about my goals in addition to their own goals would make them better soccer players?
0TheOtherDave
I would say "Interesting, if true. Do you have any evidence that would tend to indicate that it's true?"
0HungryTurtle
I'm trying to find a LW essay, I can't remember what it is called, but it is about maximizing your effort in areas of highest return. For example, if you are a baseball player, you might be around 80% in terms of pitching and 20% in terms of base running. To go from 80% up in pitching becomes exponentially harder, whereas learning the basic skill set to jump from dismal to average base running is not. Basically, rather than continuing to grasp at perfection in one skill set, it is more efficient to maximize basic levels in a variety of skill sets related to the target field. Do you know the essay I am talking about?
0TheOtherDave
Doesn't sound familiar. Regardless, I agree that if I value an N% improvement in skill A and skill B equivalently (either in and of themselves, or because they both contribute equally to some third thing I value), and an N% improvement in A takes much more effort than an N% improvement in B, then I do better to devote my resources to improving B. Of course, it doesn't follow from that that for any skill A, I do better to devote my resources to improving A.
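(To make the allocation logic concrete, here is a toy sketch with invented numbers; the effort_for_next_percent and allocate helpers are hypothetical, not anything from the thread. When each percentage point is valued equally, a greedy allocator spends its effort budget on whichever skill is currently cheapest to improve, which reproduces the pitching/base-running intuition above.)

```python
# Toy model of "maximize effort in areas of highest return."
# All numbers are invented; the only claim is the shape of the reasoning.

def effort_for_next_percent(level: float) -> float:
    """Effort cost of one more percentage point; the cost grows steeply
    as the skill approaches mastery (diminishing returns)."""
    return 1.0 / max(1e-9, 100.0 - level)

def allocate(skills: dict, budget: float) -> dict:
    """Greedily spend an effort budget, always buying the cheapest
    available percentage point across all skills."""
    skills = dict(skills)
    while True:
        cheapest = min(skills, key=lambda s: effort_for_next_percent(skills[s]))
        cost = effort_for_next_percent(skills[cheapest])
        if cost > budget:
            return skills
        skills[cheapest] += 1.0
        budget -= cost

print(allocate({"pitching": 80.0, "base running": 20.0}, budget=0.5))
# Base running climbs into the 50s while pitching stays at 80: every point
# of base running is far cheaper than the next point of pitching.
```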
0HungryTurtle
Ok, then the next question: would you agree that, for a human, skills related to emotional and social connection maximize the productivity and health of a person?
0TheOtherDave
No. Though I would agree that for a human, skills related to emotional and social connection contribute significantly to their productivity and health, and can sometimes be the optimal place to invest effort in order to maximize productivity and health.
2HungryTurtle
Ok, so these skill sets contribute significantly to the productivity and health of a person. Then would you disagree with the following:

1. Social and emotional skills significantly contribute to health and productivity.
2. Any job, skill, hobby, or task that is human-driven can benefit from an increase in the acting agent's health and productivity.
3. Therefore social and emotional skills are relevant (to some degree) to all other human-driven skill sets.
0TheOtherDave
Sure, agreed.
0HungryTurtle
Ok, so then I would say that the soccer player, in being empathetic to my objectives, would be strengthening his or her emotional/social capacity, which would benefit his or her health/productivity, and thus benefit his or her soccer playing.
0TheOtherDave
I'm not sure what you mean by "being empathetic to [your] objectives," but if it involves spending time doing things, then one question becomes whether spending a given time doing those things produces more or less improvement in their soccer playing. I would certainly agree that if spending their available time doing the thing you suggest (which, incidentally, I have completely lost track of what it is, if indeed you ever specified) produces more of an improvement in the skills they value than doing anything else they can think of, then they ought to do the thing you suggest.
0TimS
I wouldn't agree to that statement without a lot more context about a particular person's situation.
0adamisom
TheOtherDave is being clear. There are obviously two considerations, right? The comparative benefit of improving two skillsets (take into account comparative advantage!) and the comparative cost of improving two skillsets. Conceptually easy.
0TimS
Who are you talking about? Your example was a team filled with low effort soccer players. Specifically, whose goals are you considering beside your own?
2TimS
Can you be more concrete with your soccer example. I don't understand what you mean.
0HungryTurtle
In a game of soccer, you could want to improve teamwork, you could want to win the game, you could want to improve your skills, you could want to make a good impression. All of these are potential goals of a game of soccer. There is a group of objectives that would most accurately achieve each of these possible goals. I am suggesting that, for each goal, achieving it to the utmost level requires an objective with relatively high resource demands. Is that better?
3TimS
An observer who thinks you are being stupid for not committing all possible effort to achieving your goal in the game (for example, impressing others) needs a justification for why achieving this goal is that important. In the absence of background like "this is the only chance for the scout from the professional team to see you play, sign you, and cause you to escape the otherwise un-escapable poverty and starvation," the observer seems like an idiot. I hope you don't think pointing out the apparent idiocy of the observer is an insightful lesson. In short, show some examples of people here (or anywhere) making the mistake (or mistakes) you identify, or stop acting like you are so much wiser than us.
0A1987dM
What would that even mean? Do you by detrimental mean something different than ‘making it harder to achieve your goals’?
0HungryTurtle
Detrimental means damaging, but you could definitely read it as damaging to goals. So do you think it is ever damaging or ever harmful to have goals?
0A1987dM
Goals can be damaging or harmful to each other, but not to themselves. And if you have no goal at all, there's nothing to be damaged or harmed.
0HungryTurtle
I think goals can be damaging to themselves. For example, I think anyone who has the explicit goal of becoming the strongest they can be, effectively limits their strength by the very nature of this type of statement.
0Swimmer963 (Miranda Dixon-Luinenburg)
Can you clarify this? What do you think is the goal other than 'be the strongest I can be' that would result in me ending up stronger? (Also, not sure what sort of strength you are talking about here: physical? psychological?)
0HungryTurtle
To me true strength is a physical and psychological balance. I feel that anyone who has the goal of being "the strongest" (whether they mean physically, mentally, in a game, etc.) is seeking strength out of a personal insecurity about their strength. Being insecure is a type of weakness. Therefore, having the goal of being the strongest will never allow them to be truly strong. Does that make sense? It is a very Daoist idea.
0Swimmer963 (Miranda Dixon-Luinenburg)
Do you mean someone who wants to be 'the strongest' compared to others? I don't think that's ever a good goal, because whether or not it is achievable doesn't depend on you. But 'be the strongest' is also an incredibly non-specific goal, and problematic for that reason. If you break it down, you could say "right now, my weaknesses are that a) I'm out of shape and can't jog more than 1 mile, and b) I'm insecure about it"; then you could set sub-goals in both these areas, prioritize them, make plans on how to accomplish them, and evaluate afterwards whether they had been accomplished...and then make a new list of weaknesses, and a new list of goals, and a new list of plans. You're doing a lot more than just trying to be as strong as you can, but you're not specifically holding back or trying not to be as strong as you can either, which is what your comment came across as recommending.
0HungryTurtle
No, not compared to others. Just someone whose goal is to be the strongest. It is the fact that it is an "est"-based goal that makes it damaging to itself. I suppose if I were to take all the poetry out of the above statement, I would say that any goal that involves "ests" (fastest, strongest, smartest, wealthiest, etc.) involves a degree of abstraction that signifies a lack of true understanding of what the actual quality/state of being targeted encompasses, and that until said person better understands that quality/state they will never be able to achieve said goal. Note that all your examples take my goal and rewrite it to have incredibly practical parameters. You define reachable objectives as targets for your examples, but the point of my example was that it was a goal that lacked such empirically bounded markers.
0Swimmer963 (Miranda Dixon-Luinenburg)
OK. Makes sense. As I said in this comment, apparently my brain automatically converts abstract goals into sub-goals...so automatically that I hadn't even imagined someone could have a goal as abstract as 'be as strong as I can' without breaking it down and making it measurable and practicable, etc. I think I understand your point; it's the format of the goal that is damaging, not the content in itself.
0HungryTurtle
Ahhh I am a moron, I did not even read that. I read dave's post prior to it and assumed it was irrelevant to the idea I was trying to convey. X_X
0HungryTurtle
Yes, exactly. And if you do convert abstract goals into sub-goals, you are abnormally brilliant. I don't know if you were taught to do that, or you just deduced such a technique on your own, but the majority of people, the vast majority, are unable to do that. It is a huge problem, one many self-help programs address, and also one that the main paradigms of American education are working to counteract. It really is no small feat.
1Swimmer963 (Miranda Dixon-Luinenburg)
I think it comes from having done athletics as a kid... I was a competitive swimmer, and very quickly it became an obvious fact to me that in order to achieve the big abstract goal (being the fastest and winning the race) you had to train a whole lot. And since it's not very easy for someone who's 11 or 12 years old to wake up every morning at 5 and make it to practice, I turned those into little mini subgoals (example subgoals: get out of bed and make it to all the practices; try to keep up with the fast teenage boys in my lane; do butterfly even though it hurts). So it just feels incredibly obvious to me that the bigger a goal is, the harder you have to train, and so my first thought is 'how do I train for this?'
0TheOtherDave
Well, there's "be stronger than I am right now."
2Swimmer963 (Miranda Dixon-Luinenburg)
OK, clarify: If I follow the goal 'be the strongest I can be' I will reach a level of strength X. What other goal would allow me to surpass the level of strength X (not just my initial level)?
0TheOtherDave
Again: "be stronger than I am right now." Of course, I need to keep that goal over time, as phrased, rather than unpack it to mean "be stronger than I was back then". Some context: when I was recovering from my stroke a few years back, one of the things I discovered was that having the goal of doing a little bit better every day than the day before was a lot more productive for me (in terms of getting me further along in a given time period) than setting some target far from my current state and moving towards it. If I could lift three pounds with my right arm this week, I would try for 3.5 next week. If I could do ten sit-ups this week, I would try for 12 next week. And so forth. Sure, I could have instead had a goal of "do as many situps as I can", but for me, that goal resulted in my being able to do fewer situps. I suspect people vary in this regard.
0Swimmer963 (Miranda Dixon-Luinenburg)
I guess to me it seems automatic to 'unpack' a general goal like that into short-term specific goals. 'Be as fit as I can' became 'Improve my fitness' became 'improve my flexibility and balance' became 'start a martial art and keep doing it until I get my black belt' became a whole bunch of subgoals like 'keep practicing my back kick until I can use it in a sparring match'. It's automatic for me to think about the most specific level of subgoals while I'm actually practising, and only think about the higher-level goals when I'm revising whether to add new subgoals. I guess, because this is the way my goal structure has always worked, I assume that my highest-level goal is by definition to become as good at X as I can. ('Be the strongest I can' has problems for other reasons, namely its non-specificity, so I'll replace it with something specific, so let's say X=swimming speed.) I don't know the fastest speed that my body is capable of, but I certainly want to attain that speed, not 0.5 km/h slower. But when I'm actually in the water training, or in bed at home trying to decide whether to get up and go train, I'm thinking about wanting to take 5 seconds off my 100 freestyle time. Once I've taken that 5 seconds off, I'll want to take another 5 seconds off. Etc. I think the way I originally interpreted HungryTurtle's comment was that he thought you should moderate your goals to be less ambitious than 'be as good at X as you can' because having a goal that ambitious will cause you to lose. But you can also interpret it to mean that non-specific goals without measurable criteria, and not broken down into subgoals, aren't the most efficient way to improve. Which is very likely true, and I guess it's kind of silly of me to assume that everyone's brain creates an automatic subgoal breakdown like mine does.
0TheOtherDave
Sure, I can see that. Were it similarly automatic for me, I'd probably share your intuitions here.
0TheOtherDave
Hm. If I have a goal G1, and then I later develop an additional goal G2, it seems likely that having G2 makes it harder for me to achieve G1 (due to having to allocate limited resources across two goals). So having G2 would be detrimental by that definition, wouldn't it?
2A1987dM
Hm... Yeah. So, having goals other than your current goals is detrimental (to your current goals). (At least for ideal agents: akrasia etc. mean that it's not necessarily true for humans.) But I took HungryTurtle to mean ‘having any goals at all’. (Probably I was primed by this.)
0HungryTurtle
Yes. This is very interesting, but I was actually thinking about it in a different manner. I like your idea too, but this is more along the lines of what I meant: Ultimately, I have goals for the purpose of arriving at some desired state of being. Over time, goals should change rationally to better reach desired states. However, what is viewed as a desired state of being also changes over time. When I was 12 I wanted to be the strongest person in the world; when I was 18 I wanted to be a world-famous comedian. Both of these desired states undoubtedly have goals whose achievement would more readily and potently produce them. If I had adopted the most efficient methods of pursuing these dreams, I would have been making extreme commitments for the sake of something that later would turn out to be a false desired state. Until one knows one's end desired state, any goal that consumes more than a certain amount of resources is damaging to the long-term achievement of a desired state. Furthermore, I think people rarely know when to cut their losses. It could be that after investing X amount into desired state Y, the individual is unwilling to abandon this belief, even if in reality it is no longer their desired state. People get into relationships and are too afraid of having wasted all that time and resources to get out. I don’t know if I am being clear, but the train of my logic is roughly:

1. Throughout the progression of time, what a person finds to be a desired state changes. (Perhaps the change is more drastic in some than others, but I believe this change is normal. Just as through trial and error you refine your methods of goal achievement, through the trials and errors of life you reshape your beliefs and desires.)
2. If desired states of being are dynamic, then it is not wise to commit to too-extreme goals or methods for the sake of my current desired state of being. (There needs to be some anticipation of the likelihood that my current desired state m
0TheOtherDave
(nods) I certainly agree that the goals people can articulate (e.g., "become a world-famous comedian" or "make a trillion dollars" or whatever) are rarely stable over time, and are rarely satisfying once achieved, such that making non-reversible choices (including, as you say, the consumption of resources) to achieve those goals may be something we regret later. That said, it's not clear that we have alternatives we're guaranteed not to regret. Incidentally, it's conventional on LW to talk about this dichotomy in terms of "instrumental" and "terminal" goals, with the understanding that terminal goals are stable and worth optimizing for but mostly we just don't know what they are. That said, I'm not a fan of that convention myself, except in the most metaphorical of senses, as I see no reason for believing terminal goals exist at all.
0HungryTurtle
But do you believe that most people pretty predictably experience shifts in goal orientation over a lifetime?
0TheOtherDave
I'd have to know more clearly what you mean by "goal orientation" to answer that. I certainly believe that most (actually, all) people, if asked to articulate their goals at various times during their lives, would articulate different goals at different times. And I'm pretty confident that most (and quite likely all, excepting perhaps those who die very young) people express different implicit goals through their choices at different times during their lives. Are either of those equivalent to "shifts in goal orientation"?
0HungryTurtle
Yes
0TheOtherDave
Then yes, I believe that most people pretty predictably experience shifts in goal orientation over a lifetime.
0HungryTurtle
Ok, me too. Then if you believe that, does it seem logical to set up some system of regulation or some type of limitation on the degree of accuracy you are willing to strive for in any current goal orientation?
0TheOtherDave
Again, I'm not exactly sure I know what you mean. But it certainly seems reasonable for me to, for example, not consume all available resources in pursuit of my currently articulable goals without some reasonable expectation of more resources being made available as a consequence of achieving those goals. Is that an example of a system of regulation or type of limitation on the degree of accuracy I am willing to strive for my current goal orientation? Preventing other people from consuming all available resources in pursuit of their currently articulable goals might also be a good idea, though it depends a lot on the costs of prevention and the likelihood that they would choose to do so and be able to do so in the absence of my preventing them.
0HungryTurtle
"Is that an example of a system of regulation or type of limitation on the degree of accuracy I am willing to strive for my current goal orientation?" Yes, in a sense. What I was getting at is that the implementation of rationality, when one's capacity for rationality is high (i.e., when someone is really rational), is a HUGE consumption of resources. That:

1.) Because goal-orientations are dynamic,
2.) and the implementation of genuine rational methodology to a goal-orientation consumes a huge amount of the individual/group's resources,
3.) both individuals and groups would benefit from having a system of regulating when to implement rational methodology, and to what degree, in the pursuit of a specific goal.

This is what my essay is about. This is what I call rational irrationality, or rationally irrational; because I see that a truly rational person, for the sake of resource preservation and long-term (terminal) goal achievement, would not want to achieve all their immediate goals in the fullest sense. This to me is different from having the goal of losing. Because you still want to achieve your goals, you still have immediate goals, you just do not place the efficient achievement of these goals as your top priority.
0TheOtherDave
I certainly agree that sometimes we do best to put off achieving an immediate goal because we're optimizing for longer-term or larger-scale goals. I'm not sure why you choose to call that "irrational," but the labels don't matter to me much.
0HungryTurtle
I call it irrational because, in pursuit of our immediate goals, we are ignoring/avoiding the most effective methodology, thus doing what is potentially ineffective. But hell, maybe on a subconscious level I did it to be controversial and attack accepted group norms O_O
0HungryTurtle
agreed!
-2HungryTurtle
With all due respect, you are missing the point I am trying to make with the "erring on the side of caution" segment. I would agree that in theory goal-winning is always desirable, but as you yourself point out, the individual's understanding of rationality or winning (goal-orientation) is flawed. You imply that as time progresses the individual will slowly but surely recognize what "true winning" is. In response to this notion, I would ask:

1.) How do you rationalize omitting the possibility that the individual will never understand what "true rationality" or "true winning" are? What evidence do you have that such knowledge is even obtainable? If there is none, then would it not be more rational to adjust one's confidence in one’s goal-orientation to include the very real possibility that any immediate goal-orientation might later be revealed as damaging?

2.) Even if we make the assumption that eventually the individual will obtain a perfect understanding of rationality and winning, how does this omit the need for caution in early-stage goal-orientation? If, given enough time, I will understand true rationality, then rationally shouldn't all my goals up until that point is reached be approached with caution?

My point is that while one’s methodology in achieving goals can become more and more precise, there is no way to guarantee that the bearings at which we place our goals will lead us down a nourishing (and therefore rational) path; and therefore, the speed at which we achieve goals (accelerated by rationality) is potentially dangerous to achieving the desired results of those goals. Does that make sense?
4Arran_Stirton
1.) You should read up on what it really means to have "true rationality". Here's the thing: we don't omit the possibility that the individual will never understand what "true rationality" is. In fact, Bayes' theorem shows that it's impossible to assign a probability of 1.0 to any theory of anything (never mind rationality). You can't argue with math. 2.) Yes, all of your goals should be approached with caution, just like all of your plans. We're not perfectly rational beings; that's why we try to become stronger. However, we approach things with due caution. If something is our best course of action given the amount of information we have, we should take it. Also remember, you're allowed to plan for more than one eventuality; that's why we use probabilities and Bayes' theorem in order to work out which eventualities we should plan for.
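To make the math concrete, here is a minimal sketch of that point, assuming a binary hypothesis and repeated evidence with a finite likelihood ratio (the numbers are hypothetical, chosen only for illustration):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) given the prior P(H) and the
    likelihoods P(E|H) and P(E|~H)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Hypothetical numbers: an agnostic prior and repeated, fairly strong evidence.
p = 0.5
for i in range(1, 11):
    p = bayes_update(p, p_e_given_h=0.8, p_e_given_not_h=0.2)
    print(f"after observation {i:2d}: P(H) = {p:.10f}")
# The posterior climbs toward 1.0 but, with any finite likelihood ratio,
# never reaches it: certainty is not attainable by updating on evidence.
```

No finite run of observations drives the posterior to exactly 1.0; that would require an infinite likelihood ratio, which no real evidence has.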

So, sometimes actions that are generally considered rational lead to bad results in certain situations. I agree with this.

However, how are we to identify and anticipate these situations? If you have a tool other than rationality, present it. If you have a means of showing its validity other than the rationalist methods we use here, present that as well.

To say that rationality itself is a problem leaves us completely unable to act.

0HungryTurtle
Well said! I was not trying to attack the use of rationality as a method, but rather to attack the immoderate use of this method. Rationality is a good and powerful tool for acting intentionally, but should there not be some regulation of its use? You state that to say rationality itself is a problem leaves us completely unable to act. I would counter: to say that there is no problem with rationality leaves us completely without reason to suspend action. As you have suggested, rationality is a tool for action. Are there not times when it is harmful to act? Are there no reasons to suspend action?
2TimS
Rationality is a tool for making choices. Sometimes the rational choice is not to play.
0HungryTurtle
Which is why I call it rational irrationality, or rationally irrational if you would prefer. I do think it is possible to semantically stretch the conception of rationality to cover this, but I still think a fundamental distinction needs to be acknowledged between rationality that leads to taking control in a situation, and rationality that leads to intentional inaction.
0TimS
I feel like you are conflating terminal values (goals) and instrumental values (means/effectiveness) a little bit here. There's really no good reason to adopt an instrumental value that doesn't help you achieve your goals. But if you aren't sure of what your goals are, then no amount of improvement of your instrumental values will help.

I'm trying to distinguish between the circumstance where you aren't sure whether inactivity will help achieve what you want (if you want your spouse to complete a chore, should you remind them or not?) and the circumstance where you aren't sure whether inactivity is what you want (do I really like meditation or not?). In particular, your worry about the accuracy of maps, and whether you should act on them or check on them, seems fundamentally to be a problem about goal uncertainty.

Some miscommunication is occurring because the analogy is focused on instrumental values. To push a little further on the metaphor: a bad map will cause you to end up in Venice instead of Rome, but improving the map won't help you decide whether you want to be in Rome.

If the action you are engaging in is not helping you achieve your goals, then it is not rational.

You are describing a failure of rationality rather than rationality itself.

0HungryTurtle
What I am describing is the need for a safeguard against overly confident goal orientation.

I find it interesting, even telling, that nobody has yet challenged the assumptions behind the proposition "Rationality is a tool for accuracy," which would be that "rationality is the best tool for accuracy" and/or that "rationality is the sole tool that can be used to achieve accuracy."

8Richard_Kennaway
Why would someone challenge a proposition that they agree with? While I don't see that the proposition "Rationality is a tool for accuracy" presumes "Rationality is the tool for accuracy", I'd agree with the latter anyway. Rationality is the only effective tool there is, and more than merely by definition. Praying to the gods for revelation doesn't work. Making stuff up doesn't work. Meditating in a cave won't tell you what the stars are made of. Such things as observing the world, updating beliefs from experience, making sure that whatever you believe implies something about what you will observe, and so on: these are some of the things in the rationality toolbox, these are the things that work. If you disagree with this, please go ahead and challenge it yourself.
1AspiringKnitter
Supposing that you lived in a universe where you could pray for and would then always receive infallible instruction, it would be rational to pray. If it leads to winning more than other possibilities, it's rational to do it. If your utility function values pretending to be stupid so you'll be well-liked by idiots, that is winning.
7[anonymous]
Key phrase: "if it leads to winning." The accurate map leads to more winning. Acknowledging that X obviously doesn't work, but pretending that it does in order to win, is very different from thinking X works. ETA: It is all fine and dandy that I am getting upvotes for this, and by all means don't stop, but really I am just a novice applying Rationality 101 wherever I see fit in order to earn my black belt.
1HungryTurtle
What evidence is there that the map is static? We make maps and the world transforms. Rivers become canyons; mountains become molehills (pardon the rhetorical ring, I could not resist). Given that all maps are approximations, isn't it rational to moderate one's navigation with the occasional off-course exploration to verify that no drastic changes have occurred in the geography? And because I feel the analogy is pretty far removed at this point, what I mean is this: if we have charted a goal-orientation based on our map that puts us on a specific trajectory, would it not be beneficial to occasionally abandon our goal-orientation to explore other trajectories for potentially new and more lucrative paths?
2[anonymous]
The evidence that the territory is static is called Physics. The laws do not change, and the elegant counterargument against anti-inductionism is that if induction didn't work our brains would stop working, because our brains depend on static laws. There is no evidence whatsoever that the map is static. It should never be; you should always be prepared to update. There isn't a universal prior that lets you reason inductively about any universe.
0Vaniver
Why would that not be part of the trajectory traced out by your goal-orientation, or a natural interaction between the fuzziness of your map and your goals?
0HungryTurtle
Well, you would try to have that as part of your trajectory. But what I am suggesting is that there will always be things beyond your planning and your reasoning; in light of this, perhaps we should strategically deviate from our plans every now and then to double-check what else is out there.
0Vaniver
I'm still confused by what you're considering inside my reasoning and outside my planning / reasoning. If I say "spend 90% of your time in the area with the highest known EV and 10% of your time measuring areas which have at least a 1% chance of having higher reward than the current highest EV, if they exist," then isn't my ignorance about the world part of my plan / reasoning, such that I don't need to deviate from those plans to double check?
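Vaniver's 90/10 split is essentially an exploration policy of the kind studied in multi-armed bandit problems. A minimal sketch, assuming an epsilon-greedy rule and made-up reward numbers (none of this is from the comment itself):

```python
import random

def epsilon_greedy(estimates, epsilon=0.10):
    """Pick the option with the highest estimated value 90% of the time;
    spend the remaining 10% sampling other options so the estimates
    themselves stay honest."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))  # explore
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

# Hypothetical running-average value estimates for three "areas".
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
true_rewards = [0.3, 0.5, 0.7]  # unknown to the agent

for _ in range(10_000):
    arm = epsilon_greedy(estimates)
    reward = true_rewards[arm] + random.gauss(0, 0.1)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print(estimates)  # converges near true_rewards; the best arm is chosen most often
```

On this picture the deviation is part of the plan rather than a departure from it: the exploratory 10% of pulls is exactly what keeps the value estimates honest.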
1AspiringKnitter
Personally, I think that behavior should be rewarded.
1[anonymous]
Thank you, and I share that view. Why don't we see everyone doing it? Why, I would be overjoyed if everyone was so firmly trained in Rat101 that comments like these were not special. But now I am deviating into a should-world + diff.
1Ben_Welchner
I'm pretty sure we do see everyone doing it. Randomly selecting a few posts: in The Fox and the Low-Hanging Grapes the vast majority of comments received at least one upvote, the Using degrees of freedom to change the past for fun and profit thread has slightly more than 50% upvoted comments, and the Rationally Irrational comments also have more upvoted than not. It seems to me that most reasonably-novel insights are worth at least an upvote or two at the current value. EDIT: Just in case this comes off as disparaging LW's upvote generosity or average comment quality, it's not.
2AspiringKnitter
Though among LW members, people probably don't need to be encouraged to use basic rationality. If we could just upvote and downvote people's arguments in real life... I'm also considering the possibility that MHD was asking why we don't see everyone using Rationality 101.
4Richard_Kennaway
I'm talking about the real world, not an imaginary one. You can make up imaginary worlds to come up with a counterexample to any generalisation you hear, but it amounts to saying "Suppose that were false? Then it would be false!"
0HungryTurtle
Richard, would you agree that the speed at which you try to do something is directly correlated with the accuracy you can produce? I imagine the faster you try to do something, the poorer your results will be. Do you disagree? If it is true that at times accuracy demands some degree of suspension/inaction, then I would suggest to you that tools such as praying, meditating, and "making stuff up" serve to slow the individual down, allowing for better accuracy in the long term; whereas increasing intentionality beyond some threshold will decrease overall results. Does that make sense?
0Richard_Kennaway
Slowing down will only give better results if it's the right sort of slowing down. For example, slowing down to better attend to the job, or slowing down to avoid exhausting oneself. But I wasn't talking about praying, meditating, and making stuff up as ways of avoiding the task, but as ways of performing it. As such, they don't work. It may be very useful to sit for a while every day doing nothing but contemplating one's own mind, but the use of that lies in more clearly observing the thing that one studies in meditation, i.e. one's own mind.
1HungryTurtle
I am suggesting the task they perform has two levels. The first is a surface structure, defined by whatever religious or creative purpose the performer thinks they serve. In my opinion, the medium of this level is completely arbitrary. It does not matter what you pray to, or whether you meditate or pray, or play baseball for that matter. The importance of such actions comes from their deep structure, which develops beneficial cognitive, emotional, or physical habits. Prayer is in many cultures a means of cultivating patience and concentration.

The idea, which has been verified by the field of psychology, is that patience, concentration, reverence, toleration, empathy, sympathy, anxiety, serenity, these and many other cognitive dispositions are not the result of a personality type, but rather the result of intentional development. Within the last several decades there has been a revolution within the field of psychology as to what action is. Previously, cognitive actions were not thought of as actions, and therefore not believed to be things that you develop. It was believed that some people were just born kinder, more stressed, more sympathetic, etc., that there were cognitive types. We now know that this is not true.

While everyone is probably born with a different degree of competency in these various cognitive actions (just as some people are probably born slightly better at running, jumping, or other more physical actions), more important than innate talent is the amount of work someone puts into a capacity. Someone born with a below-average disposition for running can work hard and become relatively fast. In the same way, while there are some biological grounds and limitations, for the majority of people the total level of capacity they are able to achieve in some action is determined by the amount of work they devote to improving that action. If you work out your tolerance muscles, you will become able to exhibit greater degrees of tolerance.
0Richard_Kennaway
That is a lot of words, but it seems to me that all you are saying is that meditation (misspelled as "mediation" throughout) can serve certain useful purposes. So will a spade. BTW, slowing a drum rhythm down for a beginner to hear how it goes is more difficult than playing it at speed.
0HungryTurtle
Along with religion, praying, and making stuff up; meditating (thanks for the correction) was just an example. I don't get the spade comment, though. I agree a spade has useful purposes, but what is the point of saying so here? I'm not exactly sure what you are trying to express. Do you mind explaining further?
0A1987dM
Cox's theorem does show that Bayesian probability theory (around here a.k.a. epistemic rationality) is the only way to assign numbers to beliefs that satisfies certain desiderata.
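For concreteness, the standard textbook content of that claim is that any real-valued plausibility calculus meeting Cox's desiderata must be isomorphic to probability theory, i.e. obey the sum and product rules, from which Bayes' theorem follows immediately (a minimal statement, not part of the original comment):

```latex
% Sum rule and product rule, the form Cox's desiderata force:
P(A \mid C) + P(\lnot A \mid C) = 1
P(A, B \mid C) = P(A \mid C)\, P(B \mid A, C)
% Bayes' theorem is an immediate consequence of the product rule:
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```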
0HungryTurtle
Aliciaparr, this is in a sense the point of my essay! I define rationality as a tool for accuracy because I believed that was a commonly held position on this blog (perhaps I was wrong). But if you look at the overall point of my essay, it is to suggest that there are times when what is desired is achieved without rationality, thereby suggesting alternative tools for accuracy. As to the idea of a "best tool": as I outline in my opening, I do not think such a thing exists. A best tool implies a universal tool for some task. I think that there are many tools for accuracy, just as there are many tools for cooking. In my opinion it all depends on what ingredients you are faced with and what you want to make out of them.
2DSimon
Maybe think about it this way: what we mean by "rationality" isn't a single tool, it's a way of choosing tools.
0HungryTurtle
That is just pushing it back one level of meta-analysis. The way of choosing tools is still a tool. It is a tool for choosing tools.
0DSimon
I agree, and the thing about taking your selection process meta is that you have to stop at some point. If you have more than 1 tool for choosing tools, how do you choose which one to pick for a given situation? You'd need a tool that chooses tools that chooses tools! Sooner or later you have to have a single top level tool or algorithm that actually kicks things into motion.
2HungryTurtle
This is where we disagree. To have rationality be the only tool for choosing tools is to assume all meaningful action is derived from intentional transformation. I disagree with this idea, and I think modern psychology disagrees as well. It is not only possible, it is at times essential, to have meaningful action that is not intentionally driven. If you accept this statement, then it implies the need for a secondary system of tool choosing; more specifically, a type of emergency-brake system. You have rationality as the choosing system, and then a secondary system that shuts it down when it is necessary to halt further production of intentionality.
1DSimon
If by "not intentionally driven" you mean things like instincts and intuitions, I agree strongly. For one thing, the cerebral approach is way too slow for circumstances that require immediate reactions. There is also an aesthetic component to consider; I kind of enjoy being surprised and shocked from time to time. Looking at a situation from the outside, how do you determine whether intentional or automatic action is best? From another angle, if you could tweak your brain to make certain sorts of situations trigger certain automatic reactions that otherwise wouldn't, or vice versa, what (if anything) would you pick? These evaluations themselves are part of yet another tool.
0HungryTurtle
Yes, exactly. I think both intentional and unintentional action are required at different times. I have tried to devise a method of regulation, but as of now the best I have come up with is moderating against extremes on either end. So if it seems like I have been overly intentional in recent days or weeks, I try to rely more on instinct and intuition. It is rarely the case that I am relying too heavily on the latter ^_^
2DSimon
Right, this is a good idea! You might want to consider an approach that decides which situations best require intuition and which require intentional thought, rather than aiming only to keep their balance even (though the latter does approximate the former to the degree that these situations pop up with equal frequency).

Overall, what I've been getting at is this: value systems in general have the property that you have to look at a bunch of different possible outcomes and decide which ones are best, which ones you want to aim for. For technical reasons, it is always possible (and usually helpful) to describe this as a single function or algorithm, typically called around here one's "utility function" or "terminal values". This is true even though the human brain actually physically implements a person's values as multiple modules operating at the same time rather than as a single central dispatch. In your article, you seemed to be saying that you specifically think one shouldn't have a single "final decision" function at the top of the meta stack. That's not going to be an easily accepted argument around here, for the reasons I stated above.
0HungryTurtle
Yeah, this is exactly what I am arguing. Could you explain the technical reasons more, or point me to some essays where I could read about this? I am still not convinced why it is more beneficial to have a single operating system.
2TheOtherDave
I'm no technical expert, but: if I want X, and I also want Y, and I also want Z, and I also want W, and I also want A1 through A22, it seems pretty clear to me that I can express those wants as "I want X and Y and Z and W and A1 through A22." Talking about whether I have one goal or 26 goals therefore seems like a distraction.
0DSimon
In regards to why it's possible, I'll just echo what TheOtherDave said. The reason it's helpful to try for a single top-level utility function is that otherwise, whenever there's a conflict among the many, many things we value, we'd have no good way to consistently resolve it. If one aspect of your mind wants excitement, and another wants security, what should you do when you have to choose between the two? Is quitting your job a good idea or not? Is going rock climbing this weekend, instead of staying at home reading, a good idea or not? Different parts of your mind will have different opinions on these subjects. Without a final arbiter to weigh their suggestions and consider how important excitement and security are relative to each other, how do you decide in a non-arbitrary way? So I guess it comes down to: how important is it to you that your values are self-consistent? More discussion (and a lot of controversy on whether the whole notion actually is a good idea) here.
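One way to see why a single arbiter can always absorb competing values is simply to write it down. A minimal sketch, with hypothetical sub-values, weights, and options (none of these numbers come from the discussion itself):

```python
# Competing sub-values, each scoring an option from its own perspective.
def excitement(option):
    return {"quit_job": 0.9, "stay_home": 0.1, "rock_climbing": 0.8}[option]

def security(option):
    return {"quit_job": 0.1, "stay_home": 0.9, "rock_climbing": 0.4}[option]

# The "final arbiter": one top-level utility function that weighs the modules.
# The weights encode how important each value is relative to the others.
WEIGHTS = {excitement: 0.4, security: 0.6}

def utility(option):
    return sum(w * value(option) for value, w in WEIGHTS.items())

best = max(["quit_job", "stay_home", "rock_climbing"], key=utility)
print(best)  # with these weights: "stay_home"
```

The weights are where "how important is excitement relative to security" lives; disagreement between modules becomes a trade-off inside one function instead of an unresolvable standoff.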
2TheOtherDave
Well, there's always the approach of letting all of me influence my actions and seeing what I do.
0HungryTurtle
Thanks for the link. I'll respond back when I get a chance to read it.
0Arran_Stirton
If you're going to use the word rationality, use its definition as given here. Defining rationality as accuracy just leads to confusion and, ultimately, bad karma. As for a universal tool for some task (i.e., updating your beliefs)? Well, you really should take a look at Bayes' theorem before you claim that there is no such thing.
0HungryTurtle
I am willing to look at your definition of rationality, but don't you see how it is problematic to attempt to prescribe one static definition to a word? OK, so you do believe that Bayes' theorem is a universal tool?