[SEQ RERUN] The Proper Use of Humility
Today's post, The Proper Use of Humility, was originally published on 1 December 2006. A summary (taken from the LW wiki):
There are good and bad kinds of humility. Proper humility is not being selectively underconfident about uncomfortable truths. Proper humility is not the same as social modesty, which can be an excuse for not even trying to be right. Proper scientific humility means not just acknowledging one's uncertainty with words, but taking specific actions to plan for the case that one is wrong.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was ...What's a bias, again? and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
[SEQ RERUN] ...What's a bias, again?
Today's post, ...What's a bias, again? was originally published on 27 November 2006. A summary (taken from the LW wiki):
Biases are obstacles to truth seeking caused by one's own mental machinery.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Why truth? And... and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
The right kind of fun?
If you consider that the utility generated by working is much greater than the utility directly generated by having fun, then the main thing you should be optimizing when you have fun is how much the memory of that fun increases your motivation and working capability. This is distinctly different from optimizing for the direct preference fulfillment generated by the fun, even if the same activities turn out to be optimal for both utility functions.
The same model works for any action A whose effect on other actions generates much more utility than the action itself. This probably applies to most maintenance actions, such as doing laundry, sleeping, and eating, but in those cases the point is more obvious -- we usually don't see laundry as an end unto itself, whereas we often do pursue fun for its own sake. I'm not advocating that we shouldn't have fun, but that we (or at least I) seem to be optimizing for the wrong thing -- direct preference fulfillment, rather than motivation.
This feels like a significant insight, but I tend to get a lot of false positives on that feeling. Any ideas on how we might use this?
Hunger can make you stupid
When I originally wrote "When to scream 'Error!'", I was mainly thinking of bad patterns of thought or bad problem-solving strategies as the source of the error. Since then, I've come to realize that my own most common source of stupidity is neglecting some physical comfort. I may be hungry without consciously noticing it, dehydrated because I've been living on coffee for too long, or simply nursing a headache that calls for some ibuprofen -- as a result, I don't think well, get irritated at the fact that I'm not thinking well, and generally begin a death spiral if I don't realize why.
In hindsight, it feels obvious that I should take care of the physiological needs that I can because they're likely preventing me from thinking straight. However, I've failed to do this on numerous occasions and so thought it worth mentioning.
In summary: whenever you find yourself screaming "Error!", I suggest you stop and check whether you're hungry, thirsty, tired, or hurting before looking for a problem in your thinking itself, especially if you're not usually good at noticing such things.
Ithaca Meetup?
I'm at Cornell University in the rather small town of Ithaca, NY. Are there any rationalists around here who might be interested in a meetup / is there a meetup already taking place?
When to scream "Error!"
In Anna's recent post, she talked about training your mind to notice when it isn't curious about something and to scream "Error! Look for a different way to do this" in such cases. Johnicholas and TheOtherDave's lists of what stupidity feels like also look useful for this purpose. I'm creating this post to compile a more comprehensive list of feelings which indicate that people should reexamine the possible paths to make sure that the one they're taking is the most effective route to their objective.
Please suggest additions to the list in the comments -- I'll move them up here (along with links to further explanation, if given). Keep in mind that your description of the feeling should be as illustrative as possible. For example, "feeling stupid" is unhelpful, while "you feel like you've taken a wrong turn into a never-ending tunnel" is better. Of course, metaphors which are immediately understood by some people may not be so easily understood by others, so try to give a more detailed description of the feeling if other people express that you're probably saying more than they're hearing.
List: "Error! Look for a different way to do this" if you feel like:
- your curiosity is dead
- an automaton -- you're acting, albeit slowly, but your mind isn't grokking
- you've taken a wrong turn into a never-ending tunnel
- you're behaving clunkily, mechanically, when there's an already-formed instinct waiting to be used
- what you're doing is "like watching cable, only with fewer hair replacement infomercials"
- you have to avoid working out the implications of a particular line of reasoning.
- you're going to have to grit your teeth and do a tedious and boring task [1]
- you're blind and must be led around by more observant people
- you're doing something because you're supposed to, rather than because it helps you achieve your goals [3]
- you just shrugged off something that seemed important
- being bored, being in pain, being distracted, wanting to do anything other than this
- being unworthy of these divine (external) ideas
- blind plodding obedience
- being tired all the time, even if you're not [2]
- not having enough fingers to hold all of your thoughts in place
- merging onto the highway when you can't see all the oncoming traffic
- someone's playing loud, distracting music that you can't hear
- riding on a train with square wheels
1. Sometimes tedious/boring tasks genuinely cannot be made easier or less boring, so your "Error!" message might not return anything useful. However, you should at least look.
2. This may also indicate that your stupidity has biological causes, such as nutrition/sleep deficiency. 20-30 minute naps are awesome, though longer ones might make you groggy.
3. Of course, if a goal-achieving action is also supported by authorities, that is a good thing.
Tool for combating undue hesitation
I sometimes feel negative emotions at the thought that the course of action I'm taking isn't even close to optimal -- that a more effective mind could sort through whatever situation I'm currently having difficulty with and craft a plan much more likely to succeed than any of my own. In short, I feel inept in comparison to the better minds in the space of all possible minds, and so I hesitate unduly while trying to figure out the correct action to take. Good planning often leads to better results, of course, but this behavioral pattern has a significantly negative effect in situations (especially social ones) where I need to make a decision quickly.
I've found that I think more rapidly and clearly if I ask which of the possible actions I've thought of will produce the greatest gain in expected utility compared to doing nothing. Once I come up with a course of action, I no longer feel paralyzed by how inept my decision-making skills must be compared to much better minds than my own, which frees up mental processing power and emotional effort for other things. Doing this also helps prevent panic and the like from springing up, because I'm not thinking in terms of whether I can succeed, but sorting through which of the actions I've currently thought up maximizes my chances of succeeding.
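The decision rule above can be sketched in a few lines of code. This is only a toy illustration of the idea, not anything from the post: the candidate actions and their utility numbers are made up, and in practice the expected utilities would have to be estimated rather than handed in as constants.

```python
# Score each candidate action by its expected utility *relative to
# doing nothing*, then pick the best. All numbers are hypothetical.

def best_action(actions, baseline_utility):
    """Return the action with the greatest expected-utility gain
    over the do-nothing baseline."""
    gains = {name: eu - baseline_utility for name, eu in actions.items()}
    return max(gains, key=gains.get)

candidates = {"ask directly": 6.0, "wait and observe": 4.5, "leave": 2.0}
print(best_action(candidates, baseline_utility=3.0))  # -> ask directly
```

Note that subtracting a common baseline never changes which action ranks first; the point is psychological -- framing the choice as "best gain over doing nothing" sidesteps the comparison to hypothetical better minds.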
I feel like I've written less than I think that I've written -- that people may not get much out of this post because they haven't actually shared my brain with me, and I've done an inadequate job of deconstructing my thoughts when I've put them to paper. If this is correct, please tell me and I can try to elaborate.
A Possible Solution to Parfit's Hitchhiker
I had what appeared to me to be a bit of insight regarding trade between selfish agents. I disclose that I have not read TDT or any books on decision theory, so what I say may be blatantly incorrect. However, I judged that posting this here was of higher utility than waiting until I had read up on decision theory -- and I have no intention of reading up on it any time soon, because I have more important (to me) things to do. This is not meant to deter criticism of the post itself -- please tell me why I'm wrong if I am. The following paragraph is primarily an introduction.
When a rational agent predicts that he is interacting with another rational agent who has a motive for deceiving him (and both have a large amount of computing power), he will not use any emotional basis for 'trust.' Instead, he will treat the other agent's commitments as truth claims which may be true or false depending on what action will optimize the other agent's utility function at the time the commitment is to be fulfilled. Agents which know something of each other's utility functions may bargain directly on such terms, even when each of their utility functions is largely (or completely) dominated by selfishness.
This leads to a solution to Parfit's hitchhiker, allowing selfish agents to precommit to future trade. Give Ekman all of your clothes and state that you will buy them back from him when you arrive, at a price higher than the clothes' worth to him but lower than their worth to you. Furthermore, tell him that because you have nothing else on you, he can't get any more money out of you than an amount infinitesimally smaller than what your clothes are worth to you, and accurately tell him how much that is (you must tell the truth here, given his microexpression-reading capability). He should judge your words as true, since you have in fact told the truth. Of course, you lose regardless if the value of your clothes to yourself is less than the utility he loses by taking you to town.
Assumptions made regarding Parfit's hitchhiker: 1. Physical assault is judged to be of very low utility by both agents and so isn't a factor in the problem. 2. Trades in the present time may be executed without prompting an infinite cycle of "No, you give me X first."
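As a toy check of the incentives in this scheme (all numbers and variable names are my own, purely illustrative): the buyback price has to sit above the clothes' value to the driver plus his cost of giving the ride, and below their value to you, for both agents to come out ahead.

```python
# Hypothetical check of the clothes-buyback scheme.
# v_self:    what the clothes are worth to the hitchhiker
# v_driver:  what the clothes are worth to the driver
# ride_cost: the utility the driver loses by taking you to town
# price:     the agreed buyback price

def scheme_works(v_self, v_driver, ride_cost, price):
    """True if both agents come out ahead at the given buyback price."""
    you_buy_back = price < v_self                  # your incentive once in town
    driver_profits = price - v_driver > ride_cost  # driver's incentive to rescue
    return you_buy_back and driver_profits

# A price strictly between v_driver + ride_cost and v_self works:
print(scheme_works(v_self=100, v_driver=20, ride_cost=30, price=80))  # True
# If the ride costs more than the clothes are worth to you, no price works:
print(scheme_works(v_self=40, v_driver=20, ride_cost=30, price=39))   # False
```

The second case corresponds to the losing condition noted above: when your clothes' value to you is less than the driver's cost of the ride (plus their value to him), no buyback price can satisfy both sides.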