Since it had a decent amount of traffic until a good two weeks into September (and I thought it was a good idea), I'm reviving this thread.
In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.
Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not things you will do. Not things you are working on. Things you have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.
So, what's the coolest thing you've done this month?
You've had those moments -- the ones where you're very aware of where you're at in the world, and you're mapping out your future and plans very smartly, and you're feeling great about taking action and pushing important things forwards.
I used to find myself only reaching that place, at random, once or twice per year.
But every time I did, I would spend just a few hours sketching out plans, thinking about my priorities, discarding old things I used to do that didn't bring much value, and pushing my limits to do new worthwhile things. I thought, "This is really valuable. I should do this more often."
Eventually, I named that state: Reflective Control.
As often happens, by naming something it becomes easier to do it more often.
At that point, I still had only a hazy, poorly formed feeling for what it was. So I tried to define it. After many attempts, I came to this:
> Reflective Control is when you're firmly off autopilot, in a high-positive and high-willpower state, and are able to take action.
You'll note there are four discrete components to it: firmly off autopilot (reflective), high positivity, high willpower, and being capable of and oriented towards taking action.
I also asked myself, "How do I know if I'm in Reflective Control?"
My best answer, in the form of an exercise, is:
> You set aside the impulses/distractions, and try to set a concrete Control-related goal. This is meta-work, meaning the process of defining your life and what needs to happen next. You do this calmly. By setting a concrete Control-related goal successfully and then executing on it, you know you're in an RC state.
> Example: "I will identify all the open projects I've got, and the next steps for each of them."
With that definition and that exercise in hand, I was able to do something that works almost magically when I want to take on big challenges: I rate myself from 1-100 on each of the four components, set a concrete goal to achieve, and analyze a little which factor might be holding me back. Here is an example from my journal:
> Reflective 70/100, positive 70/100, will 65/100, action 40/100… ok, I'm feeling good overall, just some anxiety suppressing will a little and action quite a bit, but no problem. My goal is to finish the xxx outline before I leave here.
I've found this incredibly useful. Summary:
* There's a state I call "Reflective Control" where I'm off autopilot and thinking (reflective), in a positive mood, with willpower, and action-oriented.
* I can put explicit numbers on this, somewhat subjectively, from 1-100. This lets me see where the weak link in the chain is, if any.
* By setting a concrete goal and working towards it, you can get more objective feedback and shore up whichever element is lowest with some practical actions.
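The rating exercise above lends itself to a trivial bit of automation. Here is a minimal sketch of logging the four scores and flagging the weakest component; the dictionary keys and function name are my own illustrative choices, not part of the original post.

```python
# Sketch: given 1-100 self-ratings on the four Reflective Control
# components, identify the one most likely to be the limiting factor.

def weakest_component(ratings):
    """Return the component with the lowest rating."""
    return min(ratings, key=ratings.get)

# Example mirroring the journal entry above:
entry = {"reflective": 70, "positive": 70, "will": 65, "action": 40}
print(weakest_component(entry))  # -> action
```

This only surfaces the lowest score; deciding what practical action shores it up is still the human's job.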
Related to: Privileging the Hypothesis
Remember the exercises in critical reading you did in school, where you had to look at a piece of writing and step back and ask whether the author was telling the whole truth? If you really want to be a critical reader, it turns out you have to step back one step further, and ask not just whether the author is telling the truth, but why he's writing about this subject at all.
-- Paul Graham
There's an old saying in the public opinion business: we can't tell people what to think, but we can tell them what to think about.
-- Doug Henwood
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Here are some political questions that seem to commonly get discussed in US media: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed?
These are all examples of what I'll call privileged questions (if there's an existing term for this, let me know): questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. The questions above are probably not the most important questions we could be answering right now, even in politics (I'd guess that the economy is more important). Outside of politics, many LWers probably think "what can we do about existential risks?" is one of the most important questions to answer, or possibly "how do we optimize charity?"
Why has the media privileged these questions? I'd guess that the media is incentivized to ask whatever questions will get them the most views. That's a very different goal from asking the most important questions, and is one reason to stop paying attention to the media.
The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question that has been privileged funges against attention you could be paying to better questions. Even worse, it may not feel from the inside like anything is wrong: you can apply all of the epistemic rationality in the world to answering a question like "should Congress pass stricter gun control laws?" and never once ask yourself where that question came from and whether there are better questions you could be answering instead.
I suspect this is a problem in academia too. Richard Hamming once gave a talk in which he related the following story:
Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, "Do you mind if I join you?" They can't say no, so I started eating with them for a while. And I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with!
Academics answer questions that have been privileged in various ways: perhaps the questions their advisor was interested in, or the questions they'll most easily be able to publish papers on. Neither of these are necessarily well-correlated with the most important questions.
So far I've found one tool that helps combat the worst privileged questions, which is to ask the following counter-question:
What do I plan on doing with an answer to this question?
With the worst privileged questions I frequently find that the answer is "nothing," sometimes with the follow-up answer "signaling?" That's a bad sign. (Edit: but "nothing" is different from "I'm just curious," say in the context of an interesting mathematical or scientific question that isn't motivated by a practical concern. Intellectual curiosity can be a useful heuristic.)
(I've also found the above counter-question generally useful for dealing with questions. For example, it's one way to notice when a question should be dissolved, and asked of someone else it's one way to help both of you clarify what they actually want to know.)
CFAR is taking LW-style rationality into the world, this month, with a new kind of rationality camp: Rationality for Entrepreneurs. It is aimed at ambitious, relatively successful folk (regardless of whether they are familiar with LW), who like analytic thinking and care about making practical real-world projects work. Some will be paying for themselves; others will be covered by their companies.
If you'd like to learn rationality in a more practical context, consider applying. Also, if you were hoping to introduce rationality and related ideas to a friend/acquaintance who fits the bill, please talk to them about the workshop, both for their sake and to strengthen the rationality community.
The price will be out of reach for some: the workshop costs $3.9k. But there is a money-back guarantee. Some partial scholarships may be available. This fee buys participants:
- Four nights and three days at a retreat center, with small classes, interactive exercises, and much opportunity for unstructured conversation that applies the material at meals and during the evenings (room and board is included);
- One instructor for every three participants;
- Six weeks of Skype/phone and email follow-up, to help participants make the material into regular habits, and navigate real-life business and personal situations with these tools.
CFAR is planning future camps which are more directly targeted at a Less Wrong audience (like our previous camps), so don’t worry if this camp doesn’t seem like the right fit for you (because of cost, interests, etc.). There will be others. But if you or someone you know does have an entrepreneurial bent, then we strongly recommend applying to this camp rather than waiting. Attendees will be surrounded by other ambitious, successful, practically-minded folks, learn from materials that have been tailored to entrepreneurial issues, and receive extensive follow-up to help apply what they’ve learned to their businesses and personal lives.
Our schedule is below.
(See also the thread about the camp on Hacker News.)
Summary: People often say that voting is irrational, because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. I estimate that for most people, voting is worth a charitable donation of somewhere between $100 and $1.5 million. For me, the value came out to around $56,000. So I figure something on the order of $1000 is a reasonable evaluation (after all, I'm writing this post because the number turned out to be large according to this method, so regression to the mean suggests I err on the conservative side), and that'd be enough to make me do it.
Moreover, in swing states the value is much higher, so taking a 10% chance at convincing a friend in a swing state to vote similarly to you is probably worth thousands of expected donation dollars, too.
I find this much more compelling than the typical attempts to justify voting purely in terms of signal value or the resulting sense of pride in fulfilling a civic duty. And voting for selfish reasons is still almost completely worthless, in terms of direct effect. If you're on the way to the polls only to vote for the party that will benefit you the most, you're better off using that time to earn $5 mowing someone's lawn. But if you're even a little altruistic... vote away!
Time for a Fermi estimate
Below is an example Fermi calculation for the value of voting in the USA. Of course, the estimates are all rough and fuzzy, so I'll be conservative, and we can adjust upward based on your opinion.
I'll be estimating the value of voting in marginal expected altruistic dollars, the expected number of dollars being spent in a way that is in line with your altruistic preferences.1 If you don't like measuring the altruistic value of the outcome in dollars, please consider making up your own measure, and keep reading. Perhaps use the number of smiles per year, or number of lives saved. Your measure doesn't have to be total or average utilitarian, either; as long as it's roughly commensurate with the size of the country, it will lead you to a similar conclusion in terms of orders of magnitude.
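The basic structure of such an estimate is just a probability times a payoff. Here is a hedged sketch of the shape of the calculation; the two input numbers are my own illustrative placeholders, not the author's actual estimates.

```python
# Sketch of the Fermi calculation's structure:
#   expected value of voting = P(your vote is decisive)
#                            x (altruistic-dollar gap between outcomes)
# Both inputs below are assumed placeholder values for illustration.

p_decisive = 1e-7   # assumed chance one vote swings the election
value_gap = 1e11    # assumed altruistic-dollar difference between outcomes

expected_value = p_decisive * value_gap
print(f"Expected altruistic value of voting: ${expected_value:,.0f}")
# -> Expected altruistic value of voting: $10,000
```

In a swing state `p_decisive` is orders of magnitude higher than in a safe one, which is why the post's swing-state numbers come out so much larger.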
I think people who are not made happier by having things either have the wrong things, or have them incorrectly. Here is how I get the most out of my stuff.
Money doesn't buy happiness. If you want to try throwing money at the problem anyway, you should buy experiences like vacations or services, rather than purchasing objects. If you have to buy objects, they should be absolute and not positional goods; positional goods just put you on a treadmill and you're never going to catch up.
I think getting value out of spending money, owning objects, and having positional goods are all three of them skills, that people often don't have naturally but can develop. I'm going to focus mostly on the middle skill: how to have things correctly1.
Inspired by Konkvistador's comment
Posts titled "Rational ___-ing" or "A Rational Approach to ____" induce groans among a sizeable contingent here, myself included. However, inflationary use of "rational" and its transformation into an applause light is only one part of the problem. These posts tend to revolve around specific answers, rather than the process of how to find answers. I claim a post on "rational toothpaste buying" could be on-topic and useful, if correctly written to illustrate determining goals, assessing tradeoffs, and implementing the final conclusions. A post detailing the pros and cons of various toothpaste brands is for a dentistry or personal hygiene forum; a post about algorithms for how to determine the best brands or whether to do so at all is for a rationality forum. This post is my shot at showing what this would look like.
As I've been reading through various articles and their comments on Less Wrong, I've noticed a theme that has appeared repeatedly: a frustration that we are not seeing more practical benefits from studying rationality. For example, Eliezer writes in A Sense that More Is Possible,
Why aren't "rationalists" surrounded by a visible aura of formidability? Why aren't they found at the top level of every elite selected on any basis that has anything to do with thought? Why do most "rationalists" just seem like ordinary people...
Yvain writes in Extreme Rationality: It's Not That Great,
...I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines, I can't think of any.
patrissimo wrote in a comment on another article,
Sorry, folks, but compared to the self-help/self-development community, Less Wrong is currently UTTERLY LOSING at self-improvement and life optimization.
These writers have also offered some suggestions for improving the situation. Eliezer writes,
Of this [question] there are several answers; but one of them, surely, is that they have received less systematic training of rationality in a less systematic context than a first-dan black belt gets in hitting people.
patrissimo describes what he thinks an effective rationality practice would look like.
- It is a group of people who gather in person to train specific skills.
- While there are some theoreticians of the art, most people participate by learning it and doing it, not theorizing about it.
- Thus the main focus is on local practice groups, along with the global coordination to maximize their effectiveness (marketing, branding, integration of knowledge, common infrastructure). As a result, it is driven by the needs of the learners [emphasis added].
- You have to sweat, but the result is you get stronger.
- You improve by learning from those better than you, competing with those at your level, and teaching those below you.
- It is run by a professional, or at least someone getting paid [emphasis added] for their hobby. The practicants receive personal benefit from their practice, in particular from the value-added of the coach, enough to pay for talented coaches.
Dan Nuffer and I have decided that it's time to stop talking and start doing. We are in the very early stages of creating a business to help people improve their lives by training them in instrumental rationality. We've done some preliminary market research to get an idea of where the opportunities might lie. In fact, this venture got started when, on a whim, I ran a poll on ask500people.com asking,
Would you pay $75 for an interactive online course teaching effective decision-making skills?
I got 299 responses in total. These are the numbers that responded with "likely" or "very likely":
- 23.4% (62) overall.
- 49% (49 of 100) of the respondents from India.
- 10.6% (21 of 199) of the respondents not from India.
- 9.0% (8 of 89) of the respondents from the U.S.
These numbers were much higher than I expected, especially the numbers from India, which still puzzle me. Googling around a bit, though, I found an instructor-led online decision-making course for $130, and a one-day decision-making workshop offered in the UK for £200 (over $350)... and the Google keyword tool returns a large number of search terms (800) related to "decision-making", many of them with a high number of monthly searches.
So it appears that there may be a market for training in effective decision-making -- something that could be the first step towards a more comprehensive training program in instrumental rationality. Some obvious market segments to consider are business decision makers, small business owners, and intelligent people of an analytical bent (e.g., the kind of people who find Less Wrong interesting). An important subset of this last group is INTJ personality types; I don't know if there is an effective way to find and market to specific Myers-Briggs personality types, but I'm looking into it.
"Life coaching" is a proven business, and its growing popularity suggests the potential for a "decision coaching" service; in fact, helping people with big decisions is one of the things a life coach does. One life coach of 12 years described a typical client as age 35 to 55, who is "at a crossroads, must make a decision and is sick of choosing out of safety and fear." Life coaches working with individuals typically charge around $100 to $300 per hour. As far as I can tell, training in decision analysis / instrumental rationality is not commonly found among life coaches. Surely we can do better.
Can we do effective training online? patrissimo thinks that gathering in person is necessary, but I'm not so sure. His evidence is that "all the people who have replied to me so far saying they get useful rationality practice out of the LW community said the growth came through attending local meetups." To me this is weak evidence -- it seems to say more about the effectiveness of local meetups vs. just reading about rationality. In any event, it's worth testing whether online training can work, since
- not everyone can go to meetups,
- it should be easier to scale up, and
- not to put too fine a point on it, but online training is probably more profitable.
To conclude, one of the things an entrepreneur needs to do is "get out of the building" and talk to members of the target market. We're interested in hearing what you think. What ideas do you think would be most effective in training for instrumental rationality, and why? What would you personally want from a rationality training program? What kinds of products / services related to rationality training would you be interested in buying?
This post started off as a comment to Vaniver's post Value of Information: Four Examples. This post also heavily builds on Eliezer's post The 5-Second Level. The five-second level is the idea that to develop a rationality skill, you need to automatically recognize a problem and then apply a stored, actionable procedural skill to deal with it, all in about five seconds or so. Here, I take the value of information concept and develop it into a five-second skill, summarizing my thought process as I do so. Hopefully this will help others develop things into five-second skills.
So upon reading this, I thought "the value of information seems like a valuable concept", but didn't do much more. A little later, I thought, "I want to make sure that I actually apply this concept when it is warranted. How do I make sure of that?" In other words, "how do I get this concept to the five second level?" Then I decided to document my thought process in the hopes of it being useful to others. This is quite stream-of-consciousness, but I hope that seeing my thought process helps others learn from it. (Or to offer me valuable criticism on how I should have thought.)
First off, "how do I apply this concept?" is too vague to be useful. A better question would be, "in what kinds of situations might this concept be useful?". With a bit of thought, it was easy to find at least three situations, ones where I am:
1. ...tempted to act now without gathering more information, despite the VoI being high.
2. ...tempted to gather more information, despite the VoI being low.
3. ...not sure of whether I should seek information or not.
#3 implies that I'm already reflecting on the situation, and am therefore relatively likely to remember VoI as a possible mental tool anyway. So developing a five-second level reaction for that one isn't as important. But in #1 and #2 I might just proceed by default, never realizing that I could do better. So I'll leave #3 aside, concentrating on #1 and #2.
Now in these situations, the relevant thing is that the VoI might be "high" or "low". Time to get more concrete - what does that mean? Looking at Vaniver's post, the VoI is high if 1) extra information is likely to make me choose B when I had intended on choosing A, and 2) there's a high payoff in choosing correctly between A and B. If 2 is false, the VoI is low. The intermediate case is the one where 2 is true but 1 is false, in which case it depends on how extreme the values are. E.g. only a 1% chance of changing my mind given extra information might sometimes imply a high VoI, if the difference between the correct and incorrect choice is a million euros, say.
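The simplified rule above can be sketched as a one-line calculation: multiply the chance that information changes your choice by the payoff gap, and compare the result to what the information costs. The numbers reproduce the 1%-chance, million-euro example from the paragraph; the info-cost threshold is my own illustrative addition.

```python
# Simplified VoI rule sketched above:
#   VoI ~= P(information changes my choice) x payoff gap between choices

def value_of_information(p_changes_choice, payoff_gap):
    return p_changes_choice * payoff_gap

voi = value_of_information(0.01, 1_000_000)  # 1% chance, 1M euro gap
info_cost = 500                              # assumed cost of gathering the info
print(voi, "gather info" if voi > info_cost else "just decide")
# -> 10000.0 gather info
```

Even a small chance of changing your mind can justify gathering information when the stakes are large enough, which is exactly the intermediate case the post describes.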
So sticking just to #1 for simplicity, and because I think that's a worse problem for me, I'd need to train myself to immediately notice and react if:
In J. Michael Straczynski's science fiction TV show Babylon 5, there's a character named Lennier. He's pretty Spock-like: he's a long-lived alien who avoids displaying emotion and feels superior to humans in intellect and wisdom. He's sworn to always speak the truth. In one episode, he and another character, the corrupt and rakish Ambassador Mollari, are chatting. Mollari is bored. But then Lennier mentions that he's spent decades studying probability. Mollari perks up, and offers to introduce him to this game the humans call poker.