
Power and difficulty

19 undermind 22 October 2014 05:22AM

A specific bias that Lesswrongers may often get from fiction[1] is the idea that power is proportional to difficulty.  The more power something gives you, the harder it should be to get, right?

A mediocre student becomes a powerful mage through her terrible self-sacrifice and years of studying obscure scrolls. Even within the spells she can cast, the truly world-altering ones are those that demand the most laborious preparation, the most precise gestures, and the longest and most incomprehensible stream of syllables. A monk makes an arduous journey to ancient temples and learns secret techniques of spiritual oneness and/or martial asskickery, which require great dedication and self-knowledge. Otherwise, it would be cheating. The whole process of leveling up, of adding ever-increasing modifiers to die rolls, is based on the premise that power comes to those who do difficult things. And it's fail-safe -- no matter what you put your skill points in, you become better at something. It's a training montage, or a Hero's journey. As with other fictional evidence, these are not "just stories" -- they are powerful cultural narratives. This kind of narrative shapes moral choices[2] and identity. So where do we see this reflected in less obviously fictional contexts?

There's the rags-to-riches story -- the immigrant who came with nothing, but by dint of hard work, now owns a business. University engineering programs are notoriously tough, because you are gaining the ability to do a lot of things (and for signalling reasons). A writer got to where she is today because she wrote and revised and submitted and revised draft after draft after draft.

 

In every case, there is assumed to be a direct causal link between difficulty and power. Here, these are loosely defined. Roughly, "power" means "ability to have your way", and "difficulty" is "amount of work & sacrifice required." These can be translated into units of social influence -- a.k.a. money -- and investment -- a.k.a. time, or money. In many cases, power is set by supply and demand -- nobody needs a wizard if they can all cast their own spells, and a doctor can command much higher prices if they're the only one in town. The power of royalty or other birthright follows a similar pattern -- it's not "difficult", but it is scarce: only a very few people have it, and it's close to impossible for others to get it.

Each individual gets to choose what difficult things they will try to do. Some choices will pay off over longer or shorter horizons, but each will have some return. And since power (partly) depends on everybody else's choices, neoclassical economics says that individuals' choices collectively determine a single market rate for the return on difficulty. So anything you do that's difficult should have the same payoff.

 

Anything equally difficult should have equal payoff. Apparently. Clearly, this is not the world we live in. Admittedly, there were some pretty questionable assumptions along the way, but it's almost-kind-of-reasonable to reach that conclusion if you just generalize from the fictional evidence. (Consider RPGs: they're designed to be balanced. Leveling up any class will advance you in power at a more-or-less equal rate.)

 

So how does reality differ from this fictional evidence? One direction is trivial: it's easy to find examples where what's difficult is not particularly powerful.

Writing a book is hard, and has a respectable payoff (depending on the quality of the book, publicity, etc.). Writing a book without using the letter "e", where the main character speaks only in palindromes, while typing in the dark with only your toes on a computer that's rigged to randomly switch letters around is much, much more difficult, but other than perhaps gathering a small but freakishly devoted fanbase, it does not bring any more power/influence than writing any other book. It may be a sign that you are capable of more difficult things, and somebody may notice this and give you power, but this is indirect and unreliable. Similarly, writing a game in machine code or as a set of instructions for a Turing machine is certainly difficult, but also pretty dumb, and has no significant payoff beyond writing the game in a higher-level language. [Edit - thanks to TsviBT: This is assuming there already is a compiler and relevant modules. If you are first to create all of these, there might be quite a lot of benefit.]

On the other hand, some things are powerful, but not particularly difficult. On a purely physical level, this includes operating heavy machinery, or piloting drones. (I'm sure it's not easy, but the power output is immense). Conceptually, I think calculus comes in this category. It can provide a lot of insight into a lot of disparate phenomena (producing utility and its bastard cousin, money), but is not too much work to learn.

 

This powerful-but-not-difficult territory is where we, as instrumental rationalists, want to be. We want to beat the market rate for turning effort into influence. So how do we do this?

This is a big, difficult question. I think it's a useful way to frame many of the goals of instrumental rationality. What major should I study? Is this relationship worthwhile? (Note: This may, if poorly applied, turn you into a terrible person. Don't apply it poorly.) What should I do in my spare time?

These questions are tough. But the examples of powerful-but-easy stuff suggest a useful principle: make use of what already exists. Calculus is powerful, but was only easy to learn because I'd already been learning math for a decade. Bulldozers are powerful, and the effort to get this power is minimal if all you have to do is climb in and drive. It's not so worthwhile, though, if you have to derive a design from first principles, mine the ore, invent metallurgy, make all the parts, and secure an oil supply first.

Similarly, if you're already a writer, writing a new book may gain you more influence than learning plumbing. And so on. This begins to suggest that we should not be too hasty to judge past investments as sunk costs. Your starting point matters in trying to find the closest available power boost. And as with any messy real-world problem, luck plays a major role, too.

 

Of course, there will always be some correlation between power and difficulty -- it's not that the classical economic view is wrong; there are just other factors at play. But to gain influence, you should in general be prepared to do difficult things. However, they should not be arbitrary difficult things -- they should be in areas you have specifically identified as having potential.

To make this more concrete, think of Methods!Harry. He strategically invests a lot of effort, usually at pretty good ratios -- the Gringotts money pump scheme, the True Patronus, his mixing of magic and science, and Partial Transfiguration.  Now that's some good fictional evidence.

 



[1] Any kind of fiction, but particularly fantasy, sci-fi, and neoclassical economics. All works of elegant beauty, with a more-or-less tenuous relationship to real life.

[2] Dehghani, M., Sachdeva, S., Ekhtiari, H., Gentner, D., Forbus, K. "The Role of Cultural Narratives in Moral Decision Making." Proceedings of the 31st Annual Conference of the Cognitive Science Society. 2009.

 

 

October Monthly Bragging Thread

10 linkhyrule5 04 October 2013 07:06AM

Since it had a decent amount of traffic until a good two weeks into September (and I thought it was a good idea), I'm reviving this thread.

Joshua_Blaine:

In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not will do. Not are working on. Have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.

So, what's the coolest thing you've done this month?

Reflective Control

11 lionhearted 02 September 2013 05:45PM

You've had those moments -- the ones where you're very aware of where you're at in the world, and you're mapping out your future and plans very smartly, and you're feeling great about taking action and pushing important things forwards.

I used to find myself only reaching that place, at random, once or twice per year.

But every time I did, I would spend just a few hours sketching out plans, thinking about my priorities, discarding old things I used to do that didn't bring much value, and pushing my limits to do new worthwhile things. I thought, "This is really valuable. I should do this more often."

Eventually, I named that state: Reflective Control.

As often happens, once I had named it, it became easier to reach that state more often.

At this time, I still had only a hazy, imprecise feeling for what it was. So I tried to define it. After many attempts, I came to this:

> Reflective Control is when you're firmly off autopilot, in a high-positive and high-willpower state, and are able to take action.

You'll note there are four discrete components to it: firmly off autopilot (reflective), high positivity, high will, and capable of and oriented towards taking action.

I also asked myself, "How to know if you're in Reflective Control?"

My best answer, in the form of an exercise, is:

> You set aside the impulses/distractions, and try to set a concrete Control-related goal. This is meta-work, meaning the process of defining your life and what needs to happen next. You do this calmly. By setting a concrete Control-related goal successfully and then executing on it, you know you're in an RC state.

> Example: "I will identify all the open projects I've got, and the next steps for each of them."

 

With that definition and that exercise in hand, I was able to do something which works almost magically when I want to take on big challenges: I could rate myself from 1-100 on each of the four key components, then set a concrete goal to achieve, and analyze a little about which factor might be holding me back. Here is an example from my journal:

> Reflective 70/100, positive 70/100, will 65/100, action 40/100… ok, I'm feeling good overall, just some anxiety suppressing will a little and action quite a bit, but no problem. My goal is to finish the xxx outline before I leave here.

I've found this incredibly useful. Summary:

  • There's a state I call "Reflective Control" where I'm off autopilot and thinking (reflective), in a positive mood, with willpower, and action-oriented.

  • I can put explicit numbers on this, somewhat subjectively, from 1-100. This lets me see where the weak link in the chain is, if any (see the sketch after this list).

  • By setting a concrete goal and working towards it, you can get more objective feedback and shore up whichever element is lowest with some practical actions.
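
As a minimal sketch of the scoring step -- the class name and structure here are my own illustration, not part of the original practice -- in Python:

```python
from dataclasses import dataclass

@dataclass
class RCCheck:
    """Self-ratings (0-100) for the four Reflective Control components."""
    reflective: int  # firmly off autopilot
    positive: int    # high-positive state
    will: int        # high willpower
    action: int      # capable of and oriented towards taking action

    def weak_link(self) -> str:
        """Name the lowest-rated component -- the likeliest limiting factor."""
        scores = vars(self)
        return min(scores, key=scores.get)

# The journal example above:
check = RCCheck(reflective=70, positive=70, will=65, action=40)
print(check.weak_link())  # -> action
```

The point is just that writing the four numbers down makes the weakest component pop out automatically.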

Privileging the Question

94 Qiaochu_Yuan 29 April 2013 06:30PM

Related to: Privileging the Hypothesis

Remember the exercises in critical reading you did in school, where you had to look at a piece of writing and step back and ask whether the author was telling the whole truth? If you really want to be a critical reader, it turns out you have to step back one step further, and ask not just whether the author is telling the truth, but why he's writing about this subject at all.

-- Paul Graham

There's an old saying in the public opinion business: we can't tell people what to think, but we can tell them what to think about.

-- Doug Henwood

Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.

-- Eliezer Yudkowsky

Here are some political questions that seem to commonly get discussed in US media: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed? 

These are all examples of what I'll call privileged questions (if there's an existing term for this, let me know): questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. The questions above are probably not the most important questions we could be answering right now, even in politics (I'd guess that the economy is more important). Outside of politics, many LWers probably think "what can we do about existential risks?" is one of the most important questions to answer, or possibly "how do we optimize charity?" 

Why has the media privileged these questions? I'd guess that the media is incentivized to ask whatever questions will get them the most views. That's a very different goal from asking the most important questions, and is one reason to stop paying attention to the media. 

The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question that has been privileged funges against attention you could be paying to better questions. Even worse, it may not feel from the inside like anything is wrong: you can apply all of the epistemic rationality in the world to answering a question like "should Congress pass stricter gun control laws?" and never once ask yourself where that question came from and whether there are better questions you could be answering instead.

I suspect this is a problem in academia too. Richard Hamming once gave a talk in which he related the following story:

Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, "Do you mind if I join you?" They can't say no, so I started eating with them for a while. And I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with!

Academics answer questions that have been privileged in various ways: perhaps the questions their advisor was interested in, or the questions they'll most easily be able to publish papers on. Neither of these are necessarily well-correlated with the most important questions. 

So far I've found one tool that helps combat the worst privileged questions, which is to ask the following counter-question:

What do I plan on doing with an answer to this question?

With the worst privileged questions I frequently find that the answer is "nothing," sometimes with the follow-up answer "signaling?" That's a bad sign. (Edit: but "nothing" is different from "I'm just curious," say in the context of an interesting mathematical or scientific question that isn't motivated by a practical concern. Intellectual curiosity can be a useful heuristic.)

(I've also found the above counter-question generally useful for dealing with questions. For example, it's one way to notice when a question should be dissolved, and asked of someone else it's one way to help both of you clarify what they actually want to know.)

Nov 16-18: Rationality for Entrepreneurs

25 AnnaSalamon 08 November 2012 06:15PM

CFAR is taking LW-style rationality into the world, this month, with a new kind of rationality camp: Rationality for Entrepreneurs.  It is aimed at ambitious, relatively successful folk (regardless of whether they are familiar with LW), who like analytic thinking and care about making practical real-world projects work.  Some will be paying for themselves; others will be covered by their companies.  

If you'd like to learn rationality in a more practical context, consider applying.  Also, if you were hoping to introduce rationality and related ideas to a friend/acquaintance who fits the bill, please talk to them about the workshop, both for their sake and to strengthen the rationality community.

The price will be out of reach for some: the workshop costs $3.9k.  But there is a money-back guarantee.  Some partial scholarships may be available. This fee buys participants:

  • Four nights and three days at a retreat center, with small classes, interactive exercises, and much opportunity for unstructured conversation that applies the material at meals and during the evenings (room and board is included);
  • One instructor for every three participants; 
  • Six weeks of Skype/phone and email follow-up, to help participants make the material into regular habits, and navigate real-life business and personal situations with these tools.

CFAR is planning future camps which are more directly targeted at a Less Wrong audience (like our previous camps), so don’t worry if this camp doesn’t seem like the right fit for you (because of cost, interests, etc.).  There will be others.  But if you or someone you know does have an entrepreneurial bent[1], then we strongly recommend applying to this camp rather than waiting.  Attendees will be surrounded by other ambitious, successful, practically-minded folks, learn from materials that have been tailored to entrepreneurial issues, and receive extensive follow-up to help apply what they’ve learned to their businesses and personal lives.

Our schedule is below.

(See also the thread about the camp on Hacker News.)

continue reading »

Voting is like donating thousands of dollars to charity

31 Academian 05 November 2012 01:02AM

Summary:  People often say that voting is irrational, because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. I estimate that for most people, voting is worth a charitable donation of somewhere between $100 and $1.5 million. For me, the value came out to around $56,000.  So I figure something on the order of $1000 is a reasonable evaluation (after all, I'm writing this post because the number turned out to be large according to this method, so regression to the mean suggests I err on the conservative side), and that'd be enough to make me do it.

Moreover, in swing states the value is much higher, so taking a 10% chance at convincing a friend in a swing state to vote similarly to you is probably worth thousands of expected donation dollars, too.

I find this much more compelling than the typical attempts to justify voting purely in terms of signal value or the resulting sense of pride in fulfilling a civic duty. And voting for selfish reasons is still almost completely worthless, in terms of direct effect. If you're on the way to the polls only to vote for the party that will benefit you the most, you're better off using that time to earn $5 mowing someone's lawn. But if you're even a little altruistic... vote away!

Time for a Fermi estimate

Below is an example Fermi calculation for the value of voting in the USA. Of course, the estimates are all rough and fuzzy, so I'll be conservative, and we can adjust upward based on your opinion.

I'll be estimating the value of voting in marginal expected altruistic dollars, the expected number of dollars being spent in a way that is in line with your altruistic preferences.1 If you don't like measuring the altruistic value of the outcome in dollars, please consider making up your own measure, and keep reading. Perhaps use the number of smiles per year, or number of lives saved. Your measure doesn't have to be total or average utilitarian, either; as long as it's roughly commensurate with the size of the country, it will lead you to a similar conclusion in terms of orders of magnitude.
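
To show the shape of such a calculation up front, here is a minimal sketch in Python; the three inputs are illustrative ballpark assumptions of mine, not the post's actual figures:

```python
# Fermi sketch of the altruistic value of one vote.
# All three inputs are illustrative assumptions, not the post's figures.

p_decisive = 1e-7        # chance a single vote flips the election (swing-state ballpark)
dollars_at_stake = 1e13  # government spending influenced over the winner's term
value_fraction = 1e-3    # fraction of that spending redirected toward outcomes you value

expected_altruistic_dollars = p_decisive * dollars_at_stake * value_fraction
print(f"${expected_altruistic_dollars:,.0f}")  # -> $1,000
```

With these particular inputs the answer lands at the conservative end of the $100-to-$1.5-million range above; the structure -- a tiny probability multiplied by an enormous stake -- is what the rest of the estimate fills in.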

continue reading »

How To Have Things Correctly

57 Alicorn 17 October 2012 06:10AM

I think people who are not made happier by having things either have the wrong things, or have them incorrectly.  Here is how I get the most out of my stuff.

Money doesn't buy happiness.  If you want to try throwing money at the problem anyway, you should buy experiences like vacations or services, rather than purchasing objects.  If you have to buy objects, they should be absolute and not positional goods; positional goods just put you on a treadmill and you're never going to catch up.

Supposedly.

I think getting value out of spending money, owning objects, and having positional goods are all three of them skills that people often don't have naturally but can develop. I'm going to focus mostly on the middle skill: how to have things correctly1.

continue reading »

Rational Toothpaste: A Case Study

65 badger 31 May 2012 12:31AM

Inspired by Konkvistador's comment

Posts titled "Rational ___-ing" or "A Rational Approach to ____" induce groans among a sizeable contingent here, myself included. However, inflationary use of "rational" and its transformation into an applause light is only one part of the problem. These posts tend to revolve around specific answers, rather than the process of how to find answers. I claim a post on "rational toothpaste buying" could be on-topic and useful, if correctly written to illustrate determining goals, assessing tradeoffs, and implementing the final conclusions. A post detailing the pros and cons of various toothpaste brands is for a dentistry or personal hygiene forum; a post about algorithms for how to determine the best brands or whether to do so at all is for a rationality forum. This post is my shot at showing what this would look like.

continue reading »

For-Profit Rationality Training

24 ksvanhorn 28 December 2011 09:42PM

As I've been reading through various articles and their comments on Less Wrong, I've noticed a theme that has appeared repeatedly: a frustration that we are not seeing more practical benefits from studying rationality. For example, Eliezer writes in A Sense that More Is Possible,

Why aren't "rationalists" surrounded by a visible aura of formidability? Why aren't they found at the top level of every elite selected on any basis that has anything to do with thought? Why do most "rationalists" just seem like ordinary people...

Yvain writes in Extreme Rationality: It's Not That Great,

...I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines, I can't think of any.

patrissimo wrote in a comment on another article,

Sorry, folks, but compared to the self-help/self-development community, Less Wrong is currently UTTERLY LOSING at self-improvement and life optimization.

These writers have also offered some suggestions for improving the situation. Eliezer writes,

Of this [question] there are several answers; but one of them, surely, is that they have received less systematic training of rationality in a less systematic context than a first-dan black belt gets in hitting people.

patrissimo describes what he thinks an effective rationality practice would look like.

  1. It is a group of people who gather in person to train specific skills.
  2. While there are some theoreticians of the art, most people participate by learning it and doing it, not theorizing about it.
  3. Thus the main focus is on local practice groups, along with the global coordination to maximize their effectiveness (marketing, branding, integration of knowledge, common infrastructure). As a result, it is driven by the needs of the learners [emphasis added].
  4. You have to sweat, but the result is you get stronger.
  5. You improve by learning from those better than you, competing with those at your level, and teaching those below you.
  6. It is run by a professional, or at least someone getting paid [emphasis added] for their hobby. The practicants receive personal benefit from their practice, in particular from the value-added of the coach, enough to pay for talented coaches.

Dan Nuffer and I have decided that it's time to stop talking and start doing. We are in the very early stages of creating a business to help people improve their lives by training them in instrumental rationality. We've done some preliminary market research to get an idea of where the opportunities might lie. In fact, this venture got started when, on a whim, I ran a poll on ask500people.com asking,

Would you pay $75 for an interactive online course teaching effective decision-making skills?

I got 299 responses in total. These are the numbers that responded with "likely" or "very likely":

  • 23.4% (70) overall.
  • 49% (49 of 100) of the respondents from India.
  • 10.6% (21 of 199) of the respondents not from India.
  • 9.0% (8 of 89) of the respondents from the U.S.

These numbers were much higher than I expected, especially the numbers from India, which still puzzle me. Googling around a bit, though, I found an instructor-led online decision-making course for $130, and a one-day decision-making workshop offered in the UK for £200 (over $350)... and the Google keyword tool returns a large number of search terms (800) related to "decision-making", many of them with a high number of monthly searches.

So it appears that there may be a market for training in effective decision-making -- something that could be the first step towards a more comprehensive training program in instrumental rationality. Some obvious market segments to consider are business decision makers, small business owners, and intelligent people of an analytical bent (e.g., the kind of people who find Less Wrong interesting). An important subset of this last group are INTJ personality types; I don't know if there is an effective way to find and market to specific Myers-Briggs personality types, but I'm looking into it.

"Life coaching" is a proven business, and its growing popularity suggests the potential for a "decision coaching" service; in fact, helping people with big decisions is one of the things a life coach does. One life coach of 12 years described a typical client as age 35 to 55, who is "at a crossroads, must make a decision and is sick of choosing out of safety and fear." Life coaches working with individuals typically charge around $100 to $300 per hour. As far as I can tell, training in decision analysis / instrumental rationality is not commonly found among life coaches. Surely we can do better.

Can we do effective training online? patrissimo thinks that gathering in person is necessary, but I'm not so sure. His evidence is that "all the people who have replied to me so far saying they get useful rationality practice out of the LW community said the growth came through attending local meetups." To me this is weak evidence -- it seems to say more about the effectiveness of local meetups vs. just reading about rationality. In any event, it's worth testing whether online training can work, since

  • not everyone can go to meetups,
  • it should be easier to scale up, and
  • not to put too fine a point on it, but online training is probably more profitable.

To conclude, one of the things an entrepreneur needs to do is "get out of the building" and talk to members of the target market. We're interested in hearing what you think. What ideas do you think would be most effective in training for instrumental rationality, and why? What would you personally want from a rationality training program? What kinds of products / services related to rationality training would you be interested in buying?

5-second level case study: Value of information

23 Kaj_Sotala 22 November 2011 01:44PM

This post started off as a comment to Vaniver's post Value of Information: Four Examples. It also heavily builds on Eliezer's post The 5-Second Level. The five-second level is the idea that to develop a rationality skill, you need to automatically recognize a problem and then apply a stored, actionable procedural skill to deal with it, all in about five seconds or so. Here, I take the value of information concept and develop it into a five-second skill, summarizing my thought process as I do so. Hopefully this will help others develop things into five-second skills.

So upon reading Vaniver's post, I thought "the value of information seems like a valuable concept", but didn't do much more. A little later, I thought, "I want to make sure that I actually apply this concept when it is warranted. How do I make sure of that?" In other words, "how do I get this concept to the five-second level?" Then I decided to document my thought process in the hope of it being useful to others. This is quite stream-of-consciousness, but I hope that seeing my thought process helps you learn from it. (Or offer me valuable criticism on how I should have thought.)

First off, "how do I apply this concept?" is too vague to be useful. A better question would be, "in what kinds of situations might this concept be useful?". With a bit of thought, it was easy to find at least three situations, ones where I am:

1. ...tempted to act now without gathering more information, despite the VoI being high.
2. ...tempted to gather more information, despite the VoI being low.
3. ...not sure of whether I should seek information or not.

#3 implies that I'm already reflecting on the situation, and am therefore relatively likely to remember VoI as a possible mental tool anyway. So developing a five-second level reaction for that one isn't as important. But in #1 and #2 I might just proceed by default, never realizing that I could do better. So I'll leave #3 aside, concentrating on #1 and #2.

Now in these situations, the relevant thing is that the VoI might be "high" or "low". Time to get more concrete -- what does that mean? Looking at Vaniver's post, the VoI is high if 1) extra information is likely to make me choose B when I had intended on choosing A, and 2) there's a high payoff in choosing correctly between A and B. If 2 is false, the VoI is low. The intermediate case is the one where 2 is true but 1 is false, in which case it depends on how extreme the values are. E.g. only a 1% chance of changing my mind given extra information might sometimes imply a high VoI, if the difference between the correct and incorrect choice is a million euros, say.
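
As a back-of-the-envelope version of that last example -- a sketch using the same illustrative numbers, not code from Vaniver's post:

```python
# Expected value of information, per the two conditions above.
p_change = 0.01                # 1) chance the information flips my choice from A to B
payoff_difference = 1_000_000  # 2) euros gained by choosing correctly instead of incorrectly

expected_voi = p_change * payoff_difference
print(expected_voi)  # -> 10000.0
```

If that expected value exceeds what the information would cost to gather, it's worth gathering.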

So sticking just to #1 for simplicity, and because I think that's a worse problem for me, I'd need to train myself to immediately notice and react if:

continue reading »
