The correct response to uncertainty is *not* half-speed

15 January 2016 10:55PM

Once upon a time (true story), I was on my way to a hotel in a new city.  I knew the hotel was many miles down this long, branchless road.  So I drove for a long while.

After a while, I began to worry I had passed the hotel.

So, instead of proceeding at 60 miles per hour the way I had been, I continued in the same direction for several more minutes at 30 miles per hour, wondering if I should keep going or turn around.

After a while, I realized: I was being silly!  If the hotel was ahead of me, I'd get there fastest if I kept going 60mph.  And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction.  And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.

Either way, full speed was best.  My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward.  So, since I'm uncertain, I should go forward at half-speed!"  But averages don't actually work that way.[1]
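A toy calculation makes the point concrete. The sketch below (the function and the numbers are illustrative, not from the story) models the "check N more miles, then turn around" policy: whatever the hotel's actual position, halving your speed exactly doubles your travel time, so uncertainty never favors slowing down.

```python
# Toy model of the hotel search. The hotel sits at a signed position h
# along the road (negative = behind me). The policy: drive forward n
# more miles, then turn around and backtrack.

def travel_time(h, n, speed):
    """Hours to reach a hotel at mile h under the 'check n more miles,
    then turn around' policy, driving at a constant speed (mph)."""
    assert h <= n, "this policy only finds hotels within n miles ahead"
    miles = h if h >= 0 else 2 * n - h   # forward leg plus backtrack
    return miles / speed

# Halving the speed doubles the time no matter where the hotel is,
# so no probability mixture over h can make half-speed optimal.
for h in (-25, -5, 10):
    assert travel_time(h, 20, 30) == 2 * travel_time(h, 20, 60)
```

The only genuine decision variable is N (where to turn around), not the speed.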

Following this, I started noticing lots of hotels in my life (and, perhaps less tactfully, in my friends' lives).  For example:
• I wasn't sure if I was a good enough writer to write a given doc myself, or if I should try to outsource it.  So, I sat there kind-of-writing it while also fretting about whether writing it myself was the right call.
• (Solution:  Take a minute out to think through heuristics.  Then, either: (1) write the doc at full speed; or (2) try to outsource it; or (3) write full force for some fixed time period, and then pause and evaluate.)
• I wasn't sure (back in early 2012) that CFAR was worthwhile.  So, I kind-of worked on it.
• An old friend came to my door unexpectedly, and I was tempted to hang out with her, but I also thought I should finish my work.  So I kind-of hung out with her while feeling bad and distracted about my work.
• A friend of mine, when teaching me math, seems to mumble specifically those words that he doesn't expect me to understand (in a sort of compromise between saying them and not saying them)...
• Duncan reports that novice Parkour students are unable to safely undertake certain sorts of jumps, because they risk aborting the move mid-stream, after the actual last safe stopping point (apparently kind-of-attempting these jumps is more dangerous than either attempting or not attempting them).
• It is said that start-up founders need to be irrationally certain that their startup will succeed, lest they be unable to do more than kind-of work on it...

That is, it seems to me that often there are two different actions that would make sense under two different models, and we are uncertain which model is true... and so we find ourselves taking an intermediate of half-speed action... even when that action makes no sense under any probabilistic mixture of the two models.

You might try looking out for such examples in your life.

[1] Edited to add: The hotel example has received much nitpicking in the comments.  But: (A) the actual example was legit, I think.  Yes, stopping to think has some legitimacy, but driving slowly for a long time because uncertain does not optimize for thinking.  Similarly, it may make sense to drive slowly to stare at the buildings in some contexts... but I was on a very long empty country road, with no buildings anywhere (true historical fact), and also I was not squinting carefully at the scenery.  The thing I needed to do was to execute an efficient search pattern, with a threshold for a future time at which to switch from full-speed in some direction to full-speed in the other.  Also: (B) consider some of the other examples; "kind of working", "kind of hanging out with my friend", etc. seem to be common behaviors that are mostly not all that useful in the usual case.

Fermi Estimates

11 April 2013 05:52PM

Just before the Trinity test, Enrico Fermi decided he wanted a rough estimate of the blast's power before the diagnostic data came in. So he dropped some pieces of paper from his hand as the blast wave passed him, and used this to estimate that the blast was equivalent to 10 kilotons of TNT. His guess was remarkably accurate for having so little data: the true answer turned out to be 20 kilotons of TNT.

Fermi had a knack for making roughly accurate estimates with very little data, and in his honor, an estimate of this kind is known today as a Fermi estimate.

Why bother with Fermi estimates, if your estimates are likely to be off by a factor of 2 or even 10? Often, getting an estimate within a factor of 10 or 20 is enough to make a decision. So Fermi estimates can save you a lot of time, especially as you gain more practice at making them.

Estimation tips

These first two sections are adapted from Guesstimation 2.0.

Dare to be imprecise. Round things off enough to do the calculations in your head. I call this the spherical cow principle, after a joke about how physicists oversimplify things to make calculations feasible:

Milk production at a dairy farm was low, so the farmer asked a local university for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist. After two weeks of observation and analysis, the physicist told the farmer, "I have the solution, but it only works in the case of spherical cows in a vacuum."

By the spherical cow principle, there are 300 days in a year, people are six feet (or 2 meters) tall, the circumference of the Earth is 20,000 mi (or 40,000 km), and cows are spheres of meat and bone 4 feet (or 1 meter) in diameter.

Decompose the problem. Sometimes you can give an estimate in one step, within a factor of 10. (How much does a new compact car cost? $20,000.) But in most cases, you'll need to break the problem into several pieces, estimate each of them, and then recombine them. I'll give several examples below.

Estimate by bounding. Sometimes it is easier to give lower and upper bounds than to give a point estimate. How much time per day does the average 15-year-old watch TV? I don't spend any time with 15-year-olds, so I haven't a clue. It could be 30 minutes, or 3 hours, or 5 hours, but I'm pretty confident it's more than 2 minutes and less than 7 hours (400 minutes, by the spherical cow principle).

Can we convert those bounds into an estimate? You bet. But we don't do it by taking the average. That would give us (2 mins + 400 mins)/2 = 201 mins, which is within a factor of 2 of our upper bound, but a factor of 100 greater than our lower bound. Since our goal is to estimate the answer within a factor of 10, we'll probably be way off.

Instead, we take the geometric mean — the square root of the product of our upper and lower bounds. But square roots often require a calculator, so instead we'll take the approximate geometric mean (AGM). To do that, we average the coefficients and exponents of our upper and lower bounds.

So what is the AGM of 2 and 400? Well, 2 is 2×10^0, and 400 is 4×10^2. The average of the coefficients (2 and 4) is 3; the average of the exponents (0 and 2) is 1. So, the AGM of 2 and 400 is 3×10^1, or 30. The precise geometric mean of 2 and 400 turns out to be 28.28. Not bad.

What if the sum of the exponents is an odd number? Then we round the resulting exponent down, and multiply the final answer by three. So suppose my lower and upper bounds for how much TV the average 15-year-old watches had been 20 mins and 400 mins. Now we calculate the AGM like this: 20 is 2×10^1, and 400 is still 4×10^2. The average of the coefficients (2 and 4) is 3; the average of the exponents (1 and 2) is 1.5. So we round the exponent down to 1, and we multiply the final result by three: 3×(3×10^1) = 90 mins. The precise geometric mean of 20 and 400 is 89.44. Again, not bad.
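The AGM procedure is mechanical enough to sketch in a few lines of code (function names are my own; the method is exactly the coefficient-and-exponent averaging described above):

```python
import math

def approximate_geometric_mean(lower, upper):
    """AGM as described above: write each bound as c x 10^e, average
    the coefficients and the exponents; if the exponent average is
    fractional, round it down and multiply the result by 3."""
    def sci(x):
        e = math.floor(math.log10(x))
        return x / 10 ** e, e          # coefficient, exponent
    c1, e1 = sci(lower)
    c2, e2 = sci(upper)
    coeff = (c1 + c2) / 2
    exp = (e1 + e2) / 2
    if exp == int(exp):
        return coeff * 10 ** int(exp)
    return 3 * coeff * 10 ** math.floor(exp)

print(approximate_geometric_mean(2, 400))    # 30.0
print(approximate_geometric_mean(20, 400))   # 90.0
print(round(math.sqrt(2 * 400), 2))          # exact geometric mean: 28.28
```

Of course, the whole point of the AGM is that you can run it in your head without a calculator; the code just confirms it tracks the true geometric mean well.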

Sanity-check your answer. You should always sanity-check your final estimate by comparing it to some reasonable analogue. You'll see examples of this below.

Use Google as needed. You can often quickly find the exact quantity you're trying to estimate on Google, or at least some piece of the problem. In those cases, it's probably not worth trying to estimate it without Google.

Attempted Telekinesis

07 February 2015 06:53PM
Summary:  I’d like to share some techniques that made a large difference for me, and for several other folks I shared them with.  They are techniques for reducing stress, social shame, and certain other kinds of “wasted effort”.  These techniques are less developed and rigorous than the techniques that CFAR teaches in our workshops -- for example, they currently only work for perhaps 1/3rd of the dozen or so people I’ve shared them with -- but they’ve made a large enough impact for that 1/3rd that I wanted to share them with the larger group.  I’ll share them through a sequence of stories and metaphors, because, for now, that is what I have.

A discussion of heroic responsibility

29 October 2014 04:22AM

[Originally posted to my personal blog, reposted here with edits.]

Introduction

“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.” Harry’s face tightened. “That’s why I say you’re not thinking responsibly, Hermione. Thinking that your job is done when you tell Professor McGonagall—that isn’t heroine thinking. Like Hannah being beat up is okay then, because it isn’t your fault anymore. Being a heroine means your job isn’t finished until you’ve done whatever it takes to protect the other girls, permanently.” In Harry’s voice was a touch of the steel he had acquired since the day Fawkes had been on his shoulder. “You can’t think as if just following the rules means you’ve done your duty.” –HPMOR, chapter 75.

I like this concept. It counters a particular, common, harmful failure mode, and I think it’s an amazingly useful thing for a lot of people to hear. I even think it was a useful thing for me to hear a year ago.

But... I’m not sure about this yet, and my thoughts about it are probably confused, but I think that there's a version of Heroic Responsibility that you can get from reading this description, that's maybe even the default outcome of reading this description, that's also a harmful failure mode.

Something Impossible

A wrong way to think about heroic responsibility

I dealt with a situation at work a while back–May 2014 according to my journal. I had a patient for five consecutive days, and each day his condition was a little bit worse. Every day, I registered with the staff doctor my feeling that the current treatment was Not Working, and that maybe we ought to try something else. There were lots of complicated medical reasons why his decisions were constrained, and why ‘let’s wait and see’ was maybe the best decision, statistically speaking–that in a majority of possible worlds, waiting it out would lead to better outcomes than one of the potential more aggressive treatments, which came with side effects. And he wasn’t actually ignoring me; he would listen patiently to all my concerns. Nevertheless, he wasn’t the one watching the guy writhe around in bed, uncomfortable and delirious, for twelve hours every day, and I felt ignored, and I was pretty frustrated.

On day three or four, I was listening to Ray’s Solstice album on my break, and the song ‘Something Impossible’ came up.

Bold attempts aren't enough, roads can't be paved with intentions...
You probably don’t even got what it takes,
But you better try anyway, for everyone's sake
And you won’t find the answer until you escape from the
It’s time to just shut up, and do the impossible.
Can’t walk away...
Gotta break off those shackles, and shake off those chains
Gotta make something impossible happen today...

It hit me like a load of bricks–this whole thing was stupid and rationalists should win. So I spent my entire break talking on Gchat with one of my CFAR friends, trying to see if he could help me come up with a suggestion that the doctor would agree was good. This wasn’t something either of us were trained in, and having something to protect doesn't actually give you superpowers, and the one creative solution I came up with was worse than the status quo for several obvious reasons.

I went home on day four feeling totally drained and having asked to please have a different patient in the morning. I came in to find that the patient had nearly died in the middle of the night. (He was now intubated and sedated, which wasn’t great for him but made my life a hell of a lot easier.) We eventually transferred him to another hospital, and I spent a while feeling like I’d personally failed.

I’m not sure whether or not this was a no-win scenario even in theory. But I don't think I, personally, could have done anything with greater positive expected value. There's a good reason why a doctor with 10 years of school and 20 years of ICU experience can override a newly graduated nurse's opinion. In most of the possible worlds, the doctor is right and I'm wrong. Pretty much the only thing that I could have done better would have been to care less–and thus be less frustrated and more emotionally available to comfort a guy who was having the worst week of his life.

In short, I fulfilled my responsibilities to my patient. Nurses have a lot of responsibilities to their patients, well specified in my years of schooling and in various documents published by the College of Nurses of Ontario. But nurses aren’t expected or supposed to take heroic responsibility for these things.

I think that overall, given a system that runs on humans, that's a good thing.

The Well-Functioning Gear

I feel like maybe the hospital is an emergent system that has the property of patient-healing, but I’d be surprised if any one part of it does.

Suppose I see an unusual result on my patient. I don’t know what it means, so I mention it to a specialist. The specialist, who doesn’t know anything about the patient beyond what I’ve told him, says to order a technetium scan. He has no idea what a technetium scan is or how it is performed, except that it’s the proper thing to do in this situation. A nurse is called to bring the patient to the scanner, but has no idea why. The scanning technician, who has only a vague idea why the scan is being done, does the scan and spits out a number, which ends up with me. I bring it to the specialist, who gives me a diagnosis and tells me to ask another specialist what the right medicine for that is. I ask the other specialist – who has only the sketchiest idea of the events leading up to the diagnosis – about the correct medicine, and she gives me a name and tells me to ask the pharmacist how to dose it. The pharmacist – who has only the vague outline of an idea who the patient is, what test he got, or what the diagnosis is – doses the medication. Then a nurse, who has no idea about any of this, gives the medication to the patient. Somehow, the system works and the patient improves.

Part of being an intern is adjusting to all of this, losing some of your delusions of heroism, getting used to the fact that you’re not going to be Dr. House, that you are at best going to be a very well-functioning gear in a vast machine that does often tedious but always valuable work. –Scott Alexander

The medical system does a hard thing, and it might not do it well, but it does it. There is too much complexity for any one person to have a grasp on it. There are dozens of mutually incomprehensible specialties. And the fact that [insert generic nurse here] doesn't have the faintest idea how to measure electrolytes in blood, or build an MRI machine, or even what's going on with the patient next door, is a feature, not a bug.

The medical system doesn’t run on exceptional people–it runs on average people, with predictably average levels of skill, slots in working memory, ability to notice things, ability to not be distracted thinking about their kid's problems at school, etc. And it doesn’t run under optimal conditions; it runs under average conditions. Which means working overtime at four am, short staffing, three patients in the ER waiting for ICU beds, etc.

Sure, there are problems with the machine. The machine is inefficient. The machine doesn’t have all the correct incentives lined up. The machine does need fixing–but I would argue that from within the machine, as one of its parts, taking heroic responsibility for your own sphere of control isn’t the way to go about fixing the system.

As an [insert generic nurse here], my sphere of control is the four walls of my patient's room. Heroic responsibility for my patient would mean...well, optimizing for them. In the most extreme case, it might mean killing the itinerant stranger to obtain a compatible kidney. In the less extreme case, I spend all my time giving my patient great care, instead of helping the nurse in the room over, whose patient is much sicker. And then sometimes my patient will die, and there will be literally nothing I can do about it, their death was causally set in stone twenty-four hours before they came to the hospital.

I kind of predict that the results of installing heroic responsibility as a virtue, among average humans under average conditions, would be a) everyone stepping on everyone else’s toes, and b) 99% of them quitting a year later.

Recursive Heroic Responsibility

If you're a gear in a machine, and you notice that the machine is broken, your options are a) be a really good gear, or b) take heroic responsibility for your sphere of control, and probably break something...but that's a false dichotomy. Humans are very flexible tools, and there are also infinite other options, including "step out of the machine, figure out who's in charge of this shit, and get it fixed."

You can't take responsibility for the individual case, but you can for the system-level problem, the long view, the one where people eat badly and don't exercise and at age fifty, morbidly obese with a page-long medical history, they end up as a slow-motion train wreck in an ICU somewhere. Like in poker, you play to win money–positive EV–not to win hands. Someone’s going to be the Minister of Health for Canada, and they’re likely to be in a position where taking heroic responsibility for the Canadian health care system makes things better. And probably the current Minister of Health isn’t being strategic, isn’t taking the level of responsibility that they could, and the concept of heroic responsibility would be the best thing for them to encounter.

So as an [insert generic nurse here], working in a small understaffed ICU, watching the endless slow-motion train wreck roll by...maybe the actual meta-level right thing to do is to leave, and become the freaking Minister of Health, or befriend the current one and introduce them to the concept of being strategic.

But it's fairly obvious that that isn't the right action for all the nurses in that situation. I'm wary of advice that doesn't generalize. What's the difference between the nurse who should leave in order to take meta-level responsibility, and the nurse who should stay because she's needed as a gear?

Heroic responsibility for average humans under average conditions

I can predict at least one thing that people will say in the comments, because I've heard it hundreds of times–that Swimmer963 is a clear example of someone who should leave nursing, take the meta-level responsibility, and do something higher impact, for the usual reasons. Because she's smart. Because she's rational. Whatever.

Fine. This post isn't about me. Whether I like it or not, the concept of heroic responsibility is now a part of my value system, and I probably am going to leave nursing.

But what about the other nurses on my unit, the ones who are competent and motivated and curious and really care? Would familiarity with the concept of heroic responsibility help or hinder them in their work? Honestly, I predict that they would feel alienated, that they would assume I held a low opinion of them (which I don't, and I really don't want them to think that I do), and that they would flinch away and go back to the things that they were doing anyway, the role where they were comfortable–or that, if they did accept it, it would cause them to burn out. So as a consequentialist, I'm not going to tell them.

And yeah, that bothers me. Because I'm not a special snowflake. Because I want to live in a world where rationality helps everyone. Because I feel like the reason they would react that way isn't because of anything about them as people, or because heroic responsibility is a bad thing, but because I'm not able to communicate to them what I mean. Maybe stupid reasons. Still bothers me.

False Friends and Tone Policing

18 June 2014 06:20PM

TL;DR: It can be helpful to reframe arguments about tone, trigger warnings, and political correctness as concerns about false cognates/false friends.  You may be saying something that sounds innocuous to you, but translates to something much stronger/more vicious to your audience.  Cultivating a debating demeanor that invites requests for tone concerns can give you more information about the best way to avoid distractions and have a productive dispute.

When I went on a two-week exchange trip to China, it was clear the cultural briefing was informed by whatever mistakes or misunderstandings had occurred on previous trips, recorded and relayed to us so that we wouldn't think, for example, that our host siblings were hitting on us if they took our hands while we were walking.

But the most memorable warning had to do with Mandarin filler words.  While English speakers cover gaps with "uh" "um" "ah" and so forth, the equivalent filler words in Mandarin had an African-American student on a previous trip pulling aside our tour leader and saying he felt a little uncomfortable since his host family appeared to be peppering all of their comments with "nigga, nigga, nigga..."

As a result, we all got warned ahead of time.  The filler word (那个 - nèige) was a false cognate that, although innocuous to the speaker, sounded quite off-putting to us.  It helped to be warned, but it still required some deliberate, cognitive effort to remind myself that I wasn't actually hearing something awful and to rephrase it in my head.

When I've wound up in arguments about tone, trigger warnings, and taboo words, I'm often reminded of that experience in China.  Limiting language can prompt suspicion of closing off conversations, but in a number of cases, when my friends have asked me to rephrase, it's because the word or image I was using was as distracting (however well meant) as 那个 was in Beijing.

It's possible to continue a conversation with someone whose every statement is laced with "nigga" but it takes effort.  And no one is obligated to expend their energy on having a conversation with me if I'm making it painful or difficult for them, even if it's as the result of a false cognate (or, as the French would say, false friend) that sounds innocuous to me but awful to my interlocutor.  If I want to have a debate at all, I need to stop doing the verbal equivalent of assaulting my friend to make any progress.

It can be worth it to pause and reconsider your language even if the offensiveness of a word or idea is exactly the subject of your dispute.  When I hosted a debate on "R: Fire Eich" one of the early speakers made it clear that, in his opinion, opposing gay marriage was logically equivalent to endorsing gay genocide (he invoked a slippery slope argument back to the dark days of criminal indifference to AIDS).

Pretty much no one in the room (whatever their stance on gay marriage) agreed with this equivalence, but we could all agree it was pretty lucky that this person had spoken early in the debate, so that we understood how he was hearing our speeches.  If every time someone said "conscience objection," this speaker was appending "to enable genocide," the fervor and horror with which he questioned us made a lot more sense, and didn't feel like personal viciousness.  Knowing how high the stakes felt to him made it easier to have a useful conversation.

This is a large part of why I objected to PZ Myers's deliberate obtuseness during the brouhaha he sparked when he asked readers to steal him a consecrated Host from a Catholic church so that he could desecrate it.  PZ ridiculed Catholics for getting upset that he was going to "hurt" a piece of bread, even though the Eucharist is a fairly obvious example of a false cognate that is heard/received differently by Catholics and atheists.  (After all, if it wasn't holy to someone, he wouldn't be able to profane it).  In PZ's incident, it was as though we had informed our Chinese hosts about the 那个/nigga confusion, and they had started using it more boisterously, so that it would be clearer to us that they didn't find it offensive.

We were only able to defuse the awkwardness in China for two reasons.

1. The host family was so nice, aside from this one provocation, that the student noticed he was confused and sought advice.
2. There was someone on hand who understood both groups well enough to serve as an interpreter.

In an ordinary argument (especially one that takes place online) it's up to you to be visibly virtuous enough that, if you happen to be using a vicious false cognate, your interlocutor will find that odd, not of a piece with your other behavior.

That's one reason my debating friend did bother explaining explicitly the connection he saw between opposition to gay marriage and passive support of genocide -- he trusted us enough to think that we wouldn't endorse the implications of our arguments if he made them obvious.  In the P.Z. dispute, when Catholic readers found him as the result of the stunt, they didn't have any such trust.

It's nice to work to cultivate that trust, and to be the kind of person your friends do approach with requests for trigger warnings and tone shifts.  For one thing, I don't want to use emotionally intense false cognates and not know it, any more than I would want to be gesticulating hard enough to strike my friend in the face without noticing.  For the most part, I prefer to excise the distraction, so it's easier for both of us to focus on the heart of the dispute, but, even if you think that the controversial term is essential to your point, it's helpful to know it causes your friend pain, so you have the opportunity to salve it some other way.

P.S. Arnold Kling's The Three Languages of Politics is a short read and a nice introduction to what political language you're using that sounds like horrible false cognates to people rooted in different ideologies.

P.P.S. I've cross-posted this on my usual blog, but am trying out cross-posting to Discussion sometimes.

Come up with better Turing Tests

10 June 2014 10:47AM

So the Turing test has been "passed", and the general consensus is that this was achieved in a very unimpressive way - the 13 year old Ukrainian persona was a cheat, the judges were incompetent, etc... These are all true, though the test did pass Turing's original criteria - and there are far more people willing to be dismissive of those criteria in retrospect than were in advance. It happened about 14 years later than Turing had been anticipating, which makes it quite a good prediction for 1950 (in my personal view, Turing made two mistakes that compensated - the "average interrogator" was a much lower bar than he thought, but progress on the subject would be much slower than he thought).

But anyway, the main goal now, as suggested by Toby Ord and others, is to design a better Turing test, something that can give AI designers something to aim at, and that would be a meaningful test of abilities. The aim is to ensure that if a program passes these new tests, we won't be dismissive of how it was achieved.

Here are a few suggestions I've heard about or thought about recently; can people suggest more and better ideas?

1. Use proper control groups. 30% of judges thinking that a program is human is meaningless unless the judges also compare with actual humans. Pair up a human subject with a program, and the role of the judge is to establish which of the two subjects is the human and which is not.
2. Toss out the persona tricks - no 13 year-olds, nobody with poor English skills. It was informative about human psychology that these tricks work, but we shouldn't allow them in future. All human subjects will have adequate English and typing skills.
3. On that subject, make sure the judges and subjects are properly motivated (financial rewards, prizes, prestige...) to detect or appear human. We should also brief them that the usual conversational approach of establishing which kind of human one is dealing with is not useful for determining whether one is dealing with a human at all.
4. Use only elite judges. For instance, if Scott Aaronson can't figure it out, the program must have some competence.
5. Make a collection of generally applicable approaches (such as the Winograd Schemas) available to the judges, while emphasising they will have to come up with their own exact sentences, since anything online could have been used to optimise the program already.
6. My favourite approach is to test the program on a task they were not optimised for. A cheap and easy way of doing that would be to test them on novel ASCII art.
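The paired design in proposal 1 also gives a clean statistical readout: each trial forces a binary choice, so judge performance can be compared against coin-flipping. A minimal sketch (the function name and the numbers are illustrative, not from any actual test):

```python
from math import comb

def p_better_than_chance(correct, trials):
    """One-sided binomial test: the probability of correctly identifying
    the human in at least `correct` of `trials` human/program pairs,
    if the judge is merely guessing at 50%. Small values suggest the
    judges can genuinely tell the two apart."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A judge who picks out the human in 8 of 10 pairs:
print(round(p_better_than_chance(8, 10), 4))   # 0.0547 -- suggestive, not decisive
```

Under this framing, a program "passes" not when it fools some fixed fraction of judges, but when motivated judges can no longer beat chance at telling it from the paired human.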

My current method would be the lazy one of simply typing this, then waiting, arms folded:

"If you want to prove you're human, simply do nothing for 4 minutes, then re-type this sentence I've just written here, skipping one word out of 2".

Questioning and Respect

10 June 2014 10:52AM
A: [Surprising fact]
B: [Question]

When someone has a claim questioned, there are two common responses. One is to treat the question as a challenge, intended as an insult or indicating a lack of trust. If you have this model of interaction you think people should take your word for things, and feel hurt when they don't. Another response is to treat the question as a signal of respect: they take what you're saying seriously and are trying to integrate it into their understanding of the world. If you have this model of interaction then it's the people who smile, nod, and give no indication of their disagreement that are being disrespectful.

Within either of these groups you can just follow the social norm, but it's harder across groups. Recently I was talking to a friend who claimed that in their state income taxes per dollar went down as you earned more. This struck me as really surprising and kind of unlikely: usually it goes the other way around. [1] I'm very much in the latter group described above, while I was pretty sure my friend was in the former. Even though I suspected they would treat it as disrespectful if I asked for details and tried to confirm their claim, it would have felt much more disrespectful for me to just pretend to accept it and move on. What do you do in situations like this?

(Especially given that I think the "disagreement as respect" version builds healthier communities...)

[1] Our tax system does have regressive components, where poor people sometimes pay a higher percentage of their income as tax than richer people, but it's things like high taxes on cigarettes (which rich people don't consume as much), sales taxes (rich people spend less of their income), and a lower capital gains tax rate (poorer people earn way less in capital gains). I tried to clarify to see if this is what my friend meant, but they were clear that they were talking about "report your income to the state, get charged a higher percentage as tax if your income is lower".

I also posted this on my blog.

Examples of Rationality Techniques adopted by the Masses

12 07 June 2014 02:03PM

Hi Everyone,

I was discussing LessWrong and rationality with a few people the other day, and I hit upon a common snag in the conversation.

My conversation partners agreed that rationality is a good idea in general, and that there are things you personally can do to improve your decision-making. But their point of view was that, while this is a nice ideal to strive for yourself, there's little progress that could be made in the general population, who will remain irrational. Since one of the missions of CFAR/LW is to raise the sanity waterline, this is of course a problem.

So here's my question, something I was unable to think of in the heat of the argument: what are good examples of rationality techniques that have already become commonly used in the general population? E.g., one could say "the scientific method", which is certainly a kind of rationality technique that's seeing semi-wide adoption (though nowhere near universal). Are there any other examples? If you sent a random person from today back in time, would there be anything they could teach people from the old days in terms of general thinking, other than specific advances in science?

Using vs. evaluating (or, Why I don't come around here no more)

23 20 January 2014 02:36AM

[Summary: Trying to use new ideas is more productive than trying to evaluate them.]

I haven't posted to LessWrong in a long time. I have a fan-fiction blog where I post theories about writing and literature. Topics don't overlap at all between the two websites (so far), but I prioritize posting there much higher than posting here, because responses seem more productive there.

The key difference, I think, is that people who read posts on LessWrong ask whether they're "true" or "false", while the writers who read my posts on writing want to write. If I say something that doesn't ring true to one of them, he's likely to say, "I don't think that's quite right; try changing X to Y," or, "When I'm in that situation, I find Z more helpful", or, "That doesn't cover all the cases, but if we expand your idea in this way..."

Whereas on LessWrong a more typical response would be, "Aha, I've found a case for which your step 7 fails! GOTCHA!"

It's always clear from the context of a writing blog why a piece of information might be useful. It often isn't clear how a LessWrong post might be useful. You could blame the author for not providing you with that context. Or, you could be proactive and provide that context yourself, by thinking as you read a post about how it fits into the bigger framework of questions about rationality, utility, philosophy, ethics, and the future, and about what questions and goals of yours it might be relevant to.

Mechanism Design: Constructing Algorithms for Strategic Agents

34 30 April 2014 06:37PM

tl;dr Mechanism design studies how to design incentives for fun and profit. A puzzle about whether or not to paint a room is posed. A modeling framework is introduced, with lots of corresponding notation.

Mechanism design is a framework for constructing institutions for group interactions, giving us a language for the design of everything from voting systems to school admissions to auctions to crowdsourcing. Think of it as the engineering side of game theory, building algorithms for strategic agents. In game theory, the primary goal is to answer the question, “Given agents who can take some actions that will lead to some payoffs, what do we expect to happen when the agents strategically interact?” In other words, game theory describes the outcomes of fixed scenarios. In contrast, mechanism design flips the question around and asks, “Given some goals, what payoffs should agents be assigned for the right outcome to occur when agents strategically interact?” The rules of the game are ours to choose, and, within some design constraints, we want to find the best possible ones for a situation.
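The two directions described above can be sketched in a few lines of code. This is a toy example with entirely hypothetical payoffs (it is not the room-painting puzzle from the tl;dr): the game-theory direction takes a fixed payoff matrix and finds a dominant strategy; the mechanism-design direction searches over a design knob (here, a subsidy for contributing) until the outcome we want becomes dominant.

```python
# Toy sketch of game theory vs. mechanism design (hypothetical payoffs).
# payoffs maps an action profile (a0, a1) to a payoff tuple (u0, u1).

STRATEGIES = ["contribute", "free_ride"]

def payoff_of(payoffs, player, own, opp):
    """Payoff to `player` when playing `own` against an opponent playing `opp`."""
    profile = (own, opp) if player == 0 else (opp, own)
    return payoffs[profile][player]

def dominant_strategy(payoffs, player):
    """Return a strategy that is weakly best against every opponent action,
    or None if the player has no dominant strategy."""
    for s in STRATEGIES:
        if all(
            payoff_of(payoffs, player, s, opp) >= payoff_of(payoffs, player, t, opp)
            for t in STRATEGIES if t != s
            for opp in STRATEGIES
        ):
            return s
    return None

# Game theory: analyze a fixed, prisoner's-dilemma-like game.
base = {
    ("contribute", "contribute"): (3, 3),
    ("contribute", "free_ride"): (0, 4),
    ("free_ride", "contribute"): (4, 0),
    ("free_ride", "free_ride"): (1, 1),
}
print(dominant_strategy(base, 0))  # free_ride: the socially worse outcome

# Mechanism design: we choose the rules. Add a subsidy s for contributing
# and find the smallest s that makes "contribute" dominant.
def with_subsidy(s):
    return {
        profile: tuple(u + (s if action == "contribute" else 0)
                       for action, u in zip(profile, payoff_pair))
        for profile, payoff_pair in base.items()
    }

for s in [0.5, 1.0, 1.5]:
    if dominant_strategy(with_subsidy(s), 0) == "contribute":
        print(f"a subsidy of {s} makes contributing dominant")
        break
```

The search over subsidy values stands in for the general problem: rather than predicting play in a given game, we pick the payoff rules so that strategic play produces the outcome we want.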

Although many people, even high-profile theorists, doubt the usefulness of game theory, its application in mechanism design is one of the major success stories of modern economics. Spectrum license auctions designed by economists paved the way for modern cell-phone networks and garnered billions in revenue for the US and European governments. Tech companies like Google and Microsoft employ theorists to improve advertising auctions. Economists like Al Roth and computer scientists like Tuomas Sandholm have been instrumental in establishing kidney exchanges to facilitate organ transplants, while others have been active in the redesign of public school admissions in Boston, Chicago, and New Orleans.

The objective of this post is to introduce all the pieces of a mechanism design problem, providing the setup for actual conclusions later on. I assume you have some basic familiarity with game theory, at the level of understanding the concepts of dominant strategies and Nash equilibria. Take a look at Yvain's Game Theory Intro if you'd like to brush up.