
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

A Return to Discussion

25 sarahconstantin 27 November 2016 01:59PM

Epistemic Status: Casual

It’s taken me a long time to fully acknowledge this, but people who “come from the internet” are no longer a minority subculture.  Senators tweet and suburban moms post Minion memes. Which means that talking about trends in how people socialize on the internet is not a frivolous subject; it’s relevant to how people interact, period.

There seems to have been an overall drift towards social networks over blogs and forums in general, and in particular things like:

  • the drift of commentary from personal blogs to “media” aggregators like The Atlantic, Vox, and Breitbart
  • the migration of fandom from LiveJournal to Tumblr
  • Facebook and Twitter as the places where links and discussions go

At the moment I’m not empirically tracking any trends like this, and I’m not confident in what exactly the major trends are — maybe in future I’ll start looking into this more seriously. Right now, I have a sense of things from impression and hearsay.

But one thing I have noticed personally is that people have gotten intimidated by more formal and public kinds of online conversation.  I know quite a few people who used to keep a “real blog” and have become afraid to touch it, preferring instead to chat on social media.  It’s a weird kind of perfectionism — nobody ever imagined that blogs were meant to be masterpieces.  But I do see people fleeing towards more ephemeral, more stream-of-consciousness types of communication, or communication that involves no words at all (reblogging, image-sharing, etc.)  There seems to be a fear of becoming too visible as a distinctive writing voice.

For one rather public and hilarious example, witness Scott Alexander’s  flight from LessWrong to LiveJournal to a personal blog to Twitter and Tumblr, in hopes that somewhere he can find a place isolated enough that nobody will notice his insight and humor. (It hasn’t been working.)

What might be going on here?

Of course, there are pragmatic concerns about reputation and preserving anonymity. People don’t want their writing to be found by judgmental bosses or family members.  But that’s always been true — and, at any rate, social networking sites are often less anonymous than forums and blogs.

It might be that people have become more afraid of trolls, or that trolling has gotten worse. Fear of being targeted by harassment or threats might make people less open and expressive.  I’ve certainly heard many writers say that they’ve shut down a lot of their internet presence out of exhaustion or literal fear.  And I’ve heard serious enough horror stories that I respect and sympathize with people who are on their guard.

But I don’t think that really explains why one would drift towards more ephemeral media. Why short-form instead of long-form?  Why streaming feeds instead of searchable archives?  Trolls are not known for their patience and rigor.  Single tweets can attract storms of trolls.  So troll-avoidance is not enough of an explanation, I think.

It’s almost as though the issue were accountability.  

A blog is almost a perfect medium for personal accountability. It belongs to you, not your employer, and not the hivemind.  The archives are easily searchable. The posts are permanently viewable. Everything embarrassing you’ve ever written is there.  If there’s a comment section, people are free to come along and poke holes in your posts. This leaves people vulnerable in a certain way. Not just to trolls, but to critics.

You can preempt embarrassment by declaring that you’re doing something shitty anyhow. That puts you in a position of safety. I think that a lot of online mannerisms, like using all-lowercase punctuation, or using really self-deprecating language, or deeply nested meta-levels of meme irony, are ways of saying “I’m cool because I’m not putting myself out there where I can be judged.  Only pompous idiots are so naive as to think their opinions are actually valuable.”

Here’s another angle on the same issue.  If you earnestly, explicitly say what you think, in essay form, and if your writing attracts attention at all, you’ll attract swarms of earnest, bright-but-not-brilliant, mostly young white male commenters who want to share their opinions, because (perhaps naively) they think their contributions will be welcomed. It’s basically just “oh, are we playing a game? I wanna play too!”  If you don’t want to play with them — maybe because you’re talking about a personal or highly technical topic and don’t value their input, maybe because your intention was just to talk to your friends and not the general public, whatever — you’ll find this style of interaction aversive.  You’ll read it as sealioning. Or mansplaining.  Or “well, actually”-ing.

I think what’s going on with these kinds of terms is something like:

Author: “Hi! I just said a thing!”

Commenter: “Ooh cool, we’re playing the Discussion game! Can I join?  Here’s my comment!”  (Or, sometimes, “Ooh cool, we’re playing the Verbal Battle game!  I wanna play! Here’s my retort!”)

Author: “Ew, no, I don’t want to play with you.”

There’s a bit of a race/gender/age/educational slant to the people playing the “commenter” role, probably because our society rewards some people more than others for playing the discussion game.  Privileged people are more likely to assume that they’re automatically welcome wherever they show up, which is why others tend to get annoyed at them.

Privileged people, in other words, are more likely to think they live in a high-trust society, where they can show up to strangers and be greeted as a potential new friend, where open discussion is an important priority, where they can trust and be trusted, since everybody is playing the “let’s discuss interesting things!” game.

The unfortunate reality is that most of the world doesn’t look like that high-trust society.

On the other hand, I think the ideal of open discussion, and to some extent the past reality of internet discussion, is a lot more like a high-trust society where everyone is playing the “discuss interesting things” game, than it is like the present reality of social media.

A lot of the value generated on the 90’s and early 2000’s internet was built on people who were interested in things sharing information about those things with like-minded individuals.  Think of the websites that were just catalogues of information about someone’s obsessions. (I remember my family happily gathering round the PC when I was a kid, to listen to all the national anthems of the world, which some helpful net denizen had collated all in one place.)  There is an enormous shared commons that is produced when people are playing the “share info about interesting stuff” game.  Wikipedia. StackExchange. Pure public-spiritedness alone couldn’t have motivated it; people don’t produce that much free work out of duty.  There are lower motivations: the desire to show off how clever you are, the desire to be a know-it-all, the desire to correct other people.  And there are higher motivations — obsession, fascination, the delight of infodumping. This isn’t some higher plane of civic virtue; it’s just ordinary nerd behavior.

But in ordinary nerd behavior, there are some unusual strengths.  Nerds are playing the “let’s have discussions!” game, which means that they’re unembarrassed about sharing their take on things, and unembarrassed about holding other people accountable for mistakes, and unembarrassed about being held accountable for mistakes.  It’s a sort of happy place between perfectionism and laxity.  Nobody is supposed to get everything right on the first try; but you’re supposed to respond intelligently to criticism. Things will get poked at, inevitably.  Poking is friendly behavior. (Which doesn’t mean it’s not also aggressive behavior.  Play and aggression are always intermixed.  But it doesn’t have to be understood as scary, hostile, enemy.)

Nerd-format discussions are definitely not costless. You get discussions of advanced/technical topics being mobbed by clueless opinionated newbies, or discussions of deeply personal issues being hassled by clueless opinionated randos.  You get endless debate over irrelevant minutiae. There are reasons why so many people flee this kind of environment.

But I would say that these disadvantages are necessary evils that, while they might be possible to mitigate somewhat, go along with having a genuinely public discourse and public accountability.

We talk a lot about social media killing privacy, but there’s also a way in which it kills publicness, by allowing people to curate their spaces by personal friend groups, and retreat from open discussions.   In a public square, any rando can ask an aristocrat to explain himself.  If people hide from public squares, they can’t be exposed to Socrates’ questions.

I suspect that, especially for people who are even minor VIPs (my level of online fame, while modest, is enough to create some of this effect), it’s tempting to become less available to the “public”, less willing to engage with strangers, even those who seem friendly and interesting.  I think it’s worth fighting this temptation.  You don’t get the gains of open discussion if you close yourself off.  You may not capture all the gains yourself, but that’s how the tragedy of the commons works; a bunch of people have to cooperate and trust if they’re going to build good stuff together.  And what that means, concretely, on the margin, is taking more time to explain yourself and engage intellectually with people who, from your perspective, look dumb, clueless, crankish, or uncool.

Some of the people I admire most, including theoretical computer scientist Scott Aaronson, are notable for taking the time to carefully debunk crackpots (and offer them the benefit of the doubt in case they are in fact correct.)  Is this activity a great ROI for a brilliant scientist, from a narrowly selfish perspective?  No. But it’s praiseworthy, because it contributes to a truly open discussion. If scientists take the time to investigate weird claims from randos, they’re doing the work of proving that science is a universal and systematic way of thinking, not just an elite club of insiders.  In the long run, it’s very important that somebody be doing that groundwork.

Talking about interesting things, with friendly strangers, in a spirit of welcoming open discussion and accountability rather than fleeing from it, seems really underappreciated today, and I think it’s time to make an explicit push towards building places online that have that quality.

In that spirit, I’d like to recommend LessWrong to my readers. For those not familiar with it, it’s a discussion forum devoted to things like cognitive science, AI, and related topics, and, back in its heyday a few years ago, it was suffused with the nerdy-discussion-nature. It had all the enthusiasm of late-night dorm-room philosophy discussions — except that some of the people you’d be having the discussions with were among the most creative people of our generation.  These days, posting and commenting is a lot sparser, and the energy is gone, but I and some other old-timers are trying to rekindle it. I’m crossposting all my blog posts there from now on, and I encourage everyone to check out and join the discussions there.

(Cross-posted from my blog.)

Epistemic Effort

24 Raemon 29 November 2016 04:08PM

Epistemic Effort: Thought seriously for 5 minutes about it. Thought a bit about how to test it empirically. Spelled out my model a little bit. I'm >80% confident this is worth trying and seeing what happens. Spent 45 min writing post.

I've been pleased to see "Epistemic Status" hit a critical mass of adoption - I think it's a good habit for us to have. In addition to letting you know how seriously to take an individual post, it sends a signal about what sort of discussion you want to have, and helps remind other people to think about their own thinking.

I have a suggestion for an evolution of it - "Epistemic Effort" instead of status. Instead of "how confident you are", it's more of a measure of "what steps did you actually take to make sure this was accurate?" with some examples including:

  • Thought about it musingly
  • Made a 5 minute timer and thought seriously about possible flaws or refinements
  • Had a conversation with other people you epistemically respect and who helped refine it
  • Thought about how to do an empirical test
  • Thought about how to build a model that would let you make predictions about the thing
  • Did some kind of empirical test
  • Did a review of relevant literature
  • Ran a Randomized Controlled Trial
[Edit: the intention with these examples is for it to start with things that are fairly easy to do to get people in the habit of thinking about how to think better, but to have it quickly escalate to "empirical tests, hard to fake evidence and exposure to falsifiability"]

A few reasons I think this is worth trying (most of these reasons are "things that seem likely to me" which I haven't made any formal effort to test - they come from some background in game design and from reading some books on habit formation, most of which weren't very well cited):
  • People are more likely to put effort into being rational if there's a relatively straightforward, understandable path to do so
  • People are more likely to put effort into being rational if they see other people doing it
  • People are more likely to put effort into being rational if they are rewarded (socially or otherwise) for doing so.
  • It's not obvious that people will get _especially_ socially rewarded for doing something like "Epistemic Effort" (or "Epistemic Status"), but there are mild social rewards just for doing something you see other people doing, and a mild personal reward simply for doing something you believe to be virtuous (I wanted to say "dopamine reward" but then realized I honestly don't know if that's the mechanism; call it a small internal brain-happy feeling).
  • Less Wrong etc is a more valuable project if more people involved are putting more effort into thinking and communicating "rationally" (i.e. making an effort to make sure their beliefs align with the truth, and making sure to communicate so other people's beliefs align with the truth)
  • People range in their ability / time to put a lot of epistemic effort into things, but if there are easily achievable, well established "low end" efforts that are easy to remember and do, this reduces the barrier for newcomers to start building good habits. Having a nice range of recommended actions can provide a pseudo-gamified structure where there's always another slightly harder step available to you.
  • In the process of writing this very post, I actually went from planning a quick, 2 paragraph post to the current version, when I realized I should really eat my own dogfood and make a minimal effort to increase my epistemic effort here. I didn't have that much time so I did a couple simpler techniques. But even that I think provided a lot of value.
Results of thinking about it for 5 minutes.

  • It occurred to me that explicitly demonstrating the results of putting epistemic effort into something might be motivational both for me and for anyone else thinking about doing this, hence this entire section. (This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.')
  • One failure mode is that people end up putting minimal, token effort into things (e.g. randomly trying something on a couple of double-blinded people and calling it a randomized controlled trial).
  • Another is that people might end up defaulting to whatever the "common" sample efforts are, instead of thinking more creatively about how to refine their ideas. I think the benefit of providing a clear path to people who weren't thinking about this at all outweighs the risk of some people ending up less agenty about their epistemology, but it seems like something to be aware of.
  • I don't think it's worth the effort to run a "serious" empirical test of this, but I do think it'd be worth the effort, if a number of people started doing this on their posts, to run a followup informal survey asking "Did you do this? Did it work out for you? Do you have feedback?"
  • A neat nice-to-have, if people actually started adopting this and it proved useful, might be for it to automatically appear at the top of new posts, along with a link to a wiki entry that explained what the deal was.

Next actions, if you found this post persuasive:

Next time you're writing any kind of post intended to communicate an idea (whether on Less Wrong, Tumblr or Facebook), try adding "Epistemic Effort: " to the beginning of it. If it was intended to be a quick, lightweight post, just write it in its quick, lightweight form.

After the quick, lightweight post is complete, think about whether it'd be worth doing something as simple as "set a 5 minute timer and think about how to refine/refute the idea". If not, just write "thought about it musingly" after Epistemic Effort. If so, start thinking about it more seriously and see where it leads.

While thinking about it for 5 minutes, some questions worth asking yourself:
  • If this were wrong, how would I know?
  • What actually led me to believe this was a good idea? Can I spell that out? In how much detail?
  • Where might I check to see if this idea has already been tried/discussed?
  • What pieces of the idea might I peel away or refine to make the idea stronger? Are there individual premises I might be wrong about? Do they invalidate the idea? Does removing them lead to a different idea?

[Link] On Trying Not To Be Wrong

17 sarahconstantin 11 November 2016 07:25PM

[Link] If we can't lie to others, we will lie to ourselves

15 paulfchristiano 26 November 2016 10:29PM

Making intentions concrete - Trigger-Action Planning

14 Kaj_Sotala 01 December 2016 08:34PM

I'll do it at some point.

I'll answer this message later.

I could try this sometime.

For most people, all of these thoughts have the same result. The thing in question likely never gets done - or if it does, it's only after remaining undone for a long time and causing a considerable amount of stress. Leaving the "when" ambiguous means that there isn't anything that would propel you into action.

What kinds of thoughts would help avoid this problem? Here are some examples:

  • When I find myself using the words "later" or "at some point", I'll decide on a specific time when I'll actually do it.
  • If I'm given a task that would take under five minutes, and I'm not in a pressing rush, I'll do it right away.
  • When I notice that I'm getting stressed out about something that I've left undone, I'll either do it right away or decide when I'll do it.
Picking a specific time or situation to serve as the trigger of the action makes it much more likely that it actually gets done.

Could we apply this more generally? Let's consider these examples:
  • I'm going to get more exercise.
  • I'll spend less money on shoes.
  • I want to be nicer to people.
These goals all have the same problem: they're vague. How will you actually implement them? As long as you don't know, you're also going to miss potential opportunities to act on them.

Let's try again:
  • When I see stairs, I'll climb them instead of taking the elevator.
  • When I buy shoes, I'll write down how much money I've spent on shoes this year.
  • When someone does something that I like, I'll thank them for it.
These are much better. They contain both a concrete action to be taken, and a clear trigger for when to take it.
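These when-then rules really are if-then rules in the programmer's sense. As a toy sketch (the triggers, actions, and matching logic here are invented for illustration, not anything from the academic literature):

```python
# A trigger-action plan is just a trigger -> action rule.
# The example triggers and actions below are made up for illustration.
taps = [
    (lambda situation: "stairs" in situation, "climb the stairs"),
    (lambda situation: "buying shoes" in situation, "write down shoe spending"),
    (lambda situation: "someone did something I like" in situation, "thank them"),
]

def actions_for(situation):
    """Return the actions whose triggers fire in this situation."""
    return [action for trigger, action in taps if trigger(situation)]

print(actions_for("I see stairs ahead"))  # -> ['climb the stairs']
```

The point of the sketch is the shape: a concrete, checkable trigger on the left, a single concrete action on the right, and nothing ambiguous in between.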

Turning vague goals into trigger-action plans

Trigger-action plans (TAPs; known as "implementation intentions" in the academic literature) are "when-then" ("if-then", for you programmers) rules used for behavior modification [i]. A meta-analysis covering 94 studies and 8461 subjects [ii] found them to improve people's ability to achieve their goals [iii]. The goals in question included ones such as reducing the amount of fat in one's diet, getting exercise, using vitamin supplements, carrying on with a boring task, determination to work on challenging problems, and calling out racist comments. Many studies also allowed the subjects to set their own, personal goals.

TAPs were found to work both in laboratory and real-life settings. The authors of the meta-analysis estimated the risk of publication bias to be small, as half of the studies included were unpublished ones.

Designing TAPs

TAPs work because they help us notice situations where we could carry out our intentions. They also help automate the intentions: when a person is in a situation that matches the trigger, they are much more likely to carry out the action. Finally, they force us to turn vague and ambiguous goals into more specific ones.

A good TAP fulfills three requirements [iv]:
  • The trigger is clear. The "when" part is a specific, visible thing that's easy to notice. "When I see stairs" is good, "before four o'clock" is bad (when before four exactly?). [v]
  • The trigger is consistent. The action is something that you'll always want to do when the trigger is fulfilled. "When I leave the kitchen, I'll do five push-ups" is bad, because you might not have the chance to do five push-ups each time you leave the kitchen. [vi]
  • The TAP furthers your goals. Make sure the TAP is actually useful!
However, there is one group of people who may need to be cautious about using TAPs. One paper [vii] found that people who ranked highly on so-called socially prescribed perfectionism did worse on their goals when they used TAPs. These kinds of people are sensitive to other people's opinions about them, and are often highly critical of themselves. Because TAPs create an association between a situation and a desired way of behaving, it may make socially prescribed perfectionists anxious and self-critical. In two studies, TAPs made college students who were socially prescribed perfectionists (and only them) worse at achieving their goals.

For everyone else however, I recommend adopting this TAP:

When I set myself a goal, I'll turn it into a TAP.

Origin note

This article was originally published in Finnish. It draws heavily on CFAR's material, particularly the workbook from CFAR's November 2014 workshop.


[i] Gollwitzer, P. M. (1999). Implementation intentions: strong effects of simple plans. American psychologist, 54(7), 493.

[ii] Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta‐analysis of effects and processes. Advances in experimental social psychology, 38, 69-119.

[iii] Effect size d = .65, 95% confidence interval [.6, .7].

[iv] Gollwitzer, P. M., Wieber, F., Myers, A. L., & McCrea, S. M. (2010). How to maximize implementation intention effects. Then a miracle occurs: Focusing on behavior in social psychological theory and research, 137-161.

[v] Wieber, Odenthal & Gollwitzer (2009; unpublished study, discussed in [iv]) tested the effect of general and specific TAPs on subjects driving a simulated car. All subjects were given the goal of finishing the course as quickly as possible, while also damaging their car as little as possible. Subjects in the "general" group were additionally given the TAP, "If I enter a dangerous situation, then I will immediately adapt my speed". Subjects in the "specific" group were given the TAP, "If I see a black and white curve road sign, then I will immediately adapt my speed". Subjects with the specific TAP managed to damage their cars less than the subjects with the general TAP, without being any slower for it.

[vi] Wieber, Gollwitzer, et al. (2009; unpublished study, discussed in [iv]) tested whether TAPs could be made even more effective by turning them into an "if-then-because" form: "when I see stairs, I'll use them instead of taking the elevator, because I want to become more fit". The results showed that the "because" reasons increased the subjects' motivation to achieve their goals, but nevertheless made TAPs less effective.

The researchers speculated that the "because" might have changed the mindset of the subjects. While an "if-then" rule causes people to automatically do something, "if-then-because" leads people to reflect upon their motives and takes them from an implementative mindset to a deliberative one. Follow-up studies testing the effect of implementative vs. deliberative mindsets on TAPs seemed to support this interpretation. This suggests that TAPs are likely to work better if they can be carried out as consistently and with as little thought as possible.

[vii] Powers, T. A., Koestner, R., & Topciu, R. A. (2005). Implementation intentions, perfectionism, and goal progress: Perhaps the road to hell is paved with good intentions. Personality and Social Psychology Bulletin, 31(7), 902-912.

Downvotes temporarily disabled

12 Vaniver 01 December 2016 05:31PM

This is a stopgap measure until admins get visibility into comment voting, which will allow us to find sockpuppet accounts more easily.


The best place to track changes to the codebase is the github LW issues page.

[Link] Less costly signaling

12 paulfchristiano 22 November 2016 09:11PM

Matching donation fundraisers can be harmfully dishonest.

11 Benquo 11 November 2016 09:05PM

Anna Salamon, executive director of CFAR (named with permission), recently wrote to me asking for my thoughts on fundraisers using matching donations. (Anna, together with co-writer Steve Rayhawk, has previously written on community norms that promote truth over falsehood.) My response made some general points that I wish were more widely understood:

  • Pitching matching donations as leverage (e.g. "double your impact") misrepresents the situation by overassigning credit for funds raised.
  • This sort of dishonesty isn't just bad for your soul, but can actually harm the larger world - not just by eroding trust, but by causing people to misallocate their charity budgets.
  • "Best practices" for a charity tend to promote this kind of dishonesty, because they're precisely those practices that work no matter what your charity is doing.
  • If your charity is impact-oriented - if you care about outcomes rather than institutional success - then you should be able to do substantially better than "best practices".

So I'm putting an edited version of my response here.


[Link] Crony Beliefs

11 ete 03 November 2016 08:54PM

Sample means, how do they work?

10 Benquo 20 November 2016 09:04PM

You know how people make public health decisions about food fortification, and medical decisions about taking supplements, based on things like the Recommended Daily Allowance? Well, there's an article in Nutrients titled A Statistical Error in the Estimation of the Recommended Dietary Allowance for Vitamin D. This paper says the following about the info used to establish the US recommended daily allowance for vitamin D:

The correct interpretation of the lower prediction limit is that 97.5% of study averages are predicted to have values exceeding this limit. This is essentially different from the IOM’s conclusion that 97.5% of individuals will have values exceeding the lower prediction limit.

The whole point of looking at averages is that individuals vary a lot due to a bunch of random stuff, but if you take an average of a lot of individuals, that cancels out most of the noise, so the average varies hardly at all. How much variation there is from individual to individual determines the population variance. How much variation you'd expect in your average due to statistical noise from sample to sample determines the variation of the sample mean (its standard error).

When you look at frequentist statistical confidence intervals, they are generally expressing how big the ordinary range of variation is for your average. For instance, 90% of the time, your average will not be farther off from the "true" average than it is from the boundaries of your confidence interval. This is relevant for answering questions like, "does this trend look a lot bigger than you'd expect from random chance?" The whole point of looking at large samples is that the errors have a chance to cancel out, leading to a very small random variation in the mean, relative to the variation in the population. This allows us to be confident that even fairly small differences in the mean are unlikely to be due to random noise.
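A quick simulation makes this concrete. The numbers below are invented (a population with mean 70 and standard deviation 20, samples of 100), but they show the sample mean varying roughly √100 = 10 times less than individuals do:

```python
import random
import statistics

random.seed(0)

# Hypothetical population: individual values with a wide spread (SD ~ 20).
population = [random.gauss(70, 20) for _ in range(100_000)]

n = 100            # sample size
num_samples = 2000
sample_means = [
    statistics.mean(random.sample(population, n)) for _ in range(num_samples)
]

pop_sd = statistics.stdev(population)    # individual-to-individual variation, ~20
mean_sd = statistics.stdev(sample_means) # variation of the average, ~20 / sqrt(100) = ~2

print(f"population SD: {pop_sd:.1f}")
print(f"SD of sample means: {mean_sd:.1f}")
```

The average is about ten times more stable than any individual, which is exactly why a statement about the average says very little about the spread of individuals.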

The error here was taking the statistical properties of the mean and assuming that they applied to the population. In particular, the IOM looked at the dose-response curve for vitamin D, and came up with a distribution for the average response to vitamin D dosage. Based on their data, 97.5% of studies like theirs would be expected to find that 600 IU of vitamin D is enough for the average person.

They concluded from this that 97.5% of people get enough vitamin D from 600 IU.

This is not an arcane detail. This is confusing the attributes of a population, with the attributes of an average. This is bad. This is real, real bad. In any sane world, this is mathematical statistics 101 stuff. I can imagine that someone who's heard about a margin of error a lot doesn't understand this stuff, but anyone who has to actually use the term should understand this.

Political polling is a simple example. Let's say that a poll shows 48% of Americans voting for the Republican and 52% for the Democrat, with a 5% margin of error. This means that 95% of polls like this one are expected to have an average within 5 percentage points of the true average. This does not mean that 95% of individual Americans have somewhere between a 43% and 53% chance of voting for the Republican. Most of them are almost certainly decided on one candidate or the other. The average does not behave the same as the population. That's how fundamental this error is – it's like saying that all voters are undecided because the population is split.
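The polling version is easy to simulate too (again with made-up numbers: a 100,000-person electorate, 52% decided for one candidate, polls of 400 people, which gives roughly a 5-point margin of error):

```python
import random
import statistics

random.seed(1)

# Hypothetical electorate: every voter is fully decided, coded 0 or 1.
electorate = [1] * 52_000 + [0] * 48_000

n = 400  # poll size, for a margin of error of roughly 5 percentage points
polls = [statistics.mean(random.sample(electorate, n)) for _ in range(1000)]

# Poll averages cluster tightly around the true 52%...
share_within_moe = sum(abs(p - 0.52) <= 0.05 for p in polls) / len(polls)
print(f"poll averages within 5 points of truth: {share_within_moe:.0%}")

# ...but no individual is anywhere near 52%: every voter is exactly 0 or 1.
print(sorted(set(electorate)))  # -> [0, 1]
```

The margin of error describes the cloud of poll averages, not the distribution of voters; conflating the two is the same mistake the vitamin D paper identifies.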

Remember the famous joke about how the average family has two and a half kids? It's a joke because no one actually has two and a half kids. That's how fundamental this error is – it's like saying that there are people who have an extra half child hopping around. And this error caused actual harm:

The public health and clinical implications of the miscalculated RDA for vitamin D are serious. With the current recommendation of 600 IU, bone health objectives and disease and injury prevention targets will not be met. This became apparent in two studies conducted in Canada where, because of the Northern latitude, cutaneous vitamin D synthesis is limited and where diets contribute an estimated 232 IU of vitamin D per day. One study estimated that despite Vitamin D supplementation with 400 IU or more (including dietary intake that is a total intake of 632 IU or more) 10% of participants had values of less than 50 nmol/L. The second study reported serum 25(OH)D levels of less than 50 nmol/L for 15% of participants who reported supplementation with vitamin D. If the RDA had been adequate, these percentages should not have exceeded 2.5%. Herewith these studies show that the current public health target is not being met.

Actual people probably got hurt because of this. Some likely died.

This is also an example of scientific journals serving their intended purpose of pointing out errors, but it should never have gotten this far. This is a send-a-coal-burning-engine-under-the-control-of-a-drunk-engineer-into-the-Taggart-tunnel-when-the-ventilation-and-signals-are-broken level of negligence. I think of the people using numbers as the reliable ones, but that's not actually enough – you have to think with them, you have to be trying to get the right answer, you have to understand what the numbers mean.

I can imagine making this mistake in school, when it's low stakes. I can imagine making this mistake on my blog. I can imagine making this mistake at work if I'm far behind on sleep and on a very tight deadline. But if I were setting public health policy? If I were setting the official RDA? I'd try to make sure I was right. And I'd ask the best quantitative thinkers I know to check my numbers.

The article was published in 2014, and as far as I can tell, as of the publication of this blog post, the RDA is unchanged.

(Cross-posted from my personal blog.)

[Link] Expert Prediction Of Experiments

9 Yvain 29 November 2016 02:47AM

Using a Spreadsheet to Make Good Decisions: Five Examples

9 peter_hurford 28 November 2016 05:10PM

I've been told that LessWrong is coming back now, so I'm cross-posting this rationality post of interest from the Effective Altruism forum.


We all make decisions every day. Some of these decisions are pretty inconsequential, such as what to have for an afternoon snack. Some of these decisions are quite consequential, such as where to live or what to dedicate the next year of your life to. Finding a way to make these decisions better is important.

The folks at Charity Science Health and I have been using the same method to make many of our major decisions for the past four years – everything from where to live to deciding to create Charity Science Health itself. The method isn’t particularly novel, but we think it is quite underused.

Here it is, as a ten step process:

  1. Come up with a well-defined goal.

  2. Brainstorm many plausible solutions to achieve that goal.

  3. Create criteria through which you will evaluate those solutions.

  4. Create custom weights for the criteria.

  5. Quickly use intuition to prioritize the solutions on the criteria so far (e.g., high, medium, and low)

  6. Come up with research questions that would help you determine how well each solution fits the criteria

  7. Use the research questions to do shallow research into the top ideas (you can review more ideas depending on how long the research takes per idea, how important the decision is, and/or how confident you are in your intuitions)

  8. Use research to rerate and rerank the solutions

  9. Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable

  10. Repeat steps 8 and 9 until sufficiently confident in a decision.
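As a minimal sketch (not the authors' actual spreadsheet – the criteria, weights, and ratings here are invented), the scoring core of steps 3 through 8 looks like this:

```python
# Hypothetical criteria with custom weights (step 4).
CRITERIA_WEIGHTS = {"cost_effectiveness": 3, "evidence": 2, "scalability": 1}

# Intuition-based ratings (step 5): high = 3, medium = 2, low = 1.
ratings = {
    "option_a": {"cost_effectiveness": 3, "evidence": 1, "scalability": 2},
    "option_b": {"cost_effectiveness": 2, "evidence": 3, "scalability": 3},
}

def score(option_ratings):
    # Weighted sum of ratings across all criteria.
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in option_ratings.items())

# Rank solutions by weighted score (step 8 reruns this after research).
ranked = sorted(ratings, key=lambda name: score(ratings[name]), reverse=True)
print(ranked)  # ['option_b', 'option_a']
```

After the shallow research of step 7, you re-rate the options and re-run the sort; nothing else changes.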


Which charity should I start?

The definitive example for this process was the Charity Entrepreneurship project, where our team decided which charity would be the best possible charity to create.

Come up with a well-defined goal: I want to start an effective global poverty charity, where effective is taken to mean a low cost per life saved comparable to current GiveWell top charities.

Brainstorm many plausible solutions to achieve that goal: For this, we decided to start by looking at the intervention level. Since there are thousands of potential interventions, we placed a lot of emphasis on plausibly high effectiveness, and chose to look at GiveWell’s priority programs plus a few that we thought were worthy additions.

Create criteria through which you will evaluate those solutions / create custom weights for the criteria: For this decision, we spent a full month of our six-month project thinking through the criteria. We weighted criteria based on both importance and the expected variance between our options. We decided to strongly value cost-effectiveness, flexibility, and scalability. We moderately valued strength of evidence, metric focus, and indirect effects. We weakly valued logistical possibility and other factors.

Come up with research questions that would help you determine how well each solution fits the criteria: We came up with the following list of questions and research process.

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: Since this choice was important and we were pretty uninformed about the different interventions, we did shallow research into all of the choices. We then produced the following spreadsheet:

Afterwards, it was pretty easy to drop 22 out of the 30 possible choices and go with a top eight (the eight that ranked 7 or higher on our scale).


Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable / Repeat steps 8 and 9 until sufficiently confident in a decision: We then researched the top eight more deeply, with an eye to turning them into concrete charity ideas rather than amorphous interventions. When re-ranking, we came up with a top five and wrote up more detailed reports – SMS immunization reminders, tobacco taxation, iron and folic acid fortification, conditional cash transfers, and a poverty research organization. A key aspect of this narrowing was also talking to relevant experts, which we wish we had done earlier in the process, as it could quickly eliminate unpromising options.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: As we researched further, it became clear that SMS immunization reminders performed best on the criteria: highly cost-effective, with a high strength of evidence and easy testability. However, the other four finalists are also excellent opportunities, and we strongly invite other teams to create charities in those four areas.


Which condo should I buy?

Come up with a well-defined goal: I want to buy a condo that is (a) a good place to live and (b) a reasonable investment.

Brainstorm many plausible solutions to achieve that goal: For this, I searched around on Zillow and found several candidate properties.

Create criteria through which you will evaluate those solutions: For this decision, I looked at the purchasing cost of the condo, the HOA fee, whether or not the condo had parking, the property tax, how much I could expect to rent the condo out, whether or not the condo had a balcony, whether or not the condo had a dishwasher, how bright the space was, how open the space was, how large the kitchen was, and Zillow’s projection of future home value.

Create custom weights for the criteria: For this decision, I wanted to turn everything roughly into a personal dollar value, so that I could calculate benefits minus costs. The costs were the purchasing cost of the condo turned into a monthly mortgage payment, plus the annual HOA fee, plus the property tax. The benefits were the expected annual rent plus half of Zillow’s expectation for how much the property would increase in value over the next year (halved to be a touch conservative). I also added some more arbitrary bonuses: +$500 if there was a dishwasher, +$500 if there was a balcony, and up to +$1000 depending on how much I liked the size of the kitchen. I also added +$3600 if there was a parking space, since the space could be rented out to others, as I did not have a car. Solutions would be graded on this benefits-minus-costs model.
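A hedged sketch of that dollar-value model (all figures and parameter choices here are hypothetical, and the mortgage conversion is simplified):

```python
def annual_cost(price, hoa_fee, property_tax, rate=0.04, years=30):
    # Convert the purchase price to a yearly mortgage payment using the
    # standard fixed-rate amortization formula, then add fees and tax.
    r = rate / 12
    n = years * 12
    monthly = price * r / (1 - (1 + r) ** -n)
    return 12 * monthly + hoa_fee + property_tax

def annual_benefit(rent, projected_increase, dishwasher=False,
                   balcony=False, kitchen_bonus=0, parking=False):
    benefit = rent + 0.5 * projected_increase  # halve the projection, to be conservative
    benefit += 500 * dishwasher + 500 * balcony  # flat amenity bonuses
    benefit += kitchen_bonus                     # 0 to 1000, by taste
    benefit += 3600 * parking                    # rentable spot (no car)
    return benefit

# Hypothetical condo: $150k price, $3k/yr HOA, $1.5k/yr tax,
# $14.4k/yr expected rent, $4k projected increase, dishwasher, parking.
net = (annual_benefit(14400, 4000, dishwasher=True, parking=True)
       - annual_cost(150000, 3000, 1500))
print(round(net))
```

Each candidate property gets a `net` score, and the highest score wins.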

Quickly use intuition to prioritize the solutions on the criteria so far: Ranking the properties was straightforward – I could plug in numbers directly from the property data and the photos.




[Table: candidate properties compared on annual fees, annual increase, and annual rent]
Come up with research questions that would help you determine how well each solution fits the criteria: For this, the research was just to go visit the property and confirm the assessments.

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: Pretty easy, not much changed as I went to actually investigate.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: For this, I just ended up purchasing the highest ranking condo, which was a mostly straightforward process. Property A wins! 
This is a good example of how easy it is to re-adapt the process and how you can weight criteria in nonlinear ways.

How should we fundraise? 

Come up with a well-defined goal: I want to find the fundraising method with the best return on investment. 

Brainstorm many plausible solutions to achieve that goal: For this, our Charity Science Outreach team conducted a literature review of fundraising methods and asked experts, creating a list of 25 different fundraising ideas.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: The criteria we used here was pretty similar to the criteria we later used for picking a charity -- we valued ease of testing, the estimated return on investment, the strength of the evidence, and the scalability potential roughly equally. 

Come up with research questions that would help you determine how well each solution fits the criteria: We created the following rubric of questions:

  • What does the research say about it (e.g., expected fundraising ratios, success rates, necessary prerequisites)?

  • What are some relevant comparisons to similar fundraising approaches? How well do they work?

  • What types/sizes of organizations is this type of fundraising best for?

  • How common is this type of fundraising, in nonprofits generally and in similar nonprofits (global health)?

  • How would one run a minimum-cost experiment in this area?

  • What is the expected time, cost, and outcome for the experiment?

  • What is the expected value?

  • What is the expected time cost to get the best time-per-$ ratio (e.g., would we need 100 staff or a huge budget to make this effective)?

  • What further research should be done if we were going to run this approach?

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: After reviewing, we were able to narrow the 25 down to eight finalists: legacy fundraising, online ads, door-to-door, niche marketing, events, networking, peer-to-peer fundraising, and grant writing.
Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: We did MVPs of all eight of the top ideas and eventually decided that three of the ideas were worth pursuing full-time: online ads, peer-to-peer fundraising, and legacy fundraising.

Who should we hire? 

Come up with a well-defined goal: I want to hire the employee who will contribute the most to our organization. 

Brainstorm many plausible solutions to achieve that goal: For this, we had the applicants who applied to our job ad.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: We thought broadly about what good qualities a hire would have, and decided to heavily weight values fit and prior experience with the job, and then roughly equally value autonomy, communication skills, creative problem solving, the ability to break down tasks, and the ability to learn new skills.
Quickly use intuition to prioritize the solutions on the criteria so far: We started by ranking hires based on their resumes and written applications. (Note that to protect the anonymity of our applicants, the following information is fictional.)





[Table: applicants rated on ability to break down tasks, learning new skills, values fit, and prior experience]
Come up with research questions that would help you determine how well each solution fits the criteria: The initial written application was already tailored toward this, but we designed a Skype interview to further rank our applicants. 

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: After our Skype interviews, we re-ranked all the applicants. 






[Table: applicants re-rated after the Skype interviews on ability to break down tasks, learning new skills, values fit, and prior experience]
Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: While it may not be polite to extend “MVP testing” to people, we do a form of it by offering our applicants one-month trials before converting to a permanent hire.


Which television show should we watch? 

Come up with a well-defined goal: Our friend group wants to watch a new TV show together that we’d enjoy the most. 

Brainstorm many plausible solutions to achieve that goal: We all each submitted one TV show, which created our solution pool. 

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: For this decision, the criteria was the enjoyment value of each participant, weighted equally. 

Come up with research questions that would help you determine how well each solution fits the criteria: For this, we watched the first episode of each television show and then all ranked each one. 

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: We then watched the winning television show, which was Black Mirror. Fun! 


Which statistics course should I take? 

Come up with a well-defined goal: I want to learn as much statistics as fast as possible, without having the time to invest in taking every course. 

Brainstorm many plausible solutions to achieve that goal: For this, we searched around on the internet and found ten online classes and three books.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: For this decision, we heavily weighted breadth and time cost, moderately weighted depth and monetary cost, and weakly weighted how interesting the course was and whether it provided a tangible credential that could go on a resume.
Quickly use intuition to prioritize the solutions on the criteria so far: By looking at the syllabi, table of contents, and reading around online, we came up with some initial rankings:



[Table: candidate courses and books scored on estimated hours, depth, breadth, how interesting, and credential level: Master Statistics with R; Probability and Statistics / Statistical Learning / Statistical Reasoning; Critically Evaluate Social Science Research and Analyze Results Using R; Berkeley stats 20 and 21; Statistical Reasoning for Public Health; Khan stats; Introduction to R for Data Science; Against All Odds; Hans Rosling doc on stats; Berkeley Math; OpenIntro Statistics; Discovering Statistics Using R by Andy Field; Naked Statistics by Charles Wheelan]
Come up with research questions that would help you determine how well each solution fits the criteria: For this, the best we could do would be to do a little bit from each of our top class choices, while avoiding purchasing the expensive ones unless free ones did not meet our criteria. 

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: Only the first three felt deep enough. Only one of them was free, but we were luckily able to find a way to audit the two expensive classes. After a review of all three, we ended up going with “Master Statistics with R”.

Rationality Heuristic for Bias Detection: Updating Towards the Net Weight of Evidence

9 gwern 17 November 2016 02:51AM

Bias tests look for violations of basic universal properties of rational belief such as subadditivity of probabilities or anchoring on randomly-generated numbers. I propose a new one for the temporal consistency of beliefs: agents who believe that the net evidence for a claim c from t1 to t2 is positive or negative must then satisfy the inequalities that P(c, t1)<P(c, t2) & P(c, t1)>P(c, t2), respectively. A failure to update in the direction of the believed net evidence indicates that nonrational reasons are influencing the belief in c; the larger the net evidence without directional updates, the more that nonrational reasons are influencing c. Extended to a population level, this suggests that a heuristic measurement of the nonrational grounds for belief can be conducted using long-term public opinion surveys of important issues combined with contemporary surveys of estimated net evidence since the start of the opinion surveys to compare historical shifts in public opinion on issues with the net evidence on those issues.
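The proposed consistency condition is simple enough to transcribe directly into code (a sketch of the inequality from the abstract, taking the sign of the believed net evidence as input):

```python
def update_is_consistent(p_t1, p_t2, net_evidence_sign):
    """Check the proposed condition: belief must move in the direction
    of the believed net evidence accumulated between t1 and t2."""
    if net_evidence_sign > 0:
        return p_t2 > p_t1
    if net_evidence_sign < 0:
        return p_t2 < p_t1
    return True  # no claimed net evidence constrains nothing

# An agent who claims strong positive evidence but never updates is
# flagged as holding the belief for nonrational reasons.
print(update_is_consistent(0.3, 0.5, +1))  # True
print(update_is_consistent(0.3, 0.3, +1))  # False
```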

continue reading »

Yudkowsky vs Trump: the nuclear showdown.

9 MrMind 11 November 2016 11:30AM

Sorry for the slightly clickbait-y title.

Some commenters have expressed, in the last open thread, their disappointment that figureheads from or near the rationality sphere seemed to have lost their cool when it came to this US election: when they were supposed to be calm and level-headed, they instead campaigned as if Trump were the Basilisk incarnate.

I haven't followed many commenters – mainly Scott Alexander and Eliezer Yudkowsky – and they both endorsed Clinton. I'll try to explain what their arguments were, briefly but as faithfully as possible. I'd like to know whether you consider them mindkilled, and why.

Please notice: I would like this to be a comment on methodology, about if their arguments were sound given what they knew and believed. I most definitely do not want this to decay in a lamentation about the results, or insults to the obviously stupid side, etc.

Yudkowsky made two arguments against Trump: level B incompetence and high variance. Since the second is also more or less the same as Scott's, I'll just go with those.

Level B incompetence

Eliezer attended a pretty serious and wide-ranging diplomatic simulation game, which made him appreciate how difficult it is just to maintain a global equilibrium between countries and avoid nuclear annihilation. He says that there are three levels in politics:

- level 0, where everything that the media report and the politicians say is taken at face value: every drama is true, every problem is important and every cry of outrage deserves consideration;

- level A, where you understand that politics is as much about theatre and emotions as it is about policies: at this level players operate like pro-wrestlers, creating drama and conflict to steer the more gullible viewers in the preferred direction; at this level cynicism is high, and almost every conflict is a farce and probably staged.

But the buck doesn't stop there. As the diplomacy simulation taught him, there's also:

- level B, where everything becomes serious and important again. At this level, people work very hard at maintaining the status quo (outside of which lies mankind's extinction); diplomatic relations and subtle international equilibria shield the world from much worse outcomes. Faux pas at this level have, in the past, resulted in wars, genocides, and general widespread badness.

In August, fifty Republican security advisors signed a letter condemning Trump for his positions on foreign policy: these are, Yudkowsky warned us, exactly those level B players, and they are telling us that Trump is an ill-advised choice.
Trump might be a fantastic level A player, but he is an incompetent level B player, and this might very well turn into disaster.

High variance

The second argument is a more general version of the first: if you look at a normal distribution, it's easy to imagine only two possibilities – you can do either worse than the average, or better. But in a high-dimensional world, things are much more complicated. The status quo is fragile (see the first argument), and it is not surrounded by an equal measure of good and bad outcomes: most substantial deviations from the equilibrium are disasters. If you put a high-variance candidate, someone whose main selling point is subverting the status quo, in charge, then with overwhelming probability you're headed off a cliff.
People who voted for Trump are unrealistically optimistic, thinking that civilization is robust, that the current state is bad, and that variation can help escape a bad equilibrium.

[Link] Rebuttal piece by Stuart Russell and FHI Research Associate Allan Dafoe: "Yes, the experts are worried about the existential risk of artificial intelligence."

9 crmflynn 03 November 2016 05:54PM

Recent AI control posts

8 paulfchristiano 29 November 2016 06:53PM

Over at medium, I’m continuing to write about AI control; here’s a roundup from the last month.


  • Prosaic AI control argues that AI control research should first consider the case where AI involves no “unknown unknowns.”
  • Handling destructive technology tries to explain the upside of AI control, if we live in a universe where we eventually need to build a singleton anyway.
  • Hard-core subproblems explains a concept I find helpful for organizing research.

Building blocks of ALBA

Terminology and concepts

Seeking better name for "Effective Egoism"

8 DataPacRat 25 November 2016 10:31PM

Aka, coming up with a better term for applying LW-style rationality techniques to 'rational self-interest'.

Aka, in parallel with the current 'Effective Altruism' movement – which seeks the best available ways to fulfill one's values when those values focus roughly on improving the well-being and reducing the suffering of people in general – seeking the best available ways to fulfill one's values when those values focus roughly on improving the well-being and reducing the suffering of oneself.

(I find that I may have use for this term both in reality and in my NaNoWriMo attempt.)

[Link] The Post-Virtual-Reality Sadness

8 morganism 16 November 2016 08:17AM

Mental Habits are Procedural

8 lifelonglearner 07 November 2016 02:53PM

Lately, I’ve realized that there’s something I’ve been fundamentally doing wrong in my head when it comes to building good mental architecture:  Whenever I decide to integrate a new habit of mind, I get easily frustrated when it doesn’t stick after a few days.  This has been a recurring occurrence.

I’ve finally realized that my expectations may be the culprits here.

To judge how long it takes to start utilizing a certain heuristic, I appear to have been using an intuition-based approach, classifying such habits under a “mental stuff” label, because it seems like mental notions should be easier to learn.

Perhaps more concretely, I’ve been fooled because mental notions feel like declarative knowledge, but they’re really more procedural.  Knowing about pre-mortems seems easy; I just link it to other concepts under the “planning” label in my head.  But this misses the point that the whole reason I even understand pre-mortems is to actually use them.

I confess that I’ve had a similar experience with mathematics a while back.  For much of the course, I merely reviewed my notes, letting my brain run over the same grooves.  The familiarity of the concepts gave me the illusion of understanding; yes, I could grasp the main ideas, but comprehension and capability are miles apart.  When it came time to independently solve problems, I was totally lost.

What appears needed in these situations, where certain topics “masquerade” as declarative knowledge (when you actually care about the procedural part), is to find analogs to concrete procedural skills.  For example, I have much better estimates of how long it will take to learn an instrument, a new magic trick, or a sport.  In my mind, the aforementioned actions feel very “physical”, rather than “mental”.  This appears to trigger a reframe.

The key, then, is to renormalize my expectations for learning new habits of mind by drawing parallels to analogous skills where I have good estimates. Reframing the situation in this way makes it less frustrating when I fail to develop agency in a few days.  Learning other skills has timelines of weeks or months, and that’s with solid practice.

To think otherwise for learning mental skills would be unrealistic.

Similarly, reference class forecasting looks at the “base rate” to make predictions.  Statistically speaking, I’m probably not an outlier, so the average can be a good predictor of my own performance.  When it comes to habit change, I can see how likely I am to succeed, and how long it will take, by looking at people as a whole.

I just looked up the base rate for habit change.  Looks like lots of people cite the Lally study which had an average length of 66 days to ingrain a new habit.  The data ranged from 18 days to over 250 (the study ran for just 12 weeks, so this was extrapolated data).  

Some scientists surveyed were also fairly pessimistic on the timelines for breaking a habit, from two months to six months.

Welp, I’m definitely going to have to recalibrate now.

Learning new mental tricks aside, there’s a related problem I’ve been bumping into often, regarding my thoughts in general:  I can’t seem to hold all of them in my head at once.

What I’m dubbing the “transience of thought” is basically the phenomenon where I forget lots of helpful things I read or encounter.  Progress isn’t linear.  Many of my helpful thoughts fall by the wayside, never to be seen again.  Or I’ll forget most of the great insights from a book I recently read.

Once again, this appears to be a problem of expectations.  I’m sure that with the right amount of reinforcement and repetition, these ideas can be more deeply ingrained.

This has led me to think about what it feels like to have really subsumed a mental heuristic.  I took a look at some mental tools I already use, at a deep level, and tried to describe how they feel:

Upon examining my optimizing mindset:

“Having a mental habit deeply entrenched doesn’t feel like I’ve got a weaponized skill ready to fire off in certain circumstances (as I would have hypothesized).  Rather, it’s merely The Way That I Respond in those circumstances.  It feels like the most natural response, or the only “reasonable” thing to do.  The heuristic isn’t at your disposal; it’s just A Thing You Do.”

If I have unrealistic timeframes for mental habit change, then I’m more likely to get frustrated at not seeing early results.  I’m basically the analog of the dieter who quits after a few days.  (Expectations aside, there’s also the more obvious notion that our mental tools are what we use to respond to situations so of course they’d affect our behavior.)

One recent idea I’ve been flirting with concerns System 1 / 2, or the “rider and the elephant” models of the mind.  In such models, the “rational” side is always portrayed as dominated by the more “primal” side; in any case, there is always an implication of a struggle for dominance between both sides.

Though this may be an accurate depiction of behavior in cases of time-inconsistent preferences, I can’t help but wonder if they set up a self-fulfilling prophecy for our “rational” side to ultimately lose when confronted with “temptations”.  The implied power struggle between the sides, as a whole, also seems damaging.  

I’d like to be able to reconcile all of my different goals, not fight myself at every turn as different urges try to assert themselves.

[Link] What they don’t teach you at STEM school

7 RomeoStevens 30 November 2016 07:20PM

How can people write good LW articles?

7 abramdemski 29 November 2016 10:40AM

A comment by AnnaSalamon on her recent article:

good intellectual content

Yes. I wonder if there are somehow spreadable habits of thinking (or of "reading while digesting/synthesizing/blog posting", or ...) that could themselves be written up, in order to create more ability from more folks to add good content.

Probably too meta / too clever an idea, but may be worth some individual brainstorms?

I wouldn't presume to write "How To Write Good LessWrong Articles", but perhaps I'm up to the task of starting a thread on it.

To the point: feel encouraged to skip my thoughts and comment with your own ideas.

The thoughts I ended up writing are, perhaps, more of an argument that it's still possible to write good new articles and only a little on how to do so:

Several people have suggested to me that perhaps the reason LessWrong has gone mostly silent these days is that there's only so much to be said on the subject of rationality, and the important things have been thoroughly covered. I think this is easily seen to be false, if you go and look at the mountain of literature related to subjects in the sequences. There is a lot left to be sifted through, synthesized, and explained clearly. Really, there are a lot of things which have only been dealt with in a fairly shallow way on LessWrong and could be given a more thorough treatment. A reasonable algorithm is to dive into academic papers on a subject of interest and write summaries of what you find. I expect there are a lot of interesting things to be uncovered in the existing literature on cognitive biases, economics, game theory, mechanism design, artificial intelligence, algorithms, operations research, public policy, and so on -- and that this community would have an interesting spin on those things.

Moreover, I think that "rationality isn't solved" (simply put). Perhaps you can read a bunch of stuff on here and think that all the answers have been laid out -- you form rational beliefs in accord with the laws of probability theory, and make rational decisions by choosing the policy with maximum expected utility; what else is there to know? Or maybe you admit that there are some holes in that story, like the details of TDT vs UDT and the question of logical uncertainty and so on; but you can't do anything meaningful about that. To such an attitude, I would say: do you know how to put it all into practice? Do you know how to explain it to other people clearly, succinctly, and convincingly? If you try to spell it all out, are there any holes in your understanding? If so, are you deferring to the understanding of the group, or are you deferring to an illusion of group understanding which doesn't really exist? If something is not quite clear to you, there's a decent chance that it's not quite clear to a lot of people; don't make the mistake of thinking everyone understands but you. And don't make the mistake of thinking you understand something that you haven't tried to explain from the start.

I'd encourage a certain kind of pluralistic view of rationality. We don't have one big equation explaining what a rational agent would look like -- there are some good candidates for such an equation, but they have caveats such as requiring unrealistic processing power and dropping anvils on their own heads if offered $10 to do so. The project of specifying one big algorithm -- one unifying decision theory -- is a worthy one, and such concepts can organize our thinking. But what if we thought of practical rationality as consisting more of a big collection of useful algorithms? I'm thinking along the lines of the book Algorithms to Live By, which gives dozens of algorithms which apply to different aspects of life. Like decision theory, such algorithms give a kind of "rational principle" which we can attempt to follow -- to the extent that it applies to our real-life situation. In theory, every one of them would follow from decision theory (or else, would do worse than a decision-theoretic calculation). But as finite beings, we can't work it all out from decision theory alone -- and anyway, as I've been harping on, decision theory itself is just a rag-tag collection of proposed algorithms upon closer inspection. So, we could take a more open-ended view of rationality as an attempt to collect useful algorithms, rather than a project that could be finished.

A second, more introspective way of writing LessWrong articles (my first being "dive into the literature"), which I think has a good track record: take a close look at something you see happening in your life or the world and try to make a model of it, try to explain it at a more algorithmic level. I'm thinking of posts like Intellectual Hipsters and Meta-Contrarianism and Slaves to Fashion Signalling.

Nassim Taleb on Election Forecasting

7 NatashaRostova 26 November 2016 07:06PM

Nassim Taleb recently posted this mathematical draft of election forecasting refinement to his Twitter.

The math isn’t essential to seeing why it’s so cool. His model seems to be that we should forecast the election outcome itself, including the uncertainty between now and election day, rather than build a forecast that takes current poll numbers and implicitly assumes nothing changes.
The mechanism of his model focuses on forming an unbiased time-series, formulated using stochastic methods. By contrast, the mainstream approaches focus on multi-level Bayesian models that estimate how the election would turn out if it were held today.
Taleb’s framing seems to make more sense. While it’s safe to assume a candidate will always want the highest chance of winning, the process by which two candidates interact is highly dynamic and strategic with respect to the election date.

When you stop to think about it, it’s actually remarkable that elections are so incredibly close to 50-50, with a 3-5 point margin of victory generally counting as immense. It captures this underlying dynamic of political game theory.

(At the more local level this isn’t always true, due to issues such as incumbent advantage, local party domination, strategic funding choices, and various other issues. The point though is that when those frictions are ameliorated due to the importance of the presidency, we find ourselves in a scenario where the equilibrium tends to be elections very close to 50-50.)

So, back to the mechanism of the model: Taleb imposes a no-arbitrage condition (borrowed from options pricing) to enforce time-consistency on the forecast, as measured by the Brier score. This is a similar concept to financial options, where you can go bankrupt or make money even before the final event. In Taleb's world, if a forecaster like Nate Silver publishes probabilities that swing wildly over time prior to the election, this suggests he hasn't put any time-dynamic constraints on his model.

The math, though, rests on the assumption that under high uncertainty, far out from the election, the best forecast is 50-50. That assumption would have to be empirically tested. Still, stepping aside from the math, it does feel intuitive that an election forecast with high variation a year away from the event is not worth relying on, and that sticking closer to 50-50 would offer a better full-sample Brier score.
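That intuition is easy to check with a toy simulation (my own construction, not Taleb's actual model): let the polling margin follow a random walk up to election day, and compare a forecaster who reads the current margin at face value against one who discounts it by the volatility remaining before the vote:

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate(trials=2000, T=100, step=1.0, poll_err=2.0, seed=1):
    """Average full-campaign Brier score for two forecasters.

    'naive' treats today's polling margin as if the race ended today;
    'aware' discounts by the volatility remaining before election day,
    which pulls early forecasts toward 50-50.
    """
    rng = random.Random(seed)
    brier = {"naive": 0.0, "aware": 0.0}
    for _ in range(trials):
        m, path = 0.0, []
        for _t in range(T):  # latent margin follows a random walk
            path.append(m)
            m += rng.gauss(0.0, step)
        outcome = 1.0 if m > 0 else 0.0
        for t, mt in enumerate(path):
            remaining = step * math.sqrt(T - t)  # volatility left before the vote
            f_naive = phi(mt / poll_err)   # "if the election were held today"
            f_aware = phi(mt / remaining)  # shrinks toward 0.5 when far out
            brier["naive"] += (f_naive - outcome) ** 2
            brier["aware"] += (f_aware - outcome) ** 2
    return {k: v / (trials * T) for k, v in brier.items()}

scores = simulate()
print(scores)  # the volatility-aware forecaster gets the lower Brier score
```

Because the volatility-aware forecast is the true conditional probability under this toy process, it necessarily achieves the lower expected Brier score; the naive forecaster is overconfident early and gets punished whenever the race later flips.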

I'm not familiar enough with the practical modelling to say whether this is feasible. Sometimes the ideal models are too hard to estimate.

I'm interested in hearing any thoughts on this from people who are familiar with forecasting or have an interest in the modelling behind it.

I also have a specific question, to tie this back to a rationality-based framework: when you read Silver (or your preferred reputable election forecaster; I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal to or better than any estimate you could come up with? Or do you make a mental adjustment or discount based on some factor you think they've left out, whether prediction-market divergence, perceived shifts in nationalism, or politician-specific skills? (E.g., Scott Adams claimed to be able to predict that Trump would persuade everyone to vote for him. While it's tempting to write him off as a pundit charlatan, or claim he doesn't have sufficient proof, we also can't prove his model was wrong.) I'm interested in learning the reasons we may disagree with, or be reasonably skeptical of, the polls, knowing of course that any such adjustment must be tested to know the true answer.

This is my first LW discussion post; I'm open to feedback on how it could be improved.

[Link] Terminally ill teen won historic ruling to preserve body

7 NancyLebovitz 18 November 2016 02:16PM

Which areas of rationality are underexplored? - Discussion Thread

6 casebash 01 December 2016 10:05PM

There seems to be real momentum behind this attempt at reviving Less Wrong. One of the oldest issues on LW has been the lack of content. For this reason, I thought it might be worthwhile to open a thread where people can suggest how we can expand the scope of what people write about, so that we have sufficient content.

Does anyone have any ideas about which areas of rationality are underexplored? Please only list one area per comment.

Industry Matters 2: Partial Retraction

6 sarahconstantin 23 November 2016 05:08PM

Epistemic status: still tentative

Some useful comments on the last post on manufacturing have convinced me of some weaknesses in my argument.

First of all, I think I was wrong that most manufacturing job loss is due to trade. There are several economic analyses, using different methods, that come to the conclusion that a minority of manufacturing jobs are lost to trade, with most of the remainder lost to labor productivity increases.

Second of all, I want to refine my argument about productivity.

Labor productivity and multifactor productivity in manufacturing, as well as output, have grown steadily throughout the 20th century, and continue to grow today. The claim “we are making more things than ever before in America” is true.

It’s also true that manufacturing employment has been declining slowly since the 70’s and 80’s. This is plausibly due to improvements in labor productivity.

However, the striking, very rapid decline of manufacturing employment post-2000, in which half of all manufacturing jobs were lost in fifteen years, looks like a different phenomenon. And it does correspond temporally to a drop in output and productivity growth.  It also corresponds temporally to the establishment of normal trade relations with China, and there is more detailed evidence that there’s a causal link between job loss and competition with China.

My current belief is that the long-term secular decline in manufacturing employment is probably just due to the standard phenomenon where better efficiency leads to employing fewer workers in a field, the same reason that there are fewer farmers than there used to be.

However, something weird seems to have happened in 2000, something that hurt productivity.  It might be trade.  It might be some kind of “stickiness” effect where external shocks are hard to recover from, because there’s a lot of interdependence in industry, and if you lose one firm you might lose the whole ecosystem.  It might be some completely different thing. But I believe that there is a post-2000 phenomenon which is not adequately explained by just “higher productivity causes job loss.”

Most manufacturing job loss is due to productivity; only a minority is due to trade

David Autor’s economic analysis concluded that trade with China contributed 16% of the US manufacturing employment decline between 1990 and 2000, 26% of the decline between 2000 and 2007, and 21% over the full period. He came to this conclusion by looking at particular manufacturing regions in the US, looking at their exposure to Chinese imports in the local industry, and seeing how much employment declined post-2000. Regions with more import exposure had higher job loss.

Researchers at Ball State University also concluded that trade was responsible for a minority of manufacturing job loss during the period 2000-2010: 13.4% due to trade, and 87.8% due to manufacturing productivity.  This was calculated using import numbers and productivity numbers from the U.S. Census and the Bureau of Labor Statistics, under the simple model that the change in employment is a linear combination of the change in domestic consumption, the change in imports, the change in exports, and the change in labor productivity.
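A sketch of how such an accounting decomposition works (my own illustration with made-up numbers, not the researchers' actual data or code):

```python
def decompose_job_change(cons0, cons1, imp0, imp1, exp0, exp1, prod0, prod1):
    """Attribute a change in manufacturing employment to demand, trade, and
    productivity, via the accounting identity
        employment = (domestic consumption - imports + exports) / labor productivity.
    Each factor's effect is the counterfactual change from moving that factor
    alone to its end-of-period value, holding the others at their start values.
    """
    def employment(cons, imp, exp, prod):
        return (cons - imp + exp) / prod

    base = employment(cons0, imp0, exp0, prod0)
    total = employment(cons1, imp1, exp1, prod1) - base
    effects = {
        "consumption": employment(cons1, imp0, exp0, prod0) - base,
        "imports": employment(cons0, imp1, exp0, prod0) - base,
        "exports": employment(cons0, imp0, exp1, prod0) - base,
        "productivity": employment(cons0, imp0, exp0, prod1) - base,
    }
    shares = {k: round(v / total, 3) for k, v in effects.items()}
    return total, shares

# Made-up numbers: output flows in $bn, productivity in $bn of output per
# thousand workers. Imports rise sharply and productivity improves.
total, shares = decompose_job_change(
    cons0=1000, cons1=1100, imp0=100, imp1=250,
    exp0=80, exp1=100, prod0=0.06, prod1=0.08)
print(total, shares)  # employment falls; productivity gets the largest share
```

Note that the one-factor-at-a-time shares need not sum to exactly 100%, because of interaction terms between the factors; this is one reason reported shares like 13.4% and 87.8% can add up to slightly more than 100%.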

Josh Bivens of the Economic Policy Institute, using the same model as the Ball State economists, computes that imports were responsible for 21.15% of job losses between 2000 and 2003, while productivity growth was responsible for 84.32%.

Justin Pierce and Peter Schott of the Federal Reserve Board observe that industries where the 2000 normalization of trade relations with China would have increased imports the most were those that had the most job loss. Comparing job loss in above-median impact-from-China industries vs. below-median impact-from-China industries, the difference in job loss accounts for about 29% of the drop in manufacturing employment from 2000 to 2006.

I wasn’t able to find any economic analyses that argued that trade was responsible for a majority of manufacturing job losses. It seems safe to conclude that most manufacturing job loss is due to productivity gains, not trade.

It’s also worth noting that NAFTA doesn’t seem to have cost manufacturing jobs at all.

Productivity and output are growing, but have slowed since 2000.

Real output in manufacturing is growing, and has been since the 1980’s, but there are some signs of a slowdown.

Researchers at the Economic Policy Institute claim that slowing manufacturing productivity and output growth around 2000 led to the sharp drop in employment.  If real value added in manufacturing had continued growing at the rate it had been in 2000, it would be 1.4x as high today.

Manufacturing output aside from computers and electronic products has been slow-growing since the 90’s. The average annual output growth rate, 1997-2015, in manufacturing was 12% in computers, but under 4% in all other manufacturing sectors. (The next best was motor vehicles, at a 3% output growth rate.)

US motor vehicle production has been growing far more slowly than global motor vehicle production.

Here are some BLS numbers on output in selected manufacturing industries:

As an average over the time period, this growth rate represents about 2.5%-3.5% annual growth, which is roughly in line with GDP growth.  So manufacturing output growth averaged since the late 80’s isn’t unusually bad.

Labor productivity has also been rising in various industries:

However, when we look at the first and second derivatives of output and productivity, the picture looks worse.

Multifactor productivity seems to have flattened in the mid-2000’s, and multifactor productivity growth has dropped sharply.

Manufacturing labor productivity growth is positive, but lower than it’s been historically, at about 0.45% in 2014, and a 4-year moving average of 2.1%, compared to 3-4% growth in the 90’s.

Multifactor productivity in durable goods is down in absolute terms since about 2000 and hasn’t fully recovered.

(Multifactor productivity refers to the returns to labor and capital. If multifactor productivity isn’t growing, then while we may be investing in more capital, it’s not necessarily better capital.)

Labor productivity growth in electronics is dropping and has just become negative.

Labor productivity growth in the auto industry is staying flat at about 2%.

Manufacturing output growth has dropped very recently, post-recession, to about 0. From the 80’s to the present it was roughly steady, at about 1%. By contrast, global manufacturing growth is much higher: 6.5% in China, 1.9% globally. And US GDP growth is about 2.5% on average.

In some industries, like auto parts and textiles, raw output has dropped since 2000. (Although, arguably, these are lower-value industries and the US is moving up the value chain.)

Looking back even farther, there is a slowdown in multifactor productivity growth in manufacturing, beginning in the early 70’s. Multifactor productivity grew by 1.5% annually from 1949-1973, and only by 0.3% in 1973-1983.  Multifactor productivity today isn’t unprecedentedly low, but it’s dropping to the levels of stagnation we saw in the 1970’s.

Basically, recent labor productivity is positive but not growing and in some cases dropping; output is growing slower than GDP; and multifactor productivity is dropping. This points to there being something to worry about.

What might be going on?

Economist Jared Bernstein argues that automation doesn’t explain the whole story of manufacturing job loss. If you exclude the computer industry, manufacturing output is only about 8% higher than it was in 1997, and lower than it was before the Great Recession.  The growth in manufacturing output has been “anemic.”  He says that factory closures have large spillover effects: shocks like the rise of China, or a global glut of steel in the 1980’s, lead to US factory closures; and then when demand recovers, the US industries don’t.

This model also fits with the fact that proximity matters a lot.  It’s valuable, for knowledge-transfer reasons, to build factories near suppliers.  So if parts manufacturing moves overseas, the factories that assemble those parts are likely to relocate as well. It’s also valuable, due to shipping costs, to locate manufacturing near expensive-to-ship materials like steel or petroleum.  And, also as a result of shipping costs, it’s valuable to locate manufacturing in places with good transportation infrastructure. So there can be stickiness/spillover effects where, once global trade makes it cheaper to make parts and raw materials in China, there are incentives pushing higher-value manufacturing to relocate there as well.

It doesn’t seem to be entirely coincidence that the productivity slowdown coincided with the opening of trade with China. The industries where employment dropped most after 2000 were those where the risk of tariffs on Chinese goods dropped the most.

However, this story is still consistent with the true claim that most lost manufacturing jobs are lost to productivity, not trade. Multifactor productivity may be down and output and labor productivity may be slowing, but output is still growing, and that growth is still big enough to drive most job loss.

Crossposted from my blog:

[Link] Maine passes Ranked Choice Voting

6 morganism 14 November 2016 08:07PM

[Link] Optimizing the news feed

5 paulfchristiano 01 December 2016 11:23PM

Debating concepts - What is the comparative?

5 casebash 30 November 2016 02:46PM

In order to get a solid handle on a proposal, it isn't enough to just know what the world will look like if you adopt the proposal. It is also very important to know what the current situation or counter-model is; otherwise the proposal may provide less of a benefit than expected. Before I begin, I'll note that this article is about the comparative. I'll write another article on being comparative later, though probably under the name "being responsive", since that will be less confusing.

One of the best ways to think about what it means to be comparative is that you want to indicate what the two worlds will look like. For example, conscription may provide us with more troops and everyone might agree that troops are important for winning wars, but we also need to look at what the status quo is like. If the country already has enough troops or allies, then the difference in the comparative might not be that significant. When we ignore the comparative, we can often get caught up in a particular frame and fail to realise that the framing is misleading. At first, being able to better win wars might sound like it is really, really important. But when we ask the question of whether or not we need to be better at winning or if we are good enough, we might quickly realise that it doesn't really matter. As can be seen here, there is no need to wait for the other team to bring arguments before you can start being comparative.

Here, the first speaker in "This house supports nationalism" provides a good example of being comparative. He explains that he doesn't see the comparative as being some utopian cosmopolitan society; rather, people will always choose a particular form of identity. He says that this identity should be nationalism: not ethno-nationalism, but a form of nationalism based on shared values. He argues that this is advantageous since everyone in a nation can opt into this identity, and hence it avoids the divisions that occur when people opt into specific identities such as race or gender. The speaker also talks about how nationalism can energise the nation, but if the speaker had only talked about this, then the argument would have been weaker. In this case, thinking about what the world would otherwise look like allows you to make nationalism more attractive, since we can see that the alternatives are not particularly compelling.

As another example, consider a debate about banning abortion. Imagine that the government stands up and talks about why they think the fetus is a person and hence it should be illegitimate to abort it. However, their argument will not be as persuasive if they fail to deal with the comparative, which is that many women will get abortions whether or not it is legal and these abortions will be more dangerous. In this debate, the comparative weakens the affirmative case, but it also allows the government to pre-emptively respond to this point. This objection is also common knowledge, so unless this is responded to, this analysis will likely be rejected outright.

So, as we have seen, being comparative allows you to be more persuasive and to think in a more nuanced manner about an issue. It provides the language to explain to a friendly argumentation opponent how you think their argument could be improved or why you don't think that their argument is very persuasive.


[Link] Things "Meta" Can Mean

5 ProofOfLogic 30 November 2016 09:52AM

[Link] Vestibular Stimulation and Fat Loss

5 sarahconstantin 20 November 2016 12:23AM

[Link] Industry Matters

5 sarahconstantin 19 November 2016 07:29AM

Kidney Trade: A Dialectic

5 Gram_Stone 18 November 2016 05:19PM

Related: GiveWell's Increasing the Supply of Organs for Transplantation in the U.S.

(Content warning: organs, organ trade, transplantation. Help me flesh this out! My intention is to present the arguments I've seen in a way that is, at a minimum, non-boring. In particular, moral intuitions conflicting or otherwise are welcome.)

“Now arriving at Objection from Human Dignity,” proclaimed the intercom in a euphonious female voice. Aleph shot Kappa and Lambda a dirty look farewell and disembarked from the train.

Kappa: “Okay, so maybe there’s a possibility that legal organ markets aren’t completely, obviously bad. I can at least quell my sense of disgust for the length of this train ride, if it really might save a lot more lives than what we’re doing right now. But I’m not even close to being convinced that that’s the case.”

Lambda nodded.

Kappa: “First: a clarification. Why kidneys? Why not livers or skin or corneas?”

Lambda: “I’m trying to be conservative.  For one, we can eliminate a lot of organs from consideration in the case of live donors because only a few organs can be donated without killing the donor in the process. Not considering tissues, but just organs, this narrows it down to kidneys, livers, and lungs. Liver transplants have … undesirable side effects that complicate-”

Kappa: “Uh, ‘undesirable side effects?’ Like what?”

Lambda: “Er, well it turns out that recovering from a liver donation is excruciatingly painful, and that seems like it might make the whole issue … harder to think about. Anyway, for that reason; and because most organ trade including legal donations is in kidneys; and because most people who die on waitlists are waiting for kidneys; and because letting people sell their organs after they're dead doesn't seem like it would increase the supply that much; for all of these reasons, focusing on kidneys from live donors seems to simplify the analysis without tossing out a whole lot of the original problem. Paying kidney donors looks like it’s a lot closer to being an obvious improvement in hindsight than paying people to donate other organs and tissues. If you wanted to talk about non-kidneys, you would have to go further than I have.”

Kappa: “Okay, so just kidneys then, unless I see a good reason to argue otherwise. The first big problem I see is that surgery is dangerous. So how are you not arguing that we should pay a bunch of people to take a bunch of deadly risks?”

Lambda: “As with any surgery, patients are at greater risk than usual immediately after having such a serious operation. The standard response is, "Expressed as a natural frequency, the risk of death from donating a kidney is merely 1 in 3000, which is about the same as the risk of death from a liposuction operation." But to my knowledge there are only four studies that have looked at this, some finding little risk, others finding greater risk, some finding no increased risk of end stage renal disease, others finding increased risk of end stage renal disease. Both sides have been the target of methodological criticisms. I'm currently of the opinion that the evidence is too ambiguous for me to draw any confident conclusions. I'm thus inclined to point out that we already incentivize people to do risky things with social benefits, such as military service, medical experimentation, and surrogate pregnancy. So saying that it's immoral to incentivize people to donate kidneys seems to imply that it's immoral to incentivize people to do at least some of those other things.”

Kappa: “Fine. Let’s assume that incentivizing people to take the personal risk is morally acceptable, just for the sake of argument. What makes you think that a market would improve things? How do I know you’re not the sort of person who thinks a market improves anything?”

Lambda: “Suppose you have a family member who needs a kidney transplant, and you’re not compatible. Suppose further that a stranger approaches you at the hospital and explains that they have a family member who needs a kidney and that they also aren’t compatible with their family member. However, claims the stranger, the two of you are compatible with one another’s family members. They will donate their kidney to your family member only if you will donate your kidney to their family member. Ideally, we would like this trade to take place. Would you donate your kidney? If not, why not?”

Kappa: “First I would want to know how the stranger accessed my medical records. At any rate, I don’t think I would. What if I donate first, and they back out after I donate? What if their family member dies before or during surgery and they no longer have an incentive to donate their kidney to my family member?”

Lambda: “Indeed, what if? In more than one way, it’s risky to trade kidneys as things are today. On the other hand, if you could reliably sell your kidney and buy another, you wouldn’t have to worry about being left out in the cold. Your kidney may be gone, but no one can take your revenue unless you make a big mistake. If the seller backs out, you can always try to buy another one.”

Kappa: “But there are already organizations with matchmaking programs that allow such trades to take place. They solve the trust problem with social prestige and verification procedures and other things. What more would a market get you, and how much does it matter after considering the additional problems that a market might cause? What are you really suggesting, when you can't use words like 'market'?”

Lambda: "The ban prevents the use of money in organ trades, so what do you use in its place, and what have you lost? In the place of money, you use promises that you'll donate your kidney. The first way that promises are worse than money is that they're a poor store of value. If I trade my promise for a stranger's promise, and the stranger loses their incentive to donate, then the promise loses its value. Even if I only want to use the money to buy a kidney, I would prefer receiving money because I can be confident that I can later retrieve it and exchange it for a kidney as long as someone is selling one that I want. The second way that promises are worse than money is that they're a poor medium of exchange. Because each individual promise has associated with it some specific conditions for donation, promises aren't widely acceptable in trade. At the moment, we have to set up what are essentially incredibly elaborate barters to make trades that are more complex than simple donations from one donor to one recipient. It seems like both of these factors might prevent a number of trades that could be realized even given the currently low supply, particularly trades that might occur across time."

Kappa: “Right, but like I said, what about the additional problems that markets cause? Tissues sold by corporations in the U.S. in 2000 were more expensive than tissues sold by public institutions in the EU in 2010. And some of their products aren’t even demonstrably more useful than public alternatives; they deceive consumers! How is that supposed to make things better?”

Lambda: “This is a case where I would argue that there isn’t enough regulation. It’s true that with the wrong laws you can get situations like, say, the one where corporations encourage new parents to harvest and privately store autologous cord blood for large sums even though there’s no evidence that it’s more effective than the allogenic cord blood that's stored in public banks. But is an unqualified ban the only way to stop the rent-seeking? Why couldn’t you throw that part out but keep the trust, all via regulation? Remember also that you can store cord blood in a bank, but at the moment you can only store kidneys inside of a living human body. It seems like that would make it a lot harder to arbitrage."

Kappa: "What about egalitarian concerns? Wouldn't these incentives disproportionately encourage the poor to sell their organs?"

Lambda: "Whether lifting the ban makes things more egalitarian or less depends on your reference frame. The poor will have a greater incentive to sell their organs than the rich just like the poor usually have a greater incentive to sell other things than the rich. The idea behind the egalitarian objection is that the ban prevents this and it's more egalitarian if no one can legally sell their organs at all. But illegal organs already tend to flow from the poorest countries to the richest countries for the very reasons that you fear lifting the ban, and lifting the ban decreases U.S. demand for foreign organs by increasing domestic supply. In this reference frame, lifting the ban is more egalitarian, replacing the current sellers who receive little to no compensation, high risks, and poor post-operative care, with U.S. sellers who would receive more compensation, have lower risks, and receive better post-operative care."

Kappa: “In a market, I would guess that the average recipient wants to receive a kidney a lot more than the average donor wants to donate one. This could spell disaster for a market solution. What makes you think this wouldn’t happen with a kidney market?”

Lambda: “Empirically, the Iranian organ market has eliminated kidney waitlists in that country. The U.S. and Iran may be quite different, but they'd have to be different in the particular way that makes markets work there and markets not work here for that argument to follow. Besides, the U.S. spends about $72,000 per patient per year on dialysis, whereas the U.S. only spends about $106,000 on transplant patients in the first year, and about $24,000 per transplant patient per year, so the government should be willing to subsidize kidney suppliers in the case of market failure without intervention.”
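Lambda's cost figures imply that the savings from a transplant accumulate quickly. Here is a quick check, treating the quoted per-patient numbers as constant from year to year (a simplification that ignores discounting and graft failure):

```python
def cumulative_costs(years, dialysis_per_year=72_000,
                     transplant_first_year=106_000, transplant_per_year=24_000):
    """Cumulative per-patient spending under dialysis vs. transplantation,
    using the figures quoted in the dialogue."""
    dialysis = [dialysis_per_year * y for y in range(1, years + 1)]
    transplant = [transplant_first_year + transplant_per_year * (y - 1)
                  for y in range(1, years + 1)]
    return dialysis, transplant

dialysis, transplant = cumulative_costs(5)
breakeven = next(y for y, (d, t) in enumerate(zip(dialysis, transplant), start=1)
                 if d >= t)
print(breakeven)  # 2: the transplant is already cheaper in cumulative terms by year 2
```

Under these assumptions the cumulative cost of a transplant falls below that of dialysis in the second year, and the gap then widens by roughly $48,000 per patient per year, which is what makes the subsidy argument plausible.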

Kappa: "Geez. Uh... what about impulsive donations? You'd be encouraging irresponsibility."

Lambda: "That seems like a weak one. Legislate waiting periods. And this isn't exactly a problem particular to legal kidney markets."

Kappa: "I have you now, Lambda! Even if all of these things are true, the fact remains that most people, including me, are disgusted by the very idea of exchanging our organs for money! How ever would you overcome our repulsion?"

Lambda: "You do have me, Kappa."

Kappa: "I'll grant you that, but no politician can lose by being against- I mean, what?"

Lambda stood up and walked solemnly to the window.

"How ever would I overcome your repulsion?"

[Link] Embed, encode, attend, predict: The new deep learning formula for state-of-the-art NLP models

5 morganism 12 November 2016 09:33PM

[Link] FHI is accepting applications for internships in the area of AI Safety and Reinforcement Learning

5 crmflynn 07 November 2016 04:33PM

Open thread, Nov. 7 - Nov. 13, 2016

5 MrMind 07 November 2016 08:01AM

If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Costs are not benefits

5 philh 03 November 2016 09:32PM

[Link] Hate Crimes: A Fact Post

4 sarahconstantin 01 December 2016 04:25PM

December 2016 Media Thread

4 ArisKatsaris 01 December 2016 07:41AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

Terminology is important

4 casebash 30 November 2016 12:37AM

As rationalists, we like to focus on substance over style. This is a good habit; unfortunately, most of the public will swallow extremely poor reasoning if it is expressed sufficiently confidently and fluently. However, style is also important when it comes to popularising an idea, even if it is within our own community.

In order for terminology to be useful, a few conditions need to be met:

  • Firstly, the term needs to either be more nuanced or more concise than explaining the same concept in ordinary language. I tend to see it as a bad thing when terms are created just to signal that a person is a member of the in-group.

  • Secondly, the speaker needs to remember the term. If a term is hard to pronounce or it has a complex name, then the speaker may be unable to use it, even if they would like to be able to.

  • Lastly, the person who is hearing the term for the first time should be able to connect it to its meaning. If they are constantly having to pause to remember what the term means, then it will be harder for them to follow your meaning as a whole. In the best case, a person can guess what the term means before it is even explained to them.

I believe that a lot of the value that Less Wrong and the rationalist community have provided lies not just in new concepts, but in the language that allows us to describe them. The better a term scores on each of the above factors, the more the term will be used, and the more we can rely on other speakers within the community adopting it too. This is a key part of what draws people to the rationalist community: being able to start a conversation from a shared baseline, so that it doesn’t get dragged down in the way that would be typical outside the community. Instead of getting trapped in an argument over base assumptions, the conversation can go deeper and become more nuanced.


Given all of this, I believe that further developing terminology is a key part of what our project should be. I will begin by writing a series of articles on debating terms that I wish were part of our common vocabulary. I would like to encourage people to reread old Less Wrong articles and consider whether the concepts have been given clear and memorable names; if not, write a new article arguing in favour of a new term. We need to figure out ways of producing more content, and I believe that a reasonable number of quality articles could be produced this way. Failing this, if you have a concept that needs a term, I would suggest writing a post arguing why the concept is important and providing examples of when it might be useful; other people may then feel compelled to try to think of a term for it. My first effort in this direction will be to steal some concepts from debating. Here is my first article - What is the comparative?

Stand-up comedy as a way to improve rationality skills

4 Andy_McKenzie 27 November 2016 09:52PM

Epistemic status: Believed, but hard to know how much to adjust for opportunity costs 

I'm wondering whether stand-up comedy would be a good way to expand and test one's "rationality skills" and/or just general interaction skills. One thing I like about it is that you get immediate feedback: the audience either laughs at your joke, or they don't. 

Prominent existential risk researcher Nick Bostrom used to be a stand-up comedian:

For my postgraduate work, I went to London, where I studied physics and neuroscience at King's College, and obtained a PhD from the London School of Economics. For a while I did a little bit of stand-up comedy on the vibrant London pub and theatre circuit.

It was also mentioned at the London LW meetup in June 2011:

Comedy as Anti-Compartmentalization - Another pet theory of mine. I was puzzled by the number of atheist comedians out there, whom people pay to tell them that their religion is absurd. (Yes, Christian comedians exist too. Search YouTube. I dare you.) So my theory is that humour serves as a space where patterns and data from different fields are allowed to be superimposed on one another. Think of it as an anti-compartmentalization habit. Due to our brain design, compartmentalization is the default, so humour may be a hack to counter that. And we reward those who do it well with high status because it's valuable. Maybe we should have transhumanist/rationalist stand-up comedians? We sure have a lot of inconsistencies to point out.

Diego Caliero thinks that there would be good material to draw upon from the rationalist community.

Does anyone have any experience trying this and/or have thoughts on whether it would be useful? Also, does anyone in NYC want to try it out? 
