Today, my paper "Is caviar a risk factor for being a millionaire?" was published in the Christmas Edition of the BMJ (formerly the British Medical Journal). The paper is available at http://www.bmj.com/content/355/bmj.i6536 but it is unfortunately behind a paywall. I am hoping to upload an open access version to a preprint server but this needs to be confirmed with the journal first.
In this paper, I argue that the term "risk factor" is ambiguous, and that this ambiguity causes pervasive methodological confusion in the epidemiological literature. I argue that many epidemiological papers essentially use an audio recorder to determine whether a tree falling in the forest makes a sound, without being clear about which definition of "sound" they are considering.
Even worse, I argue that epidemiologists often try to avoid claiming that their results say anything about causality, by hiding behind "prediction models". When they do this, they often still control extensively for "confounding", a term which only has a meaning in causal models. I argue that this is analogous to stating that you are interested in whether trees falling in the forest cause any human to perceive the qualia of hearing, and then spending your methods section discussing whether the audio recorder was working properly.
Due to space constraints and other considerations, I am unable to state these analogies explicitly in the paper, but it does include a call for a taboo on the word "risk factor", and a reference to Rationality: From AI to Zombies. To my knowledge, this is the first reference to the book in the medical literature.
I will give a short talk about this paper at the Less Wrong meetup at the MIRI/CFAR office in Berkeley at 6:30pm tonight.
(I apologize for this short, rushed announcement; I was planning to post a full writeup, but I was not expecting this paper to be published for another week.)
In 2007, psychology researchers Michal Kosinski and David Stillwell released a personality-testing Facebook app called myPersonality. The app ended up being used by 4 million Facebook users, most of whom consented to having their answers to the personality questions, along with some information from their Facebook profiles, used for research purposes.
The very large sample size and matching data from Facebook profiles make it possible to investigate many questions about personality differences that were previously inaccessible. Kosinski and Stillwell have used it in a number of interesting publications, which I highly recommend.
In this post, I focus on what the dataset tells us about how big five personality traits vary by geographic region in the United States.
Epistemic Effort: Thought seriously for 5 minutes about it. Thought a bit about how to test it empirically. Spelled out my model a little bit. I'm >80% confident this is worth trying and seeing what happens. Spent 45 min writing post.
I've been pleased to see "Epistemic Status" hit a critical mass of adoption - I think it's a good habit for us to have. In addition to letting you know how seriously to take an individual post, it sends a signal about what sort of discussion you want to have, and helps remind other people to think about their own thinking.
I have a suggestion for an evolution of it - "Epistemic Effort" instead of status. Instead of "how confident you are", it's more of a measure of "what steps did you actually take to make sure this was accurate?" with some examples including:
- Thought about it musingly
- Made a 5 minute timer and thought seriously about possible flaws or refinements
- Had a conversation with other people you epistemically respect and who helped refine it
- Thought about how to do an empirical test
- Thought about how to build a model that would let you make predictions about the thing
- Did some kind of empirical test
- Did a review of relevant literature
- Ran a randomized controlled trial
- People are more likely to put effort into being rational if there's a relatively straightforward, understandable path to do so
- People are more likely to put effort into being rational if they see other people doing it
- People are more likely to put effort into being rational if they are rewarded (socially or otherwise) for doing so.
- It's not obvious that people will get _especially_ socially rewarded for doing something like "Epistemic Effort" (or "Epistemic Status") but there are mild social rewards just for doing something you see other people doing, and a mild personal reward simply for doing something you believe to be virtuous (I wanted to say "dopamine" reward but then realized I honestly don't know if that's the mechanism, but "small internal brain happy feeling")
- Less Wrong etc is a more valuable project if more people involved are putting more effort into thinking and communicating "rationally" (i.e. making an effort to make sure their beliefs align with the truth, and making sure to communicate so other people's beliefs align with the truth)
- People range in their ability / time to put a lot of epistemic effort into things, but if there are easily achievable, well established "low end" efforts that are easy to remember and do, this reduces the barrier for newcomers to start building good habits. Having a nice range of recommended actions can provide a pseudo-gamified structure where there's always another slightly harder step available to you.
- In the process of writing this very post, I actually went from planning a quick, 2 paragraph post to the current version, when I realized I should really eat my own dogfood and make a minimal effort to increase my epistemic effort here. I didn't have that much time so I did a couple simpler techniques. But even that I think provided a lot of value.
- It occurred to me that explicitly demonstrating the results of putting epistemic effort into something might be motivational both for me and for anyone else thinking about doing this, hence this entire section. (This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.')
- One failure mode is that people end up putting minimal, token effort into things (e.g. trying something on a couple of double-blinded people and calling it a randomized controlled trial).
- Another is that people might end up defaulting to whatever the "common" sample efforts are, instead of thinking more creatively about how to refine their ideas. I think the benefit of providing a clear path to people who weren't thinking about this at all outweighs the risk that some people end up being less agenty about their epistemology, but it seems like something to be aware of.
- I don't think it's worth the effort to run a "serious" empirical test of this, but I do think it'd be worth the effort, if a number of people started doing this on their posts, to run a followup informal survey asking "Did you do this? Did it work out for you? Do you have feedback?"
- A neat nice-to-have, if people actually started adopting this and it proved useful, might be for it to automatically appear at the top of new posts, along with a link to a wiki entry that explained what the deal was.
Next actions, if you found this post persuasive:
- If this were wrong, how would I know?
- What actually led me to believe this was a good idea? Can I spell that out? In how much detail?
- Where might I check to see if this idea has already been tried/discussed?
- What pieces of the idea might I peel away or refine to make it stronger? Are there individual premises I might be wrong about? Do they invalidate the idea? Does removing them lead to a different idea?
this post is now crossposted to the EA forum
80,000 hours is a well known Effective Altruism organisation which does "in-depth research alongside academics at Oxford into how graduates can make the biggest difference possible with their careers".
They recently posted a guide to donating which aims, in their words, to (my emphasis)
use evidence and careful reasoning to work out how to best promote the wellbeing of all. To find the highest-impact charities this giving season ... We ... summed up the main recommendations by area below
Looking below, we find a section on the problem area of criminal justice (US-focused), where the aim is outlined as follows (quoting from the Open Philanthropy "problem area" page):
investing in criminal justice policy and practice reforms to substantially reduce incarceration while maintaining public safety.
Reducing incarceration whilst maintaining public safety seems like a reasonable EA cause, if we interpret "public safety" in a broad sense - that is, keep fewer people in prison whilst still getting almost all of the benefits of incarceration such as deterrent effects, prevention of crime, etc.
So what are the recommended charities? (my emphasis below)
1. "The Alliance for Safety and Justice is a US organization that aims to reduce incarceration and racial disparities in incarceration in states across the country, and replace mass incarceration with new safety priorities that prioritize prevention and protect low-income communities of color."
They promote an article on their site called "black wounds matter", as well as a guide on how you can "Apply for VOCA Funding: A Toolkit for Organizations Working With Crime Survivors in Communities of Color and Other Underserved Communities".
2. Cosecha - (note that their url is www.lahuelga.com, which means "the strike" in Spanish) (my emphasis below)
"Cosecha is a group organizing undocumented immigrants in 50-60 cities around the country. Its goal is to build mass popular support for undocumented immigrants, in resistance to incarceration/detention, deportation, denigration of rights, and discrimination. The group has become especially active since the Presidential election, given the immediate threat of mass incarceration and deportation of millions of people."
They have the ultimate goal of launching massive civil resistance and non-cooperation to show this country it depends on us ... if they wage a general strike of five to eight million workers for seven days, we think the economy of this country would not be able to sustain itself
The article quotes Carlos Saavedra, who is directly mentioned by Open Philanthropy's Chloe Cockburn:
"Carlos Saavedra, who leads Cosecha, stands out as an organizer who is devoted to testing and improving his methods, ... Cosecha can do a lot of good to prevent mass deportations and incarceration, I think his work is a good fit for likely readers of this post."
They mention other charities elsewhere on their site and in their writeup on the subject, such as the conservative Center for Criminal Justice Reform, but Cosecha and the Alliance for Safety and Justice are the ones that were chosen as "highest impact" and featured in the guide to donating.
Sometimes one has to be blunt: 80,000 hours is promoting the financial support of some extremely hot-button political causes, which may not be a good idea. Traditionalists/conservatives and those who are uninitiated to Social Justice ideology might look at The Alliance for Safety and Justice and Cosecha and label them as racists and criminals, and thereby be turned off by Effective Altruism, or even by the rationality movement as a whole.
There are standard arguments, for example this one by Robin Hanson from 10 years ago, about why it is not smart or "effective" to get into these political tugs-of-war if one wants to make a genuine difference in the world.
One could also argue that 80,000 hours' chosen charities go beyond the usual folly of political tugs-of-war. In addition to supporting extremely political causes, 80,000 hours could be accused of being somewhat intellectually dishonest about what goal they are actually trying to further.
Consider The Alliance for Safety and Justice. 80,000 Hours state that the goal of their work in the criminal justice problem area is to "substantially reduce incarceration while maintaining public safety". This is an abstract goal with very broad appeal, and one that I am sure almost everyone agrees with. But their more concrete policy in this area is to fund a charity that wants to "reduce racial disparities in incarceration" and "protect low-income communities of color". The latter is significantly different from the former - it isn't even close to being the same thing - and the difference is highly political. One could object that reducing racial disparities in incarceration is merely a means to the end of substantially reducing incarceration while maintaining public safety, since many people in prison in the US are "of color". However, this line of argument is a very politicized one and it might be wrong, or at least I don't see strong support for it. "Selectively release people of color and make society safer - endorsed by effective altruists!" struggles against known facts about recidivism rates across races, as well as an objection about the implicit conflation of equality of outcome and equality of opportunity. (And I do not want this to be interpreted as a claim of moral superiority of one race over others - merely a necessary exercise in coming to terms with facts and debunking implicit assumptions.) Males are incarcerated much more than females, so what about reducing gender disparities in incarceration, whilst also maintaining public safety? Again, this is all highly political, laden with politicized implicit assumptions and language.
Cosecha is worse! They are actively planning potentially illegal activities like helping illegal immigrants evade the law (though IANAL), as well as activities which potentially harm the majority of US citizens such as a seven day nationwide strike whose intent is to damage the economy. Their URL is "The Strike" in Spanish.
Again, the abstract goal is extremely attractive to almost anyone, but the concrete implementation is highly divisive. If some conservative altruist signed up to financially or morally support the abstract goal of "substantially reducing incarceration while maintaining public safety" and EA organisations that are pursuing that goal without reading the details, and then at a later point they saw the details of Cosecha and The Alliance for Safety and Justice, they would rightly feel cheated. And to the objection that conservative altruists should read the description rather than just the heading - what are we doing writing headings so misleading that you'd feel cheated if you relied on them as summaries of the activity they are meant to summarize?
One possibility would be for 80,000 hours to be much more upfront about what they are trying to achieve here - maybe they like left-wing social justice causes, and want to help like-minded people donate money to such causes and help the particular groups who are favored in those circles. There's almost a nod and a wink to this when Chloe Cockburn says (my paraphrase of Saavedra, and emphasis, below)
I think his [A man who wants to lead a general strike of five to eight million workers for seven days so that the economy of the USA would not be able to sustain itself, in order to help illegal immigrants] work is a good fit for likely readers of this post.
Alternatively, they could try to reinvigorate the idea that their "criminal justice" problem area is politically neutral and beneficial to everyone; the Open Philanthropy issue writeup talks about "conservative interest in what has traditionally been a solely liberal cause" after all. I would advise considering dropping The Alliance for Safety and Justice and Cosecha if they intend to do this. There may not be politically neutral charities in this area, or there may not be enough high quality conservative charities to present a politically balanced set of recommendations. Setting up a growing donor advised fund or a prize for nonpartisan progress that genuinely intends to benefit everyone including conservatives, people opposed to illegal immigration and people who are not "of color" might be an option to consider.
We could examine 80,000 hours' choice to back these organisations from a more overall-utilitarian/overall-effectiveness point of view, rather than limiting the analysis to the specific problem area. These two charities don't pass the smell test for altruistic consequentialism, pulling sideways on ropes, finding hidden levers that others are ignoring, etc. Is the best thing you can do with your smart EA money helping a charity that wants to get stuck into the culture war about which skin color is most over-represented in prisons? What about a second charity that wants to help people illegally immigrate at a time when immigration is the most divisive political topic in the western world?
Furthermore, Cosecha's plans for a nationwide strike and potential civil disobedience/showdown with Trump & co could push an already volatile situation in the US into something extremely ugly. The vast majority of people in the world (present and future) are not the specific group that Cosecha aims to help, but the set of people who could be harmed by the uglier versions of a violent and calamitous showdown in the US is basically the whole world. That means that even if P(Cosecha persuades Trump to do a U-turn on illegals) is 10 or 100 times greater than P(Cosecha precipitates a violent crisis in the USA), they may still be net-negative from an expected utility point of view. EA doesn't usually fund causes whose outcome distribution is heavily left-skewed so this argument is a bit unusual to have to make, but there it is.
Not only is Cosecha a cause that is (a) mind-killing and culture war-ish and (b) very tangentially related to the actual problem area it is advertised under by 80,000 hours, but it might also (c) be an anti-charity that produces net disutility (in expectation) in the form of a higher probability of a US civil war with money that you donate to it.
Back on the topic of criminal justice and incarceration: opposition to reform often comes from conservative voters and politicians, so it might seem unlikely to a careful thinker that extra money on the left-wing side is going to be highly effective. Some intellectual judo is required; make conservatives think that it was their idea all along. So promoting the Center for Criminal Justice Reform sounds like the kind of smart, against-the-grain idea that might be highly effective! Well done, Open Philanthropy! Also in favor of this org: they don't copiously mention which races or person-categories they think are most important in their articles about criminal justice reform, the only culture war item I could find on them is the word "conservative" (and given the intellectual judo argument above, this counts as a plus), and they're not planning a national strike or other action with a heavy tail risk. But that's the one that didn't make the cut for the 80,000 hours guide to donating!
The fact that they let Cosecha (and to a lesser extent The Alliance for Safety and Justice) through reduces my confidence in 80,000 hours and the EA movement as a whole. Who thought it would be a good idea to get EA into the culture war with these causes, and also thought that they were plausibly among the most effective things you can do with money? Are they taking effectiveness seriously? What does the political diversity of meetings at 80,000 hours look like? Were there no conservative altruists present in discussions surrounding The Alliance for Safety and Justice and Cosecha, and the promotion of them as "beneficial for everyone" and "effective"?
Before we finish, I want to emphasize that this post is not intended to start an object-level discussion about which race, gender, political movement or sexual orientation is cooler, and I would encourage moderators to temp-ban people who try to have that kind of argument in the comments of this post.
I also want to emphasize that criticism of professional altruists is a necessary evil; in an ideal world the only thing I would ever want to say to people who dedicate their lives to helping others (Chloe Cockburn in particular, since I mentioned her name above) is "thank you, you're amazing". Other than that, comments and criticism are welcome, especially anything pointing out any inaccuracies or misunderstandings in this post. Comments from anyone involved in 80,000 hours or Open Philanthropy are welcome.
(This is a crossposted FB post, so it might read a bit weird)
My goal this year (in particular, my main focus once I arrive in the Bay, but also my focus in NY and online in the meanwhile), is to join and champion the growing cause of people trying to fix some systemic problems in EA and Rationalsphere relating to "lack of Hufflepuff virtue".
I want Hufflepuff Virtue to feel exciting and important, because it is, and I want it to be something that flows naturally into our pursuit of epistemic integrity, intellectual creativity, and concrete action.
Some concrete examples:
- on the 5 second reflex level, notice when people need help or when things need doing, and do those things.
- have an integrated understanding that being kind to people is *part* of helping them (and you!) to learn more, and have better ideas.
(There are a bunch of ways to be kind to people that do NOT do this, i.e. politely agreeing to disagree. That's not what I'm talking about. We need to hold each other to higher standards but not talk down to people in a fashion that gets in the way of understanding. There are tradeoffs and I'm not sure of the best approach but there's a lot of room for improvement)
- be excited and willing to be the person doing the grunt work to make something happen
- foster a sense that the community encourages people to try new events, and to actively take personal responsibility for noticing and fixing community-wide problems that aren't necessarily sexy.
- when starting new projects, try to have mentorship and teamwork built into their ethos from the get-go, rather than hastily tacked on later
I want these sorts of things to come easily to mind when the future people of 2019 think about the rationality community, and have them feel like central examples of the community rather than things that we talk about wanting-more-of.
You may have seen that Numberphile video that circulated the social media world a few years ago. It showed the 'astounding' mathematical result:
1+2+3+4+5+… = -1/12
(quote: "the answer to this sum is, remarkably, minus a twelfth")
Then they tell you that this result is used in many areas of physics, and show you a page of a string theory textbook (oooo) that states it as a theorem.
The video caused quite an uproar at the time, since it was many people's first introduction to the rather outrageous idea and they had all sorts of very reasonable objections.
Here's the 'proof' from the video:
First, consider P = 1 - 1 + 1 - 1 + 1…
Clearly the value of P's partial sums oscillates between 1 and 0 depending on how many terms you take. Numberphile decides that P equals 1/2, because that's halfway in between.
Alternatively, consider P + P with the second copy shifted along by one term, and check out this quirky arithmetic:

P = 1 - 1 + 1 - 1 + 1 - ...
P =     1 - 1 + 1 - 1 + ...

Adding column by column: 2P = 1 + (-1+1) + (1-1) + ... = 1, so P = 1/2.
Now consider Q = 1-2+3-4+5…
And write out Q + Q the same way, with the second copy shifted along by one term:

Q = 1 - 2 + 3 - 4 + 5 - ...
Q =     1 - 2 + 3 - 4 + ...

Adding column by column: 2Q = 1 - 1 + 1 - 1 + ... = P = 1/2, so Q = 1/4.
Now consider S = 1+2+3+4+5...
Write S - 4S by lining 4S up under the even terms of S:

S  = 1 + 2 + 3 + 4 + 5 + ...
4S =     4     + 8     + ...

Subtracting column by column: S - 4S = 1 - 2 + 3 - 4 + 5 - ... = Q = 1/4.

So S - 4S = -3S = 1/4, and S = -1/12.
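If you want to check what ordinary addition says about these three series, here's a minimal sketch (plain Python; the `partial_sums` helper is my own, not anything from the video). None of the partial-sum sequences converges, which is the first hint that the video is using some other sense of "equals":

```python
def partial_sums(terms, n):
    """Return the first n partial sums of a series whose k-th term is terms(k)."""
    total, out = 0, []
    for k in range(1, n + 1):
        total += terms(k)
        out.append(total)
    return out

# P = 1 - 1 + 1 - 1 + ...   oscillates between 1 and 0
print(partial_sums(lambda k: (-1) ** (k + 1), 8))      # [1, 0, 1, 0, 1, 0, 1, 0]
# Q = 1 - 2 + 3 - 4 + ...   oscillates with growing amplitude
print(partial_sums(lambda k: (-1) ** (k + 1) * k, 8))  # [1, -1, 2, -2, 3, -3, 4, -4]
# S = 1 + 2 + 3 + 4 + ...   grows without bound
print(partial_sums(lambda k: k, 8))                    # [1, 3, 6, 10, 15, 21, 28, 36]
```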
How do you feel about that? Probably amused but otherwise not very good, regardless of your level of math proficiency. But in another way it's really convincing - I mean, string theorists use it, by god. And, to quote the video, "these kinds of sums appear all over physics".
So the question is this: when you see a video or hear a proof like this, do you 'believe them'? Even if it's not your field, and not in your area of expertise, do you believe someone who tells you "even though you thought mathematics worked this way, it actually doesn't; it's still totally mystical and insane results are lurking just around the corner if you know where to look"? What if they tell you string theorists use it, and it appears all over physics?
I imagine this as a sort of rationality litmus test. See how you react to the video or the proof (or remember how you reacted when you initially heard this argument). Is it the 'rational response'? How do you weigh your own intuitions vs a convincing argument from authority plus math that seems to somehow work, if you turn your head a bit?
If you don't believe them, what does that feel like? How confident are you?
It's totally true that, as an everyday rationalist (or even as a scientist or mathematician or theorist), there will always be computational conclusions that are out of your reach to verify. You pretty much have to believe theoretical physicists who tell you "the Standard Model of particle physics accurately models reality and predicts basically everything we see at the subatomic scale with unerring accuracy"; you're likely in no position to argue.
But - and this is the point - it's highly unlikely that all of your tools are lies, even if 'experts' say so, and you ought to require extraordinary evidence to be convinced that they are. It's not enough that someone out there can contrive a plausible-sounding argument that you don't know how to refute, if your tools are logically sound and their claims don't fit into that logic.
(On the other hand, if you believe something because you heard it was a good idea from one expert, and then another expert tells you a different idea, take your pick; there's no way to tell. It's the personal experience that makes this example lead to sanity-questioning, and that's where the problem lies.)
In my (non-expert but well-informed) view, the correct response to this argument is to say "no, I don't believe you", and hold your ground. Because the claim made in the video is so absurd that, even if you believe the video is correct and made by experts and the string theory textbook actually says that, you should consider a wide range of other explanations as to "how it could have come to be that people are claiming this" before accepting that addition might work in such an unlikely way.
Not because you know how infinite sums work better than a physicist or mathematician does, but because you know how mundane addition works just as well as they do, and if a conclusion this shattering to your model comes around - even to a layperson's model of how addition works, that adding positive numbers to positive numbers results in bigger numbers - then either "everything is broken" or "I'm going insane" or (and this is by far the theory that Occam's Razor should prefer) "they and I are somehow talking about different things".
That is, the unreasonable mathematical result is because the mathematician or physicist is talking about one "sense" of addition, but it's not the same one that you're using when you do everyday sums or when you apply your intuitions about addition to everyday life. This is by far the simplest explanation: addition works just how you thought it does, even in your inexpertise; you and the mathematician are just talking past each other somehow, and you don't have to know in what way to be pretty sure that it's happening. Anyway, there's no reason expert mathematicians can't be amateur communicators, and even that is a much more palatable result than what they're claiming.
(As it happens, my view is that any trained mathematician who claims that 1+2+3+4+5… = -1/12 without qualification is so incredibly confused or poor at communicating or actually just misanthropic that they ought to be, er, sent to a re-education camp.)
So, is this what you came up with? Did your rationality win out in the face of fallacious authority?
(Also, do you agree that I've represented the 'rational approach' to this situation correctly? Give me feedback!)
Postscript: the explanation of the proof
It turns out that there is a sense in which those summations are valid, but it's not the sense you're using when you perform ordinary addition. It's also true that the summations emerge in physics. And it is true that these summations are valid in spite of the rules you learn in introductory calculus ("you can't add, subtract, or otherwise deal with infinities, and yes, all these sums diverge"); it turns out those rules are themselves simplifications, and there are ways around them, but you have to be very rigorous to get it right.
An elementary explanation of what happened in the proof is that, in all three infinite sum cases, it is possible to interpret the infinite sum as a more accurate form (but STILL not precise enough to use for regular arithmetic, because infinities are very much not valid, still, we're serious):
S(infinity) = 1+2+3+4+5… ≈ -1/12 + O(infinity)
Where S(n) is a function giving the n'th partial sum of the series, and S(infinity) is an analytic continuation (basically, theoretical extension) of the function to infinity. (The O(infinity) part means "something on the order of infinity")
Point is, that O(infinity) bit hangs around, but doesn't really disrupt math on the finite part, which is why algebraic manipulations still seem to work. (Another cute fact: the curve that fits the partial sum function also non-coincidentally takes the value -1/12 at n=0.)
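One concrete way to watch a finite -1/12 sit next to the order-infinity piece (this is the standard smoothed-sum / regulator trick, my own illustration rather than anything from the video): damp each term n of 1+2+3+... by exp(-n/N) for a large cutoff N. The damped sum is finite, and it splits into N² (the part that blows up as the cutoff is removed) plus a remainder that settles at -1/12:

```python
import math

def damped_sum(N, terms=10**5):
    """Smoothed version of 1 + 2 + 3 + ...: the term n is damped by exp(-n/N).
    Known expansion: N**2 - 1/12 + O(1/N**2)."""
    return sum(n * math.exp(-n / N) for n in range(1, terms + 1))

for N in (10, 50, 100):
    # Subtract the divergent N**2 piece; what's left approaches -1/12 = -0.0833...
    print(N, damped_sum(N) - N**2)
```

As N grows, the printed remainder converges on -1/12, while the N² piece is exactly the "O(infinity)" that hangs around without disrupting the finite part.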
And it's true that this series always associates with the finite part -1/12; even though there are some manipulations that can get you to other values, there's a list of 'valid' manipulations that constrains it. (Well, there are other kinds of summations that I don't remember that might get different results. But this value is not accidentally associated with this summation.)
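One of those other summation methods is Abel summation (my example; the video doesn't name it): weight the n-th term by x^n so the series converges for 0 < x < 1, then let x approach 1 from below. For the two alternating series above, this reproduces the video's values:

```python
def abel_sum(coeff, x, terms=20000):
    """Sum coeff(n) * x**n for 0 < x < 1; as x -> 1- this tends to
    the series' Abel sum (when that limit exists)."""
    return sum(coeff(n) * x ** n for n in range(1, terms + 1))

def grandi(n):        # terms of P = 1 - 1 + 1 - 1 + ...
    return (-1) ** (n + 1)

def alternating(n):   # terms of Q = 1 - 2 + 3 - 4 + ...
    return (-1) ** (n + 1) * n

for x in (0.9, 0.99, 0.999):
    print(x, abel_sum(grandi, x), abel_sum(alternating, x))
# The first sum column tends to 1/2 and the second to 1/4,
# matching the values assigned to P and Q in the "proof".
```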
And the fact that the series emerges in physics is complicated but amounts to the fact that, in the particular way we've glued math onto physical reality, we've constructed a framework that also doesn't care about the infinity term (it's rejected as "nonphysical"!), and so we get the right answer despite dubious math. But physicists are fine with that, because it seems to be working and they don't know a better way to do it yet.
Hark! the herald daemons spam,
Glory to the newborn World,
Joyful, all post-humans, rise,
Join the triumph of the skies.
Veiled in wire the Godhead see,
Built to raise the sons of earth,
Built to give them second birth.
The cold cut him off from his toes, then fingers, then feet, then hands. Clutched in a grip he could not unclench, his phone beeped once. He tried to lift a head too weak to rise, to point ruined eyes too weak to see. Then he gave up.
So he never saw the last message from his daughter, reporting how she’d been delayed at the airport but would be there soon, promise, and did he need anything, lots of love, Emily. Instead he saw the orange of the ceiling become blurry, that particularly hateful colour filling what was left of his sight.
His world reduced to that orange blur, the eternally throbbing sore on his butt, and the crisp tick of a faraway clock. Orange. Pain. Tick. Orange. Pain. Tick.
He tried to focus on his life, gather some thoughts for eternity. His dry throat rasped - another flash of pain to mingle with the rest - so he certainly couldn’t speak words aloud to the absent witnesses. But he hoped that, facing death, he could at least put together some mental last words, some summary of the wisdom and experience of years of living.
But his memories were denied him. He couldn’t remember who he was - a name, Grant, was that it? How old was he? He’d loved and been loved, of course - but what were the details? The only thought he could call up, the only memory that sometimes displaced the pain, was of him being persistently sick in a broken toilet. Was that yesterday or seventy years ago?
Though his skin hung loose on nearly muscle-free bones, he felt it as if it grew suddenly tight, and sweat and piss poured from him. Orange. Pain. Tick. Broken toilet. Skin. Orange. Pain...
The last few living parts of Grant started dying at different rates.
I'll do it at some point.
I'll answer this message later.
I could try this sometime.
For most people, all of these thoughts have the same result. The thing in question likely never gets done - or if it does, it's only after remaining undone for a long time and causing a considerable amount of stress. Leaving the "when" ambiguous means that there isn't anything that would propel you into action.
What kinds of thoughts would help avoid this problem? Here are some examples:
- When I find myself using the words "later" or "at some point", I'll decide on a specific time when I'll actually do it.
- If I'm given a task that would take under five minutes, and I'm not in a pressing rush, I'll do it right away.
- When I notice that I'm getting stressed out about something that I've left undone, I'll either do it right away or decide when I'll do it.
For contrast, here are some vague goals, followed by TAP versions of the same goals:

- I'm going to get more exercise.
- I'll spend less money on shoes.
- I want to be nicer to people.
- When I see stairs, I'll climb them instead of taking the elevator.
- When I buy shoes, I'll write down how much money I've spent on shoes this year.
- When someone does something that I like, I'll thank them for it.
A good TAP satisfies a few criteria:

- The trigger is clear. The "when" part is a specific, visible thing that's easy to notice. "When I see stairs" is good, "before four o'clock" is bad (when exactly is "before four"?). [v]
- The trigger is consistent. The action should be something you'll always want to do when the trigger fires. "When I leave the kitchen, I'll do five push-ups" is bad, because you might not have the chance to do five push-ups every time you leave the kitchen. [vi]
- The TAP furthers your goals. Make sure the TAP is actually useful!
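Structurally, a TAP is just a lookup from a concrete trigger to a concrete action. A toy sketch (the triggers and actions are the examples above; the code itself is mine, purely illustrative):

```python
# Illustrative only: TAPs represented as a trigger -> action mapping.
taps = {
    "see stairs": "climb them instead of taking the elevator",
    "buy shoes": "write down this year's shoe spending",
    "someone does something I like": "thank them for it",
}

def on_event(event):
    """Return the planned action if the event matches a trigger, else None."""
    return taps.get(event)

print(on_event("see stairs"))  # → climb them instead of taking the elevator
```

The point of the structure is that the key must be an observable event; a vague goal like "get more exercise" has no key to match on.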
[i] Gollwitzer, P. M. (1999). Implementation intentions: strong effects of simple plans. American psychologist, 54(7), 493.
Relevance to Less Wrong: Whether you think it is for better or worse, users on LW are about 50,000x more likely to be signed up for cryonics than the average person.
Disclaimer: I volunteer at the Brain Preservation Foundation, but I speak for myself in this post and I'm only writing about publicly available information.
“Thorin, I can’t accept your generous job offer because, honestly, I think that your company might destroy Middle Earth.”
“Bifur, I can tell that you’re one of those ‘the Balrog is real, evil, and near’ folks who think that in the next few decades Mithril miners will dig deep enough to wake the Balrog, causing him to rise and destroy Middle Earth. Let’s say for the sake of argument that you’re right. You must know that lots of people disagree with you. Some don’t believe in the Balrog, others think that anything that powerful will inevitably be good, and more think we are hundreds or even thousands of years away from being able to disturb any possible Balrog. These other dwarves are not going to stop mining, especially given the value of Mithril. If you’re right about the Balrog we are doomed regardless of what you do, so why not have a high-paying career as a Mithril miner and enjoy yourself while you can?”
“But Thorin, if everyone thought that way we would be doomed!”
“Exactly, so make the most of what little remains of your life.”
“Thorin, what if I could somehow convince everyone that I’m right about the Balrog?”
“You can’t because, as the wise Sinclair said, ‘It is difficult to get a dwarf to understand something, when his salary depends upon his not understanding it!’ But even if you could, it still wouldn’t matter. Each individual miner would correctly realize that just him alone mining Mithril is extraordinarily unlikely to be the cause of the Balrog awakening, and so he would find it in his self-interest to mine. And, knowing that others are going to continue to extract Mithril means that it really doesn’t matter if you mine because if we are close to disturbing the Balrog he will be awoken.”
“But dwarves can’t be that selfish, can they?”
“Actually, altruism could doom us as well. Given Mithril’s enormous military value, many cities rightly fear that without new supplies they will be at the mercy of cities that get more of this metal, especially as it’s known that the deeper Mithril is found, the greater its powers. Leaders who care about their citizens’ safety and freedom will keep mining Mithril. If we are soon all going to die, altruistic leaders will want to make sure their people die while still free citizens of Middle Earth.”
“But couldn’t we all coordinate to stop mining? This would be in our collective interest.”
“No, dwarves would cheat, rightly realizing that if they alone mine just a little more Mithril it’s highly unlikely to do anything to the Balrog. And the more you expect others to cheat, the less your own cheating matters to whether the Balrog gets us, if your assumptions about the Balrog are correct.”
“OK, but won’t the rich dwarves step in and eventually stop the mining? They surely don’t want to get eaten by the Balrog.”
“Actually, they have just started an open Mithril mining initiative which will find and then freely disseminate new and improved Mithril mining technology. These dwarves earned their wealth through Mithril, they love Mithril, and while some of them can theoretically understand how Mithril mining might be bad, they can’t emotionally accept that their life’s work, the acts that have given them enormous success and status, might significantly hasten our annihilation.”
“Won’t the dwarven kings save us? After all, their primary job is to protect their realms from monsters.”
“Ha! They are more likely to subsidize Mithril mining than to stop it. Their military machines need Mithril, and any king who prevented his people from getting new Mithril just to stop some hypothetical Balrog from rising would be laughed out of office. The common dwarf simply doesn’t have the expertise to evaluate the legitimacy of the Balrog claims and so rightly, from their viewpoint at least, would use the absurdity heuristic to dismiss any Balrog worries. Plus, remember that the kings compete with each other for the loyalty of dwarves, and even if a few kings came to believe in the dangers posed by the Balrog, they would realize that if they tried to impose costs on their people, they would be outcompeted by fellow kings that didn’t try to restrict Mithril mining. Bifur, the best you can hope for with the kings is that they don’t do too much to accelerate Mithril mining.”
“Well, at least if I don’t do any mining it will take a bit longer for miners to wake the Balrog.”
“No Bifur, you obviously have never considered the economics of mining. You see, if you don’t take this job someone else will. Companies such as ours hire the optimal number of Mithril miners to maximize our profits and this number won’t change if you turn down our offer.”
“But it takes a long time to train a miner. If I refuse to work for you, you might have to wait a bit before hiring someone else.”
“Bifur, what job will you likely take if you don’t mine Mithril?”
“Mining gold and Mithril require similar skills. If you get a job working for a gold mining company, this firm would hire one less dwarf than it otherwise would and this dwarf’s time will be freed up to mine Mithril. If you consider the marginal impact of your actions, you will see that working for us really doesn’t hasten the end of the world even under your Balrog assumptions.”
“OK, but I still don’t want to play any part in the destruction of the world, so I refuse to work for you even if this won’t do anything to delay when the Balrog destroys us.”
“Bifur, focus on the marginal consequences of your actions and don’t let your moral purity concerns cause you to make the situation worse. We’ve established that your turning down the job will do nothing to delay the Balrog. It will, however, cause you to earn a lower income. You could have donated that income to the needy, or even used it to hire a wizard to work on an admittedly long-shot, Balrog control spell. Mining Mithril is both in your self-interest and is what’s best for Middle Earth.”
Common knowledge is important. So I wanted to note:
Every year on Solstice feedback forms, I get concerns about songs like "The X days of X-Risk" or "When I Die" (featuring lines including 'they may freeze my body when I die'): that they are too weird, ingroupy, and off-putting to people who aren't super-nerdy transhumanists.
But I also get comments from people who know little about X-risk or cryonics or whatever who say "these songs are hilarious and awesome." Sunday Assemblies who have no connection to Less Wrong sing When I Die and it's a crowd favorite every year.
And my impression is that people are only really weirded out by these songs on behalf of other people, who are in turn only weirded out by them on behalf of other people. There might be a couple of people who are genuinely put off by the ideas, but if so it's not super clear to me. I take very seriously the notion of making Solstice inclusive while retaining its "soul", talk to lots of people about what they find alienating or weird, and try to create something that can resonate with as many people as possible.
So I want it to at least be clear: if you are personally, actually put off by those songs for your own sake, that makes sense and I want to know about it; but if you're just worried about other people, I'm pretty confident you don't need to be. The songs are designed so you don't need to take them seriously if you don't want to.
Random note 1: I think the only line that's raised concern from some non-LW-ish people for When I Die is "I'd prefer to never die at all", and that's because it's literally putting words in people's mouths which aren't true for everyone. I mentioned that to Glen. We'll see if he can think of anything else.
Random note 2: Reactions to more serious songs like "Five Thousand Years" seem generally positive among non-transhumanists, although sometimes slightly confused. The new transhumanist-ish song this year, Endless Light, has gotten overall good reviews.
In Nature, there's been a recent publication arguing that the best way of gauging the truth of a question is to get people to report their views on the truth of the matter, and their estimate of the proportion of people who would agree with them.
Then, it's claimed, the surprisingly popular answer is likely to be the correct one.
In this post, I'll attempt to sketch a justification as to why this is the case, as far as I understand it.
First, an example of the system working well:
Canberra is the capital of Australia, but many people think the actual capital is Sydney. Suppose only a minority knows that fact, and people are polled on the question:
Is Canberra the capital of Australia?
Then those who think that Sydney is the capital will think the question is trivially false, and will generally not see any reason why anyone would believe it true. They will answer "no", and will estimate that a high proportion of people answer "no".
The minority who know the true capital of Australia will answer "yes". But most of them will likely know a lot of people who are mistaken, so they won't estimate a high proportion of "yes" answers. Even if they do, there are few of them, so the population's average estimate of the proportion answering "yes" will still be low.
Thus "yes", the correct answer, will be surprisingly popular.
A quick sanity check: if we asked instead "Is Alice Springs the capital of Australia?", then those who believe Sydney is the capital will still answer "no" and claim that most people would do the same. Those who believe the capital is in Canberra will answer similarly. And there will be no large cache of people believing in Alice Springs being the capital, so "yes" will not be surprisingly popular.
What is important here is that adding true information to the population will tend to move the proportion of people believing the truth more than it moves people's estimates of that proportion.
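The Canberra example above can be sketched numerically. This is my own minimal illustration (the function name and poll numbers are invented): an answer is "surprisingly popular" when its actual vote share exceeds the crowd's average predicted share for it.

```python
# Minimal sketch of the "surprisingly popular" rule for a yes/no question.
def surprisingly_popular(votes, predicted_yes_shares):
    """votes: list of booleans (True = 'yes').
    predicted_yes_shares: each respondent's predicted fraction answering 'yes'."""
    actual_yes = sum(votes) / len(votes)
    predicted_yes = sum(predicted_yes_shares) / len(predicted_yes_shares)
    # 'yes' is surprisingly popular if it beats its predicted share;
    # otherwise 'no' is.
    return "yes" if actual_yes > predicted_yes else "no"

# 40% answer yes (Canberra is the capital), but on average respondents
# predict only ~25% will say yes -- so "yes" is surprisingly popular.
votes = [True] * 4 + [False] * 6
predictions = [0.3, 0.3, 0.3, 0.3, 0.2, 0.2, 0.2, 0.25, 0.25, 0.25]
print(surprisingly_popular(votes, predictions))  # → yes
```

Note that "yes" wins here despite being the minority answer: the informed minority drags its predicted share down, while the mistaken majority sees no reason anyone would say "yes".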
No differential information:
Let's see how that setup could fail. First, it could fail in a trivial fashion: the Australian Parliament and the Queen secretly conspire to move the capital to Melbourne. As long as they aren't included in the sample, nobody knows about the change. In fact, nobody can distinguish a world in which the move was vetoed from one where it passed. So the proportion of people who know the truth - that being those few deluded souls who already thought the capital was in Melbourne, for some reason - is no higher in the world where it's true than in the one where it's false.
So the population opinion has to be truth-tracking, not in the sense that the majority opinion is correct, but in the sense that more people believe X is true, relatively, in a world where X is true versus a world where X is false.
Systematic bias in population proportion:
A second failure mode could happen when people are systematically biased in their estimate of the general opinion. Suppose, for instance, that the following headline went viral:
"Miss Australia mocked for claims she got a doctorate in the nation's capital, Canberra."
And suppose that those who believed the capital was in Sydney thought "stupid beauty contest winner, she thought the capital was in Canberra!". And suppose those who knew the true capital thought "stupid beauty contest winner, she claimed to have a doctorate!". So the actual proportion holding each belief doesn't change much at all.
But then suppose everyone reasons "now, I'm smart, so I won't update on this headline, but some other people, who are idiots, will start to think the capital is in Canberra." Then they will update their estimate of the population proportion. And Canberra may no longer be surprisingly popular, just expectedly popular.
Purely subjective opinions
How would this method work on a purely subjective opinion, such as:
Is Picasso superior to Van Gogh?
Well, there are two ways of looking at this. The first is to claim this is a purely subjective opinion, and as such people's beliefs are not truth tracking, and so the answers don't give any information. Indeed, if everyone accepts that the question is purely subjective, then there is no such thing as private (or public) information that is relevant to this question at all. Even if there were a prior on this question, no-one can update on any information.
But now suppose that there is a judgement that is widely shared, that, I don't know, blue paintings are objectively superior to paintings that use less blue. Then suddenly answers to that question become informative again! Except now, the question that is really being answered is:
Does Picasso use more blue than Van Gogh?
Or, more generally:
According to widely shared aesthetic criteria, is Picasso superior to Van Gogh?
The same applies to moral questions like "is killing wrong?". In practice, that is likely to reduce to:
According to widely shared moral criteria, is killing wrong?
On Wednesday I had lunch with Raph Levien, and came away with a picture of how a website that fostered the highest quality discussion might work.
- It’s possible that the right thing is a quick fix to Less Wrong as it is; this is about exploring what could be done if we started anew.
- If we decided to start anew, what the software should do is only one part of what would need to be decided; that’s the part I address here.
- As Anna Salamon set out, the goal is to create a commons of knowledge, such that a great many people have read the same stuff. A system that tailored what you saw to your own preferences would have its own strengths but would work entirely against this goal.
- I therefore think the right goal is to build a website whose content reflects the preferences of one person, or a small set of people. In what follows I refer to those people as the “root set”.
- A commons needs a clear line between the content that’s in and the content that’s out. Much of the best discussion is on closed mailing lists; it will be easier to get the participation of time-limited contributors if there’s a clear line of what discussion we want them to have read, and it’s short.
- However this alone excludes a lot of people who might have good stuff to add; it would be good to find a way to get the best of both worlds between a closed list and an open forum.
- I want to structure discussion as a set of concentric circles.
- Discussion in the innermost circle forms part of the commons of knowledge all can be assumed to be familiar with; surrounding it are circles of discussion where the bar is progressively lower. With a slider, readers choose which circle they want to read.
- Content from rings further out may be pulled inwards by the votes of trusted people.
- Content never moves outwards except in the case of spam/abuse.
- Users can create top-level content in further-out rings and allow the votes of other users to move it closer to the centre. Users are encouraged to post whatever they want in the outermost rings, to treat it as one would an open thread or similar; the best content will be voted inwards.
- Trust in users flows through endorsements starting from the root set.
More specifics on what that vision might look like:
- The site gives all content (posts, top-level comments, and responses) a star rating from 0 to 5 where 0 means “spam/abuse/no-one should see”.
- The rating that content can receive is capped by the rating of the parent; the site will never rate a response higher than its parent, or a top-level comment higher than the post it replies to.
- Users control a “slider” a la Slashdot which controls the level of content that they see: set to 4, they see only 4 and 5-star content.
- By default, content from untrusted users gets two stars; this leaves a star for “unusually bad” (eg rude) and one for “actual spam or other abuse”.
- Content ratings above 2 never go down, except to 0; they only go up. Thus, the content in these circles can grow but not shrink, to create a stable commons.
- Since a parent’s rating acts as a cap on the highest rating a child can get, when a parent’s rating goes up, this can cause a child’s rating to go up too.
- Users rate content on this 0-5 scale, including their own content; the site aggregates these votes to generate content ratings.
- Users also rate other users on the same scale, for how much they are trusted to rate content.
- There is a small set of “root” users whose user ratings are wholly trusted. Trust flows from these users using some attack resistant trust metric.
- Trust in a particular user can always go down as well as up.
- Only votes from the most trusted users will suffice to bestow the highest ratings on content.
- The site may show more trusted users with high sliders lower-rated content specifically to ask them to vote on it, for instance if a comment is receiving high ratings from users who are one level below them in the trust ranking. This content will be displayed in a distinctive way to make this purpose clear.
- Votes from untrusted users never directly affect content ratings, only what is shown to more trusted users to ask for a rating. Downvoting sprees from untrusted users will thus be annoying but ineffective.
- The site may also suggest to more trusted users that they uprate or downrate particular users.
- The exact algorithms by which the site rates content, hands trust to users, or asks users for moderation would probably want plenty of tweaking. Machine learning could help here. However, for an MVP something pretty simple would likely get the site off the ground easily.
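To make the rating rules concrete, here is a hypothetical sketch of the parent-cap and ratchet behaviour described above. The class and method names are invented; a real implementation would need the trust metric and vote aggregation on top of this.

```python
# Illustrative only: parent-capped, upward-ratcheting content ratings.
class Content:
    def __init__(self, parent=None):
        self.parent = parent
        self.rating = 2  # default rating for content from untrusted users

    def set_rating(self, proposed):
        # A child's rating is capped by its parent's rating.
        if self.parent is not None:
            proposed = min(proposed, self.parent.rating)
        # Ratings only move up, except dropping to 0 for spam/abuse.
        if proposed == 0 or proposed > self.rating:
            self.rating = proposed

post = Content()
comment = Content(parent=post)
comment.set_rating(5)  # capped at the post's default rating of 2
post.set_rating(4)     # trusted votes raise the post
comment.set_rating(5)  # now capped at 4 instead
```

Raising the parent does not automatically re-raise existing children in this sketch; a real system would re-apply pending child votes whenever a parent's cap rises.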
[Epistemic status: mostly confident, but being this intentional is experimental]
This year, I'm focusing on two traits: resilience and conscientiousness. I think these (or the fact that I lack them) are my biggest barriers to success. Also: identifying them as goals for 2017 doesn't mean I'll stop developing them in 2018. A year is just a nice, established amount of time in which progress can actually be made. This plan is a more intentional version of techniques I've used to improve myself over the last few years. I have outside verification that I'm more responsible, high-functioning, and resilient than I was several years ago. I have managed to reduce my SSRI dose, and I have finished more important tasks this year than last year.
First, I want to talk about my criteria for success. Without illustrating the end result, or figuring out how to measure it, I could finish out the year with a false belief that I'd made progress. If you plan something without success criteria, you run the same risk. I also believe that most of the criteria should be observable by a third party, i.e. hard to fake.
- I respond to disruptions in my plans with distress and anger. While I've gotten better at calming down, the distress still happens. I would like to have emotional control such that I observe first, and then feel my feelings. Disruptions should incite curiosity, and a calm evaluation of whether to correct course. The observable bit is whether or not my husband and friends report that I seem less upset when they disrupt me. This process is already taking place; I've been practicing this skill for a long time and I expect to continue seeing progress. (resilience)
- If an important task takes very little time, doesn't require a lot of effort, and doesn't disrupt a more important process, I will do it immediately. The observable part is simple, here: are the dishes getting done? Did the trash go out on Wednesday? (conscientiousness)
- I will do (2) without "taking damage." I will use visualization of the end result to make my initial discomfort less significant. (resilience)
- I will use various things like audiobooks, music, and playfulness to make what can be made pleasant, pleasant. (resilience and conscientiousness)
- My instinct when encountering hard problems will be to dissolve them into smaller pieces and identify the success criteria, immediately, before I start trying to generate solutions. I can verify that I'm doing this by doing hard problems in front of people, and occasionally asking them to describe my process as it appears.
- I will focus on the satisfaction of doing hard things, and practice sitting in discomfort regularly (cold tolerance, calming myself around angry people, the pursuit of fitness, meditation). It's hard to identify an external sign that this is accomplished. I expect aversion-to-starting to become less common, and my spouse can probably identify that. (conscientiousness)
- I will keep a daily journal of what I've accomplished, and carry a notebook to make reflective writing easy and convenient. This will help keep me honest about my past self. (conscientiousness)
- By the end of the year, I will find myself and my close friends/family satisfied with my growth. I will have a record of finishing several important tasks, will be more physically fit than I am now, and will look forward to learning difficult things.
- Meditation for 10 minutes a day directly improves my resilience and lowers my anxiety.
- Medication shouldn't be skipped (an SSRI, DHEA, and methylphenidate). If I decide to go off of it, I should properly taper rather than quitting cold turkey. DHEA counteracts the negatives of my hormonal birth control and (seems to!) make me more positively aggressive and confident.
- Fitness (in the form of dance, martial arts, and lifting) keeps my back from hurting, gives me satisfaction, and has a number of associated cognitive benefits. Dancing and martial arts also function as socialization, in a way that leads to group intimacy faster than most of my other hobbies. Being fit and attractive helps me maintain a high libido.
- I need between 7 and 9 hours of sleep. I've tried getting around it. I can't. Getting enough sleep is a well-documented process, so I'm not going to outline my process here.
- Water. Obviously.
- Since overcoming most of my social anxiety, I've discovered that frequent, high-value socialization is critical to avoid depression. I try to regularly engage in activities that bootstrap intimacy, like the dressing room before performances, solving a hard problem with someone, and going to conventions. I need several days a week to include long conversations with people I like.
- Kratom (<7g) does wonders for my anxieties about starting a task. I try not to take it too often, since I don't want to develop tolerance, but I like to keep some on hand for this.
- Nicotine + caffeine/l-theanine capsules give me an hour of motivation without jitters. Tolerance builds rapidly, so I don't do this often.
- A 30-second mindfulness meditation can usually calm my first emotional response to a distressing event.
- Various posts on mindingourway.com can help reconnect me to my values when I'm feeling particularly demotivated.
- Reorganizing furniture makes me feel less "stuck" when I get restless. Ditto for doing a difficult thing in a different place.
- Google Calendar, a number of notebooks, and a whiteboard keep me from forgetting important tasks.
- Josh Waitzkin's book, The Art of Learning, remotivates me to achieve mastery in various hobbies.
- External prompting from other people can make me start a task I've been avoiding. Sometimes I have people aggressively yell at me.
- The LW study hall (Complice.co) helps keep me focused. I also do "pomos" over video with other people who don't like Complice.
As another year comes around and our solstice plans come to a head, I want to review this year's great progress in science, following on from last year's review.
The general criterion is: world-changing science, not politics. That means a lot of space discoveries, a lot of technology, some groundbreaking biology, and sometimes new chemical materials. There really are too many to list briefly.
With that in mind, below is the list:
Things that spring to mind when you ask people:
- 3D printing of organs and skin tissue http://www.bbc.com/news/health-35581454
- Baby born with 3 parents. link
- AlphaGo VS Lee Sedol
- Cryopreservation of a rabbit brain - Link
- Majorana fermions discovered (possibly quantum computing applications)
- SpaceX landed Falcon 9 at sea - Link
- Gravitational waves detected by LIGO
- Quantum logic gate with 99% accuracy at Oxford
- TensorFlow has been out just over a year now. An open source neural net project.
Note: the whole thing is worth reading - I cherry picked a few really cool ones.
- Astronomers identify IDCS 1426 as the most distant massive galaxy cluster yet discovered, at 10 billion light years from Earth.
- Mathematicians, as part of the Great Internet Mersenne Prime Search, report the discovery of a new prime number: 2^74,207,281 − 1
- The world's first 13 TB solid state drive (SSD) is announced, doubling the previous record for a commercially available SSD. link
- A successful head transplant on a monkey by scientists in China is reported.
- The University of New South Wales announces that it will begin human trials of the Phoenix99, a fully implantable bionic eye. Link
- Scientists in the United Kingdom are given the go-ahead by regulators to genetically modify human embryos by using CRISPR-Cas9 and related techniques. Link
- Scientists announce Breakthrough Starshot, a Breakthrough Initiatives program, to develop a proof-of-concept fleet of small centimeter-sized light sail spacecraft, named StarChip, capable of making the journey to Alpha Centauri, the nearest extrasolar star system, at speeds of 20% and 15% of the speed of light, taking between 20 and 30 years to reach the star system, respectively, and about 4 years to notify Earth of a successful arrival. Link
- A new paper in Astrobiology suggests there could be a way to simplify the Drake equation, based on observations of exoplanets discovered in the last two decades. link
- A detailed report by the National Academies of Sciences, Engineering, and Medicine finds no risk to human health from genetic modifications of food. Link
- Researchers from Queensland's Department of Environment and Heritage Protection, and the University of Queensland jointly report that the Bramble Cay melomys is likely extinct, adding: "Significantly, this probably represents the first recorded mammalian extinction due to anthropogenic climate change." Link
- Scientists announce detecting a second gravitational wave event (GW151226) resulting from the collision of black holes. Link
- The first known death caused by a self-driving car is disclosed by Tesla Motors. Link
- A team at the University of Oxford achieves a quantum logic gate with record-breaking 99.9% precision, reaching the benchmark required to build a quantum computer. Link
- The world's first baby born through a controversial new "three parent" technique is reported. Link
- A team at Australia's University of New South Wales create a new quantum bit that remains in a stable superposition for 10 times longer than previously achieved. Link
- The International Union of Pure and Applied Chemistry officially recognizes names for four new chemical elements: Nihonium, Nh, 113; Moscovium, Mc, 115; Tennessine, Ts, 117; and Oganesson, Og, 118. Link
- Zika virus
- North Korea launches a long-range rocket into space, violating multiple UN treaties and prompting condemnation from around the world.
- The ESA and Roscosmos launch the joint ExoMars Trace Gas Orbiter on a mission to Mars.
- The Gotthard Base Tunnel, the world's longest and deepest railway tunnel, is opened following two decades of construction work.
- The United Kingdom votes in a referendum to leave the European Union.
- NASA's Juno spacecraft enters orbit around Jupiter and begins a 20-month survey of the planet.
- Solar Impulse 2 becomes the first solar-powered aircraft to circumnavigate the Earth.
- NASA launches OSIRIS-REx, its first asteroid sample return mission. The probe will visit Bennu and is expected to return with samples in 2023.
- Global CO2 levels exceed 400 ppm at the time of year normally associated with minimum levels. A 400 ppm level is believed to be higher than anything experienced in human history.
- Marvin Minsky, American computer scientist
- Donald E. Williams, American astronaut
- Walter Kohn, Austrian-born American Nobel physicist
- Harry Kroto, English Nobel chemist
- Elie Wiesel, Romanian-born American Nobel writer and political activist
- Seymour Papert, South African-born American mathematician and computer scientist
- Ahmed Zewail, Egyptian-American Nobel chemist
- Reinhard Selten, German Nobel economist
- Roger Y. Tsien, American Nobel biologist
- James Cronin, American Nobel physicist
- Shimon Peres, 9th President and 8th Prime Minister of Israel, Nobel Peace Prize laureate
- Dario Fo, Italian actor, Nobel playwright and comedian
- The Nobel Prize in Chemistry 2016 was awarded jointly to Jean-Pierre Sauvage, Sir J. Fraser Stoddart and Bernard L. Feringa "for the design and synthesis of molecular machines"
- The Nobel Prize in Physics 2016 was divided, one half awarded to David J. Thouless, the other half jointly to F. Duncan M. Haldane and J. Michael Kosterlitz "for theoretical discoveries of topological phase transitions and topological phases of matter".
- The Nobel Prize in Physiology or Medicine 2016 was awarded to Yoshinori Ohsumi "for his discoveries of mechanisms for autophagy".
- The Nobel Prize in Literature 2016 was awarded to Bob Dylan "for having created new poetic expressions within the great American song tradition".
- The Nobel Peace Prize 2016 was awarded to Juan Manuel Santos "for his resolute efforts to bring the country's more than 50-year-long civil war to an end".
- The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2016 was awarded jointly to Oliver Hart and Bengt Holmström "for their contributions to contract theory"
100 years ago (1916):
- The British Royal Army Medical Corps carries out the first successful blood transfusion using blood that had been stored and cooled.
- Emma Goldman is arrested for lecturing on birth control in the United States.
- In Munich, the German automobile company BMW (Bayerische Motoren Werke) is founded.
- The toggle light switch is invented by William J. Newton and Morris Goldberg.
- Margaret Sanger opens the first U.S. birth control clinic - a forerunner of Planned Parenthood.
- Oxycodone, a narcotic painkiller closely related to codeine, is first synthesized in Germany.
- Ernst Rüdin publishes his initial results on the genetics of schizophrenia.
- Louis Enricht claims he has a substitute for gasoline.
- The Society of Motion Picture and Television Engineers is founded in the United States as the Society of Motion Picture Engineers.
Nobel Prizes in 1916:
- Physics – not awarded
- Chemistry – not awarded
- Medicine – not awarded
- Literature – Carl Gustaf Verner von Heidenstam
- Peace – not awarded
- Pokémon Go is released, becoming a global craze
- Brexit - Britain votes to leave the EU
- Donald Trump elected US president
- SpaceX makes more launches, but has a major setback when a rocket explodes on the launch pad
- Internet.org project delayed by the SpaceX explosion.
Meta: this took on the order of 3+ hours to write, spread over several weeks.
Cross posted to Lesswrong here.
This is a stopgap measure until admins get visibility into comment voting, which will allow us to find sockpuppet accounts more easily.
The best place to track changes to the codebase is the GitHub LW issues page.
People seemed to like my post from yesterday about infinite summations and how to rationally react to a mathematical argument you're not equipped to validate, so here's another in the same vein that highlights a different way your reasoning can go.
(It's probably not quite as juicy of an example as yesterday's, but it is one that I'm equipped to write about today so I figure it's worth it.)
This example is somewhat more widely known and a bit more elementary. I won't be surprised if most people already know the 'solution'. But the point of writing about it is not to explain the math - it's to talk about "how you should feel" about the problem, and how to rationally go about reconciling it with your existing mental model. If you already know the solution, try to pretend, or think back to when you didn't. I think it is initially surprising to most people, whenever they first learn it.
The claim: that 1 = 0.999... repeating (infinite 9s). (I haven't found an easy way to put a bar over the last 9, so I'm using ellipses throughout.)
The questionable proof:
x = 0.9999...
10x = 9.9999... (everyone knows multiplying by ten moves the decimal over one place)
10x-x = 9.9999... - 0.9999....
9x = 9
x = 1
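(One low-tech check anyone with Python handy can run: compute the finite truncations 0.9, 0.99, 0.999, ... exactly and watch their distance from 1. This is a sketch, not a proof; it only shows the gap shrinking, which is the behavior the limit definition will later formalize.)

```python
from fractions import Fraction

# Exact finite truncations: 0.9, 0.99, 0.999, ...
# Each one's distance from 1 is exactly 10^-n, shrinking toward 0.
for n in (1, 2, 5, 10):
    truncation = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    print(n, 1 - truncation)
```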
People's response when they first see this is usually: wait, what? an infinite series of 9s equals 1? no way, they're obviously different.
The litmus test is this: what do you think a rational person should do when confronted with this argument? How do you approach it? Should you accept the seemingly plausible argument, or reject it (as with yesterday's example) as "no way, it's more likely that we're somehow talking about different objects and it's hidden inside the notation"?
Or are there other ways you can proceed to get more information on your own?
One of the things I want to highlight here is related to the nature of mathematics.
I think people have a tendency to think that, if they are not well-trained students of mathematics (at least at the collegiate level), then rigor or precision involving numbers is out of their reach. I think this is definitely not the case: you should not be afraid to attempt to be precise with numbers even if you only know high school algebra, and you should especially not be afraid to demand precision, even if you don't know the correct way to implement it.
Particularly, I'd like to emphasize that mathematics as a mental discipline (as opposed to an academic field), basically consists of "the art of making correct statements about patterns in the world" (where numbers are one of the patterns that appears everywhere you have things you can count, but there are others). This sounds suspiciously similar to rationality - which, as a practice, might be "about winning", but as a mental art is "about being right, and not being wrong, to the best of your ability". More or less. So mathematical thinking and rational thinking are very similar, except that we categorize rationality as being primarily about decisions and real-world things, and mathematics as being primarily about abstract structures and numbers.
In many cases in math, you start with a structure that you don't understand, or even know how to understand, precisely, and start trying to 'tease' precise results out of it. As a layperson you might have the same approach to arguments and statements about elementary numbers and algebraic manipulations, like in the proof above, and you're just as in the right to attempt to find precision in them as a professional mathematician is when they perform the same process on their highly esoteric specialty. You also have the bonus that you can go look for the right answer to see how you did, afterwards.
All this to say, I think any rational person should be willing to 'go under the hood' one or two levels when they see a proof like this. It doesn't have to be rigorous. You just need to do some poking around if you see something surprising to your intuition. Insights are readily available if you look, and you'll be a stronger rational thinker if you do.
There are a few angles that I think a rational but untrained-in-math person can think to take straightaway.
You're shown that 0.999... = 1. If this is a surprise, that means your model of what these terms mean doesn't jibe with how they behave in relation to each other, or that the proof was fallacious. You can immediately conclude that it's either:
a) true without qualification, in which case your mental model of what the symbols "0.999...", "=", or "1" mean is suspect
b) true in a sense, but it's hidden behind a deceptive argument (like in yesterday's post), and even if the sense is more technical and possibly beyond your intuition, it should be possible to verify if it exists -- either through careful inspection or turning to a more expert source or just verifying that options (a) and (c) don't work
c) false, in which case there should be a logical inconsistency in the proof, though it's not necessarily true that you're equipped to find it
Moreover, (a) is probably the default, by Occam's Razor. It's more likely that a seemingly correct argument is correct than that there is a more complicated explanation, such as (b), "there are mysterious forces at work here", or (c), "this correct-seeming argument is actually wrong", without other reasons to disbelieve it. The only evidence against it is basically that it's surprising. But how do you test (a)?
Note there are plenty of other 'math paradoxes' that fall under (c) instead: for example, those ones that secretly divide by 0 and derive nonsense afterwards. (a=b ; a^2=ab ; a^2-b^2=ab-b^2 ; (a+b)(a-b)=b(a-b) ; a+b=b ; 2a = a ; 2=1). But the difference is that their conclusions are obviously false, whereas this one is only surprising and counterintuitive. 1=2 involves two concepts we know very well. 0.999...=1 involves one we know well, but one that likely has a feeling of sketchiness about it; we're not used to having to think carefully about what a construction like 0.999... means, and we should immediately realize that when doubting the conclusion.
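That hidden division by zero is easy to expose mechanically. Here's a small Python sketch of the broken step (my own illustration; any value of a = b works):

```python
a = b = 3  # any value works, since we only assume a = b

# Every step up to (a+b)(a-b) = b(a-b) is a true statement:
assert (a + b) * (a - b) == b * (a - b)  # both sides are 0

# "Cancelling" (a-b) from both sides is where it breaks:
# a = b means a - b = 0, so the cancellation divides by zero.
try:
    (a + b) * (a - b) / (a - b)
except ZeroDivisionError:
    print("the cancellation step divides by zero")
```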
Here are a few angles you can take to testing (a):
1. The "make it more precise" approach: Drill down into what you mean by each symbol. In particular, it seems very likely that the mystery is hiding inside what "0.999..." means, because that's the one that seems complicated and liable to be misunderstood.
What does 0.999... infinitely repeating actually mean? It seems like it's "the limit of the sequence 0.9, 0.99, 0.999, ...", if you know what a limit is. It also seems like it might be "the number larger than every number of the form 0.abcd..., consisting of infinitely many digits (optionally, all 0 after a point)". That's awfully similar to 1, also, though.
A very good question is "what kinds of objects are these, anyway?" The rules of arithmetic generally assume we're working with real numbers, and the proof seems to hold for those in our customary ruleset. So what's the 'true' definition of a real number?
Well, we can look it up, and find that it's fairly complicated and involves identifying reals with sets of rationals in one or another specific way. If you can parse the definitions, you'll find that one definition is "a real number is a Dedekind cut of the rational numbers", that is, "a partition of the rational numbers into two sets A and B such that A is nonempty and closed downwards, B is nonempty and closed upwards, and A contains no greatest element", and from that it Can Be Seen (tm) that the two symbols "1" and "0.999..." both refer to the same partition of Q, and therefore are equivalent as real numbers.
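To make the Can Be Seen (tm) step slightly more tangible, here's a toy Python sketch (my own illustration, not a rigorous construction): represent a real by the set of rationals strictly below it, and check that the "below 1" predicate and the "below some finite truncation of 0.999..." predicate agree.

```python
from fractions import Fraction

# Toy sketch of the Dedekind-cut idea: identify a real number with
# the set of rationals strictly below it.

def in_cut_one(q):
    # The cut for "1": all rationals q < 1.
    return q < 1

def in_cut_nines(q, max_n=50):
    # The cut for "0.999...": q is below SOME finite truncation 1 - 10^-n.
    # (The finite max_n only approximates the "for some n" quantifier;
    # it suffices for rationals not absurdly close to 1.)
    return any(q < 1 - Fraction(1, 10**n) for n in range(1, max_n + 1))

# The two predicates agree on sample rationals: same cut, same real.
samples = [Fraction(-3), Fraction(9, 10), Fraction(999999, 1000000),
           Fraction(1), Fraction(2)]
print(all(in_cut_one(q) == in_cut_nines(q) for q in samples))
```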
2. The "functional" approach: if 0.999...=1, then it should behave the same as 1 in all circumstances. Is that something we can verify? Does it survive obvious tests, like other arguments of the same form?
Does 0.999... always act the same way that 1 does? It appears to act the same in the algebraic manipulations that we saw, of course. What are some other things to try?
We might think to try: 1-0.999... = 1-1 = 0, but it also seems to equal 0.000...0001, if that's valid: an 'infinite decimal that ends in a 1'. So those must be equivalent also, if that's a valid concept. We can't find anything to multiply 0.000...0001 by to 'move the decimal' all the way into the finite decimal positions, seemingly, because we would have to multiply by infinity, and that wouldn't prove anything because we already know such operations are suspect.
I, at least, cannot see any reason when doing math that the two shouldn't be the same. It's not proof, but it's evidence that the conclusion is probably OK.
3. The "argument from contradiction" approach: what would be true if the claim were false?
If 0.999... isn't equal to 1, what does that entail? Well, let a=0.999... and b=1. We can, according to our familiar rules of algebra, construct the number halfway between them: (a+b)/2, alternatively written as a+(b-a)/2. But our intuition for decimals doesn't seem to let there be a number between the two. What would it be -- 0.999...9995? "capping" the decimal with a 5? (Yes, we capped a decimal with a 1 earlier, but we didn't know if that was valid either.) What does that imply 0.999... - 0.999...9995 should be? 0.000...0004? Does that equal 4*0.000...0001? None of this math seems to be working either.
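(If you try the halfway-point construction on the finite truncations, with exact rationals, you can at least see what the "...9995" intuition corresponds to, and that the midpoints themselves march toward 1. A sketch, not a proof:)

```python
from fractions import Fraction

# Midpoint between the n-nines truncation and 1: for n nines it is
# exactly 1 - 5 * 10^-(n+1), i.e. the decimal 0.99...95 with n nines.
for n in (1, 3, 6):
    a = 1 - Fraction(1, 10**n)   # 0.9, 0.999, 0.999999
    mid = (a + 1) / 2
    print(n, mid, 1 - mid)       # the gap 1 - mid is 5 * 10^-(n+1)
```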
As long as we're not being rigorous, this isn't "proof", but it is a compelling reason to think the conclusion might be right after all. If it's not, we get into things that seem considerably more insane.
4. The "reexamine your surprise" approach: how bad is it if this is true? Does that cause me to doubt other beliefs? Or is it actually just as easy to believe it's true as not? Perhaps I am just biased against the conclusion for aesthetic reasons?
How bad is it if 0.999...=1? Well, it's not like yesterday's example with 1+2+3+4+5... = -1/12. It doesn't utterly defy our intuition for what arithmetic is. It says that one object we never use is equivalent to another object we're familiar with. I think that, since we probably have no reason to strongly believe anything about what an infinite sum of 9/10 + 9/100 + 9/1000 + ... should equal, it's perfectly palatable that it might equal 1, despite our initial reservations.
(I'm sure there are other approaches too, but this got long with just four so I stopped looking. In real life, if you're not interested in the details there's always the very legitimate fifth approach of "see what the experts say and don't worry about it", also. I can't fault you for just not caring.)
By the way, the conclusion that 0.999...=1 is completely, unequivocally true in the real numbers (the structure we are commonly assumed to be using when we write out mathematics, if none is indicated), basically for the Dedekind cut reason given above. It is possible to find structures where it's not true, but you probably wouldn't write 0.999... in those structures anyway. It's not like 1+2+3+4+5...=-1/12, for which claiming truth is wildly inaccurate and outright deceptive.
But note that none of these approaches are out of reach to a careful thinker, even if they're not a mathematician. Or even mathematically-inclined.
So it's not required that you have the finesse to work out detailed mathematical arguments -- certainly the definitions of real numbers are too precise and technical for the average layperson to deal with. The question here is whether you take math statements at face value, or disbelieve them automatically (you would have done fine yesterday!), or pick the more rational choice -- breaking them down and looking for low-hanging ways to convince yourself one way or the other.
When you read a surprising argument like the 0.999...=1 one, does it occur to you to break down ways of inspecting it further? To look for contradictions, functional equivalences, second-guess your surprise as being a run-of-the-mill cognitive bias, or seek out precision to realign your intuition with the apparent surprise in 'reality'?
I think it should. Though I am pretty biased because I enjoy math and study it for fun. But -- if you subconsciously treat math as something that other people do and you just believe what they say at the end of the day, why? Does this cause you to neglect to rationally analyze mathematical conclusions, at whatever level you might be comfortable with? If so, I'll bet this isn't optimal and it's worth isolating in your mind and looking more closely at. Precise mathematical argument is essentially just rationalism applied to numbers, after all. Well - plus a lot of jargon.
(Do you think I represented the math or the rational arguments correctly? is my philosophy legitimate? Feedback much appreciated!)
About a month ago, Anna posted about the Importance of Less Wrong or Another Single Conversational Locus, followed shortly by Sarah Constantin's http://lesswrong.com/lw/o62/a_return_to_discussion/
There was a week or two of heavy activity by some old-timers. Since then there's been a decent array of good posts, but not quite as inspiring as the first week was, and I don't know whether to think "we just need to try harder" or to change tactics in some way.
- I do feel it's been better to quickly be able to see a lot of posts in the community in one place
- I don't think the quality of the comments is that good, which is a bit demotivating.
- On Facebook, lots of great conversations happen in a low-friction way, and when someone starts being annoying, the person whose Facebook wall it is has the authority to delete comments with abandon, which I think is helpful.
- I could see the solution being to either continue trying to incentivize better LW comments, or to just have LW be "single locus for big important ideas, but discussion to flesh them out still happen in more casual environments"
- I'm frustrated that the intellectual projects on Less Wrong are largely siloed from the Effective Altruism community, which I think could really use them.
- The Main RSS feed has a lot of subscribers (I think I recall "about 10k"), so having things posted there seems good.
- I think it's good to NOT have people automatically post things there, since that produced a lot of weird anxiety/tension on "is my post good enough for main? I dunno!"
- But, there's also not a clear path to get something promoted to Main, or a sense of which things are important enough for Main
- I notice that I (personally) feel an ugh response to link posts and don't like being taken away from LW when I'm browsing LW. I'm not sure why.
Curious if others have thoughts.
Related: Leave a Line of Retreat
When I was smaller, I was sitting at home watching The Mummy, with my mother, ironically enough. There's a character by the name of Bernard Burns, and you only need to know two things about him. The first thing you need to know is that the titular antagonist steals his eyes and tongue because, hey, eyes and tongues spoil after a while you know, and it's been three thousand years.
The second thing is that Bernard Burns was the spitting image of my father. I was terrified! I imagined my father, lost and alone, certain that he would die, unable to see, unable even to properly scream!
After this frightening ordeal, I had the conversation in which it is revealed that fiction is not reality, that actions in movies don't really have consequences, that apparent consequences are merely imagined and portrayed.
Of course I knew this on some level. I think the difference between the way children and adults experience fiction is a matter of degree and not kind. And when you're an adult, suppressing those automatic responses to fiction has itself become so automatic, that you experience fiction as a thing compartmentalized. You always know that the description of consequences in the fiction will not by magic have fire breathed into them, that Imhotep cannot gently step out of the frame and really remove your real father's real eyes.
So, even though we often use fiction to engage, to make things feel more real, in another way, once we grow, I think fiction gives us the chance to entertain formidable ideas at a comfortable distance.
A great user once said, "Vague anxieties are powerful anxieties." Related to this is the simple rationality technique of Leaving a Line of Retreat: before evaluating the plausibility of a highly cherished or deeply frightening belief, one visualizes the consequences of the highly cherished belief being false, or of the deeply frightening belief being true. We hope that it will thereby become just a little easier to evaluate the plausibility of that belief, for if we are wrong, at least we know what we're doing about it. Sometimes, if not often, what you'd really do about it isn't as bad as your intuitions would have you think.
If I had to put my finger on the source of that technique's power, I would name its ability to reduce the perceived hedonic costs of truthseeking. It's hard to estimate the plausibility of a charged idea because you expect your undesired outcome to feel very bad, and we naturally avoid this. The trick is in realizing that, in any given situation, you have almost certainly overestimated how bad it would really feel.
But Sun Tzu didn't just plan his own retreats; he also planned his enemies' retreats. What if your interlocutor has not practiced the rationality technique of Leaving a Line of Retreat? Well, Sun Tzu might say, "Leave one for them."
As I noted in the beginning, adults automatically compartmentalize fiction away from reality. It is simply easier for me to watch The Mummy than it was when I was eight. The formidable idea of my father having his eyes and tongue removed is easier to hold at a distance.
Thus, I hypothesize, truth in fiction is hedonically cheap to seek.
When you recite the Litany of Gendlin, you do so because it makes seemingly bad things seem less bad. I propose that the idea generalizes: when you're experiencing fiction, everything seems less bad than its conceivably real counterpart (it's stuck inside the book), and any ideas within will then seem less formidable. The idea is that you can use fiction as an implicit line of retreat, that you can use it to make anything seem less bad by making it make-believe, and thus, safe. The key, though, is that not everything inside of fiction is stuck inside of fiction forever. Sometimes conclusions that are valid in fiction also turn out to be valid in reality.
This is hard to use on yourself, because you can't make a real scary idea into fiction, or shoehorn your scary idea into existing fiction, and then make it feel far away. You'll know where the fiction came from. But I think it works well on others.
I don't think I can really get the point across in the way that I'd like without an example. This proposed technique was an accidental discovery, like popsicles or the Slinky:
A history student friend of mine was playing Fallout: New Vegas, and he wanted to talk to me about which ending he should choose. The conversation seemed mostly optimized for entertaining one another, and, hoping not to disappoint, I tried to intertwine my fictional ramblings with bona fide insights. The student was considering giving power to a democratic government, but he didn't feel very good about it, mostly because this fictional democracy was meant to represent anything that anyone has ever said is wrong with at least one democracy, plausible or not.
"The question you have to ask yourself," I proposed to the student, "is 'Do I value democracy because it is a good system, or do I value democracy per se?' A lot of people will admit that they value democracy per se. But that seems wrong to me. That means that if someone showed you a better system that you could verify was better, you would say 'This is good governance, but the purpose of government is not good governance, the purpose of government is democracy.' I do, however, understand democracy as a 'current best bet' or local maximum."
I have in fact gotten wide-eyed stares for saying things like that, even granting the closing ethical injunction on democracy as local maximum. I find that unusual, because it seems like one of the first steps you would take towards thinking about politics clearly, to not equate democracy with good governance. If you were further in the past and the fashionable political system were not democracy but monarchy, and you, like many others, consider democracy preferable to monarchy, then upon a future human revealing to you the notion of a modern democracy, you would find yourself saying, regrettably, "This is good governance, but the purpose of government is not good governance, the purpose of government is monarchy."
But because we were arguing for fictional governments, our autocracies, or monarchies, or whatever non-democratic governments heretofore unseen, could not by magic have fire breathed into them. For me to entertain the idea of a non-democratic government in reality would have solicited incredulous stares. For me to entertain the idea in fiction is good conversation.
The student is one of two people with whom I've had this precise conversation, and I do mean in the particular sense of "Which Fallout ending do I pick?" I snuck this opinion into both, and both came back weeks later to tell me that they spent a lot of time thinking about that particular part of the conversation, and that the opinion I shared seemed deep.
Also, one of them told me that they had recently received some incredulous stares.
So I think this works, at least sometimes. It looks like you can sneak scary ideas into fiction, and make them seem just non-scary enough for someone to arrive at an accurate belief about that scary idea.
I do wonder though, if you could generalize this even more. How else could you reduce the perceived hedonic costs of truthseeking?
- 50%: Past research
- 30%: Letters of recommendation
- 10%: Transcript
- 10%: Personal Essays
- Trying to get any competitive award that’s judged mostly by your past. The best college application is stellar grades and some good awards, the best resume is a great network and lots of success stories, and the best pitch to VCs is a rock-solid business.
- Thinking really hard about what to say to that cute guy or girl across the room. Most of what happens is determined before you open your mouth by what they’re looking for and whether they’re attracted to you.
- Worrying about small optimizations when writing code, like avoiding copying small objects. Most of good performance comes from the high-level design of the system.
I think the solution to this fallacy is always to think past the immediate goal. Instead of asking “How can I get this Fellowship,” ask “How can I improve my research career.” When you see the road ahead of you as just a path to your larger mission, something that once seemed like your only hope now becomes one option among many.
It's great to make people more aware of bad mental habits and encourage better ones, as many people have done on LessWrong. The way we deal with weak thinking is, however, like how people dealt with depression before the development of effective anti-depressants:
- Clinical depression was only marginally treatable.
- It was seen as a crippling character flaw, weakness, or sin.
- Admitting you had it could result in losing your job and/or friends.
- Treatment was not covered by insurance.
- Therapy was usually analytic or behavioral and not very effective.
- People thus went to great mental effort not to admit, even to themselves, having depression or any other mental illness.
This post originally appeared on The Gears To Ascension
I present generative modeling of minds as a hypothesis for the complexities of social dynamics, and build a case for it out of pieces. My hope is that this explains social behaviors more precisely and with less handwaving than its components. I intend this to be a framework for reasoning about social dynamics more explicitly and for training intuitions. In future posts I plan to build on it to give more concrete evidence, and give examples of social dynamics that I think become more legible with the tools provided by combining these ideas.
Epistemic status: Hypothesis, currently my maximum likelihood hypothesis, of why social interaction is so weird.
INTRO: SOCIAL INTERACTION.
People talk to each other a lot. Many of them are good at it. Most people don't really have a deep understanding of why, and it's rare for people to question why it's a thing that's possible to be bad at. Many of the rules seem arbitrary at first look, and it can be quite hard to transfer skill at interaction by explanation.
Some of the rules sort of make sense, and you can understand why bad things would happen when you break them: Helping people seems to make them more willing to help you. Being rude to people makes them less willing to help you. People want to "feel heard". But what do those mean, exactly?
I've been wondering about this for a while. I wasn't naturally good at social interaction, and have had to put effort into learning it. This has been a spotty success - I often would go to people for advice, and then get things like "people want to know that you care". That advice sounded nice, but it was vague and not usable.
The more specific social advice seems to generalize quite badly. "Don't call your friends stupid", for example. Banter is an important part of some friendships! People call each other ugly and feel cared for. Wat?
Recently, I've started to see a deeper pattern here that actually seems to have strong generalization: it's simple to describe, it correctly predicts large portions of very complicated and weird social patterns, and it reliably gives me a lens to decode what happened when something goes wrong. This blog post is my attempt to share it as a package.
I basically came up with none of this. What I'm sharing is the synthesis of things that Andrew Critch, Nate Soares, and Robin Hanson have said - I didn't find these ideas that useful on their own, but together I'm kind of blown away by how much they collectively explain. In future blog posts I'll share some of the things I have used this to understand.
WARNING: An easy instinct, on learning these things, is to try to become more complicated yourself, to deal with the complicated territory. However, my primary conclusion is "simplify, simplify, simplify": try to make fewer decisions that depend on other people's state of mind. You can see more about why and how in the posts in the "Related" section, at the bottom.
Newcomb's problem is a game that two beings can play. Let's say that the two people playing are you and Newcomb. On Newcomb's turn, Newcomb learns all that they can about you, and then puts one opaque box and one transparent box in a room. Then on your turn, you go into the room, and you can take one or both of the boxes. What Newcomb puts in the boxes depends on what they think you'll do once it's your turn:
- If Newcomb thinks that you'll take only the opaque box, they fill it with $1 million, and put $1000 in the transparent box.
- If Newcomb thinks that you'll take both of the boxes, they leave the opaque box empty and put only $1000 in the transparent box.
Once Newcomb is done setting the room up, you enter and may do whatever you like.
This problem is interesting because the way you win or lose has little to do with what you actually do once you go into the room; it's entirely about what you can convince Newcomb you'll do. This leads many people to try to cheat: convince Newcomb that you'll only take one box, and then take two.
In the original framing, Newcomb is a mind-reading oracle, and knows for certain what you'll do. In a more realistic version of the test, Newcomb is merely a smart person paying attention to you. Newcomb's problem is simply a crystallized view of something that people do all the time: evaluate what kind of people each other are, to determine trust. And it's interesting to look at it and note that when it's crystallized, it's kind of weird. When you put it this way, it becomes apparent that there are very strong arguments for why you should always do the trustworthy thing and one-box.
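One way to see the strength of one-boxing is to relax the oracle into a merely-accurate predictor and compare expected payoffs. This toy model is my own construction, not part of the original problem statement; the $1M/$1000 payoffs are as above, and p is the probability the predictor is right (p = 1 recovers the mind-reading oracle).

```python
def one_box_ev(p):
    # Predicted correctly: opaque box holds $1M. Mispredicted: it's empty.
    return p * 1_000_000 + (1 - p) * 0

def two_box_ev(p):
    # Predicted correctly: opaque box is empty, you get only the $1000.
    # Mispredicted: opaque box holds $1M too, you get $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99):
    print(p, one_box_ev(p), two_box_ev(p))
```

In this model one-boxing has the higher expected payoff whenever p > 1001/2000, i.e. whenever the predictor does even slightly better than chance. So the trustworthy strategy wins not only against an oracle, but against any smart person who is paying attention.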
THE NECESSITY OF NEWCOMBLIKE INTERACTION
(This section inspired by Nate Soares' post "newcomblike problems are the norm".)
You want to know that people care about you. You don't just want to know that the other person is acting helpfully right now. If someone doesn't care about you, and is just helping you because it helps them, then you'll trust and like them less. If you know that someone thinks your function from experience to emotions is acceptable to them, you will feel validated.
I think this makes a lot of sense. In artificial distributed systems, we ask a bunch of computers to work together, each computer a node in the system. All of the computers must cooperate to perform some task - some artificial distributed systems, like BitTorrent, are intended to allow the different nodes (computers) in the system to share things with each other, but where each participating computer joins to benefit from the system. Other distributed systems, such as the backbone routers of the internet, are intended to provide a service to the outside world - in the case of the backbone routers, they make the internet work.
However, nodes can violate the distributed system's protocols, and thereby gain advantage. In BitTorrent, nodes can download but refuse to upload. In the internet backbone, each router needs to know where other routers are, but if a nearby router lies, then the entire internet may slow down dramatically, or route huge portions of US traffic to China. Unfortunately, despite the many trust problems in distributed systems, we have solved relatively few of them. Bitcoin is a fun exception to this - I'll use it as a metaphor in a bit.
Humans are each nodes in a natural distributed system, where each node has its own goals, and can provide and consume services, just like the artificial ones we've built. But we also have this same trust problem, and must be solving it somehow, or we wouldn't be able to make civilizations.
Human intuitions automatically look for reasons why the world is the way it is. In stats/ML/AI, it's called generative modeling. When you have an experience - every time you have any experience, all the time, on the fly - your brain's low level circuitry assumes there was a reason that the experience happened. Each moment your brain is looking for what the process was that created that experience for you. Then in the future, you can take your mental version of the world and run it forward to see what might happen.
When you're young, you start out pretty uncertain about what processes might be driving the world, but as you get older your intuition learns to expect gravity to work, learns to expect that pulling yourself up by your feet won't work, and learns to think of people as made of similar processes to oneself.
So when you're interacting with an individual human, your brain is automatically tracking what sort of process they are - what sort of person they are. It is my opinion that this is one of the very hardest things that brains do (where I got that idea). When you need to decide whether you trust them, you don't have to do that based only on their actions - you also have your mental version of them, learned from watching how they behave.
But it's not as simple as evaluating, just once, what kind of person someone is. As you interact with someone, you are continuously, automatically tracking what kind of person they are and what kind of thoughts they seem to be having right now, in the moment. When I meet a person and they say something nice, is it because they think they're supposed to, or because they care about me? If my boss is snapping at me, are they trying to convince me I'm unwelcome at the company without saying it outright, or is my boss just having a bad day?
Note: I am not familiar with the details of the evolution of cooperation. I propose a story here to transfer intuitions, but the details may have happened in a different order. I would be surprised if I am not describing a real dynamic, but if I am not, that would weaken my point.
Humans are smart, and our ancestors have been reasonably smart going back a very long time, far before even primates branched off. So imagine what it was like to be an animal in a pre-tribal species. You want to survive, and you need resources to do so. You can take them from other animals. You can give them to other animals. Some animals may be more powerful than you, and attempt to take yours.
Imagine what it's like to be an animal partway through the evolution of cooperation. You feel some drive to be nice to other animals, but you don't want to be nice if the other animal will take advantage of you. So you pay attention to which animals seem to care about being nice, and you only help them. They help you, and you both survive.
As the generations go on, this happens repeatedly. An animal that doesn't feel caring for other animals is an animal that you can't trust; An animal that does feel caring is one that you want to help, because they'll help you back.
Over generations, it becomes more and more the case that the animals participating in this system actually want to help each other - because the animals around them are all running newcomblike tests of friendliness. Does this animal seem to have a basic urge to help me? Will this animal only take the one box, if I leave the boxes lying out? If the answer is that you can trust them, and you recognize that you can trust them, then that is the best for you, because then the other animal recognizes that they were trusted and will be helpful back.
After many generations of letting evolution explore this environment, you can expect to end up with animals that feel strong emotions for each other, animals which want to be seen as friendly, animals where helping matters. Here is an example of another species that has learned to behave sort of this way.
This seems to me to be a good generating hypothesis for why people innately care about what other people think of them, and it seems to predict ways that people will care about each other. I want to feel like people actually care about me; I don't just want to hear them say that they do. In particular, it seems to me that humans want this far more than you would expect of an arbitrary smart-ish animal.
I'll talk more in detail about what I think human innate social drives actually are in a future blog post. I'm interested in links to any research on things like human basic needs or emotional validation. For now, the heuristic I've found most useful is simply "People want to know that those around them approve of/believe their emotional responses to their experiences are sane". See also Succeed Socially, in the related list.
THE RECURSION DISTORTION
Knowing that humans evaluate each other in newcomblike ways doesn't seem to me to be enough to figure out how to interact with them. Armed only with the statement "one needs to behave in a way that others will recognize as predictably cooperative", I still wouldn't know how to navigate this.
At a lightning talk session I was at a few months ago, Andrew Critch made the argument that humans regularly model many layers deep in real situations. His claim was that people intuitively have a sense of what each other are thinking, including their senses of what you're thinking, and back and forth for a bit. Before I go on, I should emphasize how surprising this should be, without the context of how the brain actually does it: the more levels of me-imagining-you-imagining-me-imagining-you-imagining… you go, the more of an explosion of different options you should expect to see, and the less you should expect actual-sized human minds to be able to deal with it.
However, after having thought about it, I don't think it's as surprising as it seems. I don't think people actually vividly imagine this that many levels deep: what I think is going on is that as you grow up, you learn to recognize different clusters of ways a person can be. Stereotypes, if you will, but not necessarily so coarse as that implies.
At a young age, if I am imagining you, I imagine a sort of blurry version of you. My version of you will be too blurry to have its own version of me, but I learn to recognize the blurry-you when I see it. The blurry version of you only has a few emotions, but I sort of learn what they are: my blurry you can be angry-"colored", or it can be satisfied-"colored", or it can be excited-"colored", etc. ("Color" used here as a metaphor, because I expect this to be built a similar way to color or other basic primitives in the brain.)
Then later, as I get older, I learn to recognize when you see a blurry version of me. My new version of you is a little less blurry, but this new version of you has a blurry-me inside, made out of the same anger-color or satisfaction-color that I had learned you could be made out of. As this goes on, eventually this version of you becomes its own individual colors - you can be angry-you-with-happy-me-inside colored when I took your candy, or relieved-you-with-distraught-me-inside colored when you see that I'm unhappy because a teacher took your candy back.
As this goes on, I learn to recognize versions of you as their own little pictures, with only a few colors - but each color is a "color" that I learned in the past, and the "color" can have me in it, maybe recursively. Now my brain doesn't have to track many levels - it just has to have learned that there is a "color" for being five levels deep of this, or another "color" for being five levels deep of that. Now that I have that color, my intuition can make pictures out of the colors and thereby handle six levels deep, and eventually my intuition will turn six levels into colors and I'll be able to handle seven.
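The compression described above can be sketched with toy numbers (everything here is a made-up illustration, not a claim about real brains): if every level of me-imagining-you is built from scratch, the state space multiplies with depth, but if each learned level collapses into a single reusable "color", depth only adds linearly.

```python
def naive_states(moods: int, depth: int) -> int:
    """Distinct nested models when each level is imagined from scratch:
    a mood for me, times a mood for my model of you, and so on down."""
    return moods ** depth

def compressed_states(moods: int, depth: int) -> int:
    """Distinct learned 'colors' when each finished level becomes one
    reusable primitive: only `moods` new colors per extra level."""
    return moods * depth

print(naive_states(5, 5))       # 3125 raw combinations
print(compressed_states(5, 5))  # 25 learned "colors"
```

With 5 basic moods and 5 levels of recursion, the naive scheme needs thousands of distinct states while the compressed one needs a few dozen, which is roughly why the intuition can keep up.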
I think it gets a bit more complicated than this for particularly socially competent people, but that's a basic outline of how humans could reliably learn to do this.
A RECURSION EXAMPLE
I found the claim that humans regularly social-model 5+ levels deep hard to believe at first, but Critch had an example to back it up, which I attempt to recreate here.
Fair warning, it's a somewhat complicated example to follow, unless you imagine yourself actually there. I only share it for the purpose of arguing that this sort of thing actually can happen; if you can't follow it, then it's possible the point stands without it. I had to invent notation in order to make sure I got the example right, and I'm still not sure I did.
(I'm sorry this is sort of contrived. Making these examples fully natural is really really hard.)
- You're back in your teens, and friends with Kris and Gary. You hang out frequently and have a lot of goofy inside jokes and banter.
- Tonight, Gary's mom has invited you and Kris over for dinner.
- You get to Gary's house several hours early, but he's still working on homework. You go upstairs and borrow his bed for a nap.
- Later, you're awoken by the activity as Kris arrives, and Gary's mom shouts a greeting from the other room: "Hey, Kris! Your hair smells bad." Kris responds with "Yours as well." This goes back and forth, with Gary, Kris, and Gary's mom fluidly exchanging insults as they chat. You're surprised - you didn't know Kris knew Gary's mom.
- Later, you go downstairs to say hi. Gary's mom says "welcome to the land of the living!" and invites you all to sit and eat.
- Partway through eating, Kris says "Gary, you look like a slob."
- You feel embarrassed in front of Gary's mom, and say "Kris, don't be an ass."
- You knew they had been bantering happily earlier. If you hadn't had an audience, you'd have just chuckled and joined in. What happened here?
If you'd like, pause for a moment and see if you can figure it out.
You, Gary, and Kris all feel comfortable bantering around each other. Clearly, Gary and Kris feel comfortable around Gary's mom, as well. But the reason you were uncomfortable is that you know Gary's mom thought you were asleep when Kris got there, and you hadn't known they were cool with each other before tonight, so as far as Gary's mom knows, you think Kris is just being an ass. So you respond to that.
Let me try saying that again. Here's some notation for describing it:
X => Y: X correctly believes Y
X ~> Y: X incorrectly believes Y
X ?? Y: X does not know Y
X=Y=Z=...: X and Y and Z and ... are comfortable bantering
And here's an explanation in that notation:
Kris=You=Gary: Kris, You, and Gary are comfortable bantering.
Gary=Kris=Gary's mom: Gary, Kris, and Gary's mom are comfortable bantering.
You => [Gary=Gary's mom=Kris]: You know they're comfortable bantering.
Gary's mom ~> [You ?? [Gary=Gary's mom=Kris]]: Gary's mom incorrectly believes you don't know.
You => [Gary's mom ~> [You ?? [Gary=Gary's mom=Kris]]]: You know that Gary's mom incorrectly believes you don't know they're comfortable bantering.
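For concreteness, that notation can be encoded as a small data structure; the tuple representation and the `depth` helper here are invented for illustration, not part of the original notation.

```python
# Nested beliefs as tagged tuples: (holder, operator, content), where
# content is either the base proposition (a string) or another belief.
CORRECT, INCORRECT, UNKNOWN = "=>", "~>", "??"

banter = "Gary=Gary's mom=Kris are comfortable bantering"

# Gary's mom ~> [You ?? [banter]]
moms_belief = ("Gary's mom", INCORRECT, ("You", UNKNOWN, banter))
# You => [Gary's mom ~> [You ?? [banter]]]
your_belief = ("You", CORRECT, moms_belief)

def depth(belief) -> int:
    """Count how many belief operators are stacked above the base fact."""
    if isinstance(belief, str):
        return 0
    _holder, _op, content = belief
    return 1 + depth(content)

print(depth(your_belief))  # 3 levels of belief about one shared fact
```

Writing it out this way makes the point visually: each extra "X believes that..." wrapper is one more level your intuition is quietly tracking.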
And to you in the moment, this crazy recursion just feels like a bit of anxiety, fuzziness, and an urge to call Kris out so Gary's mom doesn't think you're ok with Kris being rude.
Now, this is a somewhat unusual example. It has to be set up just right in order to get such a deep recursion. The main character's reaction is sort of unhealthy/fake - better would have been to clarify that you overheard them bantering earlier. As far as I can tell, the primary case where things get this hairy is when there's uncertainty. But it does actually get this deep - this is a situation pretty similar to ones I've found myself in before.
There's a key thing here: when things like this happen, you react nearly immediately. You don't need to sit and ponder, you just immediately feel embarrassed for Kris, and react right away. Even though in order to figure out explicitly what you were worried about, you would have had to think about it four levels deep.
If you ask people about this, and it takes deep recursion to figure out what's going on, I expect you will generally get confused non-answers, such as "I just had a feeling". I also expect that when people give confused non-answers, it is almost always because of weird recursion things happening.
In Critch's original lightning talk, he gave this as an argument that the human social skills module is the one that just automatically gets this right. I agree with that, but I want to add: I think that that module is the same one that evaluates people for trust and tracks their needs and generally deals with imagining other people.
COMMUNICATION IN A NEWCOMBLIKE WORLD
So people have generative models of each other, and they care about each other's generative models of them. I care about people's opinion of me, but not in just a shallow way: I can't just ask them to change their opinion of me, because I'll be able to tell what they really think. Their actual moral judgement of their actual generative model of me directly affects my feelings of acceptance. So I want to let them know what kind of person I am: I don't just want to claim to be that kind of person, I want to actually show them that I am that kind of person.
You can't just tell someone "I'm not an asshole"; that's not strong evidence about whether you're an asshole, because people have incentives to lie. People have powerful low-level automatic Bayesian inference systems, and they'll automatically and intuitively recognize which social explanations of your behavior are more likely. If you want them to believe you're not an asshole, you have to give credible evidence: you have to show them that you do things that would have been unlikely had you been an asshole. You have to show them that you're willing to be nice to them and to accommodate their needs - things that would be out of character if you were a bad character.
If you hang out with people who read Robin Hanson, you've probably heard of this before, under the name "signaling".
But many people who hear that interpret it as a sort of vacuous version, as though "signaling" is a sort of fakery, as though all you need to do is give the right signals. If someone says "I'm signaling that I'm one of the cool kids", then sure, they may be doing things that for other people would be signals of being one of the cool kids, but on net the evidence is that they are not one of the cool kids. Signaling isn't about the signals; it's about giving evidence about yourself. In order to give credible evidence that you're one of the cool kids, you have to either get really good at lying-with-your-behavior such that people actually believe you, or you have to change yourself to be one of the cool kids. (This is, I think, a big part of where social anxiety advice falls down: "fake it 'til you make it" works only insofar as faking it actually temporarily makes it.)
"Signaling" isn't fakery, it is literally all communication about what kind of person you are. A common thing Hanson says, "X isn't about Y, it's about signaling" seems misleading to me: if someone is wearing a gold watch, it's not so much that wearing a gold watch isn't about knowing the time, it's that the owner's actual desires got distorted by the lens of common knowledge. Knowing that someone would be paying attention to them to infer their desires, they filtered their desires to focus on the ones they thought would make them look good. This also can easily come off as inauthentic, and it seems fairly clear why to me: if you're filtering your desires to make yourself look good, then that's a signal that you need to fake your desires or else you won't look good.
Signals are focused around hard-to-fake evidence. Anything and everything that is hard to fake, would only happen if you're a particular kind of person, and is recognized as such by someone else is useful in conveying information about what kind of person you are. Fashion and hygiene are good examples of this: being willing to put in the effort to make yourself fashionable or presentable, respectively, is evidence of being the kind of person who cares about participating in the societal distributed system.
Conveying truth in ways that are hard to fake is the sort of thing that comes up in artificial distributed systems, too. Bitcoin is designed around a "blockchain": a series of incredibly-difficult-to-fake records of transactions.
Bitcoin has interesting cryptographic tricks to make this hard to fake, but it centers around having a lot of people doing useless work, so that no one person can do a bunch more useless work and thereby succeed at faking it.
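A toy version of that "useless work" can be sketched as proof-of-work, assuming SHA-256 hashing as Bitcoin uses (the record, difficulty, and function name here are made up for illustration; real mining hashes structured block headers, not strings).

```python
import hashlib

def mine(record: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 digest starts with `difficulty`
    zero hex digits: expensive to find, but one hash call to verify."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{record}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

record = "Alice pays Bob 1 coin"
nonce = mine(record)
digest = hashlib.sha256(f"{record}{nonce}".encode()).hexdigest()
print(digest.startswith("0000"))  # True: anyone can check it cheaply
```

The asymmetry is the point: producing the nonce takes thousands of guesses on average, while checking it takes a single hash, so a valid record is hard-to-fake evidence that work was done.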
From the inside, it doesn't feel like we're in a massive distributed system. It doesn't feel like we're tracking game theory and common knowledge. Yet everyone, even those who have never heard of these concepts, does it automatically.
In the example, the main character just felt like something was funny. The reason they were able to figure it out and say something so fast was that they were a competent human who had focused their considerable learning power on understanding social interaction, presumably from a young age, and automatically recognized a common knowledge pattern when it presented itself.
But in real life, people are constantly doing this. To get along with people, you have to be willing to pay attention to giving evidence about your perception of them. To be accepted, you have to be willing to give evidence that you are the kind of person that other people want to accept, and you might need to change yourself if you actually just aren't.
In general, I currently think that minimizing recursion depth of common knowledge is important. Try to find ways-to-be that people will be able to recognize more easily. Think less about social things in-the-moment so that others have to think less to understand you; adjust your policies to work reliably so that people can predict them reliably.
Other information of interest
- Formal argument for the value of integrity:
- Comparison of integrity vs tit for tat:
- Another example of what it means to show someone you care:
- Make it actually useful to be honest:
- Seek social skills that promote honesty and depth:
- Have your own ideas:
- My summary view of the ideas the above posts go into detail about:
- Wonderful place to bootstrap your basic ability to communicate with social signals:
Introduction: Here's a misconception about World War II that I think is harmful and I don't see refuted often enough.
Misconception: In 1941, Hitler was sitting pretty with most of Europe conquered and no huge difficulties on the horizon. Then, due to his megalomania and bullshit ideology, he decided to invade Russia. This was an unforced error of epic proportions. It proved his undoing, like that of Napoleon before him.
Rebuttal: In hindsight, we think of the Soviet Union as a superpower and military juggernaut which you'd be stupid to go up against. But this is not how things looked to the Germans in 1941. Consider World War I. In 1917–1918, Germany and Austria had defeated Russia at the same time as they were fighting a horrifyingly bloody war with France and Britain - and another devastating European war with Italy. In 1941, Italy was an ally, France had been subdued and Britain wasn't in much of a position to exert its strength. Seemingly, the Germans had much more favorable conditions than in the previous round. And they won the previous round.
In addition, the Germans were not crazy to think that the Red Army was a bit of a joke. The Russians had had their asses handed to them by Poland in 1920 and in 1939–1940 it had taken the Russians three months and a ridiculous number of casualties to conquer a small slice of Finland.
Nevertheless, Russia did have a lot of manpower and a lot of equipment (indeed, far more than the Germans had thought) and was a potential threat. The Molotov-Ribbentrop pact was obviously cynical and the Germans were not crazy to think that they would eventually have to fight the Russians. Being the first to attack seemed like a good idea and 1941 seemed like a good time to do it. The potential gains were very considerable. Launching the invasion was a rational military decision.
Why this matters: The idea that Hitler made his most fatal decision for irrational reasons feeds into the conception that evil and irrationality must go hand in hand. It's the same kind of thinking that makes people think a superintelligence would automatically be benign. But there is no fundamental law of the universe which prevents a bad guy from conquering the world. Hitler lost his war with Russia for perfectly mundane and contingent reasons like, “the communists had been surprisingly effective at industrialization.”
In the past couple years, if you've poked your head into Rationality-sphere discussions you may have heard tell of a mental framework which has eluded clear boundaries but has nonetheless raised some interested eyebrows and has begun to solidify into a coherent conversation point. This system of thought has been variously referred to as "Postrationality" or "Metarationality" or "Keganism" or "Meaningness." Briefly put, Metarationality is a set of related Rationality concepts that place less emphasis on idealized Less Wrong style Rationality and more on one's place in a developmental psychology pathway. This description is imperfect in that there is not yet an agreed-upon definition of Metarationality; it currently stands only as a fuzzy set of relationships between certain specific writings emerging from the traditional Rationality space.
In the spirit of Repositories, a few other LW-ers and I have compiled some source materials that fall inside or adjacent to this memespace. If you are aware of any conspicuously missing links, posts, or materials, please post them in a single comment and I'll add to the lists! (Edit 1-28-17: There have been many suggestions for additions, which I will add to the lists soon!)
- In Over Our Heads - Robert Kegan. Introduction to the 5-Stage model of psychological development. The "Thinking: Fast and Slow" of Metarationality, and spiritual sequel to his earlier work, The Evolving Self.
- Metaphors We Live By - George Lakoff and Mark Johnson. A theory of language and the mind, claimed by many as substantially improving their practical ability to interact with both the world and writing.
- Impro: Improvisation and the Theatre - Keith Johnstone. A meandering and beautiful if not philosophically rigorous description of life in education and theater, and for many readers proof that logic is not the only thing that induces mental updates.
- Ritual and its Consequences - Adam Seligman et al. An anthropological work describing the role of ritual and culture in shaping attitudes, actions, and beliefs on a societal scale. The subtitle, An Essay on the Limits of Sincerity, closely matches metarationalist themes.
- Meaningness - David Chapman. Having originally written Rationality-adjacently, Chapman now encompasses a broader ranging and well internally-referenced collection of useful metarationalist concepts, including the very coining of "Metarationality."
- Ribbonfarm - A group blog from Venkatesh Rao and Sarah Perry, self-described as "Longform Insight Porn" and caring not for its relationship or non-relationship to Rationality as a category.
Individual Introductory Posts
- Postrationality, Table of Contents on Yearly Cider
- How to Think Real Good on Meaningness
- Pop Bayesianism: Cruder Than I Thought? on Meaningness
- The Essence of Peopling on Ribbonfarm
- Trace of the Weirding on Ribbonfarm
- Reconciling Inside View and Outside View of Conflict and Identity on Life in a Free Market
My main intention at this juncture is to encourage and coordinate understanding of the social phenomenon and thought systems entailed by the vast network spanning from the above links. There is a lot to be said and argued about the usefulness or correctness of even using terms such as Metarationality, such as arguments that it is only a subset of Rationalist thought, or that terms like Postrationality are meant to signal ingroup superiority to Rationalism. There is plenty of ink to be spilled over these questions, but we'll get there in due time.
Let's start with charitability, understanding, goodwill, and empiricism, and work from there.
Thanks to /u/agilecaveman for their continued help and brilliance.
I grew up in socialist East Germany. Like most of my fellow citizens, I was not permitted to leave the country. But there was an important exception: people could leave after retirement. Why? Because that meant they forfeited their retirement benefits. Once you took more from the state than you gave, you were finally allowed to leave. West Germany would generously take you in. My family lived near the main exit checkpoint for a while, and there was a long line of old people most days.
And then there is Saudi Arabia and other rentier states. Rentier states (https://en.m.wikipedia.org/wiki/Rentier_state) derive most of their income from sources other than their population. The population gets a lot more wealth from the state than the state gets from the population. States like Saudi Arabia are therefore relatively independent of their population's consent to policy. A citizen who is unhappy is welcome to leave, or to retreat to their private sphere and live off benefits while keeping their mouth shut - neither of these options incurs a significant cost for the state.
I think these facts are instructive in thinking about Universal Basic Income. I want to make a point that I haven't seen made in discussions of the matter.
Most political systems (not just democracies) are built on an assumption that the state needs its citizens. This assumption is always a bit wrong - for example, no state has much need of the terminally ill, except to signal to its citizens that it cares for all of them. In the cases of East Germany and Saudi Arabia, this assumption is more wrong. And Universal Basic Income makes it more wrong as well.
From the point of view of a state, there are citizens who are more valuable (or who help in competition with other states) and ones who are more of a burden (who make competing with other states more difficult). Universal Basic Income massively broadens the part of society that is a net loss to the state.
Now obviously technological unemployment is likely to do that anyway. But there's a difference between answers to that problem that divide up the available work between the members of society and answers that divide up society into contributors and noncontributors. My intuition is that UBI is the second kind of solution, because states will be incentivized to treat contributors differently from noncontributors. The examples are to illustrate that a state can behave very differently towards citizens if it is fundamentally not interested in retaining them.
I go along with Harari's suggestion that the biggest purely political problem of the 21st century is the integration of the economically unnecessary parts of the population into society. My worry is that UBI, while helping with immediate economic needs, makes that problem worse in the long run. Others have already pointed out problems with UBI (such as that in a democracy it'll be impossible to get rid of if it is a failure) that gradual approaches like lower retirement age, later entry into the workforce and less work per week don't have. But I reckon that behind the immediate problems with UBI such as the amount of funding it needs and the question of what it does to the motivation to work, there's a whole class of problems that arise out of the changed relationships between citizens, states and economies. With complex networks of individuals and institutions responding intelligently to the changed circumstances, a state inviting its citizens to emigrate may not be the weirdest of unforeseen consequences.
Epistemic Status: sharing a hypothesis that's slowly been coalescing since a discussion with Eliezer at EAG and got catalyzed by Anna's latest LW post along with an exercise I have been using. n=1
Mental phenomena (and thus rationality skills) can't be trained without a feedback loop that causes calibration in the relevant direction. One of my guesses for a valuable thing Eliezer did is habitual stack traces, causing a leveling up of stack trace resolution, i.e. seeing more fine-grained detail in mental phenomena. This is related to 'catching flinches' as Anna describes, which is an example of a particularly useful phenomenon to be able to catch. In general, you can't tune black boxes; you need to be able to see the individual steps.
How can you level up the stack trace skill? Triaging your unwillingness to do things, and we'll start with your unwillingness to practice the stack trace skill! I like 'triage' more than 'classify' because it imports some connotations about scope sensitivity.
In order to triage we need a taxonomy. Developing/hacking/modding your own is what ultimately works best, but you can use prebuilt ones as training wheels. Here are two possible taxonomies:
Note whether it is experienced as
- Distracting Desire
Note whether it is experienced as
- Mental talk
- Mental images
- Sensations in the body
Form the intention to practice the stack trace skill and then try to classify at least one thing that happens. If you feel good when you get a 'hit' you will be more likely to catch additional events.
You can try this on anything. The desire for an unhealthy snack, the unwillingness to email someone etc. Note that the exercise isn't about forcing yourself to do things you don't want to do. You just want to see more clearly your own objections to doing it. If you do it more, you'll start to notice that you can catch more 'frames' or multiple phenomena at the same time or in a row e.g. I am experiencing ambiguity as the mental talk "I'm not sure how to do that" and as a slightly slimy/sliding away sensation followed by aversion to feeling the slimy feeling and an arising distracting desire to check my email. Distinguishing between actual sensations in the body and things that only seem like they could maybe be described as sensations is mostly a distraction and not all that important initially.
These are just examples and finding nice tags in your own mentalese makes the thing run smoother. You can also use this as fuel for focusing for particularly interesting frames you catch e.g. when you catch a limiting belief. It's also interesting to notice instances of the 'to-be' verb form in mental talk as this is the source of a variety of map-territory distinction errors.
There is a specific failure worth mentioning: coming up with a story. If you ask yourself questions like "Why did I think that?" your brain is great at coming up with plausible sounding stories that are often bullshit. This is why, when practicing the skill, you have to prime the intention to catch specific things beforehand. Once the skill has been built up you can use it on arbitrary thoughts and have a sense for the difference between 'story' and actual frame catching.
If other people try this, I'm curious to hear feedback. My experience so far has been that increasing the resolution of stack traces has made practicing every other mental technique dramatically easier, because the feedback loops are all tighter. It is especially relevant to repairing a failed TAP. How much practice was involved? A few minutes a day for 3 weeks caused a noticeable effect that has endured. My models, plans, and execution fail less often, and when they do, I have a much better chance of catching the real culprit.
I used to think that comments didn’t matter. I was wrong. This is important because communities of discourse are an important source of knowledge. I’ll explain why I changed my mind, and then propose a simple mechanism for improving them, that can be implemented on any platform that allows threaded comments.
There seems to actually be real momentum behind this attempt as reviving Less Wrong. One of the oldest issues on LW has been the lack of content. For this reason, I thought that it might be worthwhile opening a thread where people can suggest how we can expand the scope of what people write about in order for us to have sufficient content.
Does anyone have any ideas about which areas of rationality are underexplored? Please only list one area per comment.
Planning 101: Techniques and Research
<Cross-posed from my blog>
[Epistemic status: Relatively strong. There are numerous studies showing that predictions often become miscalibrated. Overconfidence in itself appears fairly robust, appearing in different situations. The actual mechanism behind the planning fallacy is less certain, though there is evidence for the inside/outside view model. The debiasing techniques are supported, but more data on their effectiveness could be good.]
Humans are often quite overconfident, and perhaps for good reason. Back on the savanna and even some places today, bluffing can be an effective strategy for winning at life. Overconfidence can scare down enemies and avoid direct conflict.
When it comes to making plans, however, overconfidence can really screw us over. You can convince everyone (including yourself) that you’ll finish that report in three days, but it might still really take you a week. Overconfidence can’t intimidate advancing deadlines.
I’m talking, of course, about the planning fallacy, our tendency to make unrealistic predictions and plans that just don’t work out.
Students are prime victims of the planning fallacy:
First, students were asked to predict when they were 99% sure they’d finish a project. When the researchers followed up with them later, though, only about 45% of the students, less than half, had actually finished by their own predicted times [Buehler, Griffin, Ross, 1995].
Even more striking, students working on their psychology honors theses were asked to predict when they’d finish, “assuming everything went as poorly as it possibly could.” Yet only about 30% of students finished by their own worst-case estimate [Buehler, Griffin, Ross, 1995].
Similar overconfidence was also found in Japanese and Canadian cultures, giving evidence that this is a human (and not US-culture-based) phenomenon. Students continued to make optimistic predictions, even when they knew the task had taken them longer last time [Buehler and Griffin, 2003, Buehler et al., 2003].
As a student myself, though, I don’t mean to just pick on us.
The planning fallacy affects projects across all sectors.
An overview of public transportation projects found that costs ran, on average, 20–45% above estimates. In fact, research has shown that these poor predictions haven’t improved at all in the past 30 years [Flyvbjerg 2006].
And there’s no shortage of anecdotes, from the Scottish Parliament Building, which cost 10 times more than expected, to the Denver International Airport, which took over a year longer and cost several billion more than planned.
When it comes to planning, we suffer from a major disparity between our expectations and reality. This article outlines the research behind why we screw up our predictions and gives three suggested techniques to suck less at planning.
So what’s going on in our heads when we make these predictions for planning?
On one level, we just don’t expect things to go wrong. Studies have found that we’re biased towards not looking at pessimistic scenarios [Newby-Clark et al., 2000]. We often just assume the best-case scenario when making plans.
Part of the reason may also be due to a memory bias. It seems that we might underestimate how long things take us, even in our memory [Roy, Christenfeld, and McKenzie 2005].
But by far the dominant theory in the field is the idea of an inside view and an outside view [Kahneman and Lovallo 1993]. The inside view is the information you have about your specific project (inside your head). The outside view is what someone else looking at your project (outside of the situation) might say.
We seem to use inside view thinking when we make plans, and this leads to our optimistic predictions. Instead of thinking about all the things that might go wrong, we’re focused on how we can help our project go right.
Still, it’s the outside view that can give us better predictions. And it turns out we don’t even need to do any heavy-lifting in statistics to get better predictions. Just asking other people (from the outside) to predict your own performance, or even just walking through your task from a third-person point of view can improve your predictions [Buehler et al., 2010].
Basically, the difference in our predictions seems to depend on whether we’re looking at the problem in our heads (a first-person view) or outside our heads (a third-person view). Whether we’re the “actor” or the “observer” in our minds seems to be a key factor in our planning [Pronin and Ross 2006].
I’ll be covering three ways to improve predictions: Murphyjitsu, Reference Class Forecasting (RCF), and Back-planning. In actuality, they’re all pretty much the same thing; all three techniques focus, on some level, on trying to get more of an outside view. So feel free to choose the one you think works best for you (or do all three).
For each technique, I’ll give an overview and cover the steps first and then end with the research that supports it. They might seem deceptively obvious, but do try to keep in mind that obvious advice can still be helpful!
(Remembering to breathe, for example, is obvious, but you should still do it anyway, if you don't want to suffocate.)
“Avoid Obvious Failures”
Almost as good as giving procrastination an ass-kicking.
The name Murphyjitsu comes from the infamous Murphy’s Law: “Anything that can go wrong, will go wrong.” The technique itself is from the Center for Applied Rationality (CFAR), and is designed for “bulletproofing your strategies and plans”.
Here are the basic steps:
1. Figure out your goal. This is the thing you want to make plans to do.
2. Write down which specific things you need to get done to make the thing happen. (Make a list.)
3. Now imagine it’s one week (or month) later, and yet you somehow didn’t manage to get started on your goal. (The visualization part here is important.) Are you surprised?
4. Why? (What went wrong that got in your way?)
5. Now imagine you take steps to remove the obstacle from Step 4.
6. Return to Step 3. Are you now surprised that you’d fail? If so, your plan is probably good enough. (Don’t fool yourself!)
7. If failure still seems likely, go through Steps 3–6 a few more times until you “problem proof” your plan.
Murphyjitsu is based on a strategy called a “premortem” or “prospective hindsight”, which basically means imagining that the project has already failed and “looking backwards” to see what went wrong [Klein 2007].
It turns out that putting ourselves in the future and looking back can help identify more risks, or see where things can go wrong. Prospective hindsight has been shown to increase our predictive power so we can make adjustments to our plans — before they fail [Mitchell et al., 1989, Veinott et al., 2010].
This seems to work well, even if we’re only using our intuitions. While that might seem a little weird at first (“aren’t our intuitions pretty arbitrary?”), research has shown that our intuitions can be a good source of information in situations where experience is helpful [Klein 1999; Kahneman 2011]*.
While a premortem is usually done on an organizational level, Murphyjitsu works for individuals. Still, it’s a useful way to “failure-proof” your plans before you start them that taps into the same internal mechanisms.
Here’s what Murphyjitsu looks like in action:
“First, let’s say I decide to exercise every day. That’ll be my goal (Step 1). But I should also be more specific than that, so it’s easier to tell what “exercising” means. So I decide that I want to go running on odd days for 30 minutes and do strength training on even days for 20 minutes. And I want to do them in the evenings (Step 2).
Now, let’s imagine that it’s now one week later, and I didn’t go exercising at all! What went wrong? (Step 3) The first thing that comes to mind is that I forgot to remind myself, and it just slipped out of my mind (Step 4). Well, what if I set some phone / email reminders? Is that good enough? (Step 5)
Once again, let’s imagine it’s one week later and I made a reminder. But let’s say I still didn’t go exercising. How surprising is this? (Back to Step 3) Hmm, I can see myself getting sore and/or putting other priorities before it…(Step 4). So maybe I’ll also set aside the same time every day, so I can’t easily weasel out (Step 5).
How do I feel now? (Back to Step 3) Well, if once again I imagine it’s one week later and I once again failed, I’d be pretty surprised. My plan has two levels of fail-safes and I do want to do exercise anyway. Looks like it’s good! (Done)”
“Get Accurate Estimates”
Predicting the future…using the past!
Reference class forecasting (RCF) is all about using the outside view. Our inside views tend to be very optimistic: we see all the ways that things can go right, but none of the ways things can go wrong. By looking at past history — other people who have tried the same or similar thing as us — we can get a better idea of how long things will really take.
Here are the basic steps:
1. Figure out what you want to do.
2. Check your records for how long it took you last time.
3. That’s your new prediction.
4. If you don’t have past information, look up about how long it takes, on average, to do your thing. (This usually looks like Googling “average time to do X”.)**
5. That’s your new prediction!
Technically, the actual process for reference class forecasting works a little differently. It involves a statistical distribution and some additional calculations, but for most everyday purposes, the above algorithm should work well enough.
In both cases, we’re trying to take an outside view, which we know improves our estimates [Buehler et al., 1994].
When you Google the average time or look at your own data, you’re forming a “reference class”, a group of related actions that can give you info about how long similar projects tend to take. Hence, the name “reference class forecasting”.
Basically, RCF works by looking only at results. This means that we can avoid any potential biases that might have cropped up if we were to think it through. We’re shortcutting right to the data. The rest of it is basic statistics; most people are close to average. So if we have an idea of what the average looks like, we can be sure we’ll be pretty close to average as well [Flyvbjerg 2006; Flyvbjerg 2008].
The main difference between the algorithm above and the standard one is that ours focuses on your own experience, so the estimate you get tends to be more accurate than an average taken over an entire population.
For example, if it usually takes me about 3 hours to finish homework (I use Toggl to track my time), then I’ll predict that it will take me 3 hours today, too.
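A minimal sketch of that homework example in Python (the durations below are hypothetical stand-ins for your own time logs):

```python
import statistics

# Hypothetical time log (in minutes) for past homework sessions,
# e.g. exported from a time tracker like Toggl.
past_durations = [170, 185, 190, 160, 200]

# Reference class forecast: predict the typical past duration.
prediction = statistics.median(past_durations)
print(prediction)  # 185
```

The median is a bit more robust to one unusually long session than the mean, but either works as a simple reference class forecast.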
It’s obvious that RCF is incredibly simple. It literally just tells you that how long something will take you this time will be very close to how long it took you last time. But that doesn’t mean it’s ineffective! Often, the past is a good benchmark of future performance, and it’s far better than any naive prediction your brain might spit out.
RCF + Murphyjitsu Example:
For me, I’ve found that using a mixture of Reference Class Forecasting and Murphyjitsu is helpful for reducing overconfidence in my plans.
When starting projects, I will often ask myself, “What were the reasons that I failed last time?” I then make a list of the first three or four “failure-modes” that I can recall. I now make plans to preemptively avoid those past errors.
(This can also be helpful in reverse — asking yourself, “How did I solve a similar difficult problem last time?” when facing a hard problem.)
Here’s an example:
“Say I’m writing a long post (like this one) and I want to know what might go wrong. I’ve done several of these sorts of primers before, so I have a “reference class” of data to draw from. So what were the major reasons I fell behind on those posts?
Hmm, it looks like I would either forget about the project, get distracted, or lose motivation. Sometimes I’d want to do something else instead, or I wouldn’t be very focused.
Okay, great. Now what are some ways that I might be able to “patch” those problems?
Well, I can definitely start by making a priority list of my action items. So I know which things I want to finish first. I can also do short 5-minute planning sessions to make sure I’m actually writing. And I can do some more introspection to try and see what’s up with my motivation.
“Calibrate Your Intuitions with Reality”
Back-planning involves, as you might expect, planning from the end. Instead of thinking about where we start and how to move forward, we imagine we’re already at our goal and go backwards.
Here are the steps:
1. Figure out the task you want to get done.
2. Imagine you’re at the end of your task.
3. Now move backwards, step-by-step. What is the step right before you finish?
4. Repeat Step 3 until you get to where you are now.
5. Write down how long you think the task will now take you.
6. You now have a detailed plan as well as a better prediction!
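The steps above can be sketched as a simple exercise in code. This is only an illustration; the step names and day counts are hypothetical:

```python
# Back-planning: list the steps starting from the goal and walking backwards,
# attaching a rough duration (in days) to each one.
backward_steps = [
    ("give the talk and take questions", 1),
    ("publicize on social media",        7),
    ("book a room",                      2),
    ("make slides",                      7),
    ("pick a topic",                     4),
]

# Reversing recovers the forward plan; summing gives the overall estimate.
forward_plan = [name for name, _ in reversed(backward_steps)]
total_days = sum(days for _, days in backward_steps)
print(forward_plan[0], total_days)  # pick a topic 21
```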
The experimental evidence on back-planning suggests that it leads people to predict longer (and thus more realistic) times to start and finish projects.
There are a few interesting hypotheses about why back-planning seems to improve predictions. The general gist of these theories is that back-planning is a weird, counterintuitive way to think about things, which means it disrupts a lot of mental processes that can lead to overconfidence [Wiese et al., 2012].
This means that back-planning can make it harder to fall into the groove of the easy “best-case” planning we default to. Instead, we need to actually look at where things might go wrong. Which is, of course, what we want.
In my own experience, I’ve found that going through a quick back-planning session can help my intuitions “warm up” to my prediction more. As in, I’ll get an estimate from RCF, but it still feels “off”. Walking through the plan backwards can help all the parts of me understand that it really will probably take longer.
Here’s a back-planning example:
“Right now, I want to host a talk at my school. I know that’s the end goal (Step 1). So the end goal is me actually finishing the talk and taking questions (Step 2). What happens right before that? (Step 3). Well, people would need to actually be in the room. And I would have needed a room.
Is that all? (Step 3). Also, for people to show up, I would have needed publicity. Probably also something on social media. I’d need to publicize at least a week in advance, or else it won’t be common knowledge.
And what about the actual talk? I would have needed slides, maybe memorize my talk. Also, I’d need to figure out what my talk is actually going to be on.
Huh, thinking it through like this, I’d need something like 3 weeks to get it done. One week for the actual slides, one week for publicity (at least), and one week for everything else that might go wrong.
That feels more ‘right’ than my initial estimate of ‘I can do this by next week.’”
Murphyjitsu, Reference Class Forecasting, and Back-planning are the three debiasing techniques that I’m fairly confident work well. The rest of this section is far more anecdotal: these are ideas that I think are useful and interesting, but I don’t have much formal backing for them.
Decouple Predictions From Wishes:
In my own experience, I often find it hard to separate when I want to finish a task versus when I actually think I will finish a task. This is a simple distinction to keep in mind when making predictions, and I think it can help decrease optimism. The most important number, after all, is when I actually think I will finish—it’s what’ll most likely actually happen.
There’s some evidence suggesting that “wishful thinking” could actually be responsible for some poor estimates but it’s far from definitive [Buehler et al., 1997, Krizan and Windschitl].
Incentivize Correct Predictions:
Lately, I’ve been using a 4-column chart for my work. I write down the task in Column 1 and how long I think it will take me in Column 2. Then I go and do the task. After I’m done, I write down how long it actually took me in Column 3. Column 4 is the absolute value of Column 2 minus Column 3, or my “calibration score”.
The idea is to minimize my score every day. It’s simple and it’s helped me get a better sense for how long things really take.
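Here’s a minimal sketch of that 4-column chart (the task names and minutes are made up):

```python
# Columns 1-3: task, predicted minutes, actual minutes.
log = [
    ("write report", 60, 95),
    ("answer email", 15, 10),
    ("review PR",    30, 50),
]

# Column 4: calibration score = |predicted - actual|.
rows = [(task, pred, actual, abs(pred - actual)) for task, pred, actual in log]
daily_score = sum(err for _, _, _, err in rows)  # the number to minimize each day
print(daily_score)  # 60
```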
Plan For Failure:
In my schedules, I specifically write in “distraction time”. If you aren’t doing this, you may want to consider it. Most of us (me included) have wandering attentions, and I know I’ll lose at least some time to silly things every day.
Double Your Estimate:
I get it. The three debiasing techniques I outlined above can sometimes take too long. In a pinch, you can probably approximate good predictions by just doubling your naive prediction.
Most people tend to be less than 2X overconfident, but I think (pessimistically) sticking to doubling is probably still better than something like 1.5X.
Obviously, because groups are made of individuals, we’d expect them to be susceptible to the same overconfidence biases I covered earlier. Though some research has shown that groups are less susceptible to bias, more studies have shown that group predictions can be far more optimistic than individual predictions [Wright and Wells, Buehler et al., 2010]. “Groupthink” is a term used to describe the observed failings of decision making in groups [Janis].
Groupthink (and hopefully also overconfidence), can be countered by either assigning a “Devil’s Advocate” or engaging in “dialectical inquiry” [Lunenburg 2012]:
We give out more than cookies over here
A Devil’s Advocate is a person who is actively trying to find fault with the group’s plans, looking for holes in reasoning or other objections. It’s suggested that the role rotates, and it’s associated with other positives like improved communication skills.
A dialectical inquiry is where multiple teams try to create the best plan, and then present them. Discussion then happens, and the group selects the best parts of each plan. It’s a little like building something awesome out of lots of pieces, like a giant robot.
For both strategies, research has shown that they lead to “higher-quality recommendations and assumptions” (compared to not doing them), although it can also reduce group satisfaction and acceptance of the final decision [Schweiger et al. 1986].
(Pretty obvious though; who’d want to keep chatting with someone hell-bent on poking holes in your plan?)
If you’re interested in learning (even) more about the planning fallacy, I’d highly recommend the paper The Planning Fallacy: Cognitive, Motivational, and Social Origins by Roger Buehler, Dale Griffin, and Johanna Peetz. Most of the material in this guide was taken from their paper. Do go check it out! It’s free!
Remember that everyone is overconfident (you and me included!), and that failing to plan is the norm. There are scary unknown unknowns out there that we just don’t know about!
Good luck and happy planning!
* Just don’t go and start buying lottery tickets with your gut. We’re talking about fairly “normal” things like catching a ball, where your intuitions give you accurate predictions about where the ball will land. (Instead of, say, calculating the actual projectile motion equation in your head.)
** In a pinch, you can just use your memory, but studies have shown that our memory tends to be biased too. So as often as possible, try to use actual measurements and numbers from past experience.
Buehler, Roger, Dale Griffin, and Johanna Peetz. "The Planning Fallacy: Cognitive,
Motivational, and Social Origins." Advances in Experimental Social Psychology 43 (2010): 1-62. Social Science Research Network.
Buehler, Roger, Dale Griffin, and Michael Ross. "Exploring the Planning Fallacy: Why People
Underestimate their Task Completion Times." Journal of Personality and Social Psychology 67.3 (1994): 366.
Buehler, Roger, Dale Griffin, and Heather MacDonald. "The Role of Motivated Reasoning in
Optimistic Time Predictions." Personality and Social Psychology Bulletin 23.3 (1997): 238-247.
Buehler, Roger, Dale Griffin, and Michael Ross. “It’s About Time: Optimistic Predictions in
Work and Love.” European Review of Social Psychology Vol. 6, (1995): 1–32
Buehler, Roger, et al. "Perspectives on Prediction: Does Third-Person Imagery Improve Task
Completion Estimates?." Organizational Behavior and Human Decision Processes 117.1 (2012): 138-149.
Buehler, Roger, Dale Griffin, and Michael Ross. "Inside the Planning Fallacy: The Causes and
Consequences of Optimistic Time Predictions." Heuristics and Biases: The Psychology of Intuitive Judgment (2002): 250-270.
Buehler, Roger, and Dale Griffin. "Planning, Personality, and Prediction: The Role of Future
Focus in Optimistic Time Predictions." Organizational Behavior and Human Decision Processes 92 (2003): 80-90.
Flyvbjerg, Bent. "From Nobel Prize to Project Management: Getting Risks Right." Project
Management Journal 37.3 (2006): 5-15. Social Science Research Network.
Flyvbjerg, Bent. "Curbing Optimism Bias and Strategic Misrepresentation in Planning:
Reference Class Forecasting in Practice." European Planning Studies 16.1 (2008): 3-21.
Janis, Irving Lester. "Groupthink: Psychological Studies of Policy Decisions and Fiascoes."
Johnson, Dominic DP, and James H. Fowler. "The Evolution of Overconfidence." Nature
477.7364 (2011): 317-320.
Kahneman, Daniel. Thinking, Fast and Slow. Macmillan, 2011.
Kahneman, Daniel, and Dan Lovallo. “Timid Choices and Bold Forecasts: A Cognitive
Perspective on Risk Taking." Management Science 39.1 (1993): 17-31.
Klein, Gary. Sources of power: How People Make Decisions. MIT press, 1999.
Klein, Gary. "Performing a Project Premortem." Harvard Business Review 85.9 (2007): 18-19.
Krizan, Zlatan, and Paul D. Windschitl. "Wishful Thinking About the Future: Does Desire
Impact Optimism?" Social and Personality Psychology Compass 3.3 (2009): 227-243.
Lunenburg, F. "Devil’s Advocacy and Dialectical Inquiry: Antidotes to Groupthink."
International Journal of Scholarly Academic Intellectual Diversity 14 (2012): 1-9.
Mitchell, Deborah J., J. Edward Russo, and Nancy Pennington. "Back to the Future: Temporal
Perspective in the Explanation of Events." Journal of Behavioral Decision Making 2.1 (1989): 25-38.
Newby-Clark, Ian R., et al. "People focus on Optimistic Scenarios and Disregard Pessimistic
Scenarios While Predicting Task Completion Times." Journal of Experimental Psychology: Applied 6.3 (2000): 171.
Pronin, Emily, and Lee Ross. "Temporal Differences in Trait Self-Ascription: When the Self is
Seen as an Other." Journal of Personality and Social Psychology 90.2 (2006): 197.
Roy, Michael M., Nicholas JS Christenfeld, and Craig RM McKenzie. "Underestimating the
Duration of Future Events: Memory Incorrectly Used or Memory Bias?." Psychological Bulletin 131.5 (2005): 738.
Schweiger, David M., William R. Sandberg, and James W. Ragan. "Group Approaches for
Improving Strategic Decision Making: A Comparative Analysis of Dialectical Inquiry,
Devil's Advocacy, and Consensus." Academy of Management Journal 29.1 (1986): 51-71.
Veinott, Beth, Gary Klein, and Sterling Wiggins. "Evaluating the Effectiveness of the Premortem
Technique on Plan Confidence." Proceedings of the 7th International ISCRAM Conference (May 2010).
Wiese, Jessica, Roger Buehler, and Dale Griffin. "Backward Planning: Effects of Planning
Direction on Predictions of Task Completion Time." Judgment and Decision Making 11.2
Wright, Edward F., and Gary L. Wells. "Does Group Discussion Attenuate the Dispositional
Bias?." Journal of Applied Social Psychology 15.6 (1985): 531-546.
AnnaSalamon's recent post on "flinching" and "buckets" nicely complements PhilGoetz's 2009 post Reason as memetic immune disorder. (I'll be assuming that readers have read Anna's post, but not necessarily Phil's.) Using Anna's terminology, I take Phil to be talking about the dangers of merging buckets that started out as separate. Anna, on the other hand, is talking about how to deal with one bucket that should actually be several.
Phil argued (paraphrasing) that rationality can be dangerous because it leads to beliefs of the form "P implies Q". If you convince yourself of that implication, and you believe P, then you are compelled to believe Q. This is dangerous because your thinking about P might be infected by a bad meme. Now rationality has opened the way for this bad meme to infect your thinking about Q, too.
It's even worse if you reason yourself all the way to believing "P if and only if Q". Now any corruption in your thinking about either one of P and Q will corrupt your thinking about the other. In terms of buckets: If you put "Yes" in the P bucket, you must put "Yes" in the Q bucket, and vice versa. In other words, the P bucket and the Q bucket are now effectively one and the same.
In this sense, Phil was pointing out that rationality merges buckets. (More precisely, rationality creates dependencies among buckets. In the extreme case, buckets become effectively identical.) This can be bad for the reasons that Anna gives. Phil argues that some people resist rationality because their "memetic immune system" realizes that rational thinking might merge buckets inappropriately. To avoid this danger, people often operate on the principle that it's suspect even to consider merging buckets from different domains (e.g., religious scripture and personal life).
This suggests a way in which Anna's post works at the meta-level, too.
Phil's argument is that people resist rationality because, in effect, they've identified the two buckets "Think rationally" and "Spread memetic infections". They fear that saying "Yes" to "Think rationally" forces them to say "Yes" to the dangers inherent to merged buckets.
But Anna gives techniques for "de-merging" buckets in general if it turns out that some buckets were inappropriately merged, or if one bucket should have been several in the first place.
In other words, Anna's post essentially de-merges the two particular buckets "Think rationally" and "Spread memetic infections". You can go ahead and use rational thinking, even though you will risk inappropriately merging buckets, because you now have techniques for de-merging those buckets if you need to.
In this way, Anna's post may diminish the "memetic immune system" obstacle to rational thinking that Phil observed.
Prediction markets are powerful, but also still quite niche. I believe that part of this lack of popularity could be solved with significantly better tools. During my work with Guesstimate I’ve thought a lot about this issue and have some ideas for what I would like to see in future attempts at prediction technologies.
1. Machine learning for forecast aggregation
In financial prediction markets, the aggregation method is the market price. In non-market prediction systems, simple algorithms are often used. For instance, in the Good Judgement Project, the consensus trend displays “the median of the most recent 40% of the current forecasts from each forecaster.” Non-financial prediction aggregation is a pretty contested field with several proposed methods.
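As a concrete reference point, the quoted Good Judgement Project rule is straightforward to implement. Here is a sketch (the forecaster names, timestamps, and probabilities are all invented):

```python
import statistics

def consensus(forecasts):
    """Median of the most recent 40% of current forecasts, one per forecaster.

    `forecasts` maps forecaster -> list of (timestamp, probability) pairs,
    ordered oldest to newest. A sketch of the rule quoted above.
    """
    latest = sorted(
        (history[-1] for history in forecasts.values()),  # current forecast per forecaster
        key=lambda pair: pair[0],                         # order by recency
    )
    recent = latest[-max(1, round(0.4 * len(latest))):]   # keep the most recent 40%
    return statistics.median(p for _, p in recent)

forecasts = {
    "alice": [(1, 0.30), (5, 0.40)],
    "bob":   [(2, 0.70)],
    "carol": [(6, 0.55)],
    "dave":  [(3, 0.20), (4, 0.25)],
    "erin":  [(7, 0.60)],
}
print(consensus(forecasts))  # median of carol's and erin's forecasts: 0.575
```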
I haven’t heard much about machine learning used for forecast aggregation. It would seem to me that many, many factors could be useful in aggregating forecasts. For instance, some elements of a person’s social media profile may be indicative of their forecasting ability. Perhaps information about the educational differences between multiple individuals could provide insight on how correlated their knowledge is.
Perhaps aggregation methods, especially with training data, could partially detect and offset predictable human biases. If it is well known that people making estimates of project timelines are overconfident, then this could be taken into account. For instance, someone enters in “I think I will finish this project in 8 weeks”, and the system can infer something like, “Well, given the reference class I have of similar people making similar calls, I’d expect it to take 12.”
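The “8 weeks in, 12 weeks out” adjustment could be sketched as a simple reference-class correction. All numbers here are hypothetical, and a real system would learn something much richer than a single multiplier:

```python
# Past (predicted, actual) project durations in weeks for similar people.
history = [(4, 6), (10, 15), (6, 9)]

# Average overrun factor in this reference class.
factor = sum(actual / predicted for predicted, actual in history) / len(history)

naive_estimate = 8  # "I think I will finish this project in 8 weeks"
adjusted = naive_estimate * factor
print(adjusted)  # 12.0
```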
A strong machine learning system would of course require a lot of sample data, but small strides may be possible with even limited data. I imagine that if data is needed, lots of people on platforms like Mechanical Turk could be sampled.
2. Prediction interval input
The prediction tools I am familiar with focus on estimating the probabilities of binary events. This can be extremely limiting. For instance, instead of allowing users to estimate what Trump’s favorable rating would be, they instead have to bet on whether it will be over a specific amount, like “Will Trump’s favorable rate be at least 45.0% on December 31st?”
It’s probably no secret that I have a love for probability densities. I propose that users should be able to enter probability densities directly. User-entered probability densities would require more advanced aggregation techniques, but this is doable.
Probability density inputs would also require additional understanding from users. While this could definitely be a challenge, many prediction markets already are quite complicated, and existing users of these tools are quite sophisticated.
I would suspect that using probability densities could simplify questions about continuous variables and also give much more useful information on their predictions. If there are tail risks these would be obvious; and perhaps more interestingly, probability intervals from prediction tools could be directly used in further calculations. For instance, if there were separate predictions about the population of the US and the average income, these could be multiplied to have an estimate of the total GDP (correlations complicate this, but for some problems may not be much of an issue, and in others perhaps they could be estimated as well).
Probability densities make less sense for questions with a discrete set of options, like predicting who will win an election. There are a few ways of dealing with these. One is to simply leave these questions to other platforms, or to resort back to the common technique of users estimating specific percentage likelihoods in these cases. Another is to modify some of these to be continuous variables that determine discrete outcomes; like the number of electoral college votes a U.S. presidential candidate will receive. Another option is to estimate the ‘true’ probability of something as a distribution, where the ‘true’ probability is defined very specifically. For instance, a group could make probability density forecasts for the probability that the blog 538 will give to a specific outcome on a specific date. In the beginning of an election, people would guess 538's percent probability for one candidate winning a month before the election.
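To illustrate how a density input subsumes the binary question, suppose a user entered a normal density for the favorable rating (the mean and standard deviation below are invented numbers):

```python
import math

def normal_cdf(x, mean, sd):
    # Normal CDF computed via the error function.
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# Hypothetical density forecast: favorable rating ~ Normal(42%, 3%).
# The binary question "at least 45.0% on December 31st?" is then just
# a tail probability of this density.
p_at_least_45 = 1 - normal_cdf(45.0, mean=42.0, sd=3.0)
print(round(p_at_least_45, 3))  # 0.159
```

One density answers every threshold question at once, which is exactly the flexibility argued for above.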
3. Intelligent Prize Systems
I think the main reason why so many academics and rationalists are excited about prediction markets is because of their positive externalities. Prediction markets like InTrade seem to do quite well at predicting many political and future outcomes, and this information is very valuable to outside third parties.
I’m not sure how comfortable I feel about the incentives here. The fact that the main benefits come from externalities indicates that the main players in the markets aren’t exactly optimizing for these benefits. While users are incentivized to be correct and calibrated, they are not typically incentivized to predict things that happen to be useful for observing third parties.
I would imagine that the externalities created by prediction tools would strongly correlate with the value of information to these third parties, which depends on there being actionable, uncertain decisions. So if the value of information from prediction markets were to be optimized, it would make sense for these third parties to have some way of ranking what gets attention based on the decisions they face.
For instance, a whole lot of prediction markets and related tools focus heavily on sports forecasts. I highly doubt that this is why most prediction market enthusiasts get excited about these markets.
In many ways, promoting prediction markets for their positive externalities is a very strange endeavor. It’s encouraging the creation of a marketplace because of the expected creation of some extra benefit that no one directly involved in that marketplace really cares about. Perhaps instead there should be otherwise-similar ways for those who desire information from prediction groups to directly pay for that information.
One possibility that has been discussed is for prediction markets to be subsidized in specific ways. This obviously would have to be done carefully in order to not distort incentives. I don’t recall seeing this implemented successfully yet, just hearing it be proposed.
For prediction tools that aren’t markets, prizes can be given out by sponsoring parties. A naive system would be for one large sponsor to fund a ‘category’; the best few forecasters in that category would then receive the prizes. I believe something like this is done by Hypermind.
I imagine a much more sophisticated system could pay people as they make predictions. One could imagine a system that numerically estimates how much information was added to the new aggregate when a new prediction is made. Users with established track records will influence the aggregate forecast significantly more than newer ones, and thus will be rewarded proportionally. A more advanced system would also take into account estimated supply and demand; if there are some conditions under which users particularly enjoy adding forecasts, those forecasts may not need to be compensated as much, regardless of the amount or value of information contributed.
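One concrete way to make "information added to the aggregate" payable is a market-scoring-rule style mechanism: once a question resolves, each forecaster is paid the change in the aggregate's log score caused by their update. Here is a minimal sketch of that idea, for a binary question; the function name and data shapes are my own illustration, not from any existing system.

```python
import math


def sequential_rewards(aggregate_history, outcome):
    """Reward each forecaster by how much their forecast improved the
    aggregate's log score, judged after the question resolves.

    aggregate_history: list of (user, aggregate_prob) pairs, one entry
        per forecast in order, where aggregate_history[0] is the prior
        baseline (user=None) before anyone forecasted.
    outcome: True/False resolution of the binary question.
    """
    rewards = {}
    for (_, prev_p), (user, new_p) in zip(aggregate_history,
                                          aggregate_history[1:]):
        prev_score = math.log(prev_p if outcome else 1 - prev_p)
        new_score = math.log(new_p if outcome else 1 - new_p)
        # Positive if this user's update moved the aggregate toward the truth,
        # negative if it moved the aggregate away from it.
        rewards[user] = rewards.get(user, 0.0) + (new_score - prev_score)
    return rewards
```

A nice property of this scheme is that the rewards telescope: the total paid out depends only on how much the final aggregate improved on the baseline, so forecasters collectively can't extract payment without actually adding information.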
On the prize side, a sophisticated system could allow various participants to pool money for different important questions and time periods. For instance, several parties might put down a total of $10k on the question ‘What will US GDP be in 2020?’, to be paid out over the period from 2016 to 2017. Participants who put money down could be rewarded by accessing that information earlier than others or by having improved API access.
Using the system mentioned above, an actor could hypothetically build up a good reputation, and then use it to make a biased prediction in the expectation that it would influence third parties. While this would be very possible, I would expect it to require the user to generate more value than their eventual biased prediction would cost. So while some metrics may become somewhat biased, many others would have to be improved in the process. If this were still a problem, perhaps forecasters could make bets in order to demonstrate confidence (even if the bets were made in a separate application).
4. Non-falsifiable questions
Prediction tools are really a subset of estimation tools, where the requirement is that they estimate things that are eventually falsifiable. This is obviously a very important restriction, especially when bets are made. However, it’s not an essential restriction, and hypothetically prediction technologies could be used for much more general estimates.
To begin, we could imagine how very long-term questions could be forecasted. A simple model would be to have one set of forecasts for what GDP will be in 2020, and another for what the system’s aggregate forecast of 2020 GDP will be as of 2018. Then in 2018 everyone could be ranked, even though the actual event has not yet occurred.
In order for the result in 2018 to be predictive, participants would obviously have to expect future forecasts to be predictive. If participants thought everyone else would be extremely optimistic, they would be encouraged to make optimistic predictions as well. This creates a feedback loop: the more accurate the system is expected to be, the more accurate it will be (approaching the accuracy of an immediately falsifiable prediction). If there is sufficient trust in a community and aggregation system, I imagine this system could work decently, but if there isn’t, then it won’t.
In practice I would imagine that forecasters would be continually judged as future forecasts that agree or disagree with them are contributed, rather than only when definitive events happen that prove or disprove their forecasts. This means that forecasters could predict things over very long time horizons, and still be ranked based on their ability in the short term.
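This interim judging can be sketched very simply: score each long-horizon forecast by its distance to the system's later (but still pre-resolution) aggregate, a Brier-style proxy for eventual accuracy. The function below is a minimal illustration under that assumption; names and shapes are my own.

```python
def interim_scores(forecasts, later_aggregate):
    """Provisionally rank long-horizon forecasts against a later
    aggregate forecast, before the question itself resolves.

    forecasts: dict mapping user -> probability given early on
        (say, in 2016).
    later_aggregate: the system's aggregate probability for the same
        question at a later checkpoint (say, 2018).
    Returns users sorted best-first by squared distance to the
    later aggregate.
    """
    scored = {user: (p - later_aggregate) ** 2
              for user, p in forecasts.items()}
    return sorted(scored, key=scored.get)
```

If the 2018 aggregate is itself well-calibrated, ranking by this proxy approximates ranking by the eventual Brier score, which is what makes the feedback loop described above stable.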
Going further, there could be poll-like questions that may never resolve definitively, like “How many soldiers died in WW2?” or “How many DALYs would donating $10,000 to the AMF create in 2017?”. For these, individuals could propose their estimates, and the aggregation system would work roughly as normal to combine them. Even though the answers to these questions may never be known definitively, if there is built-in trust in the system, I could imagine that it could produce reasonable results.
One question here is how to evaluate the results of aggregation systems for non-falsifiable questions. I don’t imagine any direct way, but I could imagine ways of approximating it, such as asking experts how reasonable the results seem to them. While methods to aggregate results for non-falsifiable questions are themselves non-falsifiable, the alternatives are also very lacking. Given how many of these questions exist, it seems to me that they should be dealt with, perhaps using the communities and statistical infrastructure already optimized in situations that do have answers.
Each one of the above features could be described in much more detail, but I think the basic ideas are quite simple. I’m very enthusiastic about these, and would be interested in talking with anyone interested in collaborating on or just talking about similar tools. I’ve been considering attempting a system myself, but first want to get more feedback.
- The Good Judgement Project FAQ: https://www.gjopen.com/faq
- Sharpening Your Forecasting Skills, Link
- IARPA Aggregative Contingent Estimation (ACE) research program: https://www.iarpa.gov/index.php/research-programs/ace
- The Good Judgement Project: A Large Scale Test of Different Methods of Combining Expert Predictions
- “Will Trump’s favorable rate be at least 45.0% on December 31st?” on PredictIt (Link)
- Quantile Regression Averaging, one way of aggregating prediction intervals: https://en.wikipedia.org/wiki/Quantile_regression_averaging
- Hypermind: http://hypermind.com/
Sarah Constantin wrote:
Specifically, I think that LW declined from its peak by losing its top bloggers to new projects. Eliezer went to do AI research full-time at MIRI, Anna started running CFAR, various others started to work on those two organizations or others (I went to work at MetaMed). There was a sudden exodus of talent, which reduced posting frequency, and took the wind out of the sails.
One trend I dislike is that highly competent people invariably stop hanging out with the less-high-status, less-accomplished, often younger, members of their group. VIPs have a strong temptation to retreat to a "VIP island" -- which leaves everyone else short of role models and stars, and ultimately kills communities. (I'm genuinely not accusing anybody of nefarious behavior, I'm just noting a normal human pattern.) Like -- obviously it's not fair to reward competence with extra burdens, I'm not that much of a collectivist. But I think that potentially human group dynamics won't work without something like "community-spiritedness" -- there are benefits to having a community of hundreds or thousands, for instance, that you cannot accrue if you only give your time and attention to your ten best friends.
While I agree that the trend described in the second paragraph happens (and I also dislike the effects), I have another model that I think more tightly explains why the first paragraph happened. I also think that it's important to build systems with the constraint in mind that they work for the individuals inside those systems. A system that relies on trapped, guilted, or oppressed participants is a system at risk for collapse.
So in order to create great public spaces for rationalists, we don't just need to have good models of community development. Those can tell us what we need from people, but might not include how to make people fill those slots in a sustainable way.
To explain why lots of top bloggers left at once, let me present a model of adult development, drawn from George Vaillant’s modification of Erik Erikson’s model, as discussed in Triumphs of Experience. It focuses on 6 different ‘developmental tasks,’ rather than ‘stages’ or ‘levels.’ Each has success and failure conditions associated with it; a particular component of life goes either well or poorly. They’re also not explicitly hierarchical; one could achieve the “third” task before achieving the “second” task, for example, but one still notices trends in the ages at which the tasks have been completed.
Triumphs of Experience is the popsci treatment of the Harvard Grant Study of development; they took a bunch of Harvard freshmen and sophomores, subjected them to a bunch of psychological tests and interviews, and then watched them grow up over the course of ~70 years. This sort of longitudinal study gives them a very different perspective from cross-sectional studies, because they have much better pictures of what people looked like before and after.
I'll briefly list the developmental tasks, followed by quotes from Triumphs of Experience that characterize succeeding at them. The bolded ones seem most relevant:
- Identity vs. Role Diffusion: “Live independently of family of origin, and to be self-supporting.”
- Intimacy vs. Isolation: “capacity to live with another person in an emotionally attached, interdependent, and committed relationship for ten years or more.”
- Career Consolidation vs. Role Diffusion: “Commitment, compensation, contentment, and competence.”
- Generativity vs. Stagnation: “assumption of sustained responsibility for the growth and well-being of others still young enough to need care but old enough to make their own decisions.”
- Guardianship vs. Hoarding: The previous level covered one-on-one relationships; this involves more broad, future-focused sorts of endeavors. Instead of mentoring one person, one is caretaker of a library for many.
- Integrity vs. Despair: Whether one is graceful in the face of death or not. 
As mentioned before, they're ordered but the ordering isn't strict, and so you can imagine someone working on any developmental task, or multiple at once. But it seems likely that people will focus most of their attention on their earliest ongoing task.
It seems to me like several of the top bloggers were focusing on blogging because something was blocking their attempts to focus on career consolidation, and so they focused on building up a community instead. When the community was ready enough, they switched to their career--but as people were all in the same community, this happened mostly at once.
I think that “community-spiritedness” in the sense that Sarah is pointing at, in the sense of wanting to take care of the raising of new generations or collection and dissemination of ideas, comes most naturally to people working on generativity and guardianship. People work on that most easily if they’re either done with consolidating their career or their career is community support (in one form or another). If not, it seems like there’s a risk that opportunities to pursue earlier needs will seem more immediate and be easily able to distract them.
(In retrospect, I fell prey to this; I first publicly embarked on the project of LW revitalization a year ago, and then after about a month started a search for a new job, which took up the time I had been spending on LW. If doing it again, I think the main thing I would have tried to do differently is budgeting a minimally sufficient amount of time for LW while doing the job search with the rest of my time, as opposed to spending time how it felt most useful. This might have kept the momentum going and allowed me to spend something like 80% of my effort on the search and 20% on LW, rather than unintentionally spending 100% and 0%.)
1. I interpret the last task through the lens of radical acceptance, not deathism; given that one is in fact dying, whether one responds in a way that helps oneself and those around one seems like an important question, separate from the question of how much effort we should put into building a world where death comes later and less frequently than it does at present.