As previously discussed, on June 6th I received a message from jackk, a Trike Admin. He reported that the user Jiro had asked Trike to carry out an investigation into the retributive downvoting that Jiro had been subjected to. The investigation revealed that the user Eugine_Nier had downvoted over half of Jiro's comments, amounting to hundreds of downvotes.
I asked for the community's guidance on dealing with the issue, and while the matter was being discussed, I also reviewed previous discussions about mass downvoting and looked for other people who mentioned being the victims of it. I asked Jack to compile reports on several other users who mentioned having been mass-downvoted, and it turned out that Eugine was also overwhelmingly the biggest downvoter of the users David_Gerard, daenarys, falenas108, ialdabaoth, shminux, and Tenoke. While this discussion was going on, it also emerged that the user Ander had been targeted by Eugine.
I sent two messages to Eugine, requesting an explanation. I received a response today. Eugine admitted his guilt, expressing the opinion that LW's karma system was failing to carry out its purpose of keeping out weak material and that he was engaged in a "weeding" of users who he did not think displayed sufficient rationality.
Needless to say, it is not the place of individual users to unilaterally decide that someone else should be "weeded" out of the community. The Less Wrong content deletion policy contains this clause:
Harassment of individual users.
If we determine that you're e.g. following a particular user around and leaving insulting comments to them, we reserve the right to delete those comments. (This has happened extremely rarely.)
Although the wording does not explicitly mention downvoting, harassment by downvoting is still harassment. Several users have indicated that they have experienced considerable emotional anguish from the harassment, and have in some cases been discouraged from using Less Wrong at all. This is not a desirable state of affairs, to say the least.
I was originally given my moderator powers on a rather ad-hoc basis, with someone awarding mod privileges to the ten users with the highest karma at the time. The original purpose for that appointment was just to delete spam. Nonetheless, since retributive downvoting has been a clear problem for the community, I asked the community for guidance on dealing with the issue. The rough consensus of the responses seemed to authorize me to deal with the problem as I deemed appropriate.
The fact that Eugine remained quiet about his guilt until directly confronted with the evidence, despite several public discussions of the issue, is indicative of him realizing that he was breaking prevailing social norms. Eugine's actions have worsened the atmosphere of this site, and that atmosphere will remain troubled for as long as he is allowed to remain here.
Therefore, I now announce that Eugine_Nier is permanently banned from posting on LessWrong. This decision is final and will not be changed in response to possible follow-up objections.
Unfortunately, it looks like while a ban prevents posting, it does not actually block a user from casting votes. I have asked jackk to look into the matter and find a way to actually stop the downvoting. Jack indicated earlier on that it would be technically straightforward to apply a negative karma modifier to Eugine's account, and wiping out Eugine's karma balance would prevent him from casting future downvotes. Whatever the easiest solution is, it will be applied as soon as possible.
EDIT 24 July 2014: Banned users are now prohibited from voting.
Although I feel that Nick Bostrom’s new book “Superintelligence” is generally awesome and a much-needed milestone for the field, I do have one quibble: both he and Steve Omohundro appear to be more convinced than I am that an AI will naturally tend to retain its goals as it reaches a deeper understanding of the world and of itself. I’ve written a short essay on this issue from my physics perspective, available at http://arxiv.org/pdf/1409.0813.pdf.
give you, some we can't, few have been written up and even fewer in any
well-organized way. Benja or Nate might be able to expound in more detail
while I'm in my seclusion.
Very briefly, though:
The problem of utility functions turning out to be ill-defined in light of
new discoveries of the universe is what Peter de Blanc named an
"ontological crisis" (not necessarily a particularly good name, but it's
what we've been using locally).
The way I would phrase this problem now is that an expected utility
maximizer makes comparisons between quantities that have the type
"expected utility conditional on an action", which means that the AI's
utility function must be something that can assign utility-numbers to the
AI's model of reality, and these numbers must have the further property
that there is some computationally feasible approximation for calculating
expected utilities relative to the AI's probabilistic beliefs. This is a
constraint that rules out the vast majority of all completely chaotic and
uninteresting utility functions, but does not rule out, say, "make lots of
Models also have the property of being Bayes-updated using sensory
information; for the sake of discussion let's also say that models are
about universes that can generate sensory information, so that these
models can be probabilistically falsified or confirmed. Then an
"ontological crisis" occurs when the hypothesis that best fits sensory
information corresponds to a model that the utility function doesn't run
on, or doesn't detect any utility-having objects in. The example of
"immortal souls" is a reasonable one. Suppose we had an AI that had a
naturalistic version of a Solomonoff prior, a language for specifying
universes that could have produced its sensory data. Suppose we tried to
give it a utility function that would look through any given model, detect
things corresponding to immortal souls, and value those things. Even if
the immortal-soul-detecting utility function works perfectly (it would in
fact detect all immortal souls) this utility function will not detect
anything in many (representations of) universes, and in particular it will
not detect anything in the (representations of) universes we think have
most of the probability mass for explaining our own world. In this case
the AI's behavior is undefined until you tell me more things about the AI;
an obvious possibility is that the AI would choose most of its actions
based on low-probability scenarios in which hidden immortal souls existed
that its actions could affect. (Note that even in this case the utility
function is stable!)
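Since the behavior in this undefined case is easy to misread, here is a toy sketch in Python. Everything in it (the soul detector, the two hypotheses, the action names) is my own illustrative construction, not something from the original discussion: the utility function runs on every model, but it detects nothing in the best-fitting one, so expected-utility comparisons are dominated by the low-probability hypothesis in which souls exist.

```python
# Toy illustration (my own construction, not from the original text): a
# utility function that "runs on" any model but detects utility-having
# objects only in ontologies that contain souls.

def soul_detecting_utility(world_model):
    """Utility = number of 'soul' objects the model contains (0 if none)."""
    return sum(1 for obj in world_model["objects"] if obj == "soul")

# Post-update beliefs: nearly all probability mass is on a purely
# physical model in which the detector finds nothing.
hypotheses = [
    ({"objects": ["atom", "atom", "atom"]}, 0.99),  # best-fitting model
    ({"objects": ["soul", "atom"]}, 0.01),          # unlikely model with souls
]

def expected_utility(act):
    """EU of an action: sum over models of P(model) * U(model after action)."""
    return sum(p * soul_detecting_utility(act(model)) for model, p in hypotheses)

def do_nothing(model):
    return model

def nurture_souls(model):
    # Hypothetical action that helps souls, but only in models that have any.
    objs = list(model["objects"])
    if "soul" in objs:
        objs.append("soul")
    return {"objects": objs}

# Behavior is driven almost entirely by the 1% soul-containing hypothesis.
assert abs(expected_utility(do_nothing) - 0.01) < 1e-12
assert abs(expected_utility(nurture_souls) - 0.02) < 1e-12
```

Note that the utility function itself never changes here; only the agent's behavior shifts toward the scenarios its actions could still affect.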
Since we don't know the final laws of physics and could easily be
surprised by further discoveries in the laws of physics, it seems pretty
clear that we shouldn't be specifying a utility function over exact
physical states relative to the Standard Model, because if the Standard
Model is even slightly wrong we get an ontological crisis. Of course
there are all sorts of extremely good reasons we should not try to do this
anyway, some of which are touched on in your draft; there just is no
simple function of physics that gives us something good to maximize. See
also Complexity of Value, Fragility of Value, indirect normativity, the
whole reason for a drive behind CEV, and so on. We're almost certainly
going to be using some sort of utility-learning algorithm, the learned
utilities are going to bind to modeled final physics by way of modeled
higher levels of representation which are known to be imperfect, and we're
going to have to figure out how to preserve the model and learned
utilities through shifts of representation. E.g., the AI discovers that
humans are made of atoms rather than being ontologically fundamental
humans, and furthermore the AI's multi-level representations of reality
evolve to use a different sort of approximation for "humans", but that's
okay because our utility-learning mechanism also says how to re-bind the
learned information through an ontological shift.
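As a minimal sketch of what such re-binding might look like, here is a toy Python fragment. The detector functions and configuration names are hypothetical scaffolding of my own, not a real proposal: learned utility numbers attach to a high-level concept through a per-ontology detector, and the ontological shift is survived by supplying a new detector while the learned numbers carry over unchanged.

```python
# Toy sketch (my own construction, not a real proposal): learned utilities
# bind to the high-level concept "human" through a per-ontology detector.
# Surviving an ontological shift means supplying a new detector while the
# learned utility numbers carry over unchanged.

learned_utilities = {"human": 10.0}   # concept-level, physics-agnostic

def utility(model, detect):
    """Sum learned utility over the concept instances detected in the model."""
    return sum(learned_utilities[c] for c in detect(model) if c in learned_utilities)

# Old ontology: humans are ontologically fundamental objects.
old_model = ["human", "rock", "human"]
def detect_old(model):
    return [x for x in model if x == "human"]

# New ontology: only atoms exist; a "human" is a recognized configuration.
# HUMAN_CONFIGS plays the role of the hypothetical re-binding rule.
new_model = [("atoms", "config_A"), ("atoms", "config_B"), ("atoms", "config_A")]
HUMAN_CONFIGS = {"config_A"}
def detect_new(model):
    return ["human" for _, config in model if config in HUMAN_CONFIGS]

# The same learned utilities survive the shift of representation:
assert utility(old_model, detect_old) == 20.0
assert utility(new_model, detect_new) == 20.0
```

The hard open problem, of course, is where the new detector comes from; in this sketch it is simply handed to the agent.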
This sorta thing ain't going to be easy which is the other big reason to
start working on it well in advance. I point out however that this
doesn't seem unthinkable in human terms. We discovered that brains are
made of neurons but were nonetheless able to maintain an intuitive grasp
on what it means for them to be happy, and we don't throw away all that
info each time a new physical discovery is made. The kind of cognition we
want does not seem inherently self-contradictory.
Three other quick remarks:
*) The Omohundrian/Yudkowskian argument is not that we can take an arbitrary
stupid young AI and it will be smart enough to self-modify in a way that
preserves its values, but rather that most AIs that don't self-destruct
will eventually end up at a stable fixed-point of coherent
consequentialist values. This could easily involve a step where, e.g., an
AI that started out with a neural-style delta-rule policy-reinforcement
learning algorithm, or an AI that started out as a big soup of
self-modifying heuristics, is "taken over" by whatever part of the AI
first learns to do consequentialist reasoning about code. But this
process doesn't repeat indefinitely; it stabilizes when there's a
consequentialist self-modifier with a coherent utility function that can
precisely predict the results of self-modifications. The part where this
does happen to an initial AI that is under this threshold of stability is
a big part of the problem of Friendly AI and it's why MIRI works on tiling
agents and so on!
*) Natural selection is not a consequentialist, nor is it the sort of
consequentialist that can sufficiently precisely predict the results of
modifications that the basic argument should go through for its stability.
It built humans to be consequentialists that would value sex, not value
inclusive genetic fitness, and not value being faithful to natural
selection's optimization criterion. Well, that's dumb, and of course the
result is that humans don't optimize for inclusive genetic fitness.
Natural selection was just stupid like that. But that doesn't mean
there's a generic process whereby an agent rejects its "purpose" in the
light of exogenously appearing preference criteria. Natural selection's
anthropomorphized "purpose" in making human brains is just not the same as
the cognitive purposes represented in those brains. We're not talking
about spontaneous rejection of internal cognitive purposes based on their
causal origins failing to meet some exogenously-materializing criterion of
validity. Our rejection of "maximize inclusive genetic fitness" is not an
exogenous rejection of something that was explicitly represented in us,
that we were explicitly being consequentialists for. It's a rejection of
something that was never an explicitly represented terminal value in the
first place. Similarly the stability argument for sufficiently advanced
self-modifiers doesn't go through a step where the successor form of the
AI reasons about the intentions of the previous step and respects them
apart from its constructed utility function. So the lack of any universal
preference of this sort is not a general obstacle to stable self-modification.
*) The case of natural selection does not illustrate a universal
computational constraint, it illustrates something that we could
anthropomorphize as a foolish design error. Consider humans building Deep
Blue. We built Deep Blue to attach a sort of default value to queens and
central control in its position evaluation function, but Deep Blue is
still perfectly able to sacrifice queens and central control alike if the
position reaches a checkmate thereby. In other words, although an agent
needs crystallized instrumental goals, it is also perfectly reasonable to
have an agent which never knowingly sacrifices the terminally defined
utilities for the crystallized instrumental goals if the two conflict;
indeed "instrumental value of X" is simply "probabilistic belief that X
leads to terminal utility achievement", which is sensibly revised in the
presence of any overriding information about the terminal utility. To put
it another way, in a rational agent, the only way a loose generalization
about instrumental expected-value can conflict with and trump terminal
actual-value is if the agent doesn't know it, i.e., it does something that
it reasonably expected to lead to terminal value, but it was wrong.
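The Deep Blue point can be made concrete with a toy evaluator (my own sketch, not Deep Blue's actual evaluation function): crystallized instrumental values guide ordinary position comparisons, but a terminally defined outcome overrides them whenever the two conflict.

```python
# Toy evaluator (my own sketch, not Deep Blue's actual function).
# Crystallized instrumental goals are default weights; the terminally
# defined outcome (checkmate) trumps them whenever they conflict.

INSTRUMENTAL_VALUES = {"queen": 9, "center_control": 1}  # cached proxies

def evaluate(position):
    """Terminal value dominates: a known won/lost game ignores material."""
    if position.get("checkmate_delivered"):
        return float("inf")       # terminal utility: the game is won
    if position.get("checkmated"):
        return float("-inf")
    # Otherwise, fall back on instrumental proxies for "leads to winning".
    return sum(weight * position.get(feature, 0)
               for feature, weight in INSTRUMENTAL_VALUES.items())

# The queen is knowingly sacrificed when doing so terminally wins:
keep_queen = {"queen": 1, "center_control": 2}      # instrumental score 11
sac_queen_mate = {"queen": 0, "checkmate_delivered": True}
assert max([keep_queen, sac_queen_mate], key=evaluate) is sac_queen_mate
```

The instrumental weights here are just cached stand-ins for "probabilistic belief that X leads to terminal utility achievement", and they are discarded the moment the position itself settles the terminal question.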
This has been very off-the-cuff and I think I should hand this over to
Nate or Benja if further replies are needed, if that's all right.
[meta] Future moderation and investigation of downvote abuse cases, or, I don't want to deal with this stuff
Since the episode with Eugine_Nier, I have received three private messages from different people asking me to investigate various cases of suspected mass downvoting. And to be quite honest, I don't want to deal with this. Eugine's case was relatively clear-cut, since he had engaged in systematic downvoting on a massive scale, but the new situations are a lot fuzzier and I'm not sure what exactly the rules should be (what counts as a permitted use of the downvote system, and what doesn't?).
At least one person has also privately contacted me and offered to carry out moderator duties if I don't want them, but even if I told them yes (on what basis? why them and not someone else?), I don't know what kind of policy I should tell them to enforce. I only happened to be appointed a moderator because I was in the list of top 10 posters at a particular time, and I don't feel like I should have any particular authority to make the rules. Nor do I feel like I have any good idea of what the rules should be, or who would be the right person to enforce them.
In any case, I don't want to be doing this job, nor do I particularly feel like being responsible for figuring out who should, or how, or what the heck. I've already started visiting LW less often because I dread having new investigation requests to deal with. So if you folks could be so kind as to figure it out without my involvement? If there's a clear consensus that someone in particular should deal with this, I can give them mod powers, or something.
Last month I saw this post: http://lesswrong.com/lw/kbc/meta_the_decline_of_discussion_now_with_charts/ addressing whether the discussion on LessWrong was in decline. As a relatively new user who had only just started to post comments, my reaction was: “I hope that LessWrong isn’t in decline, because the sequences are amazing, and I really like this community. I should try to write a couple articles myself and post them! Maybe I could do an analysis/summary of certain sequences posts, and discuss how they had helped me to change my mind”. I started working on writing an article.
Then I logged into LessWrong and saw that my Karma value was roughly half of what it had been the day before. Previously I hadn’t really cared much about Karma, aside from whatever micro-utilons of happiness it provided to see that the number slowly grew because people generally liked my comments. Or at least, I thought I didn’t really care, until my lizard brain reflexes reacted to what it perceived as an assault on my person.
Had I posted something terrible and unpopular that had been massively downvoted during the several days since my previous login? No, in fact my ‘past 30 days’ Karma was still positive. Rather, it appeared that everything I had ever posted to LessWrong now had a -1 on it instead of a 0. Of course, my loss probably pales in comparison to that of other, more prolific posters who I have seen report this behavior.
So what controversial subject must I have commented on in order to trigger this assault? Well, let’s see: in the past week I had asked if anyone had opinions on good software engineering interview questions I could ask a candidate. I posted in http://lesswrong.com/lw/kex/happiness_and_children/ that I was happy not to have children. And finally, here is what appears to me to be by far the most promising candidate: in http://lesswrong.com/r/discussion/lw/keu/separating_the_roles_of_theory_and_direct/ I replied to a comment about global warming data, stating that I routinely saw headlines about data supporting global warming.
Here is our scenario: a new user, attempting to participate on a message board that values empiricism and rationality, posted that the evidence supports climate change being real. (Wow, really rocking the boat here!) Then, apparently in an effort to ‘win’ the discussion by silencing opposition, someone went and downvoted every comment this user had ever made on the site. Apparently they would like to see LessWrong be a bastion of empiricism and rationality and *climate change denial* instead? And the way to achieve this is not to have a fair and rational discussion of the existing empirical data, but rather to simply Karmassassinate anyone who would oppose them?
Here is my hypothesis: The continuing problem of karma downvote stalkers is contributing to the decline of discussion on the site. I definitely feel much less motivated to try and contribute anything now, and I have been told by multiple other people at LessWrong meetings things such as “I used to post a lot on LessWrong, but then I posted X, and got mass downvoted, so now I only comment on Yvain’s blog”. These anecdotes are, of course, only very weak evidence to support my claim. I wish I could provide more, but I will have to defer to any readers who can supply more.
Perhaps this post will simply trigger more retribution, or maybe it will trigger an outpouring of support, or perhaps it will just be dismissed by people saying I should’ve posted it to the weekly discussion thread instead. Whatever the outcome, rather than meekly leaving LessWrong and letting my 'stalker' win, I decided to open a discussion about the issue. Thank you!
Some time back, I wrote that I was unwilling to continue with investigations into mass downvoting, and asked people for suggestions on how to deal with them from now on. The top-voted proposal in that thread suggested making Viliam_Bur a moderator, and Viliam graciously accepted the nomination. So I have given him moderator privileges and also put him in contact with jackk, who provided me with the information necessary to deal with the previous cases. Future requests for mass downvote investigations should be directed to Viliam.
Thanks a lot for agreeing to take up this responsibility, Viliam! It's not an easy one, but I'm very grateful that you're willing to do it. Please post a comment here so that we can reward you with some extra upvotes. :)
All of these things I mentioned in the most recent open thread, but since the first one is directly relevant and the comment where I posted it is somewhat hard to find, I figured I'd make a post too.
Custom Comment Highlights
NOTE FOR FIREFOX USERS: this contained a bug, now squashed, that caused the list of comments not to be automatically populated (depending on your version of Firefox). I suggest reinstalling. Sorry, no automatic updates unless you use the Chrome extension (though with >50% probability there will be no further updates).
You know how the highlight for new comments on Less Wrong threads disappears if you reload the page, making it difficult to find those comments again? Here is a userscript you can install to fix that (provided you're on Firefox or Chrome). Once installed, you can set the date after which comments are highlighted, and easily scroll to new comments. See screenshots. Installation is straightforward (especially for Chrome, since I made an extension as well).
Bonus: works even if you're logged out or don't have an account, though you'll have to set the highlight time manually.
Delay Before Commenting
Slate Star Codex Comment Highlighter
Note for LW Admins / Yvain
From Toby Ord:
Tool-assisted speedruns (TAS) are when people take a game and play it frame by frame, effectively providing super reflexes and forethought, where they can spend a day deciding what to do in the next 1/60th of a second if they wish. There are some very extreme examples of this, showing what can be done if you really play a game perfectly. For example, this video shows how to win Super Mario Bros 3 in 11 minutes. It shows how different optimal play can be from normal play. In particular, on level 8-1, it gains 90 extra lives by a sequence of amazing jumps.
Other TAS runs get more involved and start exploiting subtle glitches in the game. For example, this page talks about speed running NetHack, using a lot of normal tricks, as well as luck manipulation (exploiting the RNG) and exploiting a dangling pointer bug to rewrite parts of memory.
Though there are limits to what AIs could do with sheer speed, it's interesting that great performance can be achieved with speed alone, that this allows different strategies from usual ones, and that it allows the exploitation of otherwise unexploitable glitches and bugs in the setup.
Jason Mitchell is [edit: has been] the John L. Loeb Associate Professor of the Social Sciences at Harvard. He has won the National Academy of Science's Troland Award as well as the Association for Psychological Science's Janet Taylor Spence Award for Transformative Early Career Contribution.
Here, he argues against the principle of replicability of experiments in science. Apparently, it's disrespectful, and presumptively wrong.
Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value.
Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way. Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.
Three standard rejoinders to this critique are considered and rejected. Despite claims to the contrary, failed replications do not provide meaningful information if they closely follow original methodology; they do not necessarily identify effects that may be too small or flimsy to be worth studying; and they cannot contribute to a cumulative understanding of scientific phenomena.
Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output.
The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their “degrees of freedom,” for example, by specifying designs in advance.
Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims.
This is why we can't have social science. Not because the subject is not amenable to the scientific method -- it obviously is. People are conducting controlled experiments and other people are attempting to replicate the results. So far, so good. Rather, the problem is that at least one celebrated authority in the field hates that, and would prefer much, much more deference to authority.
This paper, or more often the New Scientist's exposition of it, is being discussed online and is rather topical here. In a nutshell, stimulating one small but central area of the brain reversibly rendered one epilepsy patient unconscious without disrupting wakefulness. Impressively, this phenomenon has apparently been hypothesized before, just never tested (because it's hard and usually unethical). A quote from the New Scientist article (emphasis mine):
One electrode was positioned next to the claustrum, an area that had never been stimulated before.
When the team zapped the area with high frequency electrical impulses, the woman lost consciousness. She stopped reading and stared blankly into space, she didn't respond to auditory or visual commands and her breathing slowed. As soon as the stimulation stopped, she immediately regained consciousness with no memory of the event. The same thing happened every time the area was stimulated during two days of experiments (Epilepsy and Behavior, doi.org/tgn).
To confirm that they were affecting the woman's consciousness rather than just her ability to speak or move, the team asked her to repeat the word "house" or snap her fingers before the stimulation began. If the stimulation was disrupting a brain region responsible for movement or language she would have stopped moving or talking almost immediately. Instead, she gradually spoke more quietly or moved less and less until she drifted into unconsciousness. Since there was no sign of epileptic brain activity during or after the stimulation, the team is sure that it wasn't a side effect of a seizure.
If confirmed, this hints at several interesting points. For example, a complex enough brain is not sufficient for consciousness; a sort of command-and-control structure, even a relatively small one, is required as well. The low-consciousness state of late-stage dementia sufferers might be due to damage specifically to the claustrum area, not just overall brain deterioration. The researchers speculate that stimulating the area in vegetative-state patients might help "push them out of this state". From an AI research perspective, understanding the difference between wakefulness and consciousness might be interesting, too.
In this post, I list six metaethical possibilities that I think are plausible, along with some arguments or plausible stories about how/why they might be true, where that's not obvious. A lot of people seem fairly certain in their metaethical views, but I'm not and I want to convey my uncertainty as well as some of the reasons for it.
1. Most intelligent beings in the multiverse share similar preferences. This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts. There are occasional paperclip maximizers that arise, but they are a relatively minor presence or tend to be taken over by more sophisticated minds.
2. Facts about what everyone should value exist, and most intelligent beings have a part of their mind that can discover moral facts and find them motivating, but those parts don't have full control over their actions. These beings eventually build or become rational agents with values that represent compromises between different parts of their minds, so most intelligent beings end up having shared moral values along with idiosyncratic values.
3. There aren't facts about what everyone should value, but there are facts about how to translate non-preferences (e.g., emotions, drives, fuzzy moral intuitions, circular preferences, non-consequentialist values, etc.) into preferences. These facts may include, for example, what is the right way to deal with ontological crises. The existence of such facts seems plausible because if there were facts about what is rational (which seems likely) but no facts about how to become rational, that would seem like a strange state of affairs.
4. None of the above facts exist, so the only way to become or build a rational agent is to just think about what preferences you want your future self or your agent to hold, until you make up your mind in some way that depends on your psychology. But at least this process of reflection is convergent at the individual level, so each person can reasonably call the preferences that they endorse after reaching reflective equilibrium their morality or real values.
5. None of the above facts exist, and reflecting on what one wants turns out to be a divergent process (e.g., it's highly sensitive to initial conditions, like whether or not you drank a cup of coffee before you started, or to the order in which you happen to encounter philosophical arguments). There are still facts about rationality, so at least agents that are already rational can call their utility functions (or the equivalent of utility functions in whatever decision theory ends up being the right one) their real values.
6. There aren't any normative facts at all, including facts about what is rational. For example, it turns out there is no one decision theory that does better than every other decision theory in every situation, and there is no obvious or widely-agreed-upon way to determine which one "wins" overall.
(Note that for the purposes of this post, I'm concentrating on morality in the axiological sense (what one should value) rather than in the sense of cooperation and compromise. So alternative 1, for example, is not intended to include the possibility that most intelligent beings end up merging their preferences through some kind of grand acausal bargain.)
It may be useful to classify these possibilities using labels from academic philosophy. Here's my attempt: 1. realist + internalist 2. realist + externalist 3. relativist 4. subjectivist 5. moral anti-realist 6. normative anti-realist. (A lot of debates in metaethics concern the meaning of ordinary moral language, for example whether they refer to facts or merely express attitudes. I mostly ignore such debates in the above list, because it's not clear what implications they have for the questions that I care about.)
One question LWers may have is: where does Eliezer's metaethics fall within this schema? Eliezer says that there are moral facts about what values every intelligence in the multiverse should have, but that only humans are likely to discover these facts and be motivated by them. To me, Eliezer's use of language is counterintuitive, and since it seems plausible that there are facts about what everyone should value (or about how each person should translate their non-preferences into preferences) that most intelligent beings can discover and be at least somewhat motivated by, I'm reserving the phrase "moral facts" for these. In my language, I think 3 or maybe 4 is probably closest to Eliezer's position.
In early 2000, I registered my personal domain name weidai.com, along with a couple others, because I was worried that the small (sole-proprietor) ISP I was using would go out of business one day and break all the links on the web to the articles and software that I had published on my "home page" under its domain. Several years ago I started getting offers, asking me to sell the domain, and now they're coming in almost every day. A couple of days ago I saw the first six figure offer ($100,000).
In early 2009, someone named Satoshi Nakamoto emailed me personally with an announcement that he had published version 0.1 of Bitcoin. I didn't pay much attention at the time (I was more interested in Less Wrong than Cypherpunks at that point), but then in early 2011 I saw a LW article about Bitcoin, which prompted me to start mining it. I wrote at the time, "thanks to the discussion you started, I bought a Radeon 5870 and started mining myself, since it looks likely that I can at least break even on the cost of the card." That approximately $200 investment (plus maybe another $100 in electricity) is also worth around six figures today.
Clearly, technological advances can sometimes create gold rush-like situations (i.e., first-come-first-serve opportunities to make truly extraordinary returns with minimal effort or qualifications). And it's possible to stumble into them without even trying. Which makes me think, maybe we should be trying? I mean, if only I had been looking for possible gold rushes, I could have registered a hundred domain names optimized for potential future value, rather than the few that I happened to personally need. Or I could have started mining Bitcoins a couple of years earlier and be a thousand times richer.
I wish I was already an experienced gold rush spotter, so I could explain how best to do it, but as indicated above, I participated in the ones that I did more or less by luck. Perhaps the first step is just to keep one's eyes open, and to keep in mind that tech-related gold rushes do happen from time to time and they are not impossibly difficult to find. What other ideas do people have? Are there other past examples of tech gold rushes besides the two that I mentioned? What might be some promising fields to look for them in the future?
If you are a gay male then you’ve probably worried at one point about sexually transmitted diseases. Indeed, men who have sex with men have some of the highest prevalence of many of these diseases. And if you’re not a gay male, you’ve probably still thought about STDs at one point. But how much should you worry? There are many organizations and resources that will tell you to wear a condom, but very few will tell you the relative risks of wearing a condom vs not. I’d like to provide a concise summary of the risks associated with gay male sex and the extent to which these risks can be reduced. (See Mark Manson’s guide for a similar resource for heterosexual sex.) I will do so by first giving some information about each disease, including its prevalence among gay men. Most of this data will come from the US, but the US actually has an unusually high prevalence for many diseases. Certainly HIV is much less common in many parts of Europe. I will end with a case study of HIV, which will include an analysis of the probabilities of transmission broken down by the nature of the sex act and a discussion of risk reduction techniques.
When dealing with risks associated with sex, there are a few relevant parameters. The most common is the prevalence – the proportion of people in the population that have the disease. Since you can only get a disease from someone who has it, the prevalence is arguably the most important statistic. There are two more relevant statistics – the per-act infectivity (the chance of contracting the disease after having sex once) and the per-partner infectivity (the chance of contracting the disease after having sex with one partner for the duration of the relationship). As it turns out, the latter two probabilities are very difficult to calculate. I only obtained those values for HIV. It is especially difficult to determine per-act risks for specific types of sex acts since many MSM engage in a variety of acts with multiple partners. Nevertheless, estimates do exist and will be explored in detail in the HIV case study section.
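As a rough illustration of how per-act and per-partner infectivity relate, a common simplification treats each act as an independent trial with a constant per-act risk. Real infectivity varies between partners and over time, so this is a toy model rather than an estimate; the function name is my own, and the 1.4% figure is the upper per-act estimate for receptive anal sex quoted in the case study below.

```python
def cumulative_risk(per_act_risk, n_acts):
    """Probability of at least one transmission over n_acts,
    assuming independent acts with a constant per-act risk
    (a simplification; real infectivity varies)."""
    return 1 - (1 - per_act_risk) ** n_acts

# 1.4% per-act risk: the upper estimate for receptive anal sex
# with an HIV-positive partner, from the case study below
for n in (1, 10, 100):
    print(n, round(cumulative_risk(0.014, n), 3))
```

Under this model, ten exposures carry roughly a 13% cumulative risk and a hundred exposures roughly 76%, which is why per-partner infectivity can be much higher than per-act infectivity.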
HIV
Prevalence: Between 13% and 28%. My guess is about 13%.
The most infamous of the STDs. There is no cure but it can be managed with anti-retroviral therapy. A commonly reported statistic is that 19% of MSM (men who have sex with men) in the US are HIV positive (1). For black MSM, this number was 28% and for white MSM this number was 16%. This is likely an overestimate, however, since the sample used was gay men who frequent bars and clubs. My estimate of 13% comes from CDC's total HIV prevalence in gay men of 590,000 (2) and their data suggesting that MSM comprise 2.9% of men in the US (3).
Gonorrhea
Prevalence: Between 9% and 15% in the US
This disease affects the throat and the genitals but it is treatable with antibiotics. The CDC estimates 15.5% prevalence (4). However, this is likely an overestimate since the sample used was gay men in health clinics. Another sample (in San Francisco health clinics) had a pharyngeal gonorrhea prevalence of 9% (5).
Syphilis
Prevalence: 0.825% in the US
My estimate was calculated in the same manner as my estimate for HIV, using the CDC's data (6). Syphilis is transmittable by oral and anal sex (7) and causes genital sores that may look harmless at first (8). Syphilis is curable with penicillin; however, the presence of sores increases the infectivity of HIV.
Herpes (HSV-1 and HSV-2)
Prevalence: HSV-2 - 18.4% (9); HSV-1 - ~75% based on Australian data (10)
This disease is mostly asymptomatic and can be transmitted through oral or anal sex. Sometimes sores will appear and they will usually go away with time. For the same reason as syphilis, herpes can increase the chance of transmitting HIV. The estimate for HSV-1 is probably too high. Snowball sampling was used and most of the men recruited were heavily involved in organizations for gay men and were sexually active in the past 6 months. Also half of them reported unprotected anal sex in the past six months. The HSV-2 sample came from a random sample of US households (11).
Chlamydia
Prevalence: Rectal - 0.5% - 2.3%; Pharyngeal - 3.0% - 10.5% (12)
Like herpes, it is often asymptomatic - perhaps as few as 10% of infected men report symptoms. It is curable with antibiotics.
HPV
Prevalence: 47.2% (13)
This disease is incurable (though a vaccine exists for men and women) but usually asymptomatic. It is capable of causing cancers of the penis, throat and anus. Oddly, there are no common tests for HPV, in part because there are many strains (over 100), most of which are relatively harmless. Sometimes it goes away on its own (14). The prevalence rate was oddly difficult to find; the number I cited came from a sample of men from Brazil, Mexico and the US.
Case Study of HIV transmission; risks and strategies for reducing risk
IMPORTANT: None of the following figures should be generalized to other diseases. Many of these numbers are not even the same order of magnitude as the numbers for other diseases. For example, HIV is especially difficult to transmit via oral sex, but Herpes can very easily be transmitted.
Unprotected Oral Sex per-act risk (with a positive partner or partner of unknown serostatus):
Non-zero but very small. Best guess: 0.03% without a condom (15)
Unprotected Anal sex per-act risk (with positive partner):
Receptive: 0.82% - 1.4% (16) (17)
Insertive Circumcised: 0.11% (18)
Insertive Uncircumcised: 0.62% (18)
Protected Anal sex per-act risk (with positive partner):
Estimates range from two times lower to twenty times lower (16) (19), and the risk is highly dependent on the slippage and breakage rate.
Contracting HIV from oral sex is very rare. In one study, 67 men reported performing oral sex on at least one HIV positive partner and none were infected (20). However, transmission is possible (15). Because instances of oral transmission of HIV are so rare, the risk is hard to calculate and should be taken with a grain of salt. The number cited was obtained from a group of individuals who were either HIV positive or at high risk for HIV. The per-act risk with a positive partner is therefore probably somewhat higher.
Note that different HIV positive men have different levels of infectivity hence the wide range of values for per-act probability of transmission. Some men with high viral loads (the amount of HIV in the blood) may have an infectivity of greater than 10% per unprotected anal sex act (17).
Risk reducing strategies
Choosing sex acts that have a lower transmission rate (oral sex, protected insertive anal sex, non-insertive) is one way to reduce risk. Monogamy, testing, antiretroviral therapy, PEP and PrEP are five other ways.
Testing your partner / Monogamy
If your partner tests negative then they are very unlikely to have HIV. There is a 0.047% chance of being HIV positive if they tested negative using a blood test and a 0.29% chance of being HIV positive if they tested negative using an oral test. If they did further tests then the chance is even lower. (See the section after the next paragraph for how these numbers were calculated).
So if your partner tests negative, the real danger is not the test giving an incorrect result. The danger is that your partner was exposed to HIV before the test, but his body had not started to make antibodies yet. Since this can take weeks or months, it is possible for your partner who tested negative to still have HIV even if you are both completely monogamous.
For tests, the sensitivity - the probability that an HIV positive person will test positive - is 99.68% for blood tests (21) and 98.03% for oral tests. The specificity - the probability that an HIV negative person will test negative - is 99.74% for oral tests and 99.91% for blood tests. Hence the probability that a person who tested negative will actually be positive is:
P(Positive | tested negative) = P(Positive)*(1-sensitivity)/(P(Negative)*specificity + P(Positive)*(1-sensitivity)) = 0.047% for blood test, 0.29% for oral test
Where P(Positive) = Prevalence of HIV, I estimated this to be 13%.
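For readers who want to check these numbers, here is a minimal Python sketch of the same Bayes calculation (the function and variable names are my own; the Oraquick figures come from the home-test paragraph below):

```python
def p_positive_given_negative_test(prevalence, sensitivity, specificity):
    """Bayes' rule: probability of actually being HIV positive,
    given a negative test result."""
    false_neg = prevalence * (1 - sensitivity)   # positive but tests negative
    true_neg = (1 - prevalence) * specificity    # negative and tests negative
    return false_neg / (true_neg + false_neg)

prev = 0.13  # estimated HIV prevalence among MSM, from the section above

print(p_positive_given_negative_test(prev, 0.9968, 0.9991))  # blood test, ~0.0005
print(p_positive_given_negative_test(prev, 0.9803, 0.9974))  # oral test, ~0.0029
print(p_positive_given_negative_test(prev, 0.9364, 0.9987))  # Oraquick, ~0.0094
```

Note how strongly the answer depends on sensitivity: the home test's lower sensitivity raises the residual risk by roughly a factor of twenty compared to the blood test, even though both have excellent specificity.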
However, according to a writer for About.com (22) - a doctor who works with HIV - there are often multiple tests which drive the sensitivity up to 99.997%.
Oraquick is an HIV test that you can purchase online and do yourself at home. It costs $39.99 for one kit. The sensitivity is 93.64% and the specificity is 99.87% (23). The probability that someone who tested negative will actually be HIV positive is 0.94%, assuming a 13% prevalence for HIV. The same danger mentioned above applies - if the infection occurred recently, the test would not detect it.
Antiretroviral therapy
Highly active anti-retroviral therapy (HAART), when successful, can reduce the viral load – the amount of HIV in the blood – to low or undetectable levels. Baggaley et al. (17) reports that in heterosexual couples, there have been some models relating viral load to infectivity. She applies these models to MSM and reports that the per-act risk for unprotected anal sex with a positive partner should be 0.061%. However, she notes that different models produce very different results, so this number should be taken with a grain of salt.
Post-Exposure Prophylaxis (PEP)
A last resort if you think you were exposed to HIV is to undergo post-exposure prophylaxis within 72 hours. Antiretroviral drugs are taken for about a month in the hopes of preventing the HIV from infecting any cells. In one case-control study, some health care workers who were exposed to HIV were given PEP and some were not (this was not under the control of the experimenters). Workers that contracted HIV were less likely to have been given PEP, with an odds ratio of 0.19 (24). I don’t know whether PEP is equally effective at mitigating risk from other sources of exposure.
Pre-Exposure Prophylaxis (PrEP)
This is a relatively new risk reduction strategy. Instead of taking anti-retroviral drugs after exposure, you take anti-retroviral drugs every day in order to prevent HIV infection. I could not find a per-act risk, but in a randomized controlled trial, MSM who took PrEP were less likely to become infected with HIV than men who did not (relative reduction - 41%). The average number of sex partners was 18. For men who were more consistent and had a 90% adherence rate, the relative reduction was better - 73%. (25) (26).
On a recent trip to Ireland, I gave a talk on tactics for having better arguments (video here). There's plenty in the video that's been discussed on LW before (Ideological Turing Tests and other reframes), but I thought I'd highlight one other class of trick I use to have more fruitful disagreements.
It's hard, in the middle of a fight, to remember, recognize, and defuse common biases, rhetorical tricks, emotional triggers, etc. I'd rather cheat than solve a hard problem, so I put a lot of effort into shifting disagreements into environments where it's easier for me and my opposite-number to reason and argue well, instead of relying on willpower. Here's a recent example of the kind of shift I like to make:
A couple months ago, a group of my friends were fighting about the Brendan Eich resignation on Facebook. The posts were showing up fast; everyone was, presumably, on the edge of their seats, fueled by adrenaline, and alone at their various computers. It’s a hard place to have a charitable, thoughtful debate.
I asked my friends (since they were mostly DC based) if they’d be amenable to pausing the conversation and picking it up in person. I wanted to make the conversation happen in person, not in front of an audience, and in a format that let people speak for longer and ask questions more easily. If so, I promised to bake cookies for the ultimate donnybrook.
My friends probably figured that I offered cookies as a bribe to get everyone to change venues, and they were partially right. But my cookies had another strategic purpose. When everyone arrived, I was still in the process of taking the cookies out of the oven, so I had to recruit everyone to help me out.
“Alice, can you pour milk for people?”
“Bob, could you pass out napkins?”
“Eve, can you greet people at the door while I’m stuck in the kitchen with potholders on?”
Before we could start arguing, people on both sides of the debate were working on taking care of each other and asking each others’ help. Then, once the logistics were set, we all broke bread (sorta) with each other and had a shared, pleasurable experience. Then we laid into each other.
Sharing a communal experience of mutual service didn’t make anyone pull their intellectual punches, but I think it made us more patient with each other and less anxiously fixated on defending ourselves. Sharing food and seating helped remind us of the relationships we enjoyed with each other, and why we cared about probing the ideas of this particular group of people.
I prefer to fight with people I respect, who I expect will fight in good faith. It's hard to remember that's what I'm doing if I argue with them in the same forums (comment threads, fb, etc) that I usually see bad fights. An environment shift and other compensatory gestures makes it easier to leave habituated errors and fears at the door.
I have returned from a particularly fruitful Google search, with unexpected results.
My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.
This might be just lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics and other cultural studies; intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology; and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, and the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.
Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.
So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?
Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.
- It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
- Auditory information is retained more easily, so making thoughts auditory helps remember them later.
- It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
- System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
- It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.
All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.
Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.
I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious because talking to yourself is an early warning sign of schizophrenia, and is frequent in dementia. But in those cases, it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself when around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway so that hasn't been an issue really.
So, what do you think? Useful?
TL;DR: Building humor into your habits for spotting and correcting errors makes the fix more enjoyable, easier to talk about and receive social support, and limits the danger of a contempt spiral.
One of the most reliably bad decisions I've made on a regular basis is the choice to stay awake (well, "awake") and on the internet past the point where I can get work done, or even have much fun. I went through a spell where I even fell asleep on the couch more nights than not, unable to muster the will or judgement to get up and go downstairs to bed.
I could remember (even sometimes in the moment) that this was a bad pattern, but, the more tired I was, the more tempting it was to think that I should just buckle down and apply more willpower to be more awake and get more out of my computer time. Going to bed was a solution, but it was hard for it not to feel (to my sleepy brain and my normal one) like a bit of a cop out.
Only two things helped me really keep this failure mode in check. One was setting a hard bedtime (and beeminding it) as part of my sacrifice for Advent. But the other key tool (which has lasted me long past Advent) is the gif below.
The poor kid struggling to eat his ice cream cone, even in the face of his exhaustion, is hilarious. And not too far off the portrait of me around 2am scrolling through my Feedly.
Thinking about how stupid or ineffective or insufficiently strong-willed I'm being makes it hard for me to do anything that feels like a retreat from my current course of action. I want to master the situation and prove I'm stronger. But catching on to the fact that my current situation (of my own making or not) is ridiculous, makes it easier to laugh, shrug, and move on.
I think the difference is that it's easy for me to feel contemptuous of myself when frustrated, and easy to feel fond when amused.
I've tried to strike the new emotional tone when I'm working on catching and correcting other errors. (e.g "Stupid, you should have known to leave more time to make the appointment! Planning fallacy!" becomes "Heh, I guess you thought that adding two "trivially short" errands was a closed set, and must remain 'trivially short.' That's a pretty silly error.")
In the first case, noticing and correcting an error feels punitive, since it's quickly followed by a hefty dose of flagellation, but the second comes with a quick laugh and an easier shift to a growth-mindset framing. Funny stories about errors are also easier to tell, increasing the chance my friends can help catch me out next time, or that I'll be better at spotting the error just by keeping it fresh in my memory. Not to mention, in order to get the joke, I tend to look for a more specific cause of the error than stupid/lazy/etc.
As far as I can tell, it also helps that amusement is a pretty different feeling than the ones that tend to be active when I'm falling into error (frustration, anger, feeling trapped, impatience, etc). So, for a couple of seconds at least, I'm out of the rut and now need to actively return to it to stay stuck.
In the heat of the moment of anger/akrasia/etc is a bad time to figure out what's funny, but, if you're reflecting on your errors after the fact, in a moment of consolation, it's easier to go back armed with a helpful reframing, ready to cast Riddikulus!
“Why Psychologists’ Food Fight Matters: ‘Important findings’ haven’t been replicated, and science may have to change its ways.” By Michelle N. Meyer and Christopher Chabris. Slate, July 31, 2014. [Via Steven Pinker's Twitter account, who adds: "Lesson for sci journalists: Stop reporting single studies, no matter how sexy (these are probably false). Report lit reviews, meta-analyses."] Some excerpts:
Psychologists are up in arms over, of all things, the editorial process that led to the recent publication of a special issue of the journal Social Psychology. This may seem like a classic case of ivory tower navel gazing, but its impact extends far beyond academia. The issue attempts to replicate 27 “important findings in social psychology.” Replication—repeating an experiment as closely as possible to see whether you get the same results—is a cornerstone of the scientific method. Replication of experiments is vital not only because it can detect the rare cases of outright fraud, but also because it guards against uncritical acceptance of findings that were actually inadvertent false positives, helps researchers refine experimental techniques, and affirms the existence of new facts that scientific theories must be able to explain.
One of the articles in the special issue reported a failure to replicate a widely publicized 2008 study by Simone Schnall, now tenured at Cambridge University, and her colleagues. In the original study, two experiments measured the effects of people’s thoughts or feelings of cleanliness on the harshness of their moral judgments. In the first experiment, 40 undergraduates were asked to unscramble sentences, with one-half assigned words related to cleanliness (like pure or pristine) and one-half assigned neutral words. In the second experiment, 43 undergraduates watched the truly revolting bathroom scene from the movie Trainspotting, after which one-half were told to wash their hands while the other one-half were not. All subjects in both experiments were then asked to rate the moral wrongness of six hypothetical scenarios, such as falsifying one’s résumé and keeping money from a lost wallet. The researchers found that priming subjects to think about cleanliness had a “substantial” effect on moral judgment: The hand washers and those who unscrambled sentences related to cleanliness judged the scenarios to be less morally wrong than did the other subjects. The implication was that people who feel relatively pure themselves are—without realizing it—less troubled by others’ impurities. The paper was covered by ABC News, the Economist, and the Huffington Post, among other outlets, and has been cited nearly 200 times in the scientific literature.
However, the replicators—David Johnson, Felix Cheung, and Brent Donnellan (two graduate students and their adviser) of Michigan State University—found no such difference, despite testing about four times more subjects than the original studies. [...]
The editor in chief of Social Psychology later agreed to devote a follow-up print issue to responses by the original authors and rejoinders by the replicators, but as Schnall told Science, the entire process made her feel “like a criminal suspect who has no right to a defense and there is no way to win.” The Science article covering the special issue was titled “Replication Effort Provokes Praise—and ‘Bullying’ Charges.” Both there and in her blog post, Schnall said that her work had been “defamed,” endangering both her reputation and her ability to win grants. She feared that by the time her formal response was published, the conversation might have moved on, and her comments would get little attention.

How wrong she was. In countless tweets, Facebook comments, and blog posts, several social psychologists seized upon Schnall’s blog post as a cri de coeur against the rising influence of “replication bullies,” “false positive police,” and “data detectives.” For “speaking truth to power,” Schnall was compared to Rosa Parks. The “replication police” were described as “shameless little bullies,” “self-righteous, self-appointed sheriffs” engaged in a process “clearly not designed to find truth,” “second stringers” who were incapable of making novel contributions of their own to the literature, and—most succinctly—“assholes.” Meanwhile, other commenters stated or strongly implied that Schnall and other original authors whose work fails to replicate had used questionable research practices to achieve sexy, publishable findings. At one point, these insinuations were met with threats of legal action. [...]

Unfortunately, published replications have been distressingly rare in psychology. A 2012 survey of the top 100 psychology journals found that barely 1 percent of papers published since 1900 were purely attempts to reproduce previous findings.
Some of the most prestigious journals have maintained explicit policies against replication efforts; for example, the Journal of Personality and Social Psychology published a paper purporting to support the existence of ESP-like “precognition,” but would not publish papers that failed to replicate that (or any other) discovery. Science publishes “technical comments” on its own articles, but only if they are submitted within three months of the original publication, which leaves little time to conduct and document a replication attempt.

The “replication crisis” is not at all unique to social psychology, to psychological science, or even to the social sciences. As Stanford epidemiologist John Ioannidis famously argued almost a decade ago, “Most research findings are false for most research designs and for most fields.” Failures to replicate and other major flaws in published research have since been noted throughout science, including in cancer research, research into the genetics of complex diseases like obesity and heart disease, stem cell research, and studies of the origins of the universe. Earlier this year, the National Institutes of Health stated “The complex system for ensuring the reproducibility of biomedical research is failing and is in need of restructuring.”

Given the stakes involved and its centrality to the scientific method, it may seem perplexing that replication is the exception rather than the rule. The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view “positive” findings that announce a novel relationship or support a theoretical claim as more interesting than “negative” findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate.
Since journal publications are valuable academic currency, researchers—especially those early in their careers—have strong incentives to conduct original work rather than to replicate the findings of others. Replication efforts that do happen but fail to find the expected effect are usually filed away rather than published. That makes the scientific record look more robust and complete than it is—a phenomenon known as the “file drawer problem.”

The emphasis on positive findings may also partly explain the fact that when original studies are subjected to replication, so many turn out to be false positives. The near-universal preference for counterintuitive, positive findings gives researchers an incentive to manipulate their methods or poke around in their data until a positive finding crops up, a common practice known as “p-hacking” because it can result in p-values, or measures of statistical significance, that make the results look stronger, and therefore more believable, than they really are. [...]

The recent special issue of Social Psychology was an unprecedented collective effort by social psychologists to [rectify this situation]—by altering researchers’ and journal editors’ incentives in order to check the robustness of some of the most talked-about findings in their own field. Any researcher who wanted to conduct a replication was invited to preregister: Before collecting any data from subjects, they would submit a proposal detailing precisely how they would repeat the original study and how they would analyze the data. Proposals would be reviewed by other researchers, including the authors of the original studies, and once approved, the study’s results would be published no matter what.
Preregistration of the study and analysis procedures should deter p-hacking, guaranteed publication should counteract the file drawer effect, and a requirement of large sample sizes should make it easier to detect small but statistically meaningful effects.

The results were sobering. At least 10 of the 27 “important findings” in social psychology were not replicated at all. In the social priming area, only one of seven replications succeeded. [...]

One way to keep things in perspective is to remember that scientific truth is created by the accretion of results over time, not by the splash of a single study. A single failure-to-replicate doesn’t necessarily invalidate a previously reported effect, much less imply fraud on the part of the original researcher—or the replicator. Researchers are most likely to fail to reproduce an effect for mundane reasons, such as insufficiently large sample sizes, innocent errors in procedure or data analysis, and subtle factors about the experimental setting or the subjects tested that alter the effect in question in ways not previously realized.

Caution about single studies should go both ways, though. Too often, a single original study is treated—by the media and even by many in the scientific community—as if it definitively establishes an effect. Publications like Harvard Business Review and idea conferences like TED, both major sources of “thought leadership” for managers and policymakers all over the world, emit a steady stream of these “stats and curiosities.” Presumably, the HBR editors and TED organizers believe this information to be true and actionable. But most novel results should be initially regarded with some skepticism, because they too may have resulted from unreported or unnoticed methodological quirks or errors.
Everyone involved should focus their attention on developing a shared evidence base that consists of robust empirical regularities—findings that replicate not just once but routinely—rather than of clever one-off curiosities. [...]

Scholars, especially scientists, are supposed to be skeptical about received wisdom, develop their views based solely on evidence, and remain open to updating those views in light of changing evidence. But as psychologists know better than anyone, scientists are hardly free of human motives that can influence their work, consciously or unconsciously. It’s easy for scholars to become professionally or even personally invested in a hypothesis or conclusion. These biases are addressed partly through the peer review process, and partly through the marketplace of ideas—by letting researchers go where their interest or skepticism takes them, encouraging their methods, data, and results to be made as transparent as possible, and promoting discussion of differing views. The clashes between researchers of different theoretical persuasions that result from these exchanges should of course remain civil; but the exchanges themselves are a perfectly healthy part of the scientific enterprise.

This is part of the reason why we cannot agree with a more recent proposal by Kahneman, who had previously urged social priming researchers to put their house in order.
He contributed an essay to the special issue of Social Psychology in which he proposed a rule—to be enforced by reviewers of replication proposals and manuscripts—that authors “be guaranteed a significant role in replications of their work.” Kahneman proposed a specific process by which replicators should consult with original authors, and told Science that in the special issue, “the consultations did not reach the level of author involvement that I recommend.”

Collaboration between opposing sides would probably avoid some ruffled feathers, and in some cases it could be productive in resolving disputes. With respect to the current controversy, given the potential impact of an entire journal issue on the robustness of “important findings,” and the clear desirability of buy-in by a large portion of psychology researchers, it would have been better for everyone if the original authors’ comments had been published alongside the replication papers, rather than left to appear afterward. But consultation or collaboration is not something replicators owe to original researchers, and a rule to require it would not be particularly good science policy.

Replicators have no obligation to routinely involve original authors because those authors are not the owners of their methods or results. By publishing their results, original authors state that they have sufficient confidence in them that they should be included in the scientific record. That record belongs to everyone. Anyone should be free to run any experiment, regardless of who ran it first, and to publish the results, whatever they are. [...] Some critics of replication drives have been too quick to suggest that replicators lack the subtle expertise to reproduce the original experiments.
One prominent social psychologist has even argued that tacit methodological skill is such a large factor in getting experiments to work that failed replications have no value at all (since one can never know if the replicators really knew what they were doing, or knew all the tricks of the trade that the original researchers did), a surprising claim that drew sarcastic responses. [See LW discussion.] [...]

Psychology has long been a punching bag for critics of “soft science,” but the field is actually leading the way in tackling a problem that is endemic throughout science. The replication issue of Social Psychology is just one example. The Association for Psychological Science is pushing for better reporting standards and more study of research practices, and at its annual meeting in May in San Francisco, several sessions on replication were filled to overflowing. International collaborations of psychologists working on replications, such as the Reproducibility Project and the Many Labs Replication Project (which was responsible for 13 of the 27 replications published in the special issue of Social Psychology), are springing up.

Even the most tradition-bound journals are starting to change. The Journal of Personality and Social Psychology—the same journal that, in 2011, refused to even consider replication studies—recently announced that although replications are “not a central part of its mission,” it’s reversing this policy. We wish that JPSP would see replications as part of its central mission and not relegate them, as it has, to an online-only ghetto, but this is a remarkably nimble change for a 50-year-old publication. Other top journals, most notable among them Perspectives on Psychological Science, are devoting space to systematic replications and other confirmatory research.
The leading journal in behavior genetics, a field that has been plagued by unreplicable claims that particular genes are associated with particular behaviors, has gone even further: It now refuses to publish original findings that do not include evidence of replication.

A final salutary change is an overdue shift of emphasis among psychologists toward establishing the size of effects, as opposed to disputing whether or not they exist. The very notion of “failure” and “success” in empirical research is urgently in need of refinement. When applied thoughtfully, this dichotomy can be useful shorthand (and we’ve used it here). But there are degrees of replication between success and failure, and these degrees matter.

For example, suppose an initial study of an experimental drug for cardiovascular disease suggests that it reduces the risk of heart attack by 50 percent compared to a placebo pill. The most meaningful question for follow-up studies is not the binary one of whether the drug’s effect is 50 percent or not (did the first study replicate?), but the continuous one of precisely how much the drug reduces heart attack risk. In larger subsequent studies, this number will almost inevitably drop below 50 percent, but if it remains above 0 percent for study after study, then the best message should be that the drug is in fact effective, not that the initial results “failed to replicate.”
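The shrinking-estimate pattern in the drug example can be simulated directly. This is a rough sketch with invented numbers (a true 20 percent relative risk reduction, small studies "published" only when the observed effect looks impressive), meant only to show why published estimates from small studies tend to overstate the truth while a large replication lands near it:

```python
import random
import statistics

random.seed(0)

P_PLACEBO = 0.10   # assumed heart-attack risk on placebo
P_DRUG = 0.08      # assumed risk on drug: a true 20% relative reduction

def run_trial(n_per_arm):
    """Simulate one two-arm trial; return the observed relative risk reduction."""
    placebo = sum(random.random() < P_PLACEBO for _ in range(n_per_arm))
    drug = sum(random.random() < P_DRUG for _ in range(n_per_arm))
    if placebo == 0:
        return 0.0
    return 1 - drug / placebo

# Many small initial studies, "published" only if the effect looks impressive
published = [e for e in (run_trial(200) for _ in range(2000)) if e > 0.4]

# One large follow-up study
replication = run_trial(100_000)

print("true relative risk reduction:    0.20")
print(f"mean published (small) estimate: {statistics.mean(published):.2f}")
print(f"large replication estimate:      {replication:.2f}")
```

The filter on "impressive" effects stands in for significance filtering; the large replication is not a "failure" just because its estimate is far below the published small-study average.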
Unfamiliar or unpopular ideas will tend to reach you via proponents who:
- ...hold extreme interpretations of these ideas.
- ...have unpleasant social characteristics.
- ...generally come across as cranks.
The basic idea: It's unpleasant to promote ideas that result in social sanction, and frustrating when your ideas are met with indifference. Both situations are more likely when talking to an ideological out-group. Given a range of positions on an in-group belief, who will decide to promote the belief to outsiders? On average, it will be those who believe the benefits of the idea are large relative to in-group opinion (extremists), those who view the social costs as small (disagreeable people), and those who are dispositionally drawn to promoting weird ideas (cranks).
I don't want to push this pattern too far. This isn't a refutation of any particular idea. There are reasonable people in the world, and some of them even express their opinions in public (in spite of being reasonable). And sometimes the truth will be unavoidably unfamiliar and unpopular, etc. But there are also...
Some benefits that stem from recognizing these selection effects:
- It's easier to be charitable to controversial ideas, when you recognize that you're interacting with people who are terribly suited to persuade you. I'm not sure "steelmanning" is the best idea (trying to present the best argument for an opponent's position). Based on the extremity effect, another technique is to construct a much diluted version of the belief, and then try to steelman the diluted belief.
- If your group holds fringe or unpopular ideas, you can avoid these patterns when you want to influence outsiders.
- If you want to learn about an afflicted issue, you might ignore the public representatives and speak to the non-evangelists instead (you'll probably have to start the conversation).
- You can resist certain polarizing situations, in which the most visible camps hold extreme and opposing views. This situation worsens when those with non-extreme views judge the risk of participation as excessive, and leave the debate to the extremists (who are willing to take substantial risks for their beliefs). This leads to the perception that the current camps represent the only valid positions, which creates a polarizing loop. Because this is a sort of coordination failure among non-extremists, knowing to covertly look for other non-vocal moderates is a first step toward a solution. (Note: Sometimes there really aren't any moderates.)
- Related to the previous point: You can avoid exaggerating the ideological unity of a group based on the group's leadership, or believing that the entire group has some obnoxious trait present in the leadership. (Note: In things like elections and war, the views of the leadership are what you care about. But you still don't want to be confused about other group members.)
I think the first benefit listed is the most useful.
To sum up: An unpopular idea will tend to get poor representation for social reasons, which will make it seem like a worse idea than it really is, even granting that many unpopular ideas are unpopular for good reason. So when you encounter an idea that seems unpopular, you're probably hearing about it from a sub-optimal source, and you should try to be charitable towards the idea before dismissing it.
About 18 months ago I made a post here on my workflow. I've received a handful of requests for follow-up, so I thought I would make another post detailing changes since then. I expect this post to be less useful than the last one.
For the most part, the overall outline has remained pretty stable and feels very similar to 18 months ago. Things not mentioned below have mostly stayed the same. I believe that the total effect of continued changes has been continued but much smaller improvements, though it is hard to tell (as opposed to the last changes, which were more clearly improvements).
Based on comparing time logging records I seem to now do substantially more work on average, but there are many other changes during this period that could explain the change (including changes in time logging). Changes other than work output are much harder to measure; I feel like they are positive but I wouldn't be surprised if this were an illusion.
I now regularly divide my day into two halves, and treat the two halves as separate units. I plan each separately and reflect on each separately. I divide them by an hour long period of reflecting on the morning, relaxing for 5-10 minutes, napping for 25-30 minutes, processing my emails, and planning the evening. I find that this generally makes me more productive and happier about the day. Splitting my days is often difficult due to engagements in the middle of the day, and I don't have a good solution to that.
I have longstanding objections to explicitly rationing internet use (since it seems either indicative of a broader problem that should be resolved directly, or else to serve a useful function that would be unwise to remove). That said, I now use the extension WasteNoTime to limit my consumption of blogs, webcomics, facebook, news sites, browser games, etc., to 10 minutes each half-day. This has cut the amount of time I spend browsing the internet from an average of 30-40 minutes to an average of 10-15 minutes. It doesn't seem to have been replaced by lower-quality leisure, but by a combination of work and higher-quality leisure.
Similarly, I turned off the newsfeed in facebook, which I found to improve the quality of my internet time in general (the primary issue was that I would sometimes be distracted by the newsfeed while sending messages over facebook, which wasn't my favorite way to use up wastenotime minutes).
I also tried StayFocusd, but ended up adopting WasteNoTime because of the ability to set limits per half-day (via "At work" and "not at work" timers) rather than per-day. I find that the main upside is cutting off the tail of derping (e.g. getting sucked into a blog comment thread, or looking into a particularly engrossing issue), and for this purpose per half-day timers are much more effective.
I set gmail to archive all emails on arrival and assign them the special label "In." This lets me search for emails and compose emails, using the normal gmail interface, without being notified of new arrivals. I process the items with label "in" (typically turning emails into todo items to be processed by the same system that deals with other todo items) at the beginning of each half day. Each night I scan my email quickly for items that require urgent attention.
Todo lists / reminders:
I continue to use todo lists for each half day and for a range of special conditions. I now check these lists at the beginning of each half day rather than before going to bed.
I also maintain a third list of "reminders." These are things that I want to be reminded of periodically, organized by day; each morning I look at the day's reminders and think about them briefly. Each of them is copied and filed under a future day. If I feel like I remember a thing well, I file it far in the future; if I feel like I don't remember it well, I file it in the near future.
Over the last month most of these reminders have migrated to be in the form "If X, then Y," e.g. "If I agree to do something for someone, then pause, say `actually I should think about it for a few minutes to make sure I have time,' and set a 5 minute timer that night to think about it more clearly." These are designed to fix problems that I notice when reflecting on the day. This is a recommendation from CFAR folks, which seems to be working well, though is the newest part of the system and least tested.
I now attempt to isolate things that probably need doing, but don't seem maximally important; I aim to do them only on every 5th day, and only during one half-day. If I can't finish them in this time, I will typically delay them 5 days. When they spill over to other days, I try to at least keep them to one half-day or the other. I don't know if this helps, but it feels better to have isolated unproductive-feeling blocks of time rather than scattering them throughout the week.
I don't do this very rigidly. I expect the overall level of discipline I have about it is comparable to or lower than a normal office worker who has a clearer division between their personal time and work time.
I now use Toggl for detailed time tracking. Katja Grace and I experimented with about half a dozen other systems (Harvest, Yast, Klok, Freckle, Lumina, and I expect others I'm forgetting) before settling on Toggl. It has a depressing number of flaws, but ends up winning for me by making it very fast to start and switch timers, which is probably the most important criterion for me. It also offers reports that work out well with what I want to look at.
I find the main value adds from detailed time tracking are:
1. Knowing how long I've spent on projects, especially long-term projects. My intuitive estimates are often off by more than a factor of 2, even for things taking 80 hours; this can lead me to significantly underestimate the costs of taking on some kinds of projects, and it can also lead me to think an activity is unproductive instead of productive by overestimating how long I've actually spent on it.
2. Accurate breakdowns of time in a day, which guide efforts at improving my day-to-day routine. They probably also make me feel more motivated about working, and improve focus during work.
Reflection / improvement:
Reflection is now a smaller fraction of my time, down from 10% to 3-5%, based on diminishing returns to finding stuff to improve. Another 3-5% is now redirected into longer-term projects to improve particular aspects of my life (I maintain a list of possible improvements, roughly sorted by goodness). Examples: buying new furniture, improvements to my diet (Holden's powersmoothie is great), improvements to my sleep (low doses of melatonin seem good). At the moment the list of possible improvements is long enough that adding to the list is less valuable than doing things on the list.
I have equivocated a lot about how much of my time should go into this sort of thing. My best guess is the number should be higher.
I don't use pomodoros at all any more. I still have periods of uninterrupted work, often of comparable length, for individual tasks. This change wasn't extremely carefully considered, it mostly just happened. I find explicit time logging (such that I must consciously change the timer before changing tasks) seems to work as a substitute in many cases. I also maintain the habit of writing down candidate distractions and then attending to them later (if at all).
For larger tasks I find that I often prefer longer blocks of unrestricted working time. I continue to use Alinof timer to manage these blocks of uninterrupted work.
Catch disappeared, and I haven't found a replacement that I find comparably useful. (It's also not that high on the list of priorities.) I now just send emails to myself, but I do it much less often.
I no longer use beeminder. This again wasn't super-considered, though it was based on a very rough impression of overhead being larger than the short-term gains. I think beeminder was helpful for setting up a number of habits which have persisted (especially with respect to daily routine and regular focused work), and my long-term averages continue to satisfy my old beeminder goals.
I now organize notes about each project I am working on in a more standardized way, with "Queue of todos," "Current workspace," and "Data" as the three subsections. I'm not thrilled by this system, but it seems to be an improvement over the previous informal arrangement. In particular, having a workspace into which I can easily write thoughts without thinking about where they fit, and only later sorting them into the data section once it's clearer how they fit in, decreases the activation energy of using the system. I now use Toggl rather than maintaining time logs by hand.
As described in my last post I tried various randomized trials (esp. of effects of exercise, stimulant use, and sleep on mood, cognitive performance, and productive time). I have found extracting meaningful data from these trials to be extremely difficult, due to straightforward issues with signal vs. noise. There are a number of tests which I still do expect to yield meaningful data, but I've increased my estimates for the expensiveness of useful tests substantially, and they've tended to fall down the priority list. For some things I've just decided to do them without the data, since my best guess is positive in expectation and the data is too expensive to acquire.
The Effective Altruism Forum will be launched at effective-altruism.com on September 10, British time.
Now seems like a good time to discuss why we might need an Effective Altruism Forum, and how it might compare to LessWrong.
About the Effective Altruism Forum
The motivation for the Effective Altruism Forum is to improve the quality of effective altruist discussion and coordination. A big part of this is to give many of the useful features of LessWrong to effective altruists, including:
- Archived, searchable content (this will begin with archived content from effective-altruism.com)
- Nested comments
- A karma system
- A dynamically updated list of external effective altruist blogs
- Introductory materials (this will begin with these articles)
The Effective Altruism Forum has been designed by Mihai Badic. Over the last month, it has been developed by Trike Apps, who have built the new site using the LessWrong codebase. I'm glad to report that it is now basically ready, looks nice, and is easy to use.
I expect that at the new forum, as on the effective altruist Facebook and Reddit pages, people will want to discuss which intellectual procedures to use to pick effective actions. I also expect some proposals of effective altruist projects, and offers of resources. So users of the new forum will share LessWrong's interest in instrumental and epistemic rationality. On the other hand, I expect that few of its users will want to discuss the technical aspects of artificial intelligence, anthropics or decision theory, and to the extent that they do so, they will want to do it at LessWrong. As a result, I expect the new forum to cause:
- A bunch of materials on effective altruism and instrumental rationality to be collated for new effective altruists
- Discussion of old LessWrong materials to resurface
- A slight increase to the number of users of LessWrong, possibly offset by some users spending more of their time posting at the new forum.
At least initially, the new forum won't have a wiki or a Main/Discussion split and won't have any institutional affiliations.
It's really important to make sure that the Effective Altruism Forum is established with a beneficial culture. If people want to help that process by writing some seed materials, to be posted around the time of the site's launch, then they can contact me at ry [dot] duff [at] gmail.com. Alternatively, they can wait a short while until they automatically receive posting privileges.
It's also important that the Effective Altruism Forum helps the shared goals of rationalists and effective altruists, and has net positive effects on LessWrong in particular. Any suggestions for improving the odds of success for the effective altruism forum are most welcome.
(Part 1 of the Sequence on Applied Causal Inference)
In this sequence, I am going to present a theory on how we can learn about causal effects using observational data. As an example, we will imagine that you have collected information on a large number of Swedes - let us call them Sven, Olof, Göran, Gustaf, Annica, Lill-Babs, Elsa and Astrid. For every Swede, you have recorded data on their gender, whether they smoked or not, and on whether they got cancer during the 10 years of follow-up. Your goal is to use this dataset to figure out whether smoking causes cancer.
We are going to use the letter A as a random variable to represent whether they smoked. A can take the value 0 (did not smoke) or 1 (smoked). When we need to talk about the specific values that A can take, we sometimes use lower case a as a placeholder for 0 or 1. We use the letter Y as a random variable that represents whether they got cancer, and L to represent their gender.
The data-generating mechanism and the joint distribution of variables
Imagine you are looking at this data set:
[Table: one row per Swede, with columns "Sex" (L), "Did they smoke?" (A) and "Did they get cancer?" (Y)]
This table records information about the joint distribution of the variables L, A and Y. By looking at it, you can tell that 1/4 of the Swedes were men who smoked and got cancer, 1/8 were men who did not smoke and got cancer, 1/8 were men who did not smoke and did not get cancer etc.
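To make the counting concrete, here is a sketch using a hypothetical eight-person dataset chosen to match the fractions above (the women's rows are invented, since the text only specifies the men's):

```python
from collections import Counter

# A hypothetical dataset of 8 Swedes; each row is (L, A, Y) = (sex, smoked, got cancer)
data = [
    ("man",   1, 1), ("man",   1, 1),   # 1/4 were men who smoked and got cancer
    ("man",   0, 1),                    # 1/8 were men who did not smoke and got cancer
    ("man",   0, 0),                    # 1/8 were men who did not smoke, no cancer
    ("woman", 1, 1), ("woman", 1, 0),   # the women's rows are made up for illustration
    ("woman", 0, 0), ("woman", 0, 0),
]

# The joint distribution is just the frequency of each (L, A, Y) combination
joint = Counter(data)
for combo, count in sorted(joint.items()):
    print(combo, count / len(data))
```

Every summary statistic mentioned below (correlations, independence checks) is a function of these frequencies.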
You can make all sorts of statistics that summarize aspects of the joint distribution. One such statistic is the correlation between two variables. If "sex" is correlated with "smoking", it means that if you know somebody's sex, this gives you information that makes it easier to predict whether they smoke. If knowing about an individual's sex gives no information about whether they smoked, we say that sex and smoking are independent. We use the symbol ∐ to mean independence.
When we are interested in causal effects, we are asking what would happen to the joint distribution if we intervened to change the value of a variable. For example, how many Swedes would get cancer in a hypothetical world where you intervened to make sure they all quit smoking?
In order to answer this, we have to ask questions about the data generating mechanism. The data generating mechanism is the algorithm that assigns value to the variables, and therefore creates the joint distribution. We will think of the data as being generated by three different algorithms: One for L, one for A and one for Y. Each of these algorithms takes the previously assigned variables as input, and then outputs a value.
Questions about the data generating mechanism include “Which variable has its value assigned first?”, “Which variables from the past (observed or unobserved) are used as inputs” and “If I change whether someone smokes, how will that change propagate to other variables that have their value assigned later". The last of these questions can be rephrased as "What is the causal effect of smoking”.
The basic problem of causal inference is that the relationship between the set of possible data generating mechanisms, and the joint distribution of variables, is many-to-one: For any correlation you observe in the dataset, there are many possible sets of algorithms for L, A and Y that could all account for the observed patterns. For example, if you are looking at a correlation between cancer and smoking, you can tell a story about cancer causing people to take up smoking, or a story about smoking causing people to get cancer, or a story about smoking and cancer sharing a common cause.
An important thing to note is that even if you have data on absolutely everyone, you still would not be able to distinguish between the possible data generating mechanisms. The problem is not that you have a limited sample. This is therefore not a statistical problem. What you need to answer the question, is not more people in your study, but a priori causal information. The purpose of this sequence is to show you how to reason about what prior causal information is necessary, and how to analyze the data if you have measured all the necessary variables.
Counterfactual Variables and "God's Table":
The first step of causal inference is to translate the English language research question «What is the causal effect of smoking» into a precise, mathematical language. One possible such language is based on counterfactual variables. These counterfactual variables allow us to encode the concept of “what would have happened if, possibly contrary to fact, the person smoked”.
We define one counterfactual variable called Ya=1 which represents the outcome in the person if he smoked, and another counterfactual variable called Ya=0 which represents the outcome if he did not smoke. Counterfactual variables such as Ya=0 are mathematical objects that represent part of the data generating mechanism: The variable tells us what value the mechanism would assign to Y, if we intervened to make sure the person did not smoke. These variables are columns in an imagined dataset that we sometimes call “God’s Table”:
[Table: God's Table, with columns "Whether they would have got cancer if they smoked" (Ya=1) and "Whether they would have got cancer if they didn't smoke" (Ya=0)]
Let us start by making some points about this dataset. First, note that the counterfactual variables are variables just like any other column in the spreadsheet. Therefore, we can use the same type of logic that we use for any other variables. Second, note that in our framework, counterfactual variables are pre-treatment variables: They are determined long before treatment is assigned. The effect of treatment is simply to determine whether we see Ya=0 or Ya=1 in this individual.
If you had access to God's Table, you would immediately be able to look up the average causal effect, by comparing the column Ya=1 to the column Ya=0. However, the most important point about God’s Table is that we cannot observe Ya=1 and Ya=0. We only observe the joint distribution of observed variables, which we can call the “Observed Table”:
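A quick simulation may make this concrete. The cancer risks here (30 percent if smoking, 10 percent if not) are invented for illustration; the point is only that with both counterfactual columns in hand, the average causal effect is a direct column comparison:

```python
import random

random.seed(1)

# A hypothetical "God's Table": for each person we see BOTH counterfactuals,
# Y_a1 (outcome if they smoked) and Y_a0 (outcome if they did not)
gods_table = [
    {"Y_a1": int(random.random() < 0.3),   # assumed 30% cancer risk if smoking
     "Y_a0": int(random.random() < 0.1)}   # assumed 10% risk if not smoking
    for _ in range(10_000)
]

# With God's Table, the average causal effect is a simple column comparison
ace = (sum(p["Y_a1"] for p in gods_table)
       - sum(p["Y_a0"] for p in gods_table)) / len(gods_table)
print(f"average causal effect: {ace:.3f}")   # close to 0.3 - 0.1 = 0.2
```

The rest of this sequence is about when and how the observed table lets us estimate this same quantity without ever seeing both columns.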
The goal of causal inference is to learn about God’s Table using information from the observed table (in combination with a priori causal knowledge). In particular, we are going to be interested in learning about the distributions of Ya=1 and Ya=0, and in how they relate to each other.
The “Gold Standard” for estimating the causal effect, is to run a randomized controlled trial where we randomly assign the value of A. This study design works because you select one random subset of the study population where you observe Ya=0, and another random subset where you observe Ya=1. You therefore have unbiased information about the distribution of both Ya=0 and of Ya=1.
An important thing to point out at this stage is that it is not necessary to use an unbiased coin to assign treatment, as long as you use the same coin for everyone. For instance, the probability of being randomized to A=1 can be 2/3. You will still see randomly selected subsets of the distribution of both Ya=0 and Ya=1, you will just have a larger number of people where you see Ya=1. Usually, randomized trials use unbiased coins, but this is simply done because it increases the statistical power.
Also note that it is possible to run two different randomized controlled trials: One in men, and another in women. The first trial will give you an unbiased estimate of the effect in men, and the second trial will give you an unbiased estimate of the effect in women. If both trials used the same coin, you could think of them as really being one trial. However, if the two trials used different coins, and you pooled them into the same database, your analysis would have to account for the fact that in reality, there were two trials. If you don’t account for this, the results will be biased. This is called “confounding”. As long as you account for the fact that there really were two trials, you can still recover an estimate of the population average causal effect. This is called “Controlling for Confounding”.
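Here is a toy simulation of this situation (all numbers invented: a true risk increase of 0.10 for everyone, with men and women randomized by different coins and having different baseline risks). Pooling the two trials naively gives a biased answer; analyzing them separately recovers the truth:

```python
import random

random.seed(2)

def simulate(n):
    """Two hypothetical trials pooled into one dataset: men and women are
    randomized with different coins, and have different baseline cancer risks."""
    rows = []
    for _ in range(n):
        sex = random.choice(["man", "woman"])
        p_treat = 2/3 if sex == "man" else 1/3         # different coins
        a = int(random.random() < p_treat)
        base = 0.30 if sex == "man" else 0.10          # different baseline risks
        y = int(random.random() < base + 0.10 * a)     # true effect: +0.10 for everyone
        rows.append((sex, a, y))
    return rows

rows = simulate(200_000)

def risk(subset):
    return sum(y for _, _, y in subset) / len(subset)

treated = [r for r in rows if r[1] == 1]
untreated = [r for r in rows if r[1] == 0]
print(f"pooled (biased) estimate: {risk(treated) - risk(untreated):.3f}")

for sex in ["man", "woman"]:
    t = [r for r in rows if r[0] == sex and r[1] == 1]
    u = [r for r in rows if r[0] == sex and r[1] == 0]
    print(f"{sex}: {risk(t) - risk(u):.3f}")   # each close to the true 0.10
```

The pooled estimate is biased upward because men (higher baseline risk) are over-represented among the treated; accounting for the two trials removes the confounding.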
In general, causal inference works by specifying a model that says the data came from a complex trial, i.e., one where nature assigned a biased coin depending on the observed past. For such a trial, there will exist a valid way to recover the overall causal results, but it will require us to think carefully about what the correct analysis is.
Assumptions of Causal Inference
We will now go through, in some more detail, why it is that randomized trials work, i.e., the important aspects of this study design that allow us to infer causal relationships, or facts about God’s Table, using information about the joint distribution of observed variables.
We will start with an “observed table” and build towards “reconstructing” parts of God’s Table. To do this, we will need three assumptions: These are positivity, consistency and (conditional) exchangeability:
Positivity is the assumption that any individual has a positive probability of receiving all values of the treatment variable: Pr(A=a) > 0 for all values of a. In other words, you need to have both people who smoke, and people who don't smoke. If positivity does not hold, you will not have any information about the distribution of Ya for that value of a, and will therefore not be able to make inferences about it.
We can check whether this assumption holds in the sample, by checking whether there are people who are treated and people who are untreated. If you observe that in any stratum, there are individuals who are treated and individuals who are untreated, you know that positivity holds.
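Checking positivity in a sample can be sketched as follows. The data is hypothetical, and in this example the check fails for women because no untreated women happen to appear:

```python
from collections import defaultdict

# Hypothetical (L, A) pairs; positivity requires every stratum of L to
# contain both treated (A=1) and untreated (A=0) individuals
data = [("man", 1), ("man", 0), ("woman", 1), ("woman", 1)]

strata = defaultdict(set)
for l, a in data:
    strata[l].add(a)

for l, seen in sorted(strata.items()):
    ok = seen == {0, 1}
    print(f"{l}: positivity {'holds' if ok else 'VIOLATED'}")
```

A check like this cannot tell you whether a violation is random or structural; that distinction requires subject-matter knowledge.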
If we observe a stratum where no individuals are treated (or no individuals are untreated), this can be either for statistical reasons (you randomly did not sample them) or for structural reasons (individuals with these covariates are deterministically never treated). As we will see later, our models can handle random violations, but not structural violations.
In a randomized controlled trial, positivity holds because you will use a coin that has a positive probability of assigning people to either arm of the trial.
The next assumption we are going to make is consistency: if an individual happens to have treatment (A=1), we will observe the counterfactual variable Ya=1 in this individual; and if they are untreated (A=0), we will observe Ya=0. This is the observed table after we make the consistency assumption:
Making the consistency assumption got us half the way to our goal. We now have a lot of information about Ya=1 and Ya=0. However, half of the data is still missing.
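What consistency does to the table can be sketched as follows (the counterfactuals and treatment assignments are randomly invented): exactly one counterfactual column is revealed per person, which is why half the table is still missing:

```python
import random

random.seed(3)

# Hypothetical God's Table rows with both counterfactuals, plus an assigned treatment
people = []
for _ in range(8):
    people.append({
        "A": int(random.random() < 0.5),
        "Y_a1": int(random.random() < 0.5),
        "Y_a0": int(random.random() < 0.2),
    })

# Consistency: the observed Y equals Y_a1 if A=1, and Y_a0 if A=0.
# The other counterfactual stays missing ("?")
for p in people:
    observed_y = p["Y_a1"] if p["A"] == 1 else p["Y_a0"]
    print({
        "A": p["A"],
        "Y": observed_y,
        "Y_a1": p["Y_a1"] if p["A"] == 1 else "?",
        "Y_a0": p["Y_a0"] if p["A"] == 0 else "?",
    })
```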
Although consistency seems obvious, it is an assumption, not something that is true by definition. We can expect the consistency assumption to hold if we have a well-defined intervention (ie, the intervention is a well-defined choice, not an attribute of the individual), and there is no causal interference (one individual’s outcome is not affected by whether another individual was treated).
Consistency may not hold if you have an intervention that is not well-defined: For example, there may be multiple types of cigarettes. When you measure Ya=1 in people who smoked, it will actually be a composite of multiple counterfactual variables: One for people who smoked regular cigarettes (let us call that Ya=1*) and another for people who smoked e-cigarettes (let us call that Ya=1#). Since you failed to specify whether you are interested in the effect of regular cigarettes or e-cigarettes, the construct Ya=1 is a composite without any meaning, and people will be unable to use your results to predict the consequences of their actions.
To complete the table, we require an additional assumption on the nature of the data. We call this assumption “Exchangeability”. One possible exchangeability assumption is “Ya=0 ∐ A and Ya=1 ∐ A”. This is the assumption that says “The data came from a randomized controlled trial”. If this assumption is true, you will observe a random subset of the distribution of Ya=0 in the group where A=0, and a random subset of the distribution of Ya=1 in the group where A=1.
Exchangeability is a statement about two variables being independent of each other. This means that having information about either one of the variables will not help you predict the value of the other. Sometimes, variables which are not independent are "conditionally independent". For example, it is possible that knowing somebody's race helps you predict whether they enjoy eating Hakarl, an Icelandic form of rotting fish. However, it is also possible that this is just a marker for whether they were born in ethnically homogeneous Iceland. In such a situation, it is possible that once you already know whether somebody is from Iceland, also knowing their race gives you no additional clues as to whether they will enjoy Hakarl. In this case, the variables "race" and "enjoying Hakarl" are conditionally independent, given nationality.
The reason we care about conditional independence is that sometimes you may be unwilling to assume that marginal exchangeability Ya=1 ∐ A holds, but you are willing to assume conditional exchangeability Ya=1 ∐ A | L. In this example, let L be sex. The assumption then says that you can interpret the data as if it came from two different randomized controlled trials: One in men, and one in women. If that is the case, sex is a "confounder". (We will give a definition of confounding in Part 2 of this sequence.)
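Conditional exchangeability can be illustrated with a small simulation (all numbers and names here are invented for illustration): both the counterfactual outcome and the probability of treatment depend on sex, so Ya=1 and A are marginally dependent, yet independent within each stratum of L:

```python
import random

random.seed(0)

# Toy conditionally randomized trial: Y^{a=1} and the probability of
# treatment both depend on sex L, so they are marginally dependent --
# but within each stratum of L, treatment is its own independent coin.
data = []
for _ in range(100_000):
    l = random.choice(["man", "woman"])
    y1 = random.random() < (0.8 if l == "man" else 0.3)  # outcome risk by sex
    a = random.random() < (0.9 if l == "man" else 0.1)   # treatment prob. by sex
    data.append((l, a, y1))

def mean_y1(rows):
    return sum(y1 for _, _, y1 in rows) / len(rows)

treated = [r for r in data if r[1]]
untreated = [r for r in data if not r[1]]
men = [r for r in data if r[0] == "man"]

# Marginally, A carries information about Y^{a=1} (both track sex):
print(abs(mean_y1(treated) - mean_y1(untreated)) > 0.2)       # True
# Conditionally on L, it carries none (up to sampling noise):
print(abs(mean_y1([r for r in men if r[1]])
          - mean_y1([r for r in men if not r[1]])) < 0.05)    # True
```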
If the data came from two different randomized controlled trials, one possible approach is to analyze these trials separately. This is called “stratification”. Stratification gives you effect measures that are conditional on the confounders: You get one measure of the effect in men, and another in women. Unfortunately, in more complicated settings, stratification-based methods (including regression) are always biased. In those situations, it is necessary to focus the inference on the marginal distribution of Ya.
If marginal exchangeability holds (ie, if the data came from a marginally randomized trial), making inferences about the marginal distribution of Ya is easy: You can just estimate E[Ya] as E[Y|A=a].
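In that easy case, the estimate is just a sample mean within the treated (or untreated). A minimal sketch, with invented data:

```python
def marginal_mean(a_values, y_values, a):
    """Under marginal exchangeability and consistency, E[Ya] can be
    estimated as the sample mean of Y among individuals with A=a.
    The argument names and data are illustrative."""
    ys = [y for ai, y in zip(a_values, y_values) if ai == a]
    return sum(ys) / len(ys)

# Average causal effect from a toy marginally randomized trial:
a_obs = [1, 1, 0, 0]
y_obs = [1, 0, 0, 0]
effect = marginal_mean(a_obs, y_obs, 1) - marginal_mean(a_obs, y_obs, 0)
print(effect)  # 0.5
```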
However, if the data came from a conditionally randomized trial, we will need to think a little bit harder about how to say anything meaningful about E[Ya]. This process is the central idea of causal inference. We call it “identification”: The idea is to write an expression for the distribution of a counterfactual variable, purely in terms of observed variables. If we are able to do this, we have sufficient information to estimate causal effects just by looking at the relevant parts of the joint distribution of observed variables.
The simplest example of identification is standardization. As an example, we will show a simple proof:
Begin by using the law of total probability to factor out the confounder, in this case L:
· E(Ya) = Σ E(Ya|L=l) * Pr(L=l) (The summation sign is over l)
We do this because we know we need to introduce L behind the conditioning sign, in order to be able to use our exchangeability assumption in the next step. Then, because Ya ∐ A | L, we are allowed to introduce A=a behind the conditioning sign:
· E(Ya) = Σ E(Ya|A=a, L=l) * Pr(L=l)
Finally, use the consistency assumption: Because we are in the stratum where A=a in all individuals, we can replace Ya by Y:
· E(Ya) = Σ E(Y|A=a, L=l) * Pr(L=l)
We now have an expression for the counterfactual in terms of quantities that can be observed in the real world, ie, in terms of the joint distribution of A, Y and L. In other words, we have linked the data generating mechanism with the joint distribution – we have “identified” E(Ya). We can therefore estimate E(Ya).
This identifying expression is valid if and only if L was the only confounder. If we had not observed sufficient variables to obtain conditional exchangeability, it would not be possible to identify the distribution of Ya: there would be intractable confounding.
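The three-step derivation above translates directly into an estimator. A minimal sketch (the data layout and names are illustrative, not from the text):

```python
from collections import defaultdict

def standardized_mean(data, a):
    """Estimate E(Ya) by standardization:
    E(Ya) = sum over l of E(Y | A=a, L=l) * Pr(L=l).
    `data` is a list of (l, a, y) triples; the layout is illustrative."""
    counts = defaultdict(int)      # for the marginal Pr(L=l)
    outcomes = defaultdict(list)   # Y values in stratum (L=l, A=a)
    for l, ai, y in data:
        counts[l] += 1
        if ai == a:
            outcomes[l].append(y)
    n = len(data)
    # Weight each stratum-specific mean of Y by the marginal Pr(L=l):
    return sum((sum(ys) / len(ys)) * (counts[l] / n)
               for l, ys in outcomes.items())

# Confounded toy data: men (l=1) are mostly treated, women (l=0) mostly not.
data = [(1, 1, 1), (1, 1, 1), (1, 0, 1), (0, 0, 0), (0, 0, 0), (0, 1, 0)]
print(standardized_mean(data, a=1))  # 0.5
```

Note that the estimator implicitly relies on positivity: every stratum of L must contain individuals with A=a, or the stratum-specific mean is undefined.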
Identification is the core concept of causal inference: It is what allows us to link the data generating mechanism to the joint distribution, to something that can be observed in the real world.
The difference between epidemiology and biostatistics
Many people see Epidemiology as «Applied Biostatistics». This is a misconception. In reality, epidemiology and biostatistics are completely different parts of the problem. To illustrate what is going on, consider this figure:
The data generating mechanism first creates a joint distribution of observed variables. Then, we sample from the joint distribution to obtain data. Biostatistics asks: If we have a sample, what can we learn about the joint distribution? Epidemiology asks: If we have all the information about the joint distribution, what can we learn about the data generating mechanism? This is a much harder problem, but it can still be analyzed with some rigor.
Epidemiology without Biostatistics is always impossible: It would not be possible to learn about the data generating mechanism without asking questions about the joint distribution. This usually involves sampling. Therefore, we will need good statistical estimators of the joint distribution.
Biostatistics without Epidemiology is usually pointless: The joint distribution of observed variables is simply not interesting in itself. You could make the claim that randomized trials are an example of biostatistics without epidemiology. However, the epidemiology is still there. It is just not necessary to think about it, because the epidemiologic part of the analysis is trivial.
Note that the word “bias” means different things in Epidemiology and Biostatistics. In Biostatistics, “bias” is a property of a statistical estimator: We talk about whether ŷ is a biased estimator of E(Y|A). If an estimator is biased, it means that when you use data from a sample to make inferences about the joint distribution in the population the sample came from, there will be a systematic source of error.
In Epidemiology, “bias” means that you are estimating the wrong thing: Epidemiological bias is a question about whether E(Y|A) is a valid identification of E(Ya). If there is epidemiologic bias, it means that you estimated something in the joint distribution, but that this something does not answer the question you were interested in.
These are completely different concepts. Both are important and can lead to your estimates being wrong. It is possible for a statistically valid estimator to be biased in the epidemiologic sense, and vice versa. For your results to be valid, your estimator must be unbiased in both senses.
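A toy numerical illustration of the distinction (all numbers invented): with a confounder L and a treatment A that does nothing, the crude association is a perfectly good statistical estimand, yet it is epidemiologically biased for the causal effect:

```python
# Hypothetical confounded data as (l, a, y) triples: L causes both
# treatment A and outcome Y, while A itself has no effect at all.
data = ([(1, 1, 1)] * 40 + [(1, 0, 1)] * 10 +
        [(0, 1, 0)] * 10 + [(0, 0, 0)] * 40)

def mean_y(rows):
    return sum(y for _, _, y in rows) / len(rows)

# The crude contrast E(Y|A=1) - E(Y|A=0) can be estimated without any
# statistical bias, but it does not identify the causal contrast
# E(Ya=1) - E(Ya=0), which is zero here: epidemiologic bias.
crude = (mean_y([r for r in data if r[1] == 1])
         - mean_y([r for r in data if r[1] == 0]))
print(round(crude, 1))  # 0.6, despite a true causal effect of zero

# Within each stratum of the confounder, the contrast vanishes:
for l in (0, 1):
    stratum = [r for r in data if r[0] == l]
    contrast = (mean_y([r for r in stratum if r[1] == 1])
                - mean_y([r for r in stratum if r[1] == 0]))
    print(l, contrast)
```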
Analogy gets a bad rap around here, and not without reason. The kinds of argument from analogy condemned in the above links fully deserve the condemnation they get. Still, I think it's too easy to read them and walk away thinking "Boo analogy!" when not all uses of analogy are bad. The human brain seems to have hardware support for thinking in analogies, and I don't think this capability is a waste of resources, even in our highly non-ancestral environment. So, assuming that the linked posts do a sufficient job detailing the abuse and misuse of analogy, I'm going to go over some legitimate uses.
The first thing analogy is really good for is description. Take the plum pudding atomic model. I still remember this falsified proposal of negative 'raisins' in positive 'dough' largely because of the analogy, and I don't think anyone ever attempted to use it to argue for the existence of tiny subnuclear particles corresponding to cinnamon.
But this is only a modest example of what analogy can do. The following is an example that I think starts to show the true power: my comment on Robin Hanson's 'Don't Be "Rationalist"'. To summarize, Robin argued that since you can't be rationalist about everything you should budget your rationality and only be rational about the most important things; I replied that maybe rationality is like weightlifting, where your strength is finite yet it increases with use. That comment is probably the most successful thing I've ever written on the rationalist internet in terms of the attention it received, including direct praise from Eliezer and a shoutout in a Scott Alexander (yvain) post, and it's pretty much just an analogy.
Here's another example, this time from Eliezer. As part of the AI-Foom debate, he tells the story of Fermi's nuclear experiments, and in particular his precise knowledge of when a pile would go supercritical.
What do the above analogies accomplish? They provide counterexamples to universal claims. In my case, Robin's inference that rationality should be spent sparingly proceeded from the stated premise that no one is perfectly rational about anything, and weightlifting was a counterexample to the implicit claim 'a finite capacity should always be directed solely towards important goals'. If you look above my comment, anon had already said that the conclusion hadn't been proven, but without the counterexample this claim had much less impact.
In Eliezer's case, "you can never predict an unprecedented unbounded growth" is the kind of claim that sounds really convincing. "You haven't actually proved that" is a weak-sounding retort; "Fermi did it" immediately wins the point.
The final thing analogies do really well is crystallize patterns. For an example of this, let's turn to... Failure by Analogy. Yep, the anti-analogy posts are themselves written almost entirely via analogy! Alchemists who glaze lead with lemons and would-be aviators who put beaks on their machines are invoked to crystallize the pattern of 'reasoning by similarity'. The post then makes the case that neural-net worshippers are reasoning by similarity in just the same way, making the same fundamental error.
It's this capacity that makes analogies so dangerous. Crystallizing a pattern can be so mentally satisfying that you don't stop to question whether the pattern applies. The antidote to this is the question, "Why do you believe X is like Y?" Assessing the answer and judging deep similarities from superficial ones may not always be easy, but just by asking you'll catch the cases where there is no justification at all.
Summary: 'Overhead expenses' (CEO salary, percentage spent on fundraising) are often deemed a poor measure of charity effectiveness by Effective Altruists, and so they disprefer means of charity evaluation which rely on these. However, 'funding cannibalism' suggests that these metrics (and the norms that engender them) have value: if fundraising is broadly a zero-sum game between charities, then there's a commons problem where all charities could spend less money on fundraising and all do more good, but each is locally incentivized to spend more. Donor norms against increasing spending on zero-sum 'overheads' might be a good way of combating this. This valuable collective action of donors may explain the apparent underutilization of fundraising by charities, and perhaps should make us cautious in undermining it.
The EA critique of charity evaluation
Pre-Givewell, the common means of evaluating charities (Guidestar, Charity Navigator) used a mixture of governance checklists and 'overhead indicators'. Charities would gain points both for having features associated with good governance (being transparent in the right ways, balancing budgets, the right sorts of corporate structure) and for spending their money on programs and avoiding 'overhead expenses' like administration and (especially) fundraising. For shorthand, call this 'common sense' evaluation.
The standard EA critique is that common sense evaluation doesn't capture what is really important: outcomes. It is easy to imagine charities that look really good to common sense evaluation yet have negligible (or negative) outcomes. In the case of overheads, it becomes unclear whether these are even proxy measures of efficacy. Any fundraising that still 'turns a profit' looks like a good deal, whether it comprises five percent of a charity's spending or fifty.
A summary of the EA critique of common sense evaluation is that its myopic focus on these metrics gives pathological incentives, as these metrics frequently lie anti-parallel to maximizing efficacy. To score well on these evaluations, charities may be encouraged to raise less money, hire less able staff, and cut corners in their own management, even if doing these things would be false economies.
Funding cannibalism and commons tragedies
In the wake of the ALS 'Ice bucket challenge', Will MacAskill suggested there is considerable 'funding cannibalism' in the non-profit sector. Instead of the Ice bucket challenge 'raising' money for ALS, it has taken money that would have been donated to other causes instead - cannibalizing other causes. Rather than each charity raising funds independently of one another, they compete for a fairly fixed pie of aggregate charitable giving.
The 'cannibalism' thesis is controversial, but looks plausible to me, especially when looking at 'macro' indicators: the proportion of household spending given to charity looks pretty fixed whilst fundraising has increased dramatically, for example.
If true, cannibalism is important. As MacAskill points out, the tens of millions of dollars raised for ALS are no longer an untrammelled good, alloyed as they are with the opportunity cost of whatever other causes they have cannibalized (q.v.). There's also a more general consideration: if there is a fixed pot of charitable giving insensitive to aggregate fundraising, then fundraising becomes a commons problem. If all charities could spend less on their fundraising, none would lose out, so all could spend more of their funds on their programs. However, for any alone to spend less on fundraising allows the others to cannibalize it.
Civilizing Charitable Cannibals, and Metric Meta-Myopia
Coordination among charities to avoid this commons tragedy is far-fetched. Yet coordination of donors on shared norms about 'overhead ratio' can help. By penalizing a charity for spending too much on zero-sum games with other charities like fundraising, donors can stop a race-to-the-bottom fundraising free-for-all and the burning of the charitable commons it implies. The apparently-high marginal return to fundraising might suggest this is already in effect (and effective!).
The contrarian take would be that it is the EA critique of charity evaluation which is myopic, not the charity evaluation itself - by looking at the apparent benefit for a single charity of more overhead, the EA critique ignores the broader picture of the non-profit ecosystem, and its attack undermines a key environmental protection of an important commons - further, one which the right tail of the most effective charities benefits from just as much as the crowd of 'great unwashed' other causes. (Fundraising ability and efficacy look like they should be pretty orthogonal. Besides, if they correlate well enough that you'd expect the most efficacious charities to win the zero-sum fundraising game, couldn't you dispense with Givewell and give to the best fundraisers?)
The contrarian view probably goes too far. Although there's a case for communally caring about fundraising overheads, as cannibalism leads us to guess it is zero sum, parallel reasoning is hard to apply to administration overhead: charity X doesn't lose out if charity Y spends more on management, but charity Y is still penalized by common sense evaluation even if its overall efficacy increases. I'd guess that features like executive pay lie somewhere in the middle: non-profit executives could be poached by for-profit industries, so it is not as simple as donors prodding charities to coordinate to lower executive pay; but donors can prod charities not to throw away whatever 'non-profit premium' they do have in competing with one another for top talent (c.f.). If so, we should castigate people less for caring about overhead, even if we still want to encourage them to care about efficacy too.
The invisible hand of charitable pan-handling
If true, it is unclear whether the story that should be told is 'common sense was right all along and the EA movement overconfidently criticised' or 'a stopped clock is right twice a day, and the generally wrong-headed common sense had an unintended feature amongst the bugs'. I'd lean towards the latter, simply because the advocates of the common sense approach have not (to my knowledge) articulated these considerations themselves.
However, many of us believe the implicit machinery of the market can turn without many of the actors within it having any explicit understanding of it. Perhaps the same applies here. If so, we should be less confident in claiming the status quo is pathological and we can do better: there may be a rationale eluding both us and its defenders.
Applied Causal Inference for Observational Research
This sequence is an introduction to basic causal inference. It was originally written as auxiliary notes for a course in Epidemiology, but it is relevant to almost any kind of applied statistical research, including econometrics, sociology, psychology, political science etc. I would not be surprised if you guys find a lot of errors, and I would be very grateful if you point them out in the comments. This will help me improve my course notes and potentially help me improve my understanding of the material.
For mathematically inclined readers, I recommend skipping this sequence and instead reading Pearl's book on Causality. There is also a lot of good material on causal graphs on Less Wrong itself. Also, note that my thesis advisor is writing a book that covers the same material in more detail, the first two parts are available for free at his website.
Pearl's book, Miguel's book and Eliezer's writings are all more rigorous and precise than my sequence. This is partly because I have a different goal: Pearl and Eliezer are writing for mathematicians and theorists who may be interested in contributing to the theory. Instead, I am writing for consumers of science who want to understand correlation studies from the perspective of a more rigorous epistemology.
I will use Epidemiological/Counterfactual notation rather than Pearl's notation. I apologize if this is confusing. These two approaches refer to the same mathematical objects; it is just a different notation. Whereas Pearl would use the "Do-Operator" E[Y|do(a)], I use counterfactual variables E[Ya]. Instead of using Pearl's "Do-Calculus" for identification, I use Robins' G-Formula, which will give the same results.
For all applications, I will use the letter "A" to represent "treatment" or "exposure" (the thing we want to estimate the effect of), Y to represent the outcome, L to represent any measured confounders, and U to represent any unmeasured confounders.
Outline of Sequence:
I hope to publish one post every week. I have rough drafts for the following eight sections, and will keep updating this outline with links as the sequence develops:
Part 0: Sequence Announcement / Introduction (This post)
Part 1: Basic Terminology and the Assumptions of Causal Inference
Part 2: Graphical Models
Part 3: Using Causal Graphs to Understand Bias
Part 4: Time-Dependent Exposures
Part 5: The G-Formula
Part 6: Inverse Probability Weighting
Part 7: G-Estimation of Structural Nested Models and Instrumental Variables
Part 8: Single World Intervention Graphs, Cross-World Counterfactuals and Mediation Analysis
Introduction: Why Causal Inference?
The goal of applied statistical research is almost always to learn about causal effects. However, causal inference from observational data is hard, to the extent that it is usually not even possible without strong, almost heroic assumptions. Because of the inherent difficulty of the task, many old-school investigators were trained to avoid making causal claims. Words like “cause” and “effect” were banished from polite company, and the slogan “correlation does not imply causation” became an article of faith which, when said loudly enough, seemingly absolved the investigators from the sin of making causal claims.
However, readers were not fooled: They always understood that epidemiologic papers were making causal claims. Of course they were making causal claims; why else would anybody be interested in a paper about the correlation between two variables? For example, why would anybody want to know about the correlation between eating nuts and longevity, unless they were wondering if eating nuts would cause them to live longer?
When readers interpreted these papers causally, were they simply ignoring the caveats, drawing conclusions that were not intended by the authors? Of course they weren’t. The discussion sections of epidemiologic articles are full of “policy implications” and speculations about biological pathways that are completely contingent on interpreting the findings causally. Quite clearly, no matter how hard the investigators tried to deny it, they were making causal claims. However, they were using methodology that was not designed for causal questions, and did not have a clear language for reasoning about where the uncertainty about causal claims comes from.
This was not sustainable, and inevitably led to a crisis of confidence, which culminated when some high-profile randomized trials showed completely different results from the preceding observational studies. In one particular case, when the Women’s Health Initiative trial showed that post-menopausal hormone replacement therapy increases the risk of cardiovascular disease, the difference was so dramatic that many thought-leaders in clinical medicine completely abandoned the idea of inferring causal relationships from observational data.
It is important to recognize that the problem was not that the results were wrong. The problem was that there was uncertainty that was not taken seriously by the investigators. A rational person who wants to learn about the world will be willing to accept that studies have margins of error, but only as long as the investigators make a good-faith effort to examine what the sources of error are, and communicate clearly about this uncertainty to their readers. Old-school epidemiology failed at this. We are not going to make the same mistake. Instead, we are going to develop a clear, precise language for reasoning about uncertainty and bias.
In this context, we are going to talk about two sources of uncertainty – “statistical” uncertainty and “epidemiological” uncertainty.
We are going to use the word “Statistics” to refer to the theory of how we can learn about correlations from limited samples. For statisticians, the primary source of uncertainty is sampling variability. Statisticians are very good at accounting for this type of uncertainty: Concepts such as “standard errors”, “p-values” and “confidence intervals” are all attempts at quantifying and communicating the extent of uncertainty that results from sampling variability.
The old school of epidemiology would tell you to stop after you had found the correlations and accounted for the sampling variability. They believed going further was impossible. However, correlations are simply not interesting. If you truly believed that correlations tell you nothing about causation, there would be no point in doing the study.
Therefore, we are going to use the terms “Epidemiology” or “Causal Inference” to refer to the next stage in the process: Learning about causation from correlations. This is a much harder problem, with many additional sources of uncertainty, including confounding and selection bias. However, recognizing that the problem is hard does not mean that you shouldn't try, it just means that you have to be careful. As we will see, it is possible to reason rigorously about whether correlation really does imply causation in your particular study: You will just need a precise language. The goal of this sequence is simply to give you such a language.
In order to teach you the logic of this language, we are going to make several controversial statements such as «The only way to estimate a causal effect is to run a randomized controlled trial». You may not be willing to believe this at first, but in order to understand the logic of causal inference, it is necessary that you are at least willing to suspend your disbelief and accept it as true within the course.
It is important to note that we are not just saying this to try to convince you to give up on observational studies in favor of randomized controlled trials. We are making this point because understanding it is necessary in order to appreciate what it means to control for confounding: It is not possible to give a coherent meaning to the word “confounding” unless one is trying to determine whether it is reasonable to model the data as if it came from a complex randomized trial run by nature.
When we say that causal inference is hard, what we mean by this is not that it is difficult to learn the basic concepts of the theory. What we mean is that even if you fully understand everything that has ever been written about causal inference, it is going to be very hard to infer a causal relationship from observational data, and there will always be uncertainty about the results. This is why this sequence is not going to be a workshop that teaches you how to apply magic causal methodology. What we are interested in is developing your ability to reason honestly about where uncertainty and bias come from, so that you can communicate this to the readers of your studies. What we want to teach you about is the epistemology that underlies epidemiological and statistical research with observational data.
Insisting on only using randomized trials may seem attractive to a purist, but it does not take much imagination to see that there are situations where it is important to predict the consequences of an action, but where it is not possible to run a trial. In such situations, there may be Bayesian evidence to be found in nature. This evidence comes in the form of correlations in observational data. When we are stuck with this type of evidence, it is important that we have a clear framework for assessing the strength of the evidence.
I am publishing Part 1 of the sequence at the same time as this introduction. I would be very interested in hearing feedback, particularly about whether people feel this has already been covered in sufficient detail on Less Wrong. If there is no demand, there won't really be any point in transforming the rest of my course notes to a Less Wrong format.
Thanks to everyone who had a look at this before I published, including paper-machine and Vika, Janos, Eloise and Sam from the Boston Meetup group.
Summary: I don't think 'politics is the mind-killer' works well rhetorically. I suggest 'politics is hard mode' instead.
My usual first objection is that it seems odd to single politics out as a “mind-killer” when there’s plenty of evidence that tribalism happens everywhere. Recently, there has been a whole kerfuffle within the field of psychology about replication of studies. Of course, some key studies have failed to replicate, leading to accusations of “bullying” and “witch-hunts” and what have you. Some of the people involved have since walked their language back, but it was still a rather concerning demonstration of mind-killing in action. People took “sides,” people became upset at people based on their “sides” rather than their actual opinions or behavior, and so on.
Unless this article refers specifically to electoral politics and Democrats and Republicans and things (not clear from the wording), “politics” is such a frightfully broad category of human experience that writing it off entirely as a mind-killer that cannot be discussed or else all rationality flies out the window effectively prohibits a large number of important issues from being discussed, by the very people who can, in theory, be counted upon to discuss them better than most. Is it “politics” for me to talk about my experience as a woman in gatherings that are predominantly composed of men? Many would say it is. But I’m sure that these groups of men stand to gain from hearing about my experiences, since some of them are concerned that so few women attend their events.
In this article, Eliezer notes, “Politics is an important domain to which we should individually apply our rationality — but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.” But that means that we all have to individually, privately apply rationality to politics without consulting anyone who can help us do this well. After all, there is no such thing as a discussant who is “rational”; there is a reason the website is called “Less Wrong” rather than “Not At All Wrong” or “Always 100% Right.” Assuming that we are all trying to be more rational, there is nobody better to discuss politics with than each other.
The rest of my objection to this meme has little to do with this article, which I think raises lots of great points, and more to do with the response that I’ve seen to it — an eye-rolling, condescending dismissal of politics itself and of anyone who cares about it. Of course, I’m totally fine if a given person isn’t interested in politics and doesn’t want to discuss it, but then they should say, “I’m not interested in this and would rather not discuss it,” or “I don’t think I can be rational in this discussion so I’d rather avoid it,” rather than sneeringly reminding me “You know, politics is the mind-killer,” as though I am an errant child. I’m well-aware of the dangers of politics to good thinking. I am also aware of the benefits of good thinking to politics. So I’ve decided to accept the risk and to try to apply good thinking there. [...]
I’m sure there are also people who disagree with the article itself, but I don’t think I know those people personally. And to add a political dimension (heh), it’s relevant that most non-LW people (like me) initially encounter “politics is the mind-killer” being thrown out in comment threads, not through reading the original article. My opinion of the concept improved a lot once I read the article.
In the same thread, Andrew Mahone added, “Using it in that sneering way, Miri, seems just like a faux-rationalist version of ‘Oh, I don’t bother with politics.’ It’s just another way of looking down on any concerns larger than oneself as somehow dirty, only now, you know, rationalist dirty.” To which Miri replied: “Yeah, and what’s weird is that that really doesn’t seem to be Eliezer’s intent, judging by the eponymous article.”
Eliezer replied briefly, to clarify that he wasn't generally thinking of problems that can be directly addressed in local groups (but happen to be politically charged) as "politics":
Hanson’s “Tug the Rope Sideways” principle, combined with the fact that large communities are hard to personally influence, explains a lot in practice about what I find suspicious about someone who claims that conventional national politics are the top priority to discuss. Obviously local community matters are exempt from that critique! I think if I’d substituted ‘national politics as seen on TV’ in a lot of the cases where I said ‘politics’ it would have more precisely conveyed what I was trying to say.
But that doesn't resolve the issue. Even if local politics is more instrumentally tractable, the worry about polarization and factionalization can still apply, and may still make it a poor epistemic training ground.
A subtler problem with banning “political” discussions on a blog or at a meet-up is that it’s hard to do fairly, because our snap judgments about what counts as “political” may themselves be affected by partisan divides. In many cases the status quo is thought of as apolitical, even though objections to the status quo are ‘political.’ (Shades of Pretending to be Wise.)
Because politics gets personal fast, it’s hard to talk about it successfully. But if you’re trying to build a community, build friendships, or build a movement, you can’t outlaw everything ‘personal.’
And selectively outlawing personal stuff gets even messier. Last year, daenerys shared anonymized stories from women, including several that discussed past experiences where the writer had been attacked or made to feel unsafe. If those discussions are made off-limits because they relate to gender and are therefore ‘political,’ some folks may take away the message that they aren’t allowed to talk about, e.g., some harmful or alienating norm they see at meet-ups. I haven’t seen enough discussions of this failure mode to feel super confident people know how to avoid it.
Since this is one of the LessWrong memes that’s most likely to pop up in cross-subcultural dialogues (along with the even more ripe-for-misinterpretation “policy debates should not appear one-sided”…), as a first (very small) step, my action proposal is to obsolete the ‘mind-killer’ framing. A better phrase for getting the same work done would be ‘politics is hard mode’:
1. ‘Politics is hard mode’ emphasizes that ‘mind-killing’ (= epistemic difficulty) is quantitative, not qualitative. Some things might instead fall under Middlingly Hard Mode, or under Nightmare Mode…
2. ‘Hard’ invites the question ‘hard for whom?’, more so than ‘mind-killer’ does. We’re used to the fact that some people and some contexts change what’s ‘hard’, so it’s a little less likely we’ll universally generalize.
3. ‘Mindkill’ connotes contamination, sickness, failure, weakness. In contrast, ‘Hard Mode’ doesn’t imply that a thing is low-status or unworthy. As a result, it’s less likely to create the impression (or reality) that LessWrongers or Effective Altruists dismiss out-of-hand the idea of hypothetical-political-intervention-that-isn’t-a-terrible-idea. Maybe some people do want to argue for the thesis that politics is always useless or icky, but if so it should be done in those terms, explicitly — not snuck in as a connotation.
4. ‘Hard Mode’ can’t readily be perceived as a personal attack. If you accuse someone of being ‘mindkilled’, with no context provided, that smacks of insult — you appear to be calling them stupid, irrational, deluded, or the like. If you tell someone they’re playing on ‘Hard Mode,’ that’s very nearly a compliment, which makes your advice that they change behaviors a lot likelier to go over well.
5. ‘Hard Mode’ doesn’t risk bringing to mind (e.g., gendered) stereotypes about communities of political activists being dumb, irrational, or overemotional.
6. ‘Hard Mode’ encourages a growth mindset. Maybe some topics are too hard to ever be discussed. Even so, ranking topics by difficulty encourages an approach where you try to do better, rather than merely withdrawing. It may be wise to eschew politics, but we should not fear it. (Fear is the mind-killer.)
7. Edit: One of the larger engines of conflict is that people are so much worse at noticing their own faults and biases than noticing others'. People will be relatively quick to dismiss others as 'mindkilled,' while frequently flinching away from or just-not-thinking 'maybe I'm a bit mindkilled about this.' Framing the problem as a challenge rather than as a failing might make it easier to be reflective and even-handed.
This is not an attempt to get more people to talk about politics. I think this is a better framing whether or not you trust others (or yourself) to have productive political conversations.
When I playtested this post, Ciphergoth raised the worry that 'hard mode' isn't scary-sounding enough. As dire warnings go, it's light-hearted—exciting, even. To which I say: good. Counter-intuitive fears should usually be argued into people (e.g., via Eliezer's politics sequence), not connotation-ninja'd or chanted at them. The cognitive content is more clearly conveyed by 'hard mode,' and if some group (people who love politics) stands to gain the most from internalizing this message, the message shouldn't cast that very group (people who love politics) in an obviously unflattering light. LW seems fairly memetically stable, so the main issue is what would make this meme infect friends and acquaintances who haven't read the sequences. (Or Dune.)
If you just want a scary personal mantra to remind yourself of the risks, I propose 'politics is SPIDERS'. Though 'politics is the mind-killer' is fine there too.
If you and your co-conversationalists haven’t yet built up a lot of trust and rapport, or if tempers are already flaring, conveying the message ‘I’m too rational to discuss politics’ or ‘You’re too irrational to discuss politics’ can make things worse. In that context, ‘politics is the mind-killer’ is the mind-killer. At least, it’s a needlessly mind-killing way of warning people about epistemic hazards.
‘Hard Mode’ lets you speak as the Humble Aspirant rather than the Aloof Superior. Strive to convey: ‘I’m worried I’m too low-level to participate in this discussion; could you have it somewhere else?’ Or: ‘Could we talk about something closer to Easy Mode, so we can level up together?’ More generally: If you’re worried that what you talk about will impact group epistemology, you should be even more worried about how you talk about it.
Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming
I recently asked two questions on Quora with similar question structures, and the similarities and differences between the responses were interesting.
Question #1: Anthropogenic global warming, the greenhouse effect, and the historical weather record
I asked the question here. Question statement:
If you believe in Anthropogenic Global Warming (AGW), to what extent is your belief informed by the theory of the greenhouse effect, and to what extent is it informed by the historical temperature record?
In response to some comments, I added the following question details:
I also posted to Facebook here asking my friends about the pushback to my use of the term "belief" in my question.
Question #2: Effect of increase in the minimum wage on unemployment
I asked the question here. Question statement:
If you believe that raising the minimum wage is likely to increase unemployment, to what extent is your belief informed by the theory of supply and demand and to what extent is it informed by direct empirical evidence?
I added the following question details:
By "direct empirical evidence" I am referring to empirical evidence that directly pertains to the relation between minimum wage raises and employment level changes, not empirical evidence that supports the theory of supply and demand in general (because transferring that to the minimum wage context would require one to believe the transferability of the theory).
Also, when I say "believe that raising the minimum wage is likely to increase unemployment" I am talking about minimum wage increases of the sort often considered in legislative measures, and by "likely" I just mean that it's something that should always be seriously considered whenever a proposal to raise the minimum wage is made. The belief would be consistent with believing that in some cases minimum wage raises have no employment effects.
I also posted the question to Facebook here.
Similarities between the questions
The questions are structurally similar, and belong to a general question type of considerable interest to the LessWrong audience. The common features of the questions:
- In both cases, there is a theory (the greenhouse effect for Question #1, and supply and demand for Question #2) that is foundational to the domain and is supported through a wide range of lines of evidence.
- In both cases, the quantitative specifics of the extent to which the theory applies in the particular context are not clear. There are prima facie plausible arguments that other factors may cancel out the effect and there are arguments for many different effect sizes.
- In both cases, people who study the broad subject (climate scientists for Question #1, economists for Question #2) are more favorably disposed to the belief than people who do not study the broad subject.
- In both cases, a significant part of the strength of belief of subject matter experts seems to be their belief in the theory. The data, while consistent with the theory, does not seem to paint a strong picture in isolation. For the minimum wage, consider the Card and Krueger study. Bryan Caplan discusses how Bayesian reasoning with strong theoretical priors can lead one to continue believing that minimum wage increases cause unemployment to rise, without addressing Card and Krueger at the object level. For the case of anthropogenic global warming, consider the draft by Kesten C. Green (addressing whether a warming-based forecast has higher forecast accuracy than a no-change forecast) or the paper AGW doesn't cointegrate by Beenstock, Reingewertz, and Paldor (addressing whether, looking at the data alone, we can get good evidence that carbon dioxide concentration increases are linked with temperature increases).
- In both cases, outsiders to the domain, who nonetheless have expertise in other areas that one might expect gives them insight into the question, are often more skeptical of the belief. A number of weather forecasters, physicists, and forecasting experts are skeptical of long-range climate forecasting or confident assertions about anthropogenic global warming. A number of sociologists, lawyers, and politicians are disparaging of the belief that minimum wage increases cause unemployment levels to rise. The criticism in both cases is the same: namely, that a basically correct theory is being overstretched or incorrectly applied to a situation that is too complex.
- In both cases, the debate is somewhat politically charged, largely because one's beliefs here affect one's views of proposed legislation (climate change mitigation legislation and minimum wage increase legislation). The anthropogenic global warming belief is more commonly associated with environmentalists, social democrats, and progressives, and (in the United States) with Democrats, whereas opposition to it is more common among conservatives and libertarians. The minimum wage belief is more commonly associated with free market views and (in the United States) with conservatives and Republicans, and opposition to it is more common among progressives and social democrats.
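The Bayesian dynamic mentioned in the list above, where a strong theoretical prior survives an ambiguous study, can be sketched as a toy calculation. All the numbers here are illustrative assumptions, not estimates from the economics literature:

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes' rule for a binary hypothesis, via the odds form."""
    odds = (prior / (1 - prior)) * (p_data_given_h / p_data_given_not_h)
    return odds / (1 + odds)

# Suppose theory alone gives a 0.9 prior that moderate minimum-wage
# hikes raise unemployment, and that a single ambiguous study (Card
# and Krueger, say) is twice as likely if the hypothesis is false.
p = posterior(0.9, 0.2, 0.4)
print(round(p, 3))  # 0.818: the belief survives the contrary study
```

With a 0.5 prior the same study would push the belief below 0.5, which is roughly Caplan's point: the disagreement is driven by priors, not by the study itself.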
Looking for help
I'm interested in thoughts from the people here on these questions:
- Thoughts on the specifics of Question #1 and Question #2.
- Other possible questions in the same reference class (where a belief arises from a mix of theory and data, and the theory plays a fairly big role in driving the belief, while the data on its own is very ambiguous).
- Other similarities between Question #1 and Question #2.
- Ways that Question #1 and Question #2 are disanalogous.
- General thoughts on how this relates to Bayesian reasoning and other modes of belief formation based on a combination of theory and data.
EDIT: The fundraiser was successfully completed, raising the full $500 for worthwhile charities. Yay!
Today's my birthday! And per Peter Hurford's suggestion, I'm holding a birthday fundraiser to help raise money for MIRI, GiveDirectly, and Mercy for Animals. If you like my activity on LW or elsewhere, please consider giving a few dollars to one of these organizations via the fundraiser page. You can specify which organization you wish to donate to in the comment of the donation, or just leave it unspecified, in which case I'll give your donation to MIRI.
If you don't happen to be particularly altruistically motivated, just consider it a birthday gift to me - it will give me warm fuzzies to know that I helped move money for worthy organizations. And if you are altruistically motivated but don't care about me in particular, maybe you still can get yourself to donate more than usual by hacky stuff like someone you know on the Internet having a birthday. :)
If someone else wants to hold their own birthday fundraiser, here are some tips: birthday fundraisers.
This post doesn't contain any new ideas that LWers don't already know. It's more of an attempt to organize my thoughts and have a writeup for future reference.
Here's a great quote from Sam Hughes, giving some examples of good and bad advice:
"You and your gaggle of girlfriends had a saying at university," he tells her. "'Drink through it'. Breakups, hangovers, finals. I have never encountered a shorter, worse, more densely bad piece of advice." Next he goes into their bedroom for a moment. He returns with four running shoes. "You did the right thing by waiting for me. Probably the first right thing you've done in the last twenty-four hours. I subscribe, as you know, to a different mantra. So we're going to run."
The typical advice given to young people who want to succeed in highly competitive areas, like sports, writing, music, or making video games, is to "follow your dreams". I think that advice is up there with "drink through it" in terms of sheer destructive potential. If it was replaced with "don't bother following your dreams" every time it was uttered, the world might become a happier place.
The amazing thing about "follow your dreams" is that thinking about it uncovers a sort of perfect storm of biases. It's fractally wrong, like PHP, where the big picture is wrong and every small piece is also wrong in its own unique way.
The big culprit is, of course, optimism bias due to perceived control. I will succeed because I'm me, the special person at the center of my experience. That's the same bias that leads us to overestimate our chances of finishing the thesis on time, or having a successful marriage, or any number of other things. Thankfully, we have a really good debiasing technique for this particular bias, known as reference class forecasting, or inside vs outside view. What if your friend Bob was a slightly better guitar player than you? Would you bet a lot of money on Bob making it big like Jimi Hendrix? The question is laughable, but then so is betting the years of your own life, with a smaller chance of success than Bob.
That still leaves many questions unanswered, though. Why do people offer such advice in the first place, why do other people follow it, and what can be done about it?
Survivorship bias is one big reason we constantly hear successful people telling us to "follow our dreams". Successful people don't really know why they are successful, so they attribute it to their hard work and not giving up. The media amplifies that message, while millions of failures go unreported because they're not celebrities, even though they try just as hard. So we hear about successes disproportionately, in comparison to how often they actually happen, and that colors our expectations of our own future success. Sadly, I don't know of any good debiasing techniques for this error, other than just reminding yourself that it's an error.
When someone has invested a lot of time and effort into following their dream, it feels harder to give up due to the sunk cost fallacy. That happens even with very stupid dreams, like the dream of winning at the casino, that were obviously installed by someone else for their own profit. So when you feel convinced that you'll eventually make it big in writing or music, you can remind yourself that compulsive gamblers feel the same way, and that feeling something doesn't make it true.
Of course there are good dreams and bad dreams. Some people have dreams that don't tease them for years with empty promises, but actually start paying off in a predictable time frame. The main difference between the two kinds of dream is the difference between positive-sum games, a.k.a. productive occupations, and zero-sum games, a.k.a. popularity contests. Sebastian Marshall's post Positive Sum Games Don't Require Natural Talent makes the same point, and advises you to choose a game where you can be successful without outcompeting 99% of other players.
The really interesting question to me right now is, what sets someone on the path of investing everything in a hopeless dream? Maybe it's a small success at an early age, followed by some random encouragement from others, and then you're locked in. Is there any hope for thinking back to that moment, or set of moments, and making a little twist to put yourself on a happier path? I usually don't advise people to change their desires, but in this case it seems to be the right thing to do.
Related to: Policy Debates Should Not Appear One-Sided
There is a well-known fable which runs thus:
“Driven by hunger, a fox tried to reach some grapes hanging high on the vine but was unable to, although he leaped with all his strength. As he went away, the fox remarked 'Oh, you aren't even ripe yet! I don't need any sour grapes.' People who speak disparagingly of things that they cannot attain would do well to apply this story to themselves.”
This gives rise to the common expression ‘sour grapes’, referring to a situation in which one incorrectly claims to not care about something to save face or feel better after being unable to get it.
This seems to be related to a general phenomenon, in which motivated cognition leads one to flinch away from the prospect of an action that is inconvenient or painful in the short term by concluding that a less-painful option strictly dominates the more-painful one.
In the fox’s case, the allegedly-dominating option is believing (or professing) that he did not want the grapes. This spares him the pain of feeling impotent in face of his initial failure, or the embarrassment of others thinking him to have failed. If he can’t get the grapes anyway, then he might as well erase the fact that he ever wanted them, right? The problem is that considering this line of reasoning will make it more tempting to conclude that the option really was dominating—that he really couldn’t have gotten the grapes. But maybe he could’ve gotten the grapes with a bit more work—by getting a ladder, or making a hook, or Doing More Squats in order to Improve His Vert.
The fable of the fox and the grapes doesn’t feel like a perfect fit, though, because the fox doesn’t engage in any conscious deliberation before giving up on sour grapes; the whole thing takes place subconsciously. Here are some other examples that more closely illustrate the idea of conscious rationalization by use of overly convenient partitions:
“Be who you are and say what you feel, because those who mind don't matter and those who matter don't mind.”
This advice (call it the Seating Fallacy) is neither good in full generality nor bad in full generality. Clearly there are some situations where some person is worrying too much about other people judging them, or is anxious about inconveniencing others without taking their own preferences into account. But there are also clearly situations (like dealing with an unpleasant, incompetent boss) where fully exposing oneself or saying whatever comes into one’s head is unstrategic, even outright disastrous. Without taking into account the specifics of the situation of the recipient of the advice, it is of limited use.
It is convenient to absolve oneself of blame by writing off anybody who challenges our first impulse as someone who ‘doesn’t matter’; it means that if something goes wrong, one can avoid the painful task of analysing and modifying one’s behaviour.
In particular, we have the following corollary:
The Fundamental Fallacy of Dating:
“Be yourself and don’t hide who you are. Be up-front about what you want. If it puts your date off, then they wouldn’t have been good for you anyway, and you’ve dodged a bullet!”
In the short term it is convenient to not have to filter or reflect on what one says (face-to-face) or writes (online dating). In the longer term, having no filter is not a smart way to approach dating. As the biases and heuristics program has shown, people are often mistaken about what they would prefer under reflection, and are often inefficient and irrational in pursuing what they want. There are complicated courtship conventions governing timelines for revealing information about oneself and negotiating preferences that have evolved to work around these irrationalities, to the benefit of both parties. In particular, people are dynamically inconsistent, and willing to compromise a lot more later on in a courtship than they thought they would earlier on; it is often a favour to both of you to respect established boundaries regarding revealing information and getting ahead of the current stage of the relationship.
For those who have not much practised the skill of avoiding triggering Too Much Information reactions, it can feel painful and disingenuous to even try changing their behaviour, and they rationalise it via the Fundamental Fallacy. At any given moment, changing this behaviour is painful and causes a flinch reaction, even though the value of information of trying a different approach might be very high, and might cause less pain (e.g. through reduced loneliness) in the long term.
We also have:
PR rationalization and incrimination:
“There’s already enough ammunition out there if anybody wants to assassinate my character, launch a smear campaign, or perform a hatchet job. Nothing I say at this point could make it worse, so there’s no reason to censor myself.”
This is an overly convenient excuse. It does not take into account, for example, that new statements provide a new opportunity for one to come to the attention of quote miners in the first place, or that different statements might be more or less easy to seed a smear campaign; ammunition can vary in type and accessibility, so that adding more can increase the convenience of a hatchet job. It might turn out, after weighing the costs and benefits, that speaking honestly is the right decision. But one can’t know that on the strength of a convenient deontological argument that doesn’t consider those costs. Similarly:
“I’ve already pirated so much stuff I’d be screwed if I got caught. Maybe it was unwise and impulsive at first, but by now I’m past the point of no return.”
This again fails to take into account the increased risk of one’s deeds coming to attention; if most prosecutions are caused by (even if not purely about) offences shortly before the prosecution, and you expect to pirate long into the future, then your position now is the same as when you first pirated; if it was unwise then, then it’s unwise now.
The common fallacy in all these cases is that one looks at only the extreme possibilities, and throws out the inconvenient, ambiguous cases. This results in a disconnected space of possibilities that is engineered to allow one to prove a convenient conclusion. For example, the Seating Fallacy throws out the possibility that there are people who mind but also matter; the Fundamental Fallacy of Dating prematurely rules out people who are dynamically inconsistent or are imperfect introspectors, or who have uncertainty over preferences; PR rationalization fails to consider marginal effects and quantify risks in favour of a lossy binary approach.
What are other examples of situations where people (or Less Wrongers specifically) might fall prey to this failure mode?
In Policy Debates Should Not Appear One-Sided, Eliezer Yudkowsky argues that arguments on questions of fact should be one-sided, whereas arguments on policy questions should not:
On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this. Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.
But there is no reason for complex actions with many consequences to exhibit this onesidedness property.
The reason for this is primarily that natural selection has caused all sorts of observable phenomena. With a bit of ingenuity, we can infer that natural selection has caused them, and hence they become evidence for natural selection. The evidence for natural selection thus has a common cause, which means that we should expect the argument to be one-sided.
In contrast, even if a certain policy, say lower taxes, is the right one, the rightness of this policy does not cause its evidence (or the arguments for this policy, which is a more natural expression), the way natural selection causes its evidence. Hence there is no common cause of all of the valid arguments of relevance for the rightness of this policy, and hence no reason to expect that all of the valid arguments should support lower taxes. If someone nevertheless believes this, the best explanation of their belief is that they suffer from some cognitive bias such as the affect heuristic.
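The common-cause point can be made concrete with a toy simulation: when one hypothesis actually generates the observations, most individual observations carry likelihood ratios favoring it, so the tally of evidence looks one-sided. The probabilities below are arbitrary stand-ins:

```python
import random

def evidence_directions(n_obs, p_h=0.8, p_not_h=0.2, seed=0):
    """In a world where H is true, each observation occurs with
    probability p_h; an occurrence favors H (likelihood ratio
    p_h/p_not_h > 1) and a non-occurrence favors not-H."""
    rng = random.Random(seed)
    favor_h = sum(rng.random() < p_h for _ in range(n_obs))
    return favor_h, n_obs - favor_h

favor, against = evidence_directions(1000)
# favor comes out near 800: a lopsided, "one-sided battle" of evidence
```

For a policy question there is no analogous single generator of all the arguments, so nothing forces the tally to come out lopsided.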
(In passing, I might mention that I think that the fact that moral debates are not one-sided indicates that moral realism is false, since if moral realism were true, moral facts should provide us with one-sided evidence on moral questions, just like natural selection provides us with one-sided evidence on the question of how Earthly life arose. This argument is similar to, but distinct from, Mackie's argument from relativity.)
Now consider another kind of factual issues: multiple factor explanations. These are explanations which refer to a number of factors to explain a certain phenomenon. For instance, in his book Guns, Germs and Steel, Jared Diamond explains the fact that agriculture first arose in the Fertile Crescent by reference to no less than eight factors. I'll just list these factors briefly without going into the details of how they contributed to the rise of agriculture. The Fertile Crescent had, according to Diamond (ch. 8):
- big seeded plants, which were
- abundant and occurring in large stands whose value was obvious,
- and which were to a large degree hermaphroditic "selfers".
- It had a higher percentage of annual plants than other Mediterranean climate zones.
- It had higher diversity of species than other Mediterranean climate zones.
- It had a higher range of elevations than other Mediterranean climate zones.
- It had a great number of domesticable big mammals.
- The hunter-gatherer lifestyle was not that appealing in the Fertile Crescent.
(Note that all of these factors have to do with geographical, botanical and zoological facts, rather than with facts about the humans themselves. Diamond's goal is to prove that agriculture arose in Eurasia due to geographical luck rather than because Eurasians are biologically superior to other humans.)
Diamond does not mention any mechanism that would make it less likely for agriculture to arise in the Fertile Crescent. Hence the score of pro-agriculture vs anti-agriculture factors in the Fertile Crescent is 8-0. Meanwhile, no other area in the world has nearly as many advantages. Diamond does not provide us with a definite list of how other areas of the world fared, but no non-Eurasian alternative seems to score better than about 5-3 (he is primarily interested in comparing Eurasia with other parts of the world).
Now suppose that we didn't know anything about the rise of agriculture, but that we knew that there were eight factors which could influence it. Since these factors would not be caused by the fact that agriculture first arose in the Fertile Crescent, the way the evidence for natural selection is caused by natural selection, there would be no reason to believe that these factors were on average positively probabilistically dependent on each other. Under these conditions, one area having all the advantages and the next best lacking three of them is a highly surprising distribution of advantages. On the other hand, this is precisely the pattern that we would expect given the hypothesis that Diamond suffers from confirmation bias or another related bias. His theory is "too good to be true", which lends support to the hypothesis that he is biased.
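To put a rough number on that surprise, suppose, as a deliberately crude assumption, that each of the eight factors independently favors a given region with probability 1/2:

```python
from math import comb

def p_at_least_k_of_n(k, n, p=0.5):
    """P(at least k of n independent factors favor one region)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_clean_sweep = p_at_least_k_of_n(8, 8)  # 1/256, about 0.4%
```

Even allowing for a handful of candidate regions, a clean 8-0 sweep is a low-probability outcome under independence, which is what makes the pattern suspicious rather than merely impressive.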
In this particular case, some of the factors Diamond lists presumably are positively dependent on each other. Now suppose that someone argues that all of the factors are in fact strongly positively dependent on each other, so that it is not very surprising that they all co-occur. This only pushes the problem back, however, because now we want an explanation of a) what the common cause of all of these dependencies is (it being very improbable that they all would correlate in the absence of such a common cause) and b) how it could be that this common cause increases the probability of the hypothesis via eight independent mechanisms, and doesn't decrease it via any mechanism. (This argument is complicated and I'd be happy to get any input concerning it.)
Single-factor historical explanations are often criticized as being too "simplistic" whereas multiple factor explanations are standardly seen as more nuanced. Many such explanations are, however, one-sided in the way Diamond's explanation is, which indicates bias and dogmatism rather than nuance. (Another salient example I'm presently studying is taken from Steven Pinker's The Better Angels of Our Nature. I can provide you with the details on demand.*) We should be much better at detecting this kind of bias, since it for the most part goes unnoticed at present.
Generally, the sort of "too good to be true" arguments for inferring bias discussed here are strongly under-utilized. As our knowledge of the systematic and predictable ways our thought goes wrong increases, it becomes easier to infer bias from the structure or pattern of people's arguments, statements and beliefs. What we need is to explicate clearly, preferably using probability theory or other formal methods, what factors are relevant for deciding whether some pattern of arguments, statements or beliefs most likely is the result of biased thought-processes. I'm presently doing research on this and would be happy to discuss these questions in detail, either publicly or via pm.
*Edit: Pinker's argument. Pinker's goal is to explain why violence has declined throughout history. In the last chapter, he lists the following factors (the last four of which he discusses only to argue that they don't explain the decline):
- The Leviathan (the increasing influence of the government)
- Gentle commerce (more trade leads to less violence)
- The expanding (moral) circle
- The escalator of reason
- Weaponry and disarmament (he claims that there are no strong correlations between weapon developments and numbers of deaths)
- Resource and power (he claims that there is little connection between resource distributions and wars)
- Affluence (tight correlations between affluence and non-violence are hard to find)
- (Fall of) religion (he claims that atheist countries and people aren't systematically less violent)
Last year, AlexMennen ran a prisoner's dilemma tournament with bots that could see each other's source code, which was dubbed a "program equilibrium" tournament. This year, I will be running a similar tournament. Here's how it's going to work: Anyone can submit a bot that plays the iterated PD against other bots. Bots can not only remember previous rounds, as in the standard iterated PD, but also run perfect simulations of their opponent before making a move. Please see the github repo for the full list of rules and a brief tutorial.
There are a few key differences this year:
1) The tournament is in Haskell rather than Scheme.
2) The time limit for each round is shorter (5 seconds rather than 10) but the penalty for not outputting Cooperate or Defect within the time limit has been reduced.
3) Bots cannot directly see each other's source code, but they can run their opponent, specifying the initial conditions of the simulation, and then observe the output.
All submissions should be emailed to firstname.lastname@example.org or PM'd to me here on LessWrong by September 15th, 2014. LW users with 50+ karma who want to participate but do not know Haskell can PM me with an algorithm/pseudocode, and I will translate it into a bot for them. (If there is a flood of such requests, I would appreciate some volunteers to help me out.)
Let me tell you a parable of the future. Let’s say, 70 years from now, in a large Western country we’ll call Nacirema.
One day far from now: scientific development has continued apace, and a large government project (with, unsurprisingly, a lot of military funding) has taken the scattered pieces of cutting-edge research and put them together into a single awesome technology, which could revolutionize (or at least, vastly improve) all sectors of the economy. Leading thinkers had long forecast that this area of science’s mysteries would eventually yield to progress, despite theoretical confusion and perhaps-disappointing initial results and the scorn of more conservative types and the incomprehension (or outright disgust, for ‘playing god’) of the general population, and at last - it had! The future was bright.
Unfortunately, it was hurriedly decided to use an early prototype outside the lab in an impoverished foreign country. Whether out of arrogance, bureaucratic inertia, overconfidence on the part of the involved researchers, condescending racism, the need to justify the billions of grant-dollars that cumulatively went into the project over the years by showing some use of it - whatever, the reasons no longer mattered after the final order was signed. The technology was used, but the consequences turned out to be horrific: over a brief period of what seemed like mere days, entire cities collapsed and scores - hundreds - of thousands of people died. (Modern economies are extremely interdependent and fragile, and small disruptions can have large consequences; more people died in the chaos of the evacuation of the areas around Fukushima than will die of the radiation.)
At Luke Muehlhauser's request, I wrote a script to scrape all of Robin Hanson's posts to Overcoming Bias into an e-book; here's a first beta release. Please comment here with any problems—posts in the wrong order, broken links, bad formatting, missing posts. Thanks!
This is a thread for rationality-related or LW-related jokes and humor. Please post jokes (new or old) in the comments.
Q: Why are Chromebooks good Bayesians?
A: Because they frequently update!
A super-intelligent AI walks out of a box...
Q: Why did the psychopathic utilitarian push a fat man in front of a trolley?
A: Just for fun.
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.
This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)
- Economic growth has become radically faster over the course of human history. (p1-2)
- This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2)
- Thus history suggests large changes in the growth rate of the economy are plausible. (p2)
- This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity.
- Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans - slow as they are - sustaining such a rapidly growing economy. (p2-3)
- Thus economic history suggests that rapid growth caused by AI is more plausible than you might otherwise think.
The history of AI:
- Human-level AI has been predicted since the 1940s. (p3-4)
- Early predictions were often optimistic about when human-level AI would come, but rarely considered whether it would pose a risk. (p4-5)
- AI research has been through several cycles of relative popularity and unpopularity. (p5-11)
- By around the 1990s, 'Good Old-Fashioned Artificial Intelligence' (GOFAI) techniques based on symbol manipulation gave way to new methods such as artificial neural networks and genetic algorithms. These are widely considered more promising, in part because they are less brittle and can learn from experience more usefully. Researchers have also lately developed a better understanding of the underlying mathematical relationships between various modern approaches. (p5-11)
- AI is very good at playing board games. (p12-13)
- AI is used in many applications today (e.g. hearing aids, route-finders, recommender systems, medical decision support systems, machine translation, face recognition, scheduling, the financial market). (p14-16)
- In general, tasks we thought were intellectually demanding (e.g. board games) have turned out to be easy to do with AI, while tasks which seem easy to us (e.g. identifying objects) have turned out to be hard. (p14)
- An 'optimality notion' is the combination of a rule for learning, and a rule for making decisions. Bostrom describes one of these: a kind of ideal Bayesian agent. This is impossible to actually make, but provides a useful measure for judging imperfect agents against. (p10-11)
Notes on a few things
- What is 'superintelligence'? (p22 spoiler)
In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Vagueness in this definition will be cleared up later.
- What is 'AI'?
In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
- What is 'human-level' AI?
We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear.
One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.
Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.
Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.
We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.
(Figure: example of how the first 'human-level' AI may surpass humans in many ways.)
Because of these ambiguities, AI researchers are sometimes hesitant to use the term. e.g. in these interviews.
- Growth modes (p1)
Robin Hanson wrote the seminal paper on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper).
- What causes these transitions between growth modes? (p1-2)
One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are quite a lot of negative datapoints - you need an explanation for why the transition happened then, and yet didn't happen at any of the other times in history.
- Growth of growth
It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact, for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th century! The time since 1950 has apparently been strange.
(Figure from here)
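The extrapolation above can be made precise: if the growth rate is itself proportional to the economy's size, then dY/dt = kY², whose solution Y(t) = Y₀ / (1 − kY₀(t − t₀)) diverges at the finite time t* = t₀ + 1/(kY₀). A small sketch with illustrative constants:

```python
# Sketch of the finite-time singularity implied by "growth rate
# proportional to size". Constants here are illustrative, not calibrated.

def y(t, y0, k, t0=0.0):
    """Solution of dY/dt = k * Y**2 with Y(t0) = y0."""
    return y0 / (1.0 - k * y0 * (t - t0))

def singularity_time(y0, k, t0=0.0):
    """Time at which the solution above diverges."""
    return t0 + 1.0 / (k * y0)

# With y0 = 1 and k = 0.01 the singularity sits at t = 100; the economy
# has doubled by t = 50 and quadrupled by t = 75 - growth keeps speeding up.
t_star = singularity_time(1.0, 0.01)  # -> 100.0
```

Note the contrast with ordinary exponential growth (dY/dt = kY), which never reaches infinity in finite time; the hyperbolic shape is what makes the pre-1950 trend so striking.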
- Early AI programs mentioned in the book (p5-6)
You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.
- Later AI programs mentioned in the book (p6)
Algorithmically generated Beethoven, algorithmic generation of patentable inventions, artificial comedy (requires download).
- Modern AI algorithms mentioned (p7-8, 14-15)
Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
- What is maximum likelihood estimation? (p9)
Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this scenario most probable is the one where you have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should think you won a car, if someone tells you that. Being the target of a spam email might only give you a low probability of being told that you have won a car (a spam email may instead advise you of products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time if you get such an email, you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
- What are hill climbing algorithms like? (p9)
The second large class of algorithms Bostrom mentions are hill climbing algorithms. The idea here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download).
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:
- How have investments into AI changed over time? Here's a start, estimating the size of the field.
- What does progress in AI look like in more detail? What can we infer from it? I wrote about algorithmic improvement curves before. If you are interested in plausible next steps here, ask me.
- What do economic models tell us about the consequences of human-level AI? Here is some such thinking; Eliezer Yudkowsky has written at length about his request for more.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.
Go read Yvain/Scott's Meditations On Moloch. It's one of the most beautiful, disturbing, poetical looks at the future that I've ever seen.
Go read it.
Don't worry, I can wait. I'm only a piece of text, my patience is infinite.
You sure you've read it?
Ok, I believe you...
I hope you wouldn't deceive an innocent and trusting blog post? You wouldn't be monster enough to abuse the trust of a being as defenceless as a constant string of ASCII symbols?
Of course not. So you'd have read that post before proceeding to the next paragraph, wouldn't you? Of course you would.
Ok, now to the point. The "Moloch" idea is very interesting, and, at the FHI, we may try to do some research in this area (naming it something more respectable/boring, of course, something like "how to avoid stable value-losing civilization attractors").
The project hasn't started yet, but a few caveats to the Moloch idea have already occurred to me. First of all, it's not obligatory for an optimisation process to trample everything we value into the mud. This is likely to happen with an AI's motivation, but it's not obligatory for an optimisation process.
One way of seeing this is the difference between "or" and "and". Take the democratic election optimisation process. It's clear, as Scott argues, that this optimises badly in some ways. It encourages appearance over substance, some types of corruption, etc... But it also optimises along some positive axes, with some clear, relatively stable differences between the parties which reflect some voters' preferences, and punishment for particularly inept behaviour from leaders (I might argue that the main benefit of democracy is not the final vote between the available options, but the filtering out of many pernicious options because they'd never be politically viable). The question is whether these two strands of optimisation can be traded off against each other, or if a minimum of each is required. So can we make a campaign that is purely appearance-based, without any substantive position ("or": maximum on one axis is enough), or do you need a minimum of substance and a minimum of appearance to buy off different constituencies ("and": you need some achievements on all axes)? And no, I'm not interested in discussing current political examples.
Another example Scott gave was of the capitalist optimisation process, and how it in theory matches customers' and producers' interests, but could go very wrong:
Suppose the coffee plantations discover a toxic pesticide that will increase their yield but make their customers sick. But their customers don't know about the pesticide, and the government hasn't caught up to regulating it yet. Now there's a tiny uncoupling between "selling to [customers]" and "satisfying [customers'] values", and so of course [customers'] values get thrown under the bus.
This effect can be combated to some extent with extra information. If the customers (or journalists, bloggers, etc...) know about this, then the coffee plantations will suffer. "Our food is harming us!" isn't exactly a hard story to publicise. This certainly doesn't work in every case, but increased information is something that technological progress would bring, and this needs to be considered when asking whether optimisation processes will inevitably tend to a bad equilibrium as technology improves. An accurate theory of nutrition, for instance, would have great positive impact if its recommendations could be measured.
Finally, Zack Davis's poem about the em stripped of (almost all) humanity got me thinking. The end result of that process is tragic for two reasons: first, the em retains enough humanity to have curiosity, only to get killed for this. And secondly, that em once was human. If the em was entirely stripped of human desires, the situation would be less tragic. And if the em was further constructed in a process that didn't destroy any humans, this would be even more desirable. Ultimately, if the economy could be powered by entities developed non-destructively from humans, and which were clearly not conscious or suffering themselves, this would be no different than powering the economy with the non-conscious machines we use today. This might happen if certain pieces of a human-em could be extracted, copied and networked into an effective, non-conscious entity. In that scenario, humans and human-ems could be the capital owners, and the non-conscious modified ems could be the workers. The connection of this with the Moloch argument is that it shows that certain nightmare scenarios could in some circumstances be adjusted to much better outcomes, with a small amount of coordination.
The point of the post
The reason I posted this is to get people's suggestions about ideas relevant to a "Moloch" research project, and what they thought of the ideas I'd had so far.
Hello Less Wrong, I don't post here much but I've been involved in the Bay Area Less Wrong community for several years, where many of you know me from. The following is a white paper I wrote earlier this year for my firm, RHS Financial, a San Francisco based private wealth management practice. A few months ago I presented it at a South Bay Less Wrong meetup. Since then many of you have encouraged me to post it here for the rest of the community to see. The original can be found here, please refer to the disclosures, especially if you are the SEC. I have added an afterword here beneath the citations to address some criticisms I have encountered since writing it. As a company white paper intended for a general audience, please forgive me if the following is a little too self-promoting or spends too much time on grounds already well-tread here, but I think many of you will find it of value. Hope you enjoy!
Executive Summary: Capital markets have created enormous amounts of wealth for the world and reward disciplined, long-term investors for their contribution to the productive capacity of the economy. Most individuals would do well to invest most of their wealth in the capital market assets, particularly equities. Most investors, however, consistently make poor investment decisions as a result of a poor theoretical understanding of financial markets as well as cognitive and emotional biases, leading to inferior investment returns and inefficient allocation of capital. Using an empirically rigorous approach, a rational investor may reasonably expect to exploit inefficiencies in the market and earn excess returns in so doing.
Most people understand that they need to save money for their future, and surveys consistently find a large majority of Americans expressing a desire to save and invest more than they currently are. Yet the savings rate and percentage of people who report owning stocks has trended down in recent years,1 despite the increasing ease with which individuals can participate in financial markets, thanks to the spread of discount brokers and employer 401(k) plans. Part of the reason for this is likely the unrealistically pessimistic expectations of would-be investors. According to a recent poll barely one third of Americans consider equities to be a good way to build wealth over time.2 The verdict of history, however, is against the skeptics.
The Greatest Deal of all Time
Equity ownership is probably the easiest, most powerful means of accumulating wealth over time, and people regularly forego millions of dollars over the course of their lifetimes letting their wealth sit in cash. Since its inception in 1926, the annualized total return on the S&P 500 has been 9.8% as of the end of 2012.3 $1 invested back then would be worth $3,533 by the end of the period. More saliently, a 25 year old investor investing $5,000 per year at that rate would have about $2.1 million upon retirement at 65.
The strong performance of stock markets is robust to different times and places. Though the most accurate data on the US stock market goes back to 1926, financial historians have gathered information going back to 1802 and find the average annualized real return in earlier periods is remarkably close to the more recent official records. Looking at rolling 30 year returns between 1802 and 2006, the lowest and highest annualized real returns have been 2.6% and 10.6%, respectively.4 The United States is not unique in its experience, either. In a massive study of the sixteen countries that had data on local stock, bond, and cash returns available for every year of the twentieth century, the stock market in every one had significant, positive real returns that exceeded those of cash and fixed income alternatives.5 The historical returns of US stocks only slightly exceed those of the global average.
The opportunity cost of not holding stocks is enormous. Historically the interest earned on cash equivalent investments like savings accounts has barely kept up with inflation - over the same since-1926 period inflation has averaged 3.0% while the return on 30-day treasury bills (a good proxy for bank savings rates) has been 3.5%.6 That 3.5% rate would only earn an investor $422k over the same $5k/year scenario above. The situation today is even worse. Most banks are currently paying about 0.05% on savings.
Similarly, investment grade bonds, such as those issued by the US Treasury and highly rated corporations, though often an important component of a diversified portfolio, have offered returns only modestly better than cash over the long run. The average return on 10-year treasury bonds has been 5.1%,7 earning an investor $619k over the same 40 year scenario. The yield on the 10-year treasury is currently about 3%.
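The dollar figures in the last few paragraphs all follow from the standard future value of an annuity formula; a quick sketch to reproduce them (assuming year-end contributions, which is the convention behind the quoted numbers):

```python
def future_value(annual_contribution, rate, years):
    """Future value of a level annual contribution compounded at `rate`,
    with contributions made at the end of each year:
    FV = C * ((1 + r)^n - 1) / r."""
    return annual_contribution * ((1 + rate) ** years - 1) / rate

# $5,000 per year for 40 years, at the historical rates quoted above:
stocks = future_value(5000, 0.098, 40)  # ~ $2.1 million at 9.8% (S&P 500)
tbills = future_value(5000, 0.035, 40)  # ~ $422,000 at 3.5% (30-day T-bills)
bonds  = future_value(5000, 0.051, 40)  # ~ $619,000 at 5.1% (10-year Treasuries)
```

The gap between the first line and the other two is the opportunity cost the section is describing: roughly $1.5 million over a working lifetime.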
Homeownership has long been a part of the American dream, and many have been taught that building equity in your home is the safest and most prudent way to save for the future. The fact of the matter, however, is that residential housing is more of a consumption good than an investment. Over the last century the value of houses has barely kept up with inflation,8 and as the recent mortgage crisis demonstrated, home prices can crash just like any other market.
In virtually every time and place we look, equities are the best performing asset available, a fact which is consistent with the economic theory that risky assets must offer a premium to their investors to compensate them for the additional uncertainty they bear. What has puzzled economists for decades is why the so-called equity risk premium is so large and why so many individuals invest so little in stocks.9
Your Own Worst Enemy
Recent insights from multidisciplinary approaches in cognitive science have shed light on the issue, demonstrating that instead of rationally optimizing between various trade-offs, human beings regularly rely on heuristics - mental shortcuts that require little cognitive effort - when making decisions.10 These heuristics lead to taking biased approaches to problems that deviate from optimal decision making in systematic and predictable ways. Such biases affect financial decisions in a large number of ways, one of the most profound and pervasive being the tendency of myopic loss aversion.
Myopic loss aversion refers to the combined result of two observed regularities in the way people think: that losses feel bad to a greater extent than equivalent gains feel good, and that people rely too heavily (anchor) on recent and readily available information.11 Taken together, it is easy to see how these mental errors could bias an individual against holding stocks. Though the historical and expected return on equities greatly exceeds those of bonds and cash, over short time horizons they can suffer significant losses. And while the loss of one’s home equity is generally a nebulous abstraction that may not manifest itself consciously for years, stock market losses are highly visible, drawing attention to themselves in brokerage statements and newspaper headlines. Not surprisingly, then, an all too common pattern among investors is to start investing at a time when the headlines are replete with stories of the riches being made in markets, only to suffer a pullback and quickly sell out at ten, twenty, thirty plus percent losses and sit on cash for years until the next bull market is again near its peak in a vicious circle of capital destruction. Indeed, in the 20 year period ending 2012, the S&P 500 returned 8.2% and investment grade bonds returned 6.3% annualized. The inflation rate was 2.5%, and the average retail investor earned an annualized rate of 2.3%.12
Even when investors can overcome their myopic loss aversion and stay in the stock market for the long haul, investment success is far from assured. The methods by which investors choose which stocks or stock managers to buy, hold, and sell are also subject to a host of biases which consistently lead to suboptimal investing and performance. Chief among these is overconfidence, the belief that one’s judgements and skills are reliably superior.
Overconfidence is endemic to the human experience. The vast majority of people think of themselves as more intelligent, attractive, and competent than most of their peers,13 even in the face of proof to the contrary. 93% of people consider themselves to be above-average drivers,14 for example, and that percentage decreases only slightly if you ask people to evaluate their driving skill after being admitted to a hospital following a traffic accident.15 Similarly, most investors are confident they can consistently beat the market. One survey found 74% of mutual fund investors believed the funds they held would “consistently beat the S&P 500 every year” in spite of the statistical reality that more than half of US stock funds underperform in a given year and virtually none will outperform it each and every year. Many investors will even report having beaten the index despite having verifiably underperformed it by several percentage points.16
Overconfidence leads investors to take outsized bets on what they know and are familiar with. Investors around the world commonly hold 80% or more of their portfolios in investments from their own country,17 and one third of 401(k) assets are invested in participants’ own employer’s stock.18 Such concentrated portfolios are demonstrably riskier than a broadly diversified portfolio, yet investors regularly evaluate their investments as less risky than the general market, even if their securities had recently lost significantly more than the overall market.
If an investor believes himself to possess superior talent in selecting investments, he is likely to trade more as a result in an attempt to capitalize on each new opportunity that presents itself. In this endeavor, the harder investors try, the worse they do. In one major study, the quintile of investors who traded the most over a five year period earned an average annualized 7.1 percentage points less than the quintile that traded the least.19
The Folly of Wall Street
Relying on experts does little to help. Wall Street employs an army of analysts to follow the every move of all the major companies traded on the market, predicting their earnings and their expected performance relative to peers, but on the whole they are about as effective as a strategy of throwing darts. Burton Malkiel explains in his book A Random Walk Down Wall Street how he tracked the one and five year earnings forecasts on companies in the S&P 500 from analysts at 19 Wall Street firms and found that in aggregate the estimates had no more predictive power than if you had just assumed a given company’s earnings would grow at the same rate as the long-term average rate of growth in the economy. This is consistent with a much broader body of literature demonstrating that the predictions of statistical prediction rules - formulas that make predictions based on simple statistical rules - reliably outperform those of human experts. Statistical prediction rules have been used to predict the auction price of Bordeaux better than expert wine tasters,20 marital happiness better than marriage counselors,21 academic performance better than admissions officers,22 criminal recidivism better than criminologists,23 and bankruptcy better than loan officers,24 to name just a few examples. This is an incredible finding that’s difficult to overstate. When considering complex issues such as these our natural intuition is to trust experts who can carefully weigh all the relevant information in determining the best course of action. But in reality experts are simply humans who have had more time to reinforce their preconceived notions on a particular topic and are more likely to anchor their attention on items that only introduce statistical noise.
Back in the world of finance, it turns out that to a first approximation the best estimate on the return to expect from a given stock is the long-run historical average of the stock market, and the best estimate of the return to expect from a stock picking mutual fund is the long-run historical average of the stock market minus its fees. The active stock pickers who manage mutual funds have on the whole demonstrated little ability to outperform the market. To be sure, at any given time there are plenty of managers who have recently beaten the market smartly, and if you look around you will even find a few with records that have been terrific over ten years or more. But just as a coin-flipping contest between thousands of contestants would no doubt yield a few who had uncannily “called it” a dozen or more times in a row, the number of market beating mutual fund managers is no greater than what you should expect as a result of pure luck.25
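The coin-flipping point is easy to quantify: if each of N funds independently has a 50% chance of beating the market in any given year, luck alone produces about N/2¹⁰ funds with a perfect ten-year streak. A sketch with illustrative numbers:

```python
# Back-of-envelope model (illustrative assumption: each fund independently
# beats the market with probability 0.5 each year, i.e. no skill at all).

def expected_streaks(n_funds, years, p_beat=0.5):
    """Expected number of funds that beat the market every single year."""
    return n_funds * p_beat ** years

# Among ~1,000 funds, pure luck yields on average one ten-year streak:
lucky_ten  = expected_streaks(1024, 10)  # -> 1.0
# ...and dozens of five-year streaks:
lucky_five = expected_streaks(1024, 5)   # -> 32.0
```

So observing a handful of long winning streaks in a universe of thousands of funds is roughly what a zero-skill world predicts, which is the article's point.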
Expert and amateur investors alike underestimate how competitive the capital markets are. News is readily available and quickly acted upon, and any fact you think gives you an edge is probably already a value in the cells of thousands of spreadsheets of analysts trading billions of dollars. Professor of Finance at Yale and Nobel Laureate Robert Shiller makes this point in a lecture using an example of a hypothetical drug company that announces it has received FDA approval to market a new drug:
Suppose you then, the next day, read in The Wall Street Journal about this new announcement. Do you think you have any chance of beating the market by trading on it? I mean, you're like twenty-four hours late, but I hear people tell me — I hear, "I read in Business Week that there was a new announcement, so I'm thinking of buying." I say, "Well, Business Week — that information is probably a week old." Even other people will talk about trading on information that's years old, so you kind of think that maybe these people are naïve. First of all, you're not a drug company expert or whatever it is that's needed. Secondly, you don't know the math — you don't know how to calculate present values, probably. Thirdly, you're a month late. You get the impression that a lot of people shouldn't be trying to beat the market. You might say, to a first approximation, the market has it all right so don't even try.26
In that last sentence Shiller hints at one of the most profound and powerful ideas in finance: the efficient market hypothesis. The core of the efficient market hypothesis is that when news that impacts the value of a company is released, stock prices will adjust instantly to account for the new information and bring it back to equilibrium where it’s no longer a “good” or “bad” investment but simply a fair one for its risk level. Because news is unpredictable by definition, it is impossible then to reliably outperform the market as a whole, and the seemingly ingenious investors on the latest cover of Forbes or Fortune are simply lucky.
A Noble Lie
In the 50s, 60s, and 70s several economists who would go on to win Nobel prizes worked out the implications of the efficient market hypothesis and created a new intellectual framework known as modern portfolio theory.27 The upshot is that capital markets reward investors for taking risk: the more risk you take, the higher your return should be in expectation (though it may not turn out that way, which is precisely what makes it risky). But the market doesn’t reward unnecessary risk, such as taking out a second mortgage to invest in your friend’s hot dog stand. It only rewards systematic risk, the risks associated with being exposed to the vagaries of the entire economy, such as interest rates, inflation, and productivity growth.28 Stocks of small companies are riskier and have a higher expected return than stocks of large companies, which are riskier than corporate bonds, which are riskier than Treasury bonds. But owning one small cap stock doesn’t offer a higher expected return than another small cap stock, or a portfolio of hundreds of small caps for that matter. Owning more of a particular stock merely exposes you to the idiosyncratic risks that particular company faces and for which you are not compensated. By diversifying assets across as many securities as possible, you can reduce the volatility of your portfolio without lowering its expected return.
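The diversification benefit follows from a standard result: for an equal-weight portfolio of n stocks with identical volatility and pairwise correlation (the numbers below are illustrative assumptions), portfolio variance falls toward a systematic-risk floor as n grows, while the expected return is unchanged:

```python
import math

def portfolio_vol(n, sigma=0.40, rho=0.25):
    """Volatility of an equal-weight portfolio of n stocks, each with
    volatility sigma and pairwise correlation rho (illustrative values).
    Diversification washes out the idiosyncratic 1/n term; the
    sigma**2 * rho term is the systematic floor that remains."""
    variance = sigma**2 * (1/n + (1 - 1/n) * rho)
    return math.sqrt(variance)

print(portfolio_vol(1))    # 0.40: one stock carries all the risk
print(portfolio_vol(100))  # ~0.20: only systematic risk is left
```

With one stock you bear the full 40% volatility; with a hundred, volatility roughly halves, yet because every stock has the same expected return, the portfolio’s expected return has not dropped at all.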
This approach to investing dictates that you should determine an acceptable level of risk for your portfolio, then buy the largest basket of securities possible that targets that risk, ideally while paying the least amount possible in fees. Academic activism in favor of this passive approach gained momentum through the 70s, culminating in the launch of the first commercially available index fund in 1976, offered by The Vanguard Group. The typical index fund seeks to replicate the overall market performance of a broad class of investments such as large US stocks by owning all the securities in that market in proportion to their market weights. Thus if XYZ stock makes up 2% of the value of the relevant asset class, the index fund will allocate 2% of its funds to that stock. Because index funds only seek to replicate the market instead of beating it, they save costs on research and management teams and pass the savings along to investors through lower fees.
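The cap-weighting rule in the XYZ example amounts to a one-line calculation; the market caps below are hypothetical:

```python
# A cap-weighted index fund holds each stock in proportion to its share
# of total market capitalization. Hypothetical caps in $bn:
market_caps = {"XYZ": 2, "ABC": 58, "DEF": 40}
total_cap = sum(market_caps.values())
weights = {ticker: cap / total_cap for ticker, cap in market_caps.items()}
print(weights["XYZ"])  # 0.02: XYZ is 2% of the market, so 2% of the fund
```

Because the weights are fully determined by market prices, the fund requires no analyst forecasts at all, which is where the cost savings come from.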
Index funds were originally derided and attracted little investment, but years of passionate advocacy by popularizers such as Jack Bogle and Burton Malkiel as well as the consensus of the economics profession has helped to lift them into the mainstream. Index funds now command trillions of dollars of assets and cover every segment of the market in stocks, bonds, and alternative assets in the US and abroad. In 2003 Vanguard launched its target retirement funds, which took the logic of passive investing even further by providing a single fund that would automatically shift from more aggressive to more conservative index investments as its investors approached retirement. Target retirement funds have since become especially popular options in 401(k) plans.
The rise of index investing has been a boon to individual investors, who have clearly benefited from the lower fees and greater diversification index funds offer. To the extent that investors have bought into the idea of passive investing over market timing and active security selection, they have collectively saved themselves a fortune by not giving in to their value-destroying biases. For all the good index funds have done, though, in the decades since their birth in the 70s the intellectual foundation upon which they stand, the efficient market hypothesis, has been all but disproved.
The EMH is now the noble lie of the economics profession; while economists usually teach their students and the public that the capital markets are efficient and unbeatable, their research over the last few decades has shown otherwise. In a telling example, Paul Samuelson, who helped originate the EMH and advocated it in his best-selling textbook, was a large, early investor in Berkshire Hathaway, Warren Buffett’s active investment holding company.29 But real people regularly ruin their lives through sloppy investing, and for them perhaps it is better just to say that beating the market can’t be done, so just buy, hold, and forget about it. We, on the other hand, believe a more nuanced understanding of the facts can be helpful.
Shortly after the efficient market hypothesis was first put forth researchers realized the idea had serious theoretical shortcomings.30 Beginning as early as 1977 they also found empirical “anomalies,” factors other than systematic risk that seemed to predict returns.31 Most of the early findings focused on valuation ratios - measures of a firm’s market price in relation to an accounting measure such as book value or earnings - and found that “cheap” stocks on average outperformed “expensive” stocks, confirming the value investment philosophy first promulgated by the legendary Depression-era investor Benjamin Graham and popularized by his most famous student, Warren Buffett. In 1992 Eugene Fama, one of the fathers of the efficient market hypothesis, published, along with Ken French, a groundbreaking paper demonstrating that the cheapest decile stocks in the US, as measured by the price to book ratio, outperformed the highest decile stocks by an astounding 11.9% per year, despite there being little difference in risk between them.32
A year later, researchers found convincing evidence of a momentum anomaly in US stocks: stocks that had the highest performance over the last 3-12 months continued to outperform relative to those with the lowest performance. The effect size was comparable to that of the value anomaly and again the discrepancy could not be explained with any conventional measure of risk.33
Since then, researchers have replicated the value and momentum effects across larger and deeper datasets, finding comparably large effect sizes in different times, regions, and asset classes. In a highly ambitious paper published in 2013, Clifford Asness (a former student of Fama’s), Tobias Moskowitz, and Lasse Pedersen documented the significance of value and momentum across 18 national equity markets, 10 currencies, 10 government bonds, and 27 commodity futures.
Though value and momentum are the most pervasive and best documented of the market anomalies, many others have been discovered across the capital markets. These include the small-cap premium34 (small company stocks tend to outperform large company stocks even in excess of what should be expected by their risk), the liquidity premium35 (less frequently traded securities tend to outperform more frequently traded securities), short-term reversal36 (equities with the lowest one-week to one-month performance tend to outperform over short time horizons), carry37 (high-yielding currencies tend to appreciate against low-yielding currencies), roll yield38,39 (bonds and futures at steeply negatively sloped points along the yield curve tend to outperform those at flatter or positively sloped points), profitability40 (equities of firms with higher proportions of profits over assets or equity tend to outperform those with lower profitability), calendar effects41 (stocks tend to have stronger returns in January and weaker returns on Mondays), and corporate action premia42 (securities of corporations that will, currently are, or have recently engaged in mergers, acquisitions, spin-offs, and other events tend to consistently under- or outperform relative to what would be expected by their risk).
Most of these market anomalies appear remarkably robust compared to findings in other social sciences,43 especially considering that they seem to imply trillions of dollars of easy money is being overlooked in plain sight. Intelligent observers often question how such inefficiencies could possibly persist in the face of such strong incentives to exploit them until they disappear. Several explanations have been put forth, some of which are conflicting but which all probably have some explanatory power.
The first interpretation of the anomalies is to deny that they are actually anomalous: rather, they are compensation for risk that isn’t captured by the standard asset pricing models. This is the view of Eugene Fama, who first postulated that the value premium was compensation for assuming risk of financial distress and bankruptcy that was not fully captured by simply measuring the standard deviation of a value stock’s returns.44 Subsequent research, however, showed that the value effect is not explained by exposure to financial distress.45 More sophisticated arguments point to the fact that the excess returns of value, momentum, and many other premiums exhibit greater skewness, kurtosis, or other statistical moments than the broad market: subtle statistical indications of greater risk, but the differences hardly seem large enough to justify the large return premiums observed.46
The only sense in which e.g. value and momentum stocks seem genuinely “riskier” is in career risk; though the factor premiums are significant and robust in the long term, they are not consistent or predictable along short time horizons. Reaping their rewards requires patience, and an analyst or portfolio manager who recommends an investment for his clients based on these factors may end up waiting years before it pays off, typically more than enough time to be fired.47 Though any investment strategy is bound to underperform at times, strategies that seek to exploit the factors most predictive of excess returns are especially susceptible to reputational hazard. Value stocks tend to be from unpopular companies in boring, slow growth industries. Momentum stocks are often from unproven companies with uncertain prospects or are from fallen angels who have only recently experienced a turn of luck. Conversely, stocks that score low on value and momentum factors are typically reputable companies with popular products that are growing rapidly and forging new industry standards in their wake.
Consider, then, two companies in the same industry: Ol’Timer Industries, which has been around for decades and is consistently profitable but whose product lines are increasingly considered uncool and outdated. Recent attempts to revamp the company’s image by the firm’s new CEO have had modest success, but consumers and industry experts expect this merely to delay further inevitable loss of market share to NuTime.ly, founded eight years ago and posting exponential revenue growth and rapid adoption by the coveted 18-35 year old demographic, who typically describe its products using a wide selection of contemporary idioms and slang indicating superior social status and functionality. Ol’Timer Industries’ stock will likely score highly on value and momentum factors relative to NuTime.ly and so have a higher expected return. But consider the incentives of the investment professional choosing between the two: if he chooses Ol’Timer and it outperforms he may be congratulated and rewarded perhaps slightly more than if he had chosen NuTime.ly and it outperforms, but if he chooses Ol’Timer and it underperforms he is a fool and a laughingstock who wasted clients’ money on his pet theory when “everyone knew” NuTime.ly was going to win. At least if he chooses NuTime.ly and it underperforms it was a fluke that none of his peers saw coming, save for a few wingnuts who keep yammering about the arcane theories of Gene Fama and Benjamin Graham.
For most investors, “it is better for reputation to fail conventionally than to succeed unconventionally” as John Maynard Keynes observed in his General Theory. Not that this is at all restricted to investors, professional or amateur. In a similar vein, professional soccer goalkeepers continue to jump left or right on penalty kicks when statistics show they’d block more shots standing still.48 But standing in place while the ball soars into the upper right corner makes the goalkeeper look incompetent. The proclivity of middle managers and bureaucrats to default to uncontroversial decisions formed by groupthink is familiar enough to be the stuff of popular culture; nobody ever got fired for buying IBM, as the saying goes. Psychological experiments have shown that people will often affirm an obviously false observation about simple facts such as the relative lengths of straight lines on a board if others have affirmed it before them.49
We find ourselves back at the nature of human thinking and the biases and other cognitive errors that afflict it. This is what most interpretations of the market anomalies focus on. Both amateur and professional investors are human beings who are apt to make investment decisions not through a methodical application of modern portfolio theory but based rather on stories, anecdotes, hunches, and ideologies. Most of the anomalies make sense in light of an understanding of some of the most common biases such as anchoring and availability bias, status quo bias, and herd behavior.50 Rational investors seeking to exploit these inefficiencies may be able to do so to a limited extent, but if they are using other people’s money then they are constrained by the biases of their clients. The more aggressively they attempt to exploit market inefficiencies, the more they risk badly underperforming the market long enough to suffer devastating withdrawals of capital.51
It is no surprise then, that the most successful investors have found ways to rely on “sticky” capital unlikely to slip out of their control at the worst time. Warren Buffett invests the float of his insurance company holdings, which behaves in actuarially predictable ways; David Swensen manages the Yale endowment fund, which has an explicitly indefinite time horizon and a rules based spending rate; Renaissance Technologies, arguably the most successful hedge fund ever, only invests its own money; Dimensional Fund Advisors, one of the only mutual fund companies that has consistently earned excess returns through factor premiums, only sells through independent financial advisors who undergo a due diligence process to ensure they share similar investment philosophies.
Building a Better Portfolio
So what is an investor to do? The prospect of delicately crafting a portfolio that’s adequately diversified while taking advantage of return premiums may seem daunting, and one may be tempted to simply buy a Vanguard target retirement fund appropriate for their age and be done with it. Doing so is certainly a reasonable option. But we believe that with a disciplined investment strategy informed by the findings discussed above superior results are possible.
The first place to start is an assessment of your risk tolerance. How far can your portfolio fall before it adversely affects your quality of life? For investors saving for retirement with many more years of work ahead of them, the answer will likely be “quite a lot.” With ten years or more to work with, your portfolio will likely recover from even the most extreme bear markets. But people do not naturally think in ten-year increments, and many must live off their portfolio principal; accept that in the short term your portfolio will sometimes be in the red and consider what percentage decline over a period of a few months to a year you are comfortable enduring. Over a one year period the “worst case scenario” on diversified stock portfolios is historically about a 40% decline. For a traditional “moderate” portfolio of 60% stocks, 40% bonds it has been about a 25% decline.52
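The 60/40 figure follows from simple weighting arithmetic. Assuming, purely for illustration, that high-grade bonds hold roughly flat during a stock bear market:

```python
# Back-of-the-envelope worst case for a 60/40 portfolio, using the
# historical one-year stock worst case cited above (-40%) and an
# assumed flat bond return during the same period.
stock_weight, bond_weight = 0.60, 0.40
stock_worst, bond_worst = -0.40, 0.00
portfolio_worst = stock_weight * stock_worst + bond_weight * bond_worst
print(portfolio_worst)  # -0.24, in the neighborhood of the -25% cited
```

In practice bonds often rally when stocks crash, so the historical 60/40 worst case can be somewhat better than this simple blend suggests.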
With a target on how much risk to accept in your portfolio, modern portfolio theory shows us a technique, called mean-variance optimization, for achieving the most efficient tradeoff between risk and return possible. An adequate treatment of MVO is beyond the scope of this paper,53 but essentially the task is to forecast expected returns on the major asset classes (e.g. US Stocks, International Stocks, and Investment Grade Bonds) then compute the weights for each that will achieve the highest expected return for a given amount of risk. We use an approach to mean-variance optimization known as the Black-Litterman model54 and estimate expected returns using a limited number of simple inputs; for example, the expected return on an index of stocks can be closely approximated using the current dividend yield plus the long-run growth rate of the economy.55
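A minimal sketch of the mechanics (this is plain unconstrained mean-variance optimization, not the Black-Litterman model itself, and every number below is an illustrative assumption, not a forecast): the optimal risky-asset weights are proportional to the inverse covariance matrix times the expected excess returns:

```python
import numpy as np

# Illustrative expected excess returns: US stocks, intl stocks, bonds.
# The stock figures follow the dividend-yield-plus-growth heuristic
# mentioned above (e.g. ~2% yield + ~3% long-run growth).
mu = np.array([0.05, 0.06, 0.02])
vols = np.array([0.16, 0.18, 0.05])          # assumed volatilities
corr = np.array([[1.0, 0.8, 0.1],
                 [0.8, 1.0, 0.1],
                 [0.1, 0.1, 1.0]])           # assumed correlations
Sigma = np.outer(vols, vols) * corr          # covariance matrix

raw = np.linalg.solve(Sigma, mu)             # proportional to inv(Sigma) @ mu
weights = raw / raw.sum()                    # normalize to fully invested
print(weights.round(3))                      # low-risk bonds get the largest weight
```

Real-world implementations add constraints (no shorting, position limits) and, as noted, Black-Litterman blending to tame the extreme sensitivity of these weights to small changes in the return inputs.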
With optimal portfolio weights determined, next the investor must select the investment vehicles to use to gain exposure to the various asset classes. Though traditional index funds are a reasonable option, in recent years several “enhanced index” mutual funds and ETFs have been released that provide inexpensive, broad exposure to the hundreds or thousands of securities in a given asset class while enhancing exposure to one or more of the major factor premiums discussed above such as value, profitability, or momentum. Research Affiliates, for example, licenses a “fundamental index” that has been shown to provide efficient exposure to value and small-cap stocks across many markets.56 These “RAFI” indexes have been licensed to the asset management firms Charles Schwab and PowerShares to be made available through mutual funds and ETFs to the general investing public, and have generally outperformed their traditional index fund counterparts since inception.
Over the course of time, portfolio allocations will drift from their optimized allocations as particular asset classes inevitably outperform relative to other ones. Leaving this unchecked can lead to a portfolio that is no longer risk-return efficient. The investor must periodically rebalance the portfolio by selling securities that have become overweight and buying others that are underweight. Research suggests that by setting “tolerance bands” around target asset allocations, monitoring the portfolio frequently and trading when weights drift outside tolerance, investors can take further advantage of inter-asset-class value and momentum effects and boost return while reducing risk.57
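A tolerance-band rebalancing rule can be sketched in a few lines; the 5-percentage-point band and the portfolio weights below are hypothetical:

```python
def rebalance_if_needed(holdings, targets, band=0.05):
    """Tolerance-band rebalancing sketch. holdings and targets map
    asset name -> portfolio weight. If any weight has drifted more than
    `band` from target, return the trades (weight changes) that restore
    the targets; otherwise trade nothing."""
    drifted = any(abs(holdings[a] - targets[a]) > band for a in targets)
    if not drifted:
        return {}
    # Sell the overweight assets, buy the underweight ones.
    return {a: targets[a] - holdings[a] for a in targets}

# After a stock rally, a 60/40 target has drifted to 68/32:
trades = rebalance_if_needed({"stocks": 0.68, "bonds": 0.32},
                             {"stocks": 0.60, "bonds": 0.40})
print(trades)  # sell 8 points of stocks, buy 8 points of bonds
```

Note the mechanism: the rule only trades when drift exceeds the band, so it systematically sells what has recently outperformed and buys what has lagged, without requiring any forecast or any willpower.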
Most investors, however, do not rebalance systematically, perhaps in part because it can be psychologically distressing. Rebalancing necessarily entails regularly selling assets that have been performing well in order to buy ones that have been laggards, exactly when your cognitive biases are most likely to tell you that it’s a bad idea. Indeed, neuroscientists have observed in laboratory experiments that when individuals consider the prospect of buying more of a risky asset that has lost them money, it activates the modules in the brain associated with anticipation of physical pain and anxiety.58 Dealing with investment losses is literally painful for investors.
Many investors may find it helpful to their peace of mind as well as their portfolio to outsource the entire process to a party with less emotional attachment to it. Realistically, most investors have neither the time nor the motivation necessary to attain a firm understanding of modern portfolio theory, research the capital market expectations on various asset classes and securities, and regularly monitor and rebalance their portfolio, all with enough rigor to make it worth the effort compared to a simple indexing strategy. By utilizing the skills of a good financial advisor, however, an investor can leverage the expertise of a professional with the bandwidth to execute these tactics in a cost-efficient manner.
A financial advisor should be able to engage you as an investor and acquire a firm understanding of your goals, needs, and attitudes towards risk, money, and markets. Because he or she will have an entire practice over which to efficiently dedicate time and resources on portfolio research, optimization, and trading, the financial advisor should be able to craft a portfolio that’s optimized for your personal situation. Financial advisors, as institutional investors, generally have access to institutional class funds that retail investors do not, including many of those that have demonstrated the greatest dedication to exploiting the factor premiums. Notably, DFA and AQR, the two fund families with the greatest academic support, are generally only available to individual investors through a financial advisor. Should your professionally managed portfolio provide a better risk adjusted return than a comparable do-it-yourself index fund approach, the FA’s fees have paid for themselves.
Furthermore, a good financial advisor will make sure your investments are tax efficient and that you are making the most of tax-preferred accounts. Researchers have shown that after asset allocation, asset location, the strategic placement of investments in accounts with different tax treatment, is one of the most important factors in net portfolio returns,59 yet most individual investors largely ignore these effects.60 Advisors’ fees can generally be paid with pre-tax funds as well, further enhancing tax efficiency.
Invest with Purpose
There is something of a paradox involved in investing. Finance is a highly specialized and technical field, but money is a very personal and emotional topic. Achieving the joy and fulfillment associated with financial success requires a large measure of emotional detachment and impersonal pragmatism. Far too often people suffer great loss by confusing loyalties and aspirations, fears and regrets with the efficient allocation of their portfolio assets. We as advisors hate to see this happen; there is nothing to celebrate about the needless destruction of capital, it is truly a loss for us all. One of the greatest misconceptions about finance is that investing is just a zero-sum game, that one trader’s gain is another’s loss. Nothing could be further from the truth. Economists have shown that one of the greatest predictors of a nation’s well being is its financial development.61 The more liquid and active our capital markets, the greater our society’s capacity for innovation and progress. When you invest in the stock market, you are contributing your share to the productive capacity of our world, your return is your reward for helping make it better, outperformance is a sign that you have steered capital to those with the greatest use for it.
With the right accounts and investments in place and a process for managing them effectively, you the investor are freed to focus on what you are working and investing for, and an advisor can work with you to help get you there. Whether you want to travel the world, buy the house of your dreams, send your children to the best college, maximize your philanthropic giving, or simply retire early, an advisor can help you develop a financial plan to turn the dollars and cents of your portfolio into the life you want to live, building more health, wealth, and happiness for you, your loved ones, and the world.
1. “U.S. Stock Ownership Stays at Record Low,” Gallup.
2. “U.S. Investors Not Sold on Stock Market as Wealth Creator,” Gallup.
3. Data provided by Morningstar.
4. Siegel, Stocks for the Long Run, 5-25
5. Dimson et al, Triumph of the Optimists.
6. Ibid. 3
8. Shiller, “Understanding Recent Trends in House Prices and Home Ownership.”
9. Mankiw and Zeldes, for example, find that to justify the historical equity risk premium observed, investors would in aggregate need to be indifferent between a certain payoff of $51,209 and a 50/50 bet paying either $50,000 or $100,000. Mankiw and Zeldes, “The consumption of stockholders and nonstockholders,” 8.
10. For a highly readable introduction to the idea of cognitive biases, see Daniel Kahneman’s book “Thinking, Fast and Slow.” Kahneman has been a pioneer in the field and for his work won the 2002 Nobel prize in economics.
11. Benartzi and Thaler, “Myopic Loss Aversion and the Equity Premium Puzzle.”
12. “Guide to the Markets,” J.P. Morgan Asset Management
13. See, for example, Kruger and Dunning, "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments" and Zuckerman and Jost, "What Makes You Think You're So Popular? Self Evaluation Maintenance and the Subjective Side of the ‘Friendship Paradox’"
14. Svenson, “Are We All Less Risky and More Skillful than Our Fellow Drivers?”
15. Preston and Harris, “Psychology of Drivers in Traffic Accidents.”
16. Zweig, Your Money and Your Brain. 88-91.
17. French and Poterba, “Investor Diversification and International Equity Markets.”
18. Ibid. 14. p. 98-99.
19. Barber and Odean, “Trading is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors.”
20. Ashenfelter et al, “Predicting the Quality and Prices of Bordeaux Wine.”
21. Thornton, "Toward a Linear Prediction of Marital Happiness."
22. Swets et al, "Psychological Science Can Improve Diagnostic Decisions."
23. Carroll et al, "Evaluation, Diagnosis, and Prediction in Parole Decision-Making."
24. Stillwell et al, "Evaluating Credit Applications: A Validation of Multiattribute Utility Weight Elicitation Techniques"
25. See Fama and French, “Luck versus Skill in the Cross-Section of Mutual Fund Returns.” They do find modest evidence of skill at the right tail end of the distribution under the capital asset pricing model. After controlling for the value, size, and momentum factor premiums (discussed below), however, evidence of net-of-fee skill is not significantly different than zero.
26. Shiller, “Efficient Markets vs. Excess Volatility.”
27. Professor Goetzmann of the Yale School of Management has an introductory hypertext textbook on modern portfolio theory available on his website, “An Introduction to Investment Theory.”
28. In the language of modern portfolio theory this risk is known as a security’s beta. Mathematically it is the covariance of the security’s returns with the market’s returns, divided by the variance of the market’s returns.
29. Setton, “The Berkshire Bunch.”
30. For example, Grossman and Stiglitz prove in “On the Impossibility of Informationally Efficient Markets” that market efficiency cannot be an equilibrium because without excess returns there is no incentive for arbitrageurs to correct mispricings. More recently, Markowitz, one of the fathers of modern portfolio theory, showed in “Market Efficiency: A Theoretical Distinction and So What” that if a couple key assumptions of MPT are relaxed, the market portfolio is no longer optimal for most investors.
31. Basu, “Investment Performance of Common Stocks in Relation to their Price-Earnings Ratios: A Test of the Efficient Market Hypothesis.”
32. Fama and French, “The Cross-Section of Expected Stock Returns.”
33. Jegadeesh and Titman, “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency”
34. Ibid. 31.
35. Pastor and Stambaugh, “Liquidity Risk and Expected Stock Returns.”
36. Jegadeesh, “Evidence of Predictable Behavior or Security Returns.”
37. Froot and Thaler, “Anomalies: Foreign Exchange.”
38. Campbell and Shiller, “Yield Spreads and Interest Rate Movements: A Bird’s Eye View.”
39. Erb and Harvey, “The Tactical and Strategic Value of Commodity Futures.”
40. Novy-Marx, “The Other Side of Value: The Gross Profitability Premium.”
41. Thaler, “Seasonal Movements in Security Prices.”
42. Mitchell and Pulvino, “Characteristics of Risk and Return in Risk Arbitrage.”
43. See McLean and Pontiff, “Does Academic Research Destroy Stock Return Predictability?” A meta-analysis of 82 equity return factors was able to replicate 72 using out-of-sample data.
44. Fama and French, “Size and Book-to-Market Factors in Earnings and Returns.”
45. Daniel and Titman, “Evidence on the Characteristics of Cross Sectional Variation in Stock Returns.”
46. Hwang and Rubesam, “Is Value Really Riskier than Growth?”
47. Numerous investor profiles have expounded on the difficulty of being a rational investor in an irrational market. In a recent article in Institutional Investor, Asness and Liew give a highly readable overview of the risk vs. mispricing debate and discuss the problems they encountered launching a value-oriented hedge fund in the middle of the dot-com bubble.
48. Bar-Eli, “Action Bias Among Elite Soccer Goalkeepers: The Case of Penalty Kicks. Journal of Economic Psychology.”
49. Asch, “Opinions and Social Pressure.”
50. Daniel et al provides one of the most thorough theoretical discussions on how certain common cognitive biases can result in systematically biased security prices in “Investor Psychology and Security Market Under- and Overreaction.”
51. Schleifer and Vishny, “The Limits of Arbitrage.”
52. Data provided by Vanguard.
53. Chapter 2 of Goetzmann’s “An Introduction to Investment Theory” provides an introductory discussion.
54. The Black-Litterman model allows investors to combine their estimates of expected returns with equilibrium implied returns in a Bayesian framework that largely overcomes the input-sensitivity problems associated with traditional mean-variance optimization. Idzorek offers a thorough introduction in “A Step-By-Step Guide to the Black-Litterman Model.”
55. Ilmanen’s “Expected Returns on Major Asset Classes” provides a detailed explanation of the theory and evidence of forecasting expected returns.
56. Walkshausl and Lobe, “Fundamental Indexing Around the World.”
57. Buetow et al, “The Benefits of Rebalancing.”
58. Kuhnen and Knutson, “The Neural Basis of Financial Risk Taking.”
59. Dammon et al, “Optimal Asset Location and Allocation with Taxable and Tax-Deferred Investing.”
60. Bodie and Crane, “Personal Investing: Advice, Theory, and Evidence from a Survey of TIAA-CREF Participants.”
61. Yongseok Shin of the Federal Reserve provides a brief review of the literature on this research in “Financial Markets: An Engine for Economic Growth.”
Asch, Solomon E. "Opinions and Social Pressure." Scientific American 193, no. 5 (1955).
Ashenfelter, Orley. "Predicting the Quality and Prices of Bordeaux Wine." The Economic Journal 118, no. 529 (2008).
Asness, Clifford and Liew, John. "The Great Divide over Market Efficiency." Institutional Investor, March 3, 2014.
Asness, Clifford, Moskowitz, Tobias, and Pedersen, Lasse. "Value and Momentum Everywhere." The Journal of Finance 68, no. 3 (2013).
Bar-Eli, Michael, Ofer H. Azar, Ilana Ritov, Yael Keidar-Levin, and Galit Schein. "Action Bias among Elite Soccer Goalkeepers: The Case of Penalty Kicks." Journal of Economic Psychology 28, no. 5 (2007).
Barber, Brad M., and Terrance Odean. "Trading Is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors." The Journal of Finance 55, no. 2 (2000).
Basu, S. "Investment Performance of Common Stocks in Relation to Their Price-Earnings Ratios: A Test of the Efficient Market Hypothesis." The Journal of Finance 32, no. 3 (1977).
Benartzi, S., and R. H. Thaler. "Myopic Loss Aversion and the Equity Premium Puzzle." The Quarterly Journal of Economics 110, no. 1 (1995).
Bodie, Zvi, and Dwight B. Crane. "Personal Investing: Advice, Theory, and Evidence." Financial Analysts Journal 53, no. 6 (1997).
Buetow, Gerald W., Ronald Sellers, Donald Trotter, Elaine Hunt, and Willie A. Whipple. "The Benefits of Rebalancing." The Journal of Portfolio Management 28, no. 2 (2002).
Campbell, John and Shiller, Robert. "Yield Spreads and Interest Rate Movements: A Bird's Eye View." The Review of Economic Studies 58, no. 3 (1991).
Carroll, John S., Richard L. Wiener, Dan Coates, Jolene Galegher, and James J. Alibrio. "Evaluation, Diagnosis, and Prediction in Parole Decision Making." Law & Society Review 17, no. 1 (1982).
Dammon, Robert M., Chester S. Spatt, and Harold H. Zhang. "Optimal Asset Location and Allocation with Taxable and Tax-Deferred Investing." The Journal of Finance 59, no. 3 (2004).
Daniel, Kent, and Sheridan Titman. "Evidence on the Characteristics of Cross Sectional Variation in Stock Returns." The Journal of Finance 52, no. 1 (1997).
Daniel, Kent, Hirshleifer, David, and Subrahmanyam, Avanidhar. "Investor Psychology and Security Market Under- and Overreactions." The Journal of Finance 53, no. 6 (1998).
Dimson, Elroy, Marsh, Paul, and Staunton, Mike. Triumph of the Optimists. Princeton: Princeton University Press, 2002.
Erb, Claude B., and Campbell R. Harvey. "The Strategic and Tactical Value of Commodity Futures." CFA Digest 36, no. 3 (2006).
Fama, Eugene F., and Kenneth R. French. "The Cross-Section of Expected Stock Returns." The Journal of Finance 47, no. 2 (1992).
Fama, Eugene F., and Kenneth R. French. "Luck versus Skill in the Cross-Section of Mutual Fund Returns." The Journal of Finance 65, no. 5 (2010).
Fama, Eugene F., and Kenneth R. French. "Size and Book-to-Market Factors in Earnings and Returns." The Journal of Finance 50, no. 1 (1995).
French, Kenneth and Poterba, James. "Investor Diversification and International Equity Markets." American Economic Review (1991).
Froot, Kenneth A., and Richard H. Thaler. "Anomalies: Foreign Exchange." Journal of Economic Perspectives 4, no. 3 (1990).
Goetzmann, William. An Introduction to Investment Theory. Yale School of Management. Accessed April 09, 2014. http://viking.som.yale.edu/will/finman540/classnotes/notes.html
"Guide to the Markets." J.P. Morgan Asset Management. 2014.
Grossman, Sanford and Stiglitz, Joseph. "On the Impossibility of Informationally Efficient Markets." The American Economic Review 70, no. 3 (1980).
Hwang, Soosung and Rubesam, Alexandre. "Is Value Really Riskier Than Growth? An Answer with Time-Varying Return Reversal." Journal of Banking and Finance 37, no. 7 (2013).
Idzorek, Thomas. "A Step-by-Step Guide to the Black-Litterman Model." Ibbotson Associates (2005).
Ilmanen, Antti. "Expected Returns on Major Asset Classes." Research Foundation of CFA Institute (2012).
Jegadeesh, Narasimhan, and Sheridan Titman. "Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency." The Journal of Finance 48, no. 1 (1993).
Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.
Kruger, Justin, and David Dunning. "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments." Journal of Personality and Social Psychology 77, no. 6 (1999).
Kuhnen, Camelia M., and Brian Knutson. "The Neural Basis of Financial Risk Taking." Neuron 47, no. 5 (2005).
Malkiel, Burton. A Random Walk Down Wall Street: Time-Tested Strategies for Successful Investing (Tenth Edition). New York: W.W. Norton & Company, 2012.
Mankiw, N. Gregory, and Stephen P. Zeldes. "The Consumption of Stockholders and Nonstockholders." Journal of Financial Economics 29, no. 1 (1991).
Markowitz, Harry M. "Market Efficiency: A Theoretical Distinction and So What?" Financial Analysts Journal 61, no. 5 (2005).
McLean, David and Pontiff, Jeffrey. "Does Academic Research Destroy Stock Return Predictability?" Working Paper, (2013).
Mitchell, Mark, and Todd Pulvino. "Characteristics of Risk and Return in Risk Arbitrage." The Journal of Finance 56, no. 6 (2001).
Novy-Marx, Robert. "The Other Side of Value: The Gross Profitability Premium." Journal of Financial Economics 108, no. 1 (2013).
Pastor, Lubos and Stambaugh, Robert. "Liquidity Risk and Expected Stock Returns." The Journal of Political Economy 111, no. 3 (2003).
Preston, Caroline E., and Stanley Harris. "Psychology of Drivers in Traffic Accidents." Journal of Applied Psychology 49, no. 4 (1965).
Setton, Dolly. “The Berkshire Bunch.” Forbes, October 12, 1998.
Shiller, Robert. "Efficient Markets vs. Excess Volatility." Yale. Accessed April 09, 2014. http://oyc.yale.edu/economics/econ-252-08/lecture-6
Shiller, Robert. "Understanding Recent Trends in House Prices and Homeownership." Housing, Housing Finance and Monetary Policy, Jackson Hole Conference Series, Federal Reserve Bank of Kansas City, 2008, pp. 85-123.
Shin, Yongseok. "Financial Markets: An Engine for Economic Growth." The Regional Economist (July 2013).
Shleifer, Andrei, and Robert W. Vishny. "The Limits of Arbitrage." The Journal of Finance 52, no. 1 (1997).
Siegel, Jeremy J. Stocks for the Long Run: The Definitive Guide to Financial Market Returns and Long-Term Investment Strategies (Fourth Edition). New York: McGraw-Hill, 2008.
Stillwell, William G., F. Hutton Barron, and Ward Edwards. "Evaluating Credit Applications: A Validation of Multiattribute Utility Weight Elicitation Techniques." Organizational Behavior and Human Performance 32, no. 1 (1983).
Svenson, Ola. "Are We All Less Risky and More Skillful than Our Fellow Drivers?" Acta Psychologica 47, no. 2 (1981).
Swets, J. A., R. M. Dawes, and J. Monahan. "Psychological Science Can Improve Diagnostic Decisions." Psychological Science in the Public Interest 1, no. 1 (2000).
Thaler, Richard. "Anomalies: Seasonal Movements in Security Prices II: Weekend, Holiday, Turn of the Month, and Intraday Effects." Journal of Economic Perspectives 1, no. 2 (1987).
Thornton, B. "Toward a Linear Prediction Model of Marital Happiness." Personality and Social Psychology Bulletin 3, no. 4 (1977).
"U.S. Stock Ownership Stays at Record Low." Gallup. Accessed April 09, 2014. http://www.gallup.com/poll/162353/stock-ownership-stays-record-low.aspx.
Walkshäusl, Christian, and Sebastian Lobe. "Fundamental Indexing around the World." Review of Financial Economics 19, no. 3 (2010).
Zuckerman, Ezra W., and John T. Jost. "What Makes You Think You're So Popular? Self-Evaluation Maintenance and the Subjective Side of the 'Friendship Paradox.'" Social Psychology Quarterly 64, no. 3 (2001).
Zweig, Jason. Your Money and Your Brain: How the New Science of Neuroeconomics Can Help Make You Rich. New York: Simon & Schuster, 2007.
I wish to thank Romeo Stevens for the feedback and proofreading he provided for early drafts of this paper. You should go buy his MealSquares (just look how happy I look eating them there!).
If the section on statistical prediction rules sounded familiar, it's probably because I stole all the examples from this Less Wrong article by lukeprog about them. After you're done giving this article karma, you should go give that one some more.
After I made my South Bay meetup presentation Peter McCluskey wrote on the Bay Area LW mailing list that "Your paper's report of 'a massive study of the sixteen countries that had data on local stock, bond, and cash returns available for every year of the twentieth century' could be considered a study of survivorship bias, in that it uses criteria which exclude countries where stocks lost 100% at some point (Russia, Poland, China, Hungary)." This is a good point and is worth addressing, which some researchers have done in recent years. Dimson, Marsh, and Staunton (2006) find that the surviving markets of the 20th century I cite in my paper dominated the global market capitalization in 1900 and the effect of national stock-market implosions was mostly negligible on worldwide averages. Peter did go on to say that "I don't know of better advice for the average person than to invest in equities, and I have most of my wealth in equities..." so I think we're mostly on the same page at least in terms of practical advice.
In a conversation with Alyssa Vance she similarly expressed skepticism that the equity risk premium has been significantly greater than zero due to the fact that at some point in the 20th century most major economies experienced double-digit inflation and very high marginal rates of taxation on capital income. It is true that taxes and inflation significantly dilute an investor's return, and one would be foolish to ignore their effects. But while they may reduce the absolute attractiveness of equities, the effects of taxes and inflation actually make stocks look more attractive relative to the alternatives of bonds and cash investments. In the US and most jurisdictions, the dividends and capital gains earned on stocks are taxed at preferential rates relative to the interest earned on fixed income investments, which is typically taxed as ordinary income. Furthermore, the majority of individual investors hold a large fraction of their investments in tax-sheltered accounts (such as 401(k)s and IRAs in the US).
At my South Bay meetup presentation, Patrick LaVictoire (among others) expressed incredulity at my claim that retail investors have on average badly underperformed relevant benchmarks and that by implication institutional investors have outperformed. The source I cite in my paper is gated, but there is plenty of research on actual investor performance. Morningstar regularly publishes info on how investors routinely underperform the mutual funds they invest in by buying into and selling out of them at the wrong times. Finding data on institutional investors is a little trickier, but Busse, Goyal, and Wahal (2010) find that institutional investors managing e.g. pensions, foundations, and endowments on average outperform the broad US equity market in the US equity sleeve of their portfolios. (The language of that paper sounds much more pessimistic, with "alphas are statistically indistinguishable from zero" in the abstract. The key is that they are controlling for the size, value, and momentum effects discussed in my paper. In other words, once we account for the fact that institutional investors are taking advantage of the factor premiums that have been shown to most consistently outperform a simple index strategy, they aren't providing any extra value. This ties in with the idea of "shrinking alpha" or "smart beta" that is currently en vogue in my industry.)
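The "alpha indistinguishable from zero after controlling for factors" point can be made concrete with a small regression sketch. The data below is synthetic and all the numbers are invented for illustration: a portfolio whose returns come purely from size/value/momentum-style factor tilts shows a positive raw average excess return, yet its regression intercept against the factor returns (its alpha) comes out near zero.

```python
import numpy as np

# Synthetic illustration: a portfolio with zero true alpha whose excess
# returns come entirely from loadings on three factor return series.
rng = np.random.default_rng(0)
n = 240                                           # 20 years of monthly data
factors = rng.normal(0.004, 0.03, size=(n, 3))    # stand-ins for SMB/HML/UMD
betas = np.array([0.2, 0.4, 0.3])                 # assumed factor loadings
excess = factors @ betas + rng.normal(0, 0.01, n) # no skill term added

# Regress excess returns on an intercept plus the factor returns.
X = np.column_stack([np.ones(n), factors])
coef, *_ = np.linalg.lstsq(X, excess, rcond=None)
alpha, loadings = coef[0], coef[1:]
print(alpha)          # near zero: no value beyond the factor tilts
print(excess.mean())  # yet the raw average excess return is positive
```

The portfolio "outperforms" in raw terms because the factors themselves earn premiums, but the intercept shows no skill beyond harvesting those premiums, which is exactly the distinction the Busse, Goyal, and Wahal abstract is drawing.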
I'm happy to address further questions and criticisms in the comments.
Here is an interesting blog post about a guy who did a resume experiment between two positions which he argues are by experience identical, but occupy different "social status" positions in tech: A software engineer and a data manager.
Interview A: as Software Engineer
Bill faced five hour-long technical interviews. Three went well. One was so-so, because it focused on implementation details of the JVM, and Bill’s experience was almost entirely in C++, with a bit of hobbyist OCaml. The last interview sounds pretty hellish. It was with the VP of Data Science, Bill’s prospective boss, who showed up 20 minutes late and presented him with one of those interview questions where there’s “one right answer” that took months, if not years, of in-house trial and error to discover. It was one of those “I’m going to prove that I’m smarter than you” interviews...
Let’s recap this. Bill passed three of his five interviews with flying colors. One of the interviewers, a few months later, tried to recruit Bill to his own startup. The fourth interview was so-so, because he wasn’t a Java expert, but came out neutral. The fifth, he failed because he didn’t know the in-house Golden Algorithm that took years of work to discover. When I asked that VP/Data Science directly why he didn’t hire Bill (and he did not know that I knew Bill, nor about this experiment) the response I got was “We need people who can hit the ground running.” Apparently, there’s only a “talent shortage” when startup people are trying to scam the government into changing immigration policy. The undertone of this is that “we don’t invest in people”.
Or, for a point that I’ll come back to, software engineers lack the social status necessary to make others invest in them.
Interview B: as Data Science Manager
A couple weeks later, Bill interviewed at a roughly equivalent company for the VP-level position, reporting directly to the CTO.
Worth noting is that we did nothing to make Bill more technically impressive than for Company A. If anything, we made his technical story more honest, by modestly inflating his social status while telling a “straight shooter” story for his technical experience. We didn’t have to cover up periods of low technical activity; that he was a manager, alone, sufficed to explain those away.
Bill faced four interviews, and while the questions were behavioral and would be “hard” for many technical people, he found them rather easy to answer with composure. I gave him the Golden Answer, which is to revert to “There’s always a trade-off between wanting to do the work yourself, and knowing when to delegate.” It presents one as having managerial social status (the ability to delegate) but also a diligent interest in, and respect for, the work. It can be adapted to pretty much any “behavioral” interview question...
Bill passed. Unlike for a typical engineering position, there were no reference checks. The CEO said, “We know you’re a good guy, and we want to move fast on you”. As opposed to the 7-day exploding offers typically served to engineers, Bill had 2 months in which to make his decision. He got a fourth week of vacation without even having to ask for it, and genuine equity (about 75% of a year’s salary vesting each year)...
It was really interesting, as I listened in, to see how different things are once you’re “in the club”. The CEO talked to Bill as an equal, not as a paternalistic, bullshitting, “this is good for your career” authority figure. There was a tone of equality that a software engineer would never get from the CEO of a 100-person tech company.
The author concludes that positions that are labeled as code-monkey-like are low status, while positions that are labeled as managerial are high status. Even if they are "essentially" doing the same sort of work.
Not sure about this methodology, but it's food for thought.
The article is titled "The scientific A-Team saving the world from killer viruses, rogue AI and the paperclip apocalypse", and features interviews with Martin Rees, Huw Price, Jaan Tallinn and Partha Dasgupta. The author takes a rather positive tone about CSER and MIRI's endeavours, and mentions x-risks other than AI (bioengineered pandemic, global warming with human interference, distributed manufacturing).
I find it interesting that the inferential distance for the layman to the concept of paperclipping AI is much reduced by talking about paperclipping America, rather than the entire universe: though the author admits still struggling with the concept. Unusually for a journalist who starts off unfamiliar with these concepts, he writes in a tone that suggests that he takes the ideas seriously, without the sort of "this is very far-fetched and thus I will not lower myself to seriously considering it" countersignalling usually seen with x-risk coverage. There is currently the usual degree of incredulity in the comments section though.
For those unfamiliar with The Guardian, it is a British left-leaning newspaper with a heavy focus on social justice and left-wing political issues.
Attempt at the briefest content-full Less Wrong post:
Once AI is developed, it could "easily" colonise the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.