
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Goal retention discussion with Eliezer

56 MaxTegmark 04 September 2014 10:23PM

Although I feel that Nick Bostrom’s new book “Superintelligence” is generally awesome and a well-needed milestone for the field, I do have one quibble: both he and Steve Omohundro appear to be more convinced than I am by the assumption that an AI will naturally tend to retain its goals as it reaches a deeper understanding of the world and of itself. I’ve written a short essay on this issue from my physics perspective, available at http://arxiv.org/pdf/1409.0813.pdf.

Eliezer Yudkowsky just sent the following extremely interesting comments, and told me he was OK with me sharing them here to spur a broader discussion of these issues, so here goes.

On Sep 3, 2014, at 17:21, Eliezer Yudkowsky <yudkowsky@gmail.com> wrote:

Hi Max!  You're asking the right questions.  Some of the answers we can
give you, some we can't, few have been written up and even fewer in any
well-organized way.  Benja or Nate might be able to expound in more detail
while I'm in my seclusion.

Very briefly, though:
The problem of utility functions turning out to be ill-defined in light of
new discoveries of the universe is what Peter de Blanc named an
"ontological crisis" (not necessarily a particularly good name, but it's
what we've been using locally).

http://intelligence.org/files/OntologicalCrises.pdf

The way I would phrase this problem now is that an expected utility
maximizer makes comparisons between quantities that have the type
"expected utility conditional on an action", which means that the AI's
utility function must be something that can assign utility-numbers to the
AI's model of reality, and these numbers must have the further property
that there is some computationally feasible approximation for calculating
expected utilities relative to the AI's probabilistic beliefs.  This is a
constraint that rules out the vast majority of all completely chaotic and
uninteresting utility functions, but does not rule out, say, "make lots of
paperclips".

Models also have the property of being Bayes-updated using sensory
information; for the sake of discussion let's also say that models are
about universes that can generate sensory information, so that these
models can be probabilistically falsified or confirmed.  Then an
"ontological crisis" occurs when the hypothesis that best fits sensory
information corresponds to a model that the utility function doesn't run
on, or doesn't detect any utility-having objects in.  The example of
"immortal souls" is a reasonable one.  Suppose we had an AI that had a
naturalistic version of a Solomonoff prior, a language for specifying
universes that could have produced its sensory data.  Suppose we tried to
give it a utility function that would look through any given model, detect
things corresponding to immortal souls, and value those things.  Even if
the immortal-soul-detecting utility function works perfectly (it would in
fact detect all immortal souls) this utility function will not detect
anything in many (representations of) universes, and in particular it will
not detect anything in the (representations of) universes we think have
most of the probability mass for explaining our own world.  In this case
the AI's behavior is undefined until you tell me more things about the AI;
an obvious possibility is that the AI would choose most of its actions
based on low-probability scenarios in which hidden immortal souls existed
that its actions could affect.  (Note that even in this case the utility
function is stable!)
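The immortal-souls scenario can be sketched in a few lines of toy code (my own illustration, not from the email; the model structure and field names are made up): a utility function written against one ontology simply detects nothing when the model that best fits the data uses a different one.

```python
# Toy illustration: a utility function bound to one ontology finds
# nothing to value when the best-fitting world model changes.

def soul_utility(world_model):
    """Counts the 'immortal soul' objects the model represents."""
    return sum(1 for obj in world_model["objects"]
               if obj["kind"] == "immortal_soul")

# Model the AI starts with: souls are ontologically basic.
dualist_model = {"objects": [{"kind": "immortal_soul"}, {"kind": "body"}]}

# Model that best fits sensory data after more physics is learned.
physical_model = {"objects": [{"kind": "atom"} for _ in range(5)]}

print(soul_utility(dualist_model))   # detector fires: utility 1
print(soul_utility(physical_model))  # detector finds nothing: utility 0
```

The detector works perfectly in the ontology it was written for; the crisis is that the new ontology contains nothing for it to bind to.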

Since we don't know the final laws of physics and could easily be
surprised by further discoveries in the laws of physics, it seems pretty
clear that we shouldn't be specifying a utility function over exact
physical states relative to the Standard Model, because if the Standard
Model is even slightly wrong we get an ontological crisis.  Of course
there are all sorts of extremely good reasons we should not try to do this
anyway, some of which are touched on in your draft; there just is no
simple function of physics that gives us something good to maximize.  See
also Complexity of Value, Fragility of Value, indirect normativity, the
whole reason for a drive behind CEV, and so on.  We're almost certainly
going to be using some sort of utility-learning algorithm, the learned
utilities are going to bind to modeled final physics by way of modeled
higher levels of representation which are known to be imperfect, and we're
going to have to figure out how to preserve the model and learned
utilities through shifts of representation.  E.g., the AI discovers that
humans are made of atoms rather than being ontologically fundamental
humans, and furthermore the AI's multi-level representations of reality
evolve to use a different sort of approximation for "humans", but that's
okay because our utility-learning mechanism also says how to re-bind the
learned information through an ontological shift.

This sorta thing ain't going to be easy which is the other big reason to
start working on it well in advance.  I point out however that this
doesn't seem unthinkable in human terms.  We discovered that brains are
made of neurons but were nonetheless able to maintain an intuitive grasp
on what it means for them to be happy, and we don't throw away all that
info each time a new physical discovery is made.  The kind of cognition we
want does not seem inherently self-contradictory.

Three other quick remarks:

*)  The Omohundrian/Yudkowskian argument is not that we can take an arbitrary
stupid young AI and it will be smart enough to self-modify in a way that
preserves its values, but rather that most AIs that don't self-destruct
will eventually end up at a stable fixed-point of coherent
consequentialist values.  This could easily involve a step where, e.g., an
AI that started out with a neural-style delta-rule policy-reinforcement
learning algorithm, or an AI that started out as a big soup of
self-modifying heuristics, is "taken over" by whatever part of the AI
first learns to do consequentialist reasoning about code.  But this
process doesn't repeat indefinitely; it stabilizes when there's a
consequentialist self-modifier with a coherent utility function that can
precisely predict the results of self-modifications.  The part where this
does happen to an initial AI that is under this threshold of stability is
a big part of the problem of Friendly AI and it's why MIRI works on tiling
agents and so on!

*)  Natural selection is not a consequentialist, nor is it the sort of
consequentialist that can sufficiently precisely predict the results of
modifications that the basic argument should go through for its stability.
It built humans to be consequentialists that would value sex, not value
inclusive genetic fitness, and not value being faithful to natural
selection's optimization criterion.  Well, that's dumb, and of course the
result is that humans don't optimize for inclusive genetic fitness.
Natural selection was just stupid like that.  But that doesn't mean
there's a generic process whereby an agent rejects its "purpose" in the
light of exogenously appearing preference criteria.  Natural selection's
anthropomorphized "purpose" in making human brains is just not the same as
the cognitive purposes represented in those brains.  We're not talking
about spontaneous rejection of internal cognitive purposes based on their
causal origins failing to meet some exogenously-materializing criterion of
validity.  Our rejection of "maximize inclusive genetic fitness" is not an
exogenous rejection of something that was explicitly represented in us,
that we were explicitly being consequentialists for.  It's a rejection of
something that was never an explicitly represented terminal value in the
first place.  Similarly the stability argument for sufficiently advanced
self-modifiers doesn't go through a step where the successor form of the
AI reasons about the intentions of the previous step and respects them
apart from its constructed utility function.  So the lack of any universal
preference of this sort is not a general obstacle to stable
self-improvement.

*)   The case of natural selection does not illustrate a universal
computational constraint, it illustrates something that we could
anthropomorphize as a foolish design error.  Consider humans building Deep
Blue.  We built Deep Blue to attach a sort of default value to queens and
central control in its position evaluation function, but Deep Blue is
still perfectly able to sacrifice queens and central control alike if the
position reaches a checkmate thereby.  In other words, although an agent
needs crystallized instrumental goals, it is also perfectly reasonable to
have an agent which never knowingly sacrifices the terminally defined
utilities for the crystallized instrumental goals if the two conflict;
indeed "instrumental value of X" is simply "probabilistic belief that X
leads to terminal utility achievement", which is sensibly revised in the
presence of any overriding information about the terminal utility.  To put
it another way, in a rational agent, the only way a loose generalization
about instrumental expected-value can conflict with and trump terminal
actual-value is if the agent doesn't know it, i.e., it does something that
it reasonably expected to lead to terminal value, but it was wrong.
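The Deep Blue point can be sketched as toy code (my own illustration; the weights and field names are invented, not Deep Blue's actual evaluation function): a crystallized instrumental heuristic guides ordinary play, but is overridden the moment terminal value is directly in view.

```python
# Toy evaluation function: instrumental proxies (queens, central control)
# carry default value, but terminal value (checkmate) overrides them.

def evaluate(position):
    if position["checkmate_for_us"]:
        return float("inf")  # terminal utility dominates any instrumental proxy
    # crystallized instrumental proxies: material and central control
    return 9 * position["our_queens"] + position["central_control"]

keep_queen = {"our_queens": 1, "central_control": 3, "checkmate_for_us": False}
queen_sac  = {"our_queens": 0, "central_control": 0, "checkmate_for_us": True}

best = max([keep_queen, queen_sac], key=evaluate)
# the agent knowingly sacrifices the queen: the proxy never trumps
# the terminal value it was a proxy for
```

The proxy is not a competing goal; it is a cached belief about what leads to terminal utility, revised the moment better information arrives.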

This has been very off-the-cuff and I think I should hand this over to
Nate or Benja if further replies are needed, if that's all right.

[meta] Future moderation and investigation of downvote abuse cases, or, I don't want to deal with this stuff

45 Kaj_Sotala 17 August 2014 02:40PM

Since the episode with Eugine_Nier, I have received three private messages from different people asking me to investigate various cases of suspected mass downvoting. And to be quite honest, I don't want to deal with this. Eugine's case was relatively clear-cut, since he had engaged in systematic downvoting of a massive scale, but the new situations are a lot fuzzier and I'm not sure of what exactly the rules should be (what counts as a permitted use of the downvote system and what doesn't?).

At least one person has also privately contacted me and offered to carry out moderator duties if I don't want them, but even if I told them yes (on what basis? why them and not someone else?), I don't know what kind of policy I should tell them to enforce. I only happened to be appointed a moderator because I was in the list of top 10 posters at a particular time, and I don't feel like I should have any particular authority to make the rules. Nor do I feel like I have any good idea of what the rules should be, or who would be the right person to enforce them.

In any case, I don't want to be doing this job, nor do I particularly feel like being responsible for figuring out who should, or how, or what the heck. I've already started visiting LW less often because I dread having new investigation requests to deal with. So if you folks could be so kind as to figure it out without my involvement? If there's a clear consensus that someone in particular should deal with this, I can give them mod powers, or something.

Anthropic signature: strange anti-correlations

42 Stuart_Armstrong 21 October 2014 04:59PM

Imagine that the only way that civilization could be destroyed was by a large pandemic that occurred at the same time as a large recession, so that governments and other organisations were too weakened to address the pandemic properly.

Then if we looked at the past, as observers in a non-destroyed civilization, what would we expect to see? We could see years with no pandemics or no recessions; we could see mild pandemics, mild recessions, or combinations of the two; we could see large pandemics with no or mild recessions; or we could see large recessions with no or mild pandemics. We wouldn't see large pandemics combined with large recessions, as that would have caused us to never come into existence. These are the only things ruled out by anthropic effects.

Assume that pandemics and recessions are independent (at least, in any given year) in terms of "objective" (non-anthropic) probabilities. Then what would we see? We would see that pandemics and recessions appear to be independent when either of them are of small intensity. But as the intensity rose, they would start to become anti-correlated, with a large version of one completely precluding a large version of the other.

The effect is even clearer if we have a probabilistic relation between pandemics, recessions and extinction (something like: extinction risk proportional to product of recession size times pandemic size). Then we would see an anti-correlation rising smoothly with intensity.
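This shading of correlation with magnitude shows up readily in a quick Monte Carlo sketch (my own illustration, under the stated assumptions: severities independent and uniform on [0,1], extinction probability equal to their product):

```python
import random

random.seed(0)
survivors = []
for _ in range(100_000):
    pandemic = random.random()   # severity in [0, 1]
    recession = random.random()
    # extinction probability proportional to the product of severities
    if random.random() < pandemic * recession:
        continue  # no observers in these histories
    survivors.append((pandemic, recession))

def pearson(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

mild = [(x, y) for x, y in survivors if x < 0.5 and y < 0.5]
print(pearson(survivors))  # clearly negative overall (about -0.15)
print(pearson(mild))       # much weaker among mild years
```

Even though the two severities are objectively independent, conditioning on survival makes them anti-correlated, and the effect concentrates in the large-magnitude tail.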

Thus one way of looking for anthropic effects in humanity's past is to look for different classes of incidents that are uncorrelated at small magnitude, and anti-correlated at large magnitudes. More generally, to look for different classes of incidents where the correlation changes at different magnitudes - without any obvious reasons. That might be the signature of an anthropic disaster we missed - or rather, that missed us.

[meta] New LW moderator: Viliam_Bur

37 Kaj_Sotala 13 September 2014 01:37PM

Some time back, I wrote that I was unwilling to continue with investigations into mass downvoting, and asked people for suggestions on how to deal with them from now on. The top-voted proposal in that thread suggested making Viliam_Bur into a moderator, and Viliam gracefully accepted the nomination. So I have given him moderator privileges and also put him in contact with jackk, who provided me with the information necessary to deal with the previous cases. Future requests about mass downvote investigations should be directed to Viliam.

Thanks a lot for agreeing to take up this responsibility, Viliam! It's not an easy one, but I'm very grateful that you're willing to do it. Please post a comment here so that we can reward you with some extra upvotes. :)

LW client-side comment improvements

35 Bakkot 07 August 2014 08:40PM

All of these things I mentioned in the most recent open thread, but since the first one is directly relevant and the comment where I posted it somewhat hard to come across, I figured I'd make a post too.

 

Custom Comment Highlights

NOTE FOR FIREFOX USERS: this contained a bug (now squashed) that caused the list of comments not to be populated automatically (depending on your version of Firefox). I suggest reinstalling. Sorry, no automatic updates unless you use the Chrome extension (though with >50% probability there will be no further updates).

You know how the highlight for new comments on Less Wrong threads disappears if you reload the page, making it difficult to find those comments again? Here is a userscript you can install to fix that (provided you're on Firefox or Chrome). Once installed, you can set the date after which comments are highlighted, and easily scroll to new comments. See screenshots. Installation is straightforward (especially for Chrome, since I made an extension as well).

Bonus: works even if you're logged out or don't have an account, though you'll have to set the highlight time manually.


Delay Before Commenting

Another script to add a delay and checkbox reading "In posting this, I am making a good-faith contribution to the collective search for truth." before allowing you to comment. Made in response to a comment by army1987.


Slate Star Codex Comment Highlighter

Edit: You no longer need to install this, since Scott's added it to his blog. Unless you want the little numbers in the title bar.

Yet another script, to make finding recent comments over at Slate Star Codex a lot easier. Also comes in Chrome extension flavor. See screenshots. Not directly relevant to Less Wrong, but there's a lot of overlap in readership, so you may be interested.

Note for LW Admins / Yvain
These would be straightforward to make available to all users (on sufficiently modern browsers), since they're just a bit of Javascript getting injected. If you'd like to, feel free, and message me if I can be of help.

[LINK] Speed superintelligence?

34 Stuart_Armstrong 14 August 2014 03:57PM

From Toby Ord:

Tool assisted speedruns (TAS) are when people take a game and play it frame by frame, effectively providing super reflexes and forethought, where they can spend a day deciding what to do in the next 1/60th of a second if they wish. There are some very extreme examples of this, showing what can be done if you really play a game perfectly. For example, this video shows how to win Super Mario Bros 3 in 11 minutes. It shows how different optimal play can be from normal play. In particular, on level 8-1, it gains 90 extra lives by a sequence of amazing jumps.

Other TAS runs get more involved and start exploiting subtle glitches in the game. For example, this page talks about speed running NetHack, using a lot of normal tricks, as well as luck manipulation (exploiting the RNG) and exploiting a dangling pointer bug to rewrite parts of memory.

Though there are limits to what AIs could do with sheer speed, it's interesting that great performance can be achieved with speed alone, that this allows different strategies from usual ones, and that it allows the exploitation of otherwise unexploitable glitches and bugs in the setup.

Quantified Risks of Gay Male Sex

33 pianoforte611 18 August 2014 11:55PM

If you are a gay male then you’ve probably worried at one point about sexually transmitted diseases. Indeed, men who have sex with men have some of the highest prevalence of many of these diseases. And if you’re not a gay male, you’ve probably still thought about STDs at one point. But how much should you worry? There are many organizations and resources that will tell you to wear a condom, but very few will tell you the relative risks of wearing a condom vs not. I’d like to provide a concise summary of the risks associated with gay male sex and the extent to which these risks can be reduced. (See Mark Manson’s guide for a similar resource for heterosexual sex.) I will do so by first giving some information about each disease, including its prevalence among gay men. Most of this data will come from the US, but the US actually has an unusually high prevalence for many diseases; certainly HIV is much less common in many parts of Europe. I will end with a case study of HIV, which will include an analysis of the probabilities of transmission broken down by the nature of the sex act and a discussion of risk reduction techniques.

When dealing with risks associated with sex, there are a few relevant parameters. The most common is the prevalence – the proportion of people in the population that have the disease. Since you can only get a disease from someone who has it, the prevalence is arguably the most important statistic. There are two more relevant statistics – the per-act infectivity (the chance of contracting the disease after having sex once) and the per-partner infectivity (the chance of contracting the disease after having sex with one partner for the duration of the relationship). As it turns out, the latter two probabilities are very difficult to calculate; I only obtained those values for HIV. It is especially difficult to determine per-act risks for specific types of sex acts, since many MSM engage in a variety of acts with multiple partners. Nevertheless, estimates do exist and will be explored in detail in the HIV case study section.
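The gap between per-act and per-partner infectivity can be bridged, very roughly, by assuming acts are independent (a strong assumption - infectivity actually varies a great deal between partners, which is one reason per-partner numbers are hard to pin down):

```python
# Under an independence assumption, a per-act transmission probability p
# compounds over n acts as 1 - (1 - p)^n.
def cumulative_risk(per_act_p, n_acts):
    return 1 - (1 - per_act_p) ** n_acts

# Illustrative only: ~1% per act compounded over 100 acts
# gives roughly a 63% cumulative risk.
print(round(cumulative_risk(0.01, 100), 2))  # 0.63
```

Real per-partner data do not follow this curve closely, which is exactly why the two statistics are treated separately here.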

HIV

Prevalence: Between 13% and 28%. My guess is about 13%.

The most infamous of the STDs. There is no cure but it can be managed with anti-retroviral therapy. A commonly reported statistic is that 19% of MSM (men who have sex with men) in the US are HIV positive (1). For black MSM, this number was 28% and for white MSM this number was 16%. This is likely an overestimate, however, since the sample used was gay men who frequent bars and clubs. My estimate of 13% comes from CDC's total HIV prevalence in gay men of 590,000 (2) and their data suggesting that MSM comprise 2.9% of men in the US (3).

 

Gonorrhea

Prevalence: Between 9% and 15% in the US

This disease affects the throat and the genitals but it is treatable with antibiotics. The CDC estimates 15.5% prevalence (4). However, this is likely an overestimate since the sample used was gay men in health clinics. Another sample (in San Francisco health clinics) had a pharyngeal gonorrhea prevalence of 9% (5).

 

Syphilis

Prevalence: 0.825% in the US

My estimate was calculated in the same manner as my estimate for HIV. I used the CDC's data (6). Syphilis is transmittable by oral and anal sex (7) and causes genital sores that may look harmless at first (8). Syphilis is curable with penicillin; however, the presence of sores increases the infectivity of HIV.

 

Herpes (HSV-1 and HSV-2)

Prevalence: HSV-2 - 18.4% (9); HSV-1 - ~75%, based on Australian data (10)

This disease is mostly asymptomatic and can be transmitted through oral or anal sex. Sometimes sores will appear and they will usually go away with time. For the same reason as syphilis, herpes can increase the chance of transmitting HIV. The estimate for HSV-1 is probably too high. Snowball sampling was used and most of the men recruited were heavily involved in organizations for gay men and were sexually active in the past 6 months. Also half of them reported unprotected anal sex in the past six months. The HSV-2 sample came from a random sample of US households (11).

 

Chlamydia

Prevalence: Rectal - 0.5% - 2.3%; Pharyngeal - 3.0% - 10.5% (12)

Like herpes, it is often asymptomatic - perhaps as few as 10% of infected men report symptoms. It is curable with antibiotics.

 

HPV

Prevalence: 47.2% (13)

This disease is incurable (though a vaccine exists for men and women) but usually asymptomatic. It is capable of causing cancers of the penis, throat and anus. Oddly, there are no common tests for HPV, in part because there are many strains (over 100), most of which are relatively harmless. Sometimes it goes away on its own (14). The prevalence rate was oddly difficult to find; the number I cited came from a sample of men from Brazil, Mexico and the US.

 

Case Study of HIV transmission; risks and strategies for reducing risk

 IMPORTANT: None of the following figures should be generalized to other diseases. Many of these numbers are not even the same order of magnitude as the numbers for other diseases. For example, HIV is especially difficult to transmit via oral sex, but Herpes can very easily be transmitted.

Unprotected oral sex per-act risk (with a positive partner or partner of unknown serostatus):

Non-zero but very small. Best guess 0.03% without condom (15)

Unprotected anal sex per-act risk (with positive partner):

Receptive: 0.82% - 1.4% (16) (17)

Insertive, circumcised: 0.11% (18)

Insertive, uncircumcised: 0.62% (18)

Protected anal sex per-act risk (with positive partner):

Estimates range from 2 times lower to 20 times lower (16) (19), and the risk is highly dependent on the slippage and breakage rate.


Contracting HIV from oral sex is very rare. In one study, 67 men reported performing oral sex on at least one HIV positive partner and none were infected (20). However, transmission is possible (15). Because instances of oral transmission of HIV are so rare, the risk is hard to calculate so should be taken with a grain of salt. The number cited was obtained from a group of individuals that were either HIV positive or high risk for HIV. The per act-risk with a positive partner is therefore probably somewhat higher.

 Note that different HIV positive men have different levels of infectivity hence the wide range of values for per-act probability of transmission. Some men with high viral loads (the amount of HIV in the blood) may have an infectivity of greater than 10% per unprotected anal sex act (17).

 

Risk reducing strategies

 Choosing sex acts that have a lower transmission rate (oral sex, protected insertive anal sex, non-insertive) is one way to reduce risk. Monogamy, testing, antiretroviral therapy, PEP and PrEP are five other ways.

 

Testing Your partner/ Monogamy

 If your partner tests negative then they are very unlikely to have HIV. There is a 0.047% chance of being HIV positive if they tested negative using a blood test and a 0.29% chance of being HIV positive if they tested negative using an oral test. If they did further tests then the chance is even lower. (See the section after the next paragraph for how these numbers were calculated).

 So if your partner tests negative, the real danger is not the test giving an incorrect result. The danger is that your partner was exposed to HIV before the test, but his body had not started to make antibodies yet. Since this can take weeks or months, it is possible for your partner who tested negative to still have HIV even if you are both completely monogamous.

 ____

For tests, the sensitivity - the probability that an HIV positive person will test positive - is 99.68% for blood tests (21), 98.03% with oral tests. The specificity - the probability that an HIV negative person will test negative - is 99.74% for oral tests and 99.91% for blood tests. Hence the probability that a person who tested negative will actually be positive is:

 P(Positive | tested negative) = P(Positive)*(1-sensitivity)/(P(Negative)*specificity + P(Positive)*(1-sensitivity)) = 0.047% for blood test, 0.29% for oral test

 Where P(Positive) = Prevalence of HIV, I estimated this to be 13%.

 However, according to a writer for About.com (22) - a doctor who works with HIV - there are often multiple tests which drive the sensitivity up to 99.997%.

 

Home Testing

Oraquick is an HIV test that you can purchase online and do yourself at home. It costs $39.99 for one kit. The sensitivity is 93.64% and the specificity is 99.87% (23). The probability that someone who tested negative will actually be HIV positive is 0.94%, assuming a 13% prevalence for HIV. The same danger mentioned above applies - if the infection occurred recently, the test would not detect it.
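The negative-predictive figures above (0.047%, 0.29%, and 0.94%) follow from the stated sensitivities and specificities via Bayes' theorem; a quick check:

```python
def p_positive_given_negative(prevalence, sensitivity, specificity):
    """P(infected | tested negative), by Bayes' theorem."""
    false_neg = prevalence * (1 - sensitivity)       # infected, missed by test
    true_neg = (1 - prevalence) * specificity        # uninfected, correctly cleared
    return false_neg / (false_neg + true_neg)

prev = 0.13  # estimated HIV prevalence among MSM, from the text
print(p_positive_given_negative(prev, 0.9968, 0.9991))  # blood: ~0.0005
print(p_positive_given_negative(prev, 0.9803, 0.9974))  # oral:  ~0.0029
print(p_positive_given_negative(prev, 0.9364, 0.9987))  # home:  ~0.0094
```

Note how strongly the result depends on the assumed prevalence: for someone drawn from a lower-prevalence population, the same test gives a much smaller residual risk.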

 

 Anti-Retroviral therapy

Highly active anti-retroviral therapy (HAART), when successful, can reduce the viral load – the amount of HIV in the blood – to low or undetectable levels. Baggaley et al. (17) report that in heterosexual couples, there have been some models relating viral load to infectivity. They apply these models to MSM and report that the per-act risk for unprotected anal sex with a positive partner should be 0.061%. However, they note that different models produce very different results, so this number should be taken with a grain of salt.

 

 Post-Exposure Prophylaxis (PEP)

A last resort if you think you were exposed to HIV is to undergo post-exposure prophylaxis within 72 hours. Antiretroviral drugs are taken for about a month in the hope of preventing HIV from infecting any cells. In one case-control study, some health care workers who were exposed to HIV were given PEP and some were not (this was not under the control of the experimenters). Workers who contracted HIV were less likely to have been given PEP, with an odds ratio of 0.19 (24). I don’t know whether PEP is equally effective at mitigating risk from other sources of exposure.

 

 Pre-Exposure Prophylaxis (PrEP)

This is a relatively new risk reduction strategy. Instead of taking anti-retroviral drugs after exposure, you take anti-retroviral drugs every day in order to prevent HIV infection. I could not find a per-act risk, but in a randomized controlled trial, MSM who took PrEP were less likely to become infected with HIV than men who did not (relative reduction - 41%). The average number of sex partners was 18. For men who were more consistent, with a 90% adherence rate, the relative reduction was better - 73%. (25) (26)

1: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5937a2.htm?s_cid=mm5937a2_w

2: http://www.cdc.gov/hiv/statistics/basics/ataglance.html

3: http://www.cdc.gov/nchs/data/ad/ad362.pdf

4: http://www.cdc.gov/std/stats10/msm.htm

5: http://cid.oxfordjournals.org/content/41/1/67.short

6: http://www.cdc.gov/std/syphilis/STDFact-MSM-Syphilis.htm

7: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5341a2.htm

8: http://www.cdc.gov/std/syphilis/stdfact-syphilis.htm

9: http://journals.lww.com/stdjournal/Abstract/2010/06000/Men_Who_Have_Sex_With_Men_in_the_United_States_.13.aspx

10: http://jid.oxfordjournals.org/content/194/5/561.full

11: http://www.nber.org/nhanes/nhanes-III/docs/nchs/manuals/planop.pdf

12: http://www.cdc.gov/std/chlamydia/STDFact-Chlamydia-detailed.htm

13: http://jid.oxfordjournals.org/content/203/1/49.short

14: http://www.cdc.gov/std/hpv/stdfact-hpv-and-men.htm

15: http://journals.lww.com/aidsonline/pages/articleviewer.aspx?year=1998&issue=16000&article=00004&type=fulltext#P80

16: http://aje.oxfordjournals.org/content/150/3/306.short

17: http://ije.oxfordjournals.org/content/early/2010/04/20/ije.dyq057.full

18: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2852627/

19:

http://journals.lww.com/stdjournal/Fulltext/2002/01000/Reducing_the_Risk_of_Sexual_HIV_Transmission_.7.aspx

20:

http://journals.lww.com/aidsonline/Fulltext/2002/11220/Risk_of_HIV_infection_attributable_to_oral_sex.22.aspx

21: http://www.thelancet.com/journals/laninf/article/PIIS1473-3099%2811%2970368-1/abstract

22:

http://aids.about.com/od/hivpreventionquestions/f/How-Often-Do-False-Positive-And-False-Negative-Hiv-Test-Results-Occur.htm

23: http://www.ncbi.nlm.nih.gov/pubmed/18824617

24: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD002835.pub3/abstract

25: http://www.nejm.org/doi/full/10.1056/Nejmoa1011205#t=articleResults

26: http://www.cmaj.ca/content/184/10/1153.short

Six Plausible Meta-Ethical Alternatives

33 Wei_Dai 06 August 2014 12:04AM

In this post, I list six metaethical possibilities that I think are plausible, along with some arguments or plausible stories about how/why they might be true, where that's not obvious. A lot of people seem fairly certain in their metaethical views, but I'm not and I want to convey my uncertainty as well as some of the reasons for it.

  1. Most intelligent beings in the multiverse share similar preferences. This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts. There are occasional paperclip maximizers that arise, but they are a relatively minor presence or tend to be taken over by more sophisticated minds.
  2. Facts about what everyone should value exist, and most intelligent beings have a part of their mind that can discover moral facts and find them motivating, but those parts don't have full control over their actions. These beings eventually build or become rational agents with values that represent compromises between different parts of their minds, so most intelligent beings end up having shared moral values along with idiosyncratic values.
  3. There aren't facts about what everyone should value, but there are facts about how to translate non-preferences (e.g., emotions, drives, fuzzy moral intuitions, circular preferences, non-consequentialist values, etc.) into preferences. These facts may include, for example, what is the right way to deal with ontological crises. The existence of such facts seems plausible because if there were facts about what is rational (which seems likely) but no facts about how to become rational, that would seem like a strange state of affairs.
  4. None of the above facts exist, so the only way to become or build a rational agent is to just think about what preferences you want your future self or your agent to hold, until you make up your mind in some way that depends on your psychology. But at least this process of reflection is convergent at the individual level so each person can reasonably call the preferences that they endorse after reaching reflective equilibrium their morality or real values.
  5. None of the above facts exist, and reflecting on what one wants turns out to be a divergent process (e.g., it's highly sensitive to initial conditions, like whether or not you drank a cup of coffee before you started, or to the order in which you happen to encounter philosophical arguments). There are still facts about rationality, so at least agents that are already rational can call their utility functions (or the equivalent of utility functions in whatever decision theory ends up being the right one) their real values.
  6. There aren't any normative facts at all, including facts about what is rational. For example, it turns out there is no one decision theory that does better than every other decision theory in every situation, and there is no obvious or widely-agreed-upon way to determine which one "wins" overall.

(Note that for the purposes of this post, I'm concentrating on morality in the axiological sense (what one should value) rather than in the sense of cooperation and compromise. So alternative 1, for example, is not intended to include the possibility that most intelligent beings end up merging their preferences through some kind of grand acausal bargain.)

It may be useful to classify these possibilities using labels from academic philosophy. Here's my attempt: 1. realist + internalist 2. realist + externalist 3. relativist 4. subjectivist 5. moral anti-realist 6. normative anti-realist. (A lot of debates in metaethics concern the meaning of ordinary moral language, for example whether moral statements refer to facts or merely express attitudes. I mostly ignore such debates in the above list, because it's not clear what implications they have for the questions that I care about.)

One question LWers may have is: where does Eliezer's metaethics fall within this schema? Eliezer says that there are moral facts about what values every intelligence in the multiverse should have, but only humans are likely to discover these facts and be motivated by them. To me, Eliezer's use of language is counterintuitive, and since it seems plausible that there are facts about what everyone should value (or how each person should translate their non-preferences into preferences) that most intelligent beings can discover and be at least somewhat motivated by, I'm reserving the phrase "moral facts" for these. In my language, I think 3 or maybe 4 is probably closest to Eliezer's position.

Hal Finney has just died.

32 cousin_it 28 August 2014 07:39PM

How to write an academic paper, according to me

31 Stuart_Armstrong 15 October 2014 12:29PM

Disclaimer: this is entirely a personal viewpoint, formed by a few years of publication in a few academic fields. EDIT: Many of the comments are very worth reading as well.

Having recently finished a very rushed submission (turns out you can write a novel paper in a day and a half, if you're willing to sacrifice quality and sanity), I've been thinking about how academic papers are structured - and more importantly, how they should be structured.

It seems to me that the key is to consider the audience. Or, more precisely, to consider the audiences - because different people will read your paper to different depths, and you should cater to all of them. An example of this is the "inverted pyramid" structure for many news articles - start with the salient facts, then the most important details, then fill in the other details. The idea is to ensure that a reader who stops reading at any point (which happens often) will nevertheless have got the most complete impression that it was possible to convey in the bit that they did read.

So, with that model in mind, let's consider the different levels of audience for a general academic paper (of course, some papers just can't fit into this mould, but many can):

 


Talking to yourself: A useful thinking tool that seems understudied and underdiscussed

31 chaosmage 09 September 2014 04:56PM

I have returned from a particularly fruitful Google search, with unexpected results.

My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.

This might be just lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics and other cultural studies, intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology, and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, and the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.

Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.

So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?

Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.

  • It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
  • Auditory information is retained more easily, so making thoughts auditory helps remember them later.
  • It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
  • System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
  • It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.

All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.

Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.

I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious, because talking to yourself is an early warning sign of schizophrenia, and is frequent in dementia. But in those cases, it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory-enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself when around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway, so that hasn't really been an issue.

So, what do you think? Useful?

Fixing Moral Hazards In Business Science

30 DavidLS 18 October 2014 09:10PM

I'm a LW reader, two time CFAR alumnus, and rationalist entrepreneur.

Today I want to talk about something insidious: marketing studies.

Until recently I considered studies of this nature merely unfortunate, funny even. However, my recent experiences have caused me to realize the situation is much more serious than this. Product studies are the public's most frequent interaction with science. By tolerating (or worse, expecting) shitty science in commerce, we are undermining the public's perception of science as a whole.

The good news is this appears fixable. I think we can change how startups perform their studies immediately, and use that success to progressively expand.

Product studies have three features that break the assumptions of traditional science: (1) few if any follow up studies will be performed, (2) the scientists are in a position of moral hazard, and (3) the corporation seeking the study is in a position of moral hazard (for example, the filing cabinet bias becomes more of a "filing cabinet exploit" if you have low morals and the budget to perform 20 studies).
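
The "filing cabinet exploit" is cheap to quantify. Here is a quick sketch (my own illustration, not from the post, assuming independent studies and the standard 5% significance threshold) of how often a product with no real effect yields a publishable "significant" result:

```haskell
-- Run n independent studies of a product with no real effect and publish
-- only the ones that hit p < 0.05. Chance that at least one null study
-- comes out "significant" anyway:
falsePositiveChance :: Int -> Double
falsePositiveChance n = 1 - 0.95 ^ n

main :: IO ()
main = mapM_ (print . falsePositiveChance) [1, 5, 20]
-- with a budget of 20 studies, the odds of a publishable
-- false positive are roughly 64%
```

This is why mandatory publication of negative results (point 1 below the fold) and pre-committed analyses matter: they close off the exploit rather than relying on the sponsor's morals.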

I believe we can address points 1 and 2 directly, and overcome point 3 by appealing to greed.

Here's what I'm proposing: we create a webapp that acts as a high quality (though less flexible) alternative to a Contract Research Organization. Since it's a webapp, the cost of doing these less flexible studies will approach the cost of the raw product to be tested. For most web companies, that's $0.

If we spend the time to design the standard protocols well, it's quite plausible any studies done using this webapp will be in the top 1% in terms of scientific rigor.

With the cost low, and the quality high, such a system might become the startup equivalent of citation needed. Once we have a significant number of startups using the system, and as we add support for more experiment types, we will hopefully attract progressively larger corporations.

Is anyone interested in helping? I will personally write the webapp and pay for the security audit if we can reach quorum on the initial protocols.

Companies who have expressed interest in using such a system if we build it:

(I sent out my inquiries at 10pm yesterday, and every one of these companies got back to me by 3am. I don't believe "startups love this idea" is an overstatement.)

So the question is: how do we do this right?

Here are some initial features we should consider:

  • Data will be collected by a webapp controlled by a trusted third party, and will only be editable by study participants.
  • The results will be computed by software decided on before the data is collected.
  • Studies will be published regardless of positive or negative results.
  • Studies will have mandatory general-purpose safety questions. (web-only products likely exempt)
  • Follow up studies will be mandatory for continued use of results in advertisements.
  • All software/contracts/questions used will be open sourced (MIT) and creative commons licensed (CC BY), allowing for easier cross-product comparisons.

Any placebos used in the studies must be available for purchase as long as the results are used in advertising, allowing for trivial study replication.
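
As a sketch of how rigid such a protocol could be made, here is a hypothetical record type (all names and fields are my invention, not a proposed spec) that pins down the analysis and publication rules before any data arrives:

```haskell
-- Hypothetical pre-registration record: everything here is committed
-- before data collection begins.
data Protocol = Protocol
  { productName     :: String
  , analysisScript  :: String    -- hash of the pre-committed analysis code
  , safetyQuestions :: [String]  -- mandatory general-purpose safety items
  , followUpDueDays :: Int       -- deadline for the mandatory follow-up study
  } deriving Show

data Outcome = Positive | Negative | Inconclusive deriving (Show, Eq)

-- Publication is unconditional: the decision function ignores the outcome.
publish :: Protocol -> Outcome -> Bool
publish _ _ = True

main :: IO ()
main = print (map (publish demo) [Positive, Negative, Inconclusive])
  where demo = Protocol "ExampleApp" "sha256:..." ["Any adverse effects?"] 365
```

The point of the design is that the outcome never appears in the publication decision, so there is nothing for a filing cabinet to hide.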

Significant contributors will receive:

  • Co-authorship on the published paper for the protocol.
  • (Through the paper) an Erdos number of 2.
  • The satisfaction of knowing you personally helped restore science's good name (hopefully).

I'm hoping that if a system like this catches on, we can get an "effective startups" movement going :)

So how do we do this right?

Unpopular ideas attract poor advocates: Be charitable

30 mushroom 15 September 2014 07:30PM

Unfamiliar or unpopular ideas will tend to reach you via proponents who:

  •  ...hold extreme interpretations of these ideas.
  • ...have unpleasant social characteristics.
  • ...generally come across as cranks.

The basic idea: It's unpleasant to promote ideas that result in social sanction, and frustrating when your ideas are met with indifference. Both situations are more likely when talking to an ideological out-group. Given a range of positions on an in-group belief, who will decide to promote the belief to outsiders? On average, it will be those who believe the benefits of the idea are large relative to in-group opinion (extremists), those who view the social costs as small (disagreeable people), and those who are dispositionally drawn to promoting weird ideas (cranks).

I don't want to push this pattern too far. This isn't a refutation of any particular idea. There are reasonable people in the world, and some of them even express their opinions in public, (in spite of being reasonable). And sometimes the truth will be unavoidably unfamiliar and unpopular, etc. But there are also...

Some benefits that stem from recognizing these selection effects:

  • It's easier to be charitable to controversial ideas when you recognize that you're interacting with people who are terribly suited to persuade you. I'm not sure "steelmanning" (trying to present the best argument for an opponent's position) is the best idea. Based on the extremity effect, another technique is to construct a much diluted version of the belief, and then try to steelman the diluted belief.
  • If your group holds fringe or unpopular ideas, you can avoid these patterns when you want to influence outsiders.
  • If you want to learn about an issue afflicted by these selection effects, you might ignore the public representatives and speak to the non-evangelical believers instead (you'll probably have to start the conversation).
  • You can resist certain polarizing situations, in which the most visible camps hold extreme and opposing views. This situation worsens when those with non-extreme views judge the risk of participation as excessive, and leave the debate to the extremists (who are willing to take substantial risks for their beliefs). This leads to the perception that the current camps represent the only valid positions, which creates a polarizing loop. Because this is a sort of coordination failure among non-extremists, knowing to covertly look for other non-vocal moderates is a first step toward a solution. (Note: Sometimes there really aren't any moderates.)
  • Related to the previous point: You can avoid exaggerating the ideological unity of a group based on the group's leadership, or believing that the entire group has some obnoxious trait present in the leadership. (Note: In things like elections and war, the views of the leadership are what you care about. But you still don't want to be confused about other group members.)

 

I think the first benefit listed is the most useful.

To sum up: An unpopular idea will tend to get poor representation for social reasons, which makes it seem like a worse idea than it really is, even granting that many unpopular ideas are unpopular for good reason. So when you encounter an idea that seems unpopular, you're probably hearing about it from a sub-optimal source, and you should try to be charitable towards the idea before dismissing it.

Fighting Biases and Bad Habits like Boggarts

30 palladias 21 August 2014 05:07PM

TL;DR: Building humor into your habits for spotting and correcting errors makes the fix more enjoyable, makes the error easier to talk about and receive social support for, and limits the danger of a contempt spiral.

 

One of the most reliably bad decisions I've made on a regular basis is the choice to stay awake (well, "awake") and on the internet past the point where I can get work done, or even have much fun.  I went through a spell where I even fell asleep on the couch more nights than not, unable to muster the will or judgement to get up and go downstairs to bed.

I could remember (even sometimes in the moment) that this was a bad pattern, but, the more tired I was, the more tempting it was to think that I should just buckle down and apply more willpower to be more awake and get more out of my computer time.  Going to bed was a solution, but it was hard for it not to feel (to my sleepy brain and my normal one) like a bit of a cop out.

Only two things helped me really keep this failure mode in check.  One was setting a hard bedtime (and beeminding it) as part of my sacrifice for Advent.   But the other key tool (which has lasted me long past Advent) is the gif below.

[gif: a kid falling asleep while trying to eat an ice cream cone]

The poor kid struggling to eat his ice cream cone, even in the face of his exhaustion, is hilarious.  And not too far off the portrait of me around 2am scrolling through my Feedly.

Thinking about how stupid or ineffective or insufficiently strong-willed I'm being makes it hard for me to do anything that feels like a retreat from my current course of action.  I want to master the situation and prove I'm stronger.  But catching on to the fact that my current situation (of my own making or not) is ridiculous, makes it easier to laugh, shrug, and move on.

I think the difference is that it's easy for me to feel contemptuous of myself when frustrated, and easy to feel fond when amused.

I've tried to strike the new emotional tone when I'm working on catching and correcting other errors. (e.g. "Stupid, you should have known to leave more time to make the appointment! Planning fallacy!" becomes "Heh, I guess you thought that adding two 'trivially short' errands was a closed set, and the total must remain 'trivially short.' That's a pretty silly error.")

In the first case, noticing and correcting an error feels punitive, since it's quickly followed by a hefty dose of flagellation, but the second comes with a quick laugh and an easier shift to a growth mindset framing. Funny stories about errors are also easier to tell, increasing the chance my friends can help catch me out next time, or that I'll be better at spotting the error just by keeping it fresh in my memory. Not to mention, in order to get the joke, I tend to look for a more specific cause of the error than stupid/lazy/etc.

As far as I can tell, it also helps that amusement is a pretty different feeling than the ones that tend to be active when I'm falling into error (frustration, anger, feeling trapped, impatience, etc).  So, for a couple of seconds at least, I'm out of the rut and now need to actively return to it to stay stuck. 

In the heat of the moment of anger/akrasia/etc is a bad time to figure out what's funny, but, if you're reflecting on your errors after the fact, in a moment of consolation, it's easier to go back armed with a helpful reframing, ready to cast Riddikulus!

 

Crossposted from my personal blog, Unequally Yoked.

Announcing The Effective Altruism Forum

29 RyanCarey 24 August 2014 08:07AM

The Effective Altruism Forum will be launched at effective-altruism.com on September 10, British time.

Now seems like a good time to discuss why we might need an Effective Altruism Forum, and how it might compare to LessWrong.

About the Effective Altruism Forum

The motivation for the Effective Altruism Forum is to improve the quality of effective altruist discussion and coordination. A big part of this is to give many of the useful features of LessWrong to effective altruists, including:

 

  • Archived, searchable content (this will begin with archived content from effective-altruism.com)
  • Meetups
  • Nested comments
  • A karma system
  • A dynamically updated list of external effective altruist blogs
  • Introductory materials (this will begin with these articles)

 

The Effective Altruism Forum has been designed by Mihai Badic. Over the last month, it has been developed by Trike Apps, who have built the new site using the LessWrong codebase. I'm glad to report that it is now basically ready, looks nice, and is easy to use.

I expect that at the new forum, as on the effective altruist Facebook and Reddit pages, people will want to discuss which intellectual procedures to use to pick effective actions. I also expect some proposals of effective altruist projects, and offers of resources. So users of the new forum will share LessWrong's interest in instrumental and epistemic rationality. On the other hand, I expect that few of its users will want to discuss the technical aspects of artificial intelligence, anthropics or decision theory, and to the extent that they do so, they will want to do it at LessWrong. As a result, I expect the new forum to cause:

 

  • A bunch of materials on effective altruism and instrumental rationality to be collated for new effective altruists
  • Discussion of old LessWrong materials to resurface
  • A slight increase to the number of users of LessWrong, possibly offset by some users spending more of their time posting at the new forum.

 

At least initially, the new forum won't have a wiki or a Main/Discussion split and won't have any institutional affiliations.

Next Steps:

It's really important to make sure that the Effective Altruism Forum is established with a beneficial culture. If people want to help that process by writing some seed materials, to be posted around the time of the site's launch, then they can contact me at ry [dot] duff [at] gmail.com. Alternatively, they can wait a short while until they automatically receive posting privileges.

It's also important that the Effective Altruism Forum helps the shared goals of rationalists and effective altruists, and has net positive effects on LessWrong in particular. Any suggestions for improving the odds of success for the effective altruism forum are most welcome.

A Day Without Defaults

28 katydee 20 October 2014 08:07AM

Author's note: this post was written on Sunday, Oct. 19th. Its sequel will be written on Sunday, Oct. 27th.

Last night, I went to bed content with a fun and eventful weekend gone by. This morning, I woke up, took a shower, did my morning exercises, and began to eat breakfast before making the commute up to work.

At the breakfast table, though, I was surprised to learn that it was Sunday, not Monday. I had misremembered what day it was and in fact had an entire day ahead of me with nothing on the agenda. At first, this wasn't very interesting, but then I started thinking. What to do with an entirely free day, without any real routine?

I realized that I didn't particularly know what to do, so I decided that I would simply live a day without defaults. At each moment of the day, I would act only in accordance with my curiosity and genuine interest. If I noticed myself becoming bored, disinterested, or otherwise less than enthused about what was going on, I would stop doing it.

What I found was quite surprising. I spent much less time doing routine activities like reading the news and browsing discussion boards, and much more time doing things that I've "always wanted to get around to"-- meditation, trying out a new exercise routine, even just spending some time walking around outside and relaxing in the sun.

Further, this seemed to actually make me more productive. When I sat down to get some work done, it was because I was legitimately interested in finishing my work and curious as to whether I could use a new method I had thought up in order to solve it. I was able to resolve something that's been annoying me for a while in much less time than I thought it would take.

By the end of the day, I started thinking "is there any reason that I don't spend every day like this?" As far as I can tell, there isn't really. I do have a few work tasks that I consider relatively uninteresting, but there are multiple solutions to that problem that I suspect I can implement relatively easily.

My plan is to spend the next week doing the same thing that I did today and then report back. I'm excited to let you all know what I find!

A proof of Löb's theorem in Haskell

28 cousin_it 19 September 2014 01:01PM

I'm not sure if this post is very on-topic for LW, but we have many folks who understand Haskell and many folks who are interested in Löb's theorem (see e.g. Eliezer's picture proof), so I thought why not post it here? If no one likes it, I can always just move it to my own blog.

A few days ago I stumbled across a post by Dan Piponi, claiming to show a Haskell implementation of something similar to Löb's theorem. Unfortunately his code had a couple flaws. It was circular and relied on Haskell's laziness, and it used an assumption that doesn't actually hold in logic (see the second comment by Ashley Yakeley there). So I started to wonder, what would it take to code up an actual proof? Wikipedia spells out the steps very nicely, so it seemed to be just a matter of programming.

Well, it turned out to be harder than I thought.

One problem is that Haskell has no type-level lambdas, which are the most obvious way (by Curry-Howard) to represent formulas with propositional variables. These are very useful for proving stuff in general, and Löb's theorem uses them to build fixpoints by the diagonal lemma.

The other problem is that Haskell is Turing complete, which means it can't really be used for proof checking, because a non-terminating program can be viewed as the proof of any sentence. Several people have told me that Agda or Idris might be better choices in this regard. Ultimately I decided to use Haskell after all, because that way the post will be understandable to a wider audience. It's easy enough to convince yourself by looking at the code that it is in fact total, and transliterate it into a total language if needed. (That way you can also use the nice type-level lambdas and fixpoints, instead of just postulating one particular fixpoint as I did in Haskell.)
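
The totality problem is easy to demonstrate. The following toy example is my own illustration, not part of the proof below; it shows why a Haskell term that merely type-checks carries no logical weight:

```haskell
-- In a Turing-complete language every type is inhabited, so "there is a
-- term of this type" has no logical force. bogusProof type-checks at *any*
-- type, i.e. it "proves" every proposition -- but only by looping forever
-- if it is ever evaluated.
bogusProof :: a
bogusProof = bogusProof

-- A total language like Agda or Idris rejects the definition above, which
-- is why a Haskell "proof" must be checked for totality by eye.
message :: String
message = "bogusProof type-checked but was never evaluated"

main :: IO ()
main = putStrLn message
```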

But the biggest problem for me was that the Web didn't seem to have any good explanations for the thing I wanted to do! At first it seems like modal proofs and Haskell-like languages should be a match made in heaven, but in reality it's full of subtle issues that no one has written down, as far as I know. So I'd like this post to serve as a reference, an example approach that avoids all difficulties and just works.

LW user lmm has helped me a lot with understanding the issues involved, and wrote a candidate implementation in Scala. The good folks on /r/haskell were also very helpful, especially Samuel Gélineau who suggested a nice partial implementation in Agda, which I then converted into the Haskell version below.

To play with it online, you can copy the whole bunch of code, then go to CompileOnline and paste it in the edit box on the left, replacing what's already there. Then click "Compile & Execute" in the top left. If it compiles without errors, that means everything is right with the world, so you can change something and try again. (I hate people who write about programming and don't make it easy to try out their code!) Here we go:

main = return ()
-- Assumptions
data Theorem a
logic1 = undefined :: Theorem (a -> b) -> Theorem a -> Theorem b
logic2 = undefined :: Theorem (a -> b) -> Theorem (b -> c) -> Theorem (a -> c)
logic3 = undefined :: Theorem (a -> b -> c) -> Theorem (a -> b) -> Theorem (a -> c)
data Provable a
rule1 = undefined :: Theorem a -> Theorem (Provable a)
rule2 = undefined :: Theorem (Provable a -> Provable (Provable a))
rule3 = undefined :: Theorem (Provable (a -> b) -> Provable a -> Provable b)
data P
premise = undefined :: Theorem (Provable P -> P)
data Psi
psi1 = undefined :: Theorem (Psi -> (Provable Psi -> P))
psi2 = undefined :: Theorem ((Provable Psi -> P) -> Psi)
-- Proof
step3 :: Theorem (Psi -> Provable Psi -> P)
step3 = psi1
step4 :: Theorem (Provable (Psi -> Provable Psi -> P))
step4 = rule1 step3
step5 :: Theorem (Provable Psi -> Provable (Provable Psi -> P))
step5 = logic1 rule3 step4
step6 :: Theorem (Provable (Provable Psi -> P) -> Provable (Provable Psi) -> Provable P)
step6 = rule3
step7 :: Theorem (Provable Psi -> Provable (Provable Psi) -> Provable P)
step7 = logic2 step5 step6
step8 :: Theorem (Provable Psi -> Provable (Provable Psi))
step8 = rule2
step9 :: Theorem (Provable Psi -> Provable P)
step9 = logic3 step7 step8
step10 :: Theorem (Provable Psi -> P)
step10 = logic2 step9 premise
step11 :: Theorem ((Provable Psi -> P) -> Psi)
step11 = psi2
step12 :: Theorem Psi
step12 = logic1 step11 step10
step13 :: Theorem (Provable Psi)
step13 = rule1 step12
step14 :: Theorem P
step14 = logic1 step10 step13
-- All the steps squished together
lemma :: Theorem (Provable Psi -> P)
lemma = logic2 (logic3 (logic2 (logic1 rule3 (rule1 psi1)) rule3) rule2) premise
theorem :: Theorem P
theorem = logic1 lemma (rule1 (logic1 psi2 lemma))

To make sense of the code, you should interpret the type constructor Theorem as the symbol ⊢ from the Wikipedia proof, and Provable as the symbol ☐. All the assumptions have value "undefined" because we don't care about their computational content, only their types. The assumptions logic1..3 give just enough propositional logic for the proof to work, while rule1..3 are direct translations of the three rules from Wikipedia. The assumptions psi1 and psi2 describe the specific fixpoint used in the proof, because adding general fixpoint machinery would make the code much more complicated. The types P and Psi, of course, correspond to sentences P and Ψ, and "premise" is the premise of the whole theorem, that is, ⊢(☐P→P). The conclusion ⊢P can be seen in the type of step14.

As for the "squished" version, I guess I wrote it just to satisfy my refactoring urge. I don't recommend trying to read it, except maybe to marvel at the complexity :-)

EDIT: in addition to the previous Reddit thread, there's now a new Reddit thread about this post.

Changes to my workflow

28 paulfchristiano 26 August 2014 05:29PM

About 18 months ago I made a post here on my workflow. I've received a handful of requests for follow-up, so I thought I would make another post detailing changes since then. I expect this post to be less useful than the last one.

For the most part, the overall outline has remained pretty stable and feels very similar to 18 months ago. Things not mentioned below have mostly stayed the same. I believe the total effect of continued changes has been further but much smaller improvements, though it is hard to tell (as opposed to the last round of changes, which were more clearly improvements).

Based on comparing time logging records I seem to now do substantially more work on average, but there are many other changes during this period that could explain the change (including changes in time logging). Changes other than work output are much harder to measure; I feel like they are positive but I wouldn't be surprised if this were an illusion.

Splitting days:

I now regularly divide my day into two halves, and treat the two halves as separate units. I plan each separately and reflect on each separately. I divide them with an hour-long period of reflecting on the morning, relaxing for 5-10 minutes, napping for 25-30 minutes, processing my emails, and planning the evening. I find that this generally makes me more productive and happier about the day. Splitting my days is often difficult due to engagements in the middle of the day, and I don't have a good solution to that.

WasteNoTime:

I have longstanding objections to explicitly rationing internet use (since it seems either indicative of a broader problem that should be resolved directly, or else to serve a useful function that would be unwise to remove). That said, I now use the extension WasteNoTime to limit my consumption of blogs, webcomics, facebook, news sites, browser games, etc., to 10 minutes each half-day. This has cut the amount of time I spend browsing the internet from an average of 30-40 minutes to an average of 10-15 minutes. It doesn't seem to have been replaced by lower-quality leisure, but by a combination of work and higher-quality leisure.

Similarly, I turned off the newsfeed in facebook, which I found to improve the quality of my internet time in general (the primary issue was that I would sometimes be distracted by the newsfeed while sending messages over facebook, which wasn't my favorite way to use up WasteNoTime minutes).

I also tried StayFocusd, but ended up adopting WasteNoTime because of the ability to set limits per half-day (via "At work" and "not at work" timers) rather than per-day. I find that the main upside is cutting off the tail of derping (e.g. getting sucked into a blog comment thread, or looking into a particularly engrossing issue), and for this purpose per half-day timers are much more effective.

Email discipline:

I set gmail to archive all emails on arrival and assign them the special label "In." This lets me search for emails and compose emails, using the normal gmail interface, without being notified of new arrivals. I process the items with label "In" (typically turning emails into todo items to be processed by the same system that deals with other todo items) at the beginning of each half day. Each night I scan my email quickly for items that require urgent attention.

Todo lists / reminders:

I continue to use todo lists for each half day and for a range of special conditions. I now check these lists at the beginning of each half day rather than before going to bed.

I also maintain a third list of "reminders." These are things that I want to be reminded of periodically, organized by day; each morning I look at the day's reminders and think about them briefly. Each of them is copied and filed under a future day. If I feel like I remember a thing well, I file it far in the future; if I feel like I don't remember it well, I file it in the near future.

Over the last month most of these reminders have migrated to be in the form "If X, then Y," e.g. "If I agree to do something for someone, then pause, say `actually I should think about it for a few minutes to make sure I have time,' and set a 5 minute timer that night to think about it more clearly." These are designed to fix problems that I notice when reflecting on the day. This is a recommendation from CFAR folks, which seems to be working well, though is the newest part of the system and least tested.

Isolating "todos":

I now attempt to isolate things that probably need doing, but don't seem maximally important; I aim to do them only on every 5th day, and only during one half-day. If I can't finish them in this time, I will typically delay them 5 days. When they spill over to other days, I try to at least keep them to one half-day or the other. I don't know if this helps, but it feels better to have isolated unproductive-feeling blocks of time rather than scattering them throughout the week.

I don't do this very rigidly. I expect the overall level of discipline I have about it is comparable to or lower than a normal office worker who has a clearer division between their personal time and work time.

Toggl:

I now use Toggl for detailed time tracking. Katja Grace and I experimented with about half a dozen other systems (Harvest, Yast, Klok, Freckle, Lumina, and I expect others I'm forgetting) before settling on Toggl. It has a depressing number of flaws, but ends up winning for me by making it very fast to start and switch timers, which is probably the most important criterion for me. It also offers summary views that fit well with what I want to look at.

I find the main value adds from detailed time tracking are:

1. Knowing how long I've spent on projects, especially long-term projects. My intuitive estimates are often off by more than a factor of 2, even for things taking 80 hours; this can lead me to significantly underestimate the costs of taking on some kinds of projects, and it can also lead me to think an activity is unproductive instead of productive by overestimating how long I've actually spent on it.

2. Accurate breakdowns of time in a day, which guide efforts at improving my day-to-day routine. They probably also make me feel more motivated about working, and improve focus during work.

Reflection / improvement:

Reflection is now a smaller fraction of my time, down from 10% to 3-5%, based on diminishing returns to finding stuff to improve. Another 3-5% is now redirected into longer-term projects to improve particular aspects of my life (I maintain a list of possible improvements, roughly sorted by goodness). Examples: buying new furniture, improvements to my diet (Holden's powersmoothie is great), improvements to my sleep (low doses of melatonin seem good). At the moment the list of possible improvements is long enough that adding to the list is less valuable than doing things on the list.

I have vacillated a lot about how much of my time should go into this sort of thing. My best guess is the number should be higher.

-Pomodoros:

I don't use pomodoros at all any more. I still have periods of uninterrupted work, often of comparable length, for individual tasks. This change wasn't extremely carefully considered; it mostly just happened. I find that explicit time logging (such that I must consciously change the timer before changing tasks) seems to work as a substitute in many cases. I also maintain the habit of writing down candidate distractions and then attending to them later (if at all).

For larger tasks I find that I often prefer longer blocks of unrestricted working time. I continue to use Alinof timer to manage these blocks of uninterrupted work.

-Catch:

Catch disappeared, and I haven't found a replacement that I find comparably useful. (It's also not that high on the list of priorities.) I now just send emails to myself, but I do it much less often.

-Beeminder:

I no longer use beeminder. This again wasn't super-considered, though it was based on a very rough impression of overhead being larger than the short-term gains. I think beeminder was helpful for setting up a number of habits which have persisted (especially with respect to daily routine and regular focused work), and my long-term averages continue to satisfy my old beeminder goals.

Project outlines:

I now organize notes about each project I am working on in a more standardized way, with "Queue of todos," "Current workspace," and "Data" as the three subsections. I'm not thrilled by this system, but it seems to be an improvement over the previous informal arrangement. In particular, having a workspace into which I can easily write thoughts without thinking about where they fit, and only later sorting them into the data section once it's clearer how they fit in, decreases the activation energy of using the system. I now use Toggl rather than maintaining time logs by hand.

Randomized trials:

As described in my last post I tried various randomized trials (esp. of effects of exercise, stimulant use, and sleep on mood, cognitive performance, and productive time). I have found extracting meaningful data from these trials to be extremely difficult, due to straightforward issues with signal vs. noise. There are a number of tests which I still do expect to yield meaningful data, but I've increased my estimates for the expensiveness of useful tests substantially, and they've tended to fall down the priority list. For some things I've just decided to do them without the data, since my best guess is positive in expectation and the data is too expensive to acquire.

 

[link] Why Psychologists' Food Fight Matters

28 Pablo_Stafforini 01 August 2014 07:52AM

Why Psychologists’ Food Fight Matters: “Important findings” haven’t been replicated, and science may have to change its ways. By Michelle N. Meyer and Christopher Chabris. Slate, July 31, 2014. [Via Steven Pinker's Twitter account, who adds: "Lesson for sci journalists: Stop reporting single studies, no matter how sexy (these are probably false). Report lit reviews, meta-analyses."]  Some excerpts:

Psychologists are up in arms over, of all things, the editorial process that led to the recent publication of a special issue of the journal Social Psychology. This may seem like a classic case of ivory tower navel gazing, but its impact extends far beyond academia. The issue attempts to replicate 27 “important findings in social psychology.” Replication—repeating an experiment as closely as possible to see whether you get the same results—is a cornerstone of the scientific method. Replication of experiments is vital not only because it can detect the rare cases of outright fraud, but also because it guards against uncritical acceptance of findings that were actually inadvertent false positives, helps researchers refine experimental techniques, and affirms the existence of new facts that scientific theories must be able to explain.

One of the articles in the special issue reported a failure to replicate a widely publicized 2008 study by Simone Schnall, now tenured at Cambridge University, and her colleagues. In the original study, two experiments measured the effects of people’s thoughts or feelings of cleanliness on the harshness of their moral judgments. In the first experiment, 40 undergraduates were asked to unscramble sentences, with one-half assigned words related to cleanliness (like pure or pristine) and one-half assigned neutral words. In the second experiment, 43 undergraduates watched the truly revolting bathroom scene from the movie Trainspotting, after which one-half were told to wash their hands while the other one-half were not. All subjects in both experiments were then asked to rate the moral wrongness of six hypothetical scenarios, such as falsifying one’s résumé and keeping money from a lost wallet. The researchers found that priming subjects to think about cleanliness had a “substantial” effect on moral judgment: The hand washers and those who unscrambled sentences related to cleanliness judged the scenarios to be less morally wrong than did the other subjects. The implication was that people who feel relatively pure themselves are—without realizing it—less troubled by others’ impurities. The paper was covered by ABC News, the Economist, and the Huffington Post, among other outlets, and has been cited nearly 200 times in the scientific literature.

However, the replicators—David Johnson, Felix Cheung, and Brent Donnellan (two graduate students and their adviser) of Michigan State University—found no such difference, despite testing about four times more subjects than the original studies. [...]

The editor in chief of Social Psychology later agreed to devote a follow-up print issue to responses by the original authors and rejoinders by the replicators, but as Schnall told Science, the entire process made her feel “like a criminal suspect who has no right to a defense and there is no way to win.” The Science article covering the special issue was titled “Replication Effort Provokes Praise—and ‘Bullying’ Charges.” Both there and in her blog post, Schnall said that her work had been “defamed,” endangering both her reputation and her ability to win grants. She feared that by the time her formal response was published, the conversation might have moved on, and her comments would get little attention.

How wrong she was. In countless tweets, Facebook comments, and blog posts, several social psychologists seized upon Schnall’s blog post as a cri de coeur against the rising influence of “replication bullies,” “false positive police,” and “data detectives.” For “speaking truth to power,” Schnall was compared to Rosa Parks. The “replication police” were described as “shameless little bullies,” “self-righteous, self-appointed sheriffs” engaged in a process “clearly not designed to find truth,” “second stringers” who were incapable of making novel contributions of their own to the literature, and—most succinctly—“assholes.” Meanwhile, other commenters stated or strongly implied that Schnall and other original authors whose work fails to replicate had used questionable research practices to achieve sexy, publishable findings. At one point, these insinuations were met with threats of legal action. [...]

Unfortunately, published replications have been distressingly rare in psychology. A 2012 survey of the top 100 psychology journals found that barely 1 percent of papers published since 1900 were purely attempts to reproduce previous findings. Some of the most prestigious journals have maintained explicit policies against replication efforts; for example, the Journal of Personality and Social Psychology published a paper purporting to support the existence of ESP-like “precognition,” but would not publish papers that failed to replicate that (or any other) discovery. Science publishes “technical comments” on its own articles, but only if they are submitted within three months of the original publication, which leaves little time to conduct and document a replication attempt.

The “replication crisis” is not at all unique to social psychology, to psychological science, or even to the social sciences. As Stanford epidemiologist John Ioannidis famously argued almost a decade ago, “Most research findings are false for most research designs and for most fields.” Failures to replicate and other major flaws in published research have since been noted throughout science, including in cancer research, research into the genetics of complex diseases like obesity and heart disease, stem cell research, and studies of the origins of the universe. Earlier this year, the National Institutes of Health stated “The complex system for ensuring the reproducibility of biomedical research is failing and is in need of restructuring.”

Given the stakes involved and its centrality to the scientific method, it may seem perplexing that replication is the exception rather than the rule. The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view “positive” findings that announce a novel relationship or support a theoretical claim as more interesting than “negative” findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate. Since journal publications are valuable academic currency, researchers—especially those early in their careers—have strong incentives to conduct original work rather than to replicate the findings of others. Replication efforts that do happen but fail to find the expected effect are usually filed away rather than published. That makes the scientific record look more robust and complete than it is—a phenomenon known as the “file drawer problem.”

The emphasis on positive findings may also partly explain the fact that when original studies are subjected to replication, so many turn out to be false positives. The near-universal preference for counterintuitive, positive findings gives researchers an incentive to manipulate their methods or poke around in their data until a positive finding crops up, a common practice known as “p-hacking” because it can result in p-values, or measures of statistical significance, that make the results look stronger, and therefore more believable, than they really are. [...]

The recent special issue of Social Psychology was an unprecedented collective effort by social psychologists to [rectify this situation]—by altering researchers’ and journal editors’ incentives in order to check the robustness of some of the most talked-about findings in their own field. Any researcher who wanted to conduct a replication was invited to preregister: Before collecting any data from subjects, they would submit a proposal detailing precisely how they would repeat the original study and how they would analyze the data. Proposals would be reviewed by other researchers, including the authors of the original studies, and once approved, the study’s results would be published no matter what. Preregistration of the study and analysis procedures should deter p-hacking, guaranteed publication should counteract the file drawer effect, and a requirement of large sample sizes should make it easier to detect small but statistically meaningful effects.

The results were sobering. At least 10 of the 27 “important findings” in social psychology were not replicated at all. In the social priming area, only one of seven replications succeeded. [...]

One way to keep things in perspective is to remember that scientific truth is created by the accretion of results over time, not by the splash of a single study. A single failure-to-replicate doesn’t necessarily invalidate a previously reported effect, much less imply fraud on the part of the original researcher—or the replicator. Researchers are most likely to fail to reproduce an effect for mundane reasons, such as insufficiently large sample sizes, innocent errors in procedure or data analysis, and subtle factors about the experimental setting or the subjects tested that alter the effect in question in ways not previously realized.

Caution about single studies should go both ways, though. Too often, a single original study is treated—by the media and even by many in the scientific community—as if it definitively establishes an effect. Publications like Harvard Business Review and idea conferences like TED, both major sources of “thought leadership” for managers and policymakers all over the world, emit a steady stream of these “stats and curiosities.” Presumably, the HBR editors and TED organizers believe this information to be true and actionable. But most novel results should be initially regarded with some skepticism, because they too may have resulted from unreported or unnoticed methodological quirks or errors. Everyone involved should focus their attention on developing a shared evidence base that consists of robust empirical regularities—findings that replicate not just once but routinely—rather than of clever one-off curiosities. [...]

Scholars, especially scientists, are supposed to be skeptical about received wisdom, develop their views based solely on evidence, and remain open to updating those views in light of changing evidence. But as psychologists know better than anyone, scientists are hardly free of human motives that can influence their work, consciously or unconsciously. It’s easy for scholars to become professionally or even personally invested in a hypothesis or conclusion. These biases are addressed partly through the peer review process, and partly through the marketplace of ideas—by letting researchers go where their interest or skepticism takes them, encouraging their methods, data, and results to be made as transparent as possible, and promoting discussion of differing views. The clashes between researchers of different theoretical persuasions that result from these exchanges should of course remain civil; but the exchanges themselves are a perfectly healthy part of the scientific enterprise.

This is part of the reason why we cannot agree with a more recent proposal by Kahneman, who had previously urged social priming researchers to put their house in order. He contributed an essay to the special issue of Social Psychology in which he proposed a rule—to be enforced by reviewers of replication proposals and manuscripts—that authors “be guaranteed a significant role in replications of their work.” Kahneman proposed a specific process by which replicators should consult with original authors, and told Science that in the special issue, “the consultations did not reach the level of author involvement that I recommend.”

Collaboration between opposing sides would probably avoid some ruffled feathers, and in some cases it could be productive in resolving disputes. With respect to the current controversy, given the potential impact of an entire journal issue on the robustness of “important findings,” and the clear desirability of buy-in by a large portion of psychology researchers, it would have been better for everyone if the original authors’ comments had been published alongside the replication papers, rather than left to appear afterward. But consultation or collaboration is not something replicators owe to original researchers, and a rule to require it would not be particularly good science policy.

Replicators have no obligation to routinely involve original authors because those authors are not the owners of their methods or results. By publishing their results, original authors state that they have sufficient confidence in them that they should be included in the scientific record. That record belongs to everyone. Anyone should be free to run any experiment, regardless of who ran it first, and to publish the results, whatever they are. [...]

some critics of replication drives have been too quick to suggest that replicators lack the subtle expertise to reproduce the original experiments. One prominent social psychologist has even argued that tacit methodological skill is such a large factor in getting experiments to work that failed replications have no value at all (since one can never know if the replicators really knew what they were doing, or knew all the tricks of the trade that the original researchers did), a surprising claim that drew sarcastic responses. [See LW discussion.] [...]

Psychology has long been a punching bag for critics of “soft science,” but the field is actually leading the way in tackling a problem that is endemic throughout science. The replication issue of Social Psychology is just one example. The Association for Psychological Science is pushing for better reporting standards and more study of research practices, and at its annual meeting in May in San Francisco, several sessions on replication were filled to overflowing. International collaborations of psychologists working on replications, such as the Reproducibility Project and the Many Labs Replication Project (which was responsible for 13 of the 27 replications published in the special issue of Social Psychology) are springing up.

Even the most tradition-bound journals are starting to change. The Journal of Personality and Social Psychology—the same journal that, in 2011, refused to even consider replication studies—recently announced that although replications are “not a central part of its mission,” it’s reversing this policy. We wish that JPSP would see replications as part of its central mission and not relegate them, as it has, to an online-only ghetto, but this is a remarkably nimble change for a 50-year-old publication. Other top journals, most notably Perspectives on Psychological Science, are devoting space to systematic replications and other confirmatory research. The leading journal in behavior genetics, a field that has been plagued by unreplicable claims that particular genes are associated with particular behaviors, has gone even further: It now refuses to publish original findings that do not include evidence of replication.

A final salutary change is an overdue shift of emphasis among psychologists toward establishing the size of effects, as opposed to disputing whether or not they exist. The very notion of “failure” and “success” in empirical research is urgently in need of refinement. When applied thoughtfully, this dichotomy can be useful shorthand (and we’ve used it here). But there are degrees of replication between success and failure, and these degrees matter.

For example, suppose an initial study of an experimental drug for cardiovascular disease suggests that it reduces the risk of heart attack by 50 percent compared to a placebo pill. The most meaningful question for follow-up studies is not the binary one of whether the drug’s effect is 50 percent or not (did the first study replicate?), but the continuous one of precisely how much the drug reduces heart attack risk. In larger subsequent studies, this number will almost inevitably drop below 50 percent, but if it remains above 0 percent for study after study, then the best message should be that the drug is in fact effective, not that the initial results “failed to replicate.”

What false beliefs have you held and why were you wrong?

27 Punoxysm 16 October 2014 05:58PM

What is something you used to believe, preferably something concrete with direct or implied predictions, that you now know was dead wrong? Was your belief rational given what you knew and could know back then, or was it irrational, and why?

 

Edit: I feel like some of these are getting a bit glib and political. Please try to explain what false assumptions or biases were underlying your beliefs - be introspective - this is LW after all.

Causal Inference Sequence Part 1: Basic Terminology and the Assumptions of Causal Inference

27 Anders_H 30 July 2014 08:56PM

(Part 1 of the Sequence on Applied Causal Inference)

 

In this sequence, I am going to present a theory on how we can learn about causal effects using observational data.  As an example, we will imagine that you have collected information on a large number of Swedes - let us call them Sven, Olof, Göran, Gustaf, Annica, Lill-Babs, Elsa and Astrid. For every Swede, you have recorded data on their gender, whether they smoked or not, and on whether they got cancer during the 10 years of follow-up.   Your goal is to use this dataset to figure out whether smoking causes cancer.

We are going to use the letter A as a random variable to represent whether they smoked. A can take the value 0 (did not smoke) or 1 (smoked).  When we need to talk about the specific values that A can take, we sometimes use lower case a as a placeholder for 0 or 1.    We use the letter Y as a random variable that represents whether they got cancer, and L to represent their gender. 

The data-generating mechanism and the joint distribution of variables

Imagine you are looking at this data set:

ID         L         A                   Y
(Name)     (Sex)     (Did they smoke?)   (Did they get cancer?)

Sven       Male      Yes                 Yes
Olof       Male      No                  Yes
Göran      Male      Yes                 Yes
Gustaf     Male      No                  No
Annica     Female    Yes                 Yes
Lill-Babs  Female    Yes                 No
Elsa       Female    Yes                 No
Astrid     Female    No                  No

This table records information about the joint distribution of the variables L, A and Y.  By looking at it, you can tell that 1/4 of the Swedes were men who smoked and got cancer, 1/8 were men who did not smoke and got cancer, 1/8 were men who did not smoke and did not get cancer etc.  

You can make all sorts of statistics that summarize aspects of the joint distribution.  One such statistic is the correlation between two variables.  If "sex" is correlated with "smoking", it means that if you know somebody's sex, this gives you information that makes it easier to predict whether they smoke.   If knowing about an individual's sex gives no information about whether they smoked, we say that sex and smoking are independent.  We use the symbol ∐ to mean independence. 
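As a concrete illustration (not part of the original post), here is a minimal Python sketch that computes joint probabilities and runs the independence check on the eight Swedes above; the helper names are my own:

```python
# The eight Swedes from the table above, as (L, A, Y) tuples:
# L = sex, A = 1 if they smoked, Y = 1 if they got cancer.
data = [
    ("Male",   1, 1),  # Sven
    ("Male",   0, 1),  # Olof
    ("Male",   1, 1),  # Göran
    ("Male",   0, 0),  # Gustaf
    ("Female", 1, 1),  # Annica
    ("Female", 1, 0),  # Lill-Babs
    ("Female", 1, 0),  # Elsa
    ("Female", 0, 0),  # Astrid
]

def p(pred):
    """Fraction of the sample satisfying a predicate on (L, A, Y)."""
    return sum(1 for row in data if pred(row)) / len(data)

# Joint probabilities, e.g. 1/4 of the Swedes were men who smoked and got cancer:
print(p(lambda r: r == ("Male", 1, 1)))  # 0.25

# Independence check: L ∐ A would require P(A=1 | L=Male) == P(A=1).
p_smoke = p(lambda r: r[1] == 1)                               # 5/8 = 0.625
p_smoke_given_male = (p(lambda r: r[0] == "Male" and r[1] == 1)
                      / p(lambda r: r[0] == "Male"))           # 2/4 = 0.5
print(p_smoke, p_smoke_given_male)  # 0.625 0.5
```

Since P(A=1 | L=Male) differs from P(A=1), knowing a Swede's sex does give information about whether they smoked, so sex and smoking are not independent in this dataset.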

When we are interested in causal effects, we are asking what would happen to the joint distribution if we intervened to change the value of a variable.  For example, how many Swedes would get cancer in a hypothetical world where you intervened to make sure they all quit smoking?  

In order to answer this, we have to ask questions about the data generating mechanism. The data generating mechanism is the algorithm that assigns value to the variables, and therefore creates the joint distribution. We will think of the data as being generated by three different algorithms: One for L, one for A and one for Y.    Each of these algorithms takes the previously assigned variables as input, and then outputs a value.    

Questions about the data generating mechanism include “Which variable has its value assigned first?”, “Which variables from the past (observed or unobserved) are used as inputs?” and “If I change whether someone smokes, how will that change propagate to other variables that have their value assigned later?”.  The last of these questions can be rephrased as “What is the causal effect of smoking?”.
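One way to picture this setup is as three ordinary functions run in order; the sketch below is purely illustrative, with made-up probabilities:

```python
import random

# One hypothetical data generating mechanism: three algorithms that assign
# L, A and Y in order, each taking the previously assigned variables as input.
# All probabilities here are invented for illustration.
def generate_swede(rng):
    L = int(rng.random() < 0.5)                   # sex assigned first (1 = female)
    A = int(rng.random() < (0.7 if L else 0.4))   # smoking depends on L
    Y = int(rng.random() < (0.6 if A else 0.2))   # cancer depends on A
    return L, A, Y

rng = random.Random(0)
sample = [generate_swede(rng) for _ in range(8)]
print(sample)  # eight simulated Swedes as (L, A, Y) triples
```

Intervening on smoking would mean replacing the second line of `generate_swede` with a constant (A = 1 or A = 0) while leaving the other two algorithms untouched; how Y's distribution changes under that replacement is the causal effect.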

The basic problem of causal inference is that the relationship between the set of possible data generating mechanisms, and the joint distribution of variables, is many-to-one:   For any correlation you observe in the dataset, there are many possible sets of algorithms for L, A and Y that could all account for the observed patterns. For example, if you are looking at a correlation between cancer and smoking, you can tell a story about cancer causing people to take up smoking, or a story about smoking causing people to get cancer, or a story about smoking and cancer sharing a common cause.  

An important thing to note is that even if you have data on absolutely everyone, you still would not be able to distinguish between the possible data generating mechanisms. The problem is not that you have a limited sample. This is therefore not a statistical problem.  What you need to answer the question is not more people in your study, but a priori causal information.  The purpose of this sequence is to show you how to reason about what prior causal information is necessary, and how to analyze the data if you have measured all the necessary variables.
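A small sketch of the many-to-one problem: the two hypothetical mechanisms below (with made-up numbers) produce exactly the same joint distribution of A and Y, yet only in the first does smoking cause cancer.

```python
# Two different data generating mechanisms, same joint distribution of (A, Y).

def joint_mech1():
    """Mechanism 1: smoking causes cancer.
    A ~ Bernoulli(0.5); P(Y=1 | A=1) = 0.75, P(Y=1 | A=0) = 0.25."""
    dist = {}
    for a in (0, 1):
        p_y1 = 0.75 if a == 1 else 0.25
        dist[(a, 1)] = 0.5 * p_y1
        dist[(a, 0)] = 0.5 * (1 - p_y1)
    return dist

def joint_mech2():
    """Mechanism 2: an unobserved common cause U determines smoking, and
    cancer depends only on U; intervening on A would change nothing.
    U ~ Bernoulli(0.5); A = U; P(Y=1 | U=1) = 0.75, P(Y=1 | U=0) = 0.25."""
    dist = {(a, y): 0.0 for a in (0, 1) for y in (0, 1)}
    for u in (0, 1):
        p_y1 = 0.75 if u == 1 else 0.25
        dist[(u, 1)] += 0.5 * p_y1        # A = U
        dist[(u, 0)] += 0.5 * (1 - p_y1)
    return dist

print(joint_mech1() == joint_mech2())  # True: the data cannot tell them apart

# Yet the causal effects differ: under mechanism 1, P(Y=1 | do(A=1)) = 0.75,
# while under mechanism 2 it is 0.5*0.75 + 0.5*0.25 = 0.5, i.e. no effect.
```

No amount of additional data on (A, Y) can distinguish the two functions, which is exactly why a priori causal information is needed.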

Counterfactual Variables and "God's Table":

The first step of causal inference is to translate the English language research question “What is the causal effect of smoking?” into a precise, mathematical language.  One possible such language is based on counterfactual variables.  These counterfactual variables allow us to encode the concept of “what would have happened if, possibly contrary to fact, the person smoked”.

We define one counterfactual variable called Ya=1 which represents the outcome in the person if he smoked, and another counterfactual variable called Ya=0 which represents the outcome if he did not smoke. Counterfactual variables such as Ya=0 are mathematical objects that represent part of the data generating mechanism:  The variable tells us what value the mechanism would assign to Y, if we intervened to make sure the person did not smoke. These variables are columns in an imagined dataset that we sometimes call “God’s Table”:

 

ID       A          Y         Ya=1                      Ya=0
         (Smoking)  (Cancer)  (Cancer if they smoked)   (Cancer if they didn't smoke)

Sven     1          1         1                         1
Olof     0          1         0                         1
Göran    1          1         1                         0
Gustaf   0          0         0                         0

Let us start by making some points about this dataset.  First, note that the counterfactual variables are variables just like any other column in the spreadsheet.   Therefore, we can use the same type of logic that we use for any other variables.  Second, note that in our framework, counterfactual variables are pre-treatment variables:  They are determined long before treatment is assigned. The effect of treatment is simply to determine whether we see Ya=0 or Ya=1 in this individual.

If you had access to God's Table, you would immediately be able to look up the average causal effect, by comparing the column Ya=1 to the column Ya=0.  However, the most important point about God’s Table is that we cannot observe Ya=1 and Ya=0. We only observe the joint distribution of observed variables, which we can call the “Observed Table”:

 

ID       A   Y

Sven     1   1
Olof     0   1
Göran    1   1
Gustaf   0   0

The goal of causal inference is to learn about God’s Table using information from the observed table (in combination with a priori causal knowledge).  In particular, we are going to be interested in learning about the distributions of Ya=1 and Ya=0, and in how they relate to each other.  
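To make the column comparison concrete, here is a short sketch (my own illustration) using the four rows of God's Table above, contrasting the average causal effect with the naive observed association:

```python
# God's Table for the four Swedes above: (name, A, Y, Y_a1, Y_a0)
gods_table = [
    ("Sven",   1, 1, 1, 1),
    ("Olof",   0, 1, 0, 1),
    ("Göran",  1, 1, 1, 0),
    ("Gustaf", 0, 0, 0, 0),
]
n = len(gods_table)

# With access to the counterfactual columns, the average causal effect
# is a simple column comparison: E[Y_a1] - E[Y_a0].
ate = (sum(r[3] for r in gods_table) - sum(r[4] for r in gods_table)) / n
print(ate)  # 0.0: smoking has no average causal effect in this population

# The observed association, computed only from the A and Y columns,
# tells a different story:
p_y_given_a1 = (sum(r[2] for r in gods_table if r[1] == 1)
                / sum(1 for r in gods_table if r[1] == 1))   # 2/2 = 1.0
p_y_given_a0 = (sum(r[2] for r in gods_table if r[1] == 0)
                / sum(1 for r in gods_table if r[1] == 0))   # 1/2 = 0.5
print(p_y_given_a1 - p_y_given_a0)  # 0.5: association without causation

# Consistency: the Y we actually observe is the counterfactual selected by A.
assert all(y == (ya1 if a == 1 else ya0) for _, a, y, ya1, ya0 in gods_table)
```

In this made-up population the smokers and non-smokers differ in outcome (association of 0.5) even though the causal effect is exactly zero, which is precisely what makes the counterfactual columns worth chasing.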

 

Randomized Trials

The “Gold Standard” for estimating the causal effect is to run a randomized controlled trial, where we randomly assign the value of A. This study design works because you select one random subset of the study population where you observe Ya=0, and another random subset where you observe Ya=1. You therefore have unbiased information about the distribution of both Ya=0 and of Ya=1.

An important thing to point out at this stage is that it is not necessary to use an unbiased coin to assign treatment, as long as you use the same coin for everyone. For instance, the probability of being randomized to A=1 can be 2/3. You will still see randomly selected subsets of the distribution of both Ya=0 and Ya=1; you will just have a larger number of people where you see Ya=1. Usually, randomized trials use unbiased coins, but this is simply because it increases the statistical power.

Also note that it is possible to run two different randomized controlled trials:  One in men, and another in women.  The first trial will give you an unbiased estimate of the effect in men, and the second trial will give you an unbiased estimate of the effect in women.  If both trials used the same coin, you could think of them as really being one trial. However, if the two trials used different coins, and you pooled them into the same database, your analysis would have to account for the fact that in reality, there were two trials. If you don’t account for this, the results will be biased.  This is called “confounding”. As long as you account for the fact that there really were two trials, you can still recover an estimate of the population average causal effect. This is called “Controlling for Confounding”.

In general, causal inference works by specifying a model that says the data came from a complex trial, ie, one where nature assigned a biased coin depending on the observed past.  For such a trial, there will exist a valid way to recover the overall causal results, but it will require us to think carefully about what the correct analysis is. 
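As an illustration, here is a small simulation sketch of a marginally randomized trial that uses a biased coin with P(A=1) = 2/3. The counterfactual risks (0.30 under treatment, 0.10 under no treatment) are made-up numbers; the point is that using the same coin for everyone still recovers the true effect:

```python
import random

random.seed(0)
n = 100_000

# Hypothetical data generating mechanism: each person carries both
# counterfactual outcomes before treatment is assigned.
y1 = [1 if random.random() < 0.30 else 0 for _ in range(n)]  # P(Ya=1 equals 1) = 0.30
y0 = [1 if random.random() < 0.10 else 0 for _ in range(n)]  # P(Ya=0 equals 1) = 0.10

# Randomize with a biased coin: P(A=1) = 2/3. Because the same coin is used
# for everyone, treatment is independent of (Ya=1, Ya=0).
a = [1 if random.random() < 2 / 3 else 0 for _ in range(n)]
y = [y1[i] if a[i] == 1 else y0[i] for i in range(n)]  # consistency

mean = lambda xs: sum(xs) / len(xs)
est_1 = mean([y[i] for i in range(n) if a[i] == 1])
est_0 = mean([y[i] for i in range(n) if a[i] == 0])
print(est_1 - est_0)  # close to the true effect, 0.30 - 0.10 = 0.20
```

The treated arm is about twice as large as the untreated arm, but both arms are still random samples of the relevant counterfactual distributions.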

Assumptions of Causal Inference

We will now go through, in some more detail, why randomized trials work, ie, the important aspects of this study design that allow us to infer causal relationships, or facts about God’s Table, using information about the joint distribution of observed variables.

We will start with an “observed table” and build towards “reconstructing” parts of God’s Table.  To do this, we will need three assumptions: These are positivity, consistency and (conditional) exchangeability:

| ID     | A | Y |
|--------|---|---|
| Sven   | 1 | 1 |
| Olof   | 0 | 1 |
| Göran  | 1 | 1 |
| Gustaf | 0 | 0 |

Positivity

Positivity is the assumption that any individual has a positive probability of receiving every value of the treatment variable: Pr(A=a) > 0 for all values of a (and, when we condition on covariates L, Pr(A=a | L=l) > 0 in every stratum l). In other words, you need to have both people who smoke and people who don't smoke. If positivity does not hold, you will not have any information about the distribution of Ya for that value of a, and will therefore not be able to make inferences about it.

We can check whether this assumption holds in the sample by checking whether there are people who are treated and people who are untreated. If you observe both treated and untreated individuals in every stratum, you know that positivity holds in the sample.

If we observe a stratum where no individuals are treated (or no individuals are untreated), this can be either for statistical reasons (you randomly did not sample them) or for structural reasons (individuals with these covariates are deterministically never treated). As we will see later, our models can handle random violations, but not structural violations.

In a randomized controlled trial, positivity holds because you will use a coin that has a positive probability of assigning people to either arm of the trial.
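A minimal sketch of the in-sample positivity check described above, using hypothetical strata. Note that this check can only flag an empty cell; it cannot tell you whether the violation is random or structural:

```python
from collections import defaultdict

# Hypothetical records of (stratum L, treatment A); the strata are illustrative.
records = [("men", 1), ("men", 0), ("women", 1), ("women", 1)]

seen = defaultdict(set)
for l, a in records:
    seen[l].add(a)

# Positivity holds in the sample only if every stratum contains both A=0 and A=1.
violations = [l for l, values in seen.items() if values != {0, 1}]
print(violations)  # ['women'] -- no untreated women were observed
```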

Consistency

The next assumption we are going to make is that if an individual happens to have treatment (A=1), we will observe the counterfactual variable Ya=1 in this individual. This is the observed table after we make the consistency assumption:

| ID     | A | Y | Ya=1 | Ya=0 |
|--------|---|---|------|------|
| Sven   | 1 | 1 | 1    | *    |
| Olof   | 0 | 1 | *    | 1    |
| Göran  | 1 | 1 | 1    | *    |
| Gustaf | 0 | 0 | *    | 0    |

Making the consistency assumption got us halfway to our goal. We now have a lot of information about Ya=1 and Ya=0. However, half of the data is still missing.

Although consistency seems obvious, it is an assumption, not something that is true by definition.  We can expect the consistency assumption to hold if we have a well-defined intervention (ie, the intervention is a well-defined choice, not an attribute of the individual), and there is no causal interference (one individual’s outcome is not affected by whether another individual was treated).

Consistency may not hold if you have an intervention that is not well-defined: For example, there may be multiple types of cigarettes. When you measure Ya=1 in people who smoked, it will actually be a composite of multiple counterfactual variables: One for people who smoked regular cigarettes (let us call that Ya=1*) and another for people who smoked e-cigarettes (let us call that Ya=1#). Since you failed to specify whether you are interested in the effect of regular cigarettes or e-cigarettes, the construct Ya=1 is a composite without any meaning, and people will be unable to use your results to predict the consequences of their actions.

Exchangeability

To complete the table, we require an additional assumption on the nature of the data. We call this assumption “Exchangeability”.  One possible exchangeability assumption is “Ya=0 ∐ A and Ya=1 ∐ A”.   This is the assumption that says “The data came from a randomized controlled trial”. If this assumption is true, you will observe a random subset of the distribution of Ya=0 in the group where A=0, and a random subset of the distribution of Ya=1 in the group where A=1.

Exchangeability is a statement about two variables being independent of each other. This means that having information about either one of the variables will not help you predict the value of the other. Sometimes, variables which are not independent are "conditionally independent". For example, it is possible that knowing somebody's race helps you predict whether they enjoy eating Hakarl, an Icelandic form of rotting fish. However, it is also possible that this is just a marker for whether they were born in the ethnically homogeneous Iceland. In such a situation, it is possible that once you already know whether somebody is from Iceland, also knowing their race gives you no additional clues as to whether they will enjoy Hakarl. In this case, the variables "race" and "enjoying Hakarl" are conditionally independent, given nationality.

The reason we care about conditional independence is that sometimes you may be unwilling to assume that marginal exchangeability Ya=1 ∐ A holds, but you are willing to assume conditional exchangeability Ya=1 ∐ A  | L.  In this example, let L be sex.  The assumption then says that you can interpret the data as if it came from two different randomized controlled trials: One in men, and one in women. If that is the case, sex is a "confounder". (We will give a definition of confounding in Part 2 of this sequence. )

If the data came from two different randomized controlled trials, one possible approach is to analyze these trials separately. This is called “stratification”.  Stratification gives you effect measures that are conditional on the confounders:  You get one measure of the effect in men, and another in women.  Unfortunately, in more complicated settings, stratification-based methods (including regression) are always biased. In those situations, it is necessary to focus the inference on the marginal distribution of Ya.

Identification

If marginal exchangeability holds (ie, if the data came from a marginally randomized trial), making inferences about the marginal distribution of Ya is easy: You can just estimate E[Ya] as E[Y|A=a].

However, if the data came from a conditionally randomized trial, we will need to think a little bit harder about how to say anything meaningful about E[Ya]. This process is the central idea of causal inference. We call it “identification”:  The idea is to write an expression for the distribution of a counterfactual variable, purely in terms of observed variables.  If we are able to do this, we have sufficient information to estimate causal effects just by looking at the relevant parts of the joint distribution of observed variables.

The simplest example of identification is standardization.  As an example, we will show a simple proof:

Begin by using the law of total probability to factor out the confounder, in this case L:

·         E(Ya) = Σ_l E(Ya | L=l) * Pr(L=l)     (the sum is over the values l of L)

We do this because we know we need to introduce L behind the conditioning sign, in order to be able to use our exchangeability assumption in the next step. Then, because Ya ∐ A | L, we are allowed to introduce A=a behind the conditioning sign:

·         E(Ya) = Σ_l E(Ya | A=a, L=l) * Pr(L=l)

Finally, use the consistency assumption: Because we are in the stratum where A=a in all individuals, we can replace Ya by Y:

·         E(Ya) = Σ_l E(Y | A=a, L=l) * Pr(L=l)

 

We now have an expression for the counterfactual in terms of quantities that can be observed in the real world, ie, in terms of the joint distribution of A, Y and L. In other words, we have linked the data generating mechanism with the joint distribution – we have “identified” E(Ya). We can therefore estimate E(Ya).

This identifying expression is valid if and only if L was the only confounder. If we had not observed sufficient variables to obtain conditional exchangeability, it would not be possible to identify the distribution of Ya : there would be intractable confounding.
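The identifying formula can be turned directly into an estimator. Here is a minimal Python sketch of standardization over a single confounder L, using hypothetical (L, A, Y) records; it is valid only under the assumptions stated above (conditional exchangeability given L, consistency, and positivity in every stratum):

```python
from collections import defaultdict

def standardized_mean(data, a):
    """E(Ya) = sum over l of E(Y | A=a, L=l) * Pr(L=l),
    assuming conditional exchangeability given L, consistency,
    and positivity within each stratum l."""
    n = len(data)
    by_l = defaultdict(list)
    for l, trt, y in data:
        by_l[l].append((trt, y))
    total = 0.0
    for l, rows in by_l.items():
        pr_l = len(rows) / n                       # Pr(L=l)
        ys = [y for trt, y in rows if trt == a]    # outcomes with A=a in stratum l
        total += (sum(ys) / len(ys)) * pr_l        # positivity: ys must be non-empty
    return total

# Hypothetical (L, A, Y) records, with L = sex:
data = [
    ("m", 1, 1), ("m", 1, 0), ("m", 0, 0), ("m", 0, 0),
    ("f", 1, 1), ("f", 1, 1), ("f", 0, 1), ("f", 0, 0),
]
effect = standardized_mean(data, 1) - standardized_mean(data, 0)
print(effect)  # 0.75 - 0.25 = 0.5
```

This is the "two trials pooled into one database" analysis from the Randomized Trials section: each stratum is analyzed as its own trial, and the results are averaged with weights Pr(L=l).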

Identification is the core concept of causal inference: It is what allows us to link the data generating mechanism to the joint distribution, to something that can be observed in the real world. 

 

The difference between epidemiology and biostatistics

Many people see Epidemiology as «Applied Biostatistics».  This is a misconception. In reality, epidemiology and biostatistics are completely different parts of the problem.  To illustrate what is going on, consider this figure:

 

 

The data generating mechanism first creates a joint distribution of observed variables. Then, we sample from the joint distribution to obtain data. Biostatistics asks: If we have a sample, what can we learn about the joint distribution? Epidemiology asks: If we have all the information about the joint distribution, what can we learn about the data generating mechanism? This is a much harder problem, but it can still be analyzed with some rigor.

Epidemiology without Biostatistics is always impossible:  It would not be possible to learn about the data generating mechanism without asking questions about the joint distribution. This usually involves sampling.  Therefore, we will need good statistical estimators of the joint distribution.

Biostatistics without Epidemiology is usually pointless: The joint distribution of observed variables is simply not interesting in itself. You could make the claim that randomized trials are an example of biostatistics without epidemiology. However, the epidemiology is still there; it is just not necessary to think about it, because the epidemiologic part of the analysis is trivial.

Note that the word “bias” means different things in Epidemiology and Biostatistics. In Biostatistics, “bias” is a property of a statistical estimator: We talk about whether ŷ is a biased estimator of E(Y|A). If an estimator is biased, it means that when you use data from a sample to make inferences about the joint distribution in the population the sample came from, there will be a systematic source of error.

In Epidemiology, “bias” means that you are estimating the wrong thing:  Epidemiological bias is a question about whether E(Y|A) is a valid identification of E(Ya).   If there is epidemiologic bias, it means that you estimated something in the joint distribution, but that this something does not answer the question you were interested in.    

These are completely different concepts. Both are important and can lead to your estimates being wrong. It is possible for a statistically valid estimator to be biased in the epidemiologic sense, and vice versa.   For your results to be valid, your estimator must be unbiased in both senses.

 


You’re Entitled to Everyone’s Opinion

24 satt 20 September 2014 03:39PM

Over the past year, I've noticed a topic where Less Wrong might have a blind spot: public opinion. Since last September I've had (or butted into) five conversations here where someone's written something which made me think, "you wouldn't be saying that if you'd looked up surveys where people were actually asked about this". The following list includes six findings I've brought up in those LW threads. All of the findings come from surveys of public opinion in the United States, though some of the results are so obvious that polls scarcely seem necessary to establish their truth.

  1. The public's view of the harms and benefits from scientific research has consistently become more pessimistic since the National Science Foundation began its surveys in 1979. (In the wake of repeated misconduct scandals, and controversies like those over vaccination, global warming, fluoridation, animal research, stem cells, and genetic modification, people consider scientists less objective and less trustworthy.)
  2. Most adults identify as neither Republican nor Democrat. (Although the public is far from apolitical, lots of people are unhappy with how politics currently works, and also recognize that their beliefs align imperfectly with the simplistic left-right axis. This dissuades them from identifying with mainstream parties.)
  3. Adults under 30 are less likely to believe that abortion should be illegal than the middle-aged. (Younger adults tend to be more socially liberal in general than their parents' generation.)
  4. In the 1960s, those under 30 were less likely than the middle-aged to think the US made a mistake in sending troops to fight in Vietnam. (The under-30s were more likely to be students and/or highly educated, and more educated people were less likely to think sending troops to Vietnam was a mistake.)
  5. The Harris Survey asked, in November 1969, "as far as their objectives are concerned, do you sympathize with the goals of the people who are demonstrating, marching, and protesting against the war in Vietnam, or do you disagree with their goals?" Most respondents aged 50+ sympathized with the protesters' goals, whereas only 28% of under-35s did. (Despite the specific wording of the question, the younger respondents worried that the protests reflected badly on their demographic, whereas older respondents were more often glad to see their own dissent voiced.)
  6. A 2002 survey found that about 90% of adult smokers agreed with the statement, "If you had to do it over again, you would not have started smoking." (While most smokers derive enjoyment from smoking, many weight smoking's negative consequences strongly enough that they'd rather not smoke; they continue smoking because of habit or addiction.)

continue reading »

Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities

24 KatjaGrace 16 September 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.

This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)


Summary

Economic growth:

  1. Economic growth has become radically faster over the course of human history. (p1-2)
  2. This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2)
  3. Thus history suggests large changes in the growth rate of the economy are plausible. (p2)
  4. This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity.
  5. Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans - slow as they are - sustaining such a rapidly growing economy. (p2-3)
  6. Thus economic history suggests that rapid growth caused by AI is more plausible than you might otherwise think.

The history of AI:

  1. Human-level AI has been predicted since the 1940s. (p3-4)
  2. Early predictions were often optimistic about when human-level AI would come, but rarely considered whether it would pose a risk. (p4-5)
  3. AI research has been through several cycles of relative popularity and unpopularity. (p5-11)
  4. By around the 1990s, 'Good Old-Fashioned Artificial Intelligence' (GOFAI) techniques based on symbol manipulation gave way to new methods such as artificial neural networks and genetic algorithms. These are widely considered more promising, in part because they are less brittle and can learn from experience more usefully. Researchers have also lately developed a better understanding of the underlying mathematical relationships between various modern approaches. (p5-11)
  5. AI is very good at playing board games. (p12-13)
  6. AI is used in many applications today (e.g. hearing aids, route-finders, recommender systems, medical decision support systems, machine translation, face recognition, scheduling, the financial market). (p14-16)
  7. In general, tasks we thought were intellectually demanding (e.g. board games) have turned out to be easy to do with AI, while tasks which seem easy to us (e.g. identifying objects) have turned out to be hard. (p14)
  8. An 'optimality notion' is the combination of a rule for learning, and a rule for making decisions. Bostrom describes one of these: a kind of ideal Bayesian agent. This is impossible to actually make, but provides a useful measure for judging imperfect agents against. (p10-11)

Notes on a few things

  1. What is 'superintelligence'? (p22 spoiler)
    In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Vagueness in this definition will be cleared up later. 
  2. What is 'AI'?
    In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
  3. What is 'human-level' AI? 
    We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear. 

    One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.

    Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.

    Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.

    We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.


    Example of how the first 'human-level' AI may surpass humans in many ways.

    Because of these ambiguities, AI researchers are sometimes hesitant to use the term. e.g. in these interviews.
  4. Growth modes (p1) 
    Robin Hanson wrote the seminal paper on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper).
  5. What causes these transitions between growth modes? (p1-2)
    One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are quite a lot of negative datapoints - you need an explanation that didn't happen at all of the other times in history. 
  6. Growth of growth
    It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th Century! The time since 1950 has apparently been strange. 

    (Figure from here)
  7. Early AI programs mentioned in the book (p5-6)
    You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.
  8. Later AI programs mentioned in the book (p6)
    Algorithmically generated Beethoven, algorithmic generation of patentable inventions, artificial comedy (requires download).
  9. Modern AI algorithms mentioned (p7-8, 14-15) 
    Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
  10. What is maximum likelihood estimation? (p9)
    Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this scenario most probable is the one where you have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should think you won a car, if someone tells you that. Being the target of a spam email might only give you a low probability of being told that you have won a car (a spam email may instead advise you of products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time if you get such an email, you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
  11. What are hill climbing algorithms like? (p9)
    The second large class of algorithms Bostrom mentions are hill climbing algorithms. The idea here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download).
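Notes 10 and 11 above can be tied together in one toy sketch: estimating a coin's bias by hill climbing on the Bernoulli log-likelihood. (The closed-form maximum likelihood estimate here is just the sample mean, so hill climbing is overkill; the example is purely illustrative of both ideas.)

```python
import math

observations = [1, 1, 0, 1, 0, 1, 1, 1]  # 6 heads out of 8 hypothetical flips

def log_likelihood(p):
    # Log-probability of the observations if the coin's bias were p.
    return sum(math.log(p if x == 1 else 1 - p) for x in observations)

p, step = 0.5, 0.1
while step > 1e-6:
    # Hill climbing: move to a neighbour if it makes the observations
    # more probable; otherwise shrink the step and refine the search.
    candidates = [p, max(0.001, p - step), min(0.999, p + step)]
    best = max(candidates, key=log_likelihood)
    if best == p:
        step /= 2
    else:
        p = best
print(round(p, 3))  # converges to the sample mean, 0.75
```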

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:

  1. How have investments into AI changed over time? Here's a start, estimating the size of the field.
  2. What does progress in AI look like in more detail? What can we infer from it? I wrote about algorithmic improvement curves before. If you are interested in plausible next steps here, ask me.
  3. What do economic models tell us about the consequences of human-level AI? Here is some such thinking; Eliezer Yudkowsky has written at length about his request for more.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.

Funding cannibalism motivates concern for overheads

24 Thrasymachus 30 August 2014 12:42AM

Summary: 'Overhead expenses' (CEO salary, percentage spent on fundraising) are often deemed a poor measure of charity effectiveness by Effective Altruists, and so they disprefer means of charity evaluation which rely on these. However, 'funding cannibalism' suggests that these metrics (and the norms that engender them) have value: if fundraising is broadly a zero-sum game between charities, then there's a commons problem where all charities could spend less money on fundraising and all do more good, but each is locally incentivized to spend more. Donor norms against increasing spending on zero-sum 'overheads' might be a good way of combating this. This valuable collective action of donors may explain the apparent underutilization of fundraising by charities, and perhaps should make us cautious in undermining it.

The EA critique of charity evaluation

Pre-Givewell, the common means of evaluating charities (Guidestar, Charity Navigator) used a mixture of governance checklists and 'overhead indicators'. Charities would gain points both for having features associated with good governance (being transparent in the right ways, balancing budgets, the right sorts of corporate structure), and for spending their money on programs while avoiding 'overhead expenses' like administration and (especially) fundraising. For shorthand, call this 'common sense' evaluation.

The standard EA critique is that common sense evaluation doesn't capture what is really important: outcomes. It is easy to imagine charities that look really good to common sense evaluation yet have negligible (or negative) outcomes.  In the case of overheads, it becomes unclear whether these are even proxy measures of efficacy. Any fundraising that still 'turns a profit' looks like a good deal, whether it comprises five percent of a charity's spending or fifty.

A summary of the EA critique of common sense evaluation is that its myopic focus on these metrics gives pathological incentives, as these metrics frequently lie anti-parallel to maximizing efficacy. To score well on these evaluations, charities may be encouraged to raise less money, hire less able staff, and cut corners in their own management, even if doing these things would be false economies.

 

Funding cannibalism and commons tragedies

In the wake of the ALS 'Ice bucket challenge', Will MacAskill suggested there is considerable 'funding cannibalism' in the non-profit sector. Instead of the Ice bucket challenge 'raising' money for ALS, it has taken money that would have been donated to other causes instead - cannibalizing other causes. Rather than each charity raising funds independently of one another, they compete for a fairly fixed pie of aggregate charitable giving.

The 'cannibalism' thesis is controversial, but looks plausible to me, especially when looking at 'macro' indicators: the proportion of household spending given to charity looks pretty fixed whilst fundraising has increased dramatically, for example.

If true, cannibalism is important. As MacAskill points out, the tens of millions of dollars raised for ALS are no longer an untrammelled good, alloyed as they are with the opportunity cost of whatever other causes they have cannibalized (q.v.). There's also a more general consideration: if there is a fixed pot of charitable giving insensitive to aggregate fundraising, then fundraising becomes a commons problem. If all charities could spend less on their fundraising, none would lose out, so all could spend more of their funds on their programs. However, for any one alone to spend less on fundraising allows the others to cannibalize it.

 

Civilizing Charitable Cannibals, and Metric Meta-Myopia

Coordination among charities to avoid this commons tragedy is far-fetched. Yet coordination of donors on shared norms about 'overhead ratio' can help. By penalizing a charity for spending too much on zero-sum games with other charities like fundraising, donors can stop a race-to-the-bottom fundraising free-for-all and the burning of the charitable commons it implies. The apparently high marginal return to fundraising might suggest this is already in effect (and effective!).

The contrarian take would be that it is the EA critique of charity evaluation which is myopic, not the charity evaluation itself - by looking at the apparent benefit for a single charity of more overhead, the EA critique ignores the broader picture of the non-profit ecosystem, and its attack undermines a key environmental protection of an important commons - further, one which the right tail of most effective charities benefits from just as much as the crowd of 'great unwashed' other causes. (Fundraising ability and efficacy look like they should be pretty orthogonal. Besides, if they correlate well enough that you'd expect the most efficacious charities to win the zero-sum fundraising game, couldn't you dispense with Givewell and give to the best fundraisers?)

The contrarian view probably goes too far. Although there's a case for communally caring about fundraising overheads, as cannibalism leads us to guess it is zero sum, parallel reasoning is hard to apply to administration overhead: charity X doesn't lose out if charity Y spends more on management, but charity Y is still penalized by common sense evaluation even if its overall efficacy increases. I'd guess that features like executive pay lie somewhere in the middle: non-profit executives could be poached by for-profit industries, so it is not as simple as donors prodding charities to coordinate to lower executive pay; but donors can prod charities not to throw away whatever 'non-profit premium' they do have in competing with one another for top talent (c.f.). If so, we should castigate people less for caring about overhead, even if we still want to encourage them to care about efficacy too.

The invisible hand of charitable pan-handling

If true, it is unclear whether the story that should be told is 'common sense was right all along and the EA movement overconfidently criticised it' or 'a stopped clock is right twice a day, and the generally wrong-headed common sense had an unintended feature amongst the bugs'. I'd lean towards the latter, simply because the advocates of the common sense approach have not (to my knowledge) articulated these considerations themselves.

However, many of us believe the implicit machinery of the market can turn without many of the actors within it having any explicit understanding of it. Perhaps the same applies here. If so, we should be less confident in claiming the status quo is pathological and we can do better: there may be a rationale eluding both us and its defenders.

"Follow your dreams" as a case study in incorrect thinking

24 cousin_it 20 August 2014 01:18PM

This post doesn't contain any new ideas that LWers don't already know. It's more of an attempt to organize my thoughts and have a writeup for future reference.

Here's a great quote from Sam Hughes, giving some examples of good and bad advice:

"You and your gaggle of girlfriends had a saying at university," he tells her. "'Drink through it'. Breakups, hangovers, finals. I have never encountered a shorter, worse, more densely bad piece of advice." Next he goes into their bedroom for a moment. He returns with four running shoes. "You did the right thing by waiting for me. Probably the first right thing you've done in the last twenty-four hours. I subscribe, as you know, to a different mantra. So we're going to run."

The typical advice given to young people who want to succeed in highly competitive areas, like sports, writing, music, or making video games, is to "follow your dreams". I think that advice is up there with "drink through it" in terms of sheer destructive potential. If it was replaced with "don't bother following your dreams" every time it was uttered, the world might become a happier place.

The amazing thing about "follow your dreams" is that thinking about it uncovers a sort of perfect storm of biases. It's fractally wrong, like PHP, where the big picture is wrong and every small piece is also wrong in its own unique way.

The big culprit is, of course, optimism bias due to perceived control. I will succeed because I'm me, the special person at the center of my experience. That's the same bias that leads us to overestimate our chances of finishing the thesis on time, or having a successful marriage, or any number of other things. Thankfully, we have a really good debiasing technique for this particular bias, known as reference class forecasting, or inside vs outside view. What if your friend Bob was a slightly better guitar player than you? Would you bet a lot of money on Bob making it big like Jimi Hendrix? The question is laughable, but then so is betting the years of your own life, with a smaller chance of success than Bob.

That still leaves many questions unanswered, though. Why do people offer such advice in the first place, why do other people follow it, and what can be done about it?

Survivorship bias is one big reason we constantly hear successful people telling us to "follow our dreams". Successful people don't really know why they are successful, so they attribute it to their hard work and not giving up. The media amplifies that message, while millions of failures go unreported because they're not celebrities, even though they try just as hard. So we hear about successes disproportionately, in comparison to how often they actually happen, and that colors our expectations of our own future success. Sadly, I don't know of any good debiasing techniques for this error, other than just reminding yourself that it's an error.

When someone has invested a lot of time and effort into following their dream, it feels harder to give up due to the sunk cost fallacy. That happens even with very stupid dreams, like the dream of winning at the casino, that were obviously installed by someone else for their own profit. So when you feel convinced that you'll eventually make it big in writing or music, you can remind yourself that compulsive gamblers feel the same way, and that feeling something doesn't make it true.

Of course there are good dreams and bad dreams. Some people have dreams that don't tease them for years with empty promises, but actually start paying off in a predictable time frame. The main difference between the two kinds of dream is the difference between positive-sum games, a.k.a. productive occupations, and zero-sum games, a.k.a. popularity contests. Sebastian Marshall's post Positive Sum Games Don't Require Natural Talent makes the same point, and advises you to choose a game where you can be successful without outcompeting 99% of other players.

The really interesting question to me right now is, what sets someone on the path of investing everything in a hopeless dream? Maybe it's a small success at an early age, followed by some random encouragement from others, and then you're locked in. Is there any hope for thinking back to that moment, or set of moments, and making a little twist to put yourself on a happier path? I usually don't advise people to change their desires, but in this case it seems to be the right thing to do.

Announcing the 2014 program equilibrium iterated PD tournament

24 tetronian2 31 July 2014 12:24PM

Last year, AlexMennen ran a prisoner's dilemma tournament with bots that could see each other's source code, which was dubbed a "program equilibrium" tournament. This year, I will be running a similar tournament. Here's how it's going to work: Anyone can submit a bot that plays the iterated PD against other bots. Bots can not only remember previous rounds, as in the standard iterated PD, but also run perfect simulations of their opponent before making a move. Please see the github repo for the full list of rules and a brief tutorial.

There are a few key differences this year:

1) The tournament is in Haskell rather than Scheme.

2) The time limit for each round is shorter (5 seconds rather than 10) but the penalty for not outputting Cooperate or Defect within the time limit has been reduced.

3) Bots cannot directly see each other's source code, but they can run their opponent, specifying the initial conditions of the simulation, and then observe the output.

All submissions should be emailed to pdtournament@gmail.com or PM'd to me here on LessWrong by September 15th, 2014. LW users with 50+ karma who want to participate but do not know Haskell can PM me with an algorithm/pseudocode, and I will translate it into a bot for them. (If there is a flood of such requests, I would appreciate some volunteers to help me out.)
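To give a flavour of what "running a perfect simulation of your opponent" can buy you, here is a hypothetical sketch in Python. The real tournament is in Haskell with its own sandboxed API; every signature below is invented purely for illustration of the core idea (a "mirror bot" that cooperates iff a simulation of its opponent cooperates):

```python
# A bot here is a function (opponent, history, depth) -> "C" or "D".
# These signatures are invented; the actual tournament API is Haskell.

def mirror_bot(opponent, history, depth=0):
    """Cooperate iff a simulated copy of the opponent cooperates."""
    if depth > 2:                 # cap mutual-simulation recursion so it terminates
        return "C"
    # Run the opponent, feeding it a stand-in "always cooperate" bot as its opponent.
    predicted = opponent(lambda *_: "C", history, depth + 1)
    return "C" if predicted == "C" else "D"

def cooperate_bot(opponent, history, depth=0):
    return "C"

def defect_bot(opponent, history, depth=0):
    return "D"

print(mirror_bot(cooperate_bot, []))  # → C
print(mirror_bot(defect_bot, []))     # → D
print(mirror_bot(mirror_bot, []))     # → C (mutual simulation still cooperates)
```

Note the depth cap: without it, two simulation-based bots simulating each other would recurse forever, which is the toy analogue of the tournament's time limit.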

Sequence Announcement: Applied Causal Inference

24 Anders_H 30 July 2014 08:55PM

Applied Causal Inference for Observational Research

This sequence is an introduction to basic causal inference.  It was originally written as auxiliary notes for a course in Epidemiology, but it is relevant to almost any kind of applied statistical research, including econometrics, sociology, psychology, political science etc.  I would not be surprised if you guys find a lot of errors, and I would be very grateful if you point them out in the comments. This will help me improve my course notes and potentially help me improve my understanding of the material. 

For mathematically inclined readers, I recommend skipping this sequence and instead reading Pearl's book on Causality.  There is also a lot of good material on causal graphs on Less Wrong itself.   Also, note that my thesis advisor is writing a book that covers the same material in more detail, the first two parts are available for free at his website.

Pearl's book, Miguel's book and Eliezer's writings are all more rigorous and precise than my sequence.  This is partly because I have a different goal:  Pearl and Eliezer are writing for mathematicians and theorists who may be interested in contributing to the theory.  Instead,  I am writing for consumers of science who want to understand correlation studies from the perspective of a more rigorous epistemology.  

I will use Epidemiological/Counterfactual notation rather than Pearl's notation. I apologize if this is confusing.  These two approaches refer to the same mathematical objects, it is just a different notation. Whereas Pearl would use the "Do-Operator" E[Y|do(a)], I use counterfactual variables E[Y^a].  Instead of using Pearl's "Do-Calculus" for identification, I use Robins' G-Formula, which will give the same results. 

For all applications, I will use the letter "A" to represent "treatment" or "exposure" (the thing we want to estimate the effect of),  Y to represent the outcome, L to represent any measured confounders, and U to represent any unmeasured confounders. 
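Using that notation, a minimal sketch may help fix ideas. Robins' G-formula identifies E[Y^a] by standardization over the measured confounder: E[Y^a] = Σ_l E[Y|A=a, L=l] P(L=l). The simulation below (all numbers invented for illustration; it assumes the sequence's A/Y/L variable roles) shows the naive contrast E[Y|A=1] − E[Y|A=0] being biased while the standardized contrast recovers the true effect:

```python
import random

random.seed(0)

# Toy data: binary confounder L raises both treatment probability and outcome risk.
n = 100_000
data = []
for _ in range(n):
    L = random.random() < 0.5
    A = random.random() < (0.8 if L else 0.2)        # confounding: L encourages treatment
    Y = random.random() < (0.1 + 0.1 * A + 0.4 * L)  # true causal effect of A is +0.10
    data.append((L, A, Y))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Naive contrast E[Y|A=1] - E[Y|A=0]: biased, because L differs between the groups.
naive = mean(y for l, a, y in data if a) - mean(y for l, a, y in data if not a)

# G-formula (standardization): E[Y^a] = sum over l of E[Y|A=a, L=l] * P(L=l)
def g_formula(a):
    return sum(
        mean(y for l2, a2, y in data if l2 == l and a2 == a)
        * mean(l2 == l for l2, _, _ in data)
        for l in (False, True)
    )

adjusted = g_formula(True) - g_formula(False)
print(f"naive: {naive:.2f}, g-formula: {adjusted:.2f}")  # adjusted is close to 0.10
```

This only works because L is measured and there is no unmeasured confounder U at play; that caveat is exactly what the rest of the sequence is about.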

Outline of Sequence:

I hope to publish one post every week.  I have rough drafts for the following eight sections, and will keep updating this outline with links as the sequence develops:


Part 0:  Sequence Announcement / Introduction (This post)

Part 1:  Basic Terminology and the Assumptions of Causal Inference

Part 2:  Graphical Models

Part 3:  Using Causal Graphs to Understand Bias

Part 4:  Time-Dependent Exposures

Part 5:  The G-Formula

Part 6:  Inverse Probability Weighting

Part 7:  G-Estimation of Structural Nested Models and Instrumental Variables

Part 8:  Single World Intervention Graphs, Cross-World Counterfactuals and Mediation Analysis

 

 Introduction: Why Causal Inference?

The goal of applied statistical research is almost always to learn about causal effects.  However, causal inference from observational data is hard, to the extent that it is usually not even possible without strong, almost heroic assumptions.   Because of the inherent difficulty of the task, many old-school investigators were trained to avoid making causal claims.  Words like “cause” and “effect” were banished from polite company, and the slogan “correlation does not imply causation” became an article of faith which, when said loudly enough,  seemingly absolved the investigators from the sin of making causal claims.

However, readers were not fooled:  They always understood that epidemiologic papers were making causal claims.  Of course they were making causal claims; why else would anybody be interested in a paper about the correlation between two variables?   For example, why would anybody want to know about the correlation between eating nuts and longevity, unless they were wondering if eating nuts would cause them to live longer?

When readers interpreted these papers causally, were they simply ignoring the caveats, drawing conclusions that were not intended by the authors?   Of course they weren’t.  The discussion sections of epidemiologic articles are full of “policy implications” and speculations about biological pathways that are completely contingent on interpreting the findings causally. Quite clearly, no matter how hard the investigators tried to deny it, they were making causal claims. However, they were using methodology that was not designed for causal questions, and did not have a clear language for reasoning about where the uncertainty about causal claims comes from. 

This was not sustainable, and inevitably led to a crisis of confidence, which culminated when some high-profile randomized trials showed completely different results from the preceding observational studies.  In one particular case, when the Women’s Health Initiative trial showed that post-menopausal hormone replacement therapy increases the risk of cardiovascular disease, the difference was so dramatic that many thought-leaders in clinical medicine completely abandoned the idea of inferring causal relationships from observational data.

It is important to recognize that the problem was not that the results were wrong. The problem was that there was uncertainty that was not taken seriously by the investigators. A rational person who wants to learn about the world will be willing to accept that studies have margins of error, but only as long as the investigators make a good-faith effort to examine what the sources of error are, and communicate clearly about this uncertainty to their readers.  Old-school epidemiology failed at this.  We are not going to make the same mistake. Instead, we are going to develop a clear, precise language for reasoning about uncertainty and bias.

In this context, we are going to talk about two sources of uncertainty – “statistical” uncertainty and “epidemiological” uncertainty. 

We are going to use the word “Statistics” to refer to the theory of how we can learn about correlations from limited samples.  For statisticians, the primary source of uncertainty is sampling variability. Statisticians are very good at accounting for this type of uncertainty: Concepts such as “standard errors”, “p-values” and “confidence intervals” are all attempts at quantifying and communicating the extent of uncertainty that results from sampling variability.
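As a minimal sketch of this first kind of uncertainty (assuming nothing beyond the standard normal-approximation formula, with all numbers invented for illustration), one can simulate a sample and quantify its sampling variability with a standard error and 95% confidence interval:

```python
import math
import random

random.seed(1)

# Draw a sample from a distribution whose true mean is 10.
sample = [random.gauss(10, 2) for _ in range(400)]
n = len(sample)

sample_mean = sum(sample) / n
sample_sd = math.sqrt(sum((x - sample_mean) ** 2 for x in sample) / (n - 1))
se = sample_sd / math.sqrt(n)                  # standard error of the mean

# Normal-approximation 95% confidence interval: quantifies sampling variability,
# and nothing else - it says nothing about confounding or selection bias.
ci = (sample_mean - 1.96 * se, sample_mean + 1.96 * se)
print(f"mean = {sample_mean:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The point of the sketch is the comment in the middle: this interval captures statistical uncertainty only, which is precisely why it cannot be the whole story for causal questions.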

The old school of epidemiology would tell you to stop after you had found the correlations and accounted for the sampling variability. They believed going further was impossible. However, correlations are simply not interesting. If you truly believed that correlations tell you nothing about causation, there would be no point in doing the study.

Therefore, we are going to use the terms “Epidemiology” or “Causal Inference” to refer to the next stage in the process:  Learning about causation from correlations.  This is a much harder problem, with many additional sources of uncertainty, including confounding and selection bias. However, recognizing that the problem is hard does not mean that you shouldn't try, it just means that you have to be careful. As we will see, it is possible to reason rigorously about whether correlation really does imply causation in your particular study: You will just need a precise language. The goal of this sequence is simply to give you such a language.

In order to teach you the logic of this language, we are going to make several controversial statements such as «The only way to estimate a causal effect is to run a randomized controlled trial» . You may not be willing to believe this at first, but in order to understand the logic of causal inference, it is necessary that you are at least willing to suspend your disbelief and accept it as true within the course. 

It is important to note that we are not just saying this to try to convince you to give up on observational studies in favor of randomized controlled trials.   We are making this point because understanding it is necessary in order to appreciate what it means to control for confounding: It is not possible to give a coherent meaning to the word “confounding” unless one is trying to determine whether it is reasonable to model the data as if it came from a complex randomized trial run by nature. 

 

--

When we say that causal inference is hard, what we mean by this is not that it is difficult to learn the basic concepts of the theory.  What we mean is that even if you fully understand everything that has ever been written about causal inference, it is going to be very hard to infer a causal relationship from observational data, and there will always be uncertainty about the results. This is why this sequence is not going to be a workshop that teaches you how to apply magic causal methodology. What we are interested in is developing your ability to reason honestly about where uncertainty and bias come from, so that you can communicate this to the readers of your studies.  What we want to teach you about is the epistemology that underlies epidemiological and statistical research with observational data. 

Insisting on only using randomized trials may seem attractive to a purist, but it does not take much imagination to see that there are situations where it is important to predict the consequences of an action, but where it is not possible to run a trial. In such situations, there may be Bayesian evidence to be found in nature. This evidence comes in the form of correlations in observational data. When we are stuck with this type of evidence, it is important that we have a clear framework for assessing the strength of the evidence. 

 

--

 

I am publishing Part 1 of the sequence at the same time as this introduction. I would be very interested in hearing feedback, particularly about whether people feel this has already been covered in sufficient detail on Less Wrong.  If there is no demand, there won't really be any point in transforming the rest of my course notes to a Less Wrong format. 

Thanks to everyone who had a look at this before I published, including paper-machine and Vika, Janos, Eloise and Sam from the Boston Meetup group. 

In the grim darkness of the far future there is only war continued by other means

23 Eneasz 21 October 2014 07:39PM

(cross-posted from my blog)

I. PvE vs PvP

Ever since its advent in Doom, PvP (Player vs Player) has been an integral part of almost every major video game. This is annoying to PvE (Player vs Environment) fans like myself, especially when PvE mechanics are altered (read: simplified and degraded) for the purpose of accommodating the PvP game play. Even in games which are ostensibly about the story & world, rather than direct player-on-player competition.

The reason for this comes down to simple math. PvE content is expensive to make. An hour of game play can take many dozens, or nowadays even hundreds, of man-hours of labor to produce. And once you’ve completed a PvE game, you’re done with it. There’s nothing else, you’ve reached “The End”, congrats. You can replay it a few times if you really loved it, like re-reading a book, but the content is the same. MMORPGs recycle content by forcing you to grind bosses many times before you can move on to the next one, but that’s as fun as the word “grind” makes it sound. At that point people are there more for the social aspect and the occasional high than the core gameplay itself.

PvP “content”, OTOH, generates itself. Other humans keep learning and getting better and improvising new tactics. Every encounter has the potential to be new and exciting, and they always come with the rush of triumphing over another person (or the crush of losing to the same).

But much more to the point – In PvE potentially everyone can make it into the halls of “Finished The Game;” and if everyone is special, no one is. PvP has a very small elite – there can only be one #1 player, and people are always scrabbling for that position, or defending it. PvP harnesses our status-seeking instinct to get us to provide challenges for each other rather than forcing the game developers to develop new challenges for us. It’s far more cost effective, and a single man-hour of labor can produce hundreds or thousands of hours of game play. StarCraft  continued to be played at a massive level for 12 years after its release, until it was replaced with StarCraft II.

So if you want to keep people occupied for a looooong time without running out of game-world, focus on PvP.

II. Science as PvE

In the distant past (in internet time) I commented at LessWrong that discovering new aspects of reality was exciting and filled me with awe and wonder and the normal “Science is Awesome” applause lights (and yes, I still feel that way). And I sneered at the status-grubbing of politicians and administrators and basically everyone that we in nerd culture disliked in high school. How temporary and near-sighted! How zero-sum (and often negative-sum!), draining resources we could use for actual positive-sum efforts like exploration and research! A pox on their houses!

Someone replied, asking why anyone should care about the minutia of lifeless, non-agenty forces? How could anyone expend so much of their mental efforts on such trivia when there are these complex, elaborate status games one can play instead? Feints and countermoves and gambits and evasions, with hidden score-keeping and persistent reputation effects… and that’s just the first layer! The subtle ballet of interaction is difficult even to watch, and when you get billions of dancers interacting it can be the most exhilarating experience of all.

This was the first time I’d ever been confronted with status-behavior as anything other than wasteful. Of course I rejected it at first, because no one is allowed to win arguments in real time. But it stuck with me. I now see the game play, and it is intricate. It puts Playing At The Next Level in a whole new perspective. It is the constant refinement and challenge and lack of a final completion-condition that is the heart of PvP. Human status games are the PvP of real life.

Which, by extension of the metaphor, makes Scientific Progress the PvE of real life. Which makes sense. It is us versus the environment in the most literal sense. It is content that was provided to us, rather than what we make ourselves. And it is limited – in theory we could some day learn everything that there is to learn.

III. The Best of All Possible Worlds

I’ve mentioned a few times I have difficulty accepting reality as real. Say you were trying to keep a limitless number of humans happy and occupied for an unbounded amount of time. You provide them PvE content to get them started. But you don’t want the PvE content to be their primary focus, both because they’ll eventually run out of it, and also because once they’ve completely cracked it there’s a good chance they’ll realize they’re in a simulation. You know that PvP is a good substitute for PvE for most people, often a superior one, and that PvP can get recursively more complex and intricate without limit and keep the humans endlessly occupied and happy, as long as their neuro-architecture is right. It’d be really great if they happened to evolve in a way that made status-seeking extremely pleasurable for the majority of the species, even if that did mean that the ones losing badly were constantly miserable regardless of their objective well-being. This would mean far, far more lives could be lived and enjoyed without running out of content than would otherwise be possible.

IV. Implications for CEV

It’s said that the Coherent Extrapolated Volition is “our wish if we knew more, thought faster, were more the people we wished to be, had grown up farther together.” This implies a resolution to many conflicts. No more endless bickering about whether the Red Tribe is racist or the Blue Tribe is arrogant pricks. A more unified way of looking at the world that breaks down those conceptual conflicts. But if PvP play really is an integral part of the human experience, a true CEV would notice that, and would preserve these differences instead. To ensure that we always had rival factions sniping at each other over irreconcilable, fundamental disagreements in how reality should be approached and how problems should be solved. To forever keep partisan politics as part of the human condition, so we have this dance to enjoy. Stripping it out would be akin to removing humanity’s love of music, because dancing inefficiently consumes great amounts of energy just so we can end up where we started.

Carl von Clausewitz famously said “War is the continuation of politics by other means.”  The correlate of “Politics is the continuation of war by other means” has already been proposed. It is not unreasonable to speculate that in the grim darkness of the far future, there is only war continued by other means. Which, all things considered, is greatly preferable to actual war. As long as people like Scott are around to try to keep things somewhat civil and preventing an escalation into violence, this may not be terrible.

Questions on Theism

23 Aiyen 08 October 2014 09:02PM

Long time lurker, but I've barely posted anything. I'd like to ask Less Wrong for help.

Reading various articles by the Rationalist Community over the years, here, on Slate Star Codex and a few other websites, I have found that nearly all of it makes sense. Wonderful sense, in fact, the kind of sense you only really find when the author is actually thinking through the implications of what they're saying, and it's been a breath of fresh air. I generally agree, and when I don't it's clear why we're differing, typically due to a dispute in priors.

Except in theism/atheism.

In my experience, when atheists make their case, they assume a universe without miracles, i.e. a universe that looks like one would expect if there was no God. Given this assumption, atheism is obviously the rational and correct stance to take. And generally, Christian apologists make the same assumption! They assert miracles in the Bible, but do not point to any accounts of contemporary supernatural activity. And given such assumptions, the only way one can make a case for Christianity is with logical fallacies, which is exactly what most apologists do. The thing is though, there are plenty of contemporary miracle accounts.

Near death experiences. Answers to prayer that seem to violate the laws of physics. I'm comfortable with dismissing Christian claims that an event was "more than coincidence", because given how many people are praying and looking for God's hand in events, and the fact that an unanswered prayer will generally be forgotten while a seemingly-answered one will be remembered, one would expect to see "more than coincidence" in any universe with believers, whether or not there was a God. But there are a LOT of people out there claiming to have seen events that one would expect to never occur in a naturalistic universe. I even recall reading an atheist's account of his deconversion (I believe it was Luke Muehlhauser; apologies if I'm misremembering) in which he states that as a Christian, he witnessed healings he could not explain. Now, one could say that these accounts are the result of people lying, but I expect people to be rather more honest than that, and Luke is hardly going to make up evidence for the Christian God in an article promoting unbelief! One could say that "miracles" are misunderstood natural events, but there are plenty of accounts that seem pretty unlikely without Divine intervention - I've even read claims by Christians that they had seen people raised from the dead by prayer. And so I'd like to know how atheists respond to the evidence of miracles.

This isn't just idle curiosity. I am currently a Christian (or maybe an agnostic terrified of ending up on the wrong side of Pascal's Wager), and when you actually take religion seriously, it can be a HUGE drain on quality of life. I find myself being frightened of hell, feeling guilty when I do things that don't hurt anyone but are still considered sins, and feeling guilty when I try to plan out my life, wondering if I should just put my plans in God's hands. To make matters worse, I grew up in a dysfunctional, very Christian family, and my emotions seem to be convinced that being a true Christian means acting like my parents (who were terrible role models; emulating them means losing at life).

I'm aware of plenty of arguments for non-belief: Occam's Razor giving atheism as one's starting prior in the absence of strong evidence for God, the existence of many contradictory religions proving that humanity tends to generate false gods, claims in Genesis that are simply false (Man created from mud, woman from a rib, etc. have been conclusively debunked by science), commands given by God that seem horrifyingly immoral, no known reason why Christ's death would be needed for human redemption (many apologists try to explain this, but their reasoning never makes sense), no known reason why, if belief in Jesus is so important, God wouldn't make himself blatantly obvious, hell seeming like an infinite injustice, the Bible claiming that any prayer prayed in faith will be answered contrasted with the real world where this isn't the case, a study I read about in which praying for the sick didn't improve results at all (and the group that was told they were being prayed for actually had worse results!), etc. All of this, plus the fact that it seems that nearly everyone who's put real effort into their epistemology doesn't believe and moreover is very confident in their nonbelief (I am reminded of Eliezer's comment that he would be less worried about a machine that destroys the universe if the Christian God exists than one that has a one in a trillion chance of destroying us) makes me wonder if there really isn't a God, and in realizing this, I can put down burdens that have been hurting me for nearly my entire life. But the argument from miracles keeps me in faith, keeps me frightened. If there is a good argument against miracles, learning it could be life changing.

Thank you very much. I do not have words to describe how much this means to me.

I'm holding a birthday fundraiser

23 Kaj_Sotala 05 September 2014 12:38PM

EDIT: The fundraiser was successfully completed, raising the full $500 for worthwhile charities. Yay!

Today's my birthday! And per Peter Hurford's suggestion, I'm holding a birthday fundraiser to help raise money for MIRI, GiveDirectly, and Mercy for Animals. If you like my activity on LW or elsewhere, please consider giving a few dollars to one of these organizations via the fundraiser page. You can specify which organization you wish to donate to in the comment of the donation, or just leave it unspecified, in which case I'll give your donation to MIRI.

If you don't happen to be particularly altruistically motivated, just consider it a birthday gift to me - it will give me warm fuzzies to know that I helped move money for worthy organizations. And if you are altruistically motivated but don't care about me in particular, maybe you still can get yourself to donate more than usual by hacky stuff like someone you know on the Internet having a birthday. :)

If someone else wants to hold their own birthday fundraiser, here are some tips: birthday fundraisers.

Multiple Factor Explanations Should Not Appear One-Sided

23 Stefan_Schubert 07 August 2014 02:10PM

In Policy Debates Should Not Appear One-Sided, Eliezer Yudkowsky argues that arguments on questions of fact should be one-sided, whereas arguments on policy questions should not:

On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this.  Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.

But there is no reason for complex actions with many consequences to exhibit this onesidedness property.

The reason for this is primarily that natural selection has caused all sorts of observable phenomena. With a bit of ingenuity, we can infer that natural selection has caused them, and hence they become evidence for natural selection. The evidence for natural selection thus has a common cause, which means that we should expect the argument to be one-sided.

In contrast, even if a certain policy, say lower taxes, is the right one, the rightness of this policy does not cause its evidence (or the arguments for this policy, which is a more natural expression), the way natural selection causes its evidence. Hence there is no common cause of all of the valid arguments of relevance for the rightness of this policy, and hence no reason to expect that all of the valid arguments should support lower taxes. If someone nevertheless believes this, the best explanation of their belief is that they suffer from some cognitive bias such as the affect heuristic.
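The common-cause point can be made concrete with a toy Bayesian update (all numbers here are invented for illustration). When a single hypothesis causes every piece of evidence, each observation carries a likelihood ratio greater than 1, so each update pushes the posterior the same way and the argument comes out one-sided:

```python
# Toy sketch: a hypothesis H that causes all of its evidence.
# The probabilities 0.8 and 0.3 are made up for illustration.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """One step of Bayes' rule in odds form."""
    odds = prior / (1 - prior)
    odds *= p_e_given_h / p_e_given_not_h  # likelihood ratio
    return odds / (1 + odds)

p = 0.5  # start undecided about H
for _ in range(5):           # five independent traces that H would cause
    p = update(p, 0.8, 0.3)  # each is likely under H, unlikely otherwise
print(round(p, 3))  # -> 0.993
```

Because every likelihood ratio exceeds 1, no observation ever counts against H; this is the sense in which evidence with a common cause should be expected to be one-sided.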

(In passing, I might mention that I think that the fact that moral debates are not one-sided indicates that moral realism is false, since if moral realism were true, moral facts should provide us with one-sided evidence on moral questions, just like natural selection provides us with one-sided evidence on the question how Earthly life arose. This argument is similar to, but distinct from, Mackie's argument from relativity.)

Now consider another kind of factual issues: multiple factor explanations. These are explanations which refer to a number of factors to explain a certain phenomenon. For instance, in his book Guns, Germs and Steel, Jared Diamond explains the fact that agriculture first arose in the Fertile Crescent by reference to no less than eight factors. I'll just list these factors briefly without going into the details of how they contributed to the rise of agriculture. The Fertile Crescent had, according to Diamond (ch. 8):

  1. big-seeded plants, which were
  2. abundant and occurring in large stands whose value was obvious,
  3. and which were to a large degree hermaphroditic "selfers".
  4. It had a higher percentage of annual plants than other Mediterranean climate zones.
  5. It had a higher diversity of species than other Mediterranean climate zones.
  6. It had a wider range of elevations than other Mediterranean climate zones.
  7. It had a great number of domesticable big mammals.
  8. The hunter-gatherer lifestyle was not that appealing in the Fertile Crescent.

(Note that all of these factors have to do with geographical, botanical and zoological facts, rather than with facts about the humans themselves. Diamond's goal is to prove that agriculture arose in Eurasia due to geographical luck rather than because Eurasians are biologically superior to other humans.)

Diamond does not mention any mechanism that would make it less likely for agriculture to arise in the Fertile Crescent. Hence the score of pro-agriculture vs anti-agriculture factors in the Fertile Crescent is 8-0. Meanwhile no other area in the world has nearly as many advantages. Diamond does not provide us with a definite list of how other areas of the world fared, but no non-Eurasian alternative seems to score better than about 5-3 (he is primarily interested in comparing Eurasia with other parts of the world).

Now suppose that we didn't know anything about the rise of agriculture, but that we knew that there were eight factors which could influence it. Since these factors would not be caused by the fact that agriculture first arose in the Fertile Crescent, the way the evidence for natural selection is caused by natural selection, there would be no reason to believe that these factors were on average positively probabilistically dependent on each other. Under these conditions, one area having all the advantages and the next best lacking three of them is a highly surprising distribution of advantages. On the other hand, this is precisely the pattern that we would expect given the hypothesis that Diamond suffers from confirmation bias or another related bias. His theory is "too good to be true", which lends support to the hypothesis that he is biased.
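A back-of-envelope way to see how surprising an 8-0 score is under independence (the coin-flip model and the 50% probability are my own toy assumptions, not Diamond's):

```python
from math import comb

# Toy model: each of the 8 factors independently favours agriculture in a
# given region with probability 0.5 (an invented, illustrative number).
p, n = 0.5, 8

# Probability that all eight factors come out favourable (a score of 8-0):
print(f"P(8-0): {p ** n:.4f}")  # -> 0.0039, i.e. about 0.4%

# The full distribution of pro-vs-anti splits under independence:
for k in range(n + 1):
    print(f"{k}-{n - k}: {comb(n, k) * p**k * (1 - p)**(n - k):.3f}")
```

Under these assumptions a perfect 8-0 split happens less than half a percent of the time, whereas mixed scores like 5-3 or 4-4 are the norm; that asymmetry is what makes a perfectly one-sided factor list evidence of bias.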

In this particular case, some of the factors Diamond lists presumably are positively dependent on each other. Now suppose that someone argues that all of the factors are in fact strongly positively dependent on each other, so that it is not very surprising that they all co-occur. This only pushes the problem back, however, because now we want an explanation of a) what the common cause of all of these dependencies is (it being very improbable that they all would correlate in the absence of such a common cause) and b) how it could be that this common cause increases the probability of the hypothesis via eight independent mechanisms, and doesn't decrease it via any mechanism. (This argument is complicated and I'd be happy to receive any input concerning it.)

Single-factor historical explanations are often criticized as being too "simplistic" whereas multiple factor explanations are standardly seen as more nuanced. Many such explanations are, however, one-sided in the way Diamond's explanation is, which indicates bias and dogmatism rather than nuance. (Another salient example I'm presently studying is taken from Steven Pinker's The Better Angels of Our Nature. I can provide you with the details on demand.*) We should be much better at detecting this kind of bias, since it for the most part goes unnoticed at present.

Generally, the sort of "too good to be true" arguments for inferring bias discussed here are strongly under-utilized. As our knowledge of the systematic and predictable ways our thought goes wrong increases, it becomes easier to infer bias from the structure or pattern of people's arguments, statements and beliefs. What we need is to explicate clearly, preferably using probability theory or other formal methods, what factors are relevant for deciding whether some pattern of arguments, statements or beliefs most likely is the result of biased thought-processes. I'm presently doing research on this and would be happy to discuss these questions in detail, either publicly or via pm.

*Edit: Pinker's argument. Pinker's goal is to explain why violence has declined throughout history. He lists the following five factors in the last chapter:

  • The Leviathan (the increasing influence of the government)
  • Gentle commerce (more trade leads to less violence)
  • Feminization
  • The expanding (moral) circle
  • The escalator of reason
He also lists some "important but inconsistent" factors:
  • Weaponry and disarmament (he claims that there are no strong correlations between weapon developments and numbers of deaths)
  • Resources and power (he claims that there is little connection between resource distributions and wars)
  • Affluence (tight correlations between affluence and non-violence are hard to find)
  • (Fall of) religion (he claims that atheist countries and people aren't systematically less violent)
This case is interestingly different from Diamond's. Firstly, it is not entirely clear to what extent these five mechanisms are actually different. It could be argued that "the escalator of reason" is a common cause of the other ones: that this causes us to have better self-control, which brings out the better angels of our nature, which essentially is feminization and the expanding circle, and which leads to better control over the social environment (the Leviathan), which in turn leads to more trade.

Secondly, the expression "inconsistent" suggests that the four latter factors are composed of different sub-mechanisms that play in different directions. That is most clearly seen regarding weaponry and disarmament. Clearly, more efficient weapons lead to more deaths when they are being used. That is an important reason why World War II was so comparatively bloody. But they also lead to a lower chance of the weapons actually being used. The terrifying power of nuclear weapons is an important reason why they've only been used twice in wars. Hence we here have two different mechanisms playing in different directions.

I do think that "the escalator of reason" is a fundamental cause behind the other mechanisms. But it also presumably has some effects which increase the level of violence. For one thing, more rational people are more effective at what they do, which means they can kill more people if they want to. (It is just that normally, they don't want to do it as often as irrational people.) (We thus have the same structure that we had regarding weaponry.)

Also, in traditional societies, pro-social behaviour is often underwritten by mythologies which have no basis in fact. When these mythologies were dissolved by reason, many feared that chaos would ensue ("when God is dead, everything is permitted"). This did not happen. But it is hard to deny that such mythologies can lead to less violence, and that therefore their dissolution through reason can lead to more violence.

We shouldn't get too caught up in the details of this particular case, however. What is important is, again, that there is something suspicious about only listing mechanisms that play in one direction. In this case, it is not even hard to find important mechanisms that play in the other direction. In my view, putting them in the other scale, as it were, leads to a better understanding of how the history of violence has unfolded. That said, I find DavidAgain's counterarguments below interesting.


Overly convenient clusters, or: Beware sour grapes

22 KnaveOfAllTrades 02 September 2014 04:04AM

Related to: Policy Debates Should Not Appear One-Sided

There is a well-known fable which runs thus:

“Driven by hunger, a fox tried to reach some grapes hanging high on the vine but was unable to, although he leaped with all his strength. As he went away, the fox remarked 'Oh, you aren't even ripe yet! I don't need any sour grapes.' People who speak disparagingly of things that they cannot attain would do well to apply this story to themselves.”

This gives rise to the common expression ‘sour grapes’, referring to a situation in which one incorrectly claims to not care about something to save face or feel better after being unable to get it.

This seems to be related to a general phenomenon, in which motivated cognition leads one to flinch away from the prospect of an action that is inconvenient or painful in the short term by concluding that a less-painful option strictly dominates the more-painful one.

In the fox’s case, the allegedly-dominating option is believing (or professing) that he did not want the grapes. This spares him the pain of feeling impotent in face of his initial failure, or the embarrassment of others thinking him to have failed. If he can’t get the grapes anyway, then he might as well erase the fact that he ever wanted them, right? The problem is that considering this line of reasoning will make it more tempting to conclude that the option really was dominating—that he really couldn’t have gotten the grapes. But maybe he could’ve gotten the grapes with a bit more work—by getting a ladder, or making a hook, or Doing More Squats in order to Improve His Vert.

The fable of the fox and the grapes doesn’t feel like a perfect fit, though, because the fox doesn’t engage in any conscious deliberation before giving up on sour grapes; the whole thing takes place subconsciously. Here are some other examples that more closely illustrate the idea of conscious rationalization by use of overly convenient partitions:

The Seating Fallacy:

“Be who you are and say what you feel, because those who mind don't matter and those who matter don't mind.”

This advice is neither good in full generality nor bad in full generality. Clearly there are some situations where some person is worrying too much about other people judging them, or is anxious about inconveniencing others without taking their own preferences into account. But there are also clearly situations (like dealing with an unpleasant, incompetent boss) where fully exposing oneself or saying whatever comes into one’s head is not strategic and can be outright disastrous. Without taking into account the specifics of the situation of the recipient of the advice, it is of limited use.

It is convenient to absolve oneself of blame by writing off anybody who challenges our first impulse as someone who ‘doesn’t matter’; it means that if something goes wrong, one can avoid the painful task of analysing and modifying one’s behaviour.

In particular, we have the following corollary:

The Fundamental Fallacy of Dating:

“Be yourself and don’t hide who you are. Be up-front about what you want. If it puts your date off, then they wouldn’t have been good for you anyway, and you’ve dodged a bullet!”

In the short-term it is convenient to not have to filter or reflect on what one says (face-to-face) or writes (online dating). In the longer term, having no filter is not a smart way to approach dating. As the biases and heuristics program has shown, people are often mistaken about what they would prefer under reflection, and are often inefficient and irrational in pursuing what they want. There are complicated courtship conventions governing timelines for revealing information about oneself and negotiating preferences, that have evolved to work around these irrationalities, to the benefit of both parties. In particular, people are dynamically inconsistent, and willing to compromise a lot more later on in a courtship than they thought they would earlier on; it is often a favour to both of you to respect established boundaries regarding revealing information and getting ahead of the current stage of the relationship.

For those who have not much practised the skill of avoiding triggering Too Much Information reactions, it can feel painful and disingenuous to even try changing their behaviour, and they rationalise it via the Fundamental Fallacy. At any given moment, changing this behaviour is painful and causes a flinch reaction, even though the value of information of trying a different approach might be very high, and might cause less pain (e.g. through reduced loneliness) in the long term.

We also have:

PR rationalization and incrimination:

“There’s already enough ammunition out there if anybody wants to assassinate my character, launch a smear campaign, or perform a hatchet job. Nothing I say at this point could make it worse, so there’s no reason to censor myself.”

This is an overly convenient excuse. It does not take into account, for example, that new statements provide a new opportunity for one to come to the attention of quote miners in the first place, or that different statements might be more or less easy to seed a smear campaign with; ammunition can vary in type and accessibility, so that adding more can increase the convenience of a hatchet job. It might turn out, after weighing the costs and benefits, that speaking honestly is the right decision. But one can’t know that on the strength of a convenient deontological argument that doesn’t consider those costs. Similarly:

“I’ve already pirated so much stuff I’d be screwed if I got caught. Maybe it was unwise and impulsive at first, but by now I’m past the point of no return.”

This again fails to take into account the increased risk of one’s deeds coming to attention; if most prosecutions are caused by (even if not purely about) offences shortly before the prosecution, and you expect to pirate long into the future, then your position now is the same as when you first pirated; if it was unwise then, then it’s unwise now.

~~~~

The common fallacy in all these cases is that one looks at only the extreme possibilities, and throws out the inconvenient, ambiguous cases. This results in a disconnected space of possibilities that is engineered to allow one to prove a convenient conclusion. For example, the Seating Fallacy throws out the possibility that there are people who mind but also matter; the Fundamental Fallacy of Dating prematurely rules out people who are dynamically inconsistent or are imperfect introspectors, or who have uncertainty over preferences; PR rationalization fails to consider marginal effects and quantify risks in favour of a lossy binary approach.

What are other examples of situations where people (or Less Wrongers specifically) might fall prey to this failure mode?

Polymath-style attack on the Parliamentary Model for moral uncertainty

21 danieldewey 26 September 2014 01:51PM

Thanks to ESrogs, Stefan_Schubert, and the Effective Altruism summit for the discussion that led to this post!

This post is to test out Polymath-style collaboration on LW. The problem we've chosen to try is formalizing and analyzing Bostrom and Ord's "Parliamentary Model" for dealing with moral uncertainty.

I'll first review the Parliamentary Model, then give some of Polymath's style suggestions, and finally suggest some directions that the conversation could take.

continue reading »

Robin Hanson's "Overcoming Bias" posts as an e-book.

21 ciphergoth 31 August 2014 01:26PM

At Luke Muehlhauser's request, I wrote a script to scrape all of Robin Hanson's posts to Overcoming Bias into an e-book; here's a first beta release. Please comment here with any problems—posts in the wrong order, broken links, bad formatting, missing posts. Thanks!

A Guide to Rational Investing

20 ColbyDavis 15 September 2014 02:36AM

Hello Less Wrong, I don't post here much but I've been involved in the Bay Area Less Wrong community for several years, where many of you know me from. The following is a white paper I wrote earlier this year for my firm, RHS Financial, a San Francisco based private wealth management practice. A few months ago I presented it at a South Bay Less Wrong meetup. Since then many of you have encouraged me to post it here for the rest of the community to see. The original can be found here, please refer to the disclosures, especially if you are the SEC. I have added an afterword here beneath the citations to address some criticisms I have encountered since writing it. As a company white paper intended for a general audience, please forgive me if the following is a little too self-promoting or spends too much time on grounds already well-tread here, but I think many of you will find it of value. Hope you enjoy!

Executive Summary: Capital markets have created enormous amounts of wealth for the world and reward disciplined, long-term investors for their contribution to the productive capacity of the economy. Most individuals would do well to invest most of their wealth in the capital market assets, particularly equities. Most investors, however, consistently make poor investment decisions as a result of a poor theoretical understanding of financial markets as well as cognitive and emotional biases, leading to inferior investment returns and inefficient allocation of capital. Using an empirically rigorous approach, a rational investor may reasonably expect to exploit inefficiencies in the market and earn excess returns in so doing.

Most people understand that they need to save money for their future, and surveys consistently find a large majority of Americans expressing a desire to save and invest more than they currently are. Yet the savings rate and percentage of people who report owning stocks has trended down in recent years,1 despite the increasing ease with which individuals can participate in financial markets, thanks to the spread of discount brokers and employer 401(k) plans. Part of the reason for this is likely the unrealistically pessimistic expectations of would-be investors. According to a recent poll barely one third of Americans consider equities to be a good way to build wealth over time.2 The verdict of history, however, is against the skeptics.


The Greatest Deal of all Time


Equity ownership is probably the easiest, most powerful means of accumulating wealth over time, and people regularly forego millions of dollars over the course of their lifetimes letting their wealth sit in cash. Since its inception in 1926, the annualized total return on the S&P 500 has been 9.8% as of the end of 2012.3 $1 invested back then would be worth $3,533 by the end of the period. More saliently, a 25 year old investor investing $5,000 per year at that rate would have about $2.1 million upon retirement at 65.


The strong performance of stock markets is robust to different times and places. Though the most accurate data on the US stock market goes back to 1926, financial historians have gathered information going back to 1802 and find the average annualized real return in earlier periods is remarkably close to the more recent official records. Looking at rolling 30 year returns between 1802 and 2006, the lowest and highest annualized real returns have been 2.6% and 10.6%, respectively.4 The United States is not unique in its experience, either. In a massive study of the sixteen countries that had data on local stock, bond, and cash returns available for every year of the twentieth century, the stock market in every one had significant, positive real returns that exceeded those of cash and fixed income alternatives.5 The historical returns of US stocks only slightly exceed those of the global average.


The opportunity cost of not holding stocks is enormous. Historically the interest earned on cash equivalent investments like savings accounts has barely kept up with inflation - over the same since-1926 period inflation has averaged 3.0% while the return on 30-day treasury bills (a good proxy for bank savings rates) has been 3.5%.6 That 3.5% rate would only earn an investor $422k over the same $5k/year scenario above. The situation today is even worse. Most banks are currently paying about 0.05% on savings.


Similarly, investment grade bonds, such as those issued by the US Treasury and highly rated corporations, though often an important component of a diversified portfolio, have offered returns only modestly better than cash over the long run. The average return on 10-year treasury bonds has been 5.1%,7 earning an investor $619k over the same 40 year scenario. The yield on the 10-year treasury is currently about 3%.
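The dollar figures in the last three paragraphs all come from the standard future value of an ordinary annuity, FV = P((1+r)^n - 1)/r. A quick sketch (assuming constant annual returns and end-of-year contributions, which is how the scenario above appears to be computed):

```python
def annuity_fv(payment: float, rate: float, years: int) -> float:
    """Future value of a fixed end-of-year contribution compounded annually."""
    return payment * ((1 + rate) ** years - 1) / rate

# $5,000/year from age 25 to 65 at the historical rates quoted above:
for label, r in [("stocks (9.8%)", 0.098),
                 ("T-bills (3.5%)", 0.035),
                 ("10-yr Treasuries (5.1%)", 0.051)]:
    print(f"{label}: ${annuity_fv(5000, r, 40):,.0f}")
```

The three results land at roughly $2.1 million, $422 thousand, and $619 thousand, matching the figures in the text to within rounding of the rates.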


Homeownership has long been a part of the American dream, and many have been taught that building equity in your home is the safest and most prudent way to save for the future. The fact of the matter, however, is that residential housing is more of a consumption good than an investment. Over the last century the value of houses has barely kept up with inflation,8 and as the recent mortgage crisis demonstrated, home prices can crash just like any other market.


In virtually every time and place we look, equities are the best performing asset available, a fact which is consistent with the economic theory that risky assets must offer a premium to their investors to compensate them for the additional uncertainty they bear. What has puzzled economists for decades is why the so-called equity risk premium is so large and why so many individuals invest so little in stocks.9


Your Own Worst Enemy


Recent insights from multidisciplinary approaches in cognitive science have shed light on the issue, demonstrating that instead of rationally optimizing between various trade-offs, human beings regularly rely on heuristics - mental shortcuts that require little cognitive effort - when making decisions.10 These heuristics lead to taking biased approaches to problems that deviate from optimal decision making in systematic and predictable ways. Such biases affect financial decisions in a large number of ways, one of the most profound and pervasive being the tendency of myopic loss aversion.


Myopic loss aversion refers to the combined result of two observed regularities in the way people think: that losses feel bad to a greater extent than equivalent gains feel good, and that people rely too heavily (anchor) on recent and readily available information.11 Taken together, it is easy to see how these mental errors could bias an individual against holding stocks. Though the historical and expected return on equities greatly exceeds those of bonds and cash, over short time horizons they can suffer significant losses. And while the loss of one’s home equity is generally a nebulous abstraction that may not manifest itself consciously for years, stock market losses are highly visible, drawing attention to themselves in brokerage statements and newspaper headlines. Not surprisingly, then, an all too common pattern among investors is to start investing at a time when the headlines are replete with stories of the riches being made in markets, only to suffer a pullback and quickly sell out at ten, twenty, thirty plus percent losses and sit on cash for years until the next bull market is again near its peak in a vicious circle of capital destruction. Indeed, in the 20 year period ending 2012, the S&P 500 returned 8.2% and investment grade bonds returned 6.3% annualized. The inflation rate was 2.5%, and the average retail investor earned an annualized rate of 2.3%.12


Even when investors can overcome their myopic loss aversion and stay in the stock market for the long haul, investment success is far from assured. The methods by which investors choose which stocks or stock managers to buy, hold, and sell are also subject to a host of biases which consistently lead to suboptimal investing and performance. Chief among these is overconfidence, the belief that one’s judgements and skills are reliably superior.


Overconfidence is endemic to the human experience. The vast majority of people think of themselves as more intelligent, attractive, and competent than most of their peers,13 even in the face of proof to the contrary. 93% of people consider themselves to be above-average drivers,14 for example, and that percentage decreases only slightly if you ask people to evaluate their driving skill after being admitted to a hospital following a traffic accident.15 Similarly, most investors are confident they can consistently beat the market. One survey found 74% of mutual fund investors believed the funds they held would “consistently beat the S&P 500 every year” in spite of the statistical reality that more than half of US stock funds underperform in a given year and virtually none will outperform it each and every year. Many investors will even report having beaten the index despite having verifiably underperformed it by several percentage points.16


Overconfidence leads investors to take outsized bets on what they know and are familiar with. Investors around the world commonly hold 80% or more of their portfolios in investments from their own country,17 and one third of 401(k) assets are invested in participants’ own employer’s stock.18 Such concentrated portfolios are demonstrably riskier than a broadly diversified portfolio, yet investors regularly evaluate their investments as less risky than the general market, even if their securities had recently lost significantly more than the overall market.


If an investor believes himself to possess superior talent in selecting investments, he is likely to trade more as a result in an attempt to capitalize on each new opportunity that presents itself. In this endeavor, the harder investors try, the worse they do. In one major study, the quintile of investors who traded the most over a five year period earned an average annualized 7.1 percentage points less than the quintile that traded the least.19


The Folly of Wall Street


Relying on experts does little to help. Wall Street employs an army of analysts to follow every move of all the major companies traded on the market, predicting their earnings and their expected performance relative to peers, but on the whole they are about as effective as a strategy of throwing darts. Burton Malkiel explains in his book A Random Walk Down Wall Street how he tracked the one and five year earnings forecasts on companies in the S&P 500 from analysts at 19 Wall Street firms and found that in aggregate the estimates had no more predictive power than if you had just assumed a given company’s earnings would grow at the same rate as the long-term average rate of growth in the economy. This is consistent with a much broader body of literature demonstrating that the predictions of statistical prediction rules - formulas that make predictions based on simple statistical rules - reliably outperform those of human experts. Statistical prediction rules have been used to predict the auction price of Bordeaux better than expert wine tasters,20 marital happiness better than marriage counselors,21 academic performance better than admissions officers,22 criminal recidivism better than criminologists,23 and bankruptcy better than loan officers,24 to name just a few examples. This is an incredible finding that’s difficult to overstate. When considering complex issues such as these our natural intuition is to trust experts who can carefully weigh all the relevant information in determining the best course of action. But in reality experts are simply humans who have had more time to reinforce their preconceived notions on a particular topic and are more likely to anchor their attention on items that only introduce statistical noise.
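To illustrate what a statistical prediction rule actually is, here is a toy linear rule. The features echo the wine example, but the weights are invented for this sketch; they are not the coefficients from any of the studies cited:

```python
# Hypothetical statistical prediction rule (invented weights, for
# illustration only -- not the actual regression from the Bordeaux study).
def predict_quality(winter_rain_mm: float, growing_temp_c: float,
                    harvest_rain_mm: float) -> float:
    """A fixed linear formula applied mechanically, with no expert judgment."""
    return (0.001 * winter_rain_mm
            + 0.06 * growing_temp_c
            - 0.004 * harvest_rain_mm)

# The rule's virtue is consistency: identical inputs always yield identical
# outputs, with nothing analogous to an expert anchoring on noise.
print(predict_quality(600, 17.5, 100))  # wet winter, warm summer, dry harvest
print(predict_quality(300, 15.0, 250))  # the reverse: scores lower
```

The point of such rules is not that linear formulas are clever, but that they weigh the same few relevant variables the same way every time, which is exactly what human experts fail to do.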


Back in the world of finance, it turns out that to a first approximation the best estimate of the return to expect from a given stock is the long-run historical average of the stock market, and the best estimate of the return to expect from a stock picking mutual fund is the long-run historical average of the stock market minus its fees. The active stock pickers who manage mutual funds have on the whole demonstrated little ability to outperform the market. To be sure, at any given time there are plenty of managers who have recently beaten the market smartly, and if you look around you will even find a few with records that have been terrific over ten years or more. But just as a coin-flipping contest between thousands of contestants would no doubt yield a few who had uncannily “called it” a dozen or more times in a row, the number of market beating mutual fund managers is no greater than what you should expect as a result of pure luck.25


Expert and amateur investors alike underestimate how competitive the capital markets are. News is readily available and quickly acted upon, and any fact you know that you think gives you an edge is probably already a value in the cells of thousands of spreadsheets of analysts trading billions of dollars. Professor of Finance at Yale and Nobel Laureate Robert Shiller makes this point in a lecture using an example of a hypothetical drug company that announces it has received FDA approval to market a new drug:


Suppose you then, the next day, read in The Wall Street Journal about this new announcement. Do you think you have any chance of beating the market by trading on it? I mean, you're like twenty-four hours late, but I hear people tell me — I hear, "I read in Business Week that there was a new announcement, so I'm thinking of buying." I say, "Well, Business Week — that information is probably a week old." Even other people will talk about trading on information that's years old, so you kind of think that maybe these people are naïve. First of all, you're not a drug company expert or whatever it is that's needed. Secondly, you don't know the math — you don't know how to calculate present values, probably. Thirdly, you're a month late. You get the impression that a lot of people shouldn't be trying to beat the market. You might say, to a first approximation, the market has it all right so don't even try.26


In that last sentence Shiller hints at one of the most profound and powerful ideas in finance: the efficient market hypothesis. Its core claim is that when news that impacts the value of a company is released, the stock's price adjusts instantly to account for the new information, bringing it back to an equilibrium where the stock is no longer a “good” or “bad” investment but simply a fair one for its risk level. Because news is by definition unpredictable, it is then impossible to reliably outperform the market as a whole, and the seemingly ingenious investors on the latest cover of Forbes or Fortune are simply lucky.


A Noble Lie


In the 50s, 60s, and 70s several economists who would go on to win Nobel prizes worked out the implications of the efficient market hypothesis and created a new intellectual framework known as modern portfolio theory.27 The upshot is that capital markets reward investors for taking risk: the more risk you take, the higher your return should be in expectation (it might not turn out that way, which is precisely what makes it risky). But the market doesn’t reward unnecessary risk, such as taking out a second mortgage to invest in your friend’s hot dog stand. It only rewards systematic risk, the risk that comes from being exposed to the vagaries of the entire economy, such as interest rates, inflation, and productivity growth.28 Stocks of small companies are riskier and have a higher expected return than stocks of large companies, which are riskier than corporate bonds, which are riskier than Treasury bonds. But owning one small-cap stock doesn’t offer a higher expected return than another small-cap stock, or a portfolio of hundreds of small caps for that matter. Concentrating in a particular stock merely exposes you to the idiosyncratic risks that particular company faces, and for those you are not compensated. By diversifying across as many securities as possible, you can reduce the volatility of your portfolio without lowering its expected return.
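The arithmetic behind this last point is worth seeing. The sketch below uses made-up numbers (identical 8% expected returns, 30% volatilities, and 0.2 pairwise correlations for every stock): an equal-weighted portfolio's expected return is unchanged as stocks are added, while its volatility falls toward the systematic floor set by the correlation.

```python
import math

# Hypothetical inputs: every stock has the same expected return,
# volatility, and pairwise correlation with every other stock.
mu, sigma, rho = 0.08, 0.30, 0.2

def portfolio_vol(n):
    # Equal-weighted portfolio variance:
    #   sigma_p^2 = sigma^2 / n + (1 - 1/n) * rho * sigma^2
    # The first term (idiosyncratic risk) vanishes as n grows;
    # the second term (systematic risk) does not.
    var = sigma**2 / n + (1 - 1/n) * rho * sigma**2
    return math.sqrt(var)

for n in [1, 10, 100, 1000]:
    print(f"{n:4d} stocks: expected return {mu:.1%}, volatility {portfolio_vol(n):.1%}")
```

The expected return stays at 8% throughout, but volatility drops steeply with the first handful of holdings and asymptotes at the undiversifiable level, here sqrt(0.2) × 30% ≈ 13.4%.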


This approach to investing dictates that you should determine an acceptable level of risk for your portfolio, then buy the largest basket of securities possible that targets that risk, ideally while paying the least amount possible in fees. Academic activism in favor of this passive approach gained momentum through the 70s, culminating in the launch of the first commercially available index fund in 1976, offered by The Vanguard Group. The typical index fund seeks to replicate the overall market performance of a broad class of investments such as large US stocks by owning all the securities in that market in proportion to their market weights. Thus if XYZ stock makes up 2% of the value of the relevant asset class, the index fund will allocate 2% of its funds to that stock. Because index funds only seek to replicate the market instead of beating it, they save costs on research and management teams and pass the savings along to investors through lower fees.
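The weighting rule described above is simple enough to sketch directly; the tickers and market caps below are hypothetical.

```python
# Cap-weighted indexing sketch: each holding gets the same share of the
# fund as it has of the total market value of the asset class.
market_caps = {"XYZ": 20e9, "ABC": 50e9, "DEF": 30e9}  # hypothetical, in dollars
fund_assets = 1_000_000  # dollars the fund has to invest

total = sum(market_caps.values())
allocation = {t: fund_assets * cap / total for t, cap in market_caps.items()}

for t, dollars in allocation.items():
    print(f"{t}: {market_caps[t] / total:.0%} of market -> ${dollars:,.0f}")
```

Here XYZ is 20% of the asset class's total value, so the fund puts 20% of its assets ($200,000) into XYZ; no analyst judgment is involved, which is where the cost savings come from.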


Index funds were originally derided and attracted little investment, but years of passionate advocacy by popularizers such as Jack Bogle and Burton Malkiel as well as the consensus of the economics profession has helped to lift them into the mainstream. Index funds now command trillions of dollars of assets and cover every segment of the market in stocks, bonds, and alternative assets in the US and abroad. In 2003 Vanguard launched its target retirement funds, which took the logic of passive investing even further by providing a single fund that would automatically shift from more aggressive to more conservative index investments as its investors approached retirement. Target retirement funds have since become especially popular options in 401(k) plans.


The rise of index investing has been a boon to individual investors, who have clearly benefited from the lower fees and greater diversification index funds offer. To the extent that investors have bought into passive investing over market timing and active security selection, they have collectively saved themselves a fortune by not giving in to their value-destroying biases. For all the good index funds have done since their birth in the 70s, though, the intellectual foundation upon which they stand, the efficient market hypothesis, has been all but disproved.


The EMH is now the noble lie of the economics profession: economists usually teach their students and the public that the capital markets are efficient and unbeatable, yet their own research over the last few decades has shown otherwise. In a telling example, Paul Samuelson, who helped originate the EMH and advocated it in his best-selling textbook, was a large, early investor in Berkshire Hathaway, Warren Buffett’s active investment holding company.29 But real people regularly ruin their lives through sloppy investing, and for them perhaps it is better just to say that beating the market can’t be done, so buy, hold, and forget about it. We, on the other hand, believe a more nuanced understanding of the facts can be helpful.


Premium Investing


Shortly after the efficient market hypothesis was first put forth, researchers realized the idea had serious theoretical shortcomings.30 Beginning as early as 1977 they also found empirical “anomalies”: factors other than systematic risk that seemed to predict returns.31 Most of the early findings focused on valuation ratios, measures of a firm’s market price in relation to an accounting measure such as book value or earnings, and found that “cheap” stocks on average outperformed “expensive” stocks, confirming the value investment philosophy first promulgated by the legendary Depression-era investor Benjamin Graham and popularized by his most famous student, Warren Buffett. In 1992 Eugene Fama, one of the fathers of the efficient market hypothesis, published, along with Ken French, a groundbreaking paper demonstrating that the cheapest decile of US stocks, as measured by the price-to-book ratio, outperformed the most expensive decile by an astounding 11.9% per year, despite there being little difference in risk between them.32


A year later, researchers found convincing evidence of a momentum anomaly in US stocks: stocks that had the highest performance over the last 3-12 months continued to outperform relative to those with the lowest performance. The effect size was comparable to that of the value anomaly and again the discrepancy could not be explained with any conventional measure of risk.33


Since then, researchers have replicated the value and momentum effects across larger and deeper datasets, finding comparably large effect sizes in different times, regions, and asset classes. In a highly ambitious 2012 paper, Clifford Asness (a former student of Fama’s), Tobias Moskowitz, and Lasse Pedersen documented the significance of value and momentum across 18 national equity markets, 10 currencies, 10 government bonds, and 27 commodity futures.


Though value and momentum are the most pervasive and best-documented of the market anomalies, many others have been discovered across the capital markets. These include the small-cap premium34 (small company stocks tend to outperform large company stocks even in excess of what should be expected by their risk), the liquidity premium35 (less frequently traded securities tend to outperform more frequently traded securities), short-term reversal36 (equities with the lowest one-week to one-month performance tend to outperform over short time horizons), carry37 (high-yielding currencies tend to appreciate against low-yielding currencies), roll yield38,39 (bonds and futures at steeply negatively sloped points along the yield curve tend to outperform those at flatter or positively sloped points), profitability40 (equities of firms with higher proportions of profits over assets or equity tend to outperform those with lower profitability), calendar effects41 (stocks tend to have stronger returns in January and weaker returns on Mondays), and corporate action premia42 (securities of corporations that will, currently are, or have recently engaged in mergers, acquisitions, spin-offs, and other events tend to consistently under- or outperform relative to what would be expected by their risk).


Most of these market anomalies appear remarkably robust compared to findings in other social sciences,43 especially considering that they seem to imply trillions of dollars of easy money is being overlooked in plain sight. Intelligent observers often question how such inefficiencies could possibly persist in the face of such strong incentives to exploit them until they disappear. Several explanations have been put forth, some of which are conflicting but which all probably have some explanatory power.


The first interpretation of the anomalies is to deny that they are anomalous at all: the premiums are instead compensation for risk that isn’t captured by the standard asset pricing models. This is the view of Eugene Fama, who postulated that the value premium was compensation for assuming risk of financial distress and bankruptcy not fully captured by simply measuring the standard deviation of a value stock’s returns.44 Subsequent research, however, showed that exposure to financial distress does not explain the value effect.45 More sophisticated arguments point to the fact that the excess returns of value, momentum, and many other premiums exhibit greater skewness, kurtosis, or other higher statistical moments than the broad market: subtle statistical indications of greater risk, but the differences hardly seem large enough to justify the large return premiums observed.46


The only sense in which, e.g., value and momentum stocks seem genuinely “riskier” is career risk: though the factor premiums are significant and robust in the long term, they are not consistent or predictable over short time horizons. Reaping their rewards requires patience, and an analyst or portfolio manager who recommends an investment to his clients based on these factors may end up waiting years before it pays off, typically more than enough time to be fired.47 Though any investment strategy is bound to underperform at times, strategies that seek to exploit the factors most predictive of excess returns are especially susceptible to reputational hazard. Value stocks tend to be from unpopular companies in boring, slow-growth industries. Momentum stocks are often from unproven companies with uncertain prospects, or from fallen angels that have only recently experienced a turn of luck. Conversely, stocks that score low on value and momentum factors are typically reputable companies with popular products that are growing rapidly and forging new industry standards in their wake.


Consider, then, two companies in the same industry: Ol’Timer Industries, which has been around for decades and is consistently profitable but whose product lines are increasingly considered uncool and outdated. Recent attempts by the firm’s new CEO to revamp the company’s image have had modest success, but consumers and industry experts expect this merely to delay further inevitable loss of market share to NuTime.ly, founded eight years ago and posting exponential revenue growth and rapid adoption by the coveted 18-35 year old demographic, who typically describe its products using a wide selection of contemporary idioms and slang indicating superior social status and functionality. Ol’Timer Industries’ stock will likely score highly on value and momentum factors relative to NuTime.ly and so have a higher expected return. But consider the incentives of the investment professional choosing between the two: if he chooses Ol’Timer and it outperforms, he may be congratulated and rewarded, perhaps slightly more than if he had chosen NuTime.ly and it had outperformed; but if he chooses Ol’Timer and it underperforms, he is a fool and a laughingstock who wasted clients’ money on his pet theory when “everyone knew” NuTime.ly was going to win. At least if he chooses NuTime.ly and it underperforms, it was a fluke that none of his peers saw coming, save for a few wingnuts who keep yammering about the arcane theories of Gene Fama and Benjamin Graham.


For most investors, “it is better for reputation to fail conventionally than to succeed unconventionally” as John Maynard Keynes observed in his General Theory. Not that this is at all restricted to investors, professional or amateur. In a similar vein, professional soccer goalkeepers continue to jump left or right on penalty kicks when statistics show they’d block more shots standing still.48 But standing in place while the ball soars into the upper right corner makes the goalkeeper look incompetent. The proclivity of middle managers and bureaucrats to default to uncontroversial decisions formed by groupthink is familiar enough to be the stuff of popular culture; nobody ever got fired for buying IBM, as the saying goes. Psychological experiments have shown that people will often affirm an obviously false observation about simple facts such as the relative lengths of straight lines on a board if others have affirmed it before them.49


This brings us back to the nature of human thinking and the biases and other cognitive errors that afflict it, which is what most interpretations of the market anomalies focus on. Both amateur and professional investors are human beings, apt to make investment decisions not through a methodical application of modern portfolio theory but based on stories, anecdotes, hunches, and ideologies. Most of the anomalies make sense in light of an understanding of some of the most common biases, such as anchoring, availability bias, status quo bias, and herd behavior.50 Rational investors seeking to exploit these inefficiencies may be able to do so to a limited extent, but if they are using other people’s money then they are constrained by the biases of their clients. The more aggressively they attempt to exploit market inefficiencies, the more they risk badly underperforming the market long enough to suffer devastating withdrawals of capital.51


It is no surprise, then, that the most successful investors have found ways to rely on “sticky” capital unlikely to slip out of their control at the worst time. Warren Buffett invests the float of his insurance company holdings, which behaves in actuarially predictable ways; David Swensen manages the Yale endowment fund, which has an explicitly indefinite time horizon and a rules-based spending rate; Renaissance Technologies, arguably the most successful hedge fund ever, invests only its own money; Dimensional Fund Advisors, one of the only mutual fund companies that has consistently earned excess returns through factor premiums, sells only through independent financial advisors who undergo a due diligence process to ensure they share similar investment philosophies.


Building a Better Portfolio


So what is an investor to do? The prospect of delicately crafting a portfolio that is adequately diversified while taking advantage of return premiums may seem daunting, and you may be tempted to simply buy a Vanguard target retirement fund appropriate for your age and be done with it. Doing so is certainly a reasonable option. But we believe that with a disciplined investment strategy informed by the findings discussed above, superior results are possible.


The first place to start is an assessment of your risk tolerance. How far can your portfolio fall before it adversely affects your quality of life? For investors saving for retirement with many more years of work ahead of them, the answer will likely be “quite a lot.” With ten years or more to work with, your portfolio will likely recover from even the most extreme bear markets. But people do not naturally think in ten-year increments, and many must live off their portfolio principal; accept that in the short term your portfolio will sometimes be in the red, and consider what percentage decline over a period of a few months to a year you are comfortable enduring. Over a one-year period, the “worst case scenario” for diversified stock portfolios has historically been about a 40% decline; for a traditional “moderate” portfolio of 60% stocks and 40% bonds, it has been about a 25% decline.52


With a target for how much risk to accept in your portfolio, modern portfolio theory gives us a technique for achieving the most efficient tradeoff possible between risk and return, called mean-variance optimization. An adequate treatment of MVO is beyond the scope of this paper,53 but essentially the task is to forecast expected returns on the major asset classes (e.g. US stocks, international stocks, and investment-grade bonds), then compute the weights for each that achieve the highest expected return for a given amount of risk. We use an approach to mean-variance optimization known as the Black-Litterman model54 and estimate expected returns using a limited number of simple inputs; for example, the expected return on an index of stocks can be closely approximated as the current dividend yield plus the long-run growth rate of the economy.55
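As a rough illustration of the idea (a brute-force sketch, not the Black-Litterman model itself), the code below finds the highest-expected-return mix of three asset classes subject to a volatility cap. All of the return, volatility, and correlation inputs are invented for the example; the stock returns follow the dividend-yield-plus-growth heuristic (e.g. a 2% yield plus 5% nominal growth gives 7%).

```python
import itertools
import math

# Hypothetical capital market assumptions for three asset classes.
assets = ["US Stocks", "Intl Stocks", "IG Bonds"]
exp_ret = [0.07, 0.075, 0.03]   # e.g. dividend yield + long-run growth
vols = [0.16, 0.18, 0.05]
corr = [[1.0, 0.85, 0.1],
        [0.85, 1.0, 0.1],
        [0.1, 0.1, 1.0]]

def port_stats(w):
    """Expected return and volatility of a weight vector w."""
    r = sum(wi * ri for wi, ri in zip(w, exp_ret))
    var = sum(w[i] * w[j] * corr[i][j] * vols[i] * vols[j]
              for i in range(3) for j in range(3))
    return r, math.sqrt(var)

# Grid-search the weight simplex for the best return under the risk cap.
target_vol = 0.10
best = None
steps = [i / 100 for i in range(101)]
for w1, w2 in itertools.product(steps, steps):
    w3 = 1 - w1 - w2
    if w3 < 0:
        continue
    r, v = port_stats((w1, w2, w3))
    if v <= target_vol and (best is None or r > best[0]):
        best = (r, v, (w1, w2, w3))

r, v, w = best
print("weights:", dict(zip(assets, w)), f"E[r]={r:.2%}, vol={v:.2%}")
```

A real optimizer would solve this in closed form or with quadratic programming, and Black-Litterman would blend these return estimates with equilibrium-implied ones, but the tradeoff being computed is the same.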


With optimal portfolio weights determined, the investor must next select the investment vehicles used to gain exposure to the various asset classes. Though traditional index funds are a reasonable option, in recent years several “enhanced index” mutual funds and ETFs have been released that provide inexpensive, broad exposure to the hundreds or thousands of securities in a given asset class while enhancing exposure to one or more of the major factor premiums discussed above, such as value, profitability, or momentum. Research Affiliates, for example, licenses a “fundamental index” that has been shown to provide efficient exposure to value and small-cap stocks across many markets.56 These “RAFI” indexes have been licensed to the asset management firms Charles Schwab and PowerShares to be made available through mutual funds and ETFs to the general investing public, and have generally outperformed their traditional index fund counterparts since inception.


Over time, portfolio allocations will drift from their optimized weights as particular asset classes inevitably outperform others. Left unchecked, this can lead to a portfolio that is no longer risk-return efficient. The investor must periodically rebalance the portfolio by selling securities that have become overweight and buying those that are underweight. Research suggests that by setting “tolerance bands” around target asset allocations, monitoring the portfolio frequently, and trading when weights drift outside tolerance, investors can take further advantage of inter-asset-class value and momentum effects, boosting return while reducing risk.57
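A minimal sketch of tolerance-band rebalancing (the 60/40 targets and five-point band are hypothetical choices, not a recommendation):

```python
# Tolerance-band rebalancing: trade only when an asset class drifts
# outside its band around the target weight.
targets = {"stocks": 0.60, "bonds": 0.40}
band = 0.05  # rebalance when a weight strays more than 5 points from target

def rebalance_if_needed(holdings):
    total = sum(holdings.values())
    weights = {k: v / total for k, v in holdings.items()}
    drifted = any(abs(weights[k] - targets[k]) > band for k in targets)
    if not drifted:
        return holdings  # inside the bands: do nothing, avoid trading costs
    # Outside the bands: sell overweight assets, buy underweight, back to target.
    return {k: total * targets[k] for k in targets}

# After a bull market in stocks the portfolio has drifted to 70/30:
print(rebalance_if_needed({"stocks": 70_000, "bonds": 30_000}))
```

Note the mechanism forces the investor to sell what has been rising and buy what has been lagging, which is exactly the behavior the next paragraph explains most people find so hard to do.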


Most investors, however, do not rebalance systematically, perhaps in part because it can be psychologically distressing. Rebalancing necessarily entails regularly selling assets that have been performing well in order to buy ones that have been laggards, exactly when your cognitive biases are most likely to tell you that it’s a bad idea. Indeed, neuroscientists have observed in laboratory experiments that when individuals consider the prospect of buying more of a risky asset that has lost them money, it activates the modules in the brain associated with anticipation of physical pain and anxiety.58 Dealing with investment losses is literally painful for investors.


Many investors may find it helpful to their peace of mind, as well as their portfolio, to outsource the entire process to a party with less emotional attachment to their portfolio. Realistically, most investors have neither the time nor the motivation to attain a firm understanding of modern portfolio theory, research the capital market expectations for various asset classes and securities, and regularly monitor and rebalance their portfolio, all with enough rigor to make it worth the effort compared to a simple indexing strategy. By engaging a good financial advisor, however, an investor can leverage the expertise of a professional with the bandwidth to execute these tactics in a cost-efficient manner.


A financial advisor should be able to engage you as an investor and acquire a firm understanding of your goals, needs, and attitudes towards risk, money, and markets. Because he or she will have an entire practice over which to efficiently dedicate time and resources on portfolio research, optimization, and trading, a financial advisor should be able to craft a portfolio optimized for your personal situation. Financial advisors, as institutional investors, generally have access to institutional-class funds that retail investors do not, including many of those that have demonstrated the greatest dedication to exploiting the factor premiums. Notably, DFA and AQR, the two fund families with the greatest academic support, are generally only available to individual investors through a financial advisor. Should your professionally managed portfolio provide a better risk-adjusted return than a comparable do-it-yourself index fund approach, the advisor’s fees have paid for themselves.


Furthermore, a good financial advisor will make sure your investments are tax-efficient and that you are making the most of tax-preferred accounts. Researchers have shown that after asset allocation, asset location, the strategic placement of investments in accounts with different tax treatment, is one of the most important factors in net portfolio returns,59 yet most individual investors largely ignore these effects.60 Advisory fees can generally be paid with pre-tax funds as well, further enhancing tax efficiency.


Invest with Purpose


There is something of a paradox involved in investing. Finance is a highly specialized and technical field, but money is a very personal and emotional topic. Achieving the joy and fulfillment associated with financial success requires a large measure of emotional detachment and impersonal pragmatism. Far too often people suffer great loss by confusing loyalties, aspirations, fears, and regrets with the efficient allocation of their portfolio assets. We as advisors hate to see this happen; there is nothing to celebrate about the needless destruction of capital, and it is truly a loss for us all. One of the greatest misconceptions about finance is that investing is a zero-sum game, that one trader’s gain is another’s loss. Nothing could be further from the truth. Economists have shown that one of the greatest predictors of a nation’s well-being is its financial development.61 The more liquid and active our capital markets, the greater our society’s capacity for innovation and progress. When you invest in the stock market, you contribute your share to the productive capacity of our world; your return is your reward for helping make it better, and outperformance is a sign that you have steered capital to those with the greatest use for it.

 

With the right accounts and investments in place and a process for managing them effectively, you the investor are freed to focus on what you are working and investing for, and an advisor can work with you to help get you there. Whether you want to travel the world, buy the house of your dreams, send your children to the best college, maximize your philanthropic giving, or simply retire early, an advisor can help you develop a financial plan to turn the dollars and cents of your portfolio into the life you want to live, building more health, wealth, and happiness for you, your loved ones, and the world.

 

Notes

 

1. “U.S. Stock Ownership Stays at Record Low,” Gallup.

2. “U.S. Investors Not Sold on Stock Market as Wealth Creator,” Gallup.

3. Data provided by Morningstar.

4. Siegel, Stocks for the Long Run, 5-25.

5. Dimson et al, Triumph of the Optimists.

6. Ibid. 3

7. Ibid

8. Shiller, “Understanding Recent Trends in House Prices and Home Ownership.”

9. Mankiw and Zeldes, for example, find that to justify the historical equity risk premium observed, investors would in aggregate need to be indifferent between a certain payoff of $51,209 and a 50/50 bet paying either $50,000 or $100,000. Mankiw and Zeldes, “The consumption of stockholders and nonstockholders,” 8.

10. For a highly readable introduction to the idea of cognitive biases, see Daniel Kahneman’s book Thinking, Fast and Slow. Kahneman has been a pioneer in the field and won the 2002 Nobel prize in economics for his work.

11. Benartzi and Thaler, “Myopic Loss Aversion and the Equity Premium Puzzle.”

12. “Guide to the Markets,” J.P. Morgan Asset Management

13. See, for example, Kruger and Dunning,  "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments" and Zuckerman and Jost,  "What Makes You Think You're So Popular? Self Evaluation Maintenance and the Subjective Side of the ‘Friendship Paradox’"

14. Svenson, “Are We All Less Risky and More Skillful than Our Fellow Drivers?”

15. Preston and Harris, “Psychology of Drivers in Traffic Accidents.”

16. Zweig, Your Money and Your Brain. 88-91.

17. French and Poterba, “Investor Diversification and International Equity Markets.”

18. Ibid. 14. p. 98-99.

19. Barber and Odean, “Trading is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors.”

20. Ashenfelter et al, “Predicting the Quality and Prices of Bordeaux Wine.”

21. Thornton, "Toward a Linear Prediction of Marital Happiness."

22. Swets et al, "Psychological Science Can Improve Diagnostic Decisions."

23. Carroll et al, "Evaluation, Diagnosis, and Prediction in Parole Decision-Making."

24. Stillwell et al, "Evaluating Credit Applications: A Validation of Multiattribute Utility Weight Elicitation Techniques"

25. See Fama and French, “Luck versus Skill in the Cross-Section of Mutual Fund Returns.” They do find modest evidence of skill at the right tail end of the distribution under the capital asset pricing model. After controlling for the value, size, and momentum factor premiums (discussed below), however, evidence of net-of-fee skill is not significantly different than zero.

26. Shiller, “Efficient Markets vs. Excess Volatility.”

27. Professor Goetzmann of the Yale School of Management has a introductory hyper-text textbook on modern portfolio theory available on his website, “An Introduction to Investment Theory.”

28. In the language of modern portfolio theory this risk is known at a security’s beta. Mathematically it is the covariance of the security’s returns with the market’s returns, divided by the variance of the market’s returns.
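For illustration, beta can be computed directly from this definition; the monthly return series below are invented for the example.

```python
import statistics

# Hypothetical monthly returns for a stock and for the market.
market = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]
stock  = [0.03, -0.02, 0.05, 0.01, -0.03, 0.06]

def beta(asset, mkt):
    # beta = Cov(asset, market) / Var(market)
    mean_a, mean_m = statistics.mean(asset), statistics.mean(mkt)
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset, mkt)) / (len(mkt) - 1)
    var_m = statistics.variance(mkt)  # sample variance, matching cov above
    return cov / var_m

print(f"beta = {beta(stock, market):.2f}")
```

A beta above 1 means the security amplifies market moves; below 1, it dampens them.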

29. Setton, “The Berkshire Bunch.”

30. For example, Grossman and Stiglitz prove in “On the Impossibility of Informationally Efficient Markets” that market efficiency cannot be an equilibrium because without excess returns there is no incentive for arbitrageurs to correct mispricings. More recently, Markowitz, one of fathers of modern portfolio theory, showed in “Market Efficiency: A Theoretical Distinction and So What” that if a couple key assumptions of MPT are relaxed, the market portfolio is no longer optimal for most investors.

31. Basu, “Investment Performance of Common Stocks in Relation to their Price-Earnings Ratios: A Test of the Efficient Market Hypothesis.”

32. Fama and French, “The Cross-Section of Expected Stock Returns.”

33. Jegadeesh and Titman, “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency”

34. Ibid. 31.

35. Pastor and Stambaugh, “Liquidity Risk and Expected Stock Returns.”

36. Jegadeesh, “Evidence of Predictable Behavior or Security Returns.”

37. Froot and Thaler, “Anomalies: Foreign Exchange.”

38. Campbell and Shiller, “Yield Spreads and Interest Rate Movements: A Bird’s Eye View.”

39. Erb and Harvey, “The Tactical and Strategic Value of Commodity Futures.”

40. Novy-Marx, “The Other Side of Value: The Gross Profitability Premium.”

41. Thaler, “Seasonal Movements in Security Prices.”

42. Mitchell and Pulvino, “Characteristics of Risk and Return in Risk Arbitrage.”

43. See McLean and Pontiff, “Does Academic Research Destroy Stock Return Predictability?” A meta analysis of 82 equity return factors was able to replicate 72 using out of sample data.

44. Fama and French, “Size and Book-to-Market Factors in Earnings and Returns.”

45. Daniel and Titman, “Evidence on the Characteristics of Cross Sectional Variation in Stock Returns.”

46. Hwang and Rubesam, “Is Value Really Riskier than Growth?”

47. Numerous investor profiles have expounded on the difficulty of being a rational investor in an irrational market. In a recent article in Institutional Investor, Asness and Liew give a highly readable overview of the risk vs. mispricing debate and discuss the problems they encountered launching a value-oriented hedge fund in the middle of the dot-com bubble.

48. Bar-Eli et al., “Action Bias Among Elite Soccer Goalkeepers: The Case of Penalty Kicks,” Journal of Economic Psychology.

49. Asch, “Opinions and Social Pressure.”

50. Daniel et al provides one of the most thorough theoretical discussions on how certain common cognitive biases can result in systematically biased security prices in “Investor Psychology and Security Market Under- and Overreaction.”

51. Schleifer and Vishny, “The Limits of Arbitrage.”

52. Data provided by Vanguard.

53. Chapter 2 of Goetzmann’s “An Introduction to Investment Theory” provides an introductory discussion.

54. The Black-Litterman model allows investors to combine their estimates of expected returns with equilibrium implied returns in a Bayesian framework that largely overcomes the input-sensitivity problems associated with traditional mean-variance optimization. Idzorek offers a thorough introduction in “A Step-By-Step Guide to the Black-Litterman Model.”

55. Ilmanen’s “Expected Returns on Major Asset Classes” provides a detailed explanation of the theory and evidence of forecasting expected returns.

56. Walkshausl and Lobe, “Fundamental Indexing Around the World.”

57. Buetow et al, “The Benefits of Rebalancing.”

58. Kuhnen and Knutson, “The Neural Basis of Financial Risk Taking.”

59. Dammon et al, “Optimal Asset Location and Allocation with Taxable and Tax-Deferred Investing.”

60. Bodie and Crane, “Personal Investing: Advice, Theory, and Evidence from a Survey of TIAA-CREF Participants.”

61. Yongseok Shin of the Federal Reserve provides a brief review of the literature on this research in “Financial Markets: An Engine for Economic Growth.”

 

 

Works Cited

 

 

Asch, Solomon E. "Opinions and Social Pressure." Scientific American 193, no. 5 (12 1955).

Ashenfelter, Orley. "Predicting the Quality and Prices of Bordeaux Wine*." The Economic Journal 118, no. 529 (12 2008).

Asness, Clifford and Liew, John. “The Great Divide over Market Efficiency.” Institutional Investor, March 3, 2014.

Asness, Clifford, Moskowitz, Tobias, and Pedersen, Lasse. “Value and Momentum Everywhere.” The Journal of Finance 68, no. 3 (6, 2013).

Bar-Eli, Michael, Ofer H. Azar, Ilana Ritov, Yael Keidar-Levin, and Galit Schein. "Action Bias among Elite Soccer Goalkeepers: The Case of Penalty Kicks." Journal of Economic Psychology 28, no. 5 (12 2007).

Barber, Brad M., and Terrance Odean. "Trading Is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors." The Journal of Finance 55, no. 2 (12 2000).

Basu, S. "Investment Performance of Common Stocks in Relation to Their Price-Earnings Ratios: A Test of the Efficient Market Hypothesis."The Journal of Finance 32, no. 3 (12 1977).

Benartzi, S., and R. H. Thaler. "Myopic Loss Aversion and the Equity Premium Puzzle." The Quarterly Journal of Economics110, no. 1 (12, 1995).

Bodie, Zvi, and Dwight B. Crane. "Personal Investing: Advice, Theory, and Evidence." Financial Analysts Journal 53, no. 6 (12 1997).

Buetow, Gerald W., Ronald Sellers, Donald Trotter, Elaine Hunt, and Willie A. Whipple. "The Benefits of Rebalancing." The Journal of Portfolio Management 28, no. 2 (12 2002).

Campbell, John and Shiller, Robert. “Yield Spreads and Interest Rate Movements: A Bird’s Eye View.” The Econometrics of Financial Markets, 58 no. 3 (1991).

Carroll, John S., Richard L. Wiener, Dan Coates, Jolene Galegher, and James J. Alibrio. "Evaluation, Diagnosis, and Prediction in Parole Decision Making." Law & Society Review 17, no. 1 (12 1982).

Dammon, Robert M., Chester S. Spatt, and Harold H. Zhang. "Optimal Asset Location and Allocation with Taxable and Tax-Deferred Investing." The Journal of Finance 59, no. 3 (12 2004).

Daniel, Kent, and Sheridan Titman. "Evidence on the Characteristics of Cross Sectional Variation in Stock Returns." The Journal of Finance 52, no. 1 (12 1997).

Daniel, Kent, Hirshleifer, David, and Subrahmanyam, Avanidhar. “Investor Psychology and Security Market Under- and Overreactions.” The Journal of Finance, 53 no. 6 (1998).

Dimson, Elroy, Marsh, Paul, and Staunton, Mike. Triumph of the Optimists. Princeton: Princeton University Press, 2002.

Erb, Claude B., and Campbell R. Harvey. "The Strategic and Tactical Value of Commodity Futures." CFA Digest 36, no. 3 (12 2006).

Fama, Eugene F., and Kenneth R. French. "The Cross-Section of Expected Stock Returns." The Journal of Finance 47, no. 2 (12 1992).

Fama, Eugene F., and Kenneth R. French. "Luck versus Skill in the Cross-Section of Mutual Fund Returns." The Journal of Finance 65, no. 5 (12 2010).

Fama, Eugene F., and Kenneth R. French. "Size and Book-to-Market Factors in Earnings and Returns." The Journal of Finance 50, no. 1 (12 1995).

French, Kenneth and Poterba, James. “Investor Diversification and International Equity Markets.” American Economic Review (1991).

Froot, Kenneth A., and Richard H. Thaler. "Anomalies: Foreign Exchange." Journal of Economic Perspectives 4, no. 3 (12 1990).

“Guide to the Markets.” J.P. Morgan Asset Management. 2014

Goetzmann, William. An Introduction to Investment Theory. Yale School of Management. Accessed April 09, 2014. http://viking.som.yale.edu/will/finman540/classnotes/notes.html

Grossman, Sanford and Stiglitz, Joseph. “On the Impossibility of Informationally Efficient Markets.” The American Economic Review 70, no. 3 (6, 1980).

Hwang, Soosung and Rubesam, Alexandre. “Is Value Really Riskier Than Growth? An Answer with Time-Varying Return Reversal.” Journal of Banking and Finance, 37 no. 7 (2013).

Idzorek, Thomas. “A Step-by-Step Guide to the Black-Litterman Model.” Ibbotson Associates (2005).

Ilmanen, Antti. “Expected Returns on Major Asset Classes.” Research Foundation of CFA Institute (2012).

Jegadeesh, Narasimhan, and Sheridan Titman. "Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency." The Journal of Finance 48, no. 1 (12 1993).

Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

Kruger, Justin, and David Dunning. "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments." Journal of Personality and Social Psychology 77, no. 6 (12 1999).

Kuhnen, Camelia M., and Brian Knutson. "The Neural Basis of Financial Risk Taking." Neuron 47, no. 5 (12 2005).

Malkiel, Burton. A Random Walk Down Wall Street: Time-Tested Strategies for Successful Investing (Tenth Edition). New York: W.W. Norton & Company, 2012.

Mankiw, N. Gregory, and Stephen P. Zeldes. "The Consumption of Stockholders and Nonstockholders." Journal of Financial Economics 29, no. 1 (12 1991).

Markowitz, Harry M. "Market Efficiency: A Theoretical Distinction and So What?" Financial Analysts Journal 61, no. 5 (12 2005).

McLean, David and Pontiff, Jeffrey. “Does Academic Research Destroy Stock Return Predictability?” Working Paper, (2013).

Mitchell, Mark, and Todd Pulvino. "Characteristics of Risk and Return in Risk Arbitrage." The Journal of Finance 56, no. 6 (12 2001).

Novy-Marx, Robert. "The Other Side of Value: The Gross Profitability Premium." Journal of Financial Economics 108, no. 1 (12 2013).

Pastor, Lubos and Stambaugh, Robert. “Liquidity Risk and Expected Stock Returns.” The Journal of Political Economy, 111 no. 3 (6, 2003).

Preston, Caroline E., and Stanley Harris. "Psychology of Drivers in Traffic Accidents." Journal of Applied Psychology 49, no. 4 (12 1965).

Setton, Dolly. “The Berkshire Bunch.” Forbes, October 12, 1998.

Shleifer, Andrei, and Robert W. Vishny. "The Limits of Arbitrage." The Journal of Finance 52, no. 1 (12 1997).

Siegel, Jeremy J. Stocks for the Long Run: The Definitive Guide to Financial Market Returns and Long-term Investment Strategies (Fourth Edition). New York: McGraw-Hill, 2008.

Shiller, Robert. “Understanding Recent Trends in House Prices and Homeownership.” Housing, Housing Finance and Monetary Policy, Jackson Hole Conference Series, Federal Reserve Bank of Kansas City, 2008, pp. 85-123

Shiller, Robert. “Efficient Markets vs. Excess Volatility.” Yale. Accessed April 09, 2014. http://oyc.yale.edu/economics/econ-252-08/lecture-6

Shin, Yongseok. “Financial Markets: An Engine for Economic Growth.” The Regional Economist (July 2013).

Stillwell, William G., F. Hutton Barron, and Ward Edwards. "Evaluating Credit Applications: A Validation of Multiattribute Utility Weight Elicitation Techniques." Organizational Behavior and Human Performance 32, no. 1 (12 1983).

Svenson, Ola. "Are We All Less Risky and More Skillful than Our Fellow Drivers?" Acta Psychologica 47, no. 2 (12 1981).

Swets, J. A., R. M. Dawes, and J. Monahan. "Psychological Science Can Improve Diagnostic Decisions." Psychological Science in the Public Interest 1, no. 1 (12, 2000).

Thaler, Richard. "Anomalies: Seasonal Movements in Security Prices II: Weekend, Holiday, Turn of the Month, and Intraday Effects." Journal of Economic Perspectives 1, no. 2 (12 1987).

Thornton, B. "Toward a Linear Prediction Model of Marital Happiness." Personality and Social Psychology Bulletin 3, no. 4 (12, 1977).

"U.S. Stock Ownership Stays at Record Low." Gallup. Accessed April 09, 2014. http://www.gallup.com/poll/162353/stock-ownership-stays-record-low.aspx.

Walkshäusl, Christian, and Sebastian Lobe. "Fundamental Indexing around the World." Review of Financial Economics 19, no. 3 (12 2010).

Zuckerman, Ezra W., and John T. Jost. "What Makes You Think You're So Popular? Self-Evaluation Maintenance and the Subjective Side of the 'Friendship Paradox.'" Social Psychology Quarterly 64, no. 3 (12 2001).

 

Zweig, Jason. Your Money and Your Brain: How the New Science of Neuroeconomics Can Help Make You Rich. New York: Simon & Schuster, 2007.

 

Afterword/Acknowledgements

 

I wish to thank Romeo Stevens for the feedback and proofreading he provided for early drafts of this paper. You should go buy his MealSquares (just look how happy I look eating them there!).

 

If the section on statistical prediction rules sounded familiar it's probably because I stole all the examples from this Less Wrong article by lukeprog about them. After you're done giving this article karma you should go give that one some more.

 

After I made my South Bay meetup presentation Peter McCluskey wrote on the Bay Area LW mailing list that "Your paper's report of 'a massive study of the sixteen countries that had data on local stock, bond, and cash returns available for every year of the twentieth century' could be considered a study of survivorship bias, in that it uses criteria which exclude countries where stocks lost 100% at some point (Russia, Poland, China, Hungary)." This is a good point and is worth addressing, which some researchers have done in recent years. Dimson, Marsh, and Staunton (2006) find that the surviving markets of the 20th century I cite in my paper dominated the global market capitalization in 1900 and the effect of national stock-market implosions was mostly negligible on worldwide averages. Peter did go on to say that "I don't know of better advice for the average person than to invest in equities, and I have most of my wealth in equities..." so I think we're mostly on the same page at least in terms of practical advice.

 

In a conversation with Alyssa Vance she similarly expressed skepticism that the equity risk premium has been significantly greater than zero due to the fact that at some point in the 20th century most major economies experienced double-digit inflation and very high marginal rates of taxation on capital income. It is true that taxes and inflation significantly dilute an investor's return, and one would be foolish to ignore their effects. But while they may reduce the absolute attractiveness of equities, the effects of taxes and inflation actually make stocks look more attractive relative to the alternatives of bonds and cash investments. In the US and most jurisdictions, the dividends and capital gains earned on stocks are taxed at preferential rates relative to the interest earned on fixed income investments, which is typically taxed as ordinary income. Furthermore, the majority of individual investors hold a large fraction of their investments in tax-sheltered accounts (such as 401(k)s and IRAs in the US).

 

At my South Bay meetup presentation, Patrick LaVictoire (among others) expressed incredulity at my claim that retail investors have on average badly underperformed relevant benchmarks and that by implication institutional investors have outperformed. The source I cite in my paper is gated but there is plenty of research on actual investor performance. Morningstar regularly publishes info on how investors routinely underperform the mutual funds they invest in by buying into and selling out of them at the wrong times. Finding data on institutional investors is a little trickier but Busse, Goyal, and Wahal (2010) find that institutional investors managing e.g. pensions, foundations, and endowments on average outperform the broad US equity market in the US equity sleeve of their portfolios. (The language of that paper sounds much more pessimistic, with "alphas are statistically indistinguishable from zero" in the abstract. The key is that they are controlling for the size, value, and momentum effects discussed in my paper. In other words, once we account for the fact that institutional investors are taking advantage of the factor premiums that have been shown to most consistently outperform a simple index strategy, they aren't providing any extra value. This ties in with the idea of "shrinking alpha" or "smart beta" that is currently en vogue in my industry.)

 

I'm happy to address further questions and criticisms in the comments.

Moloch: optimisation, "and" vs "or", information, and sacrificial ems

20 Stuart_Armstrong 06 August 2014 03:57PM

Go read Yvain/Scott's Meditations On Moloch. It's one of the most beautiful, disturbing, poetic looks at the future that I've ever seen.

Go read it.

Don't worry, I can wait. I'm only a piece of text, my patience is infinite.

De-dum, de-dum.

You sure you've read it?

Ok, I believe you...

Really.

I hope you wouldn't deceive an innocent and trusting blog post? You wouldn't be a monster enough to abuse the trust of a being as defenceless as a constant string of ASCII symbols?

Of course not. So you'd have read that post before proceeding to the next paragraph, wouldn't you? Of course you would.

 

Academic Moloch

Ok, now to the point. The "Moloch" idea is very interesting, and, at the FHI, we may try to do some research in this area (naming it something more respectable/boring, of course, something like "how to avoid stable value-losing civilization attractors").

The project hasn't started yet, but a few caveats to the Moloch idea have already occurred to me. First of all, it's not obligatory for an optimisation process to trample everything we value into the mud. This is likely to happen with an AI's motivation, but it's not inevitable for optimisation processes in general.

One way of seeing this is the difference between "or" and "and". Take the democratic election optimisation process. It's clear, as Scott argues, that this optimises badly in some ways. It encourages appearance over substance, some types of corruption, etc... But it also optimises along some positive axes, with some clear, relatively stable differences between the parties which reflect some voters' preferences, and punishment for particularly inept behaviour from leaders (I might argue that the main benefit of democracy is not the final vote between the available options, but the filtering out of many pernicious options because they'd never be politically viable). The question is whether these two strands of optimisation can be traded off against each other, or if a minimum of each is required. So can we make a campaign that is purely appearance-based, without any substantive position ("or": maximum on one axis is enough), or do you need a minimum of substance and a minimum of appearance to buy off different constituencies ("and": you need some achievements on all axes)? And no, I'm not interested in discussing current political examples.

Another example Scott gave was of the capitalist optimisation process, and how it in theory matches customers' and producers' interests, but could go very wrong:

Suppose the coffee plantations discover a toxic pesticide that will increase their yield but make their customers sick. But their customers don't know about the pesticide, and the government hasn't caught up to regulating it yet. Now there's a tiny uncoupling between "selling to [customers]" and "satisfying [customers'] values", and so of course [customers'] values get thrown under the bus.

This effect can be combated to some extent with extra information. If the customers (or journalists, bloggers, etc...) know about this, then the coffee plantations will suffer. "Our food is harming us!" isn't exactly a hard story to publicise. This certainly doesn't work in every case, but increased information is something that technological progress would bring, and this needs to be considered when asking whether optimisation processes will inevitably tend to a bad equilibrium as technology improves. An accurate theory of nutrition, for instance, would have great positive impact if its recommendations could be measured.

Finally, Zack Davis's poem about the em stripped of (almost all) humanity got me thinking. The end result of that process is tragic for two reasons: first, the em retains enough humanity to have curiosity, only to get killed for this. And secondly, that em once was human. If the em was entirely stripped of human desires, the situation would be less tragic. And if the em was further constructed in a process that didn't destroy any humans, this would be even more desirable. Ultimately, if the economy could be powered by entities developed non-destructively from humans, and which were clearly not conscious or suffering themselves, this would be no different than powering the economy with the non-conscious machines we use today. This might happen if certain pieces of a human-em could be extracted, copied and networked into an effective, non-conscious entity. In that scenario, humans and human-ems could be the capital owners, and the non-conscious modified ems could be the workers. The connection of this with the Moloch argument is that it shows that certain nightmare scenarios could in some circumstances be adjusted to much better outcomes, with a small amount of coordination.

 

The point of the post

The reason I posted this is to get people's suggestions about ideas relevant to a "Moloch" research project, and what they thought of the ideas I'd had so far.

The Great Filter is early, or AI is hard

19 Stuart_Armstrong 29 August 2014 04:17PM

Attempt at the briefest content-full Less Wrong post:

Once AI is developed, it could "easily" colonise the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.

Expected utility, unlosing agents, and Pascal's mugging

19 Stuart_Armstrong 28 July 2014 06:05PM

Still very much a work in progress

EDIT: model/existence proof of unlosing agents can be found here.

Why do we bother about utility functions on Less Wrong? Well, because of the von Neumann-Morgenstern results, which showed that, essentially, if you make decisions, you'd better use something equivalent to expected utility maximisation. If you don't, you lose. Lose what? It doesn't matter, money, resources, whatever: the point is that any other system can be exploited by other agents or the universe itself to force you into a pointless loss. A pointless loss being a loss that gives you no benefit or possibility of benefit - it's really bad.

The justifications for the axioms of expected utility are, roughly:

  1. (Completeness) "If you don't decide, you'll probably lose pointlessly."
  2. (Transitivity) "If your choices form loops, people can make you lose pointlessly."
  3. (Continuity/Archimedean) This axiom (and acceptable weaker versions of it) is much more subtle than it seems; "No choice is infinitely important" is what it seems to say, but " 'I could have been a contender' isn't good enough" is closer to what it does. Anyway, that's a discussion for another time.
  4. (Independence) "If your choices aren't independent, people can expect to make you lose pointlessly."
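The "money pump" behind the transitivity axiom can be sketched in a few lines of Python (my own illustration, not from the original post): an agent whose strict preferences form a loop will pay a small fee for each "upgrade", and so can be drained of money indefinitely while ending up no better off.

```python
def run_money_pump(preferences, start, fee, rounds):
    """Repeatedly trade the agent up its preference order, charging
    `fee` per trade; return the total money extracted from it."""
    holdings, paid = start, 0.0
    for _ in range(rounds):
        holdings = preferences[holdings]  # the item the agent strictly prefers
        paid += fee
    return paid

# Cyclic (intransitive) preferences: B over A, C over B, A over C.
cycle = {"A": "B", "B": "C", "C": "A"}
print(run_money_pump(cycle, "A", 1.0, 30))  # 30.0 paid, holding "A" again
```

With transitive preferences the chain of profitable trades terminates at a most-preferred item; only a cycle lets the exploiter run the loop forever.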

 

Equivalency is not identity

A lot of people believe a subtly different version of the result:

  • If you don't have a utility function, you'll lose pointlessly.

This is wrong. The correct result is:

  • If you don't lose pointlessly, then your decisions are equivalent to having a utility function.
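For finite, deterministic choices the equivalence direction is easy to see concretely (a sketch of my own, assuming a complete and transitive ordering; handling lotteries requires the continuity and independence axioms as well): any such ordering can be represented by simply assigning each outcome its rank, after which "choose the maximum utility option" reproduces the original decisions.

```python
def utility_from_ordering(ordered_outcomes):
    """Given a list of outcomes from worst to best (a complete,
    transitive strict ordering), return an equivalent utility function
    as a dict mapping each outcome to its rank."""
    return {outcome: rank for rank, outcome in enumerate(ordered_outcomes)}

u = utility_from_ordering(["lose", "draw", "win"])
# Maximising this utility reproduces the original preference ordering:
best = max(["draw", "lose", "win"], key=u.get)
print(best)  # win
```

The utility numbers themselves carry no extra meaning here; any order-preserving relabelling works just as well, which is why equivalency is not identity.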
continue reading »

2014 Less Wrong Census/Survey - Call For Critiques/Questions

18 Yvain 11 October 2014 06:39AM

It's that time of year again. Actually, a little earlier than that time of year, but I'm pushing it ahead a little to match when Ozy and I expect to have more free time to process the results.

The first draft of the 2014 Less Wrong Census/Survey is complete (see 2013 results here).

You can see the survey below if you promise not to try to take the survey because it's not done yet and this is just an example!

2014 Less Wrong Census/Survey Draft

I want two things from you.

First, please critique this draft (it's much the same as last year's). Tell me if any questions are unclear, misleading, offensive, confusing, or stupid. Tell me if the survey is so unbearably long that you would never possibly take it. Tell me if anything needs to be rephrased.

Second, I am willing to include any question you want in the Super Extra Bonus Questions section, as long as it is not offensive, super-long-and-involved, or really dumb. Please post any questions you want there. Please be specific - not "Ask something about taxes" but give the exact question you want me to ask as well as all answer choices.

Try not to add more than a few questions per person, unless you're sure yours are really interesting. Please also don't add any questions that aren't very easily sort-able by a computer program like SPSS unless you can commit to sorting the answers yourself.

I will probably post the survey to Main and officially open it for responses sometime early next week.

Solstice 2014 - Kickstarter and Megameetup

18 Raemon 10 October 2014 05:55PM


Summary:

  • We're running another Winter Solstice kickstarter - this is to fund the venue, musicians, food, drink and decorations for a big event in NYC on December 20th, as well as to record more music and print a larger run of the Solstice Book of Traditions. 
  • I'd also like to raise additional money so I can focus full time for the next couple months on helping other communities run their own version of the event, tailored to meet their particular needs while still feeling like part of a cohesive, broader movement - and giving the attendees a genuinely powerful experience. 

The Beginning

Four years ago, twenty NYC rationalists gathered in a room to celebrate the Winter Solstice. We sang songs and told stories about things that seemed very important to us. The precariousness of human life. The thousands of years of labor and curiosity that led us from a dangerous stone age to the modern world. The potential to create something even better, if humanity can get our act together and survive long enough.

One of the most important ideas we honored was the importance of facing truths, even when they are uncomfortable or make us feel silly or are outright terrifying. Over the evening, we gradually extinguished candles, acknowledging harsher and harsher elements of reality.

Until we sat in absolute darkness - aware that humanity is flawed, and alone, in an unforgivingly neutral universe. 

But also aware that we sit beside people who care deeply about truth, and about our future. Aware that across the world, people are working to give humanity a bright tomorrow, and that we have the power to help. Aware that across history, people have looked impossible situations in the face, and through ingenuity and perspiration, made the impossible happen.

That seemed worth celebrating. 


The Story So Far

As it turned out, this resonated with people outside the rationality community. When we ran the event again in 2012, non-religious but non-Less Wrong people attended the event and told me they found it very moving. In 2013, we scaled it up much larger: I ran a kickstarter campaign to fund a big event in NYC.

A hundred and fifty people from various communities attended. From Less Wrong in particular, we had groups from Boston, San Francisco, North Carolina, Ottawa, and Ohio among other places. The following day was one of the largest East Coast Megameetups. 

Meanwhile, in the Bay Area, several people put together an event that gathered around 80 attendees. In Boston, Vancouver, and Leipzig, Germany, people ran smaller events. This is shaping up to take root as a legitimate holiday, celebrating human history and our potential future.

This year, we want to do that all again. I also want to dedicate more time to helping other people run their events. Getting people to start celebrating a new holiday is a tricky feat. I've learned a lot about how to go about that and want to help others run polished events that feel connecting and inspirational.


So, what's happening, and how can you help?

 

  • The Big Solstice itself will be Saturday, December 20th at 7:00 PM. To fund it, we're aiming to raise $7500 on kickstarter. This is enough to fund the aforementioned venue, food, drink, live musicians, record new music, and print a larger run of the Solstice Book of Traditions. It'll also pay some expenses for the Megameetup. Please consider contributing to the kickstarter.
  • If you'd like to host your own Solstice (either a large or a private one) and would like advice, please contact me at raemon777@gmail.com and we'll work something out.
  • There will also be Solstices (of varying sizes) run by Less Wrong / EA folk held in the Bay Area, Seattle, Boston and Leipzig. (There will probably be a larger but non-LW-centered Solstice in Los Angeles and Boston as well).
  • In NYC, there will be a Rationality and EA Megameetup running from Friday, Dec 19th through Sunday evening.
    • Friday night and Saturday morning: Arrival, Settling
    • Saturday at 2PM - 4:30PM: Unconference (20 minute talks, workshops or discussions)
    • Saturday at 7PM: Big Solstice
    • Sunday at Noon: Unconference 2
    • Sunday at 2PM: Strategic New Years Resolution Planning
    • Sunday at 3PM: Discussion of creating private ritual for individual communities
  • If you're interested in coming to the Megameetup, please fill out this form saying how many people you're bringing, whether you're interested in giving a talk, and whether you're bringing a vehicle, so we can plan adequately. (We have lots of crash space, but not infinite bedding, so bringing sleeping bags or blankets would be helpful)

Effective Altruism?

 

Now, at Less Wrong we like to talk about how to spend money effectively, so I should be clear about a few things. I'm raising non-trivial money for this, but this should be coming out of people's Warm Fuzzies Budgets, not their Effective Altruism budgets. This is a big, end of the year community feel-good festival. 

That said, I do think this is an especially important form of Warm Fuzzies. I've had EA-type folk come to me and tell me the Solstice inspired them to work harder, make life changes, or that it gave them an emotional booster charge to keep going even when things were hard. I hope, eventually, to have this measurable in some fashion such that I can point to it and say "yes, this was important, and EA folk should definitely consider it important." 

But I'm not especially betting on that, and there are some failure modes where the Solstice ends up cannibalizing resources that could have gone toward direct impact. So, please consider that this may be especially valuable entertainment, that pushes culture in a direction where EA ideas can go more mainstream and gives hardcore EAs a motivational boost. But I encourage you to support it with dollars that wouldn't have gone towards direct Effective Altruism.

View more: Next