Update on the Brain Preservation Foundation Prize
Brain Preservation Foundation President Kenneth Hayworth just wrote a synopsis of recent developments from the two major competitors for the BPF prizes. Here is the summary:
Brain Preservation Prize competitor Shawn Mikula just published his whole mouse brain electron microscopy protocol in Nature Methods (paper, BPF interview), putting him close to winning the mouse phase of our prize.
Brain Preservation Prize competitor 21st Century Medicine has developed a new “Aldehyde-Stabilized Cryopreservation” technique. Preliminary results show good ultrastructure preservation even after storage of a whole rabbit brain at -135 degrees C.
This work was funded in part by donations from LW users. In particular, a BPF grant supporting the work of LW user Robert McIntyre at 21st Century Medicine has been instrumental.
To continue and bolster this type of research, the BPF welcomes your support in a variety of ways, including awareness-raising, donations, and volunteering. Please reach out if you would like to volunteer, or PM me and I will help put you in touch. And if you have any suggestions for the BPF, please feel free to discuss them in the comments below.
Calories per dollar vs calories per glycemic load: some notes on my diet

2015 New Years Resolution Thread
The new year is a popular Schelling point for making changes to your activities, habits, and/or thought processes, often via the New Year's Resolution. One standard piece of advice for NYRs is to make them achievable: they are often too ambitious, and people end up giving up and potentially falling victim to the what-the-hell effect.
Wikipedia has a nice list of popular NYRs. For ideas from other LW contributors, here are some previous NYRs discussed on LW:
- Somervta aimed to spend at least two hours/week learning to program (here)
- ArisKatsaris aimed to tithe to charity (here)
- Swimmer963 aimed to experiment more with relationships (here)
- RichardKennaway aimed to not die (here)
- orthonormal aimed (for many years in a row) to make new mistakes (here)
- Perplexed aimed to avoid making karma micromanagement postmortems (here)
- Yvain aimed to check whether there was a donation matching opportunity the next week before making a donation (here)
(If one of these was yours, perhaps you'd like to discuss whether it was successful?)
In the spirit of collaboration, I propose that we discuss any NYRs we have made or are thinking of making for 2015 in this thread.
What are the most common and important trade-offs that decision makers face?
One way to manipulate your level of abstraction related to a task
In construal level theory, ideas can be classified along a spectrum from concrete ("near" in Robin Hanson's terminology) to abstract ("far"). As a summary, here is the abstract from a 2010 review (pdf):
People are capable of thinking about the future, the past, remote locations, another person’s perspective, and counterfactual alternatives. Without denying the uniqueness of each process, it is proposed that they constitute different forms of traversing psychological distance. Psychological distance is egocentric: Its reference point is the self in the here and now, and the different ways in which an object might be removed from that point—in time, in space, in social distance, and in hypotheticality—constitute different distance dimensions. Transcending the self in the here and now entails mental construal, and the farther removed an object is from direct experience, the higher (more abstract) the level of construal of that object. Supporting this analysis, research shows (a) that the various distances are cognitively related to each other, (b) that they similarly influence and are influenced by level of mental construal, and (c) that they similarly affect prediction, preference, and action.
Now, what if you want to think about something in a nearer (more concrete) or farther (more abstract) way? Here's one well-studied strategy for doing so (e.g., see pdf here).
To think about a task in more concrete terms, ask yourself how you would do it. Then, however you answer that question, ask yourself how you would do that. Do this two (or so) more times, and you will be thinking about the task significantly more concretely.
To think about a task in more abstract terms, ask yourself why you would do it. Then, however you answer, ask yourself why you would want that, and repeat three (or so) more times.
Here is an excerpt from the 2007 study in the second link, giving an example of how this works:
Suppose you indicate “taking a vacation” as one of your goals. Please write the goal in the uppermost square. Then, think why you would like to go on vacation, and write your answer in the square underneath. Suppose that you write “in order to rest.” Now, please think why you would like to rest, and write your answer in the third square. Suppose that you write “in order to renew your energy.” Finally, write in the last square why you would like to renew your energy.
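The why/how laddering exercise above is mechanical enough to script. Here is a minimal, purely illustrative Python sketch; the function and variable names are my own, not taken from the studies' materials:

```python
def ladder(task, question, answer_fn, steps=3):
    """Walk a task up or down the abstraction ladder.

    question: "why" moves toward more abstract construal,
              "how" moves toward more concrete construal.
    answer_fn: supplies your answer to each successive prompt.
    """
    assert question in ("why", "how")
    chain = [task]
    current = task
    for _ in range(steps):
        # Each answer becomes the subject of the next prompt.
        current = answer_fn(f"{question} would you {current}?")
        chain.append(current)
    return chain

# Reproducing the vacation example from the excerpt above,
# with canned answers standing in for a live participant:
answers = iter(["rest", "renew my energy", "feel healthier"])
chain = ladder("take a vacation", "why", lambda prompt: next(answers))
# chain holds the goal followed by three increasingly abstract reasons
```

In an interactive session you could pass `input` as `answer_fn` and answer each prompt live, exactly as in the pen-and-paper version.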
[LINK] Hypothesis about the mechanism for storing long-term memory
A major problem in understanding memory is how it can be very long-lasting and stable from early childhood until death, despite massive interruptions in brain state as extreme as prolonged comas. Current prominent candidates for molecular substrates for long-term memory storage have focused on macromolecules such as calmodulin-dependent protein kinase II (CaMKII) coupled with the NMDA receptor and protein phosphatase 2A (2), protein kinase M zeta (PKMζ) (3), and cytoplasmic polyadenylation element binding protein (CPEB) (4), all of which are inside postsynaptic spines. To retain information despite metabolic turnover, all such candidates need to have some sort of bistable switch (e.g., state of phosphorylation or prion conformation) and a mechanism by which older copies of the molecule pass on their status to newer copies to preserve the information. A major problem is that individual intracellular molecules typically last at most a few days before being turned over. Therefore, the information would have to survive being copied tens of thousands of times in a long-lived human, despite metabolic interruptions. Such robust fidelity would be extremely difficult to engineer. Even dynamic computer memory with sophisticated refresh and error correction circuits cannot cope with even a momentary hiccup in its power supply. Instead, long-term information storage in both computers and human civilizations requires writing the information onto physically stable storage media (e.g., magnetic disks, clay tablets, or acid-free paper), which do not require frequent energy-dependent recopying. Aside from some nuclear pore constituents, all of the known really long-lived proteins are insoluble [extracellular matrix] components such as crystallin, elastin, collagen, and proteoglycans (5), which gain stability by extensive cross-linkage and remoteness from intracellular degradative machinery, such as proteasomes, lysosomes, and autophagy.
Which cognitive biases should we trust in?
There have been (at least) a couple of attempts on LW to make Anki flashcards from Wikipedia's famous List of Cognitive Biases, here and here. However, stylistically they are not my type of flashcard, with too much info in the "answer" section.
Further, and more troublingly, I'm not sure whether all of the biases in the flashcards are real, generalizable effects; or, if they are real, whether they have effect sizes large enough to be worth the effort to learn & disseminate. Psychology is an academic discipline with all of the baggage that entails. Psychology is also one of the least tangible sciences, which is not helpful.
There are studies showing that Wikipedia is no less reliable than more conventional sources, but this is in aggregate, and it seems plausible (though difficult to detect without diligently checking sources) that the set of cognitive bias articles on Wikipedia has high variance in quality.
We do have some knowledge of how many of them were made, in that LW user nerfhammer wrote a bunch. But, as far as I can tell, s/he didn't discuss how s/he selected which biases to include. (Though s/he is obviously quite knowledgeable on the subject; see, e.g., here.)
As the articles stand today, many (e.g., here, here, here, here, and here) only cite research from one study or lab. I do not want to come across as whining: the authors who wrote these Wikipedia articles are awesome. But, as a consumer, the lack of independent replication makes me nervous. I don't want to contribute to information cascades.
Nevertheless, I do still want to make flashcards for at least some of these biases, because I am relatively sure that there are some strong, important, widespread biases out there.
So, I am asking LW whether you all have any ideas about, on the meta level,
1) how we should go about deciding/indexing which articles/biases capture legit effects worth knowing,
and, on the object level,
2) which of the biases/heuristics/fallacies are actually legit (like, a list).
Here are some of my ideas. First, for how to decide:
- Only include biases that are mentioned by prestigious sources like Kahneman in his new book. Upside: authoritative. Downside: potentially throwing out some good info and putting too much faith in one source.
- Only include biases whose Wikipedia articles cite at least two primary articles that share none of the same authors. Upside: establishes some degree of consensus in the field. Downside: won't actually vet the articles for quality, and a presumably false assumption that the Wikipedia pages will reflect the state of knowledge in the field.
- Search for the name of the bias (or any bold, alternative names on Wikipedia) on Google scholar, and only accept those with, say, >30 citations. Upside: less of a sampling bias of what is included on Wikipedia, which is likely to be somewhat arbitrary. Downside: information cascades occur in academia too, and this method doesn't filter for actual experimental evidence (e.g., there could be lots of reviews discussing the idea).
- Make some sort of voting system where experts (surely some frequent this site) can weigh in on what they think of the primary evidence for a given bias. Upside: rather than counting articles, evaluates the actual evidence for the bias. Downside: seems hard to reach the scale (roughly 8 to 12+ people voting) needed to make this useful.
- Build some arbitrarily weighted rating scale that takes into account some or all of the above. Upside: meta. Downside: garbage in, garbage out, and the first three features seem highly correlated anyway.
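For concreteness, here is a rough sketch of how the second and third criteria above could be combined programmatically. The record format, field names, and the 30-citation threshold are assumptions for illustration, not a worked-out system:

```python
def passes_filter(bias, min_citations=30):
    """Accept a bias if it has at least two primary articles with
    disjoint author lists, or clears a citation-count threshold."""
    author_sets = [frozenset(authors) for authors in bias["primary_article_authors"]]
    # Check every pair of articles for fully non-overlapping authors.
    independent = any(
        s1.isdisjoint(s2)
        for i, s1 in enumerate(author_sets)
        for s2 in author_sets[i + 1:]
    )
    return independent or bias["scholar_citations"] > min_citations

# Hypothetical records; the author lists and counts are made up.
anchoring = {
    "primary_article_authors": [["Tversky", "Kahneman"], ["Epley", "Gilovich"]],
    "scholar_citations": 5000,
}
obscure_bias = {
    "primary_article_authors": [["Smith", "Jones"], ["Jones"]],
    "scholar_citations": 12,
}
passes_filter(anchoring)     # True: independent author groups (and many citations)
passes_filter(obscure_bias)  # False: overlapping authors, few citations
```

Note that this inherits the garbage-in, garbage-out problem flagged above: it vets counts, not the quality of the underlying experiments.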
Second, for which biases to include. I'm just going off of which ones I have heard of and/or which look legit on a fairly quick run-through. Note that those annotated with a (?) are ones I am especially unsure about.
- anchoring
- availability
- bandwagon effect
- base rate neglect
- choice-supportive bias
- clustering illusion
- confirmation bias
- conjunction fallacy (is subadditivity a subset of this?)
- conservatism (?)
- context effect (aka state-dependent memory)
- curse of knowledge (?)
- contrast effect
- decoy effect (aka independence of irrelevant alternatives)
- Dunning–Kruger effect (?)
- duration neglect
- empathy gap
- expectation bias
- framing
- gambler's fallacy
- halo effect
- hindsight bias
- hyperbolic discounting
- illusion of control
- illusion of transparency
- illusory correlation
- illusory superiority
- illusion of validity (?)
- impact bias
- information bias (? aka failure to consider value of information)
- in-group bias (this is also clearly real, but I'm also not sure I'd call it a bias)
- escalation of commitment (aka sunk cost/loss aversion/endowment effect; note, contra Gwern, that I do think this is a useful fallacy to know about, if overrated)
- false consensus (related to projection bias)
- Forer effect
- fundamental attribution error (related to the just-world hypothesis)
- familiarity principle (aka mere exposure effect)
- moral licensing (aka moral credential)
- negativity bias (seems controversial & it's troubling that there is also a positivity bias)
- normalcy bias (related to existential risk?)
- omission bias
- optimism bias (related to overconfidence)
- outcome bias (aka moral luck)
- outgroup homogeneity bias
- peak-end rule
- primacy
- planning fallacy
- reactance (aka contrarianism)
- recency
- representativeness
- self-serving bias
- social desirability bias
- status quo bias
Happy to hear any thoughts!
The Outside View Of Human Complexity
One common question: how complex is some aspect of the human body? In addition to directly evaluating the available evidence for that aspect, one fruitful tactic in making this kind of prediction is to analyze past predictions about similar phenomena and assume that the outcome will be similar. This is called reference class forecasting, and is often referred to on this site as "taking the outside view."
First, how do we define complexity? Loosely, I will consider a more complex situation to be one with more components, either in total number or type, which allows for more degrees of freedom in the system considered. Using this loose definition for now, how do our predictions about human complexity tend to fare?
Point: Predictions about concrete things have tended to overestimate our complexity
Once we know of a phenomenon's theoretical existence, but before it is systematically measured, our predictions about measurable traits of the human body tend to err on the side of being more complex (i.e., more extensive or variable) than reality.
1) Although scholars throughout history have tended to think that human brains must be vastly different from those of other animals, on the molecular and cellular level there have turned out to be few differences. As Eric Kandel relates in his autobiography (p. 236), "because human mental processes have long been thought to be unique, some early students of the brain expected to find many new classes of proteins lurking in our gray matter. Instead, science has found surprisingly few proteins that are truly unique to the human brain and no signaling systems that are unique to it."
2) There turned out to be fewer protein-coding genes in the human genome than most people expected. We have data on this by way of an informal betting market in the early 2000s, described here ($) and here (OA). The predictions ranged from 26,000 to 150,000, and that lower-bound prediction won, even though it probably wasn't low enough! As of 2008, the number predicted by Ensembl was in the 23,000s. (As an aside, humans don't have the largest genome in terms of number of nucleotides either, by far. That title currently belongs to the canopy plant, pictured below (thanks to kodamatic for the photo, and to Pellicer et al. for the sequencing effort).)

3) Intro neuro texts (including one co-written by the aforementioned Kandel) claim that there are 10-fold (or more) more glial cells than neurons in the human brain. Since glia play crucial support roles and can even propagate information signals, this is not a trivial claim: it would vastly increase the processing power of the brain. But when it has actually been measured, the ratio of glial to neuronal cells turns out to be around one to one in most species, including humans (see here and here).
Counterpoint: Categories we use to explain the function of our bodies have tended to be more arbitrary than we recognize
1) One active area of research is determining whether the distinguishing characteristics between what we consider cell "types" are more quantitative or qualitative (i.e., matters of degree rather than form). Consider, for example, the continuum between the "classical" M1 and "alternative" M2 macrophages, which contributes to whether those immune cells will be pro- or anti-tumor. Or consider the gradient of pluripotency in stem cells. If cell types lie on a spectrum, depending upon the sorts of transcripts or proteins they contain at any given moment, they may be able to have more different sorts of interactions at different points in time.
2) Although we found fewer human genes than most geneticists expected, components of genes (exons) have been found to combine in many ways, a phenomenon called alternative splicing. One article (here) found that of genes with multiple exons, more than 90% are alternatively spliced. Specifically, these researchers found ~67,000 alternatively spliced transcripts from ~20,000 genes. Since these alternatively spliced transcripts have different nucleic acid sequences, they could (and probably do) have quite different functions.
3) The chromatin state of a given portion of the genome, i.e., where it falls on the spectrum from euchromatic to heterochromatic, seems to explain a large percentage of the variance in whether the genes in that region are expressed. For example, one study (here) shows a strikingly high correlation between the ability of one transcription factor to bind to DNA and the chromatin state of that region of DNA (see Figure 3). The fact that these chromatin states can be transmitted between generations via germ cells is also a fascinating finding whose implications increase the complexity of human biology as compared to the "static DNA" model.
Synthesis: When to expect more or less complexity
The above is far from systematic, but I think it portrays the trends. The known unknowns have tended to end up lower in complexity than we've predicted. But unknown unknowns continue to blindside us, unabated, adding to the total complexity of the human body.
Why do we tend to over-estimate the complexity of known unknowns in the human body? People who study biological processes want to find more "degrees of freedom" in their systems, so that the phenomenon they're studying can have more explanatory power. The standard reason for this is that they want their results to have an impact in preventing or curing diseases, while the cynical ("Hansonian") reason is that they want to attract more status and funding. The real answer is probably a mix of both, but either way, the result is that we tend to over-estimate the complexity of the known unknowns.
Why does it take so long to recognize the vast number of unknown unknowns? I think the best explanation for this is the standard, "Kuhnian" one, that shifting a paradigm is difficult. Adding an entirely new facet to any established scientific discipline requires slow-moving institutional support, and human biology is no exception. Look, for example, at the history of neurogenesis. Another explanation is technological, that we just don't have the capacity to observe certain things until we reach a given level of engineering success. We could not have known about histone-based epigenetics until we had the capacity to visualize cells at the level of electron microscopy (see pdf).
The next time someone uses an argument like "the human body is so complex," try to notice whether they are referring to the way the human body and biology work in general, or to one particular aspect of the human body. If they're referring to the general issue, at scales from the atomic to the molecular to the tissue level, they're right: there's loads we don't understand and probably lots of important stuff we don't even know about. But if they're referring to a particular as-yet-unmeasured aspect of the human body, history suggests that that particular phenomenon is likely to be less complex than you might guess.
References
Kandel E. In Search of Memory: The Emergence of a New Science of Mind. amazon.
Pennisi E. 2003. A Low Number Wins the GeneSweep Pool. abstract.
Human Genome Information Project. 2008. How Many Genes Are in the Human Genome? link.
Pellicer J, et al. 2010. The largest eukaryotic genome of them all? abstract. doi:10.1111/j.1095-8339.2010.01072.x
Kandel E, et al. Principles of Neural Science. amazon.
Azevedo FA, et al. 2009. Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. pubmed.
Ma J, et al. 2010. The M1 form of tumor-associated macrophages in non-small cell lung cancer is positively associated with survival time. doi:10.1186/1471-2407-10-112
Hough SR, Laslett AL, Grimmond SB, Kolle G, Pera MF. 2009. A Continuum of Cell States Spans Pluripotency and Lineage Commitment in Human Embryonic Stem Cells. PLoS ONE 4(11): e7708. doi:10.1371/journal.pone.0007708
Toung JM, et al. 2011. RNA-sequence analysis of human B-cells. abstract. doi:10.1101/gr.116335.110
John S, et al. 2011. Chromatin accessibility pre-determines glucocorticoid receptor binding patterns. doi:10.1038/ng.759
Olins DE, Olins AL. 2003. Chromatin history: our view from the bridge. pdf.
Wheeler A. A Brief History and Timeline: Adult mammalian neurogenesis. link.
What Makes My Attempt Special?
A crucial question at the beginning of any research project is: why should my group succeed in answering a question where others may have tried and failed?
Here's how I'm going about dividing up the possible worlds, but I'm interested to see if anyone has other strategies. First, the whole question is conditional on nobody having already answered the particular question you're interested in. So you first need an exhaustive literature review, whose intensity should scale with how much effort you expect to expend on the project. Still nothing? These are the remaining possibilities:
1) Nobody else has ever thought of your question, even though all of the pieces of knowledge needed to formulate it have been known for years. If the field has many people involved, the probability of this is vanishingly small and you should systematically disabuse yourself of your fantasies if you think like this often. Still... if true, the prognosis: a good sign.
2) Nobody else has ever thought of your question, because it wouldn't have been ask-able without pieces of knowledge that were discovered just recently. This is common in fast-paced fields and it's why they can be especially exciting. The prognosis: a good sign, but work quickly!
3) Others have thought of your question, but didn't think it was interesting enough to devote serious attention to. We should take this seriously, as how informed others choose to allocate their attention is one of our better approximations to real prediction markets. So, the prognosis: bad sign. Figure out whether you can not only answer your question but validate its usefulness / importance, too.
4) Others have thought of your question, thought it was interesting, but have never tried to answer it because of resource or tech restraints, which you do not face. Prognosis: probably the best-case scenario.
5) Others have thought of your question and run the relevant tests, but failed to get any consistent / reliable results. It'd be nice if there were no publication bias, but of course there is: people are much more likely to publish statistically significant, positive results. Due to this bias, it is sometimes hard to tell precisely how many skeletons and dismembered brains line your path, and because of this uncertainty you must assign this possibility a non-zero probability. The prognosis: a bad sign, but do you feel lucky?
6) Others have thought of your question, run the relevant tests, and failed to get consistent / reliable results, but used a different method than the one you will use. Your new tech might clear up some of the murkiness, but it's important here to be precise about which specific issues your method solves and which it doesn't. The prognosis: all things equal, a good sign.
These are the considerations we make when we decide whether to pursue a given topic. But even if you do choose to pursue the question, some of these possibilities have policy recommendations for how to proceed. For example, using new tech, even if it's not necessarily demonstrably better in all cases, seems like a good idea given the possibility of #6.
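The six possibilities and their prognoses amount to a small lookup table. This is just a toy restatement of the taxonomy above; the shorthand category keys are my own labels, not established terminology:

```python
# Keys are shorthand labels for the six possibilities above;
# values paraphrase the post's rough prognoses.
PROGNOSES = {
    "never_asked_pieces_old": "good sign, but a priori unlikely",
    "never_asked_pieces_new": "good sign, but work quickly",
    "asked_deemed_uninteresting": "bad sign; validate importance too",
    "asked_blocked_by_resources": "probably the best-case scenario",
    "asked_failed_same_method": "bad sign, but do you feel lucky?",
    "asked_failed_different_method": "all things equal, a good sign",
}

def triage(status):
    """Return the rough prognosis for a research question's history."""
    return PROGNOSES[status]

triage("asked_blocked_by_resources")  # "probably the best-case scenario"
```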
Step Back
From a recent Psychological Science,
In everyday life, individuals typically approach desired stimuli by stepping forward and avoid aversive stimuli by stepping backward... Cognitive functioning was gauged by means of a Stroop task immediately after a participant stepped in one direction... Stepping backward significantly enhanced cognitive performance compared to stepping forward or sideways. Considering the effect size, backward locomotion appears to be a very powerful trigger to mobilize cognitive resources.
As Chris Chatham notes,
This work is remarkable not only for demonstrating how a very concrete and simple bodily experience can influence even the highest levels of cognitive processing (in this case, the so-called "cognitive control" processes that enable focused attention), but also because performance on the Stroop task is notoriously difficult to improve.
When you suddenly realize that a task is more difficult than you assumed it would be, or when you face a particularly difficult choice in pursuit of rationality, you may find it useful to literally take a step back. For those of us who are particularly interested in making good decisions, this may also serve the purpose of self-signaling, as Yvain and commenters discussed earlier.
Chris's post has a link to a pdf of the paper.