The scope of "free will" within biology?
I've recently read through Eliezer's sequence on "free will", and I generally found it to be a fairly satisfying resolution/dissolution of the many misunderstandings involved in standard debates about the subject. There's no conflict between saying "your past circumstances determined that you would rush into the burning orphanage" and "you decided to rush into the burning orphanage"; what really matters is the experience of weighing possible options against your emotions and morals, without knowledge of what you will decide, rather than some hypothetical freedom to have done something different, etc. Basically, the experience of deciding between alternatives is real, don't worry too much about nonsense philosophical "free will" debates, just move on and live your life. Fine.
But I'm trying to figure out the best way to conceptualize the idea that certain biological conditions can "inhibit" your "free will," even under a reductionist understanding of the concept. Consider this recent article in The Atlantic called "The Brain on Trial." The basic argument is that we have much less control over ourselves than we think, that biology and upbringing have tremendous influences on our decisions, and that the criminal justice system needs to account for the pervasiveness of biological influence on our actions. On the one hand, duh. The article treats the idea that we are "just" our biology as some kind of big revelation that has only recently been understood:
The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.
Is that because we've just now discovered reductionism? If we weren't "just" our biology, what would we be? Magic? Whatever we mean by consciousness and decision-making, I'm sure LW members pretty much all accept that they occur within physics. The author doesn't even seem to fully grasp this point himself, because he states at the end that there "may" be at least some space for free will, independent of our biology, but that we just don't understand it yet:
Free will may exist (it may simply be beyond our current science), but one thing seems clear: if free will does exist, it has little room in which to operate. It can at best be a small factor riding on top of vast neural networks shaped by genes and environment.
Obviously most LW reductionists are going to immediately grasp that "free will" doesn't exist in addition to our neural networks. What would that even mean? It's not "90% neural networks, 10% free will" -- the point is that the process of your neural networks operating normally on a particular decision is what we mean by "free will," at least when we care to use that concept. (If anyone thinks I've stated this incorrectly, feel free to correct me.)
But still, notwithstanding that a lot of this article sort of seems to be missing the point (largely because the author doesn't quite get how obvious the central premise really is), I'm still wrestling with how to understand some of its more specific points, within the reductionist understanding of free will. For example, Charles Whitman, the shooter who killed 13 people from the UT Tower, had written out a suicide note noting that he had recently been the "victim of many unusual and irrational thoughts" and requesting that his brain be examined. An autopsy revealed that he had a large brain tumor that had damaged his amygdala, thus causing emotional and social disturbances. Similarly, in 2000, a man named "Alex" (fake name, but real case) suddenly developed pedophilic impulses at age 40, and was eventually convicted of child molestation. Turns out he also had a brain tumor, and once it was removed, his sexual interests went back to normal. The pedophilic impulses soon returned, and the doctors discovered the tumor had grown back -- they removed it for good, and his behavior went back to normal.
Obviously people like Charles and Alex aren't "victims of their biology" any more than the rest of us. Nobody's brain has some magic "free will" space that "exempts" the person from biology. But even under the reductionist conception of free will, it still seems like Charles and Alex are somehow "less free" than "normal" people. Even though everyone's decisions are, in some sense, determined by their past circumstances, there still seems to be a meaningful way in which Charles and Alex are less able to make decisions "for themselves" than those of us without brain tumors -- almost as if they had a tic which caused involuntary physical actions, but drawn out over time in patterns, rather than in single bursts. Or to put it differently, where the phrase "your past circumstances determine who you are when you face a choice, but you are still the one that decides" holds true for most people, it seems like it doesn't hold true for them. At the very least, it seems like we would certainly be justified in judging Charles and Alex differently from people who don't suffer from brain tumors.
But if we're already committed to the reductionist understanding of free will in the first place, what does this intuition that Charles and Alex are somehow "less free" really mean? Obviously we all have biological impulses that make us more or less inclined to make certain decisions, and that might therefore encroach on some ideal conception of "control" over ourselves. But are these impulses qualitatively different from biological conditions that "override" normal decision-making? Is a brain tumor pushing on your amygdala more akin to prison bars that really do inhibit your free will in a purely physical sense, or just a more intense version of genes that give you a slight disposition toward violent behavior?
My intuition is that somewhere along the line here I may be asking a "wrong question," or importing some remnant of a non-biological conception of free will into my thinking. But I can't quite pin this issue down in a way that really resolves the answer in a satisfying way, so I was hoping that some of you might be able to help me reason through this appropriately. Thoughts?
psychology and applications of reinforcement learning: where do I learn more?
Minicamp made me take the notion of an Ugh Field seriously, and I've found Ugh Fields a fairly useful model for understanding how my brain works. I have (or had) lots of topics that are unpleasant to think about, and that unpleasantness seems to be strongly correlated with previous negative experiences.
More generally, animals, including humans, seem to use something like Temporal Difference learning very frequently (one source of that impression). If that's so, then understanding TD and related psychological research should give me a more accurate model of myself. I would expect it to help me understand when my dispositions and habits are likely to be useful (by knowing how they developed) and understand how to change my dispositions and habits. Thus I have a couple of questions:
- Are my impressions accurate?
- What books, papers, posts are the best for understanding these topics? I'd like material that addresses any of the following:
- How TD or related algorithms work
- What evidence says about whether human and/or animal brains frequently use TD or related algorithms and what situations brains use it for
- Practical consequences of the research (e.g. Ugh Fields, doing X is a good way to build habit Y, smiling is a reinforcement, etc.)
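For readers who want the mechanics behind the first question, here is a minimal sketch of tabular TD(0), the simplest temporal-difference algorithm. The function and variable names (and the learning-rate and discount values) are my own illustration, not drawn from any particular source:

```python
# Minimal tabular TD(0) sketch: learn state values from observed
# (state, reward, next_state) transitions.
# alpha = learning rate, gamma = discount factor (assumed values).

def td0_update(values, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference update: move V(state) toward the
    bootstrapped target reward + gamma * V(next_state)."""
    td_error = (reward
                + gamma * values.get(next_state, 0.0)
                - values.get(state, 0.0))
    values[state] = values.get(state, 0.0) + alpha * td_error
    return td_error

values = {}
# Repeatedly observe the same transition A -> B with reward 1.
for _ in range(100):
    td0_update(values, "A", 1.0, "B")
print(values["A"])  # approaches 1.0, since V(B) stays at 0
```

The "Ugh Field" connection is that the update is driven by the prediction error, so a few strongly negative surprises early on can push a state's value far down and keep it there.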
Review of Doris, 'The Moral Psychology Handbook' (2010)
The Moral Psychology Handbook (2010), edited by John Doris, is probably the best way to become familiar with the exciting interdisciplinary field of moral psychology. The chapters are written by philosophers, psychologists, and neuroscientists. A few of them are all three, and the university department to which they are assigned is largely arbitrary.
I should also note that the chapter authors happen to comprise a large chunk of my own 'moral philosophers who don't totally suck' list. The book is also exciting because it undermines or outright falsifies a long list of popular philosophical theories with - gasp! - empirical evidence.
Chapter 1: Evolution of Morality (Machery & Mallon)
The authors examine three interpretations of the claim that morality evolved. The claims "Some components of moral psychology evolved" and "Normative cognition is a product of evolution" are empirically well-supported but philosophically uninteresting. The stronger claim that "Moral cognition (a kind of normative cognition) evolved" is more philosophically interesting, but at present not strongly supported by the evidence (according to the authors).
The chapter serves as a compact survey of recent models for the evolution of morality in humans (Joyce, Hauser, de Waal, etc.), and attempts to draw philosophical conclusions about morality from these descriptive models (e.g. Joyce, Street).
Chapter 2: Multi-system Moral Psychology (Cushman, Young, & Greene)
The authors survey the psychological and neuroscientific evidence showing that moral judgments are both intuitive/affective/unconscious and rational/cognitive/conscious, and propose a dual-process theory of moral judgment. Scientific data is used to verify or falsify philosophical theories proposed as, for example, explanations for trolley-problem cases.
Consequentialist moral judgments are more associated with rational thought than deontological judgments, but both deontological and consequentialist moral judgments have their sources in emotion. Deontological judgments are associated with 'alarm bell' emotions that circumvent reasoning and provide absolute demands on behavior. Alarm bell emotions are rooted in (for example) the amygdala. Consequentialist judgments are associated with 'currency' emotions that provide negotiable motivations weighing for and against particular behaviors, and are rooted in meso-limbic regions that track a stimulus' reward magnitude, reward probability, and expected value.
This chapter might be the best one in the book.
Chapter 3: Moral Motivation (Schroeder, Roskies, & Nichols)
The authors categorize philosophical theories of moral motivation into four groups:
- Instrumentalists think people are motivated when they form beliefs about how to satisfy pre-existing desires.
- Cognitivists think people are motivated merely by the belief that something is right or wrong.
- Sentimentalists think people are morally motivated only by emotions.
- Personalists think people are motivated by their character: their knowledge of good and bad, their wanting for good or bad, their emotions about good or bad, and their habits of responding to these three.
The authors then argue that the neuroscience of motivation fits best with the instrumentalist and personalist pictures of moral motivation, poses some problems for sentimentalists, and presents grave problems for cognitivists. The main weakness of the chapter is that its picture of the neuroscience of motivation is mostly drawn from a decade-old neuroscience textbook. As such, the chapter misses many new developments, especially the important discoveries occurring in neuroeconomics. I can personally attest that the latest neuroscience still comes down most strongly in favor of instrumentalists and personalists, but there are recent details that could have been included in this chapter.
Chapter 4: Moral Emotions (Prinz & Nichols)
The authors survey studies that illuminate the role of emotions in moral cognition, and discuss several models that have been proposed, concluding that the evidence is currently consistent with each of them. They then focus on a more detailed discussion of two emotions that are particularly causal in the moral judgments of Western society: anger and guilt.
The chapter is strong in example experiments, but a higher-level discussion of the role of emotions in moral judgment is provided by chapter 2.
Chapter 5: Altruism (Stich, Doris, & Roedder)
The authors distinguish four kinds of desires: (1) desires for pleasure and avoiding pain, (2) self-interested desires, (3) desires that are neither self-interested nor for the well-being of others, and (4) desires for the well-being of others. Psychological hedonism maintains that all (terminal, as opposed to instrumental) desires are of type 1. Psychological egoism says that all desires are of type 2 (which includes type 1). Altruism claims that some desires fall into category 4. And if there are desires of type 3 but none of type 4, then both egoism and altruism are false.
The authors survey evolutionary arguments for and against altruism, but are not yet convinced by any of them.
Psychology, however, does support the existence of altruism, which seems to be "the product of an emotional response to another's distress." The authors survey the experimental evidence, especially the work of Batson. They conclude there is significant support for the existence of genuine human altruism. We are not motivated by selfishness alone.
Chapter 6: Moral Reasoning (Harman, Mason, & Sinnott-Armstrong)
The authors clarify the roles of conscious and unconscious moral reasoning, and reject one popular theory of moral reasoning: the deductive model. One of their many reasons for rejecting it is that the deductive model assumes we reach explicit moral conclusions by applying logic, probability theory, and decision theory to pre-existing moral principles, and those principles are understood in terms of psychological theories of concepts that are probably false. The authors survey the 'classical view of concepts' (concepts defined in terms of necessary and sufficient conditions) and conclude that it is less likely to be true than alternate theories of mental concepts, theories that are less friendly to the deductive model of moral reasoning.
The authors propose an alternate model of moral reasoning whereby one makes mutual adjustments to one's beliefs and plans and values in pursuit of what Rawls called 'reflective equilibrium.'
Chapter 7: Moral Intuitions (Sinnott-Armstrong, Young, & Cushman)
The authors refer to moral intuitions as "strong, stable, immediate moral beliefs." The 'immediate' part means that these moral beliefs do not arise through conscious reasoning; the subject is conscious only of the resulting moral belief.
Their project is this:
...moral intuitions are unreliable to the extent that morally irrelevant factors affect moral intuitions. When they are distorted by irrelevant factors, moral intuitions can be likened to mirages or seeing pink elephants while one is on LSD. Only when beliefs arise in more reputable ways do they have a fighting chance of being justified. Hence we need to know about the processes that produce moral intuitions before we can determine whether moral intuitions are justified.
Thus the chapter engages in something like Less Wrong-style 'dissolution to algorithm.'
A major weakness of this article is that it focuses on the understanding of intuitions as attribute substitution heuristics, but ignores the other two major sources of intuitive judgments: evolutionary psychology and unconscious associative learning.
Chapter 8: Linguistics and Moral Theory (Roedder & Harman)
This chapter examines the 'linguistic analogy' in moral psychology - the analogy between Chomsky's 'universal grammar' and what has been called 'universal moral grammar.' The authors don't have any strong conclusions, but instead suggest that this linguistic analogy may be a helpful framework for pursuing further research. They list five specific ways in which the analogy is useful. This chapter can be skipped without missing much.
Chapter 9: Rules (Mallon & Nichols)
The authors survey the evidence that moral rules "are mentally represented and play a causal role in the production of judgment and behavior." This may be obvious, but it's nice to have the evidence collected somewhere.
Chapter 10: Responsibility (Knobe & Doris)
This chapter surveys the experimental studies that test people's attributions of moral responsibility. In short, people do not make such judgments according to invariant principles, as assumed by most of 20th century moral philosophy. (Moral philosophers have spent most of their time trying to find a set of principles that accounted for people's ordinary moral judgments, and showing that alternate sets of principles failed to capture people's ordinary moral judgments in particular circumstances.)
People adopt different moral criteria for judging different cases, even when they verbally endorse a simple set of abstract principles. This should not be surprising, as the same had already been shown to be true in linguistics and in non-moral judgment. The chapter surveys the variety of ways in which people adopt different moral criteria for different cases.
Chapter 11: Character (Merritt, Doris, & Harman)
This chapter surveys the evidence from situationist psychology, which undermines the 'robust character traits' view of human psychology upon which many varieties of virtue ethics depend.
Chapter 12: Well-Being (Tiberius & Plakias)
This chapter surveys competing concepts of 'well-being' in psychology, and provides reasons for using the 'life satisfaction' concept of well-being, especially in philosophy. The authors then discuss life satisfaction and normativity; for example the worry about the arbitrariness of factors that lead to human life satisfaction.
Chapter 13: Race and Racial Cognition (Kelly, Machery, & Mallon)
I didn't read this chapter.
[REVIEW] Foundations of Neuroeconomic Analysis
Neuroeconomics is the application of advances in neuroscience to the fundamentals of economics: choice and valuation. Foundations of Neuroeconomic Analysis by Paul Glimcher, an active researcher in this area, presents a summary of this relatively new field to psychologists and economists. Although written as a serious work, the presentation cuts across disciplines, so it should be accessible to anyone interested, even without much background knowledge in either area. Although the writing is so-so, the book covers multiple Less Wrong-relevant themes, from reductionism to neuroscience to decision theory. If nothing else, the results discussed provide a wonderful example of how no one knows what science doesn't know. I doubt many economists are aware that researchers can point to something very similar to utility on a brain scanner; many would scoff at the very notion.
Because of the book's wide target audience, there is not enough detail for specialists, but possibly a little too much for non-specialists. If you are interested in this topic, the best reason to pick up the book would be to track down further references. I hope the following summary does the book justice for everyone else.
Are book summaries of this sort useful? The recent review/summary of Predictably Irrational appears to have gone over well. Any suggestions to improve possible future reviews?
Introduction
Many economists think economics is fundamentally separate from psychology and neuroscience; since they take choices as primitives, little if any knowledge would be gained from understanding the mechanisms underlying choice. However, science steadily brings reduction and linkages between previously unrelated disciplines. A striking amount has already been discovered about the exact processes in the brain governing choice and valuation. On the other side, neuroscientists and psychologists underestimate the ability of economists to say whether claims about the brain are logically coherent or not.
Section I: The Challenge of Neuroeconomics
Consider a man and woman who have an affair with each other at a professional conference, which they later consider a mistake. An economist looking at this situation would treat their choice to sleep together as revealing a preference, regardless of their verbal claims. A psychologist would consider how mental states mediated this decision, and would be more willing to consider whether the decision was a mistake or not. Biologists would be more likely to point to ancestral benefits of extra-pair copulations, not considering the reflective judgements as directly relevant. These explanations largely speak past each other, hinting that a unified theory could do much better in predicting behavior.
The key to this is establishing linkages between the logical primitives of each discipline. Behavior could be explained on the level of physics, biology, psychology, or economics, but whether low-level explanations are practical is a different matter. Realistically, linking disciplines will strengthen both fields by mutually constraining the theories available to them.
With the neoclassical revolution, economics developed concepts of utility as reflecting ordinal relationships over revealed preferences. Choices that satisfied certain consistency conditions could be treated as if generated by a utility function. Additional axioms allowed consistent choice under uncertainty to be added to the theory. There are notable problems with this approach, but the core ideas of utility and maximization have surprisingly close neural analogues. Rather than operating "as if" individuals act on the basis of utility, a hard theory of "because" is being developed.
A look at visual perception reveals that our subjective experience of light intensity varies substantially depending on the wavelength of the light. Brightness is a concept that resides in the mind, and furthermore our sensitivity to different wavelengths corresponds precisely to the absorption spectrum of the chemical rhodopsin in our retinas. All perceptions are represented in the mind along a power scale with some variance. Because the distributions of perceptions overlap, subjects can accurately report that a dimmer light looks brighter to them. This suggests random utility models developed for statistical purposes might directly describe what happens in the brain. One interesting consequence of the power scaling law is that risk aversion would be embedded at the level of perception.
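To make the random-utility point concrete, here is a toy simulation (my own illustration; the power-law exponent and noise level are made-up parameters, not values from the book) of power-scaled percepts with variance. Because the noisy distributions overlap, a dimmer light is sometimes reported as brighter:

```python
import random, math

# Percepts follow a power law of stimulus intensity plus Gaussian noise.
# With overlapping distributions, comparisons are probabilistic -- the
# basic structure of a random utility model.
random.seed(1)

def percept(intensity, exponent=0.33, noise=0.05):
    """Noisy power-scaled internal representation of a stimulus."""
    return intensity ** exponent + random.gauss(0.0, noise)

trials = 10000
# Count how often the physically dimmer light (0.9) is perceived
# as brighter than the brighter one (1.0).
mistakes = sum(percept(1.0) < percept(0.9) for _ in range(trials))
print(mistakes / trials)  # a substantial nonzero fraction of trials
```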
Section II: The Choice Mechanism
Due to its relative simplicity, eye movement serves as a model for motor control, and perhaps for decisions broadly. The superior colliculus represents possible eye movements topographically with "hills" of activity. Eventually, the tissue transitions to a bursting state in which the most active hill becomes much more active and the rest are inhibited via a "winner-take-all" or "argmax" mechanism. All inputs to eye motion have to pass through the superior colliculus, so it represents a common final pathway for processed sensory signals. When monkeys are given varying rewards for eye-movement tasks, activity in the lateral intraparietal area (LIP) correlates strongly with the probability and size of reward -- in an area known to trigger action, before the action is taken. In other words, this appears to be a direct neural representation of subjective expected valuation. If monkey subjects play a game whose equilibrium involves mixed strategies, neuron firing rates are all roughly equal, matching the conclusion that the expected utilities of actions are equalized when an opponent is mixing.
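As a toy illustration of the winner-take-all idea (my own sketch, not Glimcher's actual model; the excitation and inhibition constants are arbitrary), mutual inhibition among "hills" of activity can implement an argmax in tissue:

```python
# Toy winner-take-all: each option's activity is excited by its own
# input and inhibited by the total activity of the other options.
# Over many iterations, the strongest input suppresses the rest.

def winner_take_all(inputs, steps=200, excite=0.2, inhibit=0.15):
    acts = list(inputs)
    for _ in range(steps):
        total = sum(acts)
        acts = [max(0.0, a + excite * i - inhibit * (total - a))
                for a, i in zip(acts, inputs)]
    return acts

acts = winner_take_all([1.0, 1.2, 0.9])
print(acts.index(max(acts)))  # the option with the largest input wins
```

Even a small input advantage (1.2 vs. 1.0) is enough: the runners-up are driven to zero while the winner's activity keeps growing, which is the "bursting" behavior described above.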
Cortical neurons fire almost like independent Poisson processes, so neurons downstream can easily extract the mean firing rate of their inputs. Interneuronal correlation can vary according to the task at hand, resulting in greater or lesser variation in the final decision, so descriptive decision theories must incorporate randomness in choice. This also provides support for mixed strategies being represented directly in the brain.
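A quick simulation (my own illustration, not from the book) shows why Poisson-like firing makes the mean rate easy to recover downstream: averaging many independent noisy spike counts cancels the noise:

```python
import random, math

random.seed(0)

def poisson_sample(rate):
    """One Poisson draw with the given mean, via Knuth's algorithm."""
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# 1000 "upstream neurons" all firing at the same underlying rate.
true_rate = 5.0
counts = [poisson_sample(true_rate) for _ in range(1000)]

# A downstream unit that simply averages its inputs recovers the rate.
estimate = sum(counts) / len(counts)
print(estimate)  # close to the true rate of 5.0
```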
Subjective valuations are normalized: options are valued only relative to the other options at hand. This normalization maximizes the joint information of neurons, increasing the efficiency of value representation. One consequence is that as the choice set grows, valuations start overlapping and choice becomes essentially random. Activity also varies according to the delay of rewards, matching previous findings of hyperbolic discounting. While these findings are largely based on eye movements in monkeys, they provide a clear path for how choice can be reduced to neural mechanisms.
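Divisive normalization is one standard way to formalize this relative coding. A minimal sketch (my own illustration; the saturation constant and the option values are hypothetical) shows how represented value differences compress as the choice set grows:

```python
# Divisive normalization: each option's represented value is its raw
# value divided by a constant plus the summed value of all options.
# Larger choice sets compress the represented values toward each other.

def normalize(values, sigma=1.0):
    denom = sigma + sum(values)
    return [v / denom for v in values]

small_set = normalize([10.0, 8.0])
large_set = normalize([10.0, 8.0, 9.0, 9.5, 8.5])

print(small_set[0] - small_set[1])  # gap between the two best options
print(large_set[0] - large_set[1])  # same pair, smaller represented gap
```

With enough options, the represented gap shrinks below the noise in the system, which is why choice over large sets becomes essentially random.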
Section III: Valuation
Returning to visual perception: our judgements are made relative to other elements in the environment. Color looks roughly the same indoors and outdoors, even though there can be six orders of magnitude more illumination outside. Drifting reference points make absolute values unrecoverable. Local irrationalities due to reliance on a reference point arise because evolution trades off accurate sensory encoding against the costs of those irrationalities.
One promising way to specify the reference point is as the discounted sum of our future wealth. Learning depends on the difference between actual and expected rewards, so valuation relative to a reference point arises from the learning process itself. In the brain, reward prediction errors are encoded through dopamine. Dopamine firing rates are well described by the most recent reward minus an exponentially weighted sum of previous rewards. Hebb's law, which says "cells that fire together, wire together", describes how long-term predictions work.
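The dopamine story can be sketched as a simple delta rule (my own illustration; alpha is an assumed learning rate): the prediction is an exponentially weighted average of past rewards, and the dopamine-like signal is the latest reward minus that prediction:

```python
# Reward-prediction-error sketch: maintain a running expectation of
# reward and emit the difference between each reward and that
# expectation, then update the expectation toward the reward.

def rpe_signal(rewards, alpha=0.3):
    prediction, errors = 0.0, []
    for r in rewards:
        error = r - prediction          # the dopamine-like signal
        errors.append(error)
        prediction += alpha * error     # exponentially weighted update
    return errors

# Ten expected rewards, then one omitted reward.
errors = rpe_signal([1.0] * 10 + [0.0])
print(errors[0])   # 1.0: the first reward is completely unexpected
print(errors[-1])  # strongly negative: an expected reward is omitted
```

This matches the qualitative dopamine findings: a surprising reward produces a large positive signal, a fully predicted reward produces almost none, and an omitted-but-expected reward produces a dip.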
Valuation appears to be originally constructed in the striatum and medial prefrontal cortex. The reference level encoded there can be directly observed with brain scanners. Various other regions provide inputs to construct value. For instance, the orbitofrontal cortex (OFC) provides an assessment of risk; subjects with lesions in this area exhibit almost perfect risk neutrality. Values might also be stored in the OFC, again in a compressed and encoded way. Longer-term valuations might be stored in the amygdala.
Because valuations are encoded relatively and don't work well over large choice sets, humans might edit out options by sequentially considering particular attributes until the choice set becomes manageable. Sorting by attributes can lead to irrational choices, unsurprisingly.
Probabilistic valuations depend on whether the expectation was learned experientially or symbolically. Symbolically communicated probabilities, where the person is told a number, are overweighted near zero and underweighted near one. Experientially communicated probabilities, where the person samples the lotteries directly, exhibit the opposite pattern. This suggests at least two mechanisms at work, especially since the ability to deal with symbolic probabilities arose relatively late in our evolutionary history. Also, while experiential expected values incorporate probabilities implicitly, this information can't be extracted. When probabilities change, the only way to update valuations is to relearn them from scratch.
Section IV: Summary and Conclusions
Here the author presents formalized models of the descriptive theory. The normative uses of this theory are still unclear. Even if we can identify subjective valuations in the brain, does this have any relation to welfare?
The four critical observations of neuroeconomics are reference-dependence, the lack of an absolute measure of anything in the brain, stochasticity in choice, and the influence of learning on choice. Along with the question of the welfare implications of these findings, six primary questions are currently unanswered:
- Where is subjective value stored and how does it get to choice?
- What part of the brain governs when it is "time to choose"?
- What neural mechanism guides complementarity between goods?
- How does symbolic probability work?
- How do the state of the world and utility interact?
- How does the brain represent money?
Philip Zimbardo (Stanford Prison Experiment) answers questions on Reddit (Link)
In March, a user on Reddit emailed psychologist Philip Zimbardo (leader of the Stanford Prison Experiment) to arrange an "IAmA" interview. Zimbardo agreed to answer the top 5 questions from this thread. Yesterday his answers were posted here.
The chosen questions touched on research ethics, what he originally expected to learn from the experiment, the role of psychoactive drugs in society, reading recommendations and more.
After responding, Zimbardo posed a question of his own to Reddit:
I ask you: Is it good that the Milgram and Zimbardo studies were done, or wrong? Should they be allowed to be replicated with interesting variations (such as female guards and prisoners) if institutional guidelines are imposed and followed? Or is it better for society not to know about the nature of the "dark side" of human nature?
The Trouble with Bright Girls [link]
The Trouble with Bright Girls (article @ the Huffington Post)
Excerpt:
My graduate advisor, psychologist Carol Dweck (author of "Mindset") conducted a series of studies in the 1980s, looking at how Bright Girls and boys in the fifth grade handled new, difficult and confusing material.
She found that Bright Girls, when given something to learn that was particularly foreign or complex, were quick to give up; the higher the girls' IQ, the more likely they were to throw in the towel. In fact, the straight-A girls showed the most helpless responses. Bright boys, on the other hand, saw the difficult material as a challenge, and found it energizing. They were more likely to redouble their efforts rather than give up.
The topic of this article seems to relate to several common Less Wrong issues: the nature of human intelligence, and the gender imbalance among LW readers.
I'm not sure how much credence I give to the proposed explanation of the difference in mindsets. It may well have to do with socialization and feedback, but the specific description of feedback that is presented seems a bit too much of a "just-so story" to me. The difference itself is fascinating, though, and I hope more is done to further our understanding of it.
Link: Chessboxing could help train automatic emotion regulation
EDIT: Argh, I really failed to read this closely. Rewriting...
Just saw this over at Not Exactly Rocket Science. Chessboxing (or similar games) could help train automatic emotion regulation. Obviously this should generalize. Has this - by which I mean finding things that can help train automatic emotion regulation - been done before? This doesn't seem to be anything new - and this is extrapolation, not experimental results - but it's a neat application.
People Neglect Who They Really Are When Predicting Their Own Future Happiness [link]
The scientists who conducted this interesting study...
found that our natural sunny or negative dispositions might be a more powerful predictor of future happiness than any specific event. They also discovered that most of us ignore our own personalities when we think about what lies ahead -- and thus miscalculate our future feelings.
Goals vs. Rewards
Related: Terminal Values and Instrumental Values, Applying behavioral psychology on myself
Recently I asked myself, what do I want? My immediate response was that I wanted to be less stressed, particularly for financial reasons. So I started to affirm to myself that my goal was to become wealthy, and also to become less stressed. But then in a fit of cognitive dissonance, I realized that both money and relaxation are most easily considered in terms of being rewards, not goals. I was oddly surprised by the fact that there is a distinction between the two concepts to begin with.
It later occurred to me to wonder if some things work better when framed as goals and not as rewards. Freedom, long life, good relationships, and productivity seemed some likely candidates. I can't quite see them as rewards because a) I feel everyone innately deserves and should have them (even though they might have to work for them), and b) they don't quite give the kind of fuzzies that motivate immediate action.
These two kinds of positive motivation seem to work in psychologically dissimilar ways. Money for example is more like chocolate, something one has immediate instinctive motive to obtain and consume. Freedom of speech is more along the lines of having enough air to breathe. A person needs and perhaps inherently deserves to have at least a little bit of it all the time, and as a general rule will have a constant background motive to ensure that it stays available. It's a longer-term form of motivation.
A reward seems to be something where you receive immediate fuzzies when you achieve it. Getting paid, getting a pat on the back, getting your posts and comments upvoted... Things where you might consider them more or less optional in the grander scheme of things, yet they tend to trigger an immediate sense of positive anticipation before the event which is reinforced by a sense of satisfaction after. Actually writing a good post or comment, actually doing a good job, being a good spouse or friend -- these are surely related, but are goals in and of themselves. The mental picture for a goal is one of achieving, as opposed to receiving.
One thing that seems likely to me is that the presence of shared goals (and the communication thereof) tends to be a good way to generate long-term social bonds. Rewards seem to be better suited to deliberately steering behavior in more specific ways. Both are thus important elements of social signaling within a tribe, but serve different underlying purposes.
As an example I have the transhumanist goal of eliminating the current limitations of the human lifespan, and tend to have an affinity for people who also internalize that goal. But someone who does not embrace that goal on a deep level may still display specific behavior that I consider helpful for that goal, e.g. displaying comprehension of its internal logic or having a tolerant attitude towards actions I think need to be taken. I'm probably somewhat less likely to form a long-term relationship with that person than if they were identifiable as a fellow transhumanist, but I am still likely to upvote their comments or otherwise signal approval in ways that don't demand too much long term commitment.
The distinctions I've drawn here between a goal and a reward might not apply directly to non-human intelligences. In fact it might be misleading in the more generalized context to call a reward something other than a goal (it is at least an implicit goal or value). However the distinction still seems like something that could be relevant for instrumental rationality and personal development. Our brains process the two forms of motivational anticipation in different ways. It may be that a part of the akrasia problem -- failure to take action towards a goal -- actually relates to a failure to properly categorize a given motive, and hence failure to process it usefully.
Thanks to the early commenters for their feedback: TheOtherDave, nornagest, endoself, David Gerard, nazgulnarsil, and Normal Anomaly. Hopefully this expanded version is more clear.
The Decline Effect and the Scientific Method [link]
The Decline Effect and the Scientific Method (article @ the New Yorker)
First, as a physicist, I do have to point out that this article concerns mainly softer sciences, e.g. psychology, medicine, etc.
A summary of explanations for this effect:
- "The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out."
- "Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found."
- "Richard Palmer... suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. ... Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results."
- "According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. ... The current “obsession” with replicability distracts from the real problem, which is faulty design."
These problems are with the proper usage of the scientific method, not the principle of the method itself. Certainly, it's important to address them. I think the reason they appear so often in the softer sciences is that biological entities are enormously complex, and so higher-level ideas that make large generalizations are more susceptible to random error and statistical anomalies, as well as personal bias, conscious and unconscious.
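To make the regression-to-the-mean explanation concrete, here is a toy simulation of my own (not from the article): assume a modest true effect, let the first published result be the most striking of many early attempts, and publish replications regardless of outcome. All of the numbers (effect size, noise, sample sizes) are invented for illustration.

```python
import random
import statistics

random.seed(1)

# Toy model: every lab measures the same modest true effect with noise.
TRUE_EFFECT = 0.2
NOISE_SD = 1.0
N_PER_STUDY = 30

def run_study():
    """Return one study's estimate of the effect (a noisy sample mean)."""
    samples = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_PER_STUDY)]
    return statistics.mean(samples)

# Publication bias: the first published result is the most impressive
# of twenty attempts -- an early statistical fluke.
first_published = max(run_study() for _ in range(20))

# Replications get published regardless of outcome, so their average
# regresses back toward the true effect -- the "decline."
replication_avg = statistics.mean(run_study() for _ in range(200))

print("first published: %.2f, replication average: %.2f"
      % (first_published, replication_avg))
```

The first published estimate overshoots the true effect, and the replications drift back down - no fraud required, just selection on luck.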
For those who haven't read it, take a look at Richard Feynman on cargo cult science if you want a good lecture on experimental design.
When and how psychological data is collected affects the kind of students who volunteer
http://bps-research-digest.blogspot.com/2010/12/when-and-how-psychological-data-is.html
Psychology has a serious problem. You may have heard about its over-dependence on WEIRD participants - that is, those from Western, Educated, Industrialised, Rich Democracies. More specifically, as regular readers will be aware, countless psychology studies involve undergraduate students, particularly psych undergrads. Apart from the obvious fact that this limits the generalisability of the findings, Edward Witt and his colleagues provide evidence in a new paper for two further problems, this time involving self-selection biases.
Just over 500 Michigan State University undergrads (75 per cent were female) had the option, at a time of their choosing during the Spring 2010 semester, to volunteer either for an on-line personality study, or a face-to-face version. The data collection was always arranged for Wednesdays at 12.30pm to control for time of day/week effects. Also, the same personality survey was administered by computer in the same way in both experiment types, it's just that in the face-to-face version it was made clear that the students had to attend the research lab, and an experimenter would be present.
Just 30 per cent of the sample opted for the face-to-face version. Predictably enough, these folk tended to score more highly on extraversion. The effect size was small (d = -.26) but statistically significant. As regards more specific personality traits, the students who chose the face-to-face version were also more altruistic and less cautious.
What about choice of semester week? As you might expect, it was the more conscientious students who opted for dates earlier in the semester (r = -.20). What's more, men were far more likely to volunteer later in the semester, even after controlling for average personality difference between the sexes. For example, 18 per cent of week one participants were male compared with 52 per cent in the final, 13th week.
In other words, the kind of people who volunteer for research will likely vary according to the time of semester and the mode of data collection. Imagine you used false negative feedback on a cognitive task to explore effects on confidence and performance. Participants tested at the start of semester, who are typically more conscientious and motivated, are likely to be affected in a different way than participants who volunteer later in the semester.
This isn't the first time that self-selection biases have been reported in psychology. A 2007 study, for example, suggested that people who volunteer for a 'prison study' are likely to score higher than average on aggressiveness and social dominance, thus challenging the generalisability of Zimbardo's seminal work. However, despite the occasional study highlighting these effects, there seems to be little enthusiasm in the social psychological community to do much about it.
So what to do? The specific issues raised in the current study could be addressed by sampling throughout a semester and replicating effects using different data collection methods. 'Many papers based on college students make reference to the real world implications of their findings for phenomena like aggression, basic cognitive processes, prejudice, and mental health,' the researchers said. 'Nonetheless, the use of convenience samples place limitations on the kinds of inferences drawn from research. In the end, we strongly endorse the idea that psychological science will be improved as researchers pay increased attention to the attributes of the participants in their studies.'
_________________________________
Witt, E., Donnellan, M., and Orlando, M. (2011). Timing and selection effects within a psychology subject pool: Personality and sex matter. Personality and Individual Differences, 50 (3), 355-359 DOI: 10.1016/j.paid.2010.10.019
Science reveals how not to choke under pressure
Found via reddit, excerpt:
Choking happens when we let anxious thoughts distract us or when we start trying to consciously control motor skills best left on autopilot. ...
In her new book, Choke: What the Secrets of the Brain Reveal About Success and Failure at Work and at Play, Beilock deconstructs high-stakes moments—the ones seen around the world and the ones only our mothers care about—to explore why we sometimes falter, and why other times we nail it. ...
What goes wrong in our brain when this happens?
Working memory, housed in the prefrontal cortex, is what allows us to do calculations in our head and reason through a problem. Unfortunately, it’s a limited resource. If we’re doing an activity that requires a lot of cognitive horsepower, such as responding to an on-the-spot question, and at the same time we’re worrying about screwing up, then suddenly we don’t have the brainpower we need. Also, once we feel stressed, we often try to control what we’re doing in order to ensure success. So if we’re doing a task that normally operates largely outside of conscious awareness, such as an easy golf swing, what screws us up is the impulse to think about and control our actions. Suddenly we’re too attentive to what we’re doing, and all the training that has improved our motor skills is for naught, since our conscious attention is essentially hijacking motor memory. ...
How can I prevent myself from overthinking?
You might think that writing about your worries would just make them more salient. But there is work in clinical psychology showing that writing helps limit ruminative thoughts—those negative thoughts that are very hard to shake and that seem to grow the more you dwell on them. The idea is that you cognitively outsource your worries to the page. Writing about worries for 10 minutes right before taking a standardized test is really beneficial.
Applied cognitive science: learning from a faux pas
Cross-posted from my LiveJournal:
Yesterday evening, I pasted to two IRC channels an excerpt of what someone had written. In the context of the original text, that excerpt had seemed to me like harmless if somewhat raunchy humor. What I didn't realize at the time was that by removing the context, the person writing it came off looking like a jerk, and by laughing at it I came off looking as something of a jerk as well.
Two people, both of whom I have known for many years now and whose opinions I value, approached me by private message and pointed out that that may not have been the smartest thing to do. My initial reaction was defensive, but I soon realized that they were right and thanked them for pointing it out to me. Putting on a positive growth mindset, I decided to treat this event as a positive one, as in the future I'd know better.
Later that evening, as I lay in bed waiting to fall asleep, the episode replayed itself in my mind. I learnt long ago that trying to push such replays out of my mind would just make them take longer and make them feel worse. So I settled back to just observing the replay and waiting for it to go away. As I waited, I started thinking about what kind of lower-level neural process this feeling might be a sign of.
Artificial neural networks use what is called a backpropagation algorithm to learn from mistakes. First the network is provided some input, then it computes some value, and then the obtained value is compared to the expected value. The difference between the obtained and expected value is the error, which is then propagated back from the end of the network to the input layer. As the error signal works its way through the network, neural weights are adjusted in such a fashion to produce a different output the next time.
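The loop just described can be sketched in a few lines. This is a minimal, hand-rolled illustration of backpropagation (the network size, learning rate, and the single input/target pair are all invented for this example), with the four steps from the paragraph above marked in comments:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny network: 2 inputs -> 2 hidden units -> 1 output, random weights.
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    y = sigmoid(sum(w * hj for w, hj in zip(w_out, h)))
    return h, y

def train_step(x, target, lr=1.0):
    h, y = forward(x)                       # 1. compute a value from the input
    delta_out = (y - target) * y * (1 - y)  # 2. compare it to the expected value
    # 3. propagate the error back from the output toward the input layer
    delta_hidden = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]
    # 4. adjust weights to produce a different output next time
    for j in range(2):
        w_out[j] -= lr * delta_out * h[j]
        for i in range(2):
            w_hidden[j][i] -= lr * delta_hidden[j] * x[i]
    return (y - target) ** 2                # squared error before the update

# Repeated steps on one input/target pair shrink the error.
errors = [train_step([1.0, 0.5], 0.2) for _ in range(200)]
print("error: %.4f -> %.6f" % (errors[0], errors[-1]))
```

Each pass, the error signal nudges every weight in the direction that would have made the output less wrong.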
Backprop is known to be biologically unrealistic, but there are more realistic algorithms that work in a roughly similar manner. The human brain seems to be using something called temporal difference learning. As Roko described it: "Your brain propagates the psychological pain 'back to the earliest reliable stimulus for the punishment'. If you fail or are punished sufficiently many times in some problem area, and acting in that area is always preceded by [doing something], your brain will propagate the psychological pain right back to the moment you first begin to [do that something]".
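A toy TD(0) sketch shows this propagation directly (the chain length, learning rate, and punishment value here are all invented for illustration): a single punishment at the end of a chain of states creeps backwards, episode by episode, until it reaches the earliest state.

```python
# States form a chain: state 0 is the first step toward the action,
# state 4 is the rebuke itself, which delivers the punishment.
N_STATES = 5
PUNISHMENT = -1.0
ALPHA = 0.5   # learning rate
GAMMA = 1.0   # no discounting, for simplicity

values = [0.0] * N_STATES  # learned "how bad is this state" estimates

def run_episode():
    for s in range(N_STATES - 1):
        reward = PUNISHMENT if s + 1 == N_STATES - 1 else 0.0
        # TD(0) update: move V(s) toward reward + GAMMA * V(next state).
        values[s] += ALPHA * (reward + GAMMA * values[s + 1] - values[s])

for episode in range(50):
    run_episode()

# The pain has propagated all the way back to the earliest state.
print(["%.2f" % v for v in values])
```

After enough episodes, even the first state in the chain is tagged as nearly as bad as the punishment itself, which is exactly the "feel stupid the moment you think of doing it" effect.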
As I lay there in bed, I couldn't help the feeling that something similar to those two algorithms was going on. The main thing that kept repeating itself was not the actual action of pasting the quote to the channel or laughing about it, but the admonishments from my friends. Being independently rebuked for something by two people I considered important: a powerful error signal that had to be taken into account. Their reactions filling my mind: an attempt to re-set the network to the state it was in soon after the event. The uncomfortable feeling of thinking about that: negative affect flooding the network as it was in that state, acting as a signal to re-adjust the neural weights that had caused that kind of an outcome.
After those feelings had passed, I thought about the episode again. Now I felt silly for committing that faux pas, for now it felt obvious that the quote would come across badly. For a moment I wondered if I had just been unusually tired, or distracted, or otherwise out of my normal mode of thought to not have seen that. But then it occurred to me - the judgment of this being obviously a bad idea was produced by the network that had just been rewired in response to social feedback. The pain of the feedback had been propagated back to the action that caused it, so just thinking about doing that (or thinking about having done that) made me feel stupid. I have no way of knowing whether the "don't do that, idiot" judgment is something that would actually have been produced had I been paying more attention, or if it's a genuinely new judgment that wouldn't have been produced by the old network.
I tend to be somewhat amused by the people who go about claiming that computers can never be truly intelligent, because a computer doesn't genuinely understand the information it's processing. I think they're vastly overestimating how smart we are, and that a lot of our thinking is just relatively crude pattern-matching, with various patterns (including behavioral ones) being labeled as good or bad after the fact, as we try out various things.
On the other hand, there would probably have been one way to avoid that incident. We do have the capacity for reflective thought, which allows us to simulate various events in our heads without needing to actually undergo them. Had I actually imagined the various ways in which people could interpret that quote, I would probably have relatively quickly reached the conclusion that yes, it might easily be taken as jerk-ish. Simply imagining that reaction might then have provided the decision-making network with a similar, albeit weaker, error signal and taught it not to do that.
However, there's the question of combinatorial explosions: any decision could potentially have countless consequences, and we can't simulate them all. (See the epistemological frame problem.) So in the end, knowing the answer to the question of "which actions are such that we should pause to reflect upon their potential consequences" is something we need to learn by trial and error as well.
So I guess the lesson here is that you shouldn't blame yourself too much if you've done something that feels obviously wrong in retrospect. That decision was made by an earlier version of you. Although it feels obvious now, that version of you might literally have had no way of knowing that it was making a mistake, as it hadn't been properly trained yet.
Link: Writing exercise closes the gender gap in university-level physics
15-minute writing exercise closes the gender gap in university-level physics:
Think about the things that are important to you. Perhaps you care about creativity, family relationships, your career, or having a sense of humour. Pick two or three of these values and write a few sentences about why they are important to you. You have fifteen minutes. It could change your life.
This simple writing exercise may not seem like anything ground-breaking, but its effects speak for themselves. In a university physics class, Akira Miyake from the University of Colorado used it to close the gap between male and female performance. In the university’s physics course, men typically do better than women but Miyake’s study shows that this has nothing to do with innate ability. With nothing but his fifteen-minute exercise, performed twice at the beginning of the year, he virtually abolished the gender divide and allowed the female physicists to challenge their male peers.
The exercise is designed to affirm a person’s values, boosting their sense of self-worth and integrity, and reinforcing their belief in themselves. For people who suffer from negative stereotypes, this can make all the difference between success and failure.
The article cites a paper, but it's behind a paywall:
http://www.sciencemag.org/content/330/6008/1234
Games People Play
Game theory is great if you know what game you're playing. All this talk of Diplomacy reminds me of this recollection from Adam Cadre:
I remember that in my ninth grade history class, the teacher had us play a game that was supposed to demonstrate how shifting alliances work. He divided the class into seven groups — dubbed Britain, France, Germany, Belgium, Italy, Austria and Russia — and, every few minutes, declared a "battle" between two of the countries. Then there was a negotiation period, during which we all were supposed to walk around the room making deals. Whichever warring country collected the most allies would win the battle and a certain number of points to divvy up with its allies. The idea, I think, was that countries in a battle would try to win over the wavering countries by promising them extra points to jump aboard.
That's not how it worked in practice. Three or four guys — the same ones who had gotten themselves elected to ASB, the student government — decided among themselves during the first negotiation period what the outcome would be, and told people whom to vote for. And the others just shrugged and did as they were told. The ASB guys had decided that Germany would win, followed by France, Britain, Belgium, Austria, Italy and Russia. The first battle was France vs. Russia. Germany and Britain both signed up on the French side. Austria and Italy, realizing that if they just went along with the ASB plan they'd come in 5th and 6th, joined up with Russia. That left it up to Belgium. I was on team Belgium. I voted to give our vote to the Russian side, because that way at least we weren't doomed to come in 4th. And no one else on my team went along. They meekly gave their points to the French side. (As I recall, Josh Lorton was particularly adamant about this. I guess he thought it would make the ASB guys like him.) After that, there was no contest. Britain vs. Austria? 6-1, Britain. Germany vs. Belgium? 6-1, Germany. (And we could have beaten them if we'd just formed a bloc with the other three losers!) The teacher noticed that Germany and France were always on the same side and declared Germany vs. France. Outcome: 6-1, Germany.
The ASB guys were able to just impose their will on a class of 40 students. No carrots, no sticks, just "here's what will happen" and everyone else nodding. I have no idea how that works. I do recall that because they were in student government, for fourth period they had to take a class called Leadership. From what I could tell they just spent the class playing volleyball out in the quad. But I guess they were learning something!
What happened? Why did Italy and Russia fall into line and abandon Austria in the second battle?
This utterly failed to demonstrate the "shifting alliances" that Adam thought the teacher wanted. Does this happen every year?
Yes, the students were coerced into "playing" this game, but elsewhere he describes the same thing happening in games that people choose to play. Moreover, he tells the first story to illustrate his perception of politics.
Variation on conformity experiment
A new variation on the Asch conformity experiment was recently published. The experiment was performed in Japan and used polarizing glasses to show different lines to different people in the same room, so that the subjects had to disagree with others they actually knew, and who genuinely believed that they were answering correctly. The study found that women conformed by giving a wrong answer about a third of the time, but men did not.
Learned about this via Ben Goldacre's blog.
Number bias
The New York Times ran an editorial about an interesting type of cognitive bias: according to the article, the fact that our system of timekeeping is based on factors of 24, 7, etc. and the fact that we have 10 fingers profoundly influences our way of thinking. As the article explains, this bias is distinct from scope neglect and misunderstanding of probability. Has anyone else heard of this kind of "number bias" before? Also, is this an issue that deserves further study on LessWrong?
6 Minute Intro to Evolutionary Psychology
In the spirit of You Are A Brain, this is a 6-minute presentation on Evolutionary Psychology that I gave at Toastmasters and may give again. Be sure to click on show speaker notes (in Actions) to see the full text.
6 Minute Intro to Evolutionary Psychology
Any suggestions for improvements? Some people didn’t get it. Also, is it accurate enough? Also, I think the Wason selection task argument isn’t all that compelling and takes up about half of the time. Is there a better example I could use? (The speech was supposed to either inform or persuade, and persuading required informing, so I tried to focus just on informing.)