Marsh et al. "Serotonin Transporter Genotype (5-HTTLPR) Predicts Utilitarian Moral Judgments"
The whole paper is here. In short, they found a genotype that predicts people's response to the original trolley problem:
A trolley (i.e. in British English a tram) is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?
Participants with one variant of the serotonin transporter gene (LL homozygotes) judged flipping the switch to be better than a morally neutral action. Participants with the other variant (S-allele carriers) judged flipping the switch to be no better than a morally neutral action. The groups responded equally to the "fat man" scenario, with both rejecting the 'push' option.
Some quotes:
We hypothesized that 5-HTTLPR genotype would interact with intentionality in respondents who generated moral judgments. Whereas we predicted that all participants would eschew intentionally harming an innocent for utilitarian gains, we predicted that participants' judgments of foreseen but unintentional harm would diverge as a function of genotype. Specifically, we predicted that LL homozygotes would adhere to the principle of double effect and preferentially select the utilitarian option to save more lives despite unintentional harm to an innocent victim, whereas S-allele carriers would be less likely to endorse even unintentional harm. Results of behavioral testing confirmed this hypothesis.
Participants in this study judged the acceptability of actions that would unintentionally or intentionally harm an innocent victim in order to save others' lives. An analysis of variance revealed a genotype × scenario interaction, F(2, 63) = 4.52, p = .02. Results showed that, relative to long allele homozygotes (LL), carriers of the short (S) allele showed particular reluctance to endorse utilitarian actions resulting in foreseen harm to an innocent individual. LL genotype participants rated perpetrating unintentional harm as more acceptable (M = 4.98, SEM = 0.20) than did SL genotype participants (M = 4.65, SEM = 0.20) or SS genotype participants (M = 4.29, SEM = 0.30).
...
The results indicate that inherited variants in a genetic polymorphism that influences serotonin neurotransmission influence utilitarian moral judgments as well. This finding is interpreted in light of evidence that the S allele is associated with elevated emotional responsiveness.
A Sketch of an Anti-Realist Metaethics
Below is a sketch of a moral anti-realist position based on the map-territory distinction, Hume and studies of psychopaths. Hopefully it is productive.
The Map is Not the Territory Reviewed
Consider the founding metaphor of Less Wrong: the map-territory distinction. Beliefs are to reality as maps are to territory. As the wiki says:
Since our predictions don't always come true, we need different words to describe the thingy that generates our predictions and the thingy that generates our experimental results. The first thingy is called "belief", the second thingy "reality".
Of course the map is not the territory.
Here is Albert Einstein making much the same analogy:
Physical concepts are free creations of the human mind and are not, however it may seem, uniquely determined by the external world. In our endeavor to understand reality we are somewhat like a man trying to understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he may never be quite sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and cannot even imagine the possibility or the meaning of such a comparison. But he certainly believes that, as his knowledge increases, his picture of reality will become simpler and simpler and will explain a wider and wider range of his sensuous impressions. He may also believe in the existence of the ideal limit of knowledge and that it is approached by the human mind. He may call this ideal limit the objective truth.
The above notions about beliefs involve pictorial analogs, but we can also imagine other ways the same information could be contained. If the ideal map is turned into a series of sentences we can define a 'fact' as any sentence in the ideal map (IM). The moral realist position can then be stated as follows:
Moral Realism: ∃x((x ⊆ IM) & (x = M))
In English: there is some set of sentences x such that every sentence in x is part of the ideal map and x provides a complete account of morality (call that account M).
Moral anti-realism simply negates the above: ¬∃x((x ⊆ IM) & (x = M)).
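The set-theoretic content of the realist claim can be illustrated with a toy model (my own construction, purely illustrative, not part of the original argument): since x ranges over subsets of IM, ∃x((x ⊆ IM) & (x = M)) holds exactly when M itself is a subset of IM.

```python
# Toy model: the ideal map IM as a set of sentences, and a candidate
# complete moral account M. The realist claim reduces to: M ⊆ IM.
# The sentences below are placeholders, not claims about the real IM.

IM = {"water is H2O", "the trolley weighs 10 tons", "five people are on the track"}
M = {"flipping the switch is permissible"}  # a candidate moral account

# ∃x((x ⊆ IM) & (x = M)) is equivalent to M ⊆ IM.
moral_realism_holds = M <= IM
print(moral_realism_holds)  # False in this toy model: no moral sentence is in IM
```

In this toy model realism fails because no sentence of M appears in IM; the anti-realist claims the real ideal map is like this.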
Meta: A 5 karma requirement to post in discussion
Admins have been doing a decent, timely job taking down the spam that comes up in the Discussion section. But it is an eyesore for however long it stays up, and there seems to be more and more of it. There is an easy solution: a small karma requirement for Discussion section posts. I think 5 would be about right. A reasonable, literate person can get 5 karma pretty easily. "Hi, I'm new" usually does it; that plus a halfway-insightful comment almost certainly will. This would screen out the spammers. As for the occasional genuine user who posts in Discussion before commenting at all, I don't know how many there have been, but my sense is that delaying their posting until they can collect five upvotes is almost certainly a good thing.
Thoughts? Or is changing this actually a difficult task that requires rewriting the site's code and that's why it hasn't been done already?
Dutch Books and Decision Theory: An Introduction to a Long Conversation
For a community that endorses Bayesian epistemology we have had surprisingly few discussions about the most famous Bayesian contribution to epistemology: the Dutch Book arguments. In this post I present the arguments, though it is far from clear what the right way to interpret them is, or even whether they prove what they set out to prove. The Dutch Book arguments attempt to justify the Bayesian approach to science and belief; I will also suggest that any successful Dutch Book defense of Bayesianism cannot be disentangled from decision theory. But mostly this post is meant to introduce people to the argument and get people thinking about a solution. The literature is scant enough that it is plausible people here could actually make genuine progress, especially since the problem is related to decision theory.1
Bayesianism fits together. Like a well-tailored jacket it feels comfortable and looks good. It's an appealing, functional aesthetic for those with cultivated epistemic taste. But sleekness is not a rigorous justification, and so we should ask: why must a rational agent adopt the axioms of probability as constraints on her degrees of belief? Further, why should agents accept the principle of conditionalization as a rule of inference? These are the questions the Dutch Book arguments try to answer.
The arguments begin with an assumption about the connection between degrees of belief and willingness to wager: an agent with degree of belief b in hypothesis h is assumed to be willing to buy a unit wager on h for any price up to and including $b, and to sell a unit wager on h for any price down to and including $b. For example, if my degree of belief that I can drink ten eggnogs without passing out is .3, I am willing to bet $0.30 on the proposition that I can drink the nog without passing out when the stakes of the bet are $1. Call this the Will-to-wager Assumption. As we will see, it is problematic.
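To make the Will-to-wager Assumption concrete, here is a toy sketch (my own illustration, not the canonical presentation from the literature) of how a bookie exploits an agent whose beliefs violate the additivity axiom:

```python
# Toy Dutch Book: an agent whose degrees of belief in h and in not-h
# sum to more than 1 buys a $1 wager on each at prices equal to her
# beliefs (the Will-to-wager Assumption). Exactly one wager pays out,
# so her net payoff is the same however the world turns out.

def agent_payoff(belief_h, belief_not_h, h_is_true):
    """Agent's net payoff after buying unit wagers on both h and not-h."""
    cost = belief_h + belief_not_h  # total price paid for the two tickets
    winnings = 1.0                  # exactly one of the two wagers pays $1
    return winnings - cost          # note: does not depend on h_is_true

# Incoherent beliefs: b(h) + b(not-h) = 1.2 > 1 guarantees a sure loss.
for outcome in (True, False):
    print(round(agent_payoff(0.6, 0.6, outcome), 10))  # -0.2 either way
```

With coherent beliefs (say b(h) = 0.5, b(not-h) = 0.5) the same pair of bets breaks even, which is the point of the argument: only probabilistically coherent degrees of belief are immune to a guaranteed loss.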
Karma Motivation Thread
This idea is so obvious I can't believe we haven't done it before. Many people here have posts they would like to write but keep procrastinating on. Many people also have other work to do but keep procrastinating on Less Wrong. Making akrasia cost you money is often a good way to motivate yourself, but that can be enough of a hassle to deter the lazy, the ADD-addled and the executive dysfunctional. So here is a low-transaction-cost alternative that takes advantage of the addictive properties of Less Wrong karma. Post a comment here with a task and a deadline. Pick tasks whose completion other posters can confirm: either Less Wrong posts or projects that can be linked to or photographed. When the deadline comes, edit your comment to include a link to the completed task. If you complete the task, expect upvotes. If you fail to complete it by the deadline, expect your comment to be downvoted into oblivion. If you see a completed task, vote that comment up. If you see a missed deadline, vote that comment down, and at least one person should reply to the comment noting the deadline has passed -- that way it will come up in the recent comments and more eyes will see it.
Edit: DanArmak makes a great suggestion. Several people have now used this to commit to doing something others can benefit from, like LW posts; so when a user commits to doing something, everyone who is interested in that thing being done should upvote that comment, and if the task is not complete by the deadline, everyone who upvoted commits to coming back and downvoting it instead. This way people can judge whether the community is interested in their post, the karma gained or lost is proportional to the amount of interest, and switching an upvote to a downvote effectively doubles the amount of karma at stake.
Pet Cryonics
Open discussion.
I think my dog is about to die. Even if I thought it was worth it, I don't have the money to freeze her. But I am curious how people here feel about the practice and whether anyone plans to do this for their pet. It seems like a practice that plays into the image of cryonics as the domain of strange and egotistical rich people. On the other hand, it also seems like a rather human and heartwarming practice. Is pet cryopreservation good for the image of cryonics?
Also, do people who opt for neuropreservation get their pets preserved? Will people upload pets? Assuming life as an emulation feels different from life as a biological organism, is it ethical to upload animals? The transition might be strange and uncomfortable, but we expect at least some humans to take that risk knowingly and live with any differences. Animals can't understand the choice and might not have the mental flexibility to adjust.
Open Thread: May 2010
You know what to do.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
The Red Bias
Summary: This color alters your perception of the world. Evidence that it does, how it does, why it does and some implications are presented below.
(Overcoming Bias: Seeing Red)

Across a range of sports, we find that wearing red is consistently associated with a higher probability of winning. These results indicate not only that sexual selection may have influenced the evolution of human response to colours, but also that the colour of sportswear needs to be taken into account to ensure a level playing field in sport.1
In the study quoted above, Hill and Barton examine the outcomes of the 2004 Olympic Games in boxing, tae kwon do, Greco-Roman wrestling and freestyle wrestling. In these events competitors were randomly assigned red or blue outfits for each bout. In matches where one side dominated, outfit color made little difference. In close matches, however, combatants in red won over 60% of the time. This pattern makes sense: when one competitor is clearly superior, other factors swamp any effect of color, so the advantage shows up only where the contest is otherwise even.
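As a rough sketch of the arithmetic behind such a result, the following uses hypothetical counts (my invention, not Hill and Barton's actual data) to show how one could check whether a roughly 60% red win rate in close bouts beats chance:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k red wins
    out of n close bouts if outfit color made no difference."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts, NOT the paper's data: red wins 19 of 30 close bouts (~63%).
p_value = binom_tail(30, 19)
print(f"one-sided p-value under a fair-coin null: {p_value:.3f}")
```

Even a 63% win rate in a sample this small leaves a p-value around 0.1, which is why aggregating across many bouts and several sports matters for a result like this.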
Ethics has Evidence Too
A tenet of traditional rationality is that you can't learn much about the world from armchair theorizing. Theory must be downstream of observation: our theories are functions that tell us what experiences we should anticipate, but we generate the theories from *past* experiences. And of course we update our theories on the basis of new experiences. Our theories respond to our evidence, usually not the other way around. We do it this way because it works better than trying to make predictions on the basis of concepts or abstract reasoning. Philosophy from Plato through Descartes to Kant is replete with failed examples of theorizing about the natural world on the basis of something other than empirical observation: Socrates thinks he has deduced that souls are immortal; Descartes thinks he has deduced that he is an immaterial mind, that he is immortal, that God exists and that he can have secure knowledge of the external world; Kant thinks he has proven by pure reason the necessity of Newton's laws of motion.
These mistakes aren't found only in philosophy curricula. There is a long list of people who thought they could deduce Euclid's theorems as analytic or a priori knowledge. Epicycles were a response to new evidence, but not a response that truly privileged the evidence: geocentric astronomers changed their theory *just enough* to yield the right predictions instead of letting a new theory flow from the evidence. The same goes for pre-Einsteinian theories of light, and for quantum mechanics. A kludge is a sign someone is privileging the hypothesis. It's the same way many of us think the Italian police changed their hypothesis explaining the murder of Meredith Kercher once it became clear Lumumba had an alibi and Rudy Guede's DNA and handprints were found all over the crime scene. They just replaced Lumumba with Guede and left the rest of their theory unchanged, even though there was no longer any reason to include Knox and Sollecito in the explanation of the murder. These theories may clear the bar of traditional rationality, but they sail right under what Bayes' theorem requires.
Most people here get this already and many probably understand it better than I do. But I think it needs to be brought up in the context of our ongoing discussion of normative ethics.
Unless we have reason to think about ethics differently, our normative theories should respond to evidence the same way we expect theories in other domains to respond to evidence. What are the experiences we are trying to explain with our ethical theories? Why bother with ethics at all? What is the mystery we are trying to solve? The only answer I can think of is our ethical intuitions. When faced with certain situations, in real life or in fiction, we get strong impulses to react in certain ways, to praise some parties and condemn others. We feel guilt and sometimes make amends. There are some actions we find viscerally abhorrent.
These reactions are for ethics what measurements of time and distance are for physics -- the evidence.