The Trolley Problem: Dodging moral questions
The trolley problem is one of the most famous thought experiments in moral philosophy, and studies by psychologists and anthropologists suggest that the distribution of responses to its major permutations remains roughly the same across human cultures. Most people will permit pulling the lever to redirect the trolley so that it kills one person rather than five, but will balk at pushing one fat person in front of the trolley to save the five, even if that is the only available means of stopping it.
However, in informal settings, where the dilemma is posed by a peer rather than a teacher or researcher, I have observed another major category that accounts for a significant proportion of respondents' answers. Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright. They will attack the improbability of the premise, attempt to invent third options, or appeal to their emotional state in the provided scenario ("I would be too panicked to do anything"), or some combination of the above, in order to opt out of answering the question on its own terms.
Optimism versus cryonics
Within the immortalist community, cryonics is the most pessimistic possible position. Consider the following superoptimistic alternative scenarios:
- Uploading will be possible before I die.
- Aging will be cured before I die.
- They will be able to reanimate a whole mouse before I die, then I'll sign up.
- I could get frozen in a freezer when I die, and they will eventually figure out how to reanimate me.
- I could pickle my brain when I die, and they will eventually figure out how to reanimate me.
- Friendly AI will cure aging and/or let me be uploaded before I die.
Cryonics -- perfusion and vitrification at LN2 temperatures under the best conditions possible -- is far less optimistic than any of these. Of all the possible scenarios where you end up immortal, cryonics is the least optimistic. Cryonics can work even if there is no singularity or reversal technology for thousands of years. It can work under the slowest technological growth imaginable. All it assumes is that the organization (or its descendants) survives long enough, that technology doesn't go backwards (on average), and that cryopreservation of a technically sufficient nature can predate reanimation technology.
It doesn't even require the assumption that today's best vitrifications are good enough. It's entirely plausible that vitrifications won't start being good enough until 100 years from now, and that reversing them will take another 500 years after that. Perhaps today's population is doomed because of this. We don't know. But the fact that we don't know exactly what point is good enough is sufficient to make this a worthwhile endeavor at as early a point as possible. It doesn't require optimism -- it simply requires deliberate, rational action. The fact is that we are late to the party. In retrospect, we should have started preserving brains hundreds of years ago. Benjamin Franklin should have gone ahead and had himself immersed in alcohol.
There's a difference between having a fear and being immobilized by it. If you have a fear that cryonics won't work -- good for you! That's a perfectly rational fear. But if that fear immobilizes you and discourages you from taking action, you've lost the game. Worse than lost, you never played.
This is something of a response to Charles Platt's recent article on Cryoptimism: Part 1 Part 2
Morality and relativistic vertigo
tl;dr: Relativism bottoms-out in realism by objectifying relations between subjective notions. This should be communicated using concrete examples that show its practical importance. It implies in particular that morality should think about science, and science should think about morality.
Sam Harris attacks moral uber-relativism when he asserts that "science can answer moral questions". Countering the objection that morality is too imprecise to be treated by science, he makes an excellent comparison: "healthy" is not a precisely defined concept either, yet no one would seriously claim that medicine cannot answer questions of health.
What needs adding to his presentation (which is worth seeing, though I don't entirely agree with it) is what I consider the strongest concise argument in favor of science's moral relevance: that morality is relative simply means that the task of science is to examine absolute relations between morals. For example, suppose you uphold the following two moral claims:
- "Teachers should be allowed to physically punish their students."
- "Children should be raised not to commit violence against others."
First of all, note that questions of causality are significantly more accessible to science than was thought possible before 2000. Now suppose a cleverly designed, non-invasive causal analysis found that physically punishing children, frequently or infrequently, causes them to be more likely to commit criminal violence as adults. Would you find this discovery irrelevant to your adherence to these morals? Absolutely not. You would reflect and realize that you needed to prioritize them in some way. Most would prioritize the second one, but in any case, science will have made a valid impact.
So although either of the two morals is purely subjective on its own, how these morals interrelate is a question of objective fact. Though perhaps obvious, this idea has some seriously persuasive consequences and is not to be taken lightly. Why?
First of all, you might change your morals in response to them not relating to each other in the way you expected. Ideas parse differently when they relate differently. "Teachers should be allowed to physically punish their students" might never feel the same to you after you find out it causes adult violence. Even if it originally felt like a terminal (fundamental) value, your prioritization of the second moral might make the first slowly fade out of your mind over time. In hindsight, you might just see it as an old, misinformed instrumental value that was never in fact terminal.
Second, as we increase the number of morals under consideration, the number of relations for science to consider grows rapidly, as (n² − n)/2: we have many more moral relations than morals themselves. Suddenly the old disjointed list of untouchable maxims called "morals" fades into the background, and we see a throbbing circulatory system of moral relations, objective questions and answers without which no person can competently reflect on her own morality. A highly prevalent moral like "human suffering is undesirable" looks like a major organ: important on its own to a lot of people, with lots of connections in and out for science to examine.
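The quadratic growth of moral relations is easy to verify directly. Here is a minimal sketch (the example morals and variable names are illustrative, not from the original argument):

```python
from itertools import combinations

def relation_count(n: int) -> int:
    """Number of unordered pairs among n morals: (n^2 - n) / 2."""
    return (n * n - n) // 2

# Four example morals yield six pairwise relations to examine.
morals = [
    "teachers may physically punish students",
    "children should be raised not to commit violence",
    "human suffering is undesirable",
    "honesty is obligatory",
]

# Each pair is a separate empirical question about how two morals interact.
pairs = list(combinations(morals, 2))
assert len(pairs) == relation_count(len(morals))  # 6 relations for 4 morals
```

With ten morals there are already 45 relations, so the relational questions quickly outnumber the morals themselves.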
Treating relativistic vertigo
To my best recollection, I have never heard the phrase "it's all relative" used to an effect that didn't involve stopping people from thinking. When the topic of conversation — morality, belief, success, rationality, or what have you — is suddenly revealed or claimed to depend on a context, people find it disorienting, often to the point of feeling the entire discourse has been and will continue to be "meaningless" or "arbitrary". Once this happens, it can be very difficult to persuade them to keep thinking, let alone thinking productively…
Morality as Parfitian-filtered Decision Theory?
Non-political follow-up to: Ungrateful Hitchhikers (offsite)
Related to: Prices or Bindings?, The True Prisoner's Dilemma
Summary: Situations like the Parfit's Hitchhiker problem select for a certain kind of mind: specifically, one that recognizes that an action can be optimal, in a self-interested sense, even if it can no longer cause any future benefit. A mind that can identify such actions might put them in a different category which enables it to perform them, in defiance of the (futureward) consequentialist concerns that normally need to motivate it. Our evolutionary history has put us through such "Parfitian filters", and the corresponding actions, viewed from the inside, feel like "something we should do", even if we don’t do it, and even if we recognize the lack of a future benefit. Therein lies the origin of our moral intuitions, as well as the basis for creating the category "morality" in the first place.
Introduction: What kind of mind survives Parfit's Dilemma?
Parfit's Dilemma – my version – goes like this: You are lost in the desert and near death. A superbeing known as Omega finds you and considers whether to take you back to civilization and stabilize you. It is a perfect predictor of what you will do, and only plans to rescue you if it predicts that you will, upon recovering, give it $0.01 from your bank account. If it doesn’t predict you’ll pay, you’re left in the desert to die. [1]
So what kind of mind wakes up from this? One that would give Omega the money. Most importantly, the mind is not convinced to withhold payment on the basis that the benefit was received only in the past. Even if it recognizes that no future benefit will result from this decision -- and only future costs will result -- it decides to make the payment anyway.
Criteria for Rational Political Conversation
Query: by what objective criteria do we determine whether a political decision is rational?
I propose that the key elements -- necessary but not sufficient -- are (where "you" refers collectively to everyone involved in the decisionmaking process):
- you must use only documented reasoning processes:
- use the best known process(es) for a given class of problem
- state clearly which particular process(es) you use
- document any new processes you use
- you must make every reasonable effort to verify that:
- your inputs are reasonably accurate, and
- there are no other reasoning processes which might be better suited to this class of problem, and
- there are no significant flaws in your application of the reasoning processes you are using, and
- there are no significant inputs you are ignoring
If an argument satisfies all of these requirements, it is at least provisionally rational. If it fails any one of them, then it's not rational and needs to be corrected or discarded.
This is not a circular definition (defining "rationality" by referring to "reasonable" things, where "reasonable" depends on people being "rational"); it is more like a recursive algorithm, where large ambiguous problems are split up into smaller and smaller sub-problems until we get to a size where the ambiguity is negligible.
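The recursive-algorithm analogy can be made concrete with a toy sketch. This is a hypothetical illustration of the decomposition idea only; the function names and the numeric "ambiguity" measure are invented, not part of the proposal:

```python
def resolve(problem: str, ambiguity: float, threshold: float = 0.05):
    """Toy model: recursively split an ambiguous problem until each
    sub-problem is unambiguous enough to be judged directly."""
    if ambiguity <= threshold:
        # Base case: ambiguity is negligible, so judge directly.
        return f"judge '{problem}' directly"
    # Recursive case: split into two sub-problems, each (by assumption
    # in this toy model) roughly half as ambiguous as the parent.
    left = resolve(problem + ".a", ambiguity / 2, threshold)
    right = resolve(problem + ".b", ambiguity / 2, threshold)
    return [left, right]

# A moderately ambiguous problem gets split twice before judgment.
result = resolve("is policy X rational?", ambiguity=0.2)
```

The point of the analogy is that "reasonable" at one level is defined by applying the same procedure one level down, terminating where disagreement becomes negligible, rather than by circular appeal to itself.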
This is not one great moral principle; it is more like a self-modifying working process (subject to rational criticism and therefore improvable over time -- optimization by successive approximation). It is an attempt to apply the processes of science (or at least the same reasoning which arrived at those processes) to political discourse.
So... can we agree on this?
This is a hugely, vastly, mindbogglingly trimmed-down version of what I originally posted. All comments prior to 2010-08-26 20:52 (EDT) refer to that version, which I have reposted here for comparison purposes and for the morbidly curious. (It got voted down to negative 6. Twice.)
The Threat of Cryonics
It is obvious that many people find cryonics threatening. Most of the arguments encountered in debates on the topic are not calculated to persuade on objective grounds, but function as curiosity-stoppers. Here are some common examples:
- Elevated burden of proof. As if cryonics demands more than a small amount of evidence to be worth trying.
- Elevated cost expectation. Thinking that cryonics is (and could only ever be) affordable only for the very rich.
- Unresearched suspicions regarding the ethics and business practices of cryonics organizations.
- Sudden certainty that earth-shattering catastrophes are just around the corner.
- Assuming the worst about the moral attitudes of humanity's descendants towards cryonics patients.
- Associations with prescientific mummification, or sci-fi that handwaves the technical difficulties.
The question is: what causes this sensation that cryonics is a threat? What does it specifically threaten?
Rationality & Criminal Law: Some Questions
The following will explore a couple of areas in which I feel that the criminal justice system of many Western countries might be deficient, from the standpoint of rationality. I am very much interested to know your thoughts on these and other questions of the law, as far as they relate to rational considerations.
Moral Luck
Moral luck refers to the phenomenon in which behaviour by an agent is adjudged differently based on factors outside the agent's control.
Suppose that Alice and Yelena, on opposite ends of town, drive home drunk from the bar, and both dazedly speed through a red light, unaware of their surroundings. Yelena gets through nonetheless, but Alice hits a young pedestrian, killing him instantly. Alice is liable to be tried for manslaughter or some similar charge; Yelena, if she is caught, will only receive the drunk driving charge and lose her license.
Raymond, a day after finding out that his ex is now in a relationship with Pardip, accosts Pardip at his home and attempts to stab him in the chest; Pardip smashes a piece of crockery over Raymond's head, knocking him unconscious. Raymond is convicted of attempted murder, typically receiving 3-5 years chez nous (in Canada). If he had succeeded, he would have received a life sentence, with parole in 10-25 years.
Why should Alice be punished by the law and demonized by the public so much more than Yelena, when their actions were identical, differing only by the sheerest accident? Why should Raymond receive a lighter sentence for being an unsuccessful murderer?
Some prima facie plausible justifications:
- Identical behaviour is hard to judge - perhaps Yelena was really keeping a better eye on the road than Alice; perhaps Raymond would have performed a non-fatal stabbing.
- The law needs to crack down harder when there are actual victims, in order to provide the victims and families a sense of justice done.
- Punishing attempts and near-misses as severely as completed crimes could result in far too many serious, high-level trials.
Trial by Jury; Trial by Judge
Those of us who like classic films may remember 12 Angry Men (1957) with Henry Fonda. This was a remarkably good film about a jury deliberating on the murder trial of a poor young man from a bad neighbourhood, accused of killing his father. It portrays the indifference (one juror wants to be out in time for the baseball game), prejudice and conformity of many of the jurors, and how this is overcome by one man of integrity who decides to insist on a thorough look through the evidence and testimony.
I do not wish to generalize from fictional examples; however, such factors are manifestly at play in real trials, in which Henry Fonda cannot necessarily be relied upon to save the day.
Komponisto has written on the Knox case, in which an Italian jury came to a very questionable (to put it mildly) conclusion based on the evidence presented to them; other examples will doubtless spring to mind (a famous one in this neck of the woods is the Stephen Truscott case, the evidence against Truscott being entirely circumstantial).
More information on trial by jury and its limitations may be found here. Recently the UK has made some moves to trial by judge for certain cases, specifically fraud cases in which jury tampering is a problem.
The justifications cited for trial by jury typically include the egalitarian nature of the practice, which guarantees that those making final legal decisions do not form a special class over and above the ordinary citizens whose lives they affect.
A heartening example of this was mentioned in Thomas Levenson's fascinating book Newton and the Counterfeiter. Being sent to Newgate gaol was, infamously in the 17th and 18th centuries, an effective death sentence in and of itself; moreover, a surprisingly large number of crimes at this time were capital crimes (the counterfeiter whom Newton eventually convicted was hanged). In this climate of harsh punishment, juries typically only returned guilty verdicts either when evidence was extremely convincing or when the crime was especially heinous. Effectively, they counteracted the harshness of the legal system by upping the burden of proof for relatively minor crimes.
So juries sometimes provide a safeguard against abuse of justice by elites. However, is this price for democratizing justice too high, given the ease with which citizens naive about the Dark Arts may be manipulated? (Of course, judges are by no means perfect Bayesians either; however, I would expect them to be significantly less gullible.)
Are there any other systems that might be tried, besides these canonical two? What about the question of representation? Does the adversarial system, in which two sides are represented by advocates charged with defending their interests, conduce well to truth and justice, or is there a better alternative? For any alternatives you might consider: are they naive or savvy about human nature? What is the normative role of punishment, exactly?
How would the justice system look if LessWrong had to rewrite it from scratch?
Virtue Ethics for Consequentialists
Meta: Influenced by a cool blog post by Kaj, which was influenced by a cool Michael Vassar (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.
There's been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it's not postmodernism. It's virtue ethics. "What, virtue ethics?! Are you serious?" Yup. I'm so contrarian I think cryonics isn't obvious and that virtue ethics is better than consequentialism. This post will explain why.
When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even more so than at finding good reasons that they could have done things that turned out unpopular, and it's hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn't have too much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...
Study: Encouraging Obedience Considered Harmful
A while back I did a couple of posts on the care and feeding of young rationalists. Though it is not new, I recently found a truly excellent post on this topic on Dale McGowan's blog, The Meming of Life. The post details a survey carried out on ordinary citizens of Hitler's Germany, searching for correlations between style of upbringing and adult moral decisions.
Everyday Germans of the Nazi period are the focus of a fascinating study discussed in the PBB seminars and in the Ethics chapter of Raising Freethinkers. For their book The Altruistic Personality, researchers Samuel and Pearl Oliner conducted over 700 interviews with survivors of Nazi-occupied Europe. Included were both “rescuers” (those who actively rescued victims of persecution) and “non-rescuers” (those who were either passive in the face of the persecution or actively involved in it). The study revealed interesting differences in the upbringing of the two groups — specifically the language and practices that parents used to teach their values.
Non-rescuers were 21 times more likely than rescuers to have been raised in families that emphasized obedience—being given rules that were to be followed without question—while rescuers were over three times more likely than non-rescuers to identify “reasoning” as an element of their moral education. “Explained,” the authors said, is the single most common word used by rescuers in describing their parents’ ways of talking about rules and ethical ideas.
Single Point of Moral Failure
I have recently been entertaining myself with a 3-day non-stop binge of theist vs. atheist debates. On the atheist side: Richard Dawkins, Christopher Hitchens, Daniel Dennett, Sam Harris, P.Z. Myers. In the theist corner: Dinesh D'Souza, William Lane Craig, Alister McGrath, Tim Keller, and (unfortunately) Nassim Nicholas Taleb. One of the interesting points that comes up, often from Hitchens, is what I call the "Bodycount Argument". The atheist will claim: "Look at all the deaths caused by religion: the Crusades, the Inquisition, Islamic fundamentalism, Japanese militarism, the conquests of the New World", and the list goes on and on. Then the theist will claim: "Well, look at the Nazis, the Fascists, the Soviets, the Khmer Rouge...". The atheist then tries to reverse some of that, e.g. the Fascists were the Catholic right wing, the SS were mostly confessing Catholics and Hitler had churches pray for him on his birthday, and, most tenuously, that the Soviets had the support of the Orthodox church and used the pre-existing structures set up by the Czar to establish their power.
Some of that retort is convincing, some is not so much. You cannot really blame Soviet, Cambodian and Chinese massacres solely on religion. While they do at least manage to bring it to a tie, I suspect that the atheists follow this argument up suboptimally. My instinctive reaction would be "ok, so you proved that except for religion, communism leads to mass slaughter too. I have no problem doing away with both". But the Theists have a stronger form of their argument in which they claim that the crimes of communism are -because- of atheism, so a simple one-line retort won't work in all cases. We need to lay a deeper foundation for that claim to be convincing.
Enter single points of failure. The rudimentary definition, usually given in terms of computer networks, is that a single point of failure is the component which takes down the entire system when it fails. While the term originated in computer science, as far as I can tell, it can be applied to human networks as well. The strategy of Alexander the Great at the battle of Issus was, instead of trying to defeat the entire Persian army, which vastly outnumbered his own, to attack the Persian king Darius directly. When Alexander forced Darius to flee, the entire Persian army fell into disarray: one wing executed an orderly retreat, but the left flank completely disintegrated while being pursued by Alexander's cavalry. So while the term is new, the concept has long been known and has been used to great effect.
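The network definition can be sketched in a few lines of code. The hub-and-spoke graph and its node names below are invented for illustration (a toy command structure, not a real historical model):

```python
from collections import deque

def reachable(graph, start):
    """Return the set of nodes reachable from start via breadth-first search."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# A hub-and-spoke command structure: everything routes through "leader".
graph = {
    "leader": ["officer_a", "officer_b"],
    "officer_a": ["troop_1", "troop_2"],
    "officer_b": ["troop_3"],
}

def without(graph, dead):
    """Remove a node (e.g. the defeated king) and all edges to it."""
    return {n: [m for m in ms if m != dead]
            for n, ms in graph.items() if n != dead}

# With the hub intact, the whole network of 6 nodes is connected...
assert len(reachable(graph, "leader")) == 6
# ...but remove the single point of failure and coordination collapses
# into disconnected fragments:
assert reachable(without(graph, "leader"), "officer_a") == {"officer_a", "troop_1", "troop_2"}
```

The same structural fact holds whether the nodes are servers or soldiers: the more the network routes through one hub, the more catastrophic that hub's failure.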
What I want to argue is that all the examples cited by theists and atheists alike are instances of a single point of -moral- failure. Here, instead of the system disintegrating or ceasing to operate, it goes into a sequence of actions that, when examined by an outside human observer, or even by the participants themselves at a later date, seem immoral, irrational, and akin to madness. The common point in all the examples is that a central organization, supported by a specific fanaticizing ideology, ordered the massacres to occur, and the people at the lower ranks implemented those orders, despite perhaps individually knowing better.
My explanation of this, is that the lower-ranks had in effect outsourced their moral sense to their leadership. As with all centralised structures, when things go well, they go -really- well (assuming aligned incentives, greedy algorithms generally will not be as optimal as top-down ones), but when they go bad, they can be disastrous. The bigger the power of the network, the bigger the consequences. It is not hard to imagine why the outsourcing happened. Humans are tribal. I think very few, having observed the weekly rituals called 'football games' (whatever your definition of football is) would disagree. But humans are also moral. We have a rough set of rules that we tend to follow relatively consistently. What is of interest in these cases, is that an individual's tribalism completely overrode that individual's personal morality. And this happened repeatedly and reliably, throughout the ranks of each of these human networks.
Coming back to the original argument, if indeed tribalism trumps morality, and the above give us good reason to believe it does, then the theist argument that god put morality inside us comes into question. It does not explain why god saw fit to make our morality less powerful a motivator than our tribal instincts. But the biological explanation stands confirmed: If morality is a mechanism that was useful for intra-tribe interactions, then it would -have- to be suspended when the tribe was facing another. One can imagine the pacifist tribe being annihilated by the non-pacifist tribes around it or, lest I be accused of arguing for group selection, the individual pacifists being attacked both by their own tribe or the enemy tribe. Tribalists may disagree about who gets to live and who gets the resources, but they don't disagree about tribalism.