Pancritical Rationalism Can Apply to Preferences and Behavior
ETA: As stated below, criticizing beliefs is trivial in principle: either they were arrived at by an approximation to Bayes' rule, starting with a reasonable prior and updated with actual observations, or they weren't. Subsequent conversation made it clear that criticizing behavior is also trivial in principle, since someone is either taking the action they believe will best suit their preferences, or not. Finally, criticizing preferences became trivial too -- the relevant question is "Does/will agent X behave as though they have preferences Y?", and that's a belief, so go back to Bayes' rule and a reasonable prior. So the entire issue this post was meant to solve has evaporated, in my opinion. Here's the original article, in case anyone is still interested:
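The Bayes-rule criterion in the note above can be made concrete with a single update step. Here is a minimal sketch -- the hypotheses and numbers are invented purely for illustration:

```python
# Minimal Bayes-rule update: posterior is proportional to prior times likelihood.
# The hypotheses and numbers below are invented for illustration only.

def bayes_update(priors, likelihoods):
    """Update a discrete prior over hypotheses, given the likelihood
    of one observation under each hypothesis."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Belief under criticism: "the coin is fair" vs. "the coin is biased toward heads".
priors = {"fair": 0.5, "biased": 0.5}
# Observation: the coin came up heads.
likelihoods = {"fair": 0.5, "biased": 0.9}

posterior = bayes_update(priors, likelihoods)
print(posterior)  # fair: 0.25/0.7 ≈ 0.357, biased: 0.45/0.7 ≈ 0.643
```

A belief is criticizable, in this sense, exactly when it diverges from what such an update would produce given a reasonable prior and the actual observations.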
Pancritical rationalism is a fundamental value in Extropianism that has only been mentioned in passing on LessWrong. I think it deserves more attention here. It's an approach to epistemology -- that is, the question "How do we know what we know?" -- that avoids the contradictions inherent in some of the alternative approaches.
The fundamental source document for it is William Bartley's Retreat to Commitment. He describes three approaches to epistemology, along with the dissatisfying aspects of the first two:
- Nihilism. Nothing matters, so it doesn't matter what you believe. This path is self-consistent, but it gives no guidance.
- Justificationalism. Your belief is justified because it is a consequence of other beliefs. This path is self-contradictory. Eventually you'll go in circles trying to justify the other beliefs, or you'll find beliefs you can't justify. Justificationalism itself cannot be justified.
- Pancritical rationalism. You have taken the available criticisms for the belief into account and still feel comfortable with the belief. This path gives guidance about what to believe, although it does not uniquely determine one's beliefs. Pancritical rationalism can be criticized, so it is self-consistent in that sense.
Read on for a discussion about emotional consequences and extending this to include preferences and behaviors as well as beliefs.
The Aliens have Landed!
"General Thud! General Thud! Wake up! The aliens have landed. We must surrender!" General Thud's assistant Fred turned on the lights and opened the curtains to help Thud wake up and confront the situation. Thud was groggy because he had stayed up late supervising an ultimately successful mission carried out by remotely piloted vehicles in some small country on the other side of the world. Thud mumbled, "Aliens? How many? Where are they? What are they doing?" General Thud looked out the window, expecting to see giant tripods walking around and destroying buildings with death rays. He saw his lawn, a bright blue sky, and hummingbirds hovering near his bird feeder.
Values vs. parameters
I've written before about the difficulty of distinguishing values from errors, from algorithms, and from context. Now I have to add to that list: How can we distinguish our utility function from the parameters we use to apply it?
Separate morality from free will
[I made significant edits when moving this to the main page - so if you read it in Discussion, it's different now. It's clearer about the distinction between two different meanings of "free", and why linking one meaning of "free" with morality implies a focus on an otherworldly soul.]
It was funny to me that many people thought Crime and Punishment was advocating outcome-based justice. If you read the post carefully, nothing in it advocates outcome-based justice. I only wanted to show how people think, so I could write this post.
Talking about morality causes much confusion, because most philosophers - and most people - do not have a distinct concept of morality. At best, they have just one word that conflates two different concepts. At worst, their "morality" doesn't contain any new primitive concepts at all; it's just a macro: a shorthand for a combination of other ideas.
I think - and have, for as long as I can remember - that morality is about doing the right thing. But this is not what most people think morality is about!
Crime and punishment
Why do those words go together?
Society - and for once, I'm using this term universally - teaches that, if you committed a crime, you should be punished.
But in some societies, we have an insanity defense. If you had a brain condition so that you had no - here it's a little vague - consciousness, or moral sense, or free will, or, well, something - then it would be cruel to punish you for your crime. Instead of going to prison, you should be placed somewhere where you can't hurt anybody, where professional physicians and counselors can study your case and try to reform you so that you can rejoin society.
Wait - so that isn't what prison is for?
The Urgent Meta-Ethics of Friendly Artificial Intelligence
Barring a major collapse of human civilization (due to nuclear war, asteroid impact, etc.), many experts expect the intelligence explosion Singularity to occur within 50-200 years.
That fact means that many philosophical problems, about which philosophers have argued for millennia, are suddenly very urgent.
Those concerned with the fate of the galaxy must say to the philosophers: "Too slow! Stop screwing around with transcendental ethics and qualitative epistemologies! Start thinking with the precision of an AI researcher and solve these problems!"
If a near-future AI will determine the fate of the galaxy, we need to figure out what values we ought to give it. Should it ensure animal welfare? Is growing the human population a good thing?
But those are questions of applied ethics. More fundamental are the questions about which normative ethics to give the AI: How would the AI decide if animal welfare or large human populations were good? What rulebook should it use to answer novel moral questions that arise in the future?
But even more fundamental are the questions of meta-ethics. What do moral terms mean? Do moral facts exist? What justifies one normative rulebook over the other?
The answers to these meta-ethical questions will determine the answers to the questions of normative ethics, which, if we are successful in planning the intelligence explosion, will determine the fate of the galaxy.
Eliezer Yudkowsky has put forward one meta-ethical theory, which informs his plan for Friendly AI: Coherent Extrapolated Volition. But what if that meta-ethical theory is wrong? The galaxy is at stake.
Princeton philosopher Richard Chappell worries about how Eliezer's meta-ethical theory depends on rigid designation, which in this context may amount to something like a semantic "trick." Previously and independently, an Oxford philosopher expressed the same worry to me in private.
Eliezer's theory also employs something like the method of reflective equilibrium, about which there are many grave concerns from Eliezer's fellow naturalists, including Richard Brandt, Richard Hare, Robert Cummins, Stephen Stich, and others.
My point is not to beat up on Eliezer's meta-ethical views. I don't even know if they're wrong. Eliezer is wickedly smart. He is highly trained in the skills of overcoming biases and properly proportioning beliefs to the evidence. He thinks with the precision of an AI researcher. In my opinion, that gives him large advantages over most philosophers. When Eliezer states and defends a particular view, I take that as significant Bayesian evidence for reforming my beliefs.
Rather, my point is that we need lots of smart people working on these meta-ethical questions. We need to solve these problems, and quickly. The universe will not wait for the pace of traditional philosophy to catch up.
Dark Arts 101: Using presuppositions
Sun Tzu said, "The supreme art of war is to subdue the enemy without fighting." This is also true in rhetoric. The best way to get a belief accepted is to fool people into thinking that they have already accepted it.
(Note, first-year students, that I did not say, "The best way to convince people of a belief". Do not try to convince people! It will not work; and it may start them thinking.)
An excellent way of doing this is to embed your desired conclusion as a presupposition to an enticing argument. If you are debating abortion, and you wish people to believe that human and non-human life are qualitatively different, begin by saying, "We all agree that killing humans is immoral. So when does human life begin?" People will be so eager to jump into the debate about whether a life becomes "human" at conception, the second trimester, or at birth (I myself favor "on moving out of the house"), they won't notice that they agreed to the embedded presupposition that the problem should be phrased as a binary category membership problem, rather than as one of tradeoffs or utility calculations.
Consider the recent furor over whether WikiLeaks leader Julian Assange is a journalist, or can be prosecuted for espionage. I don't know who initially asked this question. The earliest posing of the question that I can find that relates it to the First Amendment is this piece from Fox News on Dec. 8; but Marc Thiessen's column in the Washington Post of Aug. 3 has similar implications. Note that this question presupposes that First Amendment protection applies only to journalists! There is no legal precedent for this that I'm aware of; yet if people spend enough time debating whether Julian Assange is a journalist, they will have unknowingly convinced themselves that ordinary citizens have no First Amendment rights. (We can only hope that this was an artful stroke made from the shadows by some great master of the Dark Arts, and not a mere snowballing of an ignorant question.)
Criteria for Rational Political Conversation
Query: by what objective criteria do we determine whether a political decision is rational?
I propose that the key elements -- necessary but not sufficient -- are (where "you" refers collectively to everyone involved in the decision-making process):
- you must use only documented reasoning processes:
- use the best known process(es) for a given class of problem
- state clearly which particular process(es) you use
- document any new processes you use
- you must make every reasonable effort to verify that:
- your inputs are reasonably accurate, and
- there are no other reasoning processes which might be better suited to this class of problem, and
- there are no significant flaws in your application of the reasoning processes you are using, and
- there are no significant inputs you are ignoring
If an argument satisfies all of these requirements, it is at least provisionally rational. If it fails any one of them, then it's not rational and needs to be corrected or discarded.
This is not a circular definition (defining "rationality" by referring to "reasonable" things, where "reasonable" depends on people being "rational"); it is more like a recursive algorithm, where large ambiguous problems are split up into smaller and smaller sub-problems until we get to a size where the ambiguity is negligible.
This is not one great moral principle; it is more like a self-modifying working process (subject to rational criticism and therefore improvable over time -- optimization by successive approximation). It is an attempt to apply the processes of science (or at least the same reasoning which arrived at those processes) to political discourse.
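The checklist above can be sketched as code. This is only an illustrative encoding (the function names are invented, and the hard part -- actually judging each check -- is assumed to be supplied by the human participants):

```python
# Illustrative sketch of the criteria above as an explicit checklist.
# An argument is provisionally rational only if every check passes;
# failing any single check means it must be corrected or discarded.

CHECKS = [
    "uses only documented reasoning processes",
    "uses the best known process(es) for this class of problem",
    "states clearly which process(es) are used",
    "documents any new processes",
    "inputs verified as reasonably accurate",
    "no better-suited reasoning process is known",
    "no significant flaws in applying the processes",
    "no significant inputs ignored",
]

def provisionally_rational(argument, passes_check):
    """passes_check(argument, check) -> bool is assumed to be supplied
    by the (human) decision-making process; large ambiguous checks can
    themselves be split recursively into smaller sub-checks."""
    return all(passes_check(argument, check) for check in CHECKS)
```

The recursion mentioned above lives inside `passes_check`: when a check is too ambiguous to judge directly, it is split into sub-checks and the same procedure is applied to each.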
So... can we agree on this?
This is a hugely, vastly, mindbogglingly trimmed-down version of what I originally posted. All comments prior to 2010-08-26 20:52 (EDT) refer to that version, which I have reposted here for comparison purposes and for the morbidly curious. (It got voted down to negative 6. Twice.)
Rationality & Criminal Law: Some Questions
The following will explore a couple of areas in which I feel that the criminal justice system of many Western countries might be deficient, from the standpoint of rationality. I am very much interested to know your thoughts on these and other questions of the law, as far as they relate to rational considerations.
Moral Luck
Moral luck refers to the phenomenon in which behaviour by an agent is adjudged differently based on factors outside the agent's control.
Suppose that Alice and Yelena, on opposite ends of town, drive home drunk from the bar, and both dazedly speed through a red light, unaware of their surroundings. Yelena gets through nonetheless, but Alice hits a young pedestrian, killing him instantly. Alice is liable to be tried for manslaughter or some similar charge; Yelena, if she is caught, will only receive the drunk driving charge and lose her license.
Raymond, a day after finding out that his ex is now in a relationship with Pardip, accosts Pardip at his home and attempts to stab him in the chest; Pardip smashes a piece of crockery over Raymond's head, knocking him unconscious. Raymond is convicted of attempted murder, receiving typically 3-5 years chez nous (in Canada). If he had succeeded, he would have received a life sentence, with parole in 10-25 years.
Why should Alice be punished by the law and demonized by the public so much more than Yelena, when their actions were identical, differing only by the sheerest accident? Why should Raymond receive a lighter sentence for being an unsuccessful murderer?
Some prima facie plausible justifications:
- Identical behaviour is hard to judge - perhaps Yelena was really keeping a better eye on the road than Alice; perhaps Raymond would have performed a non-fatal stabbing.
- The law needs to crack down harder when there are actual victims, in order to provide the victims and families a sense of justice done.
- This could result in far too many serious, high-level trials.
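One way to sharpen the claim that Alice's and Yelena's actions were identical: before the outcome was realized, both actions carried the same expected harm, and only chance separated them. A toy calculation (the probability and harm numbers are invented for illustration):

```python
# Toy expected-harm calculation; all numbers are invented for illustration.
p_hit = 0.01        # assumed chance that a drunk red-light run kills someone
harm_if_hit = 1000  # harm units assigned to a death
harm_if_miss = 10   # harm units assigned to the endangerment alone

# Identical for Alice and Yelena: 0.01 * 1000 + 0.99 * 10 = 19.9
expected_harm = p_hit * harm_if_hit + (1 - p_hit) * harm_if_miss
print(expected_harm)
```

On this view, punishment proportioned to the decision (the expected harm) would treat the two identically; punishment proportioned to the outcome is what introduces the luck.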
Trial by Jury; Trial by Judge
Those of us who like classic films may remember 12 Angry Men (1957) with Henry Fonda. This was a remarkably good film about a jury deliberating on the murder trial of a poor young man from a bad neighbourhood, accused of killing his father. It portrays the indifference (one juror wants to be out in time for the baseball game), prejudice and conformity of many of the jurors, and how this is overcome by one man of integrity who decides to insist on a thorough look through the evidence and testimony.
I do not wish to generalize from fictional examples; however, such factors are manifestly at play in real trials, in which Henry Fonda cannot necessarily be relied upon to save the day.
Komponisto has written on the Knox case, in which an Italian jury came to a very questionable (to put it mildly) conclusion based on the evidence presented to them; other examples will doubtless spring to mind (a famous one in this neck of the woods is the Stephen Truscott case, the evidence against Truscott being entirely circumstantial).
More information on trial by jury and its limitations may be found here. Recently the UK has made some moves toward trial by judge for certain cases, specifically fraud cases in which jury tampering is a problem.
The justifications cited for trial by jury typically include the egalitarian nature of the practice, in which it can be guaranteed that those making final legal decisions do not form a special class over and above the ordinary citizens whose lives they affect.
A heartening example of this was mentioned in Thomas Levenson's fascinating book Newton and the Counterfeiter. Being sent to Newgate gaol was, infamously in the 17th and 18th centuries, an effective death sentence in and of itself; moreover, a surprisingly large number of crimes at this time were capital crimes (the counterfeiter whom Newton eventually convicted was hanged). In this climate of harsh punishment, juries typically only returned guilty verdicts either when evidence was extremely convincing or when the crime was especially heinous. Effectively, they counteracted the harshness of the legal system by upping the burden of proof for relatively minor crimes.
So juries sometimes provide a safeguard against abuse of justice by elites. However, is this price for democratizing justice too high, given the ease with which citizens naive about the Dark Arts may be manipulated? (Of course, judges are by no means perfect Bayesians either; however, I would expect them to be significantly less gullible.)
Are there any other systems that might be tried, besides these canonical two? What about the question of representation? Does the adversarial system, in which two sides are represented by advocates charged with defending their interests, conduce well to truth and justice, or is there a better alternative? For any alternatives you might consider: are they naive or savvy about human nature? What is the normative role of punishment, exactly?
How would the justice system look if LessWrong had to rewrite it from scratch?
Surface syllogisms and the sin-based model of causation
The White House says there will be a temporary ban on new deep-water drilling, and BP will have to pay the salaries of oilmen who have no work during that ban. I scratched my head trying to figure out the logic behind this. This was my first attempt:
- BP caused an oil spill.
- The oil spill caused a ban on drilling.
- The ban on drilling caused oilmen to be out of work.
- Therefore, BP caused oilmen to be out of work.
- Therefore, BP should pay these oilmen.
This logic works equally well in this case:
- Rachel Carson wrote Silent Spring.
- Silent Spring caused a ban on DDT use.
- The ban on DDT use caused factory workers to be out of work.
- Therefore, Rachel Carson caused factory workers to be out of work.
- Therefore, Rachel Carson should pay these workers.
But "everyone" would agree that the second example is fallacious. Are people so angry at BP that they can't think at all?