Why I'm Skeptical About Unproven Causes (And You Should Be Too)

31 peter_hurford 29 July 2013 09:09AM

Since living in Oxford, one of the centers of the "effective altruism" movement, I've been spending a lot of time discussing the classic “effective altruism” topic -- where it would be best to focus our time and money.

Some people here seem to think that the most important things to focus our time and money on are speculative projects -- projects that promise a very high impact but involve a lot of uncertainty.  One very common example is "existential risk reduction": attempts to make a long-term far future for humans more likely, say by reducing the chance of events that would cause human extinction.

I do agree that the far future is the most important thing to consider, by far (see papers by Nick Bostrom and Nick Beckstead).  And I do think we can influence the far future.  I just don't think we can do it in a reliable way.  All we have are guesses about what the far future will be like and guesses about how we can affect it. All of these ideas are unproven, speculative projects, and I don't think they deserve the main focus of our funding.

While I waffled in cause indecision for a while, I'm now going to resume donating to GiveWell's top charities, except when I have an opportunity to use a donation to learn more about impact.  Why?  My case is that speculative causes -- or any cause with high uncertainty, like reducing nonhuman animal suffering or reducing existential risk -- require that we rely on our commonsense to evaluate them with naïve cost-effectiveness calculations, and this commonsense (1) is demonstrably unreliable, with a bad track record, (2) plays right into common biases, and (3) doesn't match how we ideally make decisions.  While it's unclear what long-term impact a donation to a GiveWell top charity will have, the near-term benefit is quite clear and worth investing in.

 

Focusing on Speculative Causes Requires Unreliable Commonsense

How can we reduce the chance of human extinction? It just makes sense that if we fund cultural exchange programs between the US and China, there will be more goodwill for the other within each country, and therefore the countries will be less likely to nuke each other. Since nuclear war would likely be very bad, it's of high value to fund cultural exchange programs, right?

Let's try another. The Machine Intelligence Research Institute (MIRI) thinks that someday artificially intelligent agents will become better than humans at making AIs. At that point, an AI will build a smarter AI, which will build an even smarter AI, and -- FOOM! -- we have a superintelligence. It's important that this superintelligence be programmed to be benevolent, or things will likely be very bad. And we can prevent this bad outcome by funding MIRI to write more papers about AI, right?

Or how about this one? It seems like there will be challenges in the far future that will be very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?

These three examples are very common appeals to commonsense.  But commonsense hasn't worked very well in the domain of finding optimal causes.

 

Can You Pick the Winning Social Program?

Benjamin Todd makes this point well in "Social Interventions Gone Wrong", where he provides a quiz with eight social programs and asks readers to guess whether they succeeded or failed.

I'll wait for you to take the quiz first... doo doo doo... la la la...

Ok, welcome back. I don't know how well you did, but success on this quiz is very rare, and that poses problems for commonsense.  Sure, I'll grant you that Scared Straight sounds pretty suspicious. But the Even Start Family Literacy Program? It just makes sense that providing education to boost literacy skills and promote parent-child literacy activities should boost literacy rates, right? Unfortunately, commonsense was wrong here -- wrong in a very counter-intuitive way. There wasn't an effect.

 

GiveWell and Commonsense's Track Record of Failure

Commonsense actually has a track record of failure. GiveWell has been talking about this for ages.  Every time GiveWell has found an intervention hyped by commonsense notions of high impact and looked into it further, they've ended up disappointed.

The first was the Fred Hollows Foundation. A lot of people had been repeating the figure that the Fred Hollows Foundation could cure blindness for $50. But GiveWell found that number suspect.

The second was VillageReach. GiveWell originally put them as their top charity and estimated them as saving a life for under $1000. But further investigation kept leading them to revise their estimate until ultimately they weren't even sure if VillageReach had an impact at all.

Third, there is deworming. Originally, deworming was estimated to save a year of healthy life (DALY) for every $3.41 spent. But when GiveWell dove into the spreadsheets behind that number, they found five errors. When the dust settled, the $3.41 figure turned out to be off by roughly a factor of 100: it was revised to $326.43.

Why should we expect this trend not to hold in other areas where calculations are even looser and numbers are even less settled, like efforts devoted to speculative causes? Our only recourse is to fall back on interventions that have actually been studied.

 

People Are Notoriously Bad At Predicting the (Far) Future

Cost-effectiveness estimates also frequently require making predictions about the future. Existential risk reduction, for example, requires making predictions about what will happen in the far future, and about how your actions are likely to affect events hundreds of years down the road. Yet experts are notoriously bad at making these kinds of predictions.

James Shanteau found in "Competence in Experts: The Role of Task Characteristics" (see also Kahneman and Klein's "Conditions for Intuitive Expertise: A Failure to Disagree") that experts perform well when reasoning about static stimuli and decisions about things, and when feedback and objective analysis are available. Conversely, experts perform quite badly when reasoning about dynamic stimuli and decisions about behavior, and when feedback and objective analysis are unavailable.

Predictions about existential risk reduction and the far future are firmly in the second category. So how can we trust our predictions about our impact on the far future? Our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction (or invest money in getting better at making predictions).

 

Even Broad Effects Require Specific Attempts

One potential resolution to this problem is to argue for “broad effects” rather than “specific attempts”.  Perhaps it’s difficult to know whether a particular intervention will go well or mistaken to focus entirely on Friendly AI, but surely if we improved incentives and norms in academic work to better advance human knowledge (meta-research), improved education, or advocated for effective altruism, the far future would be much better equipped to handle threats.

I agree that these broad effects would make the far future better, and I agree that it’s possible to implement them and change the far future.  The problem, however, is that it can’t be done in an easy or well-understood way.  Any attempt to implement a broad effect would require a specific action with an unknown chance of success and unknown cost-effectiveness.  It’s definitely beneficial to advocate for effective altruism, but could this be done in a cost-effective way?  A way that’s more cost-effective at producing welfare than AMF?  How would you know?

In order to accomplish these broad effects, you’d need specific organizations and interventions to channel your time and money into.  And by picking these specific organizations and interventions, you’re losing the advantage of broad effects and tying yourself to particular things with poorly understood impact and no track record to evaluate. 

 

Focusing on Speculative Causes Plays Into Our Biases

We've now known for quite a long time that people are not all that rational. Instead, human thinking fails in very predictable and systematic ways.  Some of these ways make us less likely to take speculative causes seriously, such as ambiguity aversion, the absurdity heuristic, scope neglect, and overconfidence bias.

But there’s also a different side of the coin, with biases that might make people think badly about existential risk:

Optimism bias. People generally think things will turn out better than they actually will. This could lead people to think that their projects will have a higher impact than they actually will, which would lead to higher estimates of cost-effectiveness than is reasonable.

Control bias. People like to think they have more control over things than they actually do. This plausibly also includes control over the far future. Therefore, people are probably biased into thinking they have more control over the far future than they actually do, leading to higher estimates of ability to influence the future than is reasonable.

"Wow factor" bias. People seem attracted to more impressive claims. Saving a life for $2500 through a malaria bed net seems much more boring compared to the chance of saving the entire world by averting a global catastrophe. Within the Effective Altruist / LessWrong community, existential risk reduction is cool and high status, whereas averting global poverty is not. This might lead to more endorsement of existential risk reduction than is reasonable.

Conjunction fallacy.  People have trouble assessing probability properly when there are many steps involved, each of which has a chance of not happening. Ten steps, each with an independent 90% success rate, have only a 35% chance of all succeeding.  Focusing on the far future seems to require that many largely independent events happen the way that is predicted. This would mean people are worse at estimating their chances of helping the far future, creating higher cost-effectiveness estimates than is reasonable.

Selection bias.  When trying to find historical trends favorable to affecting the far future, some examples can be found.  However, this is because we usually hear only about the interventions that ended up working, whereas the failed attempts to influence the far future are never heard of again.  This creates a very skewed sample that biases our thinking about our chances of influencing the far future toward undue optimism.
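The conjunction-fallacy arithmetic above is easy to verify. A minimal sketch, using the illustrative numbers from the text (ten steps at 90% each):

```python
# Probability that a chain of independent steps all succeed.
# With ten steps at 90% each, the chain succeeds only ~35% of the time,
# even though every individual step looks very likely.

def chain_success(p_step: float, n_steps: int) -> float:
    """Probability that n_steps independent steps, each succeeding
    with probability p_step, all succeed."""
    return p_step ** n_steps

if __name__ == "__main__":
    p = chain_success(0.90, 10)
    print(f"10 steps at 90% each: {p:.2%} overall")  # about 34.87%
```

The point is just that probabilities multiply: adding steps to a plan shrinks its overall chance of success much faster than intuition suggests.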

 

It’s concerning that there are numerous biases weighted both for and against speculative causes, and this means we must tread carefully when assessing their merits.  However, I would strongly expect bias to be even worse in favor of speculative causes than against them, because speculative causes lack the feedback and objective evidence needed to insulate against bias, whereas a focus on global health does not.

 

Focusing on Speculative Causes Uses Bad Decision Theory

Furthermore, not only is the case for speculative causes undermined by a bad track record and possible cognitive biases, but the underlying decision theory seems suspect in a way that's difficult to place.         

 

Would you play a lottery with no stated odds?

Imagine another thought experiment -- you're asked to play a lottery. You have to pay $2 to play, but you have a chance at winning $100. Do you play?

Of course, you don't know, because you're not given odds. Rationally, it makes sense to play any lottery where you expect to come out ahead on average. If the lottery is a coin flip, it makes sense to pay $2 for a 50/50 shot at winning $100, since you'd expect to win $50 on average and come out ahead by $48 per play. With a sufficiently high reward, even a one-in-a-million chance is worth it. Pay $2 for a 1-in-1,000,000 chance of winning $1 billion, and you'd expect to come out ahead by $998 per play.
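The expected-value arithmetic here can be sketched in a few lines, using the ticket prices and odds from the text:

```python
def net_expected_value(ticket_price: float, p_win: float, prize: float) -> float:
    """Average amount you come out ahead per play of a simple lottery."""
    return p_win * prize - ticket_price

if __name__ == "__main__":
    # The coin-flip lottery: $2 ticket, 50% chance of $100.
    print(net_expected_value(2, 0.5, 100))  # 48.0

    # The long-shot lottery: $2 ticket, one-in-a-million chance of $1 billion.
    # Comes out to about $998 ahead per play on average.
    print(net_expected_value(2, 1e-6, 1e9))
```

A positive net expected value is what makes a lottery rational to play; the whole problem with unstated odds is that `p_win` is the one input you don't have.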

But $2 for the chance to win $100, without knowing what the chance is? What if you had some rough bounds -- say, you knew the odds were at least 1/150 and at most 1/10, though you could be off by a little bit? Would you accept that bet?

Such a bet seems intuitively uninviting to me, yet this is the bet that speculative causes offer me.

 

"Conservative Orders of Magnitude" Arguments

In response to these considerations, I've seen people endorsing speculative causes look at their calculations and remark that even if their estimates were off by 1000x -- three orders of magnitude -- they would still be on solid ground for high impact, and surely they're not actually off by three orders of magnitude. However, Nate Silver's The Signal and the Noise: Why So Many Predictions Fail — but Some Don't offers a cautionary tale:

Moody’s, for instance, went through a period of making ad hoc adjustments to its model in which it increased the default probability assigned to AAA-rated securities by 50 percent. That might seem like a very prudent attitude: surely a 50 percent buffer will suffice to account for any slack in one’s assumptions? It might have been fine had the potential for error in their forecasts been linear and arithmetic. But leverage, or investments financed by debt, can make the error in a forecast compound many times over, and introduces the potential of highly geometric and nonlinear mistakes.

Moody’s 50 percent adjustment was like applying sunscreen and claiming it protected you from a nuclear meltdown—wholly inadequate to the scale of the problem. It wasn’t just a possibility that their estimates of default risk could be 50 percent too low: they might just as easily have underestimated it by 500 percent or 5,000 percent. In practice, defaults were two hundred times more likely than the ratings agencies claimed, meaning that their model was off by a mere 20,000 percent.

Silver points out that when estimating how safe mortgage backed securities were, the difference between assuming defaults are perfectly uncorrelated and defaults are perfectly correlated is a difference of 160,000x in your risk estimate -- or five orders of magnitude.
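Silver's correlation point can be reproduced directly. A minimal sketch, assuming (as in the book's worked example) a pool of five mortgages that each default 5% of the time:

```python
def pool_wipeout_probability(p_default: float, n: int, correlated: bool) -> float:
    """Probability that every mortgage in the pool defaults.

    If defaults are perfectly correlated, the whole pool fails whenever
    one mortgage does; if they are perfectly uncorrelated, all n must
    fail independently.
    """
    return p_default if correlated else p_default ** n

if __name__ == "__main__":
    p_corr = pool_wipeout_probability(0.05, 5, correlated=True)    # 0.05
    p_indep = pool_wipeout_probability(0.05, 5, correlated=False)  # 0.05 ** 5
    print(f"risk estimates differ by a factor of {p_corr / p_indep:,.0f}")  # 160,000
```

One modeling assumption -- whether defaults move together -- swings the risk estimate by five orders of magnitude, which is the scale of error the "conservative orders of magnitude" argument assumes away.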

If these kinds of five-orders-of-magnitude errors are possible in a realm that has actual feedback and is moderately understood, how do we know the estimates for cost-effectiveness are safe for speculative causes that are poorly understood and offer no feedback?  Again, our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction.

 

Value of Information, Exploring, and Exploiting

Of course, there still is one important aspect of this problem that has not been discussed -- value of information -- or the idea that sometimes it’s worth doing something just to learn more about how the world works.  This is important in effective altruism too, where we focus specifically on “giving to learn”, or using our resources to figure out more about the impact of various causes.

I think this is actually really important and is not vulnerable to any of my previous arguments, because we’re not talking about impact, but rather learning value.  Perhaps one could look to an “explore-exploit model”: the idea that we achieve the best outcome when we spend a lot of time exploring first (learning more about how to achieve better outcomes) before exploiting (focusing resources on achieving the best outcome we can).  Therefore, whenever we have an opportunity to “explore” further and learn more about which causes have high impact, we should take it.

 

Learning in Practice

Unfortunately, in practice, I think these opportunities are very rare.  Many organizations that I consider “promising” and worth funding further do not have sufficiently good self-measurement in place to actually assess their impact, or sufficient transparency to share that information, which makes it difficult to learn from them.  And on the other side, many very promising opportunities to learn more are already fully funded.  One must be careful to ensure that it’s actually one’s marginal dollar that is buying marginal information.

 

The Typical Donor

Additionally, I don’t think the typical donor is in a very good position to assess where there is high value of information, or has the time and knowledge to act on that information once it is acquired.  I think there’s a good argument for people in the “effective altruist” movement to make small investments in EA organizations and encourage transparency and good measurement in their operations, to see if they’re successfully doing what they claim (or potentially to create an EA startup themselves to see if it would work, though this carries large risks of further splitting the movement’s resources).

But even that would take a very savvy and involved effective altruist to pull off.  Assessing the value of information on more massive investments, like large-scale research or innovation efforts, would be significantly more difficult -- beyond the talent and resources of nearly all effective altruists -- and is probably best left to full-time foundations or subject-matter experts.

 

GiveWell’s Top Charities Also Have High Value of Information

As Luke Muehlhauser mentions in "Start Under the Streetlight, Then Push Into the Shadows", lots of lessons can be learned only by focusing on the easiest causes first, even if we have strong theoretical reasons to expect that they won’t end up being the highest impact causes once we have more complete knowledge.

We can use global health cost-effectiveness considerations as practice for slowly and carefully moving into more complex and less understood domains.  There are even some very natural transitions, such as beginning to look at the "flow-through effects" of reducing disease in the third world, and at how more esoteric things, like climate change, affect the disease burden.  Therefore, even additional funding for GiveWell’s top charities has high value of information.  And notably, GiveWell is beginning this "push" through GiveWell Labs.

 

Conclusion

The bottom line is that sometimes things look too good to be true.  Therefore, I should expect that the actual impact of speculative causes that make large promises will, upon thorough investigation, turn out to be much lower.

And this has been true in other domains. People are notoriously bad at estimating the effects of causes in both the developed and the developing world, and those are the causes that are near to us, provide us with feedback, and are comparatively easy to predict. Yet, from the Even Start Family Literacy Program to the deworming estimates, our commonsense has failed us.

Add to that the fact that we should expect ourselves to perform even worse at predicting the far future. Add to that optimism bias, control bias, "wow factor" bias, and the conjunction fallacy, which make it difficult for us to think realistically about speculative causes. And then add to that considerations in decision theory, and whether we would bet on a lottery with no stated odds.

When all is said and done, I'm very skeptical of speculative projects.  Therefore, I think we should be focused on exploring and exploiting.  We should do whatever we can to fund projects aimed at learning more, when those are available, but be careful to make sure they actually have learning value.  And when exploring isn’t available, we should exploit what opportunities we have and fund proven interventions.

But don’t confuse these two concepts by funding causes intended for learning as if for their direct impact.  I’m skeptical that these causes are actually high impact, though I’m open to the idea that they might be, and I look forward to funding them in the future once they become better proven.

-

Followed up in: "What Would It Take To 'Prove' A Skeptical Cause" and "Where I've Changed My Mind on My Approach to Speculative Causes".

This was also cross-posted to my blog and to effective-altruism.com.

I'd like to thank Nick Beckstead, Joey Savoie, Xio Kikauka, Carl Shulman, Ryan Carey, Tom Ash, Pablo Stafforini, Eliezer Yudkowsky, and Ben Hoskin for providing feedback on this essay, even if some of them might strongly disagree with its conclusion.

Reflection in Probabilistic Logic

63 Eliezer_Yudkowsky 24 March 2013 04:37PM

Paul Christiano has devised a new fundamental approach to the "Löb Problem" wherein Löb's Theorem seems to pose an obstacle to AIs building successor AIs, or adopting successor versions of their own code, that trust the same amount of mathematics as the original.  (I am currently writing up a more thorough description of the question this preliminary technical report is working on answering.  For now the main online description is in a quick Summit talk I gave.  See also Benja Fallenstein's description of the problem in the course of presenting a different angle of attack.  Roughly the problem is that mathematical systems can only prove the soundness of, aka 'trust', weaker mathematical systems.  If you try to write out an exact description of how AIs would build their successors or successor versions of their code in the most obvious way, it looks like the mathematical strength of the proof system would tend to be stepped down each time, which is undesirable.)

Paul Christiano's approach is inspired by the idea that whereof one cannot prove or disprove, thereof one must assign probabilities: and that although no mathematical system can contain its own truth predicate, a mathematical system might be able to contain a reflectively consistent probability predicate.  In particular, it looks like we can have:

∀a, b: (a < P(φ) < b)          ⇒  P(a < P('φ') < b) = 1
∀a, b: P(a ≤ P('φ') ≤ b) > 0  ⇒  a ≤ P(φ) ≤ b

Suppose I present you with the human and probabilistic version of a Gödel sentence, the Whitely sentence "You assign this statement a probability less than 30%."  If you disbelieve this statement, it is true.  If you believe it, it is false.  If you assign 30% probability to it, it is false.  If you assign 29% probability to it, it is true.

Paul's approach resolves this problem by restricting your belief about your own probability assignment to within epsilon of 30% for any epsilon.  So Paul's approach replies, "Well, I assign almost exactly 30% probability to that statement - maybe a little more, maybe a little less - in fact I think there's about a 30% chance that I'm a tiny bit under 0.3 probability and a 70% chance that I'm a tiny bit over 0.3 probability."  A standard fixed-point theorem then implies that a consistent assignment like this should exist.  If asked whether the probability is over 0.2999 or under 0.30001, you will reply with a definite yes.
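A crude way to see why the exact (non-probabilistic) version has no solution: treat your credence in the Whitely sentence as p, note that the sentence is true exactly when p < 0.3, and search for a credence that equals the truth value it induces. A minimal sketch (the grid search is purely illustrative and is not part of Christiano's construction):

```python
# The Whitely sentence: "You assign this statement a probability less than 30%."
# A perfectly calibrated agent who knows its own credence p would need
# p to equal the truth value the sentence then takes: 1 if p < 0.3, else 0.
# No such p exists, which is why the probabilistic approach instead allows
# beliefs pinned within epsilon of the 0.3 boundary.

def induced_truth_value(p: float) -> float:
    """Truth value of the Whitely sentence, given credence p in it."""
    return 1.0 if p < 0.3 else 0.0

if __name__ == "__main__":
    fixed_points = [p / 1000 for p in range(1001)
                    if induced_truth_value(p / 1000) == p / 1000]
    print(fixed_points)  # [] -- no consistent exact credence exists
```

Believing it (p near 1) makes it false, and disbelieving it (p near 0) makes it true, so the exact self-referential assignment chases its own tail; smearing the belief to within epsilon of 0.3 is what restores consistency.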


My Best Case vs Your Worst Case

5 HalMorris 03 January 2013 02:11PM

Is there a name for the (I claim) extremely common practice of blithely and unconsciously always looking at your own view (political especially) in terms of its best possible outcomes, while always characterizing an opposing point of view by its worst possibilities?

If not, I think there should be.  It seems like a major major source of unfruitful argumentation.

New censorship: against hypothetical violence against identifiable people

22 Eliezer_Yudkowsky 23 December 2012 09:00PM

New proposed censorship policy:

Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.

Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.

More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).  

This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'

Yes, a post of this type was just recently made.  I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.

That Thing That Happened

19 [deleted] 18 December 2012 12:29PM

I am emotionally excited and/or deeply hurt by what st_rev wrote recently. You better take me seriously because you've spent a lot of time reading my posts already and feel invested in our common tribe. Anecdote about how people are tribal thinkers.

That thing that happened shows that everything I was already advocating for is correct and necessary. Indeed it is time for everyone to put their differences aside and come together to carry out my recommended course of action. If you continue to deny what both you and I know in our hearts to be correct, you want everyone to die and I am defriending you.

I don't even know where to begin. This is what blueist ideology has been working towards for decades if not millennia, but to see it written here is hard to stomach even for one as used to the depravity caused by such delusions as I am. The lack of socially admired virtues among its adherents is frightening. Here I introduce an elaborate explanation of how blueist domination is not just completely obvious and a constant thorn in the side of all who wish more goodness but is achieved by the most questionable means often citing a particular blogger or public intellectual who I read in order to show how smart I am and because people I admire read him too. Followed by an appeal to the plot of a movie. Anecdote from my personal life. If you are familiar with the obscure work of an academic taken out of context and this does not convince you then you are clearly an intolerant sexual deviant engaging in motivated cognition.

Consider well: do you want to be on the wrong side of history? If you persist, millions or billions of people you will never meet will be simultaneously mystified and appalled that an issue so obvious caused such needless contention. They will argue whether you were motivated more by stupidity, malice, raw interest, or if you were a helpless victim of the times in which you lived. Characters in fiction set in your era will inevitably be on (or at worst, join) the right side unless they are unredeemable villains. (Including historical figures who were on the other side, lest they lose all audience sympathy.).

Remember: it's much more important what hypothetical future people will consider right than what you or current people you respect do. And you and I both know they'll agree with me.

While sympathetic to this criticism I must signal my world-weariness and sophistication by writing several long paragraphs about how this is much too optimistic and we are in grave danger of an imminent and eternal takeover by our opponents. The only solution is to begin work on an organization dedicated to preventing this which happens to give me access to material resources and attractive females.

Ciphergoth proves to be the lone voice of reason by encouraging us to recall what we all learned on 9/11:

However, we must also consider if this is not also a lesson to us all; a lesson that my political views are correct.

http://www.adequacy.org/stories/2001.9.12.102423.271.html

Train Philosophers with Pearl and Kahneman, not Plato and Kant

65 lukeprog 06 December 2012 12:42AM

Part of the sequence: Rationality and Philosophy

Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject.

Bertrand Russell

 

I've complained before that philosophy is a diseased discipline which spends far too much of its time debating definitions, ignoring relevant scientific results, and endlessly re-interpreting old dead guys who didn't know the slightest bit of 20th century science. Is that still the case?

You bet. There's some good philosophy out there, but much of it is bad enough to make CMU philosopher Clark Glymour suggest that on tight university budgets, philosophy departments could be defunded unless their work is useful to (cited by) scientists and engineers — just as his own work on causal Bayes nets is now widely used in artificial intelligence and other fields.

How did philosophy get this way? Russell's hypothesis is not too shabby. Check the syllabi of the undergraduate "intro to philosophy" classes at the top 5 U.S. philosophy departments -- NYU, Rutgers, Princeton, Michigan Ann Arbor, and Harvard -- and you'll find that they spend a lot of time with (1) old dead guys who were wrong about almost everything because they knew nothing of modern logic, probability theory, or science, and with (2) 20th century philosophers who were way too enamored with cogsci-ignorant armchair philosophy. (I say more about the reasons for philosophy's degenerate state here.)

As the CEO of a philosophy/math/compsci research institute, I think many philosophical problems are important. But the field of philosophy doesn't seem to be very good at answering them. What can we do?

Why, come up with better philosophical methods, of course!

Scientific methods have improved over time, and so can philosophical methods. Here is the first of my recommendations...


The Worst Problem You've Ever Encountered and Solved. And the One You Didn't, Yet!

2 diegocaleiro 04 December 2012 08:43PM

EDIT: No one was doing what the post suggests, so I accepted an idea from one of the comments, and embedded my response in a comment, not the post itself

 

I'd like to ask this question to you, and I'll respond to it myself as well.

What Is The Worst Problem You've Ever Encountered and Solved? And the One You Didn't, Yet!

Some prior considerations:

1) I mean "problem" in a very general sense: it could be a math problem, an existential problem, a social problem, an akrasia problem, a disease problem, etc.

2) I'd like people to give informative/didactic responses.  Try not only to state the facts, but also to help someone who'd encounter similar situations to be able to deal with them.

3) When talking about the one you didn't, give enough specifics that someone would actually be able to help you.

The general idea is to teach people how to Win by example, taking into consideration all the shortcomings of biases, etc.

 

Well, that is all. One solved, one not yet solved. State your own issues and help others here. Someone else's rationality is always welcome.

Intuitions Aren't Shared That Way

31 lukeprog 29 November 2012 06:19AM

Part of the sequence: Rationality and Philosophy

Consider these two versions of the famous trolley problem:

Stranger: A train, its brakes failed, is rushing toward five people. The only way to save the five people is to throw the switch sitting next to you, which will turn the train onto a side track, thereby preventing it from killing the five people. However, there is a stranger standing on the side track with his back turned, and if you proceed to throw the switch, the five people will be saved, but the person on the side track will be killed.

Child: A train, its brakes failed, is rushing toward five people. The only way to save the five people is to throw the switch sitting next to you, which will turn the train onto a side track, thereby preventing it from killing the five people. However, there is a 12-year-old boy standing on the side track with his back turned, and if you proceed to throw the switch, the five people will be saved, but the boy on the side track will be killed.

Here it is: a standard-form philosophical thought experiment. In standard analytic philosophy, the next step is to engage in conceptual analysis — a process in which we use our intuitions as evidence for one theory over another. For example, if your intuitions say that it is "morally right" to throw the switch in both cases above, then these intuitions may be counted as evidence for consequentialism, for moral realism, for agent neutrality, and so on.

Alexander (2012) explains:

Philosophical intuitions play an important role in contemporary philosophy. Philosophical intuitions provide data to be explained by our philosophical theories [and] evidence that may be adduced in arguments for their truth... In this way, the role... of intuitional evidence in philosophy is similar to the role... of perceptual evidence in science...

Is knowledge simply justified true belief? Is a belief justified just in case it is caused by a reliable cognitive mechanism? Does a name refer to whatever object uniquely or best satisfies the description associated with it? Is a person morally responsible for an action only if she could have acted otherwise? Is an action morally right just in case it provides the greatest benefit for the greatest number of people all else being equal? When confronted with these kinds of questions, philosophers often appeal to philosophical intuitions about real or imagined cases...

...there is widespread agreement about the role that [intuitions] play in contemporary philosophical practice... We advance philosophical theories on the basis of their ability to explain our philosophical intuitions, and appeal to them as evidence that those theories are true...

In particular, notice that philosophers do not appeal to their intuitions as merely an exercise in autobiography. Philosophers are not merely trying to map the contours of their own idiosyncratic concepts. That could be interesting, but it wouldn't be worth decades of publicly-funded philosophical research. Instead, philosophers appeal to their intuitions as evidence for what is true in general about a concept, or true about the world.

continue reading »

Philosophy Needs to Trust Your Rationality Even Though It Shouldn't

27 lukeprog 29 November 2012 09:00PM

Part of the sequence: Rationality and Philosophy

Philosophy is notable for the extent to which disagreements with respect to even those most basic questions persist among its most able practitioners, despite the fact that the arguments thought relevant to the disputed questions are typically well-known to all parties to the dispute.

Thomas Kelly

The goal of philosophy is to uncover certain truths... [But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.

Jason Brennan

 

After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics, 35-27 on empiricism vs. rationalism, and 57-27 on physicalism vs. non-physicalism.

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1

Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that "the method of science is to amass such an enormous mountain of evidence that... scientists cannot ignore it." Hence, religionists might still argue that Earth is flat or that evolutionary theory and the Big Bang theory are "lies from the pit of hell," and philosophers might still be divided about whether somebody can make a moral judgment they aren't themselves motivated by, but scientists have reached consensus about such things.

continue reading »

LW Women- Minimizing the Inferential Distance

58 [deleted] 25 November 2012 11:33PM

Standard Intro

The following section will be at the top of all posts in the LW Women series.

About two months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post.  There is a LOT of material, so I am breaking them down into more manageable-sized themed posts. 

Seven women submitted, totaling about 18 pages. 

Crocker's Warning- Submitters were told not to hold back for politeness. You are allowed to disagree, but these are candid comments; if you consider candidness impolite, I suggest you not read this post.

To the submitters- If you would like to respond anonymously to a comment (for example, if there is a comment questioning something in your post, and you want to clarify), you can PM me your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.

Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)

Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.

continue reading »
