steven0461 comments on Hardened Problems Make Brittle Models - Less Wrong
Within moral philosophy, at least, there are two related senses in which philosophers’ typical practice of thought-experiments can seem ill-advised:
They may deal with situations that are strongly unlike the situations in which we actually need to make decisions. Perhaps you’ll never be faced with a runaway trolley, with decisions concerning 3^^^3 dust specks, or with any decision simple enough that you can easily apply your thinking about trolley problems or dust specks.
They may highlight situations that disorient or break our moral intuitions or our notions of value.
To elaborate a plausible mechanism: The human categories “birds”, “vegetables” (vs. “fruits”, or “herbs”), and “morally right” are all better understood as family resemblance terms (capturing “clusters in thingspace”) than as crisp, explicitly definable, schematic categories that entities do or don’t fall into. Such family resemblance terms arguably gain their meaning, in our heads, from our exposure to many different central examples. Show a person carrots, mushrooms, spinach, and broccoli with a “yes, these are Xes”, and strawberries, cayenne, and rice with a “these aren’t Xes”, and the person will construct the concept “vegetable”. Add in a bunch of borderline cases (“are mustard greens a vegetable or an herb? what exact features point toward and against?”) and the person’s notion of “vegetable” will lose some of its intuitive “is a category”-ness. If there are enough borderline examples in their example-space, “vegetable” won’t be a cluster for them anymore.
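To make the “clusters in thingspace” picture concrete, here is a minimal Python sketch of the analogy. It is only an illustration, not anything from the original discussion: the two features and all the numbers are invented. The point is just that a category built from tight central examples has a crisp center, and piling on borderline cases visibly blurs it.

```python
# Toy analogy (invented features and numbers): model a "family resemblance"
# category as a cluster of example points in a feature space, and measure how
# crisp the cluster is by the average distance of examples from their centroid.

import math

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def spread(points, c):
    """Average distance of the points from centroid c.
    Smaller spread = crisper, more category-like cluster."""
    return sum(math.dist(p, c) for p in points) / len(points)

# Hypothetical two-dimensional features: (sweetness, role-in-savory-dishes).
central_vegetables = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85), (0.1, 0.95)]
borderline_cases = [(0.6, 0.5), (0.5, 0.4), (0.7, 0.6)]  # mustard-greens-ish items

c1 = centroid(central_vegetables)
print("spread with only central examples:", round(spread(central_vegetables, c1), 3))

mixed = central_vegetables + borderline_cases
c2 = centroid(mixed)
print("spread after adding borderline cases:", round(spread(mixed, c2), 3))
# The spread grows, mirroring the claim that enough borderline cases make
# "vegetable" stop feeling like a single tight cluster.
```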
“Is morally right” may similarly be a cluster formed by seeing what kinds of intra- and inter-personal situations work well, or can be expected to be judged well, and may break or weaken when faced with non-“ecologically valid” thought-experiments.
I spent two years in a graduate philosophy department before leaving academic philosophy to try to reduce existential risks. In my grad philosophy courses, I used to express disdain for dust-specks-vs.-torture-type problems, and to offer arguments along the lines of both (1) and (2) for why I should decline to engage with such questions. My guess is that (2) was my actual motivation -- I could feel aspects of my moral concern breaking when I considered trolley problems and the like -- and, having not read OB, and tending to believe that arguments were like soldiers, I then argued for (1) as well.
When I left philosophy, though, and started actually thinking about what kind of a large-scale world we want, I was surprised to find that the discussions I'd claimed were inapplicable (with argument (1)) were glaringly applicable. If you’re considering what people shouldn’t tile the light-cone with, or even if you’re just considering aid to Africa, large-scale schematic beliefs about how to navigate tradeoffs are, in fact, a better guide than are folk moral intuitions about what a good friendship looks like. The central examples around which human moral intuitions are built just don’t work well for some of the most important decisions we do in fact need to make.
But despite its inconvenience, (2) may in fact pose a problem, AFAICT.
I'd support idealized thought experiments even if the world were boring. The answers to boring moral problems come, or should come, from some process you can decompose into several simple modular parts, and these parts can be individually refined on idealized examples in a way that's cleaner and safer than refining the whole of them together on realistic examples. Not letting answers to thought experiments leak into superficially similar real situations takes a kind of discipline, but it's worth it for people to build this discipline.