Consider a simple decision problem: you arrange a date with someone, you arrive on time, and your partner isn't there. How long do you wait before giving up?

Humans naturally respond to this problem by acting outside the box. Wait a little, then send a text message. If that option is unavailable, pluck a reasonable waiting time from cultural context, e.g. 15 minutes. If that option is unavailable...

Wait, what?

The toy problem was initially supposed to help us improve ourselves - to serve as a reasonable model of something in the real world. The natural human solution seemed too messy and unformalizable, so we progressively removed nuances to make the model more extreme. We introduced Omegas, billions of lives at stake, total informational isolation, and perfect predictors, finally arriving at some sadistic contraption that any normal human would run away from. But did the model stay useful and instructive? Or did we lose important detail along the way?

Many physical models, like gravity, have the nice property of stably approximating reality. Perturbing the positions of planets by one millimeter doesn't explode the Solar System the next second. Unfortunately, many of the models we're discussing here don't have this property. The worst offender yet seems to be Eliezer's "True PD", which requires the whole package of hostile psychopathic AIs, nuclear-scale payoffs and informational isolation; any natural out-of-the-box solution like giving the damn thing some paperclips or bargaining with it would ruin the game. The same pattern has recurred in discussions of Newcomb's Problem, where people have stated that any minuscule amount of introspection into Omega makes the problem "no longer Newcomb's". That naturally led to even more ridiculous uses of superpowers, like Alicorn's bead jar game, where (AFAIU) the mention of Omega is only required to enforce a certain assumption about its thought mechanism that's wildly unrealistic for a human.

Artificially hardened logic problems make brittle models of reality.

So I'm making a modest proposal. If you invent an interesting decision problem, please, first model it as a parlor game between normal people with stakes of around ten dollars. If the attempt fails, you have acquired a bit of information about your concoction; don't ignore it outright.

40 comments

I agree that the true PD never happens in human existence, and that's yet another reason why I'm outraged at using a mathematically flawed decision theory to teach incoming students of rationality that they ought to betray their friends. (C, C) FTW!

(Actually, that would make a nice button.)

But I defend the use of simple models for the sake of understanding problems with mathematical clarity; if you can't model simple hypothetical things correctly, how does it help to try to model complex real things correctly first? In real life, no one is an economic agent; in real life, no laws except basic physics and theorems therefrom have universal force; in real life, an asteroid can always strike at any time; in real life, we can never use Bayesian reasoning... but knowing a bit of math still helps, even if it never applies perfectly above the level of quarks.

Agree completely. I wasn't advocating ignorance or promoting complex models over simple ones a priori. Only well-fitting and robust simple models over poorly fitting and brittle ones.

There are many small daily problems I can't imagine addressing with math, and most people just cruise on intuition most of the time. Where we set the threshold for using math concepts seems to vary a lot with cognitive ability and our willingness to break out the graphing calculator when it might be of use.

It might be useful to lay down some psychological triggers so that we are reminded to be rational in situations where we too often operate intuitively. Conversely, a systematic account of things that are too trivial to rationalize and best left to our unconscious would be helpful. I'm not sure either sort of rule would be generalizable beyond the individual mind.

Conversely, a systematic account of things that are too trivial to rationalize and best left to our unconscious would be helpful.

This is only helpful if the subconscious reaction is reasonably good. Finding a way to improve the heuristics applied by the subconscious mind would be ideal for this type of thing.

if you can't model simple hypothetical things correctly, how does it help to try to model complex real things correctly first?

Well, people do do better on the Wason selection task when it's presented in terms of ages and drinks than in terms of letters and numbers.

in real life, we can never use Bayesian reasoning

But we can use judgement, a faculty that we have been developing for millennia that allows us to do amazing things that would take far more effort to work out mathematically. While it's possible that you could catch a baseball merely using some calculus and an understanding of Newtonian physics, it's not a feasible way for humans to do it, and 'knowing some math' is not likely to make you any better at it. Similarly, while 'bayesian reasoning' might in principle get you the right answer in ethical questions, it's not a feasible way for humans to do it, and it will likely not help at all.

Similarly, while 'bayesian reasoning' might in principle get you the right answer in ethical questions, it's not a feasible way for humans to do it, and it will likely not help at all.

Maybe I'm missing something, but this analogy seems pretty weak. In general, I suspect that a pretty important factor in our ability to learn effective heuristics without reasoning them out from first principles is that we are consistently given clear feedback on the quality of our actions/decisions. (There's a good bit on this in Jonah Lehrer's The Decisive Moment.)

It's generally pretty obvious whether you've managed to catch a baseball, but there's no equivalent feedback mechanism for making-the-right-moral-decision, so there seems little reason to think that we'll just stumble onto good heuristics, especially outside contexts in which particular heuristics might have conferred a selection advantage.

Do you have concrete reasons for thinking that Bayesian reasoning "likely won't help at all" in answering ethical questions such as "what steps we should take to mitigate the effects of global warming?" It seems pretty useful to me.

ethical questions such as "what steps we should take to mitigate the effects of global warming?"

While I don't often say this, that question doesn't strike me as an ethical question. It seems to turn entirely on questions of what steps would be most effective at producing the desired effect.

When primitives performed human sacrifice to ensure the sun would rise tomorrow, they were not mistaken about ethics - they were mistaken about astronomy.

there's no equivalent feedback mechanism for making-the-right-moral-decision

I disagree - it's usually pretty obvious. While I usually prefer not to talk in terms of "right moral decisions", acting in accord with ethics gets you exactly what you'd expect from it. Ethics specifies criteria for determining what one has most reason to do or want. While what that ends up being is still a matter of disagreement, here are a couple of examples:

consequentialist: do whatever maximizes overall net utility. If you do something to make someone feel good, and you make them feel bad instead, you get immediate feedback as direct and profound as catching a baseball.

virtue ethics: act as the good man does. If you go around acting in a vicious manner, it's obvious to all around that you're nothing like a good person.

While I don't often say this, that question doesn't strike me as an ethical question. It seems to turn entirely on questions of what steps would be most effective at producing the desired effect.

Entirely? It depends on things like how we should weigh the present vs future generations, how we should weigh rich vs poor, whether we're working under ethical constraints other than pure utility maximization. All those are ethical questions.

When primitives performed human sacrifice to ensure the sun would rise tomorrow, they were not mistaken about ethics - they were mistaken about astronomy.

If the probability of the sun rising tomorrow is something other than a unit step function of the number of humans sacrificed, ethics comes in again. Do you sacrifice victim number 386,264 for an added 0.0001% chance of sunrise? Ethical question.

Entirely? It depends on things like how we should weigh the present vs future generations, how we should weigh rich vs poor, whether we're working under ethical constraints other than pure utility maximization. All those are ethical questions.

I'm not sure who the 'we' here are. Ethical questions are questions about what I should do. I see no reason to 'weigh' rich or poor people, or different generations.

There are political questions about what sorts of institutions should be set up, and those things might address collectives of people or whether the poor get to count for more than the rich. But while in some sense 'what political system should I prefer' is an ethical question, the questions relevant to analyzing which institutions to set up are political ones.

If ethical questions are limited to determining criteria for normative evaluation, then your claim that we receive feedback on ethical issues appears false. We receive feedback on the instrumental questions (e.g. what makes people feel good), not the ethical ones.

On the other hand, adopting my broader sense of what constitutes an ethical question seems to falsify my claim that we do not get feedback on "rightness". We do, for the reasons you explain.* (Actually, I think your virtue ethics example is weak, but the consequentialist one is enough to make your point.)

I would still claim that ethical feedback is generally weaker than in the baseball case, particularly once you're thinking about trying to help dispersed groups of individuals with whom you do not have direct contact (e.g. future generations). But my claim that there is no feedback whatsoever was overstated.

Another question: If we define ethics as being just about criteria, is there any reason to think Bayesian reasoning, which is essentially instrumental, should help us reach answers even in principle? (I guess you might be able to make an Aumann-style agreement argument, but it's not obvious it would work.)

* It looks like we both illegitimately altered our definition of "ethical" halfway through our comments. Mmmm... irony.

EDIT:

[what to do about global warming] seems to turn entirely on questions of what steps would be most effective at producing the desired effect.

It turns pretty seriously on what you think the desired effect is as well. Indeed, much of the post-Stern debate was on exactly that issue.

Within moral philosophy, at least, there are two related senses in which philosophers’ typical practice of thought-experiments can seem ill-advised:

  1. They may deal with situations that are strongly unlike the situations in which we actually need to make decisions. Perhaps you’ll never be faced with a runaway trolley, with decisions concerning 3^^^3 dust specks, or with any decision simple enough that you can easily apply your thinking about trolley problems or dust specks.

  2. They may highlight situations that disorient or break our moral intuitions or our notions of value.

    To elaborate a plausible mechanism: The human categories “birds”, “vegetables” (vs. “fruits”, or “herbs”), and “morally right” are all better understood as family resemblance terms (capturing “clusters in thingspace”) than as crisp, explicitly definable, schematic categories that entities do or don’t fall into. Such family resemblance terms arguably gain their meaning, in our heads, from our exposure to many different central examples. Show a person carrots, mushrooms, spinach, and broccoli, with a “yes these are Xes”, and strawberries, cayenne, and rice with a “these aren’t Xes”, and the person will construct the concept “vegetables”. Add in a bunch of borderline cases (“are mustard greens a vegetable or an herb? what exact features point toward and against?”) and the person’s notion of “vegetable” will lose some of its intuitive “is a category”-ness. If there are enough borderline examples in their example-space, “vegetable” won’t be a cluster for them anymore.

    “Is morally right” may similarly be a cluster formed by seeing what kinds of intra- and inter-personal situations work well, or can be expected to be judged well, and may break or weaken when faced with non-“ecologically valid” thought-experiments.

I spent two years in a graduate philosophy department before leaving academic philosophy to try to reduce existential risks. In my grad philosophy courses, I used to express disdain for dust specks vs. torture type problems, and to offer arguments along the lines of both (1) and (2) for why I shouldn't engage with such questions. My guess is that (2) was my actual motivation -- I could feel aspects of my moral concern breaking when I considered trolley problems and the like -- and, having not read OB, and tending to believe that arguments were like soldiers, I then argued for (1) as well.

When I left philosophy, though, and started actually thinking about what kind of a large-scale world we want, I was surprised to find that the discussions I'd claimed were inapplicable (with argument (1)) were glaringly applicable. If you’re considering what people shouldn’t tile the light-cone with, or even if you’re just considering aid to Africa, large-scale schematic beliefs about how to navigate tradeoffs are, in fact, a better guide than are folk moral intuitions about what a good friendship looks like. The central examples around which human moral intuitions are built just don’t work well for some of the most important decisions we do in fact need to make.

But despite its inconvenience, (2) may in fact pose a problem, AFAICT.

I'd support idealized thought experiments even if the world were boring. The answers to boring moral problems come, or should come, from some process you can decompose into several simple modular parts, and these parts can be individually refined on idealized examples in a way that's cleaner and safer than refining the whole of them together on realistic examples. Not letting answers to thought experiments leak into superficially similar real situations takes a kind of discipline, but it's worth it for people to build this discipline.

large-scale schematic beliefs about how to navigate tradeoffs are, in fact, a better guide than are folk moral intuitions about what a good friendship looks like

Not at all. Our 'folk moral intuitions' tell us right quick that we shouldn't tile the light-cone with anything, and I'd need quite a bit of convincing to think otherwise. Similarly, considering aid to Africa can be dealt with entirely within our 'folk moral intuitions', and to think otherwise I'm pretty sure you'd have to beg the question in favor of 'large-scale schematic beliefs about how to navigate tradeoffs'.

That said, I agree wholeheartedly with (1) and (2). Part of the analysis of (1) involves the nature of observation. Intuitions are a sort of observation, and in really strange situations our observations can be confused and fail to match up with reality. While we can rely on our moral intuitions in situations we actually find ourselves facing every day, 'desert island cases' confuse our moral faculties so we shouldn't necessarily trust our intuitions in them. Of course, this starts bleeding into (2).

considering aid to Africa can be dealt with entirely within our 'folk moral intuitions'

This is an issue that our folk moral intuitions can get horribly wrong. It's a lot easier to think "people in Africa are suffering, so it's morally right to help them" than to ask "is X actually going to help them?" and harder still to figure out which intervention will help the most. The difference (from a consequentialist perspective) between efficient charity and average charity is probably much larger than the difference between average charity and no charity.
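Just to put toy numbers on that comparison (the cost-per-outcome figures below are made up purely for illustration, not real charity data):

```python
# Hypothetical cost-effectiveness figures, invented only to illustrate the comparison above.
budget = 1_000_000            # dollars donated
cost_average = 50_000         # hypothetical: dollars per life saved by an average charity
cost_efficient = 1_000        # hypothetical: dollars per life saved by an efficient charity

lives_none = 0
lives_average = budget / cost_average        # 20 lives
lives_efficient = budget / cost_efficient    # 1000 lives

print("average vs. no charity:", lives_average - lives_none)        # 20.0
print("efficient vs. average: ", lives_efficient - lives_average)   # 980.0
```

With numbers anywhere in that ballpark, the efficient-vs-average gap dwarfs the average-vs-nothing gap, which is the point being made.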

This is an issue that our folk moral intuitions can get horribly wrong. It's a lot easier to think "people in Africa are suffering, so it's morally right to help them" than to ask "is X actually going to help them?"

This is true, but in this case what is going wrong is our intuitions about instrumental values, not moral ones. I think thomblake was talking about whether our folk moral intuitions could determine whether it was a good or bad thing if we did something that resulted in less suffering in Africa. Our intuitions about how to effectively accomplish that goal are a whole different beast.

Yes exactly

Many physical models, like gravity, have the nice property of stably approximating reality. Perturbing the positions of planets by one millimeter doesn't explode the Solar System the next second.

The stability of orbits under perturbations of the planets' positions is a nice property (from our perspective of not wanting to crash into the sun) of the physical system of gravity. The fact that our model of gravity explains this stability is a nice property of the model, just as, if the physical system lacked that stability, not predicting a stability that isn't there would be a nice property of the model. As Tarski would say: if orbits are stable, we want our model to predict stable orbits; if orbits are unstable, we want our model to predict unstable orbits.

I didn't mean the stability of planetary systems as t goes to infinity - this is a very non-trivial problem, AFAIK still unsolved. I only meant that, if we slightly perturb the initial conditions at t=0, the outcome at t=epsilon likely won't jump around discontinuously.

I did not intend to dispute the stability of orbits. I meant to point out that the stability is a nice property of the territory, and it is only a nice property of the map because it is a property of the territory. Generally, we should not let our desire that maps have nice mathematical properties override our desire that the map reflect the territory. If the territory has discontinuities at corner cases, the map should reflect them, even though we like continuous functions.

More to the point, there are a variety of systems where the territory displays a degree of sensitive dependence on initial conditions at some scale that makes a stable map impossible.

In fact, on astronomical time scales the dynamics of the solar system (or generally, any multi-body gravitational system) displays such behaviors.
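To put rough numbers on both halves of that, here is a toy sketch of my own (a single planet around a fixed sun, in plain Python, nothing rigorous): integrate the orbit twice, the second time with the starting position nudged by about a metre, and compare the separation at a few horizons.

```python
# Toy illustration: one planet orbiting a fixed sun, integrated twice, once with a
# tiny nudge to its starting position, to see how fast the trajectories separate.
import math

GM = 4 * math.pi ** 2   # sun's GM in AU^3/yr^2, so a circular 1 AU orbit takes 1 year
DT = 1e-3               # timestep in years


def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3


def run(nudge, years):
    """Leapfrog (kick-drift-kick) integration of the planet's position."""
    x, y = 1.0 + nudge, 0.0       # start near 1 AU, optionally perturbed
    vx, vy = 0.0, 2 * math.pi     # circular-orbit speed at 1 AU
    for _ in range(int(years / DT)):
        ax, ay = accel(x, y)
        vx, vy = vx + 0.5 * DT * ax, vy + 0.5 * DT * ay
        x, y = x + DT * vx, y + DT * vy
        ax, ay = accel(x, y)
        vx, vy = vx + 0.5 * DT * ax, vy + 0.5 * DT * ay
    return x, y


for years in (0.01, 1.0, 100.0):
    x0, y0 = run(0.0, years)
    x1, y1 = run(1e-11, years)    # 1e-11 AU is roughly a 1.5 metre nudge
    print(f"after {years:6.2f} yr, separation = {math.hypot(x1 - x0, y1 - y0):.2e} AU")
```

At short horizons the separation stays on the order of the nudge itself; over many orbits it grows steadily (in this one-planet toy mostly through phase drift; genuine multi-body chaos diverges much faster), which fits both points above.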

I forget where, but I read a blog post that described these sorts of things as controlled experiments - you want to test one part of your decision apparatus, not have it confounded by all the others.

Also, you are anthropomorphizing the paperclipper AI. It would accept your bargain, but demand not just a handful but as many paperclips as you could be arm-twisted into making - where your pain at the expense and effort is just below the point where you'd prefer the billion deaths. And then it would exploit your exhausted state to stab you in the back anyway. It's not psychopathic; it's just incrementing a number by expedient means. You can't negotiate with something that won't stay bought.

What's the goal of that controlled experiment? If my decision apparatus fails on Newcomb's problem or the "true PD", does it tell you anything about my real world behavior?

It tells us that your real world behavior has the potential to be inconsistent.

Many people carry a decision apparatus that consists of a mess of unrelated heuristics and ad hoc special-cases. Examining extreme cases is a tool for uncovering places where the ad hoc system falls down, so that a more general system can be derived from basic principles, preferably before encountering a real world situation where the flaws in the ad hoc system become apparent.

To my mind, a better analogy than "controlled experiment" would be describing these as decision system unit tests.
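To make that analogy concrete, here's a rough sketch of what a decision-system unit test might look like (the toy decision rule and the scenarios are invented for illustration; this isn't anyone's actual decision theory):

```python
# Treat a decision rule like ordinary code and probe it with edge cases, the way a
# test suite would. The rule below is a deliberately naive expected-value threshold.
import unittest


def should_act(probability, payoff, cost):
    """Naive rule: act if and only if expected payoff exceeds the cost."""
    return probability * payoff > cost


class DecisionRuleTests(unittest.TestCase):
    def test_everyday_case(self):
        # Small stakes: a 60% shot at $20 is worth a $10 ticket.
        self.assertTrue(should_act(0.6, 20, 10))

    def test_extreme_case(self):
        # Tiny probability, astronomical payoff: the naive rule says to pay up.
        # Whether you endorse that Pascal's-mugging-shaped verdict is exactly what
        # the extreme case is designed to surface.
        self.assertTrue(should_act(1e-12, 1e15, 100))


if __name__ == "__main__":
    unittest.main()
```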

If you pose someone the Monty Hall Problem and their response is "It doesn't matter whether I switch doors or not! They're going to move the prize so that I don't end up getting it anyway!", do you think they've understood the point of the exercise?

As far as I recall, in the actual game show Monty Hall was never required to open a 'goat' door and offer you the switch. In fact, he did so almost exactly often enough to make switching vs. not switching a neutral proposition. I'm not exactly sure why, but this feels very relevant to the point of this post.

To make it look more fair than it actually is.
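For reference, the textbook version (where the host always opens a goat door and always offers the switch) does have a clean answer; a quick simulation sketch, just as a sanity check:

```python
# Simulate the textbook Monty Hall game: the host always opens a goat door you
# didn't pick and always offers the switch.
import random


def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens some door that is neither your pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials


print("stay:  ", play(switch=False))   # comes out near 1/3
print("switch:", play(switch=True))    # comes out near 2/3
```

Staying wins about a third of the time and switching about two thirds, which is exactly the asymmetry that a host who only sometimes makes the offer is free to undercut.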

I am sympathetic to the sentiment underlying this post, but I would stress that the value of "realism" depends on what you're trying to model, and why. If your purpose is to generate reasonable solutions to non-extreme problems/parlor games, then you can lose your purpose by artificially hardening your models beyond what such parlor games require. But if your purpose is to find generally-applicable decision rules that will be robust to extreme circumstances, then you can lose by failing to harden your models sufficiently.

Is there any reason to believe that there are generally-applicable decision rules that will be robust to extreme circumstances, and yet are simple enough to use for the vast majority of non-extreme circumstances?

I don't see the purpose of such thought experiments as being to model reality (we've already got a perfectly good actual reality for that), but to simplify it. Hypothesizing omnipotent beings and superpowers may not seem like simplification, but it is in one key aspect: it reduces the number of variables.

Reality is messy, and while we have to deal with it eventually, it's useful to consider simpler, more comprehensible models, and then gradually introduce complexity once we understand how the simpler system works. So the thought experiments arbitrarily set certain variables (such as predictive ability) to 100% or 0% simply to remove that aspect from consideration.

This does give a fundamentally unrealistic situation, but that's really the point - they are our equivalent of spherical cows. Dealing with all those variables at once is too hard. In situations where it isn't too hard, and we have "real" cases we can fruitfully consider, there's no need for the thought experiment in the first place. Once we understand the simpler system, we have somewhere to start from when we begin adding the complexity back in.

Models are also dangerously seductive. You're gaining precision at the expense of correspondence to reality, which can only be a temporary trade-off if you're ever going to put your knowledge to work.

I most strongly object to modeling as used in economics. Modeling is no longer about getting traction on difficult concepts - building these stylized models has become a goal in and of itself, and mathematical formalization is almost a prerequisite for getting published in a major journal.

I don't see the purpose of such thought experiments as being to model reality (we've already got a perfectly good actual reality for that), but to simplify it.

You seem to misunderstand what models are for. A model is not the actual thing - thus, we do not say, "Why did you build a scale model of the solar system - we have the actual solar system for that!". Instead, models always leave something out - they abstract away the details we don't think are important to simplify thinking about the problem.

Other than that, I agree.

I guess the point is to model artificial intelligences, of which we know almost nothing, so the models and problems need the robustness of logic and simplicity.

That's why they are brittle when used for modeling people.


So I'm making a modest proposal. If you invent an interesting decision problem, please, first model it as a parlor game between normal people with stakes of around ten dollars. If the attempt fails, you have acquired a bit of information about your concoction; don't ignore it outright.

Absolutely not. If I want to play parlor games for stakes around ten dollars I'll do that at home. But that's not what I'm here for. Questions such as "How long do you wait before giving up (on the flakey date)?" are simply not best answered by the types of thinking we are trying to train here. They are best answered by embracing the dark side with both hands and following our social instincts and conditioning.

I absolutely think that Bayesian reasoning and expected utility consequentialism can provide invaluable guides to ordinary interpersonal conflict.

If Bayesian rational thought is not the best answer for questions such as "How long do you wait before giving up (on the flakey date)?", then training oneself to think in those terms is training oneself to use a sub-optimal strategy for the real world.

The entire point, to me, of using rational thought is that I believe rational thought is an optimal strategy. If it's not, why would you do it?

I would prefer a parlor game for $10 any day - {G}

Jonnan

I would suggest that we only use Omega as a tool to allow us to have information, both about the world and about the consequences of our actions, that we can fully trust. Omega's purpose is to handwave facts and/or cause and effect into being in such a way as to create an interesting problem. The moment we need to start thinking about how or why Omega does or might do something, we're off on a different problem that probably doesn't relate to the problem we're looking to investigate. The bead jar game requires looking inside Omega, and therefore would have been better off with a human.

any natural out-of-the-box solution like giving the damn thing some paperclips or bargaining with it would ruin the game.

LOL'd and upvoted.

In fairness, though, "giving the damn thing some paperclips" doesn't work in the long run, even for humans above a certain level of fanaticism. Nonetheless, IAWYC.

Have you ever looked at a paperclip? I mean, really stopped to look?

Heh. And deeply understanding the near-far distinction can be rather exhausting.