Suppose you are discussing whether it’d be okay to flick the switch if a train were about to collide with and destroy an entire world, as a way to try to sell someone on utilitarian ethics (see the trolley problem). The other person objects that this is an unrealistic situation and so there is no point wasting time on the discussion.
Philosophy has a very bad track record at reliably producing knowledge. Reliable knowledge is much more often produced by interaction with the real world and empirical learning.
If you allow people to substantially affect your beliefs with hypotheticals that have nothing to do with reality, I don't think you will update in the right direction.
Nassim Taleb has written a lot about how poor the track record is of people who hold beliefs that are detached from reality and based only on abstract reasoning.
“Okay, your argument seems to check out at first glance, but I’m rather skeptical that it’d hold up if I spent enough time thinking about it. Anyway, supposing that it were true, why should the real world be anything like A?”
A lot of things don't hold up in reality, but you don't find the flaw by spending significant time thinking about the issue. That's why it's useful not to base too many of your beliefs on abstract arguments, but to base as much as possible on interaction with the real world and exposure to real-world feedback.
I never said that you had to substantially update. I think I might have addressed these points to some extent in the previous two posts, although there is probably more to be said on this. The fact that philosophy has a poor track record is a good point. I'm not going to address it here and now, though; addressing it properly would need its own post, and I generally don't like to invest the effort until I see an issue come up on multiple occasions.
I never said that you had to substantially update.
This can be a bit motte-and-bailey. Without a more precise statement of how much you think one should update, it's hard to talk about it.
It isn't a motte-and-bailey, as I already acknowledged this limitation in the original post: "Obviously we will update to a much lesser degree than if we were confident in the logic, but we still have to update to some extent"
It would be useful to present an example and express in numbers how strongly you believe one should update.
Just to underline here... philosophy has a bad track record because when it finds something concrete and useful, that gets split off into fields like science and ethics, and very abstract things tend to be all that's left.
Hypotheticals are probably in the same class: useful when they apply to reality, sometimes entertaining or stimulating even when they don't... and in some cases neither. The third category is the one I ignore.
Can you give examples from LessWrong where you think people didn't understand a hypothetical and unfairly dismissed it?
Thanks for the suggestion, but I think it is probably better to keep this abstract and not call particular people out. If I'm being honest, trying to remember when I saw an example and then finding it also seems like a lot of work.
How does this play out? An intellectually honest response would be along the lines of: “Okay, your argument seems to check out at first glance, but I’m rather skeptical that it’d hold up if I spent enough time thinking about it. Anyway, supposing that it were true, why should the real world be anything like A?” This is much more honest than simply trying to dismiss the hypothetical by stating that A is nothing like reality.
These are the long and short versions of the same thought.
This is another case of Goodhart's Law. If you update your priors based on being asked about A->B, people can take advantage of that fact, which changes whether it is a good idea to update your prior on being asked about A->B.
That wasn't quite the argument. It was that you should update when asked about A->B if, prima facie, there appeared to be a solid argument for it, even if you were sure it had a flaw somewhere. So it was never automatic.
Of those who have read all three posts, how has it affected your opinion on hypotheticals?
[pollid:1074]
Subtract one "No change, but I already agreed with you" as I needed to vote in order to be able to see results!
However, even if we aren’t confident in the logic, we still have to update our priors once we know that there is an argument for it that at least appears to check out.
I don't buy it.
Update is proportional to surprise. How surprising is it that your stoned housemate or random internet poster is attempting to pose this hypothetical, and what beliefs do you propose need updating?
It isn't surprising that they are proposing it; what is surprising is that their argument that A->B seems to check out at first glance. So if your previous model was that we have no idea whether A->B is true or not, then you should be updating.
My previous model already incorporated the surface features of the hypothetical; that's how I got my initial reaction. What is new about THIS presentation of A->B that I didn't expect?
Is there a concrete example to use? I think there is a lot of variation possible across hypotheticals and across different participants' past experiences with hypotheticals. In a perfect agent, there is no update possible on fictional evidence. In real-world agents, it'll depend entirely on what hasn't already been considered.
Here's a concrete example. Imagine a trolley problem with one person on one track and a million people on another track. If Bob doesn't want to engage with the hypothetical because it is "unrealistic", then his mind has most likely already registered that it would be very hard to argue against switching were he to accept the hypothetical. Many people do this every time a hypothetical comes up and act as though they have no idea whether its conclusion holds. However, this isn't quite true: Bob already knows that he would find it very hard to argue against the point being made; indeed, if it were easy to argue against the point being made, Bob would probably do that instead of dodging the hypothetical. So Bob has to update, but only on the first instance of such a problem.
Maybe stating it in terms of updating obscures things rather than clearing them up? I'm actually not sure I'd write this article the same way if I were writing it again.
I can't follow what priors Bob has in this case and what update you think he should make. I do think that in this example, the presenter of the hypothetical should update on the evidence that Bob doesn't find the fictional case worth discussing.
I think stating things in terms of beliefs (priors and updates) is extremely helpful when discussing communication and reflective knowledge. But I haven't seen the detail needed for it to be a compelling (or even understandable) point on this specific topic.
This post is based on a discussion with ChristianKl on Less Wrong Chat. Thanks!
Many people disagreed with my previous writings on hypotheticals on Less Wrong (link1, link2). For those who still aren’t convinced, I’ll provide another argument for why you should take hypotheticals seriously. Suppose you are discussing whether it’d be okay to flick the switch if a train were about to collide with and destroy an entire world, as a way to try to sell someone on utilitarian ethics (see the trolley problem). The other person objects that this is an unrealistic situation and so there is no point wasting time on the discussion.
This may seem unreasonable, but I suppose a person who believes their time is very valuable may not feel it is worth indulging the hypothetical that A->B unless the other person is willing to explain why this result would relate to how we should act in the real world. This is especially likely if they have had similar discussions before and so have a low prior that the other person will be able to relate it to the real world.
However, at this stage, they almost certainly have to update, in the sense that if they are following the rule of updating on new evidence, they have most likely already received new evidence. The argument is as follows: as soon as you have heard A->B (“if it would save a world, I would flick the switch”), your brain has already performed a surface-level evaluation of that argument. Realistically, the thinker in this situation probably knows that it is really tough to argue that we should allow an entire world to be destroyed instead of ending one life. Now, the fact that it is tough to argue against something doesn’t mean that it should be accepted. For example, many philosophical proofs, or either half of a mathematical paradox, seem very hard to argue against at first, yet we may have an intuitive sense that there is a flaw to be found if we are smart enough and look hard enough.
However, even if we aren’t confident in the logic, we still have to update our priors once we know that there is an argument for it that at least appears to check out. Obviously we will update to a much lesser degree than if we were confident in the logic, but we still have to update to some extent, even if we think the chance of A->B being analogous to the real world is incredibly small, as there will always be *some* chance that it is analogous, assuming the other person isn’t talking nonsense. So even though the analogy hardly seems to fit the real world, and even though you’ve perhaps spent only a second thinking about whether A->B checks out, you’ve still got to update. I’ll add another quick note: you only have to update on the first instance; when you see the same or a very similar problem again, you don’t have to update.
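To make the size of this update concrete, here is a minimal sketch in Python with purely illustrative numbers (the prior and both likelihoods below are my own assumptions, not figures from the argument above). The point is only that as long as a sound-looking argument is even slightly more likely to turn up when the claim is true than when it is false, Bayes’ theorem forces a small but nonzero shift:

```python
# A toy Bayesian update with made-up numbers (all values below are
# illustrative assumptions, not figures from the post).
# Hypothesis H: "A->B holds and A is analogous to the real world."
# Evidence  E: "the argument for A->B survives a surface-level check."

prior_h = 0.02          # assumed: you initially find H very unlikely

# Assumed likelihoods: a sound-looking argument is somewhat more likely
# to exist if H is true than if it is false.
p_e_given_h = 0.9       # if H is true, the argument probably checks out
p_e_given_not_h = 0.5   # flawed claims often also look fine at first glance

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(f"prior:     {prior_h:.4f}")      # 0.0200
print(f"posterior: {posterior_h:.4f}")  # ~0.0354: small but nonzero update
```

Note that the shift is tiny, matching the claim that we update far less than if we were confident in the logic, and that presenting the same evidence a second time leaves the posterior where it is, matching the first-instance-only caveat.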
How does this play out? An intellectually honest response would be along the lines of: “Okay, your argument seems to check out at first glance, but I’m rather skeptical that it’d hold up if I spent enough time thinking about it. Anyway, supposing that it were true, why should the real world be anything like A?” This is much more honest than simply trying to dismiss the hypothetical by stating that A is nothing like reality.
There’s one objection that I need to answer. Maybe you say that you haven’t considered A->B at all. I would be really skeptical of this. There is a small chance I’m committing the typical mind fallacy, but I’m pretty sure that your mind considered both A->B and “this is analogous to reality”, and that you decided to challenge the second because you didn’t find a strong counter-argument against A->B. And if you did actually find a strong counter-argument but chose to challenge the hypothetical instead, why not use your counter-argument? Why not engage with your opponent directly and take down their argument, as this is more persuasive than dodging the question? There are probably situations where this seems reasonable, such as when the argument against A->B is very long and complicated but you think it is much easier to convince the other person that the situation isn’t analogous. These situations might exist, but I suspect they are relatively rare.