mattnewport comments on The Fundamental Question - Less Wrong

43 Post author: MBlume 19 April 2010 04:09PM




Comment author: mattnewport 23 April 2010 10:28:50PM 0 points [-]

Newcomb's problem is just a case of making decisions when someone else, who "knows you very well", has already made a decision based on an expectation of your decision.

Which sounds a lot like Pascal's wager to me, when your decision is whether to believe in God, and God is the person who "knows you very well" and is deciding whether to let you into heaven based on whether you believe in him or not.

There are situations, which I guess are what you would describe as 'Newcomb-like', where I would do the equivalent of one-boxing. If Omega shows up this evening, though, I will be taking both his boxes, because there is too big an epistemic gap for me to cross to reach the point of thinking that one-boxing is sensible in this universe.

Comment author: RobinZ 23 April 2010 10:48:03PM *  0 points [-]

But the plausibility of a hypothetical is unrelated to the correct resolution of the hypothetical. One could equally say that two-boxing implies that you should push the man off the bridge in the trolley problem - the latter is just as unphysical as Newcomb's problem. The proper objection to unreasonable hypotheticals is to claim that they do not resemble, in the relevant aspects, the real-world situations one might compare them to.

Comment author: mattnewport 23 April 2010 10:54:42PM 0 points [-]

I actually think that implausible hypotheticals are unhelpful and probably actively harmful which is why I usually don't involve myself in discussions about Omega. I wish I'd stuck with that policy now.

Comment author: NancyLebovitz 24 April 2010 01:32:45PM 0 points [-]

Why do you think implausible hypotheticals are unhelpful and probably harmful? It seems to me that they're a lot of work for no obvious reward, but I don't have a more complex theory.

Anyone have an example of the examination of an implausible hypothetical paying off?

Comment author: mattnewport 26 April 2010 06:02:54AM 1 point [-]

I think implausible hypotheticals are often intuition pumps. If they are used as part of an attempt to convince the audience of a certain point of view, I automatically get suspicious. If the point of view is correct, why can't it be illustrated with a plausible hypothetical or a real-world example? They often seem to be constructed in a way that tries to move attention away from certain aspects of the situation described, and thus allow dubious assumptions to be hidden in plain sight.

Basically, I always feel like someone is trying to pull a philosophical sleight of hand when they pull out an implausible hypothetical to make their case, and such hypotheticals often seem to be used in arguments that are wrong in subtle or hard-to-detect ways. I feel like I encounter them far more in arguments for positions that I ultimately conclude are incorrect than as support for positions I ultimately conclude to be correct.

Comment author: NancyLebovitz 26 April 2010 07:57:40AM 1 point [-]

That's interesting, and might apply to the trolley problem, which implies that people can have much more knowledge of the alternatives than they are ever likely to have.

Ethical principles and empathy (as a sort of unconscious ethical principle) are needed when you don't have detailed knowledge, but I haven't seen the trolley problem extended to the usual case of not knowing very many of the effects.

It might be worth crossing the trolley problem with Protected from Myself.

Taking a look at ethical intuitions with specifics: Sex, Drugs, and AIDS: the desire to only help when it will make a big difference and the desire to not help unworthy people add up to worse effects than having a less dramatic view of the world. Having AIDS drugs doesn't mean it makes sense to slack off on prevention as much as has happened.

Comment author: mattnewport 26 April 2010 04:26:07PM 0 points [-]

Yes, the trolley problems are another example of harmful implausible hypotheticals, in my opinion. The different reaction many people have to the same underlying ethical question framed as a trolley problem vs. an organ-donor problem is, I think, illustrative of the pernicious influence of implausible hypotheticals on clear thought.

Comment author: Jack 24 April 2010 03:56:47PM *  0 points [-]

Well, the fact that they're implausible pretty much means the cash rewards are going to have to wait until they are plausible. But don't we think clear thinking is its own reward?

I've found that such things are incredibly crucial for getting people to think clearly about personal identity. In fact I don't know if I have any way of explaining or defending my views on personal identity to the philosophically untrained without implausible hypotheticals. Same goes for understanding skepticism, causality, maybe induction, problems with causal decision theory (obviously), anthropics, simulation...

I'm all about being aware that using implausible hypotheticals can generate error but I am bewildered by the sudden resistance to them on this thread: we use them all the time here!

Comment author: RobinZ 23 April 2010 11:08:58PM *  0 points [-]

I would be dead chuffed to talk about the wisdom of considering implausible hypotheticals instead, if that's what you'd prefer to do. (:

Edit: I would be equally happy to drop the thread entirely, if that's what you prefer.

Comment author: mattnewport 23 April 2010 11:29:06PM 3 points [-]

OK, let me try to nail down my true objection here. Is Pascal's wager a good reason to believe in God? No. Hypothetically, if you had good reason to believe that the hypothesis of the Christian god existing were massively more likely than other hypotheses of similar complexity, would it be a good reason to believe in god? Well, not really - it doesn't add much in that case.

Similarly, if Omega showed up at my apartment this evening, would I one-box? No. Hypothetically, if I had good reason to believe that an Omega-like entity existed and did this kind of thing (which is the set-up for Newcomb's problem), would I one-box? Well, probably yes, but you've glossed over the rather radical change to my epistemic state required to make me believe such an implausible thing.

I guess I have a general problem with a certain kind of philosophical thought experiment that tries to sneak a truly colossal amount of implausibility into its premises and asks you not to notice, and then, whenever you keep pointing to the implausibility, tells you to ignore it and focus on the real question. Well, I'm sorry, but the staggering implausibility over there in the corner is more significant, in my opinion, than the question you want me to focus on... (Forgive the casual use of 'you' here - I'm not intending to refer to you specifically.)

Comment author: Jack 24 April 2010 03:43:03PM *  2 points [-]

I don't understand. A hypothetical can be dangerous if it keeps us from attending to aspects of the problem we're trying to analyze - like the Chinese Room, which fails to properly convey the powers it would have to have for us to declare it conscious. The fact that a hypothetical is implausible might make it harder for us to notice that we're not attending to certain issues, I guess. That hardly seems grounds for rejecting them outright (indeed, Dennett uses plenty of intuition pumps). And the implausibility itself really is irrelevant. No one is claiming that the hypothetical will occur, so why should the probability of its occurrence be an issue?

Comment author: mattnewport 26 April 2010 06:16:03AM 0 points [-]

Using Newcomb's problem as an example, it seems like it glosses over important details of how much evidence you would actually need to believe in an Omega-like entity, and as a result confuses more than it illuminates. Re-reading some of Eliezer's posts on it, I get the impression that he is hinting that his resolution of the issue is connected to that problem. It seems to me that it causes a lot of unnecessary confusion, because humans are susceptible to stories that require suspension of disbelief in highly implausible occurrences that they would not actually suspend their disbelief for if encountered in real life. This might be an example of Robin Hanson's near/far distinction.

Tyler Cowen's cautionary tale about the dangers of stories covers some of the same kinds of human biases that I think are triggered by implausible hypotheticals.

Comment author: Jack 26 April 2010 08:52:45PM *  1 point [-]

Using Newcomb's problem as an example, it seems like it glosses over important details of how much evidence you would actually need to believe in an Omega-like entity, and as a result confuses more than it illuminates.

It certainly does gloss over that... I mean, it has to; you'd require a lot of evidence. But the reason it does so is that the question isn't whether Omega could exist or how we can tell when Omega shows up... the details are buried because they aren't relevant. How does Newcomb's problem confuse more than it illuminates? It illustrates a problem/paradox. We would not be aware of that paradox were it not for the hypothetical. I suppose it confuses in the sense that one becomes aware of a problem one wasn't previously aware of - but that's the kind of confusion we want.

Tyler Cowen's cautionary tale about the dangers of stories covers some of the same kinds of human biases that I think are triggered by implausible hypotheticals.

It's a great video and I'm grateful you linked me to it but I don't see where the problems with the kind of stories Cowen was discussing show up in thought experiments.

Comment author: PhilGoetz 26 April 2010 09:06:23PM 2 points [-]

How does Newcomb's problem confuse more than it illuminates? It illustrates a problem/paradox. We would not be aware of that paradox were it not for the hypothetical.

The danger is that you can use a hypothetical to illustrate a paradox that isn't really a paradox, because its preconditions are impossible. A famous example: Suppose you're driving a car at the speed of light, and you turn on the headlights. What do you see?

Comment author: Jack 26 April 2010 10:13:25PM 0 points [-]

This is a danger. Good point.

Comment author: mattnewport 26 April 2010 09:30:01PM 0 points [-]

How does Newcomb's problem confuse more than it illuminates? It illustrates a problem/paradox.

It confuses because it doesn't really show a problem/paradox. That is not obvious, because of the peculiar construction of the hypothetical. If you actually had enough evidence to make it seem like one-boxing was the obvious choice, then it wouldn't seem like a paradoxical choice. The problem is that people generally aren't able to imagine themselves into such a scenario, and so think they should two-box, and then think there is a paradox (because you 'should' one-box). They quite reasonably aren't able to imagine themselves into such a scenario because it is wildly implausible. The paradox is just an artifact of the difficulties we have in mentally dealing with highly implausible scenarios.

I don't see where the problems with the kind of stories Cowen was discussing show up in thought experiments.

Specifically what I had in mind was the fact that people seem to have a natural willingness to suspend disbelief and accept contradictory or wildly implausible premises when 'story mode' is activated. We are used to listening to stories and we become less critical of logical inconsistencies and unlikely scenarios because they are a staple of stories. Presenting a thought experiment in the form of a story containing a highly implausible scenario takes advantage of a weakness in our mental defenses which exists for story-shaped language and leads to confusion and misjudgement which we would not exhibit if confronted with a real situation rather than a story.

Comment author: Jack 26 April 2010 10:07:56PM 1 point [-]

If you actually had enough evidence to make it seem like one-boxing was the obvious choice then it wouldn't seem like a paradoxical choice. The problem is people generally aren't able to imagine themselves into such a scenario and so think they should two-box and then think there is a paradox (because you 'should' one-box).

No. The choice is paradoxical because no matter how much evidence you have of Omega's omniscience, the choice you make can't change the amount of money in the box. As such, traditional decision theory tells you to two-box, because the decision you make can't affect the amount of money in the boxes. No matter how much money is in the boxes, you gain more by two-boxing. Most educated people are causal decision makers by default. So a thought experiment where causal decision makers lose is paradox-inducing. If one-boxing were the obvious choice, people would feel the need to posit new decision theories as a result.
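The tension Jack describes can be made concrete with a little arithmetic, using the standard payoffs from the literature ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 in the transparent box); the 99% predictor accuracy is purely an illustrative assumption:

```python
# Newcomb's problem payoffs: the opaque box holds $1,000,000 if the
# predictor foresaw one-boxing, else $0; the transparent box holds $1,000.
BIG, SMALL = 1_000_000, 1_000

# The causal-dominance argument: for any FIXED contents of the opaque box,
# two-boxing pays exactly SMALL more than one-boxing.
for opaque in (0, BIG):
    assert (opaque + SMALL) - opaque == SMALL

# The evidential calculation: treating your choice as evidence about the
# prediction, with an assumed predictor accuracy of 99%, one-boxing has a
# far higher expected payoff.
p = 0.99
ev_one_box = p * BIG                # predictor almost surely filled the box
ev_two_box = (1 - p) * BIG + SMALL  # predictor almost surely left it empty

print(ev_one_box, ev_two_box)
```

Both calculations are internally valid, which is exactly why the hypothetical is taken to generate a paradox rather than merely an implausibility.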

Comment author: byrnema 24 April 2010 12:17:42AM 0 points [-]

Out of curiosity, what average utility would you estimate for belief in God? Or do you feel that trying to estimate this forces suspended disbelief in implausible scenarios?

Comment author: mattnewport 24 April 2010 12:31:03AM 2 points [-]

Which god? The God Of Abraham, Isaac, And Jacob? The Christian, Muslim or Jewish flavour? It would seem this is quite important in the context of Pascal's wager. Some gods are notoriously specific about the form my belief should take in order to win infinite utility. I don't see any compelling evidence to prefer any of the more popular god hypotheses over any other, nor to prefer them over the infinitude of other possible gods that I could imagine.

Some of the Norse gods were pretty badass though, they might be fun to believe in.

Comment author: byrnema 24 April 2010 12:39:02AM 0 points [-]

... if I may put the question differently: what average utility do you estimate for not believing in any God?

Comment author: mattnewport 24 April 2010 12:57:10AM 1 point [-]

This strikes me as a rather odd question. I thought we were more or less agreed that beliefs don't generally have utility. The peculiarity of Pascal's wager and religious belief in general is that you are postulating a universe in which you are rewarded for holding certain beliefs independently of your actions. In a universe with no god (which I claim is a universe much like our own) belief in god is merely false belief and generally false beliefs are likely to cause bad decisions and thus lead to sub-optimal outcomes.

If the belief in god is completely free-floating and has no implications for actions, then it may not have any direct negative effect on expected utility. Presumably, given the finite computational capacity of the human brain, holding non-consequential false beliefs is a waste of resources and so has slight negative utility. It strikes me that this is not the kind of belief in god that people are usually trying to defend when invoking Pascal's wager, however.

Comment author: NancyLebovitz 24 April 2010 01:40:22PM 1 point [-]

This strikes me as a rather odd question. I thought we were more or less agreed that beliefs don't generally have utility.

I'm not sure that beliefs don't generally have utility. It seems to me that beliefs (or something like beliefs) do a lot to organize action. There's a difference between doing something because of short-term reward and punishment and doing the same thing because one thinks it's generally a good idea.

Comment author: byrnema 24 April 2010 01:50:14AM *  0 points [-]

Hmm. I think beliefs do have a utility, whether or not you can act on that utility by choosing a belief, and whether or not you can accurately estimate the utility. If you believe something, you will act as though you believe it, so believing in something inherits the utility of acting as though you do. It seems very strange to think of someone acting as though they believe something without actually believing it. There are exceptions, but for the most part, if someone bets on a belief, it is because they believe it.

Comment author: RobinZ 24 April 2010 12:09:27AM 0 points [-]

That looks like a good heuristic you are using - it seems related to the idea of the intuition pump.

...wow, that was a short time-to-agreement. :D

Comment author: mattnewport 24 April 2010 12:20:50AM 1 point [-]

it seems related to the idea of the intuition pump.

Yeah, I think I was always averse to this sort of philosophical sophistry but reading Consciousness Explained probably crystallized my objection to it at a relatively early age.