
Comment author: Unknown3 01 February 2008 07:09:00PM 4 points [-]

Eliezer, I have a question about this: "There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.0001% chance of living a googolplex years and an 80% chance of living forever. This is a sufficient condition to imply that my utility function is unbounded."

I can see that this preference implies an unbounded utility function, given that a longer life has a greater utility. However, simply stated in that way, most people might agree with the preference. But consider this gamble instead:

A: Live 500 years and then die, with certainty.
B: Live forever, with probability 0.000000001%; die within the next ten seconds, with probability 99.999999999%

Do you choose A or B? Is it possible to choose A and have an unbounded utility function with respect to life? It seems to me that an unbounded utility function implies the choice of B. But then what if the probability of living forever becomes one in a googolplex, or whatever? Of course, this is a kind of Pascal's Wager; but it seems to me that your utility function implies that you should accept the Wager.

It also seems to me that the intuitions suggesting to you and others that Pascal's Mugging should be rejected are similarly based on an intuition of a bounded utility function. Emotions can't react infinitely to anything; as one commenter put it, "I can only feel so much horror." So to the degree that people's preferences reflect their emotions, they have bounded utility functions. In the abstract, not emotionally but mentally, it is possible to have an unbounded function. But if you do, and act on it, others will think you a fanatic. For a fanatic cares infinitely for what he perceives to be an infinite good, whereas normal people do not care infinitely about anything.

This isn't necessarily against an unbounded function; I'm simply trying to draw out the implications.
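A quick numerical sketch of that implication may help; the two utility functions below are illustrative assumptions, not anyone's stated values. With an unbounded utility, gamble B eventually beats A once the prize lifespan is long enough, even at a probability of 0.000000001%, whereas a bounded utility never finds the prize worth the near-certain loss.

```python
# Illustrative sketch only: both utility functions below are assumptions chosen
# to show the shape of the argument, not anyone's actual preferences.
import math

P_SURVIVE_B = 1e-11   # gamble B: probability of living on (0.000000001%)
CERTAIN_YEARS = 500   # gamble A: live 500 years with certainty

def expected_utilities(utility, long_life_years):
    """Expected utility of A and B, approximating 'forever' by a long finite life."""
    ev_a = utility(CERTAIN_YEARS)
    ev_b = P_SURVIVE_B * utility(long_life_years)  # dying in ten seconds ~ 0 utility
    return ev_a, ev_b

unbounded = lambda years: years                        # grows without limit
bounded   = lambda years: 1 - math.exp(-years / 100)   # can never exceed 1

for horizon in (1e6, 1e15, 1e30):
    a_u, b_u = expected_utilities(unbounded, horizon)
    a_b, b_b = expected_utilities(bounded, horizon)
    print(f"horizon {horizon:.0e} years: unbounded prefers {'B' if b_u > a_u else 'A'}, "
          f"bounded prefers {'B' if b_b > a_b else 'A'}")
```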

Comment author: thrawnca 29 November 2016 02:19:05AM 0 points [-]

A: Live 500 years and then die, with certainty.
B: Live forever, with probability 0.000000001%; die within the next ten seconds, with probability 99.999999999%

If this were the only chance you would ever get to determine your lifespan, then choose B.

In the real world, it would probably be a better idea to discard both options and use your natural lifespan to search for alternative paths to immortality.

Comment author: Kenny 02 February 2013 07:27:16PM 1 point [-]

Actually, we don't know that our decision affects the contents of Box B. In fact, we're told that it contains a million dollars if-and-only-if Omega predicts we will only take Box B.

It is possible that we could pick Box B even though Omega predicted we would take both boxes. Omega has only been observed to predict correctly 100 times. And if we are sufficiently doubtful whether Omega would predict that we would take only Box B, it would be rational to take both boxes.

Only if we're somewhat confident of Omega's prediction can we confidently one-box and rationally expect it to contain a million dollars.

Comment author: thrawnca 29 November 2016 12:44:26AM *  2 points [-]

somewhat confident of Omega's prediction

51% confidence would suffice.

  • Two-box expected value: 0.51 * $1,000 + 0.49 * $1,001,000 = $491,000
  • One-box expected value: 0.51 * $1,000,000 + 0.49 * $0 = $510,000
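A sketch of the arithmetic behind those two bullet points, using the standard Newcomb payoffs assumed in this thread ($1,000 in the visible box, $1,000,000 in Box B); it also computes the break-even confidence, which comes out just above 50%.

```python
# Expected values for the two strategies, as a function of p = the probability
# that Omega's prediction is correct. Payoffs: $1,000 visible, $1,000,000 in Box B.

def two_box_ev(p):
    # Predicted correctly: Box B is empty, you keep only the visible $1,000.
    # Predicted incorrectly: Box B is full, you get $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

def one_box_ev(p):
    # Predicted correctly: Box B holds $1,000,000. Predicted incorrectly: nothing.
    return p * 1_000_000

print(two_box_ev(0.51), one_box_ev(0.51))  # approximately 491,000 and 510,000

# Break-even: p * 1,000,000 = p * 1,000 + (1 - p) * 1,001,000
print(1_001_000 / 2_000_000)  # 0.5005, so any confidence above ~50.05% favors one-boxing
```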
Comment author: hairyfigment 21 November 2016 08:06:20PM -1 points [-]

Are you actually trying to understand? At some point you'll predictably approach death, and predictably assign a vanishing probability to another offer or coin-flip coming after a certain point. Your present self should know this. Omega knows it by assumption.

Comment author: thrawnca 28 November 2016 05:17:30AM -1 points [-]

At some point you'll predictably approach death

I'm pretty sure that decision theories are not designed on that basis. We don't want an AI to start making different decisions based on the probability of an upcoming decommission. We don't want it to become nihilistic and stop making decisions because it predicted the heat death of the universe and decided that all paths have zero value. If death is actually tied to the decision in some way, then sure, take that into account, but otherwise, I don't think a decision theory should have "death is inevitably coming for us all" as a factor.

Comment author: Wes_W 31 October 2016 08:18:06PM *  1 point [-]

Yes, that is the problem in question!

If you want the payoff, you have to be the kind of person who will pay the counterfactual mugger, even once you can no longer benefit from doing so. Is that a reasonable feature for a decision theory to have? It's not clear that it is; it seems strange to pay out, even though the expected value of becoming that kind of person is clearly positive before you see the coin. That's what the counterfactual mugging is about.

If you're asking "why care" rhetorically, and you believe the answer is "you shouldn't be that kind of person", then your decision theory prefers lower expected values, which is also pathological. How do you resolve that tension? This is, once again, literally the entire problem.
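For concreteness, here is the expected-value comparison being referred to, under the usual statement of the problem; the $10,000 prize appears elsewhere in this thread, while the $100 stake and the fair coin are assumptions.

```python
# Expected value, evaluated before the coin is seen, of being a payer vs. a refuser.
# Assumed payoffs: pay $100 on tails; receive $10,000 on heads if Omega predicts
# that you would have paid on tails.

P_HEADS = 0.5

ev_payer = P_HEADS * 10_000 + (1 - P_HEADS) * (-100)  # 4950.0
ev_refuser = 0.0  # Omega predicts the refusal, so heads pays nothing either way

print(ev_payer, ev_refuser)
```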

Comment author: thrawnca 20 November 2016 10:18:33PM -1 points [-]

How do you resolve that tension?

Well, as I've said before, my view is that the scenario as stated (single-shot with no precommitment) is not the most helpful hypothetical for designing a decision theory. An iterated version would actually be more relevant, since we want to design an AI that can make more than one decision. And in the iterated version, the tension is largely resolved, because there is a clear motivation to stick with the decision: we still hope for the next coin to come down heads.
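A rough simulation of that iterated picture, under the same assumed payoffs as the single-shot calculation above ($100 stake, $10,000 prize, fair coin); the per-round numbers are assumptions, but the qualitative point is that the paying policy keeps coming out ahead as long as more flips are expected.

```python
import random

# Monte Carlo sketch of the iterated variant: each round, Omega flips a fair coin;
# a "payer" pays $100 on tails and collects $10,000 on heads, while a "refuser"
# never pays and is never paid. Payoffs are assumed, as above.

def average_winnings(policy_pays, rounds=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        if rng.random() < 0.5:                      # heads
            total += 10_000 if policy_pays else 0   # Omega only rewards predicted payers
        else:                                       # tails
            total -= 100 if policy_pays else 0
    return total / rounds

print(average_winnings(True), average_winnings(False))  # roughly 4,950 vs 0 per round
```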

Comment author: Utilitarian 14 October 2008 11:44:56PM 2 points [-]

As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another. [...] I don't go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects.

It seems a strong claim to suggest that the limits you impose on yourself due to epistemological deficiency line up exactly with the mores and laws imposed by society. Are there some conventional ends-don't-justify-means notions that you would violate, or non-socially-taboo situations in which you would restrain yourself?

Also, what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?

Comment author: thrawnca 20 November 2016 10:13:42PM 1 point [-]

what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?

If 3^^^^3 lives are at stake, and we assume that we are running on faulty or even hostile hardware, then it becomes all the more important not to rely on potentially-corrupted "seems like this will work".

Comment author: Crux 28 October 2016 08:27:37AM *  0 points [-]

The evolutionary process produced humans, and humans can create certain things that evolution wouldn't have been able to produce without producing something like humans to indirectly produce those things. Your question is no more interesting than, "How could humans have built machines so much faster at arithmetic than themselves?" Well, humans can build calculators. That they can't be the calculators that they create doesn't demand an unusual explanation.

Comment author: thrawnca 31 October 2016 02:18:26AM 0 points [-]

Well, humans can build calculators. That they can't be the calculators that they create doesn't demand an unusual explanation.

Yes, but don't these articles emphasise how evolution doesn't do miracles, doesn't get everything right at once, and takes a very long time to do anything awesome? The fact that humans can do so much more than the normal evolutionary processes can marks us as a rather significant anomaly.

Comment author: Wes_W 28 October 2016 05:30:06AM *  1 point [-]

Your decision is a result of your decision theory, and your decision theory is a fact about you, not just something that happens in that moment.

You can say, "I'm not making the decision ahead of time; I'm waiting until after I see that Omega has flipped tails." In that case, when Omega predicts your behavior ahead of time, he predicts that you won't decide until after the coin flip and would then refuse to pay given tails. So, although the coin flip hasn't happened yet and could still come up heads, your yet-unmade decision has the same effect as if you had loudly precommitted to it.

You're trying to reason in temporal order, but that doesn't work in the presence of predictors.

Comment author: thrawnca 31 October 2016 02:06:56AM *  0 points [-]

Your decision is a result of your decision theory

I get that that could work for a computer, because a computer can be bound by an overall decision theory without attempting to think about whether that decision theory still makes sense in the current situation.

I don't mind predictors in eg Newcomb's problem. Effectively, there is a backward causal arrow, because whatever you choose causes the predictor to have already acted differently. Unusual, but reasonable.

However, in this case, yes, your choice affects the predictor's earlier decision - but since the coin never came down heads, who cares any more how the predictor would have acted? Why care about being the kind of person who will pay the counterfactual mugger, if there will never again be any opportunity for it to pay off?

Comment author: thrawnca 28 October 2016 05:33:55AM *  0 points [-]

Humans can do things that evolutions probably can't do period over the expected lifetime of the universe.

This does raise the question: how, then, did an evolutionary process produce something so much more efficient than itself?

(And if we are products of evolutionary processes, then all our actions are basically facets of evolution, so isn't that sentence self-contradictory?)

Comment author: Wes_W 16 October 2016 06:49:57AM *  1 point [-]

You're fundamentally failing to address the problem.

For one, your examples just plain omit the "Omega is a predictor" part, which is key to the situation. Since Omega is a predictor, there is no distinction between making the decision ahead of time or not.

For another, unless you can prove that your proposed alternative doesn't have pathologies just as bad as the Counterfactual Mugging, you're at best back to square one.

It's very easy to say "look, just don't do the pathological thing". It's very hard to formalize that into an actual decision theory, without creating new pathologies. I feel obnoxious to keep repeating this, but that is the entire problem in the first place.

Comment author: thrawnca 28 October 2016 05:20:17AM *  0 points [-]

there is no distinction between making the decision ahead of time or not

Except that even if you make the decision now, what would motivate you to stick to it once it can no longer pay off?

Your only motivation to pay is the hope of obtaining the $10000. If that hope does not exist, what reason would you have to abide by the decision that you make now?

Comment author: Gradus 25 October 2016 09:49:26PM 0 points [-]

"Policy debates should not appear one-sided" doesn't in this case give credence to the idea that a world with suffering implies the possibility of the God. Quite the opposite. That is a post-hoc justification for what should be seen as evidence to lower the probability of "belief in just and benevolent God." This is analogous to EY's example of the absence of sabotage being used as justification for the concentration camps in "Conservation of Expected Evidence"

Comment author: thrawnca 28 October 2016 05:11:52AM *  0 points [-]

I didn't mean to suggest that the existence of suffering is evidence that there is a God. What I meant was, the known fact of "shared threat -> people come together" makes the reality of suffering less powerful evidence against the existence of a God.
