kaz

Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that's a bad sign for your model.

That would be failing, but 3^^^3 people blinking != you blinking. You just don't comprehend the size of 3^^^3.
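For a sense of scale, here is a minimal sketch of Knuth's up-arrow notation (my own illustration, not from the original thread). Even the second level already produces 3^^3 = 7625597484987, and 3^^^3 is a power tower of 3s of that height:

    # Minimal sketch of Knuth's up-arrow notation; only tiny inputs are computable.
    def up(a, n, b):
        if n == 1:
            return a ** b                  # a ^ b: ordinary exponentiation
        if b == 0:
            return 1                       # base case of the recursion
        return up(a, n - 1, up(a, n, b - 1))

    print(up(3, 1, 3))   # 3^3  = 27
    print(up(3, 2, 3))   # 3^^3 = 3^(3^3) = 7625597484987
    # up(3, 3, 3) would be 3^^^3: a tower of 3s of height 7625597484987,
    # far too large to compute or even to write down.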

Yeah, absolutely, I definitely agree with that.

Well, it's self-evident that that's silly. So, there's that.

[This comment is no longer endorsed by its author]
kaz

The amazing thing is that this is a scientifically productive rule - finding a new representation that gets rid of epiphenomenal distinctions, often means a substantially different theory of physics with experimental consequences!

(Sure, what I just said is logically impossible, but it works.)

That's not a logical impossibility; it's just a property of the way we change our models. When you observe that X always seems to equal Y, that's redundancy in your model; if you find a model that matches all known observations equally well but also compresses X to be the same thing as Y, your new model is the same as the old one except for having lower complexity - i.e. higher probability. Wherever the new model's predictions differ from the old model's, you should now expect the world to be more likely to behave according to the new model.
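To illustrate the "lower complexity, higher probability" step, here is a minimal sketch under the usual minimum-description-length assumption that a model's prior weight falls off as 2^(-description length in bits); the description lengths below are hypothetical:

    # Sketch only: assumes prior weight proportional to 2**(-description length in bits),
    # as in minimum-description-length reasoning; likelihoods are taken to be equal.
    def posterior_weights(model_bits):
        unnormalized = {name: 2.0 ** (-bits) for name, bits in model_bits.items()}
        total = sum(unnormalized.values())
        return {name: w / total for name, w in unnormalized.items()}

    # Hypothetical description lengths: identifying X with Y saves 10 bits.
    print(posterior_weights({"X and Y separate": 120, "X identified with Y": 110}))
    # The compressed model gets ~99.9% of the weight despite fitting the data identically.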

kaz

I see your general point, but it seems like the solution to the Omega example is trivial if Omega is assumed to be able to predict accurately most of the time:
(letting C = Omega predicted correctly; let's assume for simplicity that Omega's fallibility is the same for false positives and false negatives)

  • if you choose just one box, your expected utility is $1M * P(C)
  • if you choose both boxes, your expected utility is $1K + $1M * (1 - P(C))
    Setting these equal to find the equilibrium point:
    1000000 P(C) = 1000 + 1000000 (1 - P(C))
    1000 P(C) = 1001 - 1000 P(C)
    2000 P(C) = 1001
    P(C) = 1001/2000 = 0.5005 = 50.05%

So as long as you are more than 50.05% sure that Omega's model of the universe describes you accurately, you should pick the one box. It's a little confusing because it seems as though the effect precedes its cause in this situation, but that's not the case; your behaviour affects the behaviour of a simulation of you. Assuming Omega is always right: if you take one box, then you are the type of person who would take the one box, and Omega will see that you are, and you will win. So it's the clear choice.
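For what it's worth, the break-even point can be checked numerically; a minimal sketch assuming the standard $1M/$1K payoffs and a symmetric accuracy p = P(C):

    # Expected payoffs in the standard Newcomb setup, with Omega's accuracy p = P(C)
    # assumed the same for both kinds of prediction error.
    def one_box(p):
        return 1_000_000 * p                  # box B is full iff Omega predicted one-boxing

    def two_box(p):
        return 1_000 + 1_000_000 * (1 - p)    # $1K plus the $1M only if Omega predicted wrongly

    break_even = 1_001_000 / 2_000_000        # from 1000000 * p = 1000 + 1000000 * (1 - p)
    print(break_even)                         # 0.5005
    print(one_box(0.6), two_box(0.6))         # above break-even, one-boxing wins: 600000.0 vs 401000.0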

kaz

and yes, as soon as ends justify means, you do get Stalin, Mao, Pol Pot, who were all striving for good consequences......

As soon as? That's a very strong statement.

I don't think utilitarianism suggests that "the ends justify the means" in the way that you are claiming - a more utilitarian view would be "all of the effects of the means justify the means" i.e. side effects are relevant.

kaz

I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number."

You can say anything, but Graham's number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.
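To make the scale concrete (my own illustration): Graham's number is far too large to represent directly, but even a comically undersized stand-in for it makes the point, using exact rational arithmetic and a made-up impact count:

    from fractions import Fraction

    # Both numbers are stand-ins for illustration only: Graham's number dwarfs 10**100,
    # and the impact count is just "some huge but physically conceivable number".
    graham_stand_in = 10 ** 100
    molecule_impacts = 10 ** 50

    total_disutility = molecule_impacts * Fraction(1, graham_stand_in)
    print(float(total_disutility))   # 1e-50: negligible, even with a denominator far
                                     # smaller than Graham's number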

or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World."

If your utility function ceases to correspond to utility at extreme values, isn't it more of an approximation of utility than actual utility? Sure, you don't need a model that works at the extremes - but when a model does hold for extreme values, that's generally a good sign for the accuracy of the model.

An addendum: 2 more things. The difference between a life with n dust specks hitting your eye and n+1 dust specks is not worth considering, given how large n is in any real life. Furthermore, if we allow for possible immortality, n could literally be infinity, so the difference would be literally 0.

If utility is to be compared relative to lifetime utility, i.e. as (LifetimeUtility + x) / LifetimeUtility, doesn't that assign a higher impact to five seconds of pain for a twenty-year-old who will die at 40 than for a twenty-year-old who will die at 120? Does that make sense?
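A quick numerical illustration of that worry, with made-up numbers in which lifetime utility is proportional to years lived and x is the (negative) utility of five seconds of pain:

    # Made-up numbers: lifetime utility proportional to years lived, x = the disutility
    # of five seconds of pain. The relative measure (lifetime + x) / lifetime makes the
    # same pain matter more in a shorter life.
    def relative_impact(lifetime_utility, x):
        return (lifetime_utility + x) / lifetime_utility

    x = -1.0
    print(relative_impact(40.0, x))    # 0.975   -> a 2.5% relative loss (dies at 40)
    print(relative_impact(120.0, x))   # ~0.9917 -> only a ~0.83% relative loss (dies at 120)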

Secondly, by virtue of your asserting that there exists an action with minimal disutility, you've shown that the Field of Utility is very different from the field of, say, the Real numbers, and so I am incredulous that we can simply "multiply" in the usual sense.

Eliezer's point does not seem to me predicated on the existence of such a value; I see no need to assume multiplication has been broken.

kaz

I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can't say you're absolutely certain of the Virgin Birth.

I think the latter statement is equivalent to this:

V = Virgin Birth
G = God appears and proclaims ~V

P(V|G) < 1
∴ P(V) < 1

But that argument is predicated on P(G) > 0. It is internally consistent to believe P(V|G) < 1 and yet P(V) = 1, as long as one also believes P(G) = 0, i.e. one is certain that God will not appear and proclaim ~V.
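A tiny sketch of why that is consistent: in a joint distribution with P(V) = 1 and P(G) = 0, the conditional P(V|G) comes out as 0/0, so the probability axioms leave it unconstrained:

    # Joint distribution over (V, G) chosen so that P(V) = 1 and P(G) = 0.
    joint = {
        (True,  True):  0.0,
        (True,  False): 1.0,
        (False, True):  0.0,
        (False, False): 0.0,
    }

    p_g = sum(p for (v, g), p in joint.items() if g)   # P(G) = 0.0
    p_v_and_g = joint[(True, True)]                    # P(V and G) = 0.0
    # P(V|G) = P(V and G) / P(G) is 0/0 here: undefined, so holding P(V|G) < 1
    # "if G were ever to happen" puts no constraint on P(V).
    print(p_g, p_v_and_g)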