All of kaz's Comments + Replies

Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that's a bad sign for your model.

That would be failing, but 3^^^3 people blinking != you blinking. You just don't comprehend the size of 3^^^3.
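For scale, here is a small illustrative Python sketch of Knuth's up-arrow notation (the helper name up is just for illustration); 3^^3 is already about 7.6 trillion, and 3^^^3 is a power tower of 3s that many levels tall:

    # Knuth's up-arrow notation: up(a, n, b) is a followed by n arrows, then b.
    def up(a, n, b):
        if n == 1:
            return a ** b          # one arrow is ordinary exponentiation
        if b == 0:
            return 1               # base case of the recursion
        return up(a, n - 1, up(a, n, b - 1))

    print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
    # 3^^^3 = up(3, 3, 3) is a tower of 3s 7,625,597,484,987 levels tall --
    # far beyond anything computable here, and 3^^^^3 is unimaginably larger still.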

Yeah, absolutely, I definitely agree with that.

Well, it's self-evident that that's silly. So, there's that.

[This comment is no longer endorsed by its author]

The amazing thing is that this is a scientifically productive rule - finding a new representation that gets rid of epiphenomenal distinctions, often means a substantially different theory of physics with experimental consequences!

(Sure, what I just said is logically impossible, but it works.)

That's not a logical impossibility; it's just a property of the way we change our models. When you observe that X always seems to equal Y, that's redundancy in your model; if you find a model that matches all known observations equally but also compresses X to be th... (read more)

I see your general point, but it seems like the solution to the Omega example is trivial if Omega is assumed to be able to predict accurately most of the time:
(letting C = Omega predicted correctly; let's assume for simplicity that Omega's fallibility is the same for false positives and false negatives)

  • if you choose just one box, your expected utility is $1M * P(C)
  • if you choose both boxes, your expected utility is $1K + $1M * (1 - P(C))
    Setting these equal to find the equilibrium point:
    1000000 * P(C) = 1000 + 1000000 * (1 - P(C))
    1000 * P(C) = 1001 - 1000 * P(C)
... (read more)
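A quick numeric check of that equilibrium, as a minimal Python sketch (function names are just for illustration; the $1M / $1K payoffs and the symmetric error rate are the ones assumed in the comment, and solving the last line above gives P(C) = 1001/2000):

    # Expected utilities from the comment, as functions of Omega's accuracy P(C).
    def one_box_eu(p_correct):
        return 1_000_000 * p_correct                  # $1M * P(C)

    def two_box_eu(p_correct):
        return 1_000 + 1_000_000 * (1 - p_correct)    # $1K + $1M * (1 - P(C))

    p_star = 1001 / 2000   # equilibrium from 1000 * P(C) = 1001 - 1000 * P(C), i.e. 0.5005
    print(one_box_eu(p_star), two_box_eu(p_star))     # both 500500.0

So under this simple model, anything better than about 50.05% accuracy already makes one-boxing the higher-expected-utility choice.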

and yes, as soon as ends justify means, you do get Stalin, Mao, Pol Pot, who were all striving for good consequences......

As soon as? That's a very strong statement.

I don't think utilitarianism suggests that "the ends justify the means" in the way that you are claiming - a more utilitarian view would be "all of the effects of the means justify the means" i.e. side effects are relevant.

I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number."

You can say anything, but Graham's number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.
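To put rough numbers on that, with 10^80 taken as a deliberately absurd, assumed overestimate of the number of molecule impacts involved:

    \[
      \text{total disutility} \le \frac{10^{80}}{G} \approx 0,
      \quad \text{since } G \ge 3\uparrow\uparrow\uparrow\uparrow 3 \gg 10^{80}.
    \]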

or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in Th

... (read more)
3bgaesop
Yes, this seems like a good argument that we can't add up disutility for things like "being bumped into by particle type X" linearly. In fact, it seems like having 1, or even (whatever large number I breathe in a day) molecules of air bumping into me is a good thing, and so we can't just talk about things like "the disutility of being bumped into by kinds of particles".

Yeah, of course. Why, do you know of some way to accurately access someone's actually-existing Utility Function in a way that doesn't just produce an approximation of an idealization of how ape brains work? Because me, I'm sitting over here using an ape brain to model itself, and this particular ape doesn't even really expect to leave this planet or encounter or affect more than a few billion people, much less 3^^^3. So it's totally fine using something accurate to a few significant figures, trying to minimize errors that would have noticeable effects on these scales.

Yes, I agree.

Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that's a bad sign for your model.

Yeah, absolutely, I definitely agree with that.

I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can't say you're absolutely certain of the Virgin Birth.

I think that latter statement is equivalent to this:

V = Virgin Birth
G = God appears and proclaims ~V

P(V|G) < 1
∴ P(V) < 1

But that argument is predicated on P(G) > 0. It is internally consistent to believe P(V|G) < 1 and yet P(V) = 1, as long as one also believes P(G) = 0, i.e. one is certain that God will not appear and proclaim ~V.
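That is just the law of total probability at work; a minimal Python sketch, with made-up probabilities for illustration:

    # P(V) = P(V|G) * P(G) + P(V|~G) * P(~G)  -- law of total probability
    def p_v(p_g, p_v_given_g, p_v_given_not_g):
        return p_v_given_g * p_g + p_v_given_not_g * (1 - p_g)

    print(p_v(0.01, 0.5, 1.0))  # 0.995: any nonzero P(G) with P(V|G) < 1 forces P(V) < 1
    print(p_v(0.0,  0.5, 1.0))  # 1.0: with P(G) = 0 the conditional term does no work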

0robertzk
Go a little farther. Let G(X) = God appears and proclaims X. For religions with acknowledgment of divine revelation, which is all major religions, P(G(X)) has been non-zero for certain X (people have received revelation directly from God). Indeed, granting ultimate authority to God, again a factor of all major religions, means that 0 < P(G(X)) < 1 for all X (granting that there is a statement X such that humans know God will not appear and proclaim X is removing ultimate authority from God and assigning part of it to humans--by the way, we can assume the space of X's is countable so there is no problem with summing to 1). So it is not internally consistent to assume, in particular, that P(G(~V)) = 0, without abandoning ultimate authority to God (or probability theory as a way of reasoning about this stuff, as most religions opt to do).

Of course the more productive question is what evolutionary mechanisms allowed human brain architecture the ability to get so off-par with reality but productive from a Darwinian point of view. Some would argue that potential to be so absurdly wrong is what gives brains their computational power in the first place! Bounded rationality under physical constraints is a very active area of research.