Rain comments on Pathological utilitometer thought experiment - Less Wrong

4 Post author: Rain 26 October 2010 03:13PM


Comment author: Rain 28 October 2010 04:09:23PM *  0 points [-]

If almost every action is static noise apart from its predictable consequences, is it not a sensible approximation to assume that the static noise is going to be, on average, equal?

In my estimation, it seems likely that either the sign of total utility flips between positive and negative based on every act (very large swings, butterfly effect), or all utility is canceled out by noise after the short term (anchoring to null).

In which case, you can value the predictable consequences, and let the unpredictable consequences cancel.

If you fail to do that, you can't get a utility value for anything, not even for the utility of making a better utilitometer.

Hence pathology.

Comment author: Kingreaper 28 October 2010 05:07:46PM *  1 point [-]

In my estimation, it seems likely that either the sign of total utility flips between positive and negative based on every act (very large swings, butterfly effect), or all utility is canceled out by noise after the short term (anchoring to null).

This is a strange version of the gambler's fallacy; the random noise doesn't "cancel out" the chosen act. If I set my D20 down with the 1 face up 20 times in a row, that doesn't make it any less likely that I'll roll a 1 during a game.

Imagine a game where you first place a fair coin heads up (winning 5,000 utilons) or tails up (losing 5,000 utilons), and then flip it 10 million times, winning 500 utilons for every flip that comes up heads and losing 500 utilons for every tails.

Sure, the unpredictable (chaotic) effects are much larger than the predictable effects, but they don't cancel them out.

Putting the coin down heads-up is, on average, 10,000 utilons better.
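The expected-value claim above can be checked with a quick Monte Carlo sketch. This is an illustrative simulation, not part of the original thread: the flip count is scaled down from 10 million to keep it fast, and the function names are my own.

```python
import random

random.seed(0)  # reproducible demo

def play_round(heads_up: bool, n_flips: int = 900) -> int:
    """One round of the game: a deliberate placement worth +/-5,000
    utilons, then n_flips fair coinflips worth +/-500 utilons each."""
    placement = 5_000 if heads_up else -5_000
    heads = sum(random.getrandbits(1) for _ in range(n_flips))
    return placement + 500 * (2 * heads - n_flips)  # (#heads - #tails) * 500

trials = 2_000
avg_heads = sum(play_round(True) for _ in range(trials)) / trials
avg_tails = sum(play_round(False) for _ in range(trials)) / trials

# Any single round is dominated by noise (std. dev. 500 * sqrt(900) =
# 15,000 utilons, three times the size of the placement), yet the
# averages still differ by roughly 10,000 utilons: the chaotic flips
# do not cancel out the deliberate choice.
print(round(avg_heads - avg_tails))
```

The difference of the two averages hovers around the 10,000-utilon gap predicted by expectation, even though the per-round noise swamps the signal.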

Just like torturing someone for no reason is, on average, going to produce a worse world-outcome than giving someone chocolate for no reason.

Comment author: Rain 28 October 2010 05:42:44PM *  0 points [-]

I disagree, primarily on the grounds of when you take the measure of utility. As usual, you're measuring immediately after the event occurs, whereas all of my previous statements have been about a measure many years after. It is not at all clear to me that short term effects like those you describe end up with long term average effects that can be calculated, or would be of the desired sign. Events are not discrete.

How does giving a random person chocolate for no reason affect them over the course of their whole life?

Comment author: Kingreaper 28 October 2010 07:37:18PM 0 points [-]

I disagree, primarily on the grounds of when you take the measure of utility.

Do you disagree with just my real-world application, or also with my coinflip example?

It is not at all clear to me that short term effects like those you describe end up with long term average effects that can be calculated, or would be of the desired sign.

Let's say you have two choices; one is "+500 utilons and then other stuff"; the other is "-500 utilons and then other stuff", where you don't know anything about the nature of "other stuff". Why can you not cancel out the unknowns? Your best information about both unknowns is identical, is it not?
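The cancellation argument in the paragraph above can be sketched numerically. The setup is hypothetical: I model "other stuff" as draws from one arbitrary, high-variance distribution that is, crucially, the same for both choices.

```python
import random

random.seed(1)  # reproducible demo

def other_stuff() -> float:
    """The unknown downstream consequences. The key assumption: whatever
    this distribution is, it is the SAME for both choices. Its shape here
    is arbitrary; its scale dwarfs the known +/-500 difference."""
    return random.gauss(0, 50_000) + random.choice((-50_000, 0, 50_000))

trials = 400_000
avg_a = sum(500 + other_stuff() for _ in range(trials)) / trials   # "+500, then other stuff"
avg_b = sum(-500 + other_stuff() for _ in range(trials)) / trials  # "-500, then other stuff"

gap = avg_a - avg_b
# Identically distributed unknowns cancel in expectation, leaving only
# the known 1,000-utilon difference (plus shrinking sampling noise).
print(round(gap))
```

Because the best information about both unknowns is identical, they contribute the same expected value to each side, and only the known difference survives the subtraction.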

How does giving a random person chocolate for no reason affect them over the course of their whole life?

On average better than torturing them would. Do you disagree?

Comment author: Rain 28 October 2010 08:47:33PM *  0 points [-]

Do you disagree with just my real-world application, or also with my coinflip example?

Both.

Let's say you have two choices; one is "+500 utilons and then other stuff"; the other is "-500 utilons and then other stuff", where you don't know anything about the nature of "other stuff". Why can you not cancel out the unknowns? Your best information about both unknowns is identical, is it not?

Too clean; money is not utilons. I think I can see part of the problem. The standard definition of utility seems to contain the time element within it, rather than allowing context and flow into the future to affect the object (not utilons!) itself. Does using the very word 'utility' create a point-in-time effect?

On average better than torturing them would. Do you disagree?

Maybe. I'm mainly trying to say, "I don't know", because I'm caught in some weird loop of calculation over unknown quantities.

Comment author: Kingreaper 28 October 2010 08:50:42PM *  0 points [-]

Both.

Okay, let's concentrate on this for a second: why do you disagree with the coinflip example?

Do you feel that the two sets of coinflips DON'T have the same average utility? Do you feel that the average utility of the coinflips isn't zero?

Do you feel that utility can't be measured? (In which case, whence the utilitometer?)

Comment author: Rain 28 October 2010 08:58:27PM 1 point [-]

I feel that the word 'utilons' needs to be disambiguated or tabooed, and that once I see the actual winnings (money? prestige? sweet, sweet heroin?), I could see how it might be 'utilons' at the point it's won, but negative utility later on.

Comment author: Kingreaper 28 October 2010 09:00:35PM *  1 point [-]

Okay, let's make it money, and assume you're a money-optimiser.

Or make it utilons, and you've been told it's utilons by your friend, who has a utilitometer.

EDIT: (I am forced to give such arbitrary, but certain, examples by the nature of the issue you're having; you seem to treat anything with an uncertain component as completely indistinguishable, to the extent that torture becomes indistinguishable from chocolate.)

Hmmm, perhaps there is one example that could work: replace utilons with "hours' worth of progress on making the utilitometer", but make all the negative amounts 0 instead.

In each of these cases: do the random bits cancel?

Comment author: Rain 28 October 2010 09:16:19PM 1 point [-]

During the flipping of the coin, and the winning of the utilons, yes. If they're taking the measure with the utilitometer at the point in time of winning, then it will show 'utilons', but I think that's the wrong place to take the measurement. There's the possibility that more now means less later, or overall. If they take the measurement at the end of time, then I would expect either massive differences between each coin flip, as measured by the utilitometer, or no effect whatsoever.

I still think the problem is inherent in the definition, though, so asking me questions based strictly on that definition is, uh, problematic, even as a thought experiment.

Value is complex. Humans are contradictory. I doubt there is such a thing as a true utilon, or a simplistic optimizer of any kind. I asked Clippy what it valued, and didn't get satisfactory results when talking about prediction and value problems.

Comment author: Kingreaper 28 October 2010 09:23:31PM *  1 point [-]

During the flipping of the coin, and the winning of the utilons, yes. If they're taking the measure with the utilitometer at the point in time of winning, then it will show 'utilons', but I think that's the wrong place to take the measurement. There's the possibility that more now means less later, or overall. If they take the measurement at the end of time, then I would expect either massive differences between each coin flip, as measured by the utilitometer, or no effect whatsoever.

The friend with the utilitometer set it up so that there are no differences between the flips. One might alter windflow over the Arctic, the other might kill a fish in the Pacific; the total utility is the same.

Value is complex. Humans are contradictory. I doubt there is such a thing as a true utilon, or a simplistic optimizer of any kind.

Then why bother trying to make a utilitometer?

Remember all those unintended consequences? Your making an imperfect utilitometer is as likely to have huge negative effects on the far future as any other action you take.

And your making a perfect utilitometer is impossible; the total future is unbounded.

Again: if you have two possibilities that are, on average, the same apart from a small, known difference, why can't you cancel the part that's the same and look at the difference? (E.g., torturing someone to death or giving them chocolate: both are almost equally likely to prevent the end of the world [there is good reason to think the chocolate is the better choice in that regard, but the effect is minor], and both are equally likely to decrease the death toll in the year 5583224308, but one gives someone chocolate and the other tortures the person to death.)