JGWeissman comments on Rationality Quotes January 2013 - Less Wrong

6 Post author: katydee 02 January 2013 05:23PM


Comment author: JGWeissman 16 January 2013 02:49:30PM 7 points [-]

Suppose you had the chance to save the life of one sparrow, but doing so kills you with probability p. For what values of p would you do so?

If the answer is only when p=0, then your value of sparrows should never affect your choices, because it will always be dominated by the greater probability of your own welfare.
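The dominance argument can be sketched in a few lines. This is a hypothetical two-tier (lexicographic) agent, with made-up utility values for illustration: the top tier (your own survival) is compared first, and the sparrow tier is consulted only on an exact tie.

```python
# Sketch of the dominance argument for a two-tier (lexicographic) agent.
# u_self and u_sparrow are hypothetical utilities, not anything from the thread.

def choose(p, u_self=1.0, u_sparrow=1.0):
    """Return 'save' or 'refuse' given death-probability p of saving the sparrow."""
    save_top = (1 - p) * u_self   # expected top-tier utility: survive with prob 1-p
    refuse_top = u_self           # survive for certain
    if save_top != refuse_top:    # the top tier decides whenever p > 0
        return "save" if save_top > refuse_top else "refuse"
    return "save"                 # exact tie (p == 0): the sparrow tier breaks it

assert choose(0.0) == "save"
assert all(choose(p) == "refuse" for p in (1e-12, 0.01, 0.5))
```

For every p > 0, however tiny, the top tier disagrees between the two actions, so the sparrow tier never gets a vote; it matters only on the measure-zero tie at p = 0.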

Comment author: Kawoomba 18 January 2013 12:07:13PM 2 points [-]

A strong argument, well done.

This indeed puts me in a conundrum: If I answer anything but p=0, I'm giving a kind of weighting factor that destroys the supposedly strict separation between tiers.

However, if I answer p=0, then indeed as long as there is anything even remotely or possibly affecting my top tier terminal values, I should rationally disregard pursuing any other unrelated goal whatsoever.

Obviously, as evident by my writing here, I do not solely focus all my life's efforts on my top tier values, even though I claim they outweigh any combination of other values.

So I am dealing with my value system in an irrational way. However, there are two possible conclusions concerning my confusion:

  • Are my supposed top tier terminal values in fact outweigh-able by others, with "just" a very large conversion coefficient?

or

  • Do I in fact rank my terminal values as claimed, and am simply making bad choices, failing to match my behavior to those values and wasting time on things not strictly related to my top values? (Is it just an instrumental rationality failure?) Anything with one terminal value weighted infinitely above all its other values should behave strictly isomorphically to a paperclip maximizer with just that one terminal value, at least in our universe.

This could be resolved by Omega offering me a straight out choice, pressing buttons or something. I know what my consciously reflected decision would be, even if my daily routine does not reflect that.

Another case of "do as I say (I'd do in hypothetical scenarios), not as I do (in daily life)" ...

Comment author: wedrifid 18 January 2013 02:57:28PM 0 points [-]

This indeed puts me in a conundrum: If I answer anything but p=0, I'm giving a kind of weighting factor that destroys the supposedly strict separation between tiers.

Well, you could always play with some fun math...

Comment author: [deleted] 18 January 2013 03:58:41PM 0 points [-]

Even that would be equivalent to an expected utility maximizer using just real numbers, except that there's a well-defined tie-breaker to be used when two different possible decisions would have the exact same expected utility.
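The equivalence claimed here is easy to exhibit. In this hypothetical sketch, a two-tier agent that compares (top, low) pairs lexicographically picks the same option as a real-valued expected utility maximizer over the top tier alone, with the lower tier invoked only as a tie-breaker (option names and values are invented for illustration):

```python
# A lexicographic chooser and an "EU maximizer + tie-breaker" chooser
# agree on every decision. options maps name -> (expected_top, expected_low).

def lex_best(options):
    # Python compares tuples lexicographically, so max over the pairs
    # is exactly the two-tier lexicographic choice.
    return max(options, key=lambda o: options[o])

def tiebreak_best(options):
    # Maximize the real-valued top tier; break exact ties with the lower tier.
    best_top = max(top for top, _ in options.values())
    tied = {o: v for o, v in options.items() if v[0] == best_top}
    return max(tied, key=lambda o: tied[o][1])

opts = {"A": (3.0, 0.0), "B": (3.0, 5.0), "C": (2.9, 100.0)}
assert lex_best(opts) == tiebreak_best(opts) == "B"
```

Note that C loses despite its enormous lower-tier payoff: no amount of lower-tier value compensates for any top-tier deficit, which is the strict separation between tiers under discussion.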

Comment author: MugaSofer 20 January 2013 01:34:12PM *  -2 points [-]

How often do two options have precisely the same expected utility? Not often, I'm guessing. Especially in the real world.

Comment author: [deleted] 20 January 2013 05:57:47PM *  1 point [-]

I guess almost never (in the mathematical sense). OTOH, in the real world the difference is often so tiny that it's hard to tell its sign -- but then, the thing to do is gather more information or flip a coin.

Comment author: ArisKatsaris 16 January 2013 03:04:59PM -1 points [-]

If the answer is only when p=0, then your value of sparrows should never affect your choices, because it will always be dominated by the greater probability of your own welfare.

Not sure that holds. Surely there could be situations where you can't meaningfully calculate whether acting to preserve the life of a sparrow increases or decreases the probability of your death; in that case you act to preserve its life because, though you consider it a fundamentally lesser terminal value, it's still a terminal value.

Comment author: JGWeissman 16 January 2013 03:18:40PM 0 points [-]

Surely there could be situations where you can't meaningfully calculate whether acting to preserve the life of a sparrow increases or decreases the probability of your death

In this case you try harder to figure out a way to calculate the impact on your chance of death. The value of information of such an effort is worth infinite sparrow lives. Lower tier utility functions just don't matter.

Comment author: ArisKatsaris 16 January 2013 03:39:49PM *  -1 points [-]

In this case you try harder to figure out a way to calculate the impact on your chance of death. The value of information of such an effort is worth infinite sparrow lives.

What if you've already estimated that calculating excessively (e.g. beyond a minute) on this matter will have near-definite negative impact on your well-being?

Comment author: JGWeissman 16 January 2013 03:52:07PM *  0 points [-]

Then you go do something else that's relevant to your top-tier utility function.

You can contrive a situation where the lower tier matters, but it looks like someone holding a gun to your head, and threatening to kill you if you don't choose in the next 5 seconds whether or not they shoot the sparrow. That sort of thing generally doesn't happen.

And even then, if you have the ability to self-modify, the cost of maintaining a physical representation of the lower-tier utility functions is greater than the marginal benefit of choosing to save the sparrow because your lower-tier utility function says so rather than, say, choosing alphabetically.