taw comments on Evolved Bayesians will be biased - Less Wrong

Post author: taw 20 August 2009 02:54PM


Comment author: taw 20 August 2009 10:04:21PM 0 points

By being internally inconsistent, and only saved by your mistakes in A.

For example, it can be argued that a proper D should treat the risk of dying in all possible ways the same. If a person's D considers dying of a shark attack worse than dying of an infection (given a similar level of suffering, etc.), and their A has a completely wrong idea of how likely shark attacks and infections are, they might take precautions against sharks and infections that are exactly correct. If they find out what A is really like, and start using it, their decisions suddenly become inconsistent.
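
A minimal sketch of this compensating-errors point, with invented numbers (the risk figures and the 100x factors are hypothetical, not from the comment):

```python
# Hypothetical numbers: D overrates shark deaths 100x, A underrates
# shark risk 100x, and the two biases cancel exactly.
true_p   = {"shark": 1e-6, "infection": 1e-4}    # actual annual risks
biased_a = {"shark": 1e-8, "infection": 1e-4}    # A: sharks underrated 100x
biased_d = {"shark": -100.0, "infection": -1.0}  # D: shark deaths overrated 100x
proper_d = {"shark": -1.0, "infection": -1.0}    # equal disutility for dying

def expected_disutility(p, u):
    return {cause: p[cause] * u[cause] for cause in p}

# The wrong beliefs and the wrong utilities cancel out:
print(expected_disutility(biased_a, biased_d))  # same numbers as the line below
print(expected_disutility(true_p, proper_d))

# Correct A alone, keep the biased D, and sharks are overweighted 100x:
print(expected_disutility(true_p, biased_d))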

Of course you can argue from a fundamentalist position that a utility function is "never wrong", but if you can be trivially Dutch booked, or have ridiculously inconsistent preferences between essentially equivalent outcomes (like dying), then it's "wrong" as far as I'm concerned.
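
For the Dutch-book claim, the standard textbook construction makes it concrete; the credences below are invented for illustration:

```python
# Invented credences: three mutually exclusive, exhaustive causes of death
# whose believed probabilities sum to 1.2 instead of 1.
credence = {"shark": 0.2, "infection": 0.5, "other": 0.5}
stake = 1.0  # each bet pays `stake` if its outcome occurs

# The agent values a bet on X at credence[X] * stake and will pay that price.
price_paid = sum(c * stake for c in credence.values())  # 1.2

# Exactly one outcome occurs, so exactly one bet pays off.
payout = stake  # 1.0

print(f"paid {price_paid:.2f}, won {payout:.2f}, "
      f"guaranteed loss {price_paid - payout:.2f}")
```

Whatever actually happens, the agent is down 0.20: a sure loss extracted purely from the incoherence of its probabilities.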

Comment author: SforSingularity 21 August 2009 06:52:13PM 1 point

For example, it can be argued that a proper D should treat the risk of dying in all possible ways the same

It is not logically inconsistent to prefer dying in one way over another.

Comment author: taw 21 August 2009 07:17:24PM 0 points

If you buy into a fundamentalist interpretation of utility functions, then it's not. If you don't, then it is: to me, there should be some difference in something "meaningful" for there to be a difference in preferences; otherwise it's not a good utility function.

Even with a fundamentalist interpretation you still get the known inconsistencies in people's probabilities, so it doesn't save you.

Comment author: SforSingularity 22 August 2009 12:15:06PM 1 point

I think the strongest critique of D is that most people choose things that they later honestly claim were not "what they actually wanted"; i.e., D acts something like a stable utility function D_u with a time- and mood-dependent error term D_error added to it. It causes many people much suffering that their own actions don't live up to the standards of what they consider to be their true goals.
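
A toy rendering of that model (my construction, not SforSingularity's; the option names, utilities, and noise level are all assumptions):

```python
import random

def d_u(option):
    """The stable utility D_u the agent endorses on reflection."""
    return {"work_on_goal": 10.0, "procrastinate": 2.0}[option]

def choose(options, mood_sd=6.0):
    """Choose by D_u plus a mood-dependent error term D_error."""
    return max(options, key=lambda o: d_u(o) + random.gauss(0.0, mood_sd))

random.seed(0)
options = ["work_on_goal", "procrastinate"]
choices = [choose(options) for _ in range(1000)]
regrets = sum(1 for c in choices if d_u(c) < max(map(d_u, options)))
print(f"choices later regretted by the agent's own D_u: {regrets}/1000")
```

On average the agent's behaviour tracks D_u, but a sizable fraction of individual choices are ones it would disown afterwards, which is the suffering described above.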

Probabilistic inconsistencies in action are probably less of a problem for humans, though not completely absent.

Comment author: Technologos 21 August 2009 03:28:46AM 1 point

Even more to the point, imagine D split into two parts, a utility function and a goal-seeking function. Then even if the utility function is never "wrong" per se, the goal-seeking function could use A suboptimally in pursuing the goals. Our D-functions routinely make poor decisions of this second sort, e.g. akrasia.
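
One hedged sketch of that split (the plans and per-step rewards are invented): a farsighted and a myopic goal-seeker share the same, perfectly good utility function but apply it over different horizons, and the myopic one produces the akrasia-like failure.

```python
def utility(rewards):
    """The (correct) utility function: total reward over a plan."""
    return sum(rewards)

# Invented plans, expressed as per-step rewards:
plans = {
    "exercise": [-3, 1, 10],  # costly now, pays off later
    "binge_tv": [2, 0, 0],    # pleasant now, worthless later
}

# Same utility function, different goal-seekers:
farsighted = max(plans, key=lambda p: utility(plans[p]))      # whole plan
myopic     = max(plans, key=lambda p: utility(plans[p][:1]))  # first step only

print(f"farsighted picks {farsighted}, myopic picks {myopic}")
```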