
Dagon comments on Open Thread, Jul. 27 - Aug 02, 2015 - Less Wrong Discussion

Post author: MrMind, 27 July 2015 07:16AM


Comment author: snarles, 27 July 2015 02:49:11PM, 0 points

Disclaimer: I am lazy and could have done more research myself.

I'm looking for work on what I call "realist decision theory." (A loaded term, admittedly.) To explain realist decision theory, contrast with naive decision theory. My explanation is brief since my main objective at this point is fishing for answers rather than presenting my ideas.

Naive Decision Theory

  1. Assumes that individuals make decisions individually, without need for group coordination.

  2. Assumes individuals are perfect consequentialists: their utility function is only a function of the final outcome.

  3. Assumes that individuals have utility functions which do not change with time or experience.

  4. Assumes that the experience of learning new information has neutral or positive utility.

Hence a naive decision protocol might be:

  • A person decides whether to take action A or action B

  • An oracle tells the person the possible scenarios that could result from action A or action B, with probability weightings.

  • The person subconsciously assigns a utility to each scenario; this utility function is fixed. The person chooses action A or B according to which maximizes expected utility.

  • As a consequence of the above assumptions, the person's decision is the same regardless of the order of presentation of the different actions.

Note: we assume physical determinism, so the person's decision is even known in advance to the oracle. But we suppose the oracle can perfectly forecast counterfactuals; to emphasize this point, we might call it a "counterfactual oracle" from now on.
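Under these assumptions, the protocol fits in a few lines. A minimal sketch, where the oracle is modeled as a table of probability-weighted outcomes and the utility function is a hypothetical stand-in (none of these names come from the comment itself):

```python
def naive_decide(actions, oracle, utility):
    """Naive decision protocol: pick the action with the highest
    expected utility under a fixed utility function.

    oracle  -- dict mapping each action to a list of
               (probability, outcome) pairs
    utility -- fixed function from outcome to a real number
    """
    def expected_utility(action):
        return sum(p * utility(out) for p, out in oracle[action])
    # The maximum is taken over a fixed set, so the order in which
    # actions are presented cannot affect the result.
    return max(actions, key=expected_utility)

# Hypothetical example: a 50/50 gamble versus a sure thing.
oracle = {
    "A": [(0.5, 100), (0.5, 0)],  # expected utility 50
    "B": [(1.0, 40)],             # expected utility 40
}
choice = naive_decide(["A", "B"], oracle, utility=lambda x: x)
```

With a linear utility the gamble wins; a sufficiently concave (risk-averse) utility would flip the choice, but either way the same inputs always produce the same decision.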

It should be no surprise that the above model of utility is extremely unrealistic; I am aware of experiments demonstrating non-transitivity of preferences, for instance. Realist decision theory contrasts with naive decision theory in several ways.

Realist Decision Theory

  1. Acknowledges that decisions are not made individually but jointly with others.

  2. Acknowledges that in a group context, actions have a utility in and of themselves (signalling) separate from the utility of the resulting scenarios.

  3. Acknowledges that an individual's utility function changes with experience.

  4. Acknowledges that learning new information constitutes a form of experience, which may itself have positive or negative utility.

Relaxing any one of the four assumptions radically complicates the decision theory. Consider relaxing only assumptions 1 and 2: then game theory becomes necessary. Consider relaxing only assumptions 3 and 4, so that for all practical purposes only one individual exists in the world: then the order in which a counterfactual oracle presents the relevant information affects the individual's final decision. Furthermore, an ethically implemented decision procedure would allow the individual to choose which pieces of information to learn, so there is no guarantee that the individual will end up learning all the information relevant to the decision, even when time is not a limitation.
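To make the order-dependence in the second case concrete, here is a toy sketch. The update rule, a preference weight that shifts with each piece of information and saturates (a crude model of entrenchment), is invented purely for illustration; nothing in the comment specifies it:

```python
def decide_after_learning(deltas, threshold=0.0):
    """Each information item shifts a preference weight toward
    action A (positive delta) or action B (negative delta).
    The weight saturates at +/-1, a toy model of a utility
    function that changes with experience (assumption 3 relaxed).
    """
    weight = 0.0
    for delta in deltas:
        # Saturation makes the update path-dependent: evidence
        # received while the weight is pinned at +/-1 is lost.
        weight = max(-1.0, min(1.0, weight + delta))
    return "A" if weight > threshold else "B"

# Same two pieces of evidence, different presentation order:
first = decide_after_learning([+2.0, -1.0])   # 0 -> 1 (saturated) -> 0
second = decide_after_learning([-1.0, +2.0])  # 0 -> -1 -> 1
```

The same evidence yields opposite decisions depending on which item the oracle presents first, which is exactly why the choice of what to learn, and in what order, becomes ethically nontrivial.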

It would be great to know which papers have considered relaxing the assumptions of a "naive" decision theory in the way I have outlined.

Comment author: Dagon, 27 July 2015 03:26:33PM, 1 point

Unpack #1 a bit.

Are you looking for information about situations where an individual's decisions should include predicted decisions by others (which will in turn take into account the individual's decisions)? The [Game Theory Sequence](http://lesswrong.com/lw/dbe/introduction_to_game_theory_sequence_guide/) is a good starting point.

Or are you looking for cases where "individual" is literally not the decision-making unit? I don't have any good Less Wrong links, but both [Public Choice Theory](http://lesswrong.com/lw/2hv/public_choice_and_the_altruists_burden/) and the idea of sub-personal decision modules come up occasionally.

Both topics fit into the overall framework of classical decision theory (naive or not, you decide) and expected value.

Items 2-4 don't contradict classical decision theory, but fall somewhat outside it. Decision theory generally addresses instrumental rationality: how best to get what one wants, rather than questions of what to want.

Comment author: snarles, 28 July 2015 01:42:03PM, 1 point

Thanks for the references.

I am interested in answering questions of "what to want." Not only is it important for individual decision-making, but there are also many interesting ethical questions. If a person's utility function can be changed through experience, is it ethical to steer it in a direction that would benefit you? Take the example of religion: suppose you could convince an individual to convert to a religion, and then further convince them to actively reject new information that would endanger their faith. Is this ethical? (My opinion is that it depends on your own motivations. If you actually believed in the religion, then you might be convinced that you are benefiting others by converting them. If you did not actually believe in the religion, then you are being manipulative.)

Comment author: Dagon, 28 July 2015 04:23:54PM, 0 points

Cool. The [Metaethics Sequence](http://wiki.lesswrong.com/wiki/Metaethics_sequence) is useful for some of those things.

I have to admit that, for myself, I remain unconvinced that there is an objective truth to be had regarding "what should I want". Partly because I'm unconvinced that "I" is a coherent unitary thing at any given timepoint, let alone over time. And partly because I don't see how to distinguish "preferences" from "tendencies" without resorting to unmeasurable guesses about qualia and consciousness.