
Stingray comments on Open Thread, Jul. 27 - Aug 02, 2015 - Less Wrong Discussion

Post author: MrMind, 27 July 2015 07:16AM, 5 points




Comment author: snarles, 27 July 2015 02:49:11PM, 0 points

Disclaimer: I am lazy and could have done more research myself.

I'm looking for work on what I call "realist decision theory." (A loaded term, admittedly.) To explain realist decision theory, contrast it with naive decision theory. My explanation is brief, since my main objective at this point is fishing for answers rather than presenting my own ideas.

Naive Decision Theory

  1. Assumes that individuals make decisions individually, without need for group coordination.

  2. Assumes individuals are perfect consequentialists: their utility function is only a function of the final outcome.

  3. Assumes that individuals have utility functions which do not change with time or experience.

  4. Assumes that the experience of learning new information has neutral or positive utility.

Hence a naive decision protocol might be:

  • A person decides whether to take action A or action B.

  • An oracle tells the person the possible scenarios that could result from action A or action B, with probability weightings.

  • The person subconsciously assigns a utility to each scenario; this utility function is fixed. The person then chooses whichever of action A or B maximizes expected utility.

  • As a consequence of the above assumptions, the person's decision is the same regardless of the order of presentation of the different actions.

Note: we assume physical determinism, so the person's decision is known in advance even to the oracle. But we suppose the oracle can perfectly forecast counterfactuals; to emphasize this point, we might call it a "counterfactual oracle" from now on.
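To make the protocol concrete, here is a minimal sketch in Python. Every specific in it (the actions, scenarios, probabilities, and utility values) is an invented illustration, under the naive assumptions of a fixed utility function defined over final outcomes only:

```python
# A minimal sketch of the naive decision protocol. All scenarios,
# probabilities, and utilities below are illustrative assumptions.

# The "counterfactual oracle": for each action, the possible
# resulting scenarios with probability weightings.
oracle = {
    "A": [("outcome1", 0.7), ("outcome2", 0.3)],
    "B": [("outcome1", 0.2), ("outcome3", 0.8)],
}

# The person's fixed utility function, defined over final outcomes
# only (assumptions 2 and 3 of the naive theory).
utility = {"outcome1": 10.0, "outcome2": -5.0, "outcome3": 4.0}

def expected_utility(action):
    """Probability-weighted utility of an action's scenarios."""
    return sum(p * utility[scenario] for scenario, p in oracle[action])

# The choice depends only on the expected utilities, so the order of
# presentation cannot matter (the last bullet above).
best = max(oracle, key=expected_utility)
print(best, {a: round(expected_utility(a), 2) for a in oracle})
```

Everything that follows relaxes some part of this picture.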

It should be no surprise that the above model of utility is extremely unrealistic. I am aware of experiments demonstrating non-transitivity of preferences, for instance. Realist decision theory contrasts with naive decision theory in several ways.
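As a quick check of why intransitive preferences break the naive model, a brute-force search (with invented outcome labels) confirms that a preference cycle admits no utility representation at all:

```python
from itertools import permutations

# Cyclic preferences of the kind found experimentally:
# A over B, B over C, C over A. The labels are illustrative.
outcomes = ("A", "B", "C")
prefers = [("A", "B"), ("B", "C"), ("C", "A")]

# Try every assignment of distinct utility ranks to the outcomes;
# none makes "preferred" coincide with "higher utility".
representable = False
for ranks in permutations(range(len(outcomes))):
    u = dict(zip(outcomes, ranks))
    if all(u[x] > u[y] for x, y in prefers):
        representable = True
print(representable)  # False: step 3 of the naive protocol cannot run
```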

Realist Decision Theory

  1. Acknowledges that decisions are not made individually but jointly with others.

  2. Acknowledges that in a group context, actions have a utility in and of themselves (signalling), separate from the utility of the resulting scenarios.

  3. Acknowledges that an individual's utility function changes with experience.

  4. Acknowledges that learning new information constitutes a form of experience, which may itself have positive or negative utility.

Relaxing any one of the four assumptions radically complicates the decision theory. Relax only assumptions 1 and 2, and game theory becomes necessary. Relax only 3 and 4, so that for all purposes only one individual exists in the world: then points 3 and 4 mean that the order in which a counterfactual oracle presents the relevant information affects the individual's final decision. Furthermore, an ethically implemented decision procedure would let the individual choose which pieces of information to learn, so there is no guarantee the individual will end up learning all the information relevant to the decision, even when time is not a limitation.
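A toy simulation makes the order-dependence concrete. Both update rules below are invented purely for illustration; the point is only that once learning reshapes the utility function (points 3 and 4), state-dependent updates need not commute, so the oracle's presentation order can flip the decision:

```python
# Toy model of relaxed assumptions 3 and 4: learning is an experience
# that reshapes the utility function. Both update rules are invented
# for illustration; what matters is only that they do not commute.

def learn_risk(u):
    # News about downside risk: the agent discounts option B.
    u = dict(u)
    u["B"] *= 0.5
    return u

def learn_upside(u):
    # News about upside: the agent doubles whichever option it
    # currently favours (a state-dependent, non-commuting update).
    u = dict(u)
    favourite = max(u, key=u.get)
    u[favourite] *= 2
    return u

initial = {"A": 3.0, "B": 4.0}

risk_first = learn_upside(learn_risk(initial))
upside_first = learn_risk(learn_upside(initial))

print(max(risk_first, key=risk_first.get))      # A: B was demoted first
print(max(upside_first, key=upside_first.get))  # B: B doubled while ahead
```

Whichever piece of news the oracle presents first determines the final choice, even though the individual receives exactly the same information in both cases.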

It would be great to know which papers have considered relaxing the assumptions of a "naive" decision theory in the way I have outlined.

Comment author: Stingray, 27 July 2015 06:57:54PM, 1 point

> Acknowledges that in a group context, actions have a utility in and of themselves (signalling), separate from the utility of the resulting scenarios.

Why do people even signal anything? To get something for themselves from others. Why would signalling be outside the scope of consequentialism?

Comment author: snarles, 28 July 2015 01:35:11PM, 0 points

Ordinarily, yes, but you could imagine scenarios where agents have the option to erase their own memories or essentially commit group suicide. (I don't believe such scenarios are implausibly extreme; they could come up in transhuman contexts.) In that case nobody even remembers which action you chose, so there is no extrinsic motivation for signalling.