
Peter_de_Blanc comments on Value Uncertainty and the Singleton Scenario

Post author: Wei_Dai, 24 January 2010 05:03AM


Comment author: Peter_de_Blanc 27 January 2010 09:46:14PM 5 points

The negotiation approach places no value on information, and that's a big problem with it.

If you're uncertain about which values are correct, it's very important for you to get whatever information you need to reduce your uncertainty. But by Conservation of Expected Evidence, none of those value systems would advocate doing so.

To use the parliament analogy, imagine that you have a button that, when pressed, will randomly increase the vote share of some members of parliament, and decrease the vote share of others. Since this button represents gaining information that you don't already have, no member of parliament can expect to increase his or her vote share by pressing the button. Maybe parliament is indifferent to pressing it, or maybe due to some quirk of the negotiation system they would vote to press it, but they certainly wouldn't expend vast resources to press it. But figuring out your morality is actually worth spending vast resources on!
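
To see the vote-share claim numerically, here is a minimal sketch; the three value systems, their vote shares, and the likelihoods below are all made up for illustration, and none of this code comes from the thread. Vote shares are treated as credences, and the button runs an experiment whose outcome updates them by Bayes' rule:

```python
# A minimal numerical sketch of the "button" (made-up numbers, not from
# the thread). Vote shares are credences in three value systems; the
# button runs an experiment with two outcomes, and shares update by
# Bayes' rule.

priors = [0.5, 0.3, 0.2]              # current vote shares
likelihoods = [                       # P(outcome | value system), outcomes A and B
    [0.9, 0.1],
    [0.5, 0.5],
    [0.2, 0.8],
]

# The parliament's shared prediction for each outcome is the mixture.
p_outcome = [sum(p * lk[d] for p, lk in zip(priors, likelihoods))
             for d in range(2)]

def posterior(i, d):
    """Member i's vote share after observing outcome d (Bayes' rule)."""
    return priors[i] * likelihoods[i][d] / p_outcome[d]

for i in range(3):
    expected = sum(p_outcome[d] * posterior(i, d) for d in range(2))
    print(f"member {i}: share {priors[i]:.3f} -> expected share {expected:.3f}")

# Each row prints identical numbers: by Conservation of Expected
# Evidence, pressing the button has zero expected effect on any member's
# share, so no member will pay anything to press it.
```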

Comment author: Peter_de_Blanc 28 January 2010 03:35:49PM 4 points

Hmm, maybe the way to fix this is to have each agent in the parliament believe that future experiments will validate its position. More precisely, each agent's own predictions condition on its value system being correct. Then the parliament would vote to expend resources on information about which value system is correct.
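
Here is a sketch of how this fix changes the vote, reusing the made-up numbers from the sketch above: each member now forecasts the experiment as if its own value system were correct, i.e. with its own likelihoods rather than the parliament's mixture prediction.

```python
# The same made-up setup, with Peter_de_Blanc's proposed fix: each
# member forecasts the experiment as if its own value system were
# correct, i.e. with its own likelihoods instead of the parliament's
# mixture prediction.

priors = [0.5, 0.3, 0.2]
likelihoods = [[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]]
p_outcome = [sum(p * lk[d] for p, lk in zip(priors, likelihoods))
             for d in range(2)]

for i in range(3):
    # Expected post-experiment share for member i, where the outcome
    # probabilities are member i's own (not the mixture's).
    expected = sum(
        likelihoods[i][d]                                  # member i's forecast
        * priors[i] * likelihoods[i][d] / p_outcome[d]     # Bayes update of share
        for d in range(2))
    print(f"member {i}: share {priors[i]:.3f} -> expects {expected:.3f}")

# Now every member expects its own share to grow (0.5 -> ~0.65,
# 0.3 -> ~0.33, 0.2 -> ~0.37). They can't all be right (that's the
# disagreement doing the work), but each will vote to run the experiment.
```

Since every member expects to gain share under its own forecast, the parliament votes unanimously to spend resources on the experiment.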

Comment author: RobinZ 28 January 2010 03:51:03PM 0 points

Is it possible to enforce that? It seems like specifying a bottom line to me.

Comment author: Peter_de_Blanc 28 January 2010 07:37:21PM 3 points

It would be specifying a bottom line if each sub-agent could look at any result and say afterwards that this result supports its position. That's not what I'm suggesting. I'm saying that each sub-agent should make predictions as if its own value system is correct, rather than having each sub-agent use the same set of predictions generated by the super-agent.

Comment author: RobinZ 28 January 2010 08:01:44PM 0 points

Quick dive into the concrete: I think that legalization of marijuana would be a good thing ... but that evaluation is based on my current state of knowledge, including several places where my knowledge is ambiguous. By Bayes' Rule, I can't possibly have a nonzero expectation for the change in my evaluation based on the discovery of new data.

Am I misunderstanding the situation you hypothesize?
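
(To spell out the Bayesian step being invoked here, with H standing for "legalization is a good thing" and D for the new data; the notation is mine, not RobinZ's:

```latex
\mathbb{E}_D\big[P(H \mid D)\big]
  = \sum_d P(D = d)\, P(H \mid D = d)
  = \sum_d P(H, D = d)
  = P(H)
```

so the expected change, \(\mathbb{E}_D[P(H \mid D)] - P(H)\), is exactly zero, provided the expectation is taken under one's own predictive distribution \(P(D)\).)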

Comment author: Peter_de_Blanc 28 January 2010 10:39:13PM 1 point

You can have a nonzero expectation for the change in someone else's evaluation, which is what I was talking about. The super-agent and the sub-agent have different beliefs.
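
(In the same notation as above, again mine: if the forecast over outcomes comes from the sub-agent's own distribution Q, while the evaluation being tracked updates by the super-agent's P, the cancellation no longer goes through:

```latex
\mathbb{E}_{D \sim Q}\big[P(H \mid D)\big]
  = \sum_d Q(d)\, P(H \mid D = d)
  \neq P(H) \quad \text{in general, when } Q \neq P
```

which is why each sub-agent in the fixed-up parliament can expect the experiment to move the vote shares in its favor.)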

Comment author: RobinZ 29 January 2010 12:17:41AM 0 points

I see - that is sensible.