Vladimir_Nesov comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong

45 Post author: lukeprog 01 February 2011 02:15PM


Comment author: Vladimir_Nesov 02 February 2011 02:25:58AM 0 points [-]

"Expected utility maximisation" is, by definition, what actually represents our best outcome.

No, it's based on certain axioms that are not unbreakable in strange contexts, which in turn assume a certain conceptual framework (where you can, say, enumerate possibilities in a certain way).
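The axioms in question are presumably the von Neumann–Morgenstern axioms. In one standard presentation and numbering (under which "the third" mentioned below is the continuity/Archimedean axiom), for a preference relation $\preceq$ over lotteries $A, B, C$:

```latex
\begin{enumerate}
  \item (Completeness) For all $A, B$: $A \preceq B$ or $B \preceq A$.
  \item (Transitivity) If $A \preceq B$ and $B \preceq C$, then $A \preceq C$.
  \item (Continuity) If $A \preceq B \preceq C$, then there exists
        $p \in [0,1]$ such that $pA + (1-p)C \sim B$.
  \item (Independence) If $A \preceq B$, then for any $C$ and $p \in (0,1]$,
        $pA + (1-p)C \preceq pB + (1-p)C$.
\end{enumerate}
```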

Comment author: endoself 02 February 2011 02:38:26AM 0 points [-]

Name one exception to any axiom other than the third or to the general conceptual framework.

Comment author: Vladimir_Nesov 02 February 2011 02:53:50AM *  0 points [-]

There's no point in assuming completeness, being able to compare events that you won't be choosing between (in the context of utility function having possible worlds as domain). Updateless analysis says that you never actually choose between observational events. And there are only so many counterfactuals to consider (which in this setting are more about high-level logical properties of a fixed collection of worlds, which lead to their different utility, and not presence/absence of any given possible world, so in one sense even counterfactuals don't give you nontrivial events).
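The completeness point can be sketched concretely. Below is a toy illustration (the outcome names and the `REVEALED_PREFERENCES` structure are hypothetical, purely for exposition): a preference relation defined only over the pairs an agent actually faces is a partial order, and completeness simply fails to apply to pairs it never chooses between.

```python
# Hypothetical illustration: a preference relation given only on pairs
# of outcomes the agent actually chooses between, as (better, worse) pairs.
REVEALED_PREFERENCES = {
    ("sunny_world", "rainy_world"),
    ("rainy_world", "stormy_world"),
}

def prefers(a, b):
    """True iff the relation directly ranks a above b."""
    return (a, b) in REVEALED_PREFERENCES

def comparable(a, b):
    """Completeness would require this for every pair of distinct outcomes."""
    return prefers(a, b) or prefers(b, a)

# Pairs the agent actually faces are comparable ...
assert comparable("sunny_world", "rainy_world")
# ... but a pair it never faces need not be: the relation stays partial,
# and nothing forces us to extend it.
assert not comparable("sunny_world", "cloudy_world")
```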

Comment author: endoself 02 February 2011 03:23:56AM 0 points [-]

There's no point in assuming completeness, being able to compare events that you won't be choosing between (in the context of utility function having possible worlds as domain).

Are there ever actually two events for which this would not hold if you did need to make such a choice?

Updateless analysis says that you never actually choose between observational events.

I'm not sure what you mean. Outcomes do not have to be observed in order to be chosen between.

And there are only so many counterfactuals to consider (which in this setting are more about high-level logical properties of a fixed collection of worlds, which lead to their different utility, and not presence/absence of any given possible world, so in one sense even counterfactuals don't give you nontrivial events).

Isn't this just separating degrees of freedom and assuming that some don't affect others? It can be derived from the utility axioms.