V_V comments on The Useful Idea of Truth - Less Wrong

77 points · Post author: Eliezer_Yudkowsky 02 October 2012 06:16PM


Comment author: V_V 03 October 2012 10:49:02AM *  1 point

Maybe the first two just argue for Platonism and modal realism (although I note that Eliezer explicitly disclaimed being a modal realist).

I think Yudkowsky is a Platonist, and I'm not sure he has a consistent position on modal realism, since when arguing about morality he seemed to espouse it: see his comment here.

For example, "You should two-box in Newcomb's problem." If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

Therefore, if you say "Alice has a false belief that if she two-boxes in Newcomb's problem then she will maximize her expected utility" you are saying that her belief doesn't correspond to the mathematical constructs underlying Newcomb's problem. If you take the Platonist position that mathematical constructs exist as external entities ("the territory"), then yes, you are saying that her map doesn't correspond to the territory.
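To make "the mathematical constructs underlying Newcomb's problem" concrete, here is a minimal sketch of that expected-utility calculation, assuming the usual illustrative setup ($1,000,000 in the opaque box if the predictor expected one-boxing, $1,000 in the transparent box, a 99%-accurate predictor); the numbers and names are my own placeholders, nothing more:

```python
# Minimal sketch of Newcomb's problem as an expected-utility calculation,
# treating the chosen action as evidence about the prediction (EDT-style).
# Payoffs and predictor accuracy are the usual illustrative values.

BOX_A = 1_000_000   # opaque box: filled only if the predictor expected one-boxing
BOX_B = 1_000       # transparent box: always contains $1,000
ACCURACY = 0.99     # probability that the predictor forecast the choice correctly

def expected_utility(action: str) -> float:
    """Expected payoff of an action, given the predictor's accuracy."""
    if action == "one-box":
        # With probability ACCURACY the predictor foresaw one-boxing and filled box A.
        return ACCURACY * BOX_A
    if action == "two-box":
        # With probability ACCURACY the predictor foresaw two-boxing and left box A empty.
        return ACCURACY * BOX_B + (1 - ACCURACY) * (BOX_A + BOX_B)
    raise ValueError(f"unknown action: {action}")

print(expected_utility("one-box"))   # roughly 990,000
print(expected_utility("two-box"))   # roughly 11,000
```

On this reading, "Alice believes that two-boxing maximizes her expected utility" attributes to Alice a mistaken belief about a well-defined calculation, just like any other false mathematical belief.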

Comment author: TheOtherDave 03 October 2012 02:10:21PM 2 points

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

Well, sure, a utilitarian can always "rephrase" should-statements that way; to a utilitarian, what "X should Y" means is "Y maximizes X's expected utility." That doesn't make "X should Y" not a normative statement; it just means that utilitarian normative statements are also objective statements about reality.

Conversely, I'm not sure a deontologist would agree that you can rephrase one as the other... that is, a deontologist might coherently (and incorrectly) say "Yes, two-boxing maximizes expected utility, but you still shouldn't do it."

Comment author: V_V 03 October 2012 02:57:41PM *  0 points

I think you are conflating two different types of "should" statements: moral injunctions and decision-theoretical injunctions.

The statement "You should two-box in Newcomb's problem" is normally interpreted as a decision-theoretical injunction. As such, it can be rephrased epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

But you could also interpret the statement "You should two-box in Newcomb's problem" as the moral injunction "It is morally right for you to two-box in Newcomb's problem". Moral injunctions can't be rephrased epistemically, at least unless you assume a priori that there exist some external moral truths that can't be further rephrased.

The utilitarian in your comment is doing just that. His actual rephrasing is "If you two-box in Newcomb's problem then you will maximize the expected cumulative utility of the universe". This assumes that:

  • This cumulative utility of the universe exists as an external entity.

  • The statement "It is morally right for you to maximize the expected cumulative utility of the universe" exists as an external moral truth.

Comment author: Wei_Dai 03 October 2012 01:03:44PM 1 point

I think Yudkowsky is a Platonist, and I'm not sure he has a consistent position on modal realism, since when arguing about morality he seemed to espouse it: see his comment here.

Thanks for the link. That does seem inconsistent.

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

This comment should help you understand why I disagree. Does it make sense?

Comment author: V_V 03 October 2012 03:01:02PM 2 points

This comment should help you understand why I disagree. Does it make sense?

I don't claim that all injunctions can be rephrased as epistemic statements. I claim that decision-theoretic injunctions can be rephrased as epistemic statements. Moral injunctions can't.

Comment author: Wei_Dai 03 October 2012 09:30:00PM *  0 points

I don't claim that all injunctions can be rephrased as epistemic statements. I claim that decision-theoretic injunctions can be rephrased as epistemic statements. Moral injunctions can't.

I'm confused by your reply because the comment I linked to tried to explain why I don't think "You should two-box in Newcomb's problem" can be rephrased as an epistemic statement (as you claimed earlier). Did you read it, and if so, can you explain why you disagree with its reasoning?

ETA: Sorry, I didn't notice your comment in the other subthread where you gave your definitions of "decision-theoretic" vs "moral" injunctions. Your reply makes more sense with those definitions in mind, but I think it shows that the comment I linked to didn't get my point across. So I'll try it again here. You said earlier:

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

A causal decision theorist (C) and an evidential decision theorist (E) have different definitions of "maximize your expected utility", and so when C says to E "you should two-box in Newcomb's problem" he is not just saying "If you two-box in Newcomb's problem then you will maximize your expected utility according to the CDT formula" since E wouldn't care about that. So my point is that "you should two-box in Newcomb's problem" is usually not a "decision-theoretical injunction" in your sense of the phrase, but rather a normative statement as I claimed.

Comment author: V_V 04 October 2012 12:07:59PM *  0 points

A causal decision theorist (C) and an evidential decision theorist (E) have different definitions of "maximize your expected utility", and so when C says to E "you should two-box in Newcomb's problem" he is not just saying "If you two-box in Newcomb's problem then you will maximize your expected utility according to the CDT formula" since E wouldn't care about that. So my point is that "you should two-box in Newcomb's problem" is usually not a "decision-theoretical injunction" in your sense of the phrase, but rather a normative statement as I claimed.

I was implicitly assuming that we were talking in the context of EDT.

In general, you can say "Two-boxing in Newcomb's problem is the optimal action for you", where the definition of "optimal action" depends on the decision theory you use.

If you use EDT, then "optimal action" means "maximizes expected utility", hence the statement above is false (that is, it is inconsistent with the axioms of EDT and Newcomb's problem).

If you use CDT, then "optimal action" means "maximizes expected utility under a causality assumption". Hence the statement above is technically true, although not very useful, since the axioms that define Newcomb's problem specifically violate the causality assumption.
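As a rough sketch of the difference (my own illustration: the usual payoffs, a 99%-accurate predictor, and an arbitrary 0.5 prior for the CDT side), under EDT the probability that the opaque box is full depends evidentially on the action, while under CDT it is treated as a fact already settled before the choice:

```python
# Hedged sketch contrasting the two readings of "optimal action" on Newcomb's
# problem. Payoffs, accuracy and the CDT prior are illustrative placeholders.

BOX_A, BOX_B = 1_000_000, 1_000
ACCURACY = 0.99        # predictor's reliability
P_BOX_A_FULL = 0.5     # CDT: probability that box A was filled, fixed before you act

def edt_value(action: str) -> float:
    """EDT: the action is evidence about whether box A is full."""
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * BOX_A + (BOX_B if action == "two-box" else 0)

def cdt_value(action: str) -> float:
    """CDT: whether box A is full is causally independent of the action."""
    return P_BOX_A_FULL * BOX_A + (BOX_B if action == "two-box" else 0)

for action in ("one-box", "two-box"):
    print(action, round(edt_value(action)), round(cdt_value(action)))

# EDT ranks one-boxing higher (about 990,000 vs 11,000), so "two-boxing is
# optimal" comes out false under EDT; CDT ranks two-boxing higher by exactly
# BOX_B whatever the prior, so the same statement comes out technically true.
```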

So, which decision theory should you use? An answer like "you should use the decision theory that determines the optimal action without any assumption that violates the problem constraints" seems irreducible to an epistemic statement. But is that actually correct?

If you are studying actual agents, then the point is moot, since these agents already have a decision theory (in practice it will be an approximation of EDT or CDT, or something else). But what if you want to improve yourself, or to build an artificial agent?

Then you evaluate the new decision theory according to the decision theory that you already have. Assuming that your current decision theory can in principle be described epistemically, you can then say, for instance: "A decision theory that determines the optimal action without any assumption that violates the problem constraints is optimal for me".

If you want to suggest a decision theory to somebody who is not you, you can say: "A decision theory that determines the optimal action without any assumption that violates the problem constraints is optimal for you", or, more properly but less politely: "Your using a decision theory that determines the optimal action without any assumption that violates the problem constraints is optimal for me".
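To make that last move concrete, here is a minimal sketch of evaluating candidate decision theories with the decision theory you already have; the candidate names, the single Newcomb-style problem, and the EDT-style evaluator are all placeholders of my own, not a general recipe:

```python
# Hedged sketch: score each candidate decision theory by the expected utility
# that my *current* decision theory (EDT-style here, as assumed above) assigns
# to the action the candidate recommends on Newcomb's problem. All names and
# numbers are illustrative placeholders.

BOX_A, BOX_B, ACCURACY = 1_000_000, 1_000, 0.99

def my_current_value(action: str) -> float:
    """The evaluator's own notion of 'optimal' (EDT-style conditioning)."""
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * BOX_A + (BOX_B if action == "two-box" else 0)

# Each candidate theory is represented here only by the action it recommends.
candidates = {
    "EDT-like candidate": "one-box",
    "CDT-like candidate": "two-box",
}

scores = {name: my_current_value(action) for name, action in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("Adopt:", best)   # the one-boxing candidate wins, for this evaluator
```

An evaluator whose current theory were CDT-like would score the candidates differently, which is just the earlier point that "optimal" is relative to the decision theory doing the evaluating.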

Comment author: Wei_Dai 04 October 2012 11:12:26PM *  2 points

Then you evaluate the new decision theory according to the decision theory that you already have.

I had similar thoughts before, but eventually changed my mind. Unfortunately it's hard to convince people that their solution to some problem isn't entirely satisfactory without having a better solution at hand. (For example, this post of mine pointing out a problem with using probability theory to deal with indexical uncertainty sat at 0 points for months before I made my UDT post which suggested a different solution.) So instead of trying harder to convince people now, I think I will instead try harder to figure out a better answer by myself (and others who already share my views).