davidpearce comments on Decision Theory FAQ - Less Wrong

Post author: lukeprog, 28 February 2013 02:15PM




Comment author: pragmatist, 11 March 2013 05:36:45PM, 5 points

Elsewhere, we also refer to epistemic rationality, which is believing true things. In neither case do we say anything about what you should want.

This begs the question against moral realism. If it is in fact true that harming cows is bad, then epistemic rationality demands that we believe that harming cows is bad. Of course, saying that you should believe harming cows is morally wrong is different from saying that you shouldn't choose to harm cows, but the inference from one to the other is pretty plausible. It seems fairly uncontroversial that if one believes that action X is bad, and it is in fact true that action X is bad, then one should not perform action X (ceteris paribus).

I don't agree with davidpearce's framing (that rationality demands that one give equal weight to all perspectives), but I also don't agree with the claim that rationality does not tell us anything about what we should want. Perhaps instrumental rationality doesn't, but epistemic rationality does.

Comment author: davidpearce, 11 March 2013 09:43:20PM, 1 point

pragmatist, apologies if I gave the impression that by "impartially gives weight" I meant impartially gives equal weight. Thus the preferences of a cow or a pig or a human trump the conflicting interests of a less sentient Anopheles mosquito or a locust every time. But on the conception of rational agency I'm canvassing, it is neither epistemically nor instrumentally rational for an ideal agent to disregard a stronger preference simply because that stronger preference is entertained by a member of another species or ethnic group. Nor is it epistemically or instrumentally rational for an ideal agent to disregard a conflicting stronger preference simply because her comparatively weaker preference looms larger in her own imagination. So on this analysis, Jane is not doing what "an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose."

Comment author: nshepperd, 11 March 2013 09:58:34PM, 3 points

Rationality can be used toward any goal, including goals that don't care about anyone's preferences. For example, there's nothing in the math of utility maximisation that requires averaging over other agents' preferences. (Note: do not confuse utility maximisation with utilitarianism; they are very different things, the former being a decision theory, the latter a specific moral philosophy.)
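To make the point concrete, here is a minimal sketch of expected utility maximisation (all names and the toy paperclip example are illustrative, not from the original discussion): the agent simply picks the action with the highest probability-weighted sum of utilities, and nothing in the formula refers to any other agent's preferences.

```python
# Sketch of expected utility maximisation. The utility function is
# entirely up to the agent; here it values only paperclips, with no
# term for anyone else's preferences.

def expected_utility(action, outcomes, prob, utility):
    """Sum over outcomes of P(outcome | action) * U(outcome)."""
    return sum(prob(o, action) * utility(o) for o in outcomes)

def best_action(actions, outcomes, prob, utility):
    """Pick the action maximising expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, prob, utility))

# Toy example: outcomes, a self-regarding utility function, and a
# conditional probability table P(outcome | action).
outcomes = ["0 clips", "1 clip", "2 clips"]
utility = {"0 clips": 0, "1 clip": 1, "2 clips": 2}.get

def prob(outcome, action):
    table = {
        "idle":  {"0 clips": 1.0, "1 clip": 0.0, "2 clips": 0.0},
        "build": {"0 clips": 0.1, "1 clip": 0.3, "2 clips": 0.6},
    }
    return table[action][outcome]

print(best_action(["idle", "build"], outcomes, prob, utility))  # build
```

The decision rule is the same whether the utility function counts others' welfare or ignores it entirely, which is exactly why utility maximisation (a decision theory) should not be conflated with utilitarianism (a moral philosophy).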

Comment author: davidpearce, 11 March 2013 11:50:30PM, 1 point

nshepperd, utilitarianism conceived as a theory of value is not always carefully distinguished from utilitarianism - especially rule-utilitarianism - conceived as a decision procedure. This distinction is nicely brought out in the BPhil thesis of FHI's Toby Ord, "Consequentialism and Decision Procedures": http://www.amirrorclear.net/academic/papers/decision-procedures.pdf Toby takes a global utilitarian consequentialist approach to the question "How should I decide what to do?" - a subtly different question from "What should I do?"