diegocaleiro comments on Decision Theory FAQ - Less Wrong

52 Post author: lukeprog 28 February 2013 02:15PM


Comment author: Eliezer_Yudkowsky 01 March 2013 08:54:37AM 15 points [-]

David, we're not defining rationality to exclude other-oriented desires. We're just not including that exact morality into the word "rational". Instrumental rationality links up a utility function to a set of actions. You hand over a utility function over outcomes, epistemic rationality maps the world and then instrumental rationality hands back a set of actions whose expected score is highest. So long as it can build a well-calibrated, highly discriminative model of the world and then navigate to a compactly specified set of outcomes, we call it rational, even if the optimization target is "produce as many paperclips as possible". Adding a further constraint to the utility function that it be perfectly altruistic will greatly reduce the set of hypothetical agents we're talking about, but it doesn't change reality (obviously) nor yield any interesting changes in terms of how the agent investigates hypotheses, the fact that the agent will not fall prey to the sunk cost fallacy if it is rational, and so on. Perfectly altruistic rational agents will use mostly the same cognitive strategies as any other sort of rational agent, they'll just be optimizing for one particular thing.
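The "hand over a utility function, get back the action with the highest expected score" pipeline described above can be sketched in a few lines. This is a toy illustration with invented actions and outcomes (nothing from the FAQ itself): the same argmax-over-expected-utility machinery serves a paperclip maximizer and an altruist alike; only the utility function differs.

```python
# A world model maps an action to a list of (probability, outcome) pairs.
def expected_utility(action, utility, world_model):
    return sum(p * utility(o) for p, o in world_model(action))

def best_action(actions, utility, world_model):
    # Instrumental rationality: same cognitive strategy for any utility function.
    return max(actions, key=lambda a: expected_utility(a, utility, world_model))

# Hypothetical toy world with two actions and made-up outcomes.
world = {
    "build_factory": [(0.5, {"paperclips": 10, "smiles": 0}),
                      (0.5, {"paperclips": 0,  "smiles": 0})],
    "host_party":    [(1.0, {"paperclips": 1,  "smiles": 8})],
}
model = lambda a: world[a]

paperclipper = lambda o: o["paperclips"]   # values only paperclips
altruist     = lambda o: o["smiles"]       # values only others' smiles

print(best_action(world, paperclipper, model))  # build_factory (EU 5 vs 1)
print(best_action(world, altruist, model))      # host_party (EU 8 vs 0)
```

Both agents run identical epistemic and decision machinery; swapping the optimization target changes which action wins, not how the calculation is done.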

Jane doesn't have any false epistemic beliefs about being special. She accurately models the world, and then accurately calculates and outputs "the strategy that leads to the highest expected number of burgers eaten by Jane" instead of "the strategy that has the highest expected fulfillment of all thinking beings' values".

Besides, everyone knows that truly rational entities only fulfill other beings' values if they can do so using friendship and ponies.

Comment author: diegocaleiro 01 March 2013 04:02:27PM *  1 point [-]

That did not address David's True Rejection. An Austere Charitable Metaethicist could do better.

Comment author: wedrifid 01 March 2013 07:35:17PM 1 point [-]

That did not address David's True Rejection. An Austere Charitable Metaethicist could do better.

The grandparent is a superb reply and gave exactly the information needed in a graceful and elegant manner.

Comment author: diegocaleiro 02 March 2013 05:53:02AM 1 point [-]

Indeed it does. Not. Here is a condition under which I think David would be satisfied: if people would use vegetables in examples, as a common courtesy to vegetarians, in the exact same sense that "she" has been largely adopted to combat natural drives towards "he"-ness. Note how Luke's agents and examples are overwhelmingly female. Not a requirement, just a courtesy.

And I don't say that as a vegetarian, because I'm not one.

Comment author: davidpearce 07 March 2013 08:04:26AM 0 points [-]

Indeed. What is the Borg's version of the Decision Theory FAQ? This is not to say that rational agents should literally aim to emulate the Borg. Rather, our conception of epistemic and instrumental rationality will improve if and when technology delivers ubiquitous access to each other's perspectives and preferences. And by "our" I mean, inclusively, that of all subjects of experience.