Nisan comments on Metaphilosophical Mysteries - Less Wrong

35 Post author: Wei_Dai 27 July 2010 12:55AM

Comment author: Nisan 27 July 2010 02:04:29PM 1 point [-]

Doesn't rationality require identifying one's goals, thereby inheriting the full complexity of one's values?

Seconded. We can certainly imagine an amoral agent that responds to rational argument — say, a paperclipper that can be convinced to one-box on Newcomb's problem. This gives rise to the illusion that rationality is somehow universal.

But in what sense is an EU-maximizer with a TM-based universal prior "wrong"? If it loses money when betting on a unary encoding of the Busy Beaver sequence, maybe we should conclude that making money isn't its goal.
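The losing-money point rests on a general fact: for any computable predictor there is a sequence on which it does badly, and the Busy Beaver sequence (being uncomputable and faster-growing than any computable function) plays that adversarial role against a TM-based universal prior. A toy sketch of the underlying diagonalization — with a hypothetical Laplace-rule predictor standing in for the agent's prior, since the actual universal prior is not computable — looks like this:

```python
from math import log2

def predictor(prefix):
    """A hypothetical computable predictor: P(next bit == 1 | prefix).
    Here, a simple Laplace rule-of-succession estimate."""
    return (sum(prefix) + 1) / (len(prefix) + 2)

def adversarial_sequence(predict, n):
    """Diagonalize: at each step, emit the bit the predictor
    considers less likely (ties go to 0)."""
    seq = []
    for _ in range(n):
        seq.append(1 if predict(seq) < 0.5 else 0)
    return seq

def log_loss(predict, seq):
    """Cumulative log-loss, i.e. how much the predictor 'pays'
    betting its stated probabilities against this sequence."""
    loss = 0.0
    for i, bit in enumerate(seq):
        p1 = predict(seq[:i])
        loss += -log2(p1 if bit == 1 else 1 - p1)
    return loss

seq = adversarial_sequence(predictor, 100)
# Each chosen bit had probability <= 1/2, so the predictor pays
# at least one bit of log-loss per symbol on its diagonal sequence:
assert log_loss(predictor, seq) >= 100
```

The Busy Beaver bettor's situation is analogous: because BB eventually outgrows every computable function, no computable betting strategy can keep up, and the agent's losses don't by themselves settle whether it was "wrong" or simply pursuing a different goal.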

If someone knows a way to extract goals from an arbitrary agent in a way that might reveal the agent to be irrational, I would like to hear it.

Comment author: Randaly 28 July 2010 08:00:34PM 1 point [-]

For instrumental rationality, yes; for epistemic rationality, no. If the EU-maximizer loses money because it believes the encoding will be different from what it actually is, then it is epistemically irrational.