Nisan comments on Metaphilosophical Mysteries - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Seconded. We can certainly imagine an amoral agent that responds to rational argument — say, a paperclipper that can be convinced to one-box on Newcomb's problem. This gives rise to the illusion that rationality is somehow universal.
But in what sense is an EU-maximizer with a TM-based universal prior "wrong"? If it loses money when betting on a unary encoding of the Busy Beaver sequence, maybe we should conclude that making money isn't its goal.
If someone knows a way to extract goals from an arbitrary agent in a way that might reveal the agent to be irrational, I would like to hear it.
For instrumental rationality, yes; for epistemic rationality, no. If the reason the EU-maximizer loses money is that it believes the encoding will be different from what it actually is, then it is epistemically irrational.