Nisan comments on Metaphilosophical Mysteries - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What is your definition of philosophy for this article?
Why is it a failing of a highly intelligent mind that it can't "do philosophy"?
Why would a Bayesian EU maximizer necessarily be unable to tell that a computable prior is wrong?
When is Bayesian updating the wrong thing to do?
What should I have learned from your link to Updateless Decision Theory that causes me to suspect that EU maximizing with Bayesian updating on a universal prior is wrong?
Doesn't rationality require identifying one's goals, and therefore inherit the full complexity of one's values?
What would count as an example of a metaphilosophical insight?
Seconded. We can certainly imagine an amoral agent that responds to rational argument — say, a paperclipper that can be convinced to one-box on Newcomb's problem. This gives rise to the illusion that rationality is somehow universal.
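To see why even a paperclipper might be convinced to one-box, here is a minimal sketch of the expected-value comparison, assuming the standard Newcomb payoffs ($1M in the opaque box iff the predictor predicts one-boxing, $1k always in the transparent box) and a hypothetical predictor accuracy `p`:

```python
# Newcomb's problem: expected payoff of one-boxing vs. two-boxing,
# given a predictor that is correct with probability p (illustrative).

def expected_value(one_box: bool, p: float,
                   big: int = 1_000_000, small: int = 1_000) -> float:
    """Expected payoff given the predictor is right with probability p."""
    if one_box:
        # Predictor right (prob p): opaque box contains the $1M.
        # Predictor wrong (prob 1-p): opaque box is empty.
        return p * big
    else:
        # Predictor right (prob p): opaque box empty, take small box only.
        # Predictor wrong (prob 1-p): both boxes are full.
        return p * small + (1 - p) * (big + small)

p = 0.9
print(expected_value(True, p))   # one-boxing
print(expected_value(False, p))  # two-boxing
```

For any accuracy much better than chance, one-boxing dominates, which is why an agent that cares only about paperclips (or money) can still be moved by the argument.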
But in what sense is an EU-maximizer with a TM-based universal prior "wrong"? If it loses money when betting on a unary encoding of the Busy Beaver sequence, maybe we should conclude that making money isn't its goal.
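The sense in which a computable predictor can be forced to lose is illustrated by diagonalization: against any fixed computable predictor there is a sequence on which it does no better than chance (the Busy Beaver sequence plays an analogous role against a universal computable prior, since it is uncomputable but simply describable). A toy sketch, assuming a Laplace-rule predictor and log-loss scoring (equivalent to money lost under fair-odds betting):

```python
# Diagonalization: any fixed computable predictor can be made to lose.
# Toy predictor: Laplace's rule of succession on a binary sequence.
import math

def laplace_prob_one(history: list[int]) -> float:
    """P(next bit = 1) under Laplace's rule of succession."""
    return (sum(history) + 1) / (len(history) + 2)

def adversarial_sequence(n: int) -> list[int]:
    """Always emit whichever bit the predictor considers less likely."""
    seq: list[int] = []
    for _ in range(n):
        p1 = laplace_prob_one(seq)
        seq.append(1 if p1 < 0.5 else 0)
    return seq

def log_loss(seq: list[int]) -> float:
    """Total log-loss (in bits) the predictor suffers on seq."""
    total, hist = 0.0, []
    for bit in seq:
        p1 = laplace_prob_one(hist)
        p = p1 if bit == 1 else 1 - p1
        total += -math.log2(p)
        hist.append(bit)
    return total

seq = adversarial_sequence(100)
# Each adversarial bit has predicted probability <= 0.5, so the loss
# is at least 1 bit per symbol: no better than guessing at random.
print(log_loss(seq) / len(seq))
```

Whether "losing money" on such a sequence makes the agent *wrong*, rather than merely revealing what its goals are, is exactly the question the comment raises.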
If someone knows a way to extract goals from an arbitrary agent in a way that might reveal the agent to be irrational, I would like to hear it.
For instrumental rationality, yes; for epistemic rationality, no. If the EU-maximizer loses money because it believes the encoding will be something other than what it actually is, then it is epistemically irrational.