Perplexed comments on Another Argument Against Eliezer's Meta-Ethics - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (35)
Could you spell out those problems?
And in any case, why is IE trying to determine whether "Eliezer should save person X" is true? Shouldn't he limit himself to determining whether Eliezer should save person X?
Compare: If Joe tries to determine whether "256 × 2 = 512" is true, he will notice that it is true just in case a certain computation he could perform yields 512, and that if he performs that computation, he will come to believe the quoted statement is true. This leads to Löb-style confusion.
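A minimal sketch of the distinction the comment is drawing, between performing the computation and evaluating the quoted claim about it. Nothing here comes from the original post; the function names and the use of `eval` on a quoted string are just one way to illustrate use versus mention.

```python
def compute() -> int:
    # The computation Joe could perform directly.
    return 256 * 2

# Determining whether 256 x 2 equals 512 (use): just do the computation.
direct = compute() == 512

# Determining whether the *quoted statement* "256 * 2 == 512" is true
# (mention): treat the claim as an object and evaluate it.
claim = "256 * 2 == 512"
quoted = eval(claim)

# Both routes agree: performing the computation and evaluating the
# quotation yield the same verdict.
print(direct, quoted)  # True True
```

The point is that in the arithmetic case the two routes trivially coincide, which is why no confusion arises for Joe; the worry in the post is whether they still coincide when the agent being quoted is defined in terms of its own verdicts.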
The confusion is dissolved because anyone can carry out the computations that operationalize IE's ethical definitions, provided IE can communicate his values to the agent doing the computing.
Or is this one of those situations where the self-referentiality of UDT-like decision theories gets us into trouble?
ETA: Ok, I just noticed that IE is by definition the end result of a dynamic process of reflective updating starting from the Base Eliezer. So IE is a fixpoint defined in terms of itself. Maybe Löb-style issues really do arise here.
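The fixpoint structure described above can be sketched as iterating an update rule until it stabilizes. This is purely illustrative: the `update` function below is a hypothetical stand-in for one step of reflective updating (a toy rule that halves the distance to an arbitrary stable point), not anything specified in the original discussion.

```python
def update(values: float) -> float:
    # Toy stand-in for one step of reflective updating: move the current
    # values halfway toward a reflectively stable point (arbitrarily 1.0).
    return (values + 1.0) / 2.0

def reflective_fixpoint(start: float, tol: float = 1e-9) -> float:
    # Iterate the update rule until it no longer changes the values:
    # the result v satisfies update(v) ~= v, i.e. it is defined in
    # terms of its own output.
    v = start
    while abs(update(v) - v) > tol:
        v = update(v)
    return v

# "Base Eliezer" plays the role of the starting point of the dynamic.
ie = reflective_fixpoint(start=0.0)
```

Whether such a fixpoint is well-defined (or unique) for an agent reasoning about its own conclusions is exactly where the Löb-style worry bites; the toy rule here converges only because it was chosen to be a contraction.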