Zvi comments on Two-Tier Rationalism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Seems that way. Disclaimer: IHAPMOE (I have a poor model of Eliezer).
See, for example, my comment on why trying to maximize happiness should increase your utility more than trying to maximize utility directly would. If happiness is the derivative of utility, then maximizing happiness over a finite time period maximizes the increase in utility over that period. If you repeatedly maximize your happiness over timespans that are small relative to your lifespan, then at the end of your life you'll have attained a higher utility than someone who tried to maximize utility over those same periods.
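The argument is essentially the fundamental theorem of calculus. Writing $U(t)$ for utility and $h(t)$ for happiness, and taking the premise $h = dU/dt$ at face value:

```latex
\int_{t_0}^{t_1} h(t)\,dt \;=\; \int_{t_0}^{t_1} \frac{dU}{dt}\,dt \;=\; U(t_1) - U(t_0)
```

So maximizing total happiness over $[t_0, t_1]$ is the same as maximizing the utility gained over that interval, and chaining such intervals end to end maximizes the total gain over a lifetime. (This holds exactly only under the stated premise that happiness really is the time derivative of utility.)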
This variant on Kant's maxim seems still to be universally adhered to by moralists; yet it's wrong. I know that's a strong claim.
The problem is that everybody has different reasoning abilities. A universal moral code, if we demand that it satisfy the publicity condition, must be simultaneously optimal for EY and for chimpanzees.
If you admit that it may be better for EY to adopt a slightly more sophisticated moral code than chimpanzees do, then satisfying the publicity condition implies suboptimality.
Doesn't the publicity condition allow you to make statements like "If you have the skills to do A, then do A; otherwise do B"? Similarly, to handle the case where everyone is just like you, a code can alter itself in the case that publicity cares about: "If X percent of agents are using this code, do Y; otherwise do Z." It seems sensible to alter your behavior in both cases, even if it feels like dodging the condition.
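A minimal sketch of the idea: a single publicly stated rule can still condition on an agent's abilities and on how widely the rule itself is adopted. All names and thresholds here are hypothetical illustrations, not anything from the original discussion.

```python
def act(agent_skill: float, adoption_fraction: float,
        skill_threshold: float = 0.8,
        adoption_threshold: float = 0.5) -> tuple:
    """One universal, public conditional code.

    Every agent follows the same rule, but the prescribed action
    depends on the agent's own skill and on what fraction of agents
    are following this very code (the case publicity cares about).
    """
    # "If you have the skills to do A, then do A; otherwise do B."
    plan = "A" if agent_skill >= skill_threshold else "B"

    # "If X percent of agents are using this code, do Y; otherwise do Z."
    coordination = "Y" if adoption_fraction >= adoption_threshold else "Z"

    return (plan, coordination)


# A skilled agent in a world of widespread adoption vs. an unskilled
# agent in a world of low adoption get different prescriptions from
# the same public rule.
print(act(0.9, 0.6))  # ('A', 'Y')
print(act(0.1, 0.2))  # ('B', 'Z')
```

The point of the sketch is that publicity constrains the *rule*, not the *action*: the rule is identical for every agent, yet its output varies with the agent's situation, which is the dodge the comment describes.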