KnaveOfAllTrades comments on Six Plausible Meta-Ethical Alternatives - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks for posting this! Is this list drawn up hodge-podge, or is there some underlying process that generated it? How likely do you think it is to be exhaustive?
It looks like your list is somewhat methodical, arising from combinations of metaethical desiderata and varyingly optimistic projections of the project of value loading?
Are you able to put probabilities to the possibilities?
I'm very confident that no decision theory does better than every other in every situation, insofar as the decision theories are actually implemented. For any implementing agent, I can put that agent in the adversarial world where Omega instantly destroys any agent implementing that decision theory, assuming it has an instantiation in some world where that makes sense (e.g. where 'instantly' and 'destroy' make sense). This is what we would expect on grounds of Created Already In Motion and general No Free Lunch principles.
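The adversarial-Omega argument can be sketched in a few lines of Python (all names here are hypothetical stand-ins, not anything from the post): given any fixed policy, an adversary that can inspect which policy an agent runs can construct an environment in which exactly that policy does worst.

```python
# Illustrative sketch: for ANY fixed decision theory (policy), an
# adversary that can inspect the agent's policy can construct an
# environment where that policy scores worst.

def adversarial_env(targeted_policy):
    """Omega-style environment: 'destroy' (score 0) any agent running
    targeted_policy; every other agent survives (score 1)."""
    def env(agent_policy):
        return 0 if agent_policy is targeted_policy else 1
    return env

# Two stand-in decision theories; their internals don't matter.
def theory_a(observation):
    return "one-box"

def theory_b(observation):
    return "two-box"

env = adversarial_env(theory_a)
print(env(theory_a))  # 0 -- the targeted theory is destroyed
print(env(theory_b))  # 1 -- any other theory survives
```

Since `adversarial_env` works for whichever policy you hand it, no single theory can be best in every environment, which is the No Free Lunch point.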
The only way I currently see to resolve this is something along the lines of having a measure of performance over instantiations of the decision theory, and some scoring rule over that measure over instantiations. Might be other ways, though.
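One way to read "a measure over instantiations plus a scoring rule" is as a weighted average of a theory's payoff across environments, rather than its worst-case payoff in a single adversarial world. A minimal sketch of that idea (the environments, weights, and scoring rule here are all invented for illustration):

```python
# Hypothetical sketch: score a decision theory by measure-weighted
# performance across environments, instead of by its worst case in
# one adversarial world.

def expected_score(policy_name, environments, measure):
    """Weighted average of a policy's payoff over environments.

    environments: maps environment name -> payoff function of policy_name
    measure:      maps environment name -> probability weight (sums to 1)
    """
    return sum(measure[name] * env(policy_name)
               for name, env in environments.items())

envs = {
    "friendly":    lambda p: 1,                     # rewards every agent
    "adversarial": lambda p: 0 if p == "A" else 1,  # destroys theory "A"
}
measure = {"friendly": 0.9, "adversarial": 0.1}

print(expected_score("A", envs, measure))  # 0.9
print(expected_score("B", envs, measure))  # 1.0
```

Under this scoring rule theory "A" is merely penalized, not disqualified, by the existence of one world that destroys it; the comparison between theories then turns on the choice of measure.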
Just to check, you mean
and not
right?
Eliezer uses "should" in an idiosyncratic way, which he thought (and maybe still thinks) would prevent a particular kind of confusion.
On this usage of "should", Eliezer would probably* endorse something very close to (B). However, the "should" is with respect to the moral values towards which human CEV points (in the actual world, not in some counterfactual or future world in which the human CEV is different). These values make up the M that is asserted to exist in (B). And, as far as M is concerned, it would probably be best if all intelligent agents observed M.
* I'm hedging a little bit because maybe, under some perverse circumstances, it would be moral for an agent to be unmoved by moral facts. To give a fictional example, apparently God was in such a circumstance when he hardened the heart of Pharaoh.