Eugine_Nier comments on Our Phyg Is Not Exclusive Enough - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (513)
The Metaethics sequence is a bit of a mess, but the point it made is important, and it doesn't seem like it's just some weird opinion of Eliezer's.
After I read it I was like, "Oh, OK. Morality is easy. Just do the right thing. Where 'right' is some incredibly complex set of preferences that are only represented implicitly in physical human brains. And it's OK that it's not supernatural or 'objective', and we don't have to 'justify' it to an ideal philosophy student of perfect emptiness." Fake Utility Functions and the recursive-justification stuff helped.
Maybe there's something wrong with Eliezer's metaethics, but I haven't seen anyone point it out, and I have no reason to suspect it. Most of the material that contradicts it consists of obvious mistakes from not having read and understood the sequences, not an enlightened counter-analysis.
Try actually applying it to some real life situations and you'll quickly discover the problems with it.
Such as?
Well, for starters, determining whether something is a preference or a bias is rather arbitrary in practice.
I struggled with that myself, but then figured out a rather nice quantitative solution.
Eliezer's stuff doesn't say much about that topic, but that doesn't mean it fails at it.
I don't think your solution actually resolves things, since you still need to figure out what weights to assign to each of your biases/values.
You mean that it's not something that I could use to write an explicit utility function? Of course.
Beyond that, whatever weight all my various concerns have is handled by built-in algorithms. I just have to do the right thing.
There's a difference between a metaethics and an ethical theory.
The Metaethics sequence is supposed to dissolve the false dichotomy "either there's a metaphysical, human-independent Source Of Morality, or else the nihilists/moral relativists are right". It's not supposed to directly solve "So, should we push a fat man off the bridge to stop a runaway trolley before it runs over five people?"
For the second question, we'd want to add an Ethics Sequence (in my opinion, Yvain's Consequentialism FAQ lays some good groundwork for one).