anonym comments on The Meaning of Right - Less Wrong
"So that we can regard our present values, as an approximation to the ideal
morality that we would have if we heard all the arguments, to whatever extent
such an extrapolation is coherent."
This seems to be in the right ballpark, but the answer is dissatisfying
because I am by no means persuaded that the extrapolation would be coherent
at all (even if you consider only one person). Why would it be? It's
god-shatter, not Peano Arithmetic.
There could be nasty butterfly effects: the order in which you were exposed
to all the arguments, the mood you were in upon hearing them, and so forth
could all influence which of the arguments you came to trust.
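To make the order-dependence worry concrete, here is a toy illustration (my own construction, not anything proposed in the post): an agent that accepts each argument unless it conflicts with one it already trusts, where the conflict relation is made up for the example. The same three arguments, presented in different orders, leave the agent trusting different sets.

    from itertools import permutations

    # Hypothetical rebuttal relation: arguments A and B undermine each other.
    conflicts = {("A", "B"), ("B", "A")}

    def extrapolate(argument_order):
        """Accept each argument in turn unless it conflicts with one already trusted."""
        trusted = []
        for arg in argument_order:
            if all((arg, t) not in conflicts for t in trusted):
                trusted.append(arg)
        return frozenset(trusted)

    # Every presentation order of the same three arguments:
    outcomes = {extrapolate(order) for order in permutations(["A", "B", "C"])}
    print(outcomes)  # two distinct end states: {A, C} and {B, C}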
On the other hand, viewing our values as an approximation to the ideal
morality that we would have if we heard all the arguments isn't looking good
either: correctly predicting a Bayesian port of a massive network of sentient
god-shatter looks to me like it would require a ton of moral judgments just
to do at all. The subsystems in our brains sometimes resolve things by
fighting (i.e., the feeling of being in a moral dilemma). Looking at the
result of the fight in your real physical brain doesn't help you make that
judgment if the outcome would have depended on whether or not you had just
had a cup of coffee.
So, what do we do if there is more than one basin of attraction that a moral
reasoner considering all the arguments can land in? What if there are no
basins?
This is a really insightful question, and it hasn't been answered convincingly in this thread. Does anybody know if it has been discussed more completely elsewhere?
One option would be to say that the FAI acts only where there is coherence. Another would be to specify a procedure for acting when there are multiple basins of attraction (perhaps by weighting the basins according to the proportion of starting points and orderings of arguments that lead to each basin, when that's possible, or by some other 'impartial' procedure).
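A minimal sketch of that weighting idea, under the assumption that the extrapolation can be run as a function of a starting point and an argument ordering (both `extrapolate` and the sampling scheme here are stand-ins, not a real extrapolation procedure):

    import random
    from collections import Counter

    def basin_weights(extrapolate, starting_points, arguments, samples=10_000, seed=0):
        """Estimate each basin's weight as the fraction of sampled
        (starting point, argument ordering) pairs that land in it."""
        rng = random.Random(seed)
        counts = Counter()
        for _ in range(samples):
            start = rng.choice(starting_points)
            order = rng.sample(arguments, k=len(arguments))  # a uniformly random ordering
            counts[extrapolate(start, order)] += 1  # end states must be hashable
        total = sum(counts.values())
        return {basin: n / total for basin, n in counts.items()}

Of course, whether uniform sampling over starting points and orderings counts as 'impartial' is itself one of the moral judgments the extrapolation was supposed to settle.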
But still, what if it turns out that most of the difficult extrapolations that we would really care about bounce around without ever settling down, or otherwise behave undesirably? No human being has ever done anything like the sorts of calculations that would be involved in a deep extrapolation, so our intuitions, based on the extrapolations that we have imagined and that seem to cohere (which all have paths shorter than [e.g.] 1000), might be unrepresentative of the sorts of extrapolations that an FAI would actually have to perform.
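One way to make "bounces around without ever settling down" operational, assuming the extrapolation can be modeled as repeatedly applying a step function to a (hashable) state, with the budget of 1000 echoing the path-length bound above:

    def classify_path(step, state, budget=1000):
        """Return ('fixed_point' | 'cycle' | 'budget_exhausted', steps_taken)."""
        seen = {state: 0}
        for i in range(1, budget + 1):
            state = step(state)
            if state in seen:
                # Returning to the immediately preceding state means convergence;
                # a longer return time means the path is looping without settling.
                kind = "fixed_point" if i - seen[state] == 1 else "cycle"
                return kind, i
            seen[state] = i
        return "budget_exhausted", budget

Even this toy bookkeeping only detects exact revisits within the budget; an extrapolation that wanders through ever-new states would look just like one that is slowly converging, which is part of the worry.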