RichardKennaway comments on Rationality Quotes Thread October 2015 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The application to Coherent Extrapolated Volition is left as an exercise.
An important part of the quote, it seems, is that it "may be" the most oppressive. There is only an issue if the goodness of these "omnipotent moral busybodies" differs so much from our own that we suffer under it; a goodness well-executed would perhaps never be called a tyranny at all.
But, from the inside, how do you tell the difference between doing actual good for others and being an omnipotent moral busybody?
"She's the sort of woman who lives for others - you can tell the others by their hunted expression."
Having a good factual model of a person would be necessary, and perhaps sufficient, for making that judgment favourably. When moving beyond making people more equal and free in their means, the model should be significantly better than their self-model. After that, the analyst would probably value the fact that the people thus observed care about self-determination in the territory (so no deceiving them into thinking they're self-determining), and act accordingly.
If people declare that analysing people well enough to know their moral values is itself being a busybody, it becomes harder. First I would note that using the internet without unusual data protection already means a (possibly begrudging) acceptance of such busybodies, up to a point. But in a more inconvenient world, consent or prevention of acute danger are as far as I would be willing to go in just a comment.
For a single person, yes, but building an accurate, factual model of even one person takes a significant investment of time. It becomes impractical when making decisions that affect even a mere hundred people.
How would you recommend scaling this up for large groups?
Sociology and psychology. Determine patterns in human desires and behaviour, and determine universal rules. Either that, or scale up your resources and get yourself an fAI.
This is a difficult problem, which very few people (if any) have ever solved properly. It's (probably) not insoluble, but it's also not easy...
Good luck.
Willingness to be critiqued? Self-examination and scrupulous quantities of doubt? This seems kind of like the wrong question, actually. "Actual good" is a fuzzy concept, if it even exists at all; a benevolent tyrant cares whether or not they are fulfilling their values (which, presumably, includes "provide others with things I think are good"). The question I would ask is how you tell the difference between actually achieving the manifestation of your values and only making a big show of it; presumably it's the latter that causes the problem (or at least the problem that you care about).
Then again, this comes from a moral non-realist who doesn't see a contradiction in having a moral clause saying it's good to enforce your morality on others to some extent, so your framework's results may vary.
Both of these will help. A lot.
True. One could go with "that which causes the greatest happiness", but one shouldn't be putting mood-controlling chemicals in the water. One could go with "that which best protects human life", but one shouldn't put humanity into a (very safe) zoo where nothing dangerous or interesting can ever happen to them.
This is therefore a major problem for someone actually trying to be a benevolent leader - how to go about it?
I'd suggest having some metric by which your values can be measured, and measuring it on a regular basis. For example, if you think that a benevolent leader would do best by reducing crime, then you can measure that by tracking crime statistics.
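The suggestion above can be sketched in a few lines: record periodic measurements of whatever metric stands in for your values, then check whether the trend is actually moving in the intended direction. This is a minimal illustration only; the data and the improvement criterion below are hypothetical.

```python
def trend(measurements):
    """Return the average period-over-period change in a metric."""
    deltas = [b - a for a, b in zip(measurements, measurements[1:])]
    return sum(deltas) / len(deltas)

# Hypothetical yearly crime rates per 100,000 people.
crime_rates = [520, 505, 490, 495, 470]

avg_change = trend(crime_rates)
improving = avg_change < 0  # for crime, lower is better

print(f"average yearly change: {avg_change:+.1f}")
print("improving" if improving else "not improving")
```

The point of making the criterion explicit (here, "lower is better") is that it forces you to commit in advance to what counts as success, rather than reinterpreting the numbers after the fact.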
If someone clearly wants you to stop bothering them, then stop bothering them.
"Quit bothering me, officer, I'm super busy here."
I think you entirely missed the point.
I don't think that helps AndHisHorse figure out the point.
As best I understood it, the point was that one's belief in one's own goodness is a source of drive - and if that goodness is false, the drive is misaimed, and the greater drive makes for greater ill consequences.
I think we agree that belief in one's own goodness can go quite wrong, and in cases like the one the quote describes, more wrong than an all-other-things-being-equal belief in one's own evil. Where we seem to disagree is on the inevitability of this failure mode: I acknowledge that it exists and that we should be cautious about it (although that may not have come across), whereas you seem to be implying that it is so prevalent that it would be better not to try to be a good overlord at all.
Am I understanding your position correctly?
Partially. Yes, I would assert that the failure mode you're talking about is prevalent (and point to a LOT of history to support that assertion; no one is evil in his own story). However, the main point in the quote we're talking about isn't quite that, I think. Instead, consider such concepts as "autonomy", "individuality", and "diversity".