Houshalter comments on Open thread, Oct. 19 - Oct. 25, 2015 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (198)
And they aren't obligated to convert to your values. Not everyone can have their way! Democratic voting is the fairest way of making a decision when people can't agree.
Yes I know it's No-True-Scotsman-y, but I really believe that a totally informed population would make very different decisions than an angry mob during a war and depression.
And even your examples are not convincing. Internment during wartime wasn't anywhere near the level of genocide. And the Nazi election was far from fair.
Well I did mention that in my first comment. This is more of an aesthetic thing to talk about. Once we have an AI we can just ask it how to solve this problem.
But I still think it's somewhat important to think about. Because if we go with your solution, we just get whatever the creator of the AI wants. He becomes supreme dictator of the universe forever, and forces his values on everyone for eternity. I would much rather have CEV or something like it.
That sounds like an article of faith.
"Fair" is a very... relative world. Calling something "fair" rarely means more than "I like / approve of it".
Ah. Well, speaking aesthetically, I find the elevation of mob rule to the status of ultimate moral principle ugly and repugnant. Y'know, de gustibus 'n'all...
I don't believe I proposed any.
Well see my edit to my first comment. I'll paste it here:
Do you agree that the fairest system would be to combine everyone's utility functions and maximize them? Of course somehow giving everyone equal weight to avoid utility monsters and other issues. I think these issues can be worked out.
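The "equal weight to avoid utility monsters" idea can be made concrete: rescale each person's utilities to a common range before summing, so no one agent's extreme payoffs swamp the aggregate. A minimal sketch (all names are hypothetical illustrations, not any actual CEV proposal):

```python
def normalize(utilities):
    """Rescale a dict of {outcome: utility} so its values span [0, 1]."""
    lo, hi = min(utilities.values()), max(utilities.values())
    span = (hi - lo) or 1.0  # avoid division by zero for an indifferent agent
    return {o: (u - lo) / span for o, u in utilities.items()}

def fairest_outcome(agents):
    """Pick the outcome maximizing the sum of equal-weight normalized utilities."""
    normalized = [normalize(a) for a in agents]
    outcomes = normalized[0].keys()
    return max(outcomes, key=lambda o: sum(n[o] for n in normalized))

# A "utility monster" claims enormous stakes in outcome B,
# but normalization caps its influence at the same weight as everyone else.
monster = {"A": 0, "B": 1_000_000}
alice = {"A": 10, "B": 0}
bob = {"A": 5, "B": 1}
print(fairest_outcome([monster, alice, bob]))  # prints "A"
```

This only blunts the simplest kind of utility monster; it does nothing about interpersonal comparability or tactical misreporting, which is exactly the objection raised below.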
If so, do you agree that voting systems are the best compromise when you can't just read people's utility functions and need to worry about tactical voting? Because that is basically what I was getting at.
If you don't agree to the above, then I don't understand your objection. CEV is about somehow finding the best compromise of all humans' utility functions. About combining them all. All I'm talking about is more concrete methods of doing that.
Anything you can do maximizes some combination of people's utility functions. So it is trivially true that the fairest system is a system which uses some combination of people's utility functions. Unless you can first describe how you are going to avoid utility monsters and other perils of utilitarianism, you really haven't said anything useful.
No, I do not. I do not think that humans have coherent utility functions. I don't think utilities of different people can be meaningfully combined, either.
Ah, yes, the famous business plan of the underpants gnomes...
No, I do not. They might be best given some definitions of "best" and given some conditionals, but they are not always best regardless of anything.
What makes you think it is possible?