Lumifer comments on Open thread, Oct. 19 - Oct. 25, 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That's not an argument.
If 80% of the population has a certain value, how can you say that value is wrong? Statistically you are far more likely to be in that 80%.
And the alternative isn't "you get to be dictator and have all your values maximized without compromise". It's "some random individual is picked from the population and gets his values maximized over everyone else's." Democracy of values is far preferable.
By functioning democracies? With a perfectly rational and informed population?
That's the important part of CEV, or at least my interpretation of it. The AI predicts what you would decide, if you knew all the relevant information and had plenty of time to think about it. I'm not suggesting a regular democracy where the voters barely know anything.
Indeed, it is not. The question mark at the end might indicate that it is a question.
I don't see any problems with this whatsoever. I am not obligated to convert to the values of the majority. What is the issue that you see?
There is a bit of a true Scotsman odor to this question :-) but let me point out my example upthread and ask you whether the Nazi party came to power democratically.
At this level you might as well cut to the chase and go straight to "I wish for you to do what I should wish for". No need to try to tell God... err.. AI how to do it.
And they aren't obligated to convert to your values. Not everyone can have their way! Democratic voting is the fairest way of making a decision when people can't agree.
Yes I know it's No-True-Scotsman-y, but I really believe that a totally informed population would make very different decisions than an angry mob during a war and depression.
And even your examples are not convincing. Internment during wartime wasn't anywhere near the level of genocide. And the Nazi election was far from fair, held as it was amid intimidation and suppression of the opposition.
Well I did mention that in my first comment. This is more of an aesthetic thing to talk about. Once we have an AI we can just ask it how to solve this problem.
But I still think it's somewhat important to think about. Because if we go with your solution, we just get whatever the creator of the AI wants. He becomes supreme dictator of the universe forever, and forces his values on everyone for eternity. I would much rather have CEV or something like it.
That sounds like an article of faith.
"Fair" is a very... relative world. Calling something "fair" rarely means more than "I like / approve of it".
Ah. Well, speaking aesthetically, I find the elevation of mob rule to be the ultimate moral principle ugly and repugnant. Y'know, de gustibus 'n'all...
I don't believe I proposed any.
Well see my edit to my first comment. I'll paste it here:
Do you agree that the fairest system would be to combine everyone's utility functions and maximize them? Of course, everyone would somehow be given equal weight, to avoid utility monsters and other issues. I think these issues can be worked out.
If so, do you agree that voting systems are the best compromise when you can't just read people's utility functions and have to worry about tactical voting? Because that is basically what I was getting at.
If you don't agree to the above, then I don't understand your objection. CEV is about somehow finding the best compromise of all humans' utility functions. About combining them all. All I'm talking about is more concrete methods of doing that.
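To make the "equal weight" idea concrete, here is a minimal sketch. It is my own illustration, not anything proposed in the thread: the names and the min-max normalization are assumptions, and normalization is just one crude way to stop a "utility monster" from dominating by reporting enormous numbers.

```python
def normalize(utilities):
    """Rescale one person's utilities over the outcomes to [0, 1],
    so every person's preferences carry equal weight."""
    lo, hi = min(utilities.values()), max(utilities.values())
    if hi == lo:  # indifferent between all outcomes
        return {o: 0.0 for o in utilities}
    return {o: (u - lo) / (hi - lo) for o, u in utilities.items()}

def fairest_outcome(people):
    """Pick the outcome that maximizes the sum of normalized utilities.
    `people` is a list of dicts mapping outcome -> raw utility."""
    normalized = [normalize(p) for p in people]
    outcomes = normalized[0].keys()
    return max(outcomes, key=lambda o: sum(n[o] for n in normalized))

# Bob reports tiny numbers, a would-be monster reports huge ones;
# after normalization each person's strongest preference counts as 1.
alice = {"A": 10, "B": 0}
bob = {"A": 0, "B": 1}
monster = {"A": 1_000_000, "B": 0}
print(fairest_outcome([alice, bob, monster]))  # "A" (2 vs 1 after normalization)
```

Of course this only dodges the problem of reading utility functions in the first place, which is exactly why the voting-systems compromise comes up.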
Anything you can do maximizes some combination of people's utility functions. So it is trivially true that the fairest system is a system which uses some combination of people's utility functions. Unless you can first describe how you are going to avoid utility monsters and other perils of utilitarianism, you really haven't said anything useful.
No, I do not. I do not think that humans have coherent utility functions. I don't think utilities of different people can be meaningfully combined, either.
Ah, yes, the famous business plan of the underpants gnomes...
No, I do not. They might be best given some definitions of "best" and given some conditionals, but they are not always best regardless of anything.
What makes you think it is possible?