No, I'm asking you to specify it. My point is that you can't build X if you can't even recognize X.
And I don't agree with that. I've presented some ideas on how an FAI could be built and how CEV would work. None of them requires "recognizing" FAI. What would it even mean to "recognize" FAI, except to see that it values the kinds of things we value and makes the world better for us?
Learning what humans want is pretty easy. However, it's an inconsistent mess that involves many things contemporary people find unsavory. Making it all coherent and formulating a single policy on the basis of this mess is the hard part.
I've written about one method to accomplish this, though there may be better methods.
Why would CEV eliminate things I find negative? This is just the typical mind fallacy, projected. Things I consider positive and negative are not (necessarily) things many or most people consider positive and negative.
Humans are 99.999% identical. We have the same genetics, the same brain structures, and mostly the same environments. The only reason this isn't obvious is that we spend almost all our time focusing on the differences between people, because that's what's useful in everyday life.
I should expect CEV to eliminate some things I believe are positive and impose some things I believe are negative.
That may be the case, but that's still not a bad outcome. In the example I used, the values dropped from ISIS members were removed for two reasons: they were based on false beliefs, or they hurt other people. If you have values based on false beliefs, you should want them to be eliminated. If you have values that hurt other people, then it's only fair that they be eliminated; otherwise you risk being subject to the values of people who want to hurt you.
Later you say that CEV will average values. I don't have average values.
Well, I think it's accurate, but it's somewhat nonspecific. More precisely, CEV will find the optimal compromise of values: the values that satisfy the most people to the greatest degree, or at least dissatisfy the fewest people the least. See the post I just linked for more details on one example of how that could be implemented. That's not necessarily "average values".
In the worst case, people with totally incompatible values will just be allowed to go their separate ways, or whatever the most satisfying compromise is. Muslims live on one side of the Dyson sphere, Christians on the other, and they never have to interact and can each do their own thing.
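To make "optimal compromise" concrete, here is a minimal toy sketch in Python, assuming each person's satisfaction with each candidate policy can be scored numerically. This is my own illustration, not the method from the post linked above; the maximin rule, the splitting threshold, and all policy names, person names, and scores are hypothetical.

```python
def maximin_policy(satisfaction, people):
    """Pick the policy whose least-satisfied person is best off."""
    return max(satisfaction, key=lambda p: min(satisfaction[p][x] for x in people))

def compromise_or_split(satisfaction, people, tolerance=0.5):
    """Return a mapping {group -> policy}. If the best joint policy still
    leaves someone below `tolerance`, split them off and solve the rest."""
    best = maximin_policy(satisfaction, people)
    worst = min(people, key=lambda x: satisfaction[best][x])
    if satisfaction[best][worst] >= tolerance or len(people) == 1:
        return {frozenset(people): best}
    rest = [x for x in people if x != worst]
    groups = compromise_or_split(satisfaction, rest, tolerance)
    groups.update(compromise_or_split(satisfaction, [worst], tolerance))
    return groups

# Hypothetical scores: satisfaction[policy][person] in [0, 1].
satisfaction = {
    "policy_a": {"alice": 0.9, "bob": 0.1, "carol": 0.8},
    "policy_b": {"alice": 0.6, "bob": 0.2, "carol": 0.6},
    "policy_c": {"alice": 0.3, "bob": 0.9, "carol": 0.2},
}

print(compromise_or_split(satisfaction, ["alice", "bob", "carol"]))
# e.g. {frozenset({'alice', 'carol'}): 'policy_a', frozenset({'bob'}): 'policy_c'}
# Bob is incompatible with any joint policy, so he gets his own "side of
# the Dyson sphere" while Alice and Carol share policy_a.
```

Note how maximin differs from averaging: a simple average would happily pick a policy that delights two people and crushes the third, which is exactly the "average values" outcome I'm saying CEV need not produce.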
You are essentially saying that religious people are idiots and if only you could sit them down and explain things to them, the scales would fall from their eyes and they would become atheists. This is a popular idea, but it fails real-life testing very, very hard.
My exact words were "If they were more intelligent, informed, and rational... If they knew all the arguments for and against..." Real-world problems of persuading people don't apply. Most people don't research all the arguments against their beliefs, and most people aren't rational enough to seriously consider the hypothesis that they are wrong.
For what it's worth, I was deconverted like this. Not overnight, by any means. But over time I found that the arguments against my beliefs were correct, and I updated my beliefs.
Changing worldviews is really, really hard. There's no one piece of evidence or one argument to dispute. Religious people believe that there is tons of evidence for God. To them it just seems obviously true, from miracles, to recorded stories, to their own personal experiences, etc. It takes a lot of time to get at every single pillar of the belief and show its flaws. But it is possible. It's not like Muslims were born believing in Islam; Islam is not encoded in genetics. People deconvert from religions all the time, and entire societies have even done it.
In any case, my proposal does not require literally doing this. It's just a thought experiment, meant to show that the ideal set of values is what you would choose if you had all the correct beliefs.
What would it even mean to "recognize" FAI
It means that when you look at an AI system, you can tell whether it's FAI or not.
If you can't tell, you may be able to build an AI system, but you still won't know whether it's FAI or not.
I've written about one method to accomplish this
I don't see what voting systems have to do with CEV. The "E" part means you don't trust what the real, current humans say, so making them vote on anything is pointless.
Humans are 99.999% identical.
That's a meaningless expression without a context. ...