PhilGoetz comments on Great Product. Lousy Marketing. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (70)
I'd like to hear some justification - some extensive justification, at least a sequence's worth - explaining how building a Friendly AI, with the already-expressed intent of beating all other AIs to the punch and then using your position of power to suppress or destroy construction of any other AIs at any cost, and to make yours a singleton designed in such a way that the values you programmed it with can never be altered -
-- can amount to anything other than what Robin just described.
(Elaborating after a day with no responses)
I realize that the first answer is going to be something along the lines of, "But we don't program in the values. We just design an algorithm that can extrapolate values from everyone else."
First, I've spoken with many of the people involved, and haven't heard any of them express views consistent with this - they want their values to be preserved, in fact two said explicitly that they did not care what happened to the universe if their personal values were not preserved - and yet they also believe that their values are extreme minority views among humanity. What's more, they have views of transhumanism and humanity that make excluding lower animals from this extrapolated volition unjustifiable on any grounds that would not also exclude humans.
Second, the problem of trying to program an AI in such a way that your values do not determine the values it acquires is isomorphic to the problem of trying to write a program to do Bayesian analysis in a way that will not be influenced by your priors, or of trying to evaluate a new scientific idea in a way that isn't influenced by your current scientific paradigm. It can't be done, except by defining your terms in a way that hides the problem.
Third, the greatest concern here is how much respect will be given to our free will when setting up the AI governor over us. Given that the people doing the setting up unanimously don't believe in free will, the only logical answer is, Zero.
Could you add more details? I'm really interested in this issue, since I have similar concerns regarding the whole CEV idea: it presumes that the values of almost every human would somehow converge if they were only smarter, had more time, etc. I don't know; it would definitely be nice if this were true, but if I look at most people around me, read a random history book, or just watch five minutes of TV, I see values absurdly different from mine.
To be frank, I think I would trust a CEV more if the FAI extrapolated only the volition of highly intelligent people. Damn, thinking about it, I have to say: if I had to choose between a FAI only scanning the brain of Eliezer and a FAI scanning every human on earth, then I would choose Eliezer!
Well, you could argue that this only shows that I'm a fanatic lunatic or a cynical misanthrope...
Anyway, I would like to hear your current thoughts on this subject!
By 'extrapolated' we mean that the FAI is calculating what the wishes of those people would be IF they were as intelligent and well-informed as the FAI.
Given that, what difference do you think it would make for the FAI to only scan intelligent people? I can imagine only negatives: a potential neglect of physical/non-intellectual pursuits as a potential source of Fun, greater political opposition if not everyone is scanned, a harder time justifying the morality of letting something take control that doesn't take EVERYONE'S desires into consideration...
I don't think I understand this. If the FAI made Stalin as intelligent and well-informed as the FAI, then this newly created entity wouldn't be Stalin anymore. In fact, it would be something totally different. But maybe I'm just too stupid and there is some continuity of identity going on. Then I have to ask: why not extrapolate the volition of every animal on earth? If you can make Stalin intelligent and moral and you somehow don't annihilate the personality of Stalin, then I propose the same thing is possible for every (other) pig. Now you could say something like "Well, Stalin is an exception, he obviously was hopelessly evil", but even today Stalin is the hero of many people.
To put it bluntly: many folks seem to me either utterly stupid or evil or both. If the FAI makes them "as intelligent and well-informed as the FAI itself", then they would be different people.
Anyway, these were only my misanthropic ramblings, and I don't believe they are accurate, since they are probably at least partly caused by depression and frustration. But somehow I felt the urge to utter them ;) Look, I really hope this whole CEV stuff makes sense, but I think I'm not the only one who doubts it. (And not only ordinary folks like me, but e.g. Wei Dai, Phil Goetz, etc.) Do you know of any further explanations of CEV besides this one?
Yeah, so? Nobody said the intelligently extrapolated volition of Stalin would be Stalin. It would be one input into the FAI's calculation.
We're not talking about simulating people. We're talking about extrapolating their volition. Continuity of identity has nothing to do with anything.
Which is exactly the point: we don't want the current volition of people (since people are currently stupid), we want their extrapolated volition — what they would choose if they were much more intelligent than they currently are.