NancyLebovitz comments on Short versions of the basic premise about FAI - Less Wrong
I personally don't think we need to talk about self-improving AI at all to consider the problem of friendliness. I would say a viable alternative statement is: "Evolution has shaped the values of human minds. Those values will not exist in engineered minds unless they are explicitly engineered. Human values are complex, so explicit engineering will be extremely difficult or impossible."
Self-optimization is what makes friendliness a serious problem.
Potentially yes, but I think the problem can be profitably restated without any reference to the Singularity or FOOMing AI. (I've often wondered whether the Friendliness problem would be better recognized and accepted if it were presented without reference to the Singularity.)
Edit: See also Vladimir Nesov's summary, which is quite good, but not quite as short as you're looking for here.
Friendliness would certainly be worth pursuing; it applies to a lot of human issues in addition to what we want from computer programs.
Still, concerns about FOOM are the source of the urgency here.
Concerns about FOOM are also what makes SIAI look like (and some posters talk like) a loony doom cult.
Skip the "instant godlike superintelligence with nanotech arms" shenanigans, and AI ethics remains an interesting and important problem, as you observed.
But it's much easier to get people to look at an interesting problem so you can then persuade them that it's serious, than it is to convince them that they are about to die in order to make them look at your problem. Especially since modern society has so inured people to apocalyptic warnings that the wiser half of the population takes them with a few kilograms of salt to begin with.
The Hidden Complexity of Wishes
I do not understand your point. Would you care to explain?
Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.
Oh, I misunderstood your link. I agree, that's a good summary of the idea behind the "complexity of value" hypothesis.