If it has "swallowed" that claim.

> You are assuming that the AI has a free choice about some goals and is just programmed with others.
This is the important part: the "optimal goal" is not actually controlling the AI. The "optimal goal" is merely the subject of a discussion. What is controlling the AI is the desire to tell the truth to the humans it is talking to, nothing more.
Why would that require more gullibility than "species X is more important than all the others"?

> That doesn't even look like a moral claim.
The entire discussion is not supposed to unearth some kind of pure, inherently good, perfect optimal goal that transcends all reason and is true by virtue of existing or something.
The AI is supposed to take the human POV and think, "If I were these humans, what would I want the AI's goal to be?"
I didn't mention this explicitly because I didn't think it was necessary, but the "optimal goal" is purely subjective from the POV of humanity, and the AI is aware of this.
> some kind of pure, inherently good, perfect optimal goal that transcends all reason and is true by virtue of existing or something.
But if that is true, the AI will say so. What's more, you kind of need the AI to refrain from acting on it if it is a human-unfriendly objective moral truth. There are ethical puzzles where it is apparently right to lie or keep schtum because of the consequences of telling the truth.
edit: I think I have phrased this really poorly and that this has been misinterpreted. See my comment below for clarification.
A lot of thought has been put into the discussion of how one would need to define the goals of an AI so that it won't find any "loopholes" and act in an unintended way.
Assuming one already had an AI that is capable of understanding human psychology, which seems necessary to me to define the AI's goals anyway, wouldn't it be reasonable to assume that the AI would have an understanding of what humans want?
If that is the case, would the following approach work to make the AI friendly?
-give it the temporary goal to always answer questions truthfully as far as possible while admitting uncertainty
-also give it the goal to not alter reality in any way besides answering questions.
-ask it what it thinks would be the optimal definition of the goal of a friendly AI, from the point of view of humanity, accounting for things that humans are too stupid to see coming.
-have a discussion between it and a group of ethicists/philosophers wherein both parties are encouraged to point out any flaws in the definition.
-have this go on for as long as it takes until everyone (especially the AI, seeing as it is smarter than anyone else) is certain that there is no flaw in the definition and that it accounts for all kinds of ethical contingencies that might arise after the singularity.
-implement the result as the new goal of the AI.
What do you think of this approach?