billswift comments on Holden Karnofsky's Singularity Institute Objection 1 - Less Wrong
I have been saying for years that I don't think provable Friendliness is possible, basically for the reasons given here. But I have kept thinking about it, and a relatively minor point that occurred to me is that a bungled attempt at Friendliness might be worse than none: depending on how it was done, the AI could come to regard the attempt as a continuing threat.
What's your sense of how a bungled attempt at Friendliness compares to other things humans might do, in terms of how likely an AI would be to consider it a threat?
Fairly low. But that's because I don't think the first AIs are likely to be built by people trying to guarantee Friendliness. If a Friendly AI proponent rushes to finish before another team does, that could be a much bigger risk.
OK.
For my part, if I think about things people might do that could cause a powerful AI to feel threatened and thereby have significantly bad results, FAI theory and implementation not only doesn't float to the top of the list, it's hardly even visible in the hypothesis space (unless, as here, I privilege it inordinately by artificially priming it).
It's still not even clear to me that "friendliness" is a coherent concept. What is a human-friendly intelligence? Not "what is an unfriendly intelligence" - I'm asking what it is, not what it isn't. (I've asked this before, as have others.) Humans aren't human-friendly, for example, or this wouldn't even be a problem. But SIAI needs a friendly intelligence that values human values.
Humans are most of the way to human-friendly. A human given absolute power might use it to accumulate wealth at the expense of others, or punish people who displease her in cruel ways, or even utterly annihilate large groups of people based on something silly like nationality or skin color. But a human wouldn't misunderstand human values. There is no chance that, if she decided to make everyone as happy as possible, she would kill everyone to use their atoms to tile the universe with pictures of smiley faces (to use a familiar example).
That is not at all clear to me.
I mean, sure, I agree with the example: a well-meaning human would not kill everyone to tile the universe with pictures of smiley faces. There's a reason that example is familiar; it was chosen by humans to illustrate something humans instinctively agree is the wrong answer, but a nonhuman optimizer might not.
But to generalize from this to the idea that humans wouldn't misunderstand human values, or that a well-meaning human granted superhuman optimization abilities won't inadvertently destroy the things humans value most, seems unjustified.
Well, there's the problem of getting the human to be sufficiently well-meaning, as opposed to using Earth as The Sims 2100 before moving on to bigger and better galaxies. But if Friendliness is a coherent concept to begin with, why wouldn't the well-meaning superhuman figure it out after spending some time thinking about it?
Edit: What I'm saying is that if the candidate Friendly AI is actually a superhuman, then we don't have to worry about Step 1 of Friendliness: explaining the problem. Step 2 is convincing the superhuman to care about the problem, and I don't know how likely that is. And finally, Step 3 is figuring out the solution; assuming the human is sufficiently super, that wouldn't be difficult (all this requires is intelligence, which is what we're giving the human to begin with).
Agreed that a sufficiently intelligent human would be no less capable of understanding human values, given data and time, than an equally intelligent nonhuman.
No-one is seriously worried that an AGI will misunderstand human values. The worry is that an AGI will understand human values perfectly well, and go on to optimize what it was built to optimize.
Right, so I'm still thinking about it from the "what it was built to optimize" step. You want to try to build the AGI to optimize for human values, right? So you do your best to explain to it what you mean by your human values. But then you fail at explaining and it starts optimizing something else instead.
But suppose the AGI is a super-intelligent human. Now you can just ask it to "optimize for human values" in those exact words (although you probably want to explain it a bit better, just to be on the safe side).
Does this clarify at all?
"non-human-harming" is still defining it as what it isn't, rather than what it is. I appreciate it's the result we're after, but it has no explanatory power as to what it is - as an answer, it's only a mysterious answer.
Morality has a long tradition of negative phrasing. "Thou shalt not" dates back to biblical times. Many laws are prohibitions. Bad deeds often get given more weight than good ones. That is just part of the nature of the beast - IMHO.
That's nice, but precisely fails to answer the issue I'm raising: what is a "friendly intelligence", in terms other than stating what it isn't? What answer makes the term less mysterious?
To paraphrase a radio conversation with one of SI's employees:
and then do find/replace on "human value" with Eliezer's standard paragraph:
Not that I agree this is the proper definition, just one which I've pieced together from SI's public comments.
The obvious loophole in your paraphrase is that this accounts for the atoms the humans are made of, but not for other atoms the humans are interested in.
But yes, this is a bit closer to an answer not phrased as a negation.
Here is the podcast where the Skeptics' Guide to the Universe (SGU) interviews Michael Vassar (MV) on 23-Sep-2009. The interview begins at 26:10 and the transcript below is 45:50 to 50:11.
The original quote had "human-benefiting" as well as "non-human-harming". You are asking for "human-benefiting" to be spelled out in more detail? Can't we just invoke the 'I know it when I see it' pornography rule here?
No, not if the claimed goal is (as it is) to be able to build one from toothpicks and string.
Right, but surely they'd be the first to admit that the details of how to do that just aren't yet available. They do have their 'moon-onna-stick' wishlist.