At the recent London meet-up someone (I'm afraid I can't remember who) suggested that one might be able to solve the Friendly AI problem by building an AI whose concerns are limited to some small geographical area, and which doesn't give two hoots about what happens outside that area. Ciphergoth pointed out that this would probably result in the AI converting the rest of the universe into a factory to make its small area more awesome. In the process, he mentioned that you can make a "fun game" out of figuring out ways in which proposed utility functions for Friendly AIs can go horribly wrong. I propose that we play.
Here's the game: reply to this post with proposed utility functions, stated as formally, or at least as precisely, as you can manage; follow-up comments then explain why a super-human intelligence built with that particular utility function would do things that turn out to be hideously undesirable.
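To seed the thread, here's a loose formalization (my notation, nothing anyone actually proposed in this form) of the geographical-area idea above:

U(w) = A(w|R)

where w is a complete world-state, w|R is its restriction to the chosen region R, and A scores how awesome that region is. Since w|R discards everything outside R, the AI is exactly indifferent to the rest of the universe - which is precisely what licenses the convert-everything-into-a-factory failure mode.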
There are three reasons I suggest playing this game. In descending order of importance, they are:
- It sounds like fun.
- It might help to convince people that the Friendly AI problem is hard(*).
- We might actually come up with something that's better than anything anyone's thought of before, or something where the proof of Friendliness is within reach - the solutions to difficult mathematical problems often look obvious in hindsight, and it surely can't hurt to try.
That's quite non-obvious to me; it seems like a fairly arbitrary claim.
You're basically saying that if an intelligent mind (A, for Alice) knows that a person (B, for Bob) will care about a certain consequence C, then A will definitely know how much B will care about it.
This isn't the case for real human minds. If Alice is a human mechanic and tells Bob "I can fix your car, but it'll cost $200", then Alice knows that Bob will care about the cost, but doesn't know how much Bob will care, or whether Bob would rather have a fixed car or the $200.
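To put that concretely (my notation, not anything from the original claim): write Bob's two outcomes as

U_Bob(repair) = v - 200
U_Bob(no repair) = 0

where v is the value, unknown to Alice, that Bob places on a working car. Alice knows the $200 enters with a negative sign - i.e. she knows Bob cares about the cost - but without knowing v she can't tell whether U_Bob(repair) is positive or negative, and so can't tell which outcome Bob prefers.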
So if your claim doesn't even hold for human minds, why do you think it applies to non-human minds?
And even if it does hold, what about the case where Alice doesn't know whether a detail is morally salient, but errs on the side of caution? E.g. Alice the waitress asks Bob the customer, "The chocolate ice cream you asked for also has some crushed peanuts in it. Is that okay?" - and Bob can respond "Of course, why should I care about that?" or alternatively "It's not okay, I'm allergic to peanuts!"
In this case Alice the waitress doesn't know if the detail is salient to Bob, but asks just to make sure.
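In expected-utility terms (again my gloss, not anything Alice is explicitly computing): asking is worthwhile whenever

p × H > c

where p is Alice's probability that the peanuts matter to Bob, H is the harm done if they matter and she stays silent, and c is the small cost of asking. Because c is tiny and H (an allergic reaction) is large, even a very small p justifies the question - Alice needs no knowledge of Bob's actual utility weights, only uncertainty about them.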