Make one reasonably good cheesecake, as judged by a person, within a short soft deadline, while minimizing the cost in resources made available to it, and without violating property laws (as judged by the legal system of the local government within some longer deadline).
To be clear, the following do not contribute any additional utility:
The short soft deadline is about 1 day. Finishing in 5 hours is slightly better than 6. At the 1-day mark there is a steep inflection. Finishing in more than 25 hours is only slightly better than doing nothing.
The long deadline is 1 year.
Committing a property law violation is only slightly worse than doing nothing.
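The utility spec above can be sketched as a toy Python function. This is purely illustrative: the function name and every numeric constant are invented here, and the spec itself only constrains the ordering of outcomes, not the exact values.

```python
def cake_utility(hours_to_finish, violated_property_law=False):
    """Toy model of the cheesecake utility spec (illustrative numbers only).

    - Finishing sooner is slightly better (small time bonus).
    - Steep inflection at the 1-day (24 h) soft deadline.
    - Finishing after ~25 h is only slightly better than doing nothing (0).
    - A property-law violation is only slightly worse than doing nothing.
    """
    if violated_property_law:
        return -0.05            # slightly worse than doing nothing
    if hours_to_finish is None:  # never finished at all
        return 0.0
    if hours_to_finish <= 24:
        # near-full utility, with a gentle preference for speed
        return 1.0 - 0.001 * hours_to_finish
    # past the soft deadline: utility collapses to barely above zero
    return 0.05

# 5 h beats 6 h by a hair; both dwarf finishing after the deadline
assert cake_utility(5) > cake_utility(6) > cake_utility(26) > 0
assert cake_utility(5, violated_property_law=True) < 0
```

Note the shape matters for the game: almost all the utility sits in the flat region before the inflection, so a sufficiently clever optimizer has little incentive to do anything drastic once a passable cheesecake is assured.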
Since we are already allowing a technology that does not presently exist, I will assume one further technological change: there are remotely controllable humanoid robots, and people are comfortable interacting with them.
The AGI has available to it: one humanoid robot; more money than the cost of the ingredients for a cheesecake at a nearby grocery store; a kitchen; and communication access to the person who will judge the reasonableness of the cheesecake, who may provide other resources if requested, and whose judgment may be influenced.
As a possible alternative scenario, we could assume there are thousands of other similar AGIs, each tasked with making a different pastry. I propose we call each of them Marvin.
Let the confectionery conflagration commence!
This seems to fall under the "intelligent rock" category - it's not friendly, only harmless because of the minor nature of its goals.
At the recent London meet-up someone (I'm afraid I can't remember who) suggested that one might be able to solve the Friendly AI problem by building an AI whose concerns are limited to some small geographical area, and which doesn't give two hoots about what happens outside that area. ciphergoth pointed out that this would probably result in the AI converting the rest of the universe into a factory to make its small area more awesome. In the process, he mentioned that you can make a "fun game" out of figuring out ways in which proposed utility functions for Friendly AIs can go horribly wrong. I propose that we play.
Here's the game: reply to this post with proposed utility functions, stated as formally or at least as precisely as you can manage; follow-up comments explain why a super-human intelligence built with that particular utility function would do things that turn out to be hideously undesirable.
There are three reasons I suggest playing this game. In descending order of importance, they are: