Today's post, Sorting Pebbles Into Correct Heaps, was originally published on 10 August 2008. A summary (taken from the LW wiki):
A parable about an imaginary society that doesn't understand what its values actually are.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Inseparably Right; or, Joy in the Merely Good, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
The response to Unknown already sums up the issue, though.
You may be justified in predicting that the AI will figure out what its creators are doing. You're not justified in assuming it will then act on that knowledge rather than identifying and pursuing its own purposes (presuming you've specified "purpose" well enough that it doesn't just sit there and modify its own utility function to produce the computer equivalent of shooting up heroin).
Until you know what you're doing, you can't get something else to do it for you. An AI programmed by creators who don't know what they want it to do might cooperate, or it might not. It would be better to start over and program it specifically to do what you want done.