Yeah, I see now that the story doesn't work very well. It's unrealistic that an ad hoc AI designed for answering human questions would manage a coherent takeoff on the first try, without failing miserably due to some flaws in architecture or self-modeling. In all likelihood, making an AI take off without tripping over itself is a hard engineering problem that you can't solve by accident. That seems like a new argument against this particular kind of doomsday scenario. I need to think about it.
That's the friendly AI problem. If you have a piece of planning software that seems to work fine, and you give it more and more options and resources, how do you know that it will keep generating non-extreme plans?
If it terminates as soon as it hits a plan that achieves the goal, and the possible actions are ordered from least to most extreme, then increasing the available resources can't cause trouble, but increasing the available options can (because your ordering might go from correct to incorrect).
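The argument above can be sketched in a few lines. This is a minimal toy model, not anyone's actual planner; the option names, scoring function, and goal predicate are all made up for illustration.

```python
# Toy model of the planner described above (all names hypothetical):
# candidate actions are scanned in order of extremeness, and the planner
# terminates at the first one that achieves the goal.

def plan(options, extremeness, achieves_goal):
    """Return the least-extreme option that achieves the goal, else None."""
    for action in sorted(options, key=extremeness):
        if achieves_goal(action):
            return action  # terminate at the first success
    return None

# An assumed extremeness scoring; "ask politely" is the only option that
# fails to achieve the goal.
scores = {"ask politely": 1, "threaten": 5}
achieves = lambda a: a != "ask politely"

# With a correct ordering, more resources just mean scanning further down
# the same list, so the chosen plan stays the mildest workable one.
first = plan(["ask politely", "threaten"], scores.get, achieves)
# first == "threaten"

# But a *new* option that the scorer mistakenly rates as mild jumps the
# queue, and the planner now terminates on an extreme plan.
scores["hack the scheduler"] = 0
second = plan(list(scores), scores.get, achieves)
# second == "hack the scheduler"
```

The point of the toy: adding resources only moves the planner further down a fixed ordering, while adding options forces a re-sort, and a single mis-ranked option is enough to make the first success an extreme one.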
In general optimization terms, this is the dif...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check whether there is already an active Open Thread before posting a new one. (Check immediately before posting; refresh the list-of-threads page first.)
3. Open Threads should be posted in Discussion, not in Main.
4. Open Threads should start on Monday and end on Sunday.