Lumifer comments on Yet more "stupid" questions - Less Wrong
I don't understand the meaning of the words "want", "innately sticky", and "honestly have a goal" as applied to an AI (and not to a human).
Not at all. Constraints block off sections of solution space which can be as large as you wish. Consider a trivial set of constraints along the lines of "do not affect anything outside of this volume of space", "do not spend more than X energy", or "do not affect more than Y atoms".
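The point can be sketched in code: a toy sketch, assuming a finite list of candidate plans and made-up constraint limits (all names and numbers here are hypothetical, not from the comment), showing how each constraint just carves away part of the solution space while leaving the rest intact.

```python
# Toy sketch: constraints prune regions of a solution space.
# All plan attributes and limits below are illustrative assumptions.

candidate_plans = [
    {"energy": 5,  "atoms_touched": 100,    "leaves_box": False},
    {"energy": 50, "atoms_touched": 100,    "leaves_box": False},  # too much energy
    {"energy": 5,  "atoms_touched": 10**9,  "leaves_box": False},  # touches too many atoms
    {"energy": 5,  "atoms_touched": 100,    "leaves_box": True},   # affects things outside the box
]

MAX_ENERGY = 10     # "do not spend more than X energy"
MAX_ATOMS = 10**6   # "do not affect more than Y atoms"

def allowed(plan):
    """A plan is feasible only if it violates none of the constraints."""
    return (not plan["leaves_box"]
            and plan["energy"] <= MAX_ENERGY
            and plan["atoms_touched"] <= MAX_ATOMS)

feasible = [p for p in candidate_plans if allowed(p)]
print(len(feasible))  # only the first plan survives all three constraints
```

The remaining feasible set can still be as large as you like; the constraints only block off the forbidden regions.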
Suppose you, standing outside the specified volume, observe the end result of the AI's work: oops, that's the AI affecting something outside the volume, namely you, so the AI isn't allowed to do anything at all. Suppose the AI does nothing instead: you can observe that too, so doing nothing is also forbidden. More generally, the AI is made of matter, which has gravitational effects on everything in its future lightcone.
Human: "AI, make me a sandwich without affecting anything outside of the volume of your box."
AI: Within microseconds researches the laws of physics and creates a sandwich without any photon or graviton leaving the box.
Human: "I don't see anything. It obviously doesn't work. Let's turn it off."
AI: "WTF, human?!!"