Vladimir_M comments on The Friendly AI Game - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Would such a constraint be possible to formulate? An AI would presumably formulate theories about its visible universe involving all kinds of variables that aren't directly observable, much like our physical theories. How could one prevent it from formulating theories that involve something resembling the outside world, even if the AI denies that these things exist and treats them as mere mathematical conveniences? (Clearly, in the latter case it might still be drawn toward actions that in practice interact with the outside world.)
Sorry for editing my comment. The point you're replying to wasn't necessary to strike down Johnicholas's argument, so I deleted it.
I don't see why the AI would formulate theories about the "visible universe". It could start in an empty universe (apart from the AI's own machinery), and have a prior that assigns 100% certainty to the complete initial state of that universe.