Vladimir_M comments on The Friendly AI Game - Less Wrong

38 Post author: bentarm 15 March 2011 04:45PM


Comment author: Vladimir_M 15 March 2011 07:40:24PM *  5 points

an AI with a prior of zero for the existence of the outside world will never believe in it, no matter what evidence it sees.

Would such a constraint be possible to formulate? An AI would presumably formulate theories about its visible universe involving all kinds of variables that aren't directly observable, much like our physical theories. How could one prevent it from formulating theories that involve something resembling the outside world, even if the AI denies that those things exist and treats them as mere mathematical conveniences? (Clearly, in the latter case it might still be drawn toward actions that in practice interact with the outside world.)
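The premise being quoted is a standard consequence of Bayes' rule: a hypothesis assigned prior probability exactly zero receives posterior probability zero no matter how strongly the evidence favors it. A minimal sketch of this (function name and likelihood numbers are illustrative, not from the discussion):

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) by Bayes' rule for a binary hypothesis H and evidence E."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    if evidence == 0:
        return 0.0  # evidence impossible under both branches; keep prior of 0
    return prior * likelihood_if_true / evidence

p = 0.0  # prior that an outside world exists, set to exactly zero
for _ in range(1000):
    # each observation is 99x more likely if the outside world exists...
    p = posterior(p, likelihood_if_true=0.99, likelihood_if_false=0.01)

print(p)  # 0.0 -- the zero prior is absorbing: no evidence can move it
```

Note that this only works for a prior of *exactly* zero; any nonzero prior, however tiny, would be driven toward 1 by the same stream of evidence.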

Comment author: cousin_it 15 March 2011 07:43:13PM *  0 points

Sorry for editing my comment. The point you're replying to wasn't necessary to strike down Johnicholas's argument, so I deleted it.

I don't see why the AI would formulate theories about the "visible universe". It could start in an empty universe (apart from the AI's own machinery), and have a prior that specifies the complete initial state of that universe with 100% certainty.