tailcalled comments on Boxing an AI? - Less Wrong Discussion

Post author: tailcalled, 27 March 2015 02:06PM

Comment author: tailcalled 27 March 2015 05:35:36PM 0 points

How would you find out something that a three-year-old is trying to hide from you?

A three-year-old that designed the universe I live in so that it stays hidden as well as possible? I have absolutely no idea. I would probably hope for it to get bored and tell me.

We don't have any way of knowing whether a deception that would fool the likes of us would fool somebody way smarter.

We do have at least some clues:

  1. We know the mathematics of optimal belief updating (Bayes' rule).

  2. We have some rough estimates of the complexity of the theory the AI must figure out.

  3. We have some rough estimates of the amount of information we have given the AI.
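To make point 1 concrete, here is a minimal sketch of optimal belief updating via Bayes' rule. The prior and likelihood values are purely illustrative numbers, not estimates drawn from the discussion:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Example: a hypothesis starting at 10% that predicts the observed
# evidence strongly (90%) versus a weak alternative (20%).
posterior = bayes_update(0.10, 0.90, 0.20)
print(round(posterior, 3))  # 0.333
```

Given rough bounds on the likelihoods (point 2) and on how much evidence the AI has seen (point 3), this rule bounds how confident an optimal reasoner could become.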

I know there is such a thing as underestimating AI, but I think you are severely overestimating it.