JoshuaZ comments on What if AI doesn't quite go FOOM? - Less Wrong

11 Post author: Mass_Driver 20 June 2010 12:03AM




Comment author: JoshuaZ 20 June 2010 02:14:02AM *  3 points [-]

One of the many reasons that I will win my bet with Eliezer is that it is impossible for an AI to understand itself. If it could, it would be able to predict its own actions, and this is a logical contradiction, just as it is for us.

I don't see a logical contradiction here, and we have examples in nature of beings able to understand themselves quite well: humans. People predict their own actions all the time. For instance, I predict that after I finish typing this message I am going to hit "comment" and then get up and refill my glass of orange juice. Moreover, human understanding of ourselves has improved over time and has allowed us to optimize ourselves. The cognitive biases we frequently discuss here are examples of humans understanding our own architecture and improving our processing. We also deliberately improve ourselves by playing games or doing mental exercises designed to train specific cognitive skills. Soon we will improve our cognitive structures more directly through genetic engineering; we've already identified multiple small genetic changes that can make rodents much smarter than normal (see this example or this one). In general, claiming that something is a logical contradiction when it occurs in reality is not a great idea.

Comment author: Unknowns 20 June 2010 05:27:40AM 0 points [-]

See my response to wedrifid.