Lumifer comments on Open thread, Aug. 10 - Aug. 16, 2015 - Less Wrong Discussion

Post author: MrMind, 10 August 2015 07:29AM

Comment author: Lumifer, 11 August 2015 12:44:41AM (-1 points)

> I'm picturing one AI making itself smarter until it seizes control of everything. Ergo, its program would be a map to the future. Presumably someone retains admin on it from when it was a baby.

Well, think about it. We are talking about a self-improving AI: it literally changes itself. You start with a seed AI, call it AI-0, and it bootstraps itself into an omnipotent AI, which we can call AI-1.
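
Here is a crude sketch of the loop I have in mind (pure toy code: the class, the capability number, and the doubling step are all placeholders, not a claim about how any real seed AI would work):

```python
# Pure toy code: the capability number and the doubling step are placeholders,
# not a claim about how a real seed AI would actually improve itself.

class AI:
    def __init__(self, capability):
        self.capability = capability

    def can_improve_itself(self):
        return self.capability < 100      # arbitrary stopping point for the toy

    def rewrite_own_source(self):
        return AI(self.capability * 2)    # each step yields a new, different program


def bootstrap(seed_ai):
    ai = seed_ai                          # AI-0: the program the humans wrote
    while ai.can_improve_itself():
        ai = ai.rewrite_own_source()      # nobody designed the intermediate versions
    return ai                             # AI-1: not the thing anyone had admin on


print(bootstrap(AI(1)).capability)        # prints 128
```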

Note that the programmers have no idea how to construct AI-1. They have no idea about the path from AI-0 to AI-1. All they (and we) know is that AI-0 and AI-1 will be very, very different.

Given this, I don't think that the program will be a map to the future. I don't think that the concept of "retaining admin" would even make sense for an AI-1. It will be completely different from what it started as. And I fail to see why you have a firm belief that it will be docile and obedient.