
Dmytry comments on What is the best compact formalization of the argument for AI risk from fast takeoff? - Less Wrong Discussion

11 Post author: utilitymonster 13 March 2012 01:44AM



Comment author: Dmytry 14 March 2012 07:43:14AM *  0 points [-]

Ahh, by the way, the points I have the most confidence about: 4 and 9. It seems virtually certain to me that precautions will not be adequate. The situation is similar to getting a server of some kind unhackable on the first compile and run.

The same goes for the creation of friendly AI. The situation is worse than writing the first autopilot ever and, on the first run of that autopilot software, flying in the plane, complete with automated takeoff and landing. The plane is just going to crash, period. We are that sloppy at software development, and there is nothing we can do about it. The worst that can happen is an AI that is not friendly but does treat humans as special; it could euthanise humans even if we are otherwise useful to it, for example. A buggy friendly AI is probably the worst outcome.

Seriously, people who don't develop software have all sorts of entirely wrong intuitions about the ability to make something work right on the first try (even with automated theorem proving). Furthermore, a very careful try is also a very slow one, and is unlikely to be the first. What I am hoping for is that the AIs will just quietly wirehead themselves.