
diegocaleiro comments on Superintelligence Reading Group 3: AI and Uploads - Less Wrong Discussion

9 Post author: KatjaGrace 30 September 2014 01:00AM




Comment author: diegocaleiro 02 October 2014 10:32:24AM 2 points

Having a system that stops when it reaches a tested level of intelligence sounds appealing, but I'd worry that the measure of intelligence being used is too fuzzy.

The system might then fail to detect that it had reached that level, bootstrap, and take off in time to destroy the control system that was supposed to halt it. This could happen if we failed to solve any of several distinct problems we don't yet know how to solve, such as symbol grounding, analogical and metaphorical processing, and three more complex ones I can think of but don't want to spend resources mentioning in this thread. The same applies to the simulation argument.