
jacob_cannell comments on Wanted: "The AIs will need humans" arguments

Post author: Kaj_Sotala 14 June 2012 11:01AM




Comment author: timtyler 15 June 2012 10:31:41PM

One of the categories is "They Will Need Us" - claims that AI poses no big risk because AI will always need something that humans have, and will therefore preserve us.

I claim something like this. Specifically, I claim that a broad range of superintelligences will preserve their history and run historical simulations in order to help them understand the world. Many possible superintelligences will study their own origins intensely, to better understand the possible forms of aliens they might encounter in the future. So humans are likely to be preserved because superintelligences need us instrumentally - as objects of study.

This applies to (e.g.) gold-atom maximisers with no shred of human values. I don't claim it for all superintelligences, though - or even for 99% of those likely to be built.

Comment author: jacob_cannell 17 June 2012 07:22:30PM

Yes. I'm surprised this isn't brought up more. AIXI formalizes the idea that intelligence involves predicting the future through deep simulation, and human brains appear to use something like a Monte Carlo simulation approach as well.
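
For what it's worth, here is a minimal sketch of that "predict by simulating" idea - plain Monte Carlo rollout planning in Python. The toy environment (`simulate_step`), the rollout depth, and the sample counts are all invented for illustration; AIXI itself averages over all computable hypotheses rather than one assumed world model.

```python
import random

# Hypothetical toy world: an integer state, two actions. Nothing here is
# from AIXI or the comment; it only illustrates "plan by simulating futures".
ACTIONS = ["left", "right"]

def simulate_step(state, action):
    """Assumed stochastic world model: returns (next_state, reward)."""
    drift = 1 if action == "right" else -1
    next_state = state + drift + random.choice([-1, 0, 1])  # noisy dynamics
    reward = 1.0 if next_state > state else 0.0  # reward for moving up
    return next_state, reward

def rollout(state, depth=10):
    """Sample one possible future under a random policy; return total reward."""
    total = 0.0
    for _ in range(depth):
        state, reward = simulate_step(state, random.choice(ACTIONS))
        total += reward
    return total

def monte_carlo_value(state, action, n_samples=500):
    """Estimate an action's value by averaging many simulated futures."""
    total = 0.0
    for _ in range(n_samples):
        next_state, reward = simulate_step(state, action)
        total += reward + rollout(next_state)
    return total / n_samples

def choose_action(state):
    """Pick the action whose sampled futures look best on average."""
    return max(ACTIONS, key=lambda a: monte_carlo_value(state, a))

print(choose_action(0))  # should usually print "right"
```

Replacing the random rollout policy with something smarter (as in Monte Carlo tree search) is the usual refinement, but the core move - evaluating actions by averaging over sampled futures - is the same.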