Kaj_Sotala comments on Wanted: "The AIs will need humans" arguments - Less Wrong Discussion

7 Post author: Kaj_Sotala 14 June 2012 11:01AM

Comment author: timtyler 15 June 2012 10:31:41PM  4 points

One of the categories is "They Will Need Us" - claims that AI is no big risk, because AI will always have a need of something that humans have, and that they will therefore preserve us.

I claim something like this. Specifically, I claim that a broad range of superintelligences will preserve their history and run historical simulations to help them understand the world. Many possible superintelligences will study their own origins intensely, in order to better understand the possible forms of aliens they might encounter in the future. So, humans are likely to be preserved because superintelligences need us instrumentally - as objects of study.

This applies to (e.g.) gold atom maximisers, with no shred of human values. I don't claim it for all superintelligences, though - or even 99% of those likely to be built.

Comment author: Kaj_Sotala 18 June 2012 10:55:26AM  1 point

Thanks, this is useful. You wouldn't happen to have a separate write-up of it somewhere? (We can cite a blog comment, but it's probably more respectable to cite something that's at least on its own webpage.)

Comment author: timtyler 18 June 2012 11:05:26PM  0 points

Sorry, no proper write-up.