TheAncientGeek comments on Open thread, Oct. 03 - Oct. 09, 2016 - Less Wrong Discussion

4 Post author: MrMind 03 October 2016 06:59AM

Comment author: ruelian 04 October 2016 02:08:04PM *  0 points [-]

I think the basic problem here is an undissolved question: what is 'intelligence'? Humans, being human, tend to imagine a superintelligence as a highly augmented human intelligence, so the natural assumption is that regardless of the 'level' of intelligence, skills will cluster roughly the way they do in human minds, i.e. having the ability to take over the world implies a high posterior probability of having the ability to understand human goals.

The problem with this assumption is that mind-design space is large (<--understatement), and the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal. Granted, the probability of this happening given a superintelligence designed by humans is significantly higher, but still not very high. (I don't actually have enough technical knowledge to estimate this precisely, but just by eyeballing it I'd put it under 5%.)

In fact, autistic people are an example of non-human-standard ability clusters, and even that deviation is tiny on the scale of mind-design space.

As for an elevator pitch of this concept, something like "just because evolution happened to design our brains to be really good at modeling human goal systems doesn't mean all intelligences are good at it, regardless of how good they might be at destroying the planet".

Comment author: TheAncientGeek 05 October 2016 04:20:13PM *  2 points [-]

the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal.

What is this process of random design? Actual AI design is done by humans trying to emulate human abilities.