
Lumifer comments on Open thread, Oct. 19 - Oct. 25, 2015 - Less Wrong Discussion

3 Post author: MrMind 19 October 2015 06:59AM


Comment author: mwengler 21 October 2015 02:03:54PM 2 points

There is significant progress in genetic modification of humans and in physical modification/augmentation of humans. It is plausible we will have genetically modified and/or physically modified human intelligence before we have artificial intelligence.

FAI is the pursuit of artificial intelligence constrained in such a way that it will not be a threat to unmodified humans. Or at least that is what it seems to be to me as an observer of discussions here; is this a reasonable description of FAI?

It occurs to me that natural human intelligence has certainly not developed under any such constraints. Indeed, if humanity can develop a UAI (an unFriendly AI), that is essentially proof that human intelligence is not Friendly in the sense we wish FAI to be.

Presumably we have been more worried about how to constrain AI to be Friendly because AI could learn to self-modify and experience exponential growth, and thus overwhelm human intelligence. But what of modified human intelligence, genetic or physical? These ARE examples of self-modification. And they both appear to be capable of inducing exponential growth.

Is the threat from unFriendly human intelligence any smaller or any different, and is it worthy of consideration as an existential risk? If an intelligence arises from a modified human, is it a threat to unmodified humans, or an enhancement of them? How do we define natural and artificial when our purpose in defining them is to protect the one from the other?

Comment author: polymathwannabe 21 October 2015 05:00:03PM 1 point

Human intelligence has already chosen to maximize the burning of oil with no regard for the viability of our biosphere, so we're already living under an Unfriendly Human Intelligence scenario.