KatjaGrace comments on Superintelligence 29: Crunch time - Less Wrong

8 Post author: KatjaGrace 31 March 2015 04:24AM




Comment author: KatjaGrace 31 March 2015 04:29:27AM 4 points

Are there things that someone should maybe be doing about AI risk that haven't been mentioned yet?

Comment author: PhilGoetz 01 April 2015 03:13:30AM 2 points

The entire approach of planning a stable ecosystem of AIs that evolve in competition, rather than one AI to rule them all and in the darkness bind them, was dismissed in the middle of the book with a few pages amounting to "it could be difficult".

Comment author: timeholmes 02 April 2015 10:24:27PM -1 points

Human beings suffer from a tragic, myopic thinking that gets us into regular serious trouble. Fortunately, our mistakes so far don't quite threaten our species (though we're wiping out plenty of others). Usually we learn by hindsight rather than by robust imaginative caution; we don't learn how to fix a weakness until it's exposed in some catastrophe. Our history by itself indicates that we won't get AI right until it's too late, although many of us will congratulate ourselves that THEN we see exactly where we went wrong. But with AI we only get one chance.

My own fear is that the crucial factor we miss will not be some item like an algorithm we figured wrong, but rather something to do with the WAY humans think. Yes, we are children playing with terrible weapons. What is needed is not so much safer weapons or smarter inventors as a maturity that would widen our perspective. The indication that we have achieved the necessary wisdom will be when our approach is so broad that we no longer miss anything; when we notice that our learning curve has overtaken our disastrous failures. When we are no longer learning in hindsight, we will know that the time has come to take the risk of developing AI. Getting this right seems to me the pivot point on which human survival depends. And at this point it's not looking too good. Like teenage boys, we're still entranced by speed and scope rather than by quality of life. (It's as if, in our heads, we still compete in a world of scarcity instead of stepping boldly into the cooperative world of abundance that is increasingly our reality.)

Maturity will be indicated by a race that, rather than striving to outdo the other guy, is dedicated to helping all creatures live more richly meaningful lives. This is the sort of lab condition that would likely succeed in the AI contest rather than nose-diving us into extinction. I feel human creativity is a God-like gift. I hope it is not what does us in because we were too powerful for our own good.