Kaj_Sotala comments on Open Thread: March 4 - 10 - Less Wrong

Post author: Coscott 04 March 2014 03:55AM




Comment author: Punoxysm 04 March 2014 09:38:07PM 1 point

Hi, long-time lurker, new user. I was thinking about writing a post arguing that any potential AGI of human-level intelligence is likely to have a band of a few years before and after its creation during which FOOM risks can be contained with care, and that this would be an especially fruitful period in which to work on friendliness. Are there any posts or articles I should look at to avoid being too redundant?

Comment author: Kaj_Sotala 05 March 2014 03:07:39PM 3 points

Intelligence Explosion Microeconomics discusses the kinds of open questions we'd need to answer in order to know whether or not there will be such a band.

Section 2.3 (and its subsections) of Responses to Catastrophic AGI Risk also discusses three different types of FOOM that might be possible: a hardware overhang, a speed explosion, and an intelligence explosion. Your argument should probably address all three.

Comment author: Punoxysm 05 March 2014 07:18:11PM 1 point

Thanks. I'll look over these.

Edit: It looks like Section 4, AGI Containment, covers many of my thoughts and comes to a pretty similar conclusion: external constraints on AGI are an imperfect plan, but potentially valuable and complementary to other safety approaches.