Kaj_Sotala comments on Open Thread: March 4 - 10 - Less Wrong
Hi, long-time lurker, new user. I was thinking about writing a post arguing that for any potential AGI of human-level intelligence, there is likely to be a window of a few years before and after its creation during which FOOM risks can be contained with care, and that this window would be an especially fruitful period for work on Friendliness. Any posts/articles I should look at to avoid being too redundant?
Intelligence Explosion Microeconomics discusses the kinds of open questions we'd need to answer to know whether such a window will exist.
Section 2.3 (and its subsections) of Responses to Catastrophic AGI Risk also discusses three types of FOOM that might be possible: a hardware overhang, a speed explosion, and an intelligence explosion. Your argument should probably address all three.
Thanks. I'll look over these.
Edit: It looks like Section 4 ("AGI Containment") covers many of my thoughts and reaches a pretty similar conclusion: external constraints on AGI are an imperfect plan, but potentially valuable and complementary to other safety approaches.