Strilanc comments on Open Thread: March 4 - 10 - Less Wrong

Post author: Coscott 04 March 2014 03:55AM




Comment author: Punoxysm 04 March 2014 09:38:07PM 1 point

Hi, long time lurker, new user. I was thinking about writing a post on how any potential AGI of human-level intelligence is likely to have a band of a few years before and after its creation where FOOM risks can be contained with care, and how this would be an especially fruitful period to deal with friendliness. Any posts/articles I should look at to avoid being too redundant?

Comment author: Strilanc 04 March 2014 11:02:51PM 1 point

There's a post somewhere about two entities discussing how quickly evolution is optimizing compared to how things were before. One of them tries to argue that brains will be even faster, while the other scoffs that the idea of brains making machines with hundreds of moving parts in as little as a thousand years is absurd.

Of course, it's an allegory for the next jump also involving a massive difference in time scale, with things that used to take years taking only minutes.

Unfortunately I can't find the post and I can't remember what it's called.

Comment author: Pfft 04 March 2014 11:56:38PM 6 points

Comment author: Strilanc 05 March 2014 01:11:37AM 0 points

That's it.

Comment author: Nornagest 04 March 2014 11:39:49PM 0 points

Sounds kind of like "They're Made Out of Meat", though the context is different enough that I doubt that's what you're referring to.

Comment author: Oscar_Cunningham 04 March 2014 11:14:18PM 0 points

It's probably in the Hanson-Yudkowsky FOOM debate. Maybe on OB?

Comment author: Punoxysm 04 March 2014 11:19:16PM 0 points

I understand the notion, but think of it in terms of preventing a pandemic: there's a certain set of characteristics that would let a virus overwhelm virtually any attempt to prevent it from wiping out humanity. All existing viruses are safely within the bounds of what our actual public health protocols can handle. On top of that, existing or plausible hypothetical protocols could prevent pandemics caused by viruses with higher transmissibility or mortality than anything previously experienced.

Realistically, a protocol to deal with AGI will be in a similar position. It will be distinctly "one-shot", but there's no reason it couldn't handle a computer somewhat more intelligent than any existing human being.