Punoxysm comments on Open Thread: March 4 - 10 - Less Wrong
I understand the notion, but think of it in terms of preventing a pandemic: there is a certain set of characteristics a virus could have that would overwhelm virtually any attempt to prevent it from wiping out humanity. All existing viruses, however, fall safely within the bounds of what our actual public health protocols can handle. On top of that, existing or plausible hypothetical protocols could prevent pandemics involving viruses with higher transmissibility or mortality than anything previously experienced.
Realistically, a protocol to deal with AGI will be in a similar position. It will be distinctly "one-shot," but there's no reason it couldn't deal with a computer somewhat more intelligent than any existing human being.