Term/Category for AI with Neutral Impact?
Is there any commonly known term on LessWrong to describe an AGI that does not significantly increase or decrease human value? (For instance, an AGI that stops all future attempts to build AGI, but otherwise tries to preserve the course of human history as if it had never existed.) Would...
I wonder if the initial 67% in favor of x-risk was less a reflection of the audience's opinion on AI specifically than a general application of the heuristic "<X fancy new technology> = scary, needs regulation."
(That is, if you replaced AI with any other technology that general audiences are vaguely aware of but don't have a strong opinion on, such as CRISPR or nanotech, would they default to about the same number?)
Also, I would guess that hearing two groups of roughly equally smart-sounding people debate a topic one has no strong opinion on tends to shift one's initial opinion toward "looks like there's a lot of complicated disagreement so idk maybe it's 50/50 lol," regardless of the actual specifics of the arguments made.