Hi all,
So, as you may know, a recent episode of Doctor Who, "Smile", was about a misaligned AI trying to maximize smiles (ish). And the latest, "Extremis", was about an alien race that instantiated conscious simulations to test battle strategies for invading the Earth, with the Doctor as a subroutine of one such simulation.
I thought the common thread of AGI was notable, although I'm guessing it's just a coincidence. More seriously, though, this ties in with an argument I thought of, and I want to know your take on it:
If we want to avoid an AI arms race, so that safety research has more time to catch up to AI progress, then we would want to prevent these issues from becoming more mainstream, if at all possible. The reason is that if AGI in public perception becomes dissociated from Terminator (i.e. laughable, nerdy, and unrealistic) and instead comes to look like a serious whoever-builds-this-first-can-take-over-the-world situation, then we will get an arms race sooner.
I'm not sure I believe this argument myself. For one thing, being more mainstream has the benefit of attracting more safety research talent, government funding, etc. But maybe we shouldn't be spreading awareness without thinking this through some more.
I guess it's a legit argument, but it doesn't address the research aspect, and it's a sample size of one.
(Un)luckily, we don't have many examples of potentially world-destroying arms races, so we might have to adopt the inside view. We'd have to look at how much mutual trust and cooperation currently exists on various fronts, which is beyond my current knowledge.
On the research aspect: I think research can be done without the public having a good understanding of the problems, e.g. CERN or CRISPR. I can also think of other bad outcomes of the public having an understanding of AI risk. It might be used as another stick to take away freedoms; see the war on terrorism…