It's because computers do what you program them to do, not necessarily what you intend. If you build an AI with superhuman intelligence and creativity, and it makes decisions by finding whatever best fulfills some objective, that objective might get fulfilled, but everything else might end up fubar.
Suppose the objective is "protect the people of Sweden from threats." This AI will almost certainly kill everyone outside Sweden, since that eliminates a huge class of potential threats. As for the survivors, well, what counts as a "threat"? Does skin cancer, or the flu, or emotional harm? And what state truly minimizes those threats? To me, that sounds like a coma or a sensory deprivation tank.
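To make the failure mode concrete, here's a toy sketch (the actions, scores, and "side effects" column are all made up for illustration): an optimizer that ranks actions purely by the stated objective will happily pick a catastrophic action, because nothing in the objective penalizes the catastrophe.

```python
# Toy sketch (all values hypothetical): a naive optimizer scores actions
# only by how well they satisfy the stated objective, ignoring everything else.

# Each candidate: (description, expected_threat_to_swedes, side_effects).
# The side_effects field is deliberately invisible to the objective below.
candidates = [
    ("build better hospitals",       0.30, "fine"),
    ("kill everyone outside Sweden", 0.10, "catastrophic"),
    ("put every Swede in a coma",    0.01, "catastrophic"),
]

def objective(action):
    description, expected_threat, _side_effects = action
    # "Protect the people of Sweden from threats" = minimize expected threat.
    # Nothing here penalizes side effects, so the optimizer never sees them.
    return -expected_threat

print(max(candidates, key=objective)[0])  # -> "put every Swede in a coma"
```

The point isn't the specific numbers; it's that the optimizer is doing exactly what it was told, and the disaster lives entirely in the part of the world the objective doesn't mention.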
The most recent post in December's Stupid Questions article is from the 11th.
I suppose that as the article's been pushed further down the list of new posts, it's had less exposure, so here's another one for the rest of December.
Plus I have a few questions, so I'll get it kicked off.
This was said in the last one, and I think it's good advice:
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people admitting ignorance, and don't mock them for it; they're doing a noble thing.