"I'm increasingly inclined to thing there should be some regulatory oversight, maybe at the national and international level just to make sure that we don't do something very foolish."
http://www.cnet.com/news/elon-musk-we-are-summoning-the-demon-with-artificial-intelligence/#ftag=CAD590a51e
Although I don't know enough about Stuart Russell to be sure, he seems rather down to earth. Shane Legg also seems reasonable, and so does Laurent Orseau. The caveat is that these people also seem much less extreme in their views on AI risk.
I certainly do not want to discourage researchers from being cautious about AI. But what currently seems to be happening is the formation of a loose movement of people who reinforce their extreme beliefs about AI through mutual reassurance.
There are whole books now about this topic. What's missing are the empirical or mathematical foundations. The literature just consists of non-rigorous arguments that are at best internally consistent.
So even if we were only talking about sane domain experts, if they solely engage in unfalsifiable philosophical musings then the whole endeavour is suspect. And currently I don't see them making any predictions that are less vague or more useful than the second coming of Jesus Christ. There will supposedly be an intelligence explosion by a singleton with a handful of known characteristics, revealed to us by Omohundro and repeated by Bostrom. That's not enough!
I don't understand how that answers my specific question. Your System 1 may have done a switcheroo on you.