From Geoff Anders of Leverage Research:
In the Spring semester of 2011, I decided to see how effectively I could communicate the idea of a threat from AGI to my undergraduate classes. I spent three sessions on this for each of my two classes. My goal was to convince my students that all of us are going to be killed by an artificial intelligence. My strategy was to induce the students to come up with the ideas themselves. I gave out a survey before and after. An analysis of the survey responses indicates that the students underwent a statistically significant shift in their reported attitudes. After the three sessions, students reported believing that AGI would have a larger impact [1] and also a worse impact [2] than they originally reported believing.
Not a surprising result, perhaps, but the details of how Geoff taught AGI danger and the reactions of his students are quite interesting.
More obviously, an isomorphic argument 'proves' that books will be gibberish - since "almost any" string of characters is gibberish. An additional argument is required: that non-gibberish books are very difficult to write, and that a naive attempt to write one will almost certainly fail on the first try. The analogous argument exists for AGI, of course, but is not given there.
Right - so we have already had 50+ years of trying and failing. A theoretical argument that we won't succeed the first time does not tell us very much that we didn't already know.
What is more interesting is engineers' track record of not screwing up or killing people on the first attempt.
We have records of engineering failures that killed people involving cars, trains, ships, aeroplanes and rockets. We have failure records for bridges, tunnels and skyscrapers.
Engineers do kill people - but often it is deliberate (e.g. nuclear bombs) or done with society's approval (e.g. ...)