From Geoff Anders of Leverage Research:
In the Spring semester of 2011, I decided to see how effectively I could communicate the idea of a threat from AGI to my undergraduate classes. I spent three sessions on this for each of my two classes. My goal was to convince my students that all of us are going to be killed by an artificial intelligence. My strategy was to induce the students to come up with the ideas themselves. I gave out a survey before and after. An analysis of the survey responses indicates that the students underwent a statistically significant shift in their reported attitudes. After the three sessions, students reported believing that AGI would have a larger impact [1] and also a worse impact [2] than they originally reported believing.
Not a surprising result, perhaps, but the details of how Geoff taught AGI danger and the reactions of his students are quite interesting.
I don't see why this makes the argument seem silly. It seems to me that the isomorphic argument is correct, and that computer programs do crash.
Some computer programs crash - just as some possible superintelligences would kill all humans.
However, the behavior of a computer program chosen at random tells you very little about how an actual real-world computer program will behave - because real programs are not chosen at random; they are typically produced by selection processes performed by intelligent agents.
The "for almost any goals" argument is bunk.