My position is that superhuman AGI will probably (accidentally) be created soon, and whether it kills all the humans may depend on how threatening we appear to it. I might pour boiling water on an ant nest if the ants are invading my kitchen, but otherwise I'm generally indifferent to their continued existence because they pose no meaningful threat.
I'm mostly interested in what happens next. I think that the universe of paperclips would be a shame, but if the AGI is doing more interesting things than that then it could simply be rega...
Thank you. I did follow and read those links when I read the article, but I didn't think they were exactly what I was talking about. As I understand it, orthogonality says that it's perfectly possible for an intelligence to be superhuman and also to really want paperclips more than anything. What I'm wondering is whether an intelligence can change its mind about what it wants as it gains more intelligence. I'm not really interested in whether that would lead to ethics we'd approve of, just whether it can decide what it wants for itself. Is there a term for that idea (other than "free will", I suppose)?