Hi folks,
My supervisor and I co-authored a philosophy paper on the argument that AI represents an existential risk. That paper has just been published in Ratio. We figured LessWrong would be able to catch things in it which we might have missed and, either way, hope it might provoke a conversation.
We reconstructed what we take to be the argument for how AI becomes an xrisk as follows:
- The "Singularity" Claim: Artificial Superintelligence is possible and would be out of human control.
- The Orthogonality Thesis: More or less any less of intelligence is compatible with more or less any final goal. (as per Bostrom's 2014 definition)
From the conjunction of these two premises, we can conclude that ASI is possible, that it might have a goal, instrumental or final, which is at odds with human existence, and, given that the ASI would be out of our control, that the ASI is an xrisk.
We then suggested that each premise seems to assume a different interpretation of "intelligence", namely:
- The "Singularity" claim assumes general intelligence
- The Orthogonality Thesis assumes instrumental intelligence
If this is the case, then the premises cannot be joined together in the original argument; in other words, the argument is invalid (sketched schematically below).
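To make the equivocation worry concrete, here is a rough schematic rendering (ours for this post, not the paper's own formalization; the predicate names are just placeholders), with subscripts marking which notion of intelligence each premise relies on:

$$
\begin{aligned}
\text{P1 (Singularity):}\quad & \Diamond\, \exists x\, \big[\mathrm{SI}_{\mathrm{general}}(x) \wedge \neg \mathrm{Controlled}(x)\big] \\
\text{P2 (Orthogonality):}\quad & \forall x\, \forall g\, \big[\mathrm{SI}_{\mathrm{instrumental}}(x) \rightarrow \mathrm{Compatible}(x, g)\big] \\
\text{C:}\quad & \Diamond\, \exists x\, \exists g\, \big[\mathrm{SI}(x) \wedge \neg \mathrm{Controlled}(x) \wedge \mathrm{Goal}(x, g) \wedge \mathrm{AntiHuman}(g)\big]
\end{aligned}
$$

If $\mathrm{SI}_{\mathrm{general}}$ and $\mathrm{SI}_{\mathrm{instrumental}}$ pick out different properties, then there is no single predicate $\mathrm{SI}$ that both premises speak to, and C does not follow without an additional bridging premise.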
We note that this does not mean that AI or ASI is not an xrisk, only that the current argument to that end, as we have reconstructed it, is invalid.
Eagerly, earnestly, and gratefully looking forward to any responses.
First I want to say kudos for posting that paper here and soliciting critical feedback :)
Minor point, but I read this as "it would definitely be out of human control". If so, this is not a common belief. IIRC Yampolskiy believes it, but Yudkowsky doesn't (I think?), and I don't, and I think most x-risk proponents don't. The thing that pretty much everyone believes is "it could be out of human control", and then a subset of more pessimistic people (including me) believes "there is an unacceptably high probability that it will be out of human control".
I'm not sure what you think is going on when people do ethical reasoning. Maybe you have a moral realism perspective that the laws of physics etc. naturally point to things being good and bad, and rational agents will naturally want to do the good thing. If so, I mean, I'm not a philosopher, but I strongly disagree. Stuart Russell gives the example of "trying to win at chess" vs "trying to win at suicide chess". The game has the same rules, but the goals are opposite. (Well, the rules aren't exactly the same, but you get the point.) You can't look at the laws of physics and see what your goal in life should be.
My belief is that when people do ethical reasoning, they are weighing some of their desires against others of their desires. These desires ultimately come from innate instincts, many of which (in humans) are social instincts. The way our instincts work is that they aren't (and can't be) automatically "coherent" when projected onto the world; when we think about things one way it can spawn a certain desire, and when we think about the same thing in a different way it can spawn a contradictory desire. And then we hold both of those in our heads, and think about what we want to do. That's how I think of ethical reasoning.
I don't think ethical reasoning can invent new desires whole cloth. If I say "It's ethical to buy bananas and paint them purple", and you say "why?", and then I say "because lots of bananas are too yellow", and then you say "why?" and I say … anyway, at some point this conversation has to ground out at something that you find intuitively desirable or undesirable.
So when I look at your list I quoted above, I mostly say "Yup, that sounds about right."
For example, imagine that you come to believe that everyone in the world was stolen away last night and locked in secret prisons, and you were forced to enter a lifelike VR simulation, so everyone else is now an unconscious morally-irrelevant simulation except for you. Somewhere in this virtual world, there is a room with a Go board. You have been told that if white wins this game, you and everyone will be safely released from prison and can return to normal life. If black wins, all humans (including you and your children etc.) will be tortured forever. You have good reason to believe all of this with 100% confidence.
OK that's the setup. Now let's go through the list:
Maybe you'll object that "the belief that these NPCs can pass for human but be unconscious" is not a belief that a very intelligent agent would subscribe to. But I only made the scenario like that because you're a human, and you do have the normal suite of innate human desires, and thus it's a bit tricky to get you in the mindset of an agent who cares only about Go. For an actual Go-maximizing agent, you wouldn't have to have those kinds of beliefs, you could just make the agent not care about humans and consciousness and suffering in the first place, just as you don't care about "hurting" the colorful blocks in Breakout. Such an agent would (I presume) give correct answers to quiz questions about what is consciousness and what is suffering and what do humans think about them, but it wouldn't care about any of that! It would only care about Go.
(Also, even if you believe that not-caring-about-consciousness would not survive reflection, you can get x-risk from an agent with radically superhuman intelligence in every domain but no particular interest in thinking about ethics. It's busy doing other stuff, y'know, so it never stops to consider whether conscious entities are inherently important! In this view, maybe 30,000,000 years after destroying all life and tiling the galaxies with supercomputers and proving every possible theorem about Go, then it stops for a while, and reflects, and says "Oh hey, that's funny, I guess Go doesn't matter after all, oops". I don't hold that view anyway, just saying.)
(For more elaborate intuition-pumping fiction about metaethics, see Three Worlds Collide.)
Can one be a moral realist and subscribe to the orthogonality thesis? If so, in which version of it? (In other words, does one have to reject moral realism in order to accept the standard argument for x-risk from AI? We had better be told! See section 4.1.)