Thanks, it's useful to bring these out - though we mention them in passing. Just to be sure: we are looking at the XRisk thesis, not at some thesis that AI can be "dangerous", as most technologies can be. The Omohundro-style escalation is precisely what is at issue in our point that instrumental intelligence is not sufficient for XRisk.
... we aren't trying to prove the absence of XRisk, we are probing the best argument for it?
We tried to find the strongest argument in the literature. This is how we came up with our version:
"
Premise 1: Superintelligent AI is a realistic prospect, and it would be out of human control. (Singularity claim)
Premise 2: Any level of intelligence can go with any goals. (Orthogonality thesis)
Conclusion: Superintelligent AI poses an existential risk for humanity
"
====
A more formal version with the same propositions might be this:
1. IF there is a realistic prospect that there will be a superintelligent AI system that is a) out of human control and b) can ha...
Even if that is true, you would still a) get a lot of sickness & suffering, and b) infect a lot of other people (who infect further). So some people would be seriously ill and some would die as a result of this experiment.
Can one be a moral realist and subscribe to the orthogonality thesis? In which version of it? (In other words, does one have to reject moral realism in order to accept the standard argument for XRisk from AI? We had better be told! See section 4.1)
But reasoning about morality? Is that a space governed by logic, or one where anything goes?
Thanks. We are actually more modest. We would like to see a sound argument for XRisk from AI and we investigate what we call 'the standard argument'; we find it wanting and try to strengthen it, but we fail. So there is something amiss. In the conclusion we admit "we could well be wrong somewhere and the classical argument for existential risk from AI is actually sound, or there is another argument that we have not considered."
I would say the challenge is to present a sound argument (valid + true premises) or at least a valid argument with decent inductive support for the premises. Oddly, we do not seem to have that.
... plus we say that in the paper :)
This should have been clearer. We meant this in Bentham's good old way: minimal pain and maximal pleasure. Intuitively: A world with a lot of pleasure (in the long run) is better than a world with a lot of pain. - You don't need to agree, you just need to agree that this is worth considering, but on our interpretation the orthogonality thesis says that one cannot consider this.
Thanks for this. Indeed, we have no theory here of goals and how they relate; maybe they must form a hierarchy, as you suggest. And there is a question, then, whether there must be some immovable goal or goals that would have to remain in place in order to judge anything at all. This would constitute a theory of normative judgment ... which we don't have up our sleeves :)
We suggest that such instrumental intelligence would be very limited.
In fact, there are degrees of generality here, and it seems one needs a fairly high degree to get to XRisk; but that high degree would then exclude orthogonality.
Yes, that means "this argument".
Thanks for the 'minor' point, which is important: yes, we meant definitely out of human control. And perhaps that is not required, so the argument has a different shape.
Our struggle was to write down a 'standard argument' in such a way that it is clear and its assumptions come out - and your point adds to this.
Here we get to a crucial issue, thanks! If we do assume that reflection on goals does occur, do we assume that the results bear any resemblance to human reflection on morality? Perhaps there is an assumption about the nature of morality or moral reasoning in the 'standard argument' that we have not discussed?
We do not say that there is no XRisk or no XRisk from AI.
... well, one might say we assume that if there is 'reflection on goals', the results are not random.
apologies, I don't recognise the paper here :)
We tried to frame the discussion internally, i.e. without making additional assumptions that people may or may not agree with (e.g. moral realism). If we did the job right, the assumptions made in the argument are in the 'singularity claim' and the 'orthogonality thesis' - and there the dilemma is that we need an assumption in the one (general intelligence in the singularity claim) that we must reject in the other (the orthogonality thesis).
What we do say (see figure 1) is that two combinations are inconsistent:
a) general intelligence + orthogonality
b) ins...
Is this 'standard argument' valid? We only argue that it is problematic.
If this argument is invalid, what would a valid argument look like? Perhaps with a 'sufficient probability' of high risk from instrumental intelligence?
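One way such a probabilistic version might be pictured (our own illustrative sketch, not a formulation from the paper; the event labels are made up) is as a chain of conditional probabilities:

\[
P(\text{XRisk}) \;\ge\; P(S)\cdot P(U \mid S)\cdot P(C \mid S \wedge U)
\]

where \(S\) is "a superintelligent AI system is built and is out of human control" (singularity claim), \(U\) is "its goals are indifferent or hostile to human survival" (the orthogonality worry), and \(C\) is "it acts on those goals so as to cause an existential catastrophe" (the instrumental-intelligence step). A probabilistic argument would then have to show that each factor is high enough for the product to count as a 'sufficient probability'.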
The combinatorial explosion is on the side of the Turing Test, of course. But storage space is on the side of "design to the test", so if you can make up a nice decisive question, the designer can think of it, too (or read your blog) and add that. The question here is whether Stuart (and Ned Block) are right that such a "giant lookup table" a) makes sense and b) has no intelligence. "The intelligence of a toaster", as Block said.
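For a rough sense of that explosion (the numbers below are arbitrary assumptions, purely to show the order of magnitude), consider how many conversation histories such a giant lookup table would have to cover:

```python
# Back-of-the-envelope size of a "giant lookup table" that maps every possible
# conversation history to a canned reply (Block's toy model). All numbers
# below are arbitrary illustrative assumptions.

VOCABULARY = 10_000   # distinct words a judge might use
WORDS_PER_TURN = 20   # length of one judge utterance
TURNS = 10            # judge turns in one test session

utterances_per_turn = VOCABULARY ** WORDS_PER_TURN   # ~10^80
histories = utterances_per_turn ** TURNS             # ~10^800

print(f"possible judge utterances per turn: ~10^{len(str(utterances_per_turn)) - 1}")
print(f"possible conversation histories:    ~10^{len(str(histories)) - 1}")
# Even with these modest assumptions the table needs ~10^800 entries, against
# roughly 10^80 atoms in the observable universe - which is why cheap storage
# only helps "design to the test", not the full lookup table.
```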
One thing that I've tried with Google is using it to write stories. Start by searching on "Fred was bored and". Pick "slightly" from the results and search on "was bored and slightly". Pick "annoyed" from the search results and search on "bored and slightly annoyed".
Trying this again just now reminds me that I let the sentence fragment grow and grow until I was down to, err, ten? hits. Then I took the next word from a hit that wasn't making a literal copy, and deleted enough leading words to get the hit count back up.
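A rough sketch of that procedure in code (run here against a tiny made-up in-memory corpus instead of Google; the corpus, thresholds and helper functions are purely illustrative, not the original experiment):

```python
# Toy sketch of "growing a story out of search hits": keep appending a word
# taken from the hits, and when the growing phrase becomes too rare, drop
# leading words until the hit count climbs back above a threshold.

CORPUS = [
    "fred was bored and slightly annoyed by the rain",
    "she was bored and slightly annoyed but said nothing",
    "he was bored and slightly hungry after the long meeting",
    "they were bored and slightly annoyed by the delay",
]

def hit_count(phrase: str) -> int:
    """How many corpus 'pages' contain the exact phrase."""
    return sum(phrase in doc for doc in CORPUS)

def next_words(phrase: str) -> list[str]:
    """Words that follow the phrase somewhere in the corpus."""
    out = []
    for doc in CORPUS:
        idx = doc.find(phrase)
        if idx != -1:
            rest = doc[idx + len(phrase):].split()
            if rest:
                out.append(rest[0])
    return out

def grow_story(seed: str, min_hits: int = 2, steps: int = 6) -> str:
    story = seed.split()
    window = list(story)                  # the phrase we currently "search" on
    for _ in range(steps):
        candidates = next_words(" ".join(window))
        if not candidates:
            break
        word = candidates[0]
        story.append(word)
        window.append(word)
        # If the phrase has become too rare, shorten it from the front.
        while len(window) > 1 and hit_count(" ".join(window)) < min_hits:
            window.pop(0)
    return " ".join(story)

print(grow_story("was bored and"))   # -> "was bored and slightly annoyed by the rain"
```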
Anyway, it see...
That's why the test only offers a sufficient condition for intelligence (not a necessary one) - at least that's the standard view.
P.S.: Whether all this has to do with conscious experience ("consciousness") we don't know, I think.
The classical problem is that the Turing Test is behavioristic and only provides a sufficient criterion (rather than replacing talk about 'intelligence' as Turing suggests). And it doesn't provide a proper criterion in that it relies on human judges - who tend to take humans for computers, in practice. - Of course it is meant to be open-ended in that "anything one can talk about" is permitted, including stuff that's not on the web. That is a large set of intelligent behavior, but a limited set - so the "design to the test" you are poin...
Thanks, insightful post. I find the research a bit patchy. On the atomic bomb alone there has been a vast literature since the 1950s, even in popular fiction - and a couple of crucial names like Oppenheimer (vs. Teller), the Russell–Einstein Manifesto or v. Weizsäcker are absent here.
One more consideration about "instrumental intelligence": we left that somewhat under-defined, more like "if I had that utility function, what would I do?" ... but it is not clear that this image of "me in the machine" captures what a current or future machine would do. In other words, people who use instrumental intelligence for an image of AI owe us a more detailed explanation of what that would be, given the machines we are creating - not just given the standard theory of rational choice.
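For concreteness, the picture behind "if I had that utility function, what would I do?" is roughly the textbook rational-choice agent. A minimal toy sketch (our own illustration with made-up actions and numbers, not a claim about any actual or future system):

```python
# Minimal sketch of "instrumental intelligence" as textbook rational choice:
# an agent that picks the action maximising expected utility under a fixed,
# unquestioned utility function. Actions, outcomes and probabilities are
# made-up illustrations.

from typing import Callable

# Each action leads to outcomes with some probability.
ACTIONS: dict[str, list[tuple[str, float]]] = {
    "acquire_resources": [("goal_progress", 0.7), ("no_change", 0.3)],
    "do_nothing":        [("no_change", 1.0)],
}

def expected_utility(action: str, utility: Callable[[str], float]) -> float:
    return sum(p * utility(outcome) for outcome, p in ACTIONS[action])

def choose(utility: Callable[[str], float]) -> str:
    # The utility function itself is never up for revision here - that is
    # exactly the feature the orthogonality thesis relies on.
    return max(ACTIONS, key=lambda a: expected_utility(a, utility))

# Whatever the (arbitrary) goal, the same machinery picks the instrumentally
# best action:
paperclip_utility = lambda o: {"goal_progress": 1.0, "no_change": 0.0}[o]
print(choose(paperclip_utility))   # -> "acquire_resources"
```

The sketch only makes visible how much is packed into that picture: a fixed utility function that is never itself questioned, plus expected-utility maximisation over a given action set - and it is exactly this "me in the machine" transfer to actual machines that we think needs spelling out.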