The phrase "aligned superintelligence" is pretty broad. At one end, is the concept of a superintelligence that will do everything we want and nothing we don't, for some vague inclusive sense of "what we want". At the other end is superintelligence that won't literally kill everyone as a byproduct of doing whatever it does.
Obviously the first requires the second, but we don't know how to ensure even the second. There is an unacceptably high risk that we will get superintelligence that does kill everyone, though different people have quite different magnitudes for exactly how unacceptably high risk it is.
The biggest issue is that if the superintelligence "wants" anything (in the sense of carrying out plans to achieve it), we won't be able to stop it from pursuing those plans. So a large part of alignment is either ensuring that it never actually "wants" anything (e.g. tool AI), or developing it in such a way that it "wants" things that are compatible with our continued existence and, ideally, flourishing. This is the core idea of "alignment".
If alignment is misguided as a whole, then either superintelligence never occurs or humanity ends.
So the first one is an "AGSI", and the second is an "ANSI" (general vs narrow)?
If I understand correctly... One type of alignment (required for the "AGSI") is what I'm referring to as alignment: it is conscious of all of our interests and tries to respect them, like a good friend. The other is that the system is narrow enough in scope that it literally just does that one thing, way better than humans could, and the scope is narrow enough that we can hopefully reason about it and have some idea that it's safe.
Alignment is kind of a confusing term if applied to ANSI, because to me at least it seems to suggest agency and aligned interests, whereas in the case of ANSI, if I understand correctly, the idea is to prevent it from having agency and interests in the first place. So it's "aligned" in the same way that a car is aligned, i.e. it doesn't veer off the road at 80 mph :-)
But I'm not sure if I've understood correctly, thanks for your help...
This interesting post was sitting at -1 total karma with 5 votes at the time of reading, so I strongly upvoted. It contains a brief but cogent argument that seems novel on LW. Though it might have been made before in more abstract and wordy terms, I still think the novelty of brevity is valuable.
I think your intuitions are essentially correct - in particular, I think many people draw poor conclusions from the strange notion that humans are an example of alignment success.
However, you seem to be conflating "we can't align an arbitrary superintelligence" with "we can't align any superintelligence at all":
We don't need to solve the problem in full generality - we only need to find one setup that works.
Importantly, 'works' here means that we have sufficient understanding of the system to guarantee some alignment property with high probability. We don't need complete understanding.
This is still a very high bar - we need to guarantee-with-high-probability some property that actually leads to things turning out well, not simply one which satisfies my definition of alignment.
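Loosely formalised (this is just my shorthand, not a claim about any particular approach), the bar looks something like:

```latex
\[
\exists\, S \;:\;\; \Pr\big[\,A(S)\,\big] \;\ge\; 1 - \varepsilon
\quad\text{and}\quad
A(S) \;\Rightarrow\; \text{things actually turn out well}
\]
% S: one concrete setup we might actually build; A: the alignment property we argue for;
% epsilon: an acceptably small failure probability. All notation is mine.
```

i.e. we need one setup, one property, and enough understanding to make the probability bound credible - not a universally applicable alignment recipe.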
I haven't seen any existence proof for a predictably fairly safe path to an aligned SI (or aligned AGI...).
It'd be nice to know that such a path existed.
Thanks for your response! Could you explain what you mean by "fully general"? Do you mean that alignment of narrow SI is possible? Or that partial alignment of general SI is good enough in some circumstance? If it's the latter could you give an example?
By "fully general" I mean something like "With alignment process x, we could take the specification of any SI, apply x to it, and have an aligned version of that SI specification". (I assume almost everyone thinks this isn't achievable)
But we don't need an approach that's this strong: we don't need to be able to align all, most, or even a small fraction of SIs. One is enough - and in principle we could build in many highly specific constraints by construction (given sufficient understanding).
This still seems very hard, but I don't think there's any straightforward argument for its impossibility/intractability. Most such arguments only work against the more general solutions - i.e. if we needed to be able to align any SI specification.
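Put in quantifier terms (again, just my paraphrase of the above), the difference between the two claims is the order of the quantifiers:

```latex
\[
\textbf{Fully general (not needed):}\quad
\exists\, x \;\; \forall\, \text{SI specifications } s :\; \mathrm{aligned}\big(x(s)\big)
\]
\[
\textbf{Sufficient (what we need):}\quad
\exists\, s :\; \mathrm{aligned}(s)
\]
% x: a hypothetical alignment process; s: an SI specification. Notation is mine.
```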
Here's a survey of a bunch of impossibility results if you're interested.
These also apply to stronger results than we need (which is nice!).
For a while I’ve been reading LessWrong, and there’s something that doesn’t make much sense to me, which is the idea that it is possible to align an AGI/superintelligence at all. I understand that it’s probably not even a majority of the discussion on LW that is optimistic about the prospect of AI alignment via technology as opposed to other means, but nonetheless, I think it skews the discussion.
I should humbly clarify here that I don’t consider myself in the league of most LW posters, but I have very much enjoyed reading the forum nonetheless. The alignment question is a bit like the Collatz conjecture: seductive, only the alignment question intuitively seems much harder than Collatz!
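(For anyone unfamiliar, the Collatz map is simple enough to sketch in a few lines of Python - the function name here is just mine - and the conjecture that the loop below always terminates remains unproven, which is the seductive "easy to state, hard to settle" flavour I mean:)

```python
def collatz_steps(n: int) -> int:
    """Apply n -> n/2 (n even) or n -> 3n + 1 (n odd), counting steps until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# The Collatz conjecture: this loop terminates for every positive integer n.
print([collatz_steps(n) for n in range(1, 10)])  # [0, 1, 7, 2, 5, 8, 16, 3, 19]
```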
Alignment is a subject of intense ongoing debate on LW and other places, and of course the developers of the models, though they admit that it’s hard, are still optimistic about the possibility of alignment (I suppose they have to be).
But my intuition tells me something different. I think of superintelligence in terms of complexity, e.g. it has a greater ability to manage complexity than humans, and to us it is very complex. In contrast, we are less complex to it, and it may find us basic and rather easy to model / predict to a good degree of confidence.
I think complexity is a useful concept to think about, and it has an interesting characteristic, which is that it tends to escape your attempts to manage it, over time. This is why, as living organisms, we need to do continual maintenance work on ourselves to pump the entropy out of our bodies and minds.
Managing complexity takes ongoing effort, and the more complex the problem is, the more likely it is that your model of it will fall short sooner or later - so in the case of a superintelligence it is pretty much guaranteed to fall short. And this is going to be a problem for AI safety, which is news to pretty much no-one, I expect…
And this is just how life is, and I haven't seen an example in any other domain that would suggest we can align a superintelligence. If we could, maybe we could first try to align a politician? Nope, we haven't managed that either, and the problem isn't entirely dissimilar. What about aligning a foreign nation state? Nope, there are only two ways: either there is mutual benefit to alignment, or there is a benefit to one party to behave well because the other has the circumstantial advantage.
I Googled "nature of complexity" just to see if there was anything that supported my intuition on the subject, and the first result is this page, which has a fitting quote:
"Complexity is the property of a real world system that is manifest in the inability of any one formalism being adequate to capture all its properties. It requires that we find distinctly different ways of interacting with systems. Distinctly different in the sense that when we make successful models, the formal systems needed to describe each distinct aspect are NOT derivable from each other."
My layman’s understanding of what it is saying can be summed up thusly:
Complexity will find a way to escape your attempts to control it, via unforeseen circumstances that require you to augment your model with new information.
So, I can't find a reason to believe that we can align an AI at all, except via a sufficient circumstantial advantage. There is no silver bullet here. So if it were up to me (it's not), I'd:
Maybe, just maybe, if we do all these things we can be ready in 10 years when that dark actor presses the button…
Edit: going to reference here some better formulated arguments that seem to support what I'm saying.
https://www.lesswrong.com/posts/AdGo5BRCzzsdDGM6H/contra-strong-coherence
I agree with ^, a general intelligence can realign itself. The point of agency is that it will define utility as whatever maximises its predictability / power over its own circumstances, not according to some pre-programming.