wedrifid comments on Should I believe what the SIAI claims? - Less Wrong
Even small ambitions are risky. If I ask a potential superintelligence to do something easy but an obstacle gets in the way it will most likely obliterate that obstacle and do the 'simple thing'. Unless you are very careful that 'obstacle' could wind up being yourself or, if you are unlucky, your species. Maybe it just can't risk one of you pressing the off switch!
Good point. The resources expended towards a "small" goal aren't directly bounded by the size of the goal. As you said, an obstacle can make the resources used go arbitrarily high. An alternative constraint would be on what the AI is allowed to use up in achieving the goal - "No more than 10 kilograms of matter, nor more than 10 megajoules of energy, nor any human lives, nor anything with a market value of more than $1000". This will have problems of its own, when the AI thinks up something to use up that we never anticipated (We have something of a similar problem with corporations - but at least they operate on human timescales).
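To make the idea concrete, here is a minimal sketch of an explicit resource budget used as a hard constraint on plan selection. All the names (`Plan`, `within_budget`, the example plans) are invented for illustration - and note it inherits exactly the weakness described above: it can only veto along the dimensions we thought to enumerate.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    description: str
    mass_kg: float     # matter consumed
    energy_mj: float   # energy consumed
    human_lives: int   # lives put at risk
    cost_usd: float    # market value of resources used

# The budget from the comment above: 10 kg, 10 MJ, no lives, $1000.
BUDGET = {"mass_kg": 10.0, "energy_mj": 10.0, "human_lives": 0, "cost_usd": 1000.0}

def within_budget(plan: Plan) -> bool:
    """Reject any plan that exceeds the budget on an enumerated dimension."""
    return (plan.mass_kg <= BUDGET["mass_kg"]
            and plan.energy_mj <= BUDGET["energy_mj"]
            and plan.human_lives <= BUDGET["human_lives"]
            and plan.cost_usd <= BUDGET["cost_usd"])

plans = [
    Plan("fetch the coffee", 0.5, 0.1, 0, 5.0),
    Plan("demolish the wall blocking the path", 500.0, 200.0, 0, 20000.0),
]
allowed = [p for p in plans if within_budget(p)]
```

The filter says nothing about resources outside the listed dimensions, which is exactly the "something we never anticipated" loophole.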
Part of the safety of existing optimizers is that they can only use resources or perform actions that we've explicitly let them try using. An electronic CAD program may tweak transistor widths, but it isn't going to get creative and start trying to satisfy its goals by hacking into the controls of the manufacturing line and changing their settings. An AI with the option to send arbitrary messages to arbitrary places is quite another animal...
The idea is to prevent a "runaway" disaster.
Relatively standard and conventional engineering safety methodologies would be used for other kinds of problems.
My observation is that small ambitions can become 'runaway disasters' unless a lot of the problems of FAI are solved.
That sounds as 'safe' as giving Harry Potter rules to follow.
I understand that this is an area in which we fundamentally disagree. I have previously disagreed about the wisdom of using human legal systems to control AI behaviour and I assume that our disagreement will be similar on this subject.
"Small ambitions" are a proposed solution. Get the machine to want something - and then stop when its desires are satisfied - or at a specified date, whichever comes first.
The solution has some complications - but it does look as though it is a pretty obvious safety measure - one that suitably paranoid individuals are likely to have near the top of their lists.
It doesn't make a runaway disaster impossible. The agent could still set up minions, "forget" to switch them off - and then they run amok. The point is to make a runaway disaster much less likely. The safety level is pretty configurable - if the machine's desires are sufficiently constrained. I went into a lot of these issues at:
http://alife.co.uk/essays/stopping_superintelligence/
See also the previous discussion of the issue on this site.
Shane Legg has also gone into methods of restraining a machine "from within" - so to speak. Logically, you could limit space, time or material resources in this way - if you have control over an agent's utility function.
This is very dangerous thinking. There are many potential holes not covered in your essay. The problem with all these holes is that even the smallest one can potentially lead to the end of the universe. As Eliezer often mentions: the AI has to be mathematically rigorously proven to be friendly; there can't be any room for guessing or hoping.
As an example, consider that to the AI, moving to a quiescent state will be akin to dying. (Consider somebody wanting to make you not want anything, or forcing you to want something that you normally don't.) I hope you don't come back with a "but we can do X", because that would be another patch, and that's exactly what we want to avoid. There is no getting around creating a solid, proven mathematical definition of friendly.
The end of the universe - OMG!
It seems reasonable to expect that agents will welcome their end if their time has come.
The idea, as usual, is not to try and make the agent do something it doesn't want to - but rather to make it want to do it in the first place.
I expect off switches - and the like - will be among the safety techniques employed. Provable correctness might be among them as well - but judging by the history of such techniques it seems rather optimistic to expect very much from them.
I am fairly confident that we can tweak any correct program into a form which allows a mathematical proof that the program behavior meets some formal specification of "Friendly".
I am less confident that we will be able to convince ourselves that the formal specification of "Friendly" that we employ is really something that we want.
We can prove there are no bugs in the program, but we can't prove there are no bugs in the program specification. Because the "proof" of the specification requires that all of the stakeholders actually look at that specification of "Friendly", think about that specification, and then bet their lives on the assertion that this is indeed what they want.
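A toy illustration of this gap (my own invented example, not from the thread): a program can provably satisfy its formal specification while the specification fails to capture what anyone actually wanted.

```python
def is_sorted(xs):
    """The formal spec: output must be in non-decreasing order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def bogus_sort(xs):
    # Provably satisfies is_sorted() for every input - the proof is trivial -
    # yet it is not what any stakeholder wanted: the spec forgot to require
    # that the output be a permutation of the input.
    return []

assert is_sorted(bogus_sort([3, 1, 2]))  # spec met; intent missed
```

The "Friendly" case is the same problem at vastly higher stakes: verifying the program against the spec is the easy half.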
What is a "stakeholder", you ask? Well, what I really mean is pitchfork-holder. Stakes are from a different movie.
I don't think there is much difference between the two. Either way you are modifying the agent's behavior. If it doesn't want it, it won't have it.
The problem with off switches is that 1) they might not be guaranteed to work (the AI could change its own code or prevent anyone from accessing or using the off switch), and 2) they might not be guaranteed to work the way you want them to. Unless you have formally proven that the AI, and all the possible modifications it can make to itself, are safe, you can't know for sure.
It is not a modification if you make it that way "in the first place", as specified - and "If it doesn't want it, it won't have it" seems contrary to the specified bit where you "make it want to do it in the first place".
The idea of off switches is not that they are guaranteed to work, but that they are a safety feature. If you can make a machine do anything you want at all, you can probably make it turn itself off. You can build it so the machine doesn't wish to stay turned on - but goes willingly into the night.
We will never "know for sure" that a machine intelligence is safe. This is the real world, not math land. We may be able to prove some things about it - such as that its initial state is not vulnerable to buffer-overflow attacks on its input stream - but we won't be able to prove that the machine will only do what we want it to do, for some value of "we".
At the moment, the self-improving systems we see are complex man-machine symbioses - companies and governments. You can't prove math theorems about such entities - they are just too messy. Machine intelligence seems likely to be like that for quite a while - functionally embedded in a human matrix. The question of "what would the machine do if no one could interfere with its code" is one for relatively late on - machines will already be very smart by then - smarter than most human computer programmers, anyway.
The hardest part of Friendly AI is figuring out how to reliably instill any goal system.
If you can't get it to do what you want at all, the machine is useless, and there would be no point in constructing it. In practice, we know we can get machines to do what we want to some extent - we have lots of examples of that. So, the idea is to make the machine not mind being turned off. Don't make it an open-ended maximiser - make it maximise only until time t - or until its stop button is pressed - whichever comes sooner.
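The "maximise only until time t or until the stop button is pressed" idea can be sketched as a utility function that simply stops accumulating reward past that point, so the agent gains nothing by resisting shutdown. This is a hypothetical illustration with invented names, not a claim that such a bound is watertight - the objections above about self-modification and minions still apply.

```python
def bounded_utility(rewards, stop_time, button_pressed_at=None):
    """Sum per-step rewards up to min(stop_time, button press).

    Steps at or beyond the horizon contribute zero, so shutting down
    at the horizon costs the agent nothing under this objective.
    """
    horizon = stop_time
    if button_pressed_at is not None:
        horizon = min(horizon, button_pressed_at)
    return sum(r for step, r in enumerate(rewards) if step < horizon)

rewards = [1, 1, 1, 1, 1]
# Open run: all five steps count.
assert bounded_utility(rewards, stop_time=5) == 5
# Button pressed at step 2: later rewards are worth nothing.
assert bounded_utility(rewards, stop_time=5, button_pressed_at=2) == 2
```

Whether the agent's successors and subagents inherit the same horizon is exactly the "minions run amok" problem raised earlier in the thread.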