Lots of strawmanning going on here (could somebody else please point these out? please?) but in case it's not obvious, the problem is that what you call "heuristic safety" is difficult. Now, most people haven't the tiniest idea of what makes anything difficult to do in AI and are living in a verbal-English fantasy world, so of course you're going to get lots of people who think they have brilliant heuristic safety ideas. I have never seen one that would work, and I have seen lots of people come up with ideas that sound to them like they might have a 40% chance of working and which I know perfectly well to have a 0% chance of working.
The real gist of Friendly AI isn't some imaginary 100% perfect safety concept, it's ideas like, "Okay, we need to not have a conditionally independent chance of goal system warping on each self-modification because over the course of a billion modifications any conditionally independent probability will sum to ~1, but since self-modification is initially carried out in the highly deterministic environment of a computer chip it looks possible to use crisp approaches that avert a conditionally independent failure probability for each self...
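To put rough numbers on that point (an illustration, not part of the original comment): if each of N self-modifications independently warps the goal system with probability p, then

```latex
% Illustration: cumulative failure under a small, conditionally independent
% per-step failure probability p over N self-modifications.
\[
  P(\text{goal system intact after } N \text{ steps}) = (1 - p)^N \approx e^{-pN}
\]
\[
  \text{e.g. } p = 10^{-6},\ N = 10^{9}: \quad (1 - p)^N \approx e^{-1000} \approx 0
\]
```

Even a tiny per-step probability accumulates to near-certain failure over a billion steps, which is why the argument calls for crisp approaches whose per-step failures are not independent coin flips.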
Full disclosure: I'm a professional cryptography research assistant. I'm not really interested in AI (yet), but there are obvious similarities when it comes to security.
I have to back Eliezer up on the "Lots of strawmanning" part. No professional cryptographer will ever tell you there's hope in trying to achieve a "perfect level of safety" for anything, and cryptography, unlike AI, is a very well-formalized field. As an example, I'll offer a conversation with a student:
"How secure is this system?" (Such a question is usually shorthand for: "What's the probability this system won't be broken by methods X, Y, and Z?")
"The theorem says ..."
"What's the probability that the proof of the theorem is correct?"
"... probably not"
Now, before you go "yeah, right", I'll also say that I've already seen this once: there was a theorem in a major peer-reviewed journal that turned out to be wrong (a counter-example was found) after one of the students tried to implement it as part of his thesis, so the probability was indeed not even close to what the theorem claimed for any serious N. I'd like to point out that this doesn't even include problems with the implementation of the theory.
It's reall...
Yup. Usual reference: "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes". (I also have an essay on a similar topic.)
Excellent visual memory, great Google & search skills, a thorough archive system, thousands of excerpts stored in Evernote, and essays compiling everything relevant I know of on a topic - that's how.
(If I'd been born decades ago, I'd probably have become a research librarian.)
The usual disjunctive strategy: many levels of security, so an error in one is not a failure of the overall system.
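As a back-of-the-envelope version of that claim (illustrative numbers, and assuming the layers fail independently, which real security layers often don't):

```latex
% With k independent layers, each failing with probability p_i, the system
% as a whole fails only if every layer fails.
\[
  P(\text{total failure}) = \prod_{i=1}^{k} p_i,
  \qquad\text{e.g. } k = 3,\ p_i = 0.1:\quad 0.1^3 = 10^{-3}
\]
```

versus 0.1 for a single layer; correlated failure modes weaken this bound.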
the wrong actions can trigger some invariant and signal that something went wrong with the decision theory or utility function
That's not 'boxing'. Boxing is a human pitting their wits against a potentially hostile transhuman over a text channel, and it is stupid. What you're describing is some case where we think that even after 'proving' some set of invariants, we can still describe a high-level behavior X such that detecting X either indicates global failure with high enough probability that we would want to shut down the AI after detecting any of many possible things in the reference class of X, or alternatively, we think that X has some probability of flagging failure and that we afterward stand a chance of doing a trace-back to determine more precisely whether something is wrong. Having X stay in place as code after the AI self-modifies will require solving a hard open problem in FAI: having a nontrivially structured utility function such that X looks instrumentally like a good thing (your utility function must yield, 'under circumstances X it is better that I be suspended and examined than that I continue to do whatever I would otherwise calculate as the instrumentally right ...
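A minimal sketch of the monitoring half of what's being described (all names hypothetical, nothing here is SI's design): detection of anything in the reference class of X suspends the system for human trace-back instead of engaging it interactively.

```python
# Hedged sketch, all names hypothetical: a monitored high-level condition X
# whose detection suspends the system for offline inspection (trace-back),
# rather than a human arguing with it over a text channel ("boxing").

class TripwireTriggered(Exception):
    """Raised when a monitored condition in the reference class of X fires."""


def run_with_tripwires(step, tripwires, max_steps=10**6):
    """Advance the system with step() until a tripwire predicate fires.

    step      -- callable that advances the system one step and returns
                 whatever observable state the monitors inspect
    tripwires -- list of (name, predicate) pairs over that observed state
    """
    for i in range(max_steps):
        state = step()
        for name, fired in tripwires:
            if fired(state):
                # Treat any detection as possible global failure: suspend and
                # hand off to humans for trace-back rather than trying to
                # recover automatically.
                raise TripwireTriggered(f"step {i}: condition {name!r} detected")
```

This only shows the easy, outer-loop half; the hard open problem the comment points to is making such a check survive self-modification, i.e. having a utility function that itself endorses being suspended and examined under X.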
The point is you never achieve 100% safety no matter what, so the correct way to approach it is to reduce risk the most given whatever resources you have. This is exactly what Eliezer says SI is doing:
I have an analysis of the problem which says that if I want something to have a failure probability less than 1, I have to do certain things because I haven't yet thought of any way not to have to do them.
IOW, they thought about it and concluded there's no other way. Is their approach the best possible one? I don't know, probably not. But it's a lot better than "let's just build something and hope for the best".
Edit: Is that analysis public? I'd be interested in that, probably many people would.
I don't know how to take a self-modifying heuristic soup in the process of going FOOM and make it Friendly. You don't know either, but the problem is, you don't know that you don't know. Or to be more precise, you don't share my epistemic reasons to expect that to be really difficult.
But the article didn't claim any different: it explicitly granted that if we presume a FOOM, then yes, trying to do anything with heuristic soups seems useless and just something that will end up killing us all. The disagreement is not on whether it's possible to make a heuristic AGI that FOOMs while remaining Friendly; the disagreement is on whether there will inevitably be a FOOM soon after the creation of the first AGI, and whether there could be a soft takeoff during which some people prevented those powerful-but-not-yet-superintelligent heuristic soups from killing everyone while others put the finishing touches on the AGI that could actually be trusted to remain Friendly when it actually did FOOM.
Why maintain any secrecy for SI's research? Don't we want others to collaborate on and use safety mechanisms? Of course, a safe AGI must be safe from the ground up. But as to implementation, why should we expect that SI's AGI design could possibly have a lead on the others?
The question of whether to keep research secret must be made on a case-by-case basis. In fact, next week I have a meeting (with Eliezer and a few others) about whether to publish a particular piece of research progress.
Certainly, there are many questions that can be discussed in public because they are low-risk (in an information hazard sense), and we plan to discuss those in public — e.g. Eliezer is right now working on the posts in his Open Problems in Friendly AI sequence.
Why should we expect that SI's AGI design will have a lead on others? We shouldn't. It probably won't. We can try, though. And we can also try to influence the top AGI people (10-40 years from now) to think with us about FAI and safety mechanisms and so on. We do some of that now, though the people in AGI today probably aren't the people who will end up building the first AGIs. (Eliezer's opinion may differ.)
...Given that proofs can be wrong...
Pursuing a provably-friendly AGI, even if very unlikely to succeed, could still be the right thing to do if it was certain that we’ll have a hard takeoff very soon after the creation of the first AGIs.
One consideration you're missing (and that I expect to be true; Eliezer also points it out) is that even if there is a very slow takeoff, the creation of slow-thinking, poorly understood unFriendly AGIs is not any help in developing a FAI (they can't be "debugged" when you don't have an accurate understanding of what it is you are aiming for, and they can't be "asked" to solve a problem which you can't accurately state). In this hypothetical, in the long run the unFriendly AGIs (or WBEs whose values have drifted away from original human values) will have control. So in this case it's also necessary (if a little bit less urgent, which isn't really enough to change the priority of the problem) to work on FAI theory, so hard takeoff is not decisively important in this respect.
(Btw, is this point in any of the papers? Do people agree it should be?)
As for my own work for SI, I've been trying to avoid the assumption of there necessarily being a hard takeoff right away, and to somewhat push towards a direction that also considers the possibility of a safe singularity through an initial soft takeoff and more heuristic AGIs. (I do think that there will be a hard takeoff eventually, but an extended softer takeoff before it doesn't seem impossible.) E.g. this is from the most recent draft of the Responses to Catastrophic AGI Risk paper:
...As a brief summary of our views, in the medium term, we think that the proposals of AGI confinement (section 4.1.), Oracle AI (section 5.1.), and motivational weaknesses (section 5.6.) would have promise in helping create safer AGIs. These proposals share in common the fact that although they could help a cautious team of researchers create an AGI, they are not solutions to the problem of AGI risk, as they do not prevent others from creating unsafe AGIs, nor are they sufficient in guaranteeing the safety of sufficiently intelligent AGIs. Regulation (section 3.3.) as well as "merge with machines" (section 3.4.) proposals could also help to somewhat reduce AGI risk. In the long run, we will ...
Hmm, the OP isn't arguing for it, but I'm starting to wonder if it might (upon further study) actually be a good idea to build a heuristics-based FAI. Here are some possible answers to common objections/problems of the approach:
Part of the problem here is an Angels on Pinheads problem. Which is to say: before deciding exactly how many angels can dance on the head of a pin, you have to make sure the "angel" concept is meaningful enough that questions about angels are meaningful. In the present case, you have a situation where (a) the concept of "friendliness" might not be formalizable enough to make any mathematical proofs about it meaningful, and (b) there is no known path to the construction of an AGI at the moment, so speculating about the properties of A...
Yes, but only after defining its terms well enough to make the endeavor meaningful.
That is indeed part of what SI is trying to do at the moment.
I think we're going to get WBEs before AGI.
If we view this as a form of heuristic AI, it follows from your argument that we should look for ways to ensure the friendliness of WBEs. (Ignoring the ethical issues here.)
Now, maybe this is because most real approaches would consider ethical issues, but it seems like figuring out how to modify a human brain so that it doesn't act against your interests even if it is powerful, and without hampering its intellect, is a big 'intractable' problem.
I suspect no one is working on it and no one is going to, even though we...
A team which is ready to adopt a variety of imperfect heuristic techniques will have a decisive lead on approaches based on pure theory [...] even if the Friendliness theory provides the basis for intelligence, the nitty-gritty of SI’s implementation will still be far away, and will involve real-world heuristics and other compromises.
Citation very much needed. Neither of the two approaches has come anywhere near to self-improving AI.
SI should evangelize AGI safety to other researchers
I think they're already aware of this.
do so before anyone else builds an AGI.
...the odds of which can also be improved by slowing down other groups, as has been pointed out before. Not that one would expect any such effort to be public.
Even the provably friendly design will face real-world compromises and errors in its implementation, so the implementation will not itself be provably friendly.
Err… Coq? The impossibility of proving computer programs correct is a common trope, but also a false one. It's just very hard and very expensive to do for any sufficiently large program. Hopefully, a real-world implementation of the bootstrap code for whatever math is needed for the AI will be optimized for simplicity, and therefore will stand a chance of being formally proven.
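As a toy illustration of that point (mine, in Lean rather than Coq, and nothing like the scale of verifying real bootstrap code), here is what a machine-checked proof about a small program looks like:

```lean
-- Toy illustration: a machine-checked proof of a property of a tiny program.
-- Real verified systems (e.g. CompCert in Coq) do this at vastly larger
-- scale and cost.
def double (n : Nat) : Nat := n + n

theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

The cost of this kind of verification is in scale, not in principle, which is why keeping the trusted bootstrap core small and simple is what makes formally proving it plausible.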
A genie which can't grant wishes, only tell you how to grant the wish yourself, is considerably safer than a genie which can grant wishes, and particularly safer than a genie that can grant wishes nobody has made.
I think there's a qualitative difference between the kind of AI most people are interested in making and the kind Eliezer is interested in making. Eliezer is interested in creating an omnipotent, omniscient god; omnibenevolence becomes a necessary safety rule. Absent omnibenevolence, a merely omniscient god is safer. (Although as Eliezer's Let-me-o...
I wrote about it before. My idea was that until the mathematical FAI is finished, we should suggest another type of friendliness which consists of simple rules which could be mutually independently implemented by any project.
(With Kaj Sotala)
SI's current R&D plan seems to go as follows:
1. Develop the perfect theory.
2. Implement this as a safe, working, Artificial General Intelligence -- and do so before anyone else builds an AGI.
The Singularity Institute is almost the only group working on Friendliness theory (although with very few researchers). So they have the lead on Friendliness. But there is no reason to think that they will be ahead of anyone else on the implementation.
The few AGI designs we can look at today, like OpenCog, are big, messy systems which intentionally attempt to exploit various cognitive dynamics that might combine in unexpected ways, and which have various human-like drives rather than the sort of supergoal-driven, utility-maximizing goal hierarchies that Eliezer talks about, or which a mathematical abstraction like AIXI employs.
A team which is ready to adopt a variety of imperfect heuristic techniques will have a decisive lead on approaches based on pure theory. Without the constraint of safety, one of them will beat SI in the race to AGI. SI cannot ignore this. Real-world, imperfect, safety measures for real-world, imperfect AGIs are needed. These may involve mechanisms for ensuring that we can avoid undesirable dynamics in heuristic systems, or AI-boxing toolkits usable in the pre-explosion stage, or something else entirely.
SI’s hoped-for theory will include a reflexively consistent decision theory, something like a greatly refined Timeless Decision Theory. It will also describe human value as formally as possible, or at least describe a way to pin it down precisely, something like an improved Coherent Extrapolated Volition.
The hoped-for theory is intended to provide not only safety features, but also a description of the implementation, as some sort of ideal Bayesian mechanism, a theoretically perfect intelligence.
SIers have said to me that SI's design will have a decisive implementation advantage. The idea is that because strap-on safety can't work, Friendliness research necessarily involves more fundamental architectural design decisions, which also happen to be general AGI design decisions that some other AGI builder could grab and save themselves a lot of effort. The assumption seems to be that all other designs are based on hopelessly misguided design principles. SIers, the idea seems to go, are so smart that they'll build AGI long before anyone else. Others will succeed only when hardware capabilities allow crude near-brute-force methods to work.
Yet even if the Friendliness theory provides the basis for intelligence, the nitty-gritty of SI’s implementation will still be far away, and will involve real-world heuristics and other compromises.
We can compare SI’s future AI design to AIXI, another mathematically perfect AI formalism (though it has some critical reflexivity issues). Schmidhuber, Hutter, and colleagues think that their AXI can be scaled down into a feasible implementation, and have implemented some toy systems. Similarly, any actual AGI based on SI's future theories will have to stray far from its mathematically perfected origins.
Moreover, SI's future Friendliness proof may simply be wrong. Eliezer writes a lot about logical uncertainty, the idea that you must treat even purely mathematical ideas with the same probabilistic techniques as any ordinary uncertain belief. He pursues this mostly so that his AI can reason about itself, but the same principle applies to Friendliness proofs as well.
Perhaps Eliezer thinks that a heuristic AGI is absolutely doomed to failure; that a hard takeoff immediately after the creation of the first AGI is so overwhelmingly likely that a mathematically designed AGI is the only one that could stay Friendly. In that case, we have to work on a pure-theory approach, even if it has a low chance of being finished first. Otherwise we'll be dead anyway. If an embryonic AGI will necessarily undergo an intelligence explosion, we have no choice but to "shut up and do the impossible."
I am all in favor of gung-ho, knife-between-the-teeth projects. But when you think that your strategy is impossible, then you should also look for a strategy which is possible, if only as a fallback. Thinking about safety theory until drops of blood appear on your forehead (as Eliezer puts it, quoting Gene Fowler) is all well and good. But if there is only a 10% chance of achieving 100% safety (not that there really is any such thing), then I'd rather go for a strategy that provides only a 40% promise of safety, but with a 40% chance of achieving it. OpenCog and the like are going to be developed regardless, and probably before SI's own provably friendly AGI. So even an imperfect safety measure is better than nothing.
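Spelling out the arithmetic behind that comparison (a simplification: treat "promised safety" times "chance of delivering it" as a rough expected value):

```latex
% Rough expected-safety comparison implied by the paragraph above.
\[
  \text{pure theory: } 0.10 \times 1.00 = 0.10
  \qquad\text{vs.}\qquad
  \text{imperfect measures: } 0.40 \times 0.40 = 0.16
\]
```

On the paragraph's own numbers, the imperfect strategy comes out ahead.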
If heuristic approaches have a 99% chance of an immediate unfriendly explosion, then that might be wrong. But SI, better than anyone, should know that any intuition-based probability estimate of “99%” really means “70%”. Even if other approaches are long-shots, we should not put all our eggs in one basket. Theoretical perfection and stopgap safety measures can be developed in parallel.
Given what we know about human overconfidence and the general unreliability of predictions, the actual outcome will to a large extent be something that none of us ever expected or could have predicted. Progress on safety mechanisms for heuristic AGI will improve our chances even when something entirely unexpected happens.
What impossible thing should SI be shutting up and doing? For Eliezer, it’s Friendliness theory. To him, safety for heuristic AGI is impossible, and we shouldn't direct our efforts in that direction. But why shouldn't safety for heuristic AGI be another impossible thing to do?
(Two impossible things before breakfast … and maybe a few more? Eliezer seems to be rebuilding logic, set theory, ontology, epistemology, axiology, decision theory, and more, mostly from scratch. That's a lot of impossibles.)
And even if safety for heuristic AGIs really is impossible for us to figure out now, there is some chance of an extended soft takeoff that would allow us to develop heuristic AGIs which could help in figuring out AGI safety, whether because we can use them for our tests, or because they can apply their embryonic general intelligence to the problem. Goertzel and Pitt have urged this approach.
Yet resources are limited. Perhaps the folks who are actually building their own heuristic AGIs are in a better position than SI to develop safety mechanisms for them, while SI is the only organization really working on a formal theory of Friendliness, and so should concentrate on that. It could be better to focus SI's resources on areas in which it has a relative advantage, or which have a greater expected impact.
Even if so, SI should evangelize AGI safety to other researchers, not only as a general principle, but also by offering theoretical insights that may help them as they work on their own safety mechanisms.
In summary:
1. AGI development which is unconstrained by a friendliness requirement is likely to beat a provably-friendly design in a race to implementation, and some effort should be expended on dealing with this scenario.
2. Pursuing a provably-friendly AGI, even if very unlikely to succeed, could still be the right thing to do if it was certain that we’ll have a hard takeoff very soon after the creation of the first AGIs. However, we do not know whether or not this is true.
3. Even the provably friendly design will face real-world compromises and errors in its implementation, so the implementation will not itself be provably friendly. Thus, safety protections of the sort needed for heuristic design are needed even for a theoretically Friendly design.