My summary (now with endorsement by Eliezer!):
This point seems missing:
You can't get a 20-move solution out of a human brain, using the native human planning algorithm. Humanity can do it, but only by exploiting the ability of humans to explicitly comprehend the deep structure of the domain (not just rely on intuition) and then inventing an artifact, a new design, running code which uses a different and superior cognitive algorithm, to solve that Rubik's Cube in 20 moves. We do all that without being self-modifying, but it's still a capability to respect.
A system that undertakes extended processes of research and thinking, generating new ideas and writing new programs for internal experiments, seems both much more effective and much more potentially risky than something like a chess program with a simple fixed algorithm to search using a fixed narrow representation of the world (as a chess board).
The difficulty of Friendliness is finite. The difficulties are big and subtle, but not unending.
How do we know that the problem is finite? When it comes to proving a computer program safe from being hacked, the problem is considered NP-hard. Google Chrome was recently hacked by chaining 14 different bugs together. A working AGI is probably at least as complex as Google Chrome. Proving it safe will likely also be NP-hard.
Google Chrome doesn't even self modify.
When I read posts like this I feel like an independent everyman watching a political debate.
The dialogue is oversimplified and even then I don't fully grasp exactly what's being said and the implications thereof, so I can almost feel my opinion shifting back and forth with each point that sounds sort of, kinda, sensible when I don't really have the capacity to judge the statements. I should probably try and fix that.
The core points don't strike me as being inherently difficult or technical
That's precisely the problem, given that Eliezer is arguing that a technical appreciation of difficult problems is necessary to judge correctly on this issue. My understanding, like pleeppleep's, is limited to the simplified level given here, which means I'm reduced to giving weight to presentation and style and things being "kinda sensible".
Hello,
I appreciate the thoughtful response. I plan to respond at greater length in the future, both to this post and to some other content posted by SI representatives and commenters. For now, I wanted to take a shot at clarifying the discussion of "tool-AI" by discussing AIXI. One of the issues I've found with the debate over FAI in general is that I haven't seen much in the way of formal precision about the challenge of Friendliness (I recognize that I have also provided little formal precision, though I feel the burden of formalization is on SI here). It occurred to me that AIXI might provide a good opportunity to have a more precise discussion, if in fact it is believed to represent a case of "a rare exception who specified his AGI in such unambiguous mathematical terms that he actually succeeded at realizing, after some discussion with SIAI personnel, that AIXI would kill off its users and seize control of its reward button."
So here's my characterization of how one might work toward a safe and useful version of AIXI, using the "tool-AI" framework, if one could in fact develop an efficient enough approximation of AIXI to qualify as a powerful ...
Didn't see this at the time, sorry.
So... I'm sorry if this reply seems a little unhelpful, and I wish there was some way to engage more strongly, but...
Point (1) is the main problem. AIXI updates freely over a gigantic range of sensory predictors with no specified ontology - it's a sum over a huge set of programs, and we, the users, have no idea what the representations are talking about, except that at the end of their computations they predict, "You will see a sensory 1 (or a sensory 0)." (In my preferred formalism, the program puts a probability on a 0 instead.) Inside, the program could've been modeling the universe in terms of atoms, quarks, quantum fields, cellular automata, giant moving paperclips, slave agents scurrying around... we, the programmers, have no idea how AIXI is modeling the world and producing its predictions, and indeed, the final prediction could be a sum over many different representations.
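As a toy illustration of that opacity (a sketch only, not AIXI itself, which is uncomputable): a Bayesian mixture over a few invented "predictor programs", where the user sees only the final probability assigned to the next sensory bit and never the internal representations.

```python
# Toy mixture-of-predictors sketch. Real AIXI sums over all computable
# environment programs weighted by 2^-(program length); the two predictors
# below are invented stand-ins with completely different internal "ontologies".

def mixture_prob_one(history, predictors, weights):
    """P(next bit = 1) under the weighted mixture of predictors."""
    total = sum(weights)
    return sum(w * p(history) for p, w in zip(predictors, weights)) / total

def mixture_update(history, observed_bit, predictors, weights):
    """Bayes-reweight each predictor by how well it predicted the observed bit."""
    return [w * (p(history) if observed_bit == 1 else 1.0 - p(history))
            for p, w in zip(predictors, weights)]

always_half = lambda history: 0.5                                      # fair-coin model
sticky = lambda history: 0.9 if history and history[-1] == 1 else 0.1  # repeat-last model

predictors, weights, history = [always_half, sticky], [1.0, 1.0], []
for bit in [1, 1, 0, 1, 1]:
    print(round(mixture_prob_one(history, predictors, weights), 3))
    weights = mixture_update(history, bit, predictors, weights)
    history.append(bit)
```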
This means that equation (20) in Hutter is written as a utility function over sense data, where the reward channel is just a special case of sense data. We can easily adapt this equation to talk about any function computed directly over sense data - we can g...
Thanks for the response. To clarify, I'm not trying to point to the AIXI framework as a promising path; I'm trying to take advantage of the unusually high degree of formalization here in order to gain clarity on the feasibility and potential danger points of the "tool AI" approach.
It sounds to me like your two major issues with the framework I presented are (to summarize):
(1) There is a sense in which AIXI predictions must be reducible to predictions about the limited set of inputs it can "observe directly" (what you call its "sense data").
(2) Computers model the world in ways that can be unrecognizable to humans; it may be difficult to create interfaces that allow humans to understand the implicit assumptions and predictions in their models.
I don't claim that these problems are trivial to deal with. And stated as you state them, they sound abstractly very difficult to deal with. However, it seems true - and worth noting - that "normal" software development has repeatedly dealt with them successfully. For example: Google Maps works with a limited set of inputs; Google Maps does not "think" like I do and I would not be able to look ...
So first a quick note: I wasn't trying to say that the difficulties of AIXI are universal and everything goes analogously to AIXI, I was just stating why AIXI couldn't represent the suggestion you were trying to make. The general lesson to be learned is not that everything else works like AIXI, but that you need to look a lot harder at an equation before thinking that it does what you want.
On a procedural level, I worry a bit that the discussion is trying to proceed by analogy to Google Maps. Let it first be noted that Google Maps simply is not playing in the same league as, say, the human brain, in terms of complexity; and that if we were to look at the winning "algorithm" of the million-dollar Netflix Prize competition, which was in fact a blend of 107 different algorithms, you would have a considerably harder time figuring out why it claimed anything it claimed.
But to return to the meta-point, I worry about conversations that go into "But X is like Y, which does Z, so X should do reinterpreted-Z". Usually, in my experience, that goes into what I call "reference class tennis" or "I'm taking my reference class and going home". The troub...
Thanks for the response. My thoughts at this point are that
And atheism is a religion, and bald is a hair color.
The three distinguishing characteristics of "reference class tennis" are (1) that there are many possible reference classes you could pick and everyone engaging in the tennis game has their own favorite which is different from everyone else's; (2) that the actual thing is obviously more dissimilar to all the cited previous elements of the so-called reference class than all those elements are similar to each other (if they even form a natural category at all rather than having been picked out retrospectively based on similarity of outcome to the preferred conclusion); and (3) that the citer of the reference class says it with a cognitive-traffic-signal quality which attempts to shut down any attempt to counterargue the analogy because "it always happens like that" or because we have so many alleged "examples" of the "same outcome" occurring (for Hansonian rationalists this is accompanied by a claim that what you are doing is the "outside view" (see points 2 and 1 for why it's not) and that it would be bad rationality to think about the "individual details").
I have also termed this Argument by Greek Analogy after Socrates's attempt to argue that, since the Sun appears the next day after setting, souls must be immortal.
I have also termed this Argument by Greek Analogy after Socrates's attempt to argue that, since the Sun appears the next day after setting, souls must be immortal.
For the curious, this is from the Phaedo, pages 70-72. The run of the argument is basically this:
P1 Natural changes are changes from and to opposites, like hot from relatively cold, etc.
P2 Since every change is between opposites A and B, there are two logically possible processes of change, namely A to B and B to A.
P3 If only one of the two processes were physically possible, then we should expect to see only one of the two opposites in nature, since the other will have passed away irretrievably.
P4 Life and death are opposites.
P5 We have experience of the process of death.
P6 We have experience of things which are alive.
C From P3, 4, 5, and 6 there is a physically possible, and actual, process of going from death to life.
The argument doesn't itself prove (haha) the immortality of the soul, only that living things come from dead things. The argument is made in support of the claim, made prior to this argument, that if living people come from dead people, then dead people must exist somewhere. The argument is particularly interesting for premises 1 and 2, which are hard to deny, and 3, which seems fallacious but for non-obvious reasons.
This sounds like it might be a bit of a reverent-Western-scholar steelman such as might be taught in modern philosophy classes; Plato's original argument for the immortality of the soul sounded more like this, which is why I use it as an early exemplar of reference class tennis:
-
Then let us consider the whole question, not in relation to man only, but in relation to animals generally, and to plants, and to everything of which there is generation, and the proof will be easier. Are not all things which have opposites generated out of their opposites? I mean such things as good and evil, just and unjust—and there are innumerable other opposites which are generated out of opposites. And I want to show that in all opposites there is of necessity a similar alternation; I mean to say, for example, that anything which becomes greater must become greater after being less.
True.
And that which becomes less must have been once greater and then have become less.
Yes.
And the weaker is generated from the stronger, and the swifter from the slower.
Very true.
And the worse is from the better, and the more just is from the more unjust.
Of course.
And is this true of all opposites? and are we convinced tha...
To clarify, for everyone:
There are now three "major" responses from SI to Holden's Thoughts on the Singularity Institute (SI): (1) a comments thread on recent improvements to SI as an organization, (2) a post series on how SI is turning donor dollars into AI risk reduction and how it could do more of this if it had more funding, and (3) Eliezer's post on Tool AI above.
At least two more major responses from SI are forthcoming: a detailed reply to Holden's earlier posts and comments on expected value estimates (e.g. this one), and a long reply from me that summarizes my responses to all (or almost all) of the many issues raised in Thoughts on the Singularity Institute (SI).
Software that does happen to interface with humans is selectively visible and salient to humans, especially the tiny part of the software that does the interfacing; but this is a special case of a general cost/benefit tradeoff which, more often than not, turns out to swing the other way, because human advice is either too costly or doesn't provide enough benefit.
I suspect this is the biggest counter-argument against Tool AI, even bigger than all the technical concerns Eliezer raised in the post. Even if we could build a safe Tool AI, somebody would soon build an agent AI anyway.
My five cents on the subject, from something that I'm currently writing:
...Like with external constraints, Oracle AI suffers from the problem that there would always be an incentive to create an AGI that could act on its own, without humans in the loop. Such an AGI would be far more effective in furthering whatever goals it had been built to pursue, but also far more dangerous.
Current-day narrow-AI technology includes high-frequency trading (HFT) algorithms, which make trading decisions within fractions of a second, far too fast to keep humans in the loop. HFT seeks to make a very short-term profit, but even tr
Marcus Hutter is a rare exception who specified his AGI in such unambiguous mathematical terms that he actually succeeded at realizing, after some discussion with SIAI personnel, that AIXI would kill off its users and seize control of its reward button.
Marcus Hutter denies ever having said that.
I asked EY how to proceed; with his approval, these are the messages we exchanged:
...Eliezer,
I am unsure how to proceed and would appreciate your thoughts on resolving this situation:
In your Reply to Holden on 'Tool AI', to me one of the central points, and the one that much credibility hinges on is this:
[Initial quote of this comment]
That and some other "quotes" and allusions to Hutter, the most recent one by Carl Shulman [I referred to this: "The informal argument that AIXI would accept a delusion box to give itself maximal sensory reward was made by Eliezer a while ago, and convinced the AIXI originators." which I may have mistakenly attributed to M.H. since he is the AIXI originator], that are attributed to M.H. seemed to be greatly at odds with my experiences with the man, so I asked Carl Shulman for his sources; he had this to say:
"I recall overhearing
I think it's a pity that we're not focusing on what we could do to test the tool vs general AI distinction. For example, here's one near-future test: how do we humans deal with drones?
Drones are exploding in popularity, are increasing their capabilities constantly, and are coveted by countless security agencies and private groups for their tremendous use in all sorts of roles both benign and disturbing. Just like AIs would be. The tool vs general AI distinction maps very nicely onto drones as well: a tool AI corresponds to a drone being manually flown by a human pilot somewhere, while a general AI would correspond to an autonomous drone which is carrying out some mission (blast insurgents?).
So, here is a near-future test of the question 'are people likely to let tool AIs 'drive themselves' for greater efficiency?' - simply ask whether in, say, a decade there are autonomous drones carrying tasks that now would only be carried out by piloted drones.
If in a decade we learn that autonomous drones are killing people, then we have an answer to our tool AI question: it doesn't matter because given a tool AI, people will just turn it into a general AI.
(Amdahl's law: if the human in the loo...
I begin by thanking Holden Karnofsky of Givewell for his rare gift of his detailed, engaged, and helpfully-meant critical article Thoughts on the Singularity Institute (SI). In this reply I will engage with only one of the many subjects raised therein, the topic of, as I would term them, non-self-modifying planning Oracles, a.k.a. 'Google Maps AGI' a.k.a. 'tool AI', this being the topic that requires me personally to answer. I hope that my reply will be accepted as addressing the most important central points, though I did not have time to explore every avenue. I certainly do not wish to be logically rude, and if I have failed, please remember with compassion that it's not always obvious to one person what another person will think was the central point.
Luke Muehlhauser and Carl Shulman contributed to this article, but the final edit was my own, likewise any flaws.
I think you're starting to write more like a Friendly AI. This is totally a good thing.
There are two ways to read Holden's claim about what happens if 100 experts check the proposed FAI safety proof. On one reading, Holden is saying that if 100 experts check it and say, "Yes, I am highly confident that this is in fact safe," then activating the AI kills us all with 90% probability. On the other reading, Holden is saying that even if 100 experts do their best to find errors and say, "No, I couldn't identify any way in which this will kill us, though that doesn't mean it won't kill us," then activating the AI kills us all with 90% probability. I think the first reading is very implausible. I don't believe the second reading, but I don't think it's obviously wrong. I think the second reading is the more charitable and relevant one.
Nope, I was assuming the second reading. The first reading is too implausible to be considered at all.
And if the preference function was just over the human's 'goodness' of the end result, rather than the accuracy of the human's understanding of the predictions, the AI might tell you something that was predictively false but whose implementation would lead you to what the AI defines as a 'good' outcome. And if we ask how happy the human is, the resulting decision procedure would exert optimization pressure to convince the human to take drugs, and so on.
I was under the impression that Holden's suggestion was more along the lines of: Make a model of the world. Remove the user from the model and replace it with a similar user that will always do what you recommend. Then manipulate this user so that it achieves its objective in the model, and report the actions that you have the user do in the model to the real user.
Thus, if the objective was to make the user happy, the Google Maps AGI would simply instruct the user to take drugs, rather than tricking him into doing so, because such instruction is the easiest way to manipulate the user in the model that the Google Maps AGI is optimizing in.
I don't know why you keep harping on this. Just because an algorithm logically can produce a certain output, and probably will produce that output, doesn't mean good intentions and vigorous handwaving are any less capable of magic.
This is why when I fire a gun, I just point it in the general direction of my target, and assume the universe will know what I meant to hit.
Delete the word "hardwiring" from your vocabulary. You can't do it with wires, and saying it doesn't accomplish any magic.
I think there is an interpretation of "hardwiring" that makes sense when talking about AI. For example, say you have a chess program. You can make a patch for it that says "if my light squared bishop is threatened, getting it out of danger is highest priority, second only to getting the king out of check". Moreover, even for very complex chess programs, I would expect that patch to be pretty simple, compared to the whole program.
Maybe a general AI will necessarily have an architecture that makes such patches impossible or ineffective. Then again, maybe not. You could argue that an AI would work around any limitations imposed by patches, but I don't see why a computer program with an ugly patch would magically acquire a desire to behave as if it didn't have the patch, and converge to maximizing expected utility or something. In any case I'd like to see a more precise version of that argument.
ETA: I share your concern about the use of "hardwiring" to sweep complexity under the rug. But saying that AIs can do one magical thing (understand human desires) but not another magical thing (whatever is supposed to be "hardwired") seems a little weird to me.
Yeah, well, hardwiring the AI to understand human desires wouldn't be goddamned trivial either, I just decided not to go down that particular road, mostly because I'd said it before and Holden had apparently read at least some of it.
Getting the light-square bishop out of danger as highest priority...
1) Do I assume the opponent assigns symmetric value to attacking the light-square bishop?
2) Or that the opponent actually values checkmates only, but knows that I value the light-square bishop myself and plan forks and skewers accordingly?
3) Or that the opponent has no idea why I'm doing what I'm doing?
4) Or that the opponent will figure it out eventually, but maybe not in the first game?
5) What about the complicated static-position evaluator? Do I have to retrain all of it, and possibly design new custom heuristics, now that the value of a position isn't "leads to checkmate" but rather "leads to checkmate + 25% leads to bishop being captured"?
Adding this to Deep Blue is not remotely as trivial as it sounds in English. Even to add it in a half-assed way, you have to at least answer question 1, because the entire non-brute-force search-tree pruning mechanism depends on guessing which branches the opponent will prune. Look up alpha-beta search to start seeing why everything becomes more interesting when position-values are no longer being determined symmetrically.
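For reference, here is a minimal negamax-style alpha-beta sketch; the game-interface callbacks are hypothetical stand-ins, not any real engine's API, and the pruning step is precisely where the symmetric-valuation assumption lives.

```python
# Minimal alpha-beta in negamax form. Both the recursion and the pruning lean
# on the zero-sum assumption that the opponent values each position as exactly
# the negative of our evaluation; an asymmetric patch like "protect the
# light-square bishop" breaks that assumption.

def alphabeta(position, depth, alpha, beta, legal_moves, apply_move, evaluate):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)   # static evaluation, from the side to move's view
    best = float("-inf")
    for move in moves:
        child = apply_move(position, move)
        # Zero-sum step: the opponent's best score is taken to be minus ours.
        score = -alphabeta(child, depth - 1, -beta, -alpha,
                           legal_moves, apply_move, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                   # prune: the opponent would never allow this line
    return best
```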
For what it's worth, the intended answers are 1) no 2) no 3) yes 4) no 5) the evaluation function and the opening book stay the same, there's just a bit of logic squished above them that kicks in only when the bishop is threatened, not on any move before that.
Yeah, game-theoretic considerations make the problem funny, but the intent wasn't to convert an almost-consistent utility maximizer into another almost-consistent utility maximizer with a different utility function that somehow values keeping the bishop safe. The intent was to add a hack that throws consistency to the wind, and observe that the AI doesn't rebel against the hack. After all, there's no law saying you must build only consistent AIs.
My guess is that's what most folks probably mean when they talk about "hardwiring" stuff into the AI. They don't mean changing the AI's utility function over the real world, they mean changing the AI's code so it's no longer best described as maximizing such a function. That might make the AI stupid in some respects and manipulable by humans, which may or may not be a bad thing :-) Of course your actual goals (whatever they are) would be better served by a genuine expected utility maximizer, but building that could be harder and more dangerous. Or at least that's how the reasoning is supposed to go, I think.
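For concreteness, the kind of "logic squished above" the engine being described might look like the following sketch; every engine-interface name in it (engine_best_move, in_check, bishop_is_threatened, moves_saving_bishop, the restrict_to option) is a hypothetical stand-in, not any real chess engine's API.

```python
# A rough sketch of a "hardwired" patch layered over an existing engine,
# kicking in only when the light-squared bishop is threatened.

def choose_move(position, engine_best_move, in_check,
                bishop_is_threatened, moves_saving_bishop):
    if in_check(position):
        return engine_best_move(position)   # getting out of check still comes first
    if bishop_is_threatened(position):
        saving_moves = moves_saving_bishop(position)
        if saving_moves:
            # Among the moves that rescue the bishop, let the engine pick the
            # one it otherwise rates best; consistency with the original
            # evaluation is simply thrown to the wind.
            return engine_best_move(position, restrict_to=saving_moves)
    return engine_best_move(position)       # otherwise, the unmodified engine
```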
Either your comment is in violent agreement with mine ("that might make the AI stupid in some respects and manipulable by humans"), or I don't understand what you're trying to say...
I'm saying that using the word "hardwiring" is always harmful because people imagine an instruction with lots of extra force, when in fact there's no such thing as a line of programming which you say much more forcefully than any other line. Either you know how to program something or you don't, and it's usually much more complex than it sounds even if you say "hardwire". See the reply above on "hardwiring" Deep Blue to protect the light-square bishop. Though usually it's even worse than this, like trying to do the equivalent of having an instruction that says "#define BUGS OFF" and then saying, "And just to make sure it works, let's hardwire it in!"
Probably nothing new, but I just wanted to note that when you couple two straightforward Google tools, Maps and a large enough fleet of self-driving cars, they are likely to unintentionally agentize by shaping the traffic.
For example, the goal of each is to optimize fuel economy/driving time, so the routes Google cars would take depend on the expected traffic volume, as predicted by Maps access, among other things. Similarly, Maps would know where these cars are or will be at a given time, and would adjust its output accordingly (possibly as a user option). An optimization strategy might easily arise that gives Google cars preference over other cars, in order to minimize, say, overall emission levels. This could easily be seen as unfriendly by a regular Maps user, but friendly by the municipality.
Similar scenarios would pop up in many cases where, in EE speak, a tool gains an intentional or a parasitic feedback, whether positive or negative. As anyone who has dealt with music amps knows, this feedback appears spontaneously and is often very difficult to track down. In a sense, a tool as simple as an amp can agentize and drown out the intended signal. As tool complexity grows, so do the odds of parasitic feedback. Coupling multiple "safe" tools together increases such odds exponentially.
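A toy illustration of such parasitic feedback, with invented numbers: a router that steers cars away from congestion that its own previous routing created. Below a certain "gain" the loop settles; above it, the coupled system swings between extremes like a squealing amp.

```python
# Toy routing-feedback simulation; all parameters are made up for illustration.

def simulate(gain=2.5, steps=8, share_a=0.6):
    for t in range(steps):
        congestion_a = share_a                     # congestion tracks our own routing
        correction = gain * (congestion_a - 0.5)   # steer away from the congested road
        share_a = min(1.0, max(0.0, share_a - correction))
        print(f"step {t}: share of cars routed via A = {share_a:.2f}")

simulate()   # try gain=0.5 to watch the same loop settle instead of oscillating
```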
This is the first time I can recall Eliezer giving an overt indication regarding how likely an AGI project is to doom us. He suggests that a 90% chance of Doom given intelligent effort is unrealistically high. Previously I had only seen him declare that FAI is worth attempting once you multiply. While he still hasn't given numbers (not saying he should) he has given a bound. Interesting. And perhaps a little more optimistic than I expected - or at least more optimistic than I would have expected prior to Luke's comment.
how likely an AGI project is to doom us
Isn't it more like "how likely a formally proven FAI design is to doom us", since this is what Holden seems to be arguing (see his quote below)?
Suppose that it is successful in the "AGI" part of its goal, i.e., it has successfully created an intelligence vastly superior to human intelligence and extraordinarily powerful from our perspective. Suppose that it has also done its best on the "Friendly" part of the goal: it has developed a formal argument for why its AGI's utility function will be Friendly, it believes this argument to be airtight, and it has had this argument checked over by 100 of the world's most intelligent and relevantly experienced people. .. What will be the outcome?
Marcus Hutter is a rare exception who specified his AGI in such unambiguous mathematical terms that he actually succeeded at realizing, after some discussion with SIAI personnel, that AIXI would kill off its users and seize control of its reward button.
A couple of people have enquired with Hutter and he has denied saying this. So it appears a citation is needed.
I'll try to get the results in writing the next time we have a discussion. Human memory is a fragile thing under the best of circumstances.
Commentary (there will be a lot of "to me"s because I have been a bystander to this exchange so far):
I think this post misunderstands Holden's point, because it looks like it's still talking about agents. Tool AI, to me, is a decision support system: I tell Google Maps where I'm starting from and where I want to go, and it generates a route using its algorithm. Similarly, I could tell Dr. Watson my medical data, and it will supply a diagnosis and a treatment plan that has a high score based on the utility function I provide.
In neither case are the skills of "looking at the equations and determining real-world consequences" that necessary. There are no dark secrets lurking in the soul of A*. Indeed, that might be the heart of the issue: tool AI might be those situations where you can make a network that represents the world, identify two nodes, and call your optimization algorithm of choice to determine the best actions to choose to attempt to make it from the start node to the end node.
Reducing the world to a network is really hard. Determining preferences between outcomes is hard. But Tool AI looks to me like saying "well, the whole world is really to...
Holden explicitly said that he was talking about AGI in his dialogue with Jaan Tallinn:
Jaan: so GMAGI would -- effectively -- still be a narrow AI that's designed to augment human capabilities in particularly strategic domains, while not being able to perform tasks such as programming. also, importantly, such GMAGI would not be able to make non-statistical (ie, individual) predictions about the behaviour of human beings, since it is unable to predict their actions in domains where it is inferior.
Holden: [...] I don't think of the GMAGI I'm describing as necessarily narrow - just as being such that assigning it to improve its own prediction algorithm is less productive than assigning it directly to figuring out the questions the programmer wants (like "how do I develop superweapons"). There are many ways this could be the case.
Jaan: [...] i stand corrected re the GMAGI definition -- from now on let's assume that it is a full blown AGI in the sense that it can perform every intellectual task better than the best of human teams, including programming itself.
I think you're arguing about Karnofsky's intention, but it seems clear (to me :) that he is proposing something much more general than a strategy of pursuing the best narrow AIs - see the "Here's how I picture the Google Maps AGI" code snippet Eliezer is working from.
In any case, taking your interpretation as your proposal, I don't think anyone is disagreeing with the value of building good narrow AIs where we can, the issue is that the world might be economically driven towards AGI, and someone needs to do the safety research, which is essentially the SI mission.
Eliezer argued that looking at modern software does not support Holden's claim that powerful tool AI is likely to come before dangerous agent AI. I'm not sure the examples he gave support his claim, especially if we broaden the "tool" concept in a way that seems consistent with Holden's arguments. I'm not too sure about this, but I would like to hear reactions.
Eliezer:
...At one point in his conversation with Tallinn, Holden argues that AI will inevitably be developed along planning-Oracle lines, because making suggestions to humans is the
Minor point from Nick Bostrom: an agent AI may be safer than a tool AI, because if something goes unexpectedly wrong, then an agent with safe goals should turn out to be better than a non-agent whose behaviour would be unpredictable.
Also, an agent with safer goals than humans have (which is a high bar, but not nearly as high a bar as some alternatives) is safer than humans with equivalently powerful tools.
Folks seem to habitually misrepresent the nature of modern software by focusing on a narrow slice of it. Google Maps is so much more than the pictures and text we touch and read on a screen.
Google Maps is the software. It is also the infrastructure running and delivering the software. It is the traffic sensors and cameras feeding it real-world input. Google Maps is also the continually shifting organization of brilliant human beings within Google focusing their own minds and each other's minds on refining the software to better meet users' needs and desig...
[Eli's personal notes for Eli's personal understanding. Feel free to ignore or engage.]
Eli's proposed AGI planning-oracle design:
The AGI has four parts:
Here's how it works:
1. A user makes a request of the system, by giving some goal that the user would like to achieve, like "cure cancer". This request is phrased in natural language,...
Writing nitpick:
It's sort of like thinking that a machine learning professional who did sales optimization for an orange company couldn't possibly do sales optimization for a banana company, because their skills must be about oranges rather than bananas.
This is a terrible analogy. It assumes what you're trying to prove, oversimplifies a complex issue, and isn't even all that analogous to the issue at hand. Sales optimization for a banana company is obviously related to sales optimization in an orange company; not so with Oracle Al and Friendly AI.
Is Google Maps such a good example of a tool AI?
If a significant number of people are using Google Maps to decide their routes, then solving queries from multiple users while coordinating the responses to each request is going to provide a strong advantage in terms of its optimization goal, and will probably be an obvious feature to implement. The responses from the tool are going to be shaping the city's traffic.
If this is the case, It's going to be extremely hard for humans to supervise the set of answers given by google maps (Of course, individual answers...
[Eli's personal notes for personal understanding. Feel free to ignore or engage.]
If a planning Oracle is going to produce better solutions than humanity has yet managed to the Rubik's Cube, it needs to be capable of doing original computer science research and writing its own code.
Is this true? It seems like the crux of this argument.
[Eli's personal notes for personal understanding. Feel free to ignore or engage.]
"increase the correspondence between the user's belief about relevant consequences and reality"
[Squint] Google Maps is not trying to do that. Google Maps doesn't have anything like a concept of a "user". I could imagine an advanced AI that does have a concept of a "user", but is indifferent to him/her. It just produces printouts, that, incidentally, the user reads.
I was briefly tripped up by the use of "risk gradient between X and Y" to indicate how much riskier X is than Y (perhaps "gradient" evokes a continuum between X and Y). I'd strike the jargon, or explain what it means.
"Holden should respect our difficult-to-explain expertise just as we ask others to respect Holden's" might actually be persuasive to Holden (smart people often forget to search for ideas via an empathic perspective), but it's whiny as a public signal.
I'm deeply confused. How can you even define the difference between tool AI and FAI?
I assume that even tool AI is supposed to be able to opine on relatively long sequences of input. In particular, to be useful it must be able to accumulate information over essentially unbounded time periods. Say you want advice about where to position your air defenses: you must be able to go back to the AI system each day, hand it updates on enemy activity, and expect it to integrate that information with information it received during previous sessions. Whether or no...
To summarize how I see the current state of the debate over "tool AI":
I'm surprised to see no mention of the old "How do you ensure that your Oracle AI doesn't scribble over the world in order to gain more computational resources with which to answer your question?" argument.
I think the link on Demis Hassabis in section 3 is incorrect. It is the same as the Ray Kurzweil link.
The thing that is most like an agent in the Tool AI scenario is not the computer and software that it is running. The agent is the combination of the human (which is of course very much like an agent) together with the computer-and-software that constitutes the tool. Holden's argument is that this combination agent is safer somehow. (Perhaps it is more familiar; we can judge intention of the human component with facial expression, for example.)
The claim that Tool AI is an obvious answer to the Friendly AI problem is a paper tiger that Eliezer demolished. H...
The link to How to Purchase AI Risk Reduction, in part 4, seems to be not working.
EDIT: looks fixed now!
What makes us think that AI would stick with the utility function they're given? I change my utility function all the time, sometimes on purpose.
100% agreed.
I have an enormous amount of sympathy for us humans, who are required to make these kinds of decisions with nothing but our brains. My sympathy increased radically during the period of my life when, due to traumatic brain injury, my level of executive function was highly impaired and ordering lunch became an "above my pay grade" decision. We really do astonishingly well, for what we are.
But none of that changes my belief that we aren't especially well designed for making hard choices.
It's also not surprising that people can't fly across the Atlantic Ocean. But I expect a sufficiently well designed aircraft to do so.
Marcus Hutter is a rare exception who specified his AGI in such unambiguous mathematical terms that he actually succeeded at realizing, after some discussion with SIAI personnel, that AIXI would kill off its users and seize control of its reward button. But based on past sad experience with many other would-be designers, I say "Explain to a neutral judge how the math kills" and not "Explain to the person who invented that math and likes it."
Any sources to this extraordinary claim? Hutter's own statements? Cartesian-dualist AI has re...
Seems like a decent reply overall, but I found the fourth point very unconvincing. Holden has pointed out what we do know - to wit, that the world's best experts normally test a complicated programme by running it, isolating what (inevitably) went wrong by examining the results it produced, rewriting it, then doing it again.
Almost no programmes are glitch free, so this is at best an optimization process and one which - as Holden pointed out - you can't do with this type of AI. If (/when) it goes wrong the first time, you don't get a second chance. Eliezer's reply doesn't seem to address this stark difference between what experts have been achieving and what SIAI is asking them to achieve.
Demis Hassabis (VC-funded to the tune of several million dollars)
No public reference to his start-up that I can find.
They're still underground, with Shane Legg and at least a dozen other people on board. The company is called "Deep Mind" these days, and it's being developed as a games company. It's one of the most significant AGI projects I know of, merely because Shane and Demis are highly competent and approaching AGI by one of the more tractable paths (e.g. not AIXI or Goedel machines). Shane predicts AGI in a mere ten years—in part, I suspect, because he plans to build it himself.
Acquiring such facts is another thing SI does.
I wouldn't endorse their significance the same way, and would stand by my statement that although the AGI field as a whole has perceptible risk, no individual project that I know of has perceptible risk. Shane and Demis are cool, but they ain't that cool.
Right. I should have clarified that by "one of the most significant AGI projects I know of" I meant "has a very tiny probability of FOOMing in the next 15 years, which is greater than the totally negligible probability of FOOMing in the next 15 years posed by Juergen Schmidhuber."
I am in general willing to make bets against anyone producing an artificial human-level intelligence (for a sufficiently well-defined unpacking of that term) in ten years. If I win, great, I win the bet. If I lose, great, we have artificial human-level intelligence.
Marcus Hutter is a rare exception who specified his AGI in such unambiguous mathematical terms that he actually succeeded at realizing, after some discussion with SIAI personnel, that AIXI would kill off its users and seize control of its reward button.
Not sure that's true of Hutter's beliefs, but for historical reference I'll link to a 2003 mailing list post by Eliezer describing some harmful consequences of AIXI-tl. Hutter wasn't part of that discussion, though.
Most of your points are valid, and Holden is pretty arrogant to think he sees an obvious solution that experts in the field are irresponsible for not pursuing.
But I can see a couple ways around this argument in particular:
Example question: "How should I get rid of my disease most cheaply?" Example answer: "You won't. You will die soon, unavoidably. This report is 99.999% reliable". Predicted human reaction: Decides to kill self and get it over with. Success rate: 100%, the disease is gone. Costs of cure: zero. Mission completed.
Optio...
Inside the AI, whether an agent AI or a planning Oracle, there would be similar AGI-challenges like "build a predictive model of the world", and similar FAI-conjugates of those challenges like finding the 'user' inside an AI-created model of the universe.
Isn't building a predictive model of the world central to any AGI development? I don't see why someone who focuses specifically on FAI would worry more about a predictive model than other AGI developers do. Specifically, even without the Singularity Institute there would still be AGI people working on building predictive models of the world.
My mind keeps turning up Ben Goertzel as the one who invented this caricature - "Don't you understand, poor fool Eliezer, life is full of uncertainty, your attempt to flee from it by refuge in 'mathematical proof' is doomed" - but I'm not sure he was actually the inventor.
Of course, that is not a genuine quotation from Ben.
My mind keeps turning up Ben Goertzel as the one who invented this caricature - "Don't you understand, poor fool Eliezer, life is full of uncertainty, your attempt to flee from it by refuge in 'mathematical proof' is doomed"
This is a common enough trope amongst Dynamists and other worshipers of chaos that I don't think it needs to be credited to anyone.
"I believe that the probability of an unfavorable outcome - by which I mean an outcome essentially equivalent to what a UFAI would bring about - exceeds 90% in such a scenario."
It's nice that this appreciates that the problem is hard.
The "scenario" in question involves a SIAI AGI - so maybe he just thinks that this organisation is incompetent.
I think the core distinction was poorly worded by Holden. The distinction is between AIs as they exist now (e.g. a self-driving car) and the economic model of AI: a utility-maximizing agent, a non-reductionistically modelled entity within a larger model, maximizing some utility that is itself defined non-reductionistically within that larger model (e.g. a paperclip maximizer).
The AIs as they exist now, at the core, throw 'intelligence', in the form of solution search, at the problem of finding inputs to an internally defined mathematical...
Your link to Holden's post is broken.
It might be the right suggestion, but it's not so obviously right that our failure to prioritize discussing it reflects horrible negligence.
In a paragraph begging for charity, this sentence seems out of place.
(Commentary to follow.)
I can't see what you're getting at. Holden seems to say not just "you should do this", but "the fact that you're not already doing this reflects badly on your decision making". Eliezer replies that the first may be true but the second seems unwarranted.
I begin by thanking Holden Karnofsky of Givewell for his rare gift of his detailed, engaged, and helpfully-meant critical article Thoughts on the Singularity Institute (SI). In this reply I will engage with only one of the many subjects raised therein, the topic of, as I would term them, non-self-modifying planning Oracles, a.k.a. 'Google Maps AGI' a.k.a. 'tool AI', this being the topic that requires me personally to answer. I hope that my reply will be accepted as addressing the most important central points, though I did not have time to explore every avenue. I certainly do not wish to be logically rude, and if I have failed, please remember with compassion that it's not always obvious to one person what another person will think was the central point.
Luke Muehlhauser and Carl Shulman contributed to this article, but the final edit was my own, likewise any flaws.
Summary:
Holden's concern is that "SI appears to neglect the potentially important distinction between 'tool' and 'agent' AI." His archetypal example is Google Maps:
The reply breaks down into four heavily interrelated points:
First, Holden seems to think (and Jaan Tallinn doesn't apparently object to, in their exchange) that if a non-self-modifying planning Oracle is indeed the best strategy, then all of SIAI's past and intended future work is wasted. To me it looks like there's a huge amount of overlap in underlying processes in the AI that would have to be built and the insights required to build it, and I would be trying to assemble mostly - though not quite exactly - the same kind of team if I was trying to build a non-self-modifying planning Oracle, with the same initial mix of talents and skills.
Second, a non-self-modifying planning Oracle doesn't sound nearly as safe once you stop saying human-English phrases like "describe the consequences of an action to the user" and start trying to come up with math that says scary dangerous things like (he translated into English) "increase the correspondence between the user's belief about relevant consequences and reality". Hence why the people on the team would have to solve the same sorts of problems.
Appreciating the force of the third point is a lot easier if one appreciates the difficulties discussed in points 1 and 2, but is actually empirically verifiable independently: Whether or not a non-self-modifying planning Oracle is the best solution in the end, it's not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov. At one point, Holden says:
If I take literally that this is one of the things that bothers Holden most... I think I'd start stacking up some of the literature on the number of different things that just respectable academics have suggested as the obvious solution to what-to-do-about-AI - none of which would be about non-self-modifying smarter-than-human planning Oracles - and beg him to have some compassion on us for what we haven't addressed yet. It might be the right suggestion, but it's not so obviously right that our failure to prioritize discussing it reflects negligence.
The final point at the end is looking over all the preceding discussion and realizing that, yes, you want to have people specializing in Friendly AI who know this stuff, but as all that preceding discussion is actually the following discussion at this point, I shall reserve it for later.
1. The math of optimization, and the similar parts of a planning Oracle.
What does it take to build a smarter-than-human intelligence, of whatever sort, and have it go well?
A "Friendly AI programmer" is somebody who specializes in seeing the correspondence of mathematical structures to What Happens in the Real World. It's somebody who looks at Hutter's specification of AIXI and reads the actual equations - actually stares at the Greek symbols and not just the accompanying English text - and sees, "Oh, this AI will try to gain control of its reward channel," as well as numerous subtler issues like, "This AI presumes a Cartesian boundary separating itself from the environment; it may drop an anvil on its own head." Similarly, working on TDT means e.g. looking at a mathematical specification of decision theory, and seeing "Oh, this is vulnerable to blackmail" and coming up with a mathematical counter-specification of an AI that isn't so vulnerable to blackmail.
Holden's post seems to imply that if you're building a non-self-modifying planning Oracle (aka 'tool AI') rather than an acting-in-the-world agent, you don't need a Friendly AI programmer because FAI programmers only work on agents. But this isn't how the engineering skills are split up. Inside the AI, whether an agent AI or a planning Oracle, there would be similar AGI-challenges like "build a predictive model of the world", and similar FAI-conjugates of those challenges like finding the 'user' inside an AI-created model of the universe. The insides would look a lot more similar than the outsides. An analogy would be supposing that a machine learning professional who does sales optimization for an orange company couldn't possibly do sales optimization for a banana company, because their skills must be about oranges rather than bananas.
Admittedly, if it turns out to be possible to use a human understanding of cognitive algorithms to build and run a smarter-than-human Oracle without it being self-improving - this seems unlikely, but not impossible - then you wouldn't have to solve problems that arise with self-modification. But this eliminates only one dimension of the work. And on an even more meta level, it seems like you would call upon almost identical talents and skills to come up with whatever insights were required - though if it were predictable in advance that we'd abjure self-modification, then, yes, we'd place less emphasis on e.g. finding a team member with past experience in reflective math, and wouldn't waste (additional) time specializing in reflection. But if you wanted math inside the planning Oracle that operated the way you thought it did, and you wanted somebody who understood what could possibly go wrong and how to avoid it, you would need to make a function call to the same sort of talents and skills to build an agent AI, or an Oracle that was self-modifying, etc.
2. Yes, planning Oracles have hidden gotchas too.
"Tool AI" may sound simple in English, a short sentence in the language of empathically-modeled agents — it's just "a thingy that shows you plans instead of a thingy that goes and does things." If you want to know whether this hypothetical entity does X, you just check whether the outcome of X sounds like "showing someone a plan" or "going and doing things", and you've got your answer. It starts sounding much scarier once you try to say something more formal and internally-causal like "Model the user and the universe, predict the degree of correspondence between the user's model and the universe, and select from among possible explanation-actions on this basis."
Holden, in his dialogue with Jaan Tallinn, writes out this attempt at formalizing:
Google Maps doesn't check all possible routes. If I wanted to design Google Maps, I would start out by throwing out a standard planning technique on a connected graph where each edge has a cost function and there's a good heuristic measure of the distance, e.g. A* search. If that was too slow, I'd next try some more efficient version like weighted A* (or bidirectional weighted memory-bounded A*, which I expect I could also get off-the-shelf somewhere). Once you introduce weighted A*, you no longer have a guarantee that you're selecting the optimal path. You have a guarantee to within a known factor of the cost of the optimal path — but the actual path selected wouldn't be quite optimal. The suggestion produced would be an approximation whose exact steps depended on the exact algorithm you used. That's true even if you can predict the exact cost — exact utility — of any particular path you actually look at; and even if you have a heuristic that never overestimates the cost.
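To make the bounded-suboptimality point concrete, here is a minimal weighted A* sketch; the graph interface (neighbors, cost, heuristic callbacks) and the weight value are illustrative assumptions, not anything Google Maps actually runs.

```python
import heapq
import itertools

def weighted_a_star(start, goal, neighbors, cost, heuristic, weight=1.5):
    # With an admissible heuristic, the returned path costs at most `weight`
    # times the optimum, but the search typically expands far fewer nodes.
    counter = itertools.count()   # tie-breaker so unorderable nodes are never compared
    frontier = [(weight * heuristic(start, goal), next(counter), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g        # cost is within `weight` times the optimal cost
        if g > best_g.get(node, float("inf")):
            continue              # stale queue entry; a cheaper route was already found
        for nxt in neighbors(node):
            g2 = g + cost(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                f2 = g2 + weight * heuristic(nxt, goal)
                heapq.heappush(frontier, (f2, next(counter), g2, nxt, path + [nxt]))
    return None, float("inf")     # no route exists
```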
The reason we don't have God's Algorithm for solving the Rubik's Cube is that there's no perfect way of measuring the distance between any two Rubik's Cube positions — you can't look at two Rubik's cube positions, and figure out the minimum number of moves required to get from one to another. It took 15 years to prove that there was a position requiring at least 20 moves to solve, and then another 15 years to come up with a computer algorithm that could solve any position in at most 20 moves, but we still can't compute the actual, minimum solution to all Cubes ("God's Algorithm"). This, even though we can exactly calculate the cost and consequence of any actual Rubik's-solution-path we consider.
When it comes to AGI — solving general cross-domain "Figure out how to do X" problems — you're not going to get anywhere near the one, true, optimal answer. You're going to — at best, if everything works right — get a good answer that's a cross-product of the "utility function" and all the other algorithmic properties that determine what sort of answer the AI finds easy to invent (i.e. can be invented using bounded computing time).
As for the notion that this AGI runs on a "human predictive algorithm" that we got off of neuroscience and then implemented using more computing power, without knowing how it works or being able to enhance it further: It took 30 years of multiple computer scientists doing basic math research, and inventing code, and running that code on a computer cluster, for them to come up with a 20-move solution to the Rubik's Cube. If a planning Oracle is going to produce better solutions than humanity has yet managed to the Rubik's Cube, it needs to be capable of doing original computer science research and writing its own code. You can't get a 20-move solution out of a human brain, using the native human planning algorithm. Humanity can do it, but only by exploiting the ability of humans to explicitly comprehend the deep structure of the domain (not just rely on intuition) and then inventing an artifact, a new design, running code which uses a different and superior cognitive algorithm, to solve that Rubik's Cube in 20 moves. We do all that without being self-modifying, but it's still a capability to respect.
And I'm not even going into what it would take for a planning Oracle to out-strategize any human, come up with a plan for persuading someone, solve original scientific problems by looking over experimental data (like Einstein did), design a nanomachine, and so on.
Talking like there's this one simple "predictive algorithm" that we can read out of the brain using neuroscience and overpower to produce better plans... doesn't seem quite congruous with what humanity actually does to produce its predictions and plans.
If we take the concept of the Google Maps AGI at face value, then it actually has four key magical components. (In this case, "magical" isn't to be taken as prejudicial, it's a term of art that means we haven't said how the component works yet.) There's a magical comprehension of the user's utility function, a magical world-model that GMAGI uses to comprehend the consequences of actions, a magical planning element that selects a non-optimal path using some method other than exploring all possible actions, and a magical explain-to-the-user function.
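To make the four components concrete, here is a minimal Python skeleton under those same "magical" assumptions; the function names are stand-ins of mine, not Holden's pseudocode, and each stub marks exactly where the hidden work lives.

```python
def construct_utility_function(user_request):
    """Magical component 1: comprehend what the user actually wants."""
    raise NotImplementedError

def world_model(action, data):
    """Magical component 2: predict the real-world consequences of an action."""
    raise NotImplementedError

def plan(utility, model, data):
    """Magical component 3: find a high-utility action by some bounded,
    approximate search rather than enumerating every possible action."""
    raise NotImplementedError

def report(action, model, data):
    """Magical component 4: explain the chosen plan so a human can check it."""
    raise NotImplementedError

def gmagi(user_request, data):
    utility = construct_utility_function(user_request)
    leading_action = plan(utility, world_model, data)
    return report(leading_action, world_model, data)
```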
report($leading_action) isn't exactly a trivial step either. Deep Blue tells you to move your pawn or you'll lose the game. You ask "Why?" and the answer is a gigantic search tree of billions of possible move-sequences, leafing at positions which are heuristically rated using a static-position evaluation algorithm trained on millions of games. Or the planning Oracle tells you that a certain DNA sequence will produce a protein that cures cancer, you ask "Why?", and then humans aren't even capable of verifying, for themselves, the assertion that the peptide sequence will fold into the protein the planning Oracle says it does.
"So," you say, after the first dozen times you ask the Oracle a question and it returns an answer that you'd have to take on faith, "we'll just specify in the utility function that the plan should be understandable."
Whereupon other things start going wrong. Viliam_Bur, in the comments thread, gave this example, which I've slightly simplified:
Bur is trying to give an example of how things might go wrong if the preference function is over the accuracy of the predictions explained to the human — rather than just the human's 'goodness' of the outcome. And if the preference function was just over the human's 'goodness' of the end result, rather than the accuracy of the human's understanding of the predictions, the AI might tell you something that was predictively false but whose implementation would lead you to what the AI defines as a 'good' outcome. And if we ask how happy the human is, the resulting decision procedure would exert optimization pressure to convince the human to take drugs, and so on.
I'm not saying any particular failure is 100% certain to occur; rather I'm trying to explain - as handicapped by the need to describe the AI in the native human agent-description language, using empathy to simulate a spirit-in-a-box instead of trying to think in mathematical structures like A* search or Bayesian updating - how, even so, one can still see that the issue is a tad more fraught than it sounds on an immediate examination.
If you see the world just in terms of math, it's even worse; you've got some program with inputs from a USB cable connecting to a webcam, output to a computer monitor, and optimization criteria expressed over some combination of the monitor, the humans looking at the monitor, and the rest of the world. It's a whole lot easier to call what's inside a 'planning Oracle' or some other English phrase than to write a program that does the optimization safely without serious unintended consequences. Show me any attempted specification, and I'll point to the vague parts and ask for clarification in more formal and mathematical terms, and as soon as the design is clarified enough to be a hundred light years from implementation instead of a thousand light years, I'll show a neutral judge how that math would go wrong. (Experience shows that if you try to explain to would-be AGI designers how their design goes wrong, in most cases they just say "Oh, but of course that's not what I meant." Marcus Hutter is a rare exception who specified his AGI in such unambiguous mathematical terms that he actually succeeded at realizing, after some discussion with SIAI personnel, that AIXI would kill off its users and seize control of its reward button. But based on past sad experience with many other would-be designers, I say "Explain to a neutral judge how the math kills" and not "Explain to the person who invented that math and likes it.")
Just as the gigantic gap between smart-sounding English instructions and actually smart algorithms is the main source of difficulty in AI, there's a gap between benevolent-sounding English and actually benevolent algorithms which is the source of difficulty in FAI. "Just make suggestions - don't do anything!" is, in the end, just more English.
3. Why we haven't already discussed Holden's suggestion
Holden's suggestion seems to lack perspective on how many different things various people see as the one obvious solution to Friendly AI. Tool AI wasn't the obvious solution to John McCarthy, I.J. Good, or Marvin Minsky. Today's leading AI textbook, Artificial Intelligence: A Modern Approach - where you can learn all about A* search, by the way - discusses Friendly AI and AI risk for 3.5 pages but doesn't mention tool AI as an obvious solution. For Ray Kurzweil, the obvious solution is merging humans and AIs. For Jürgen Schmidhuber, the obvious solution is AIs that value a certain complicated definition of complexity in their sensory inputs. Ben Goertzel, J. Storrs Hall, and Bill Hibbard, among others, have all written about how silly SingInst is to pursue Friendly AI when the solution is obviously X, for various different X. Among current leading people working on serious AGI programs labeled as such, neither Demis Hassabis (VC-funded to the tune of several million dollars) nor Moshe Looks (head of AGI research at Google) nor Henry Markram (the Blue Brain Project at EPFL) thinks that the obvious answer is Tool AI. Vernor Vinge, Isaac Asimov, and any number of other SF writers with technical backgrounds who spent serious time thinking about these issues didn't converge on that solution.
Obviously I'm not saying that nobody should be allowed to propose solutions because someone else would propose a different solution. I have been known to advocate for particular developmental pathways for Friendly AI myself. But I haven't, for example, told Peter Norvig that deterministic self-modification is such an obvious solution to Friendly AI that I would mistrust his whole AI textbook if he didn't spend time discussing it.
At one point in his conversation with Tallinn, Holden argues that AI will inevitably be developed along planning-Oracle lines, because making suggestions to humans is the natural course that most software takes. Searching for counterexamples instead of positive examples makes it clear that most lines of code don't do this. Your computer, when it reallocates RAM, doesn't pop up a button asking you if it's okay to reallocate RAM in such-and-such a fashion. Your car doesn't pop up a suggestion when it wants to change the fuel mix or apply dynamic stability control. Factory robots don't operate as human-worn bracelets whose blinking lights suggest motion. High-frequency trading programs execute stock orders on a microsecond timescale. Software that does happen to interface with humans is selectively visible and salient to humans, especially the tiny part of the software that does the interfacing; but this is a special case of a general cost/benefit tradeoff which, more often than not, turns out to swing the other way, because human advice is either too costly or doesn't provide enough benefit. Modern AI programmers are generally more interested in e.g. pushing the technological envelope to allow self-driving cars than in "just" doing Google Maps. Branches of AI that invoke human aid, like hybrid chess-playing algorithms designed to incorporate human advice, are a field of study; but they're the exception rather than the rule, and occur primarily where AIs can't yet do something humans do, e.g. humans acting as oracles for theorem-provers, where the humans suggest a route to a proof and the AI actually follows that route. This is another reason why planning Oracles were not a uniquely obvious solution to the various academic AI researchers, would-be AI-creators, SF writers, etcetera, listed above. Again, regardless of whether a planning Oracle is actually the best solution, Holden seems to be empirically-demonstrably overestimating the degree to which other people will automatically have his preferred solution come up first in their search ordering.
4. Why we should have full-time Friendly AI specialists just like we have trained professionals doing anything else mathy that somebody actually cares about getting right, like pricing interest-rate options or something
I hope that the preceding discussion has made, by example instead of mere argument, what's probably the most important point: If you want to have a sensible discussion about which AI designs are safer, there are specialized skills you can apply to that discussion, as built up over years of study and practice by someone who specializes in answering that sort of question.
This isn't meant as an argument from authority. It's not meant as an attempt to say that only experts should be allowed to contribute to the conversation. But it is meant to say that there is (and ought to be) room in the world for Friendly AI specialists, just like there's room in the world for specialists on optimal philanthropy (e.g. Holden).
The decision to build a non-self-modifying planning Oracle would be properly made by someone who: understood the risk gradient for self-modifying vs. non-self-modifying programs; understood the risk gradient for having the AI thinking about the thought processes of the human watcher and trying to come up with plans implementable by the human watcher in the service of locally absorbed utility functions, vs. trying to implement its own plans in the service of more globally descriptive utility functions; and who, above all, understood on a technical level what exactly gets accomplished by having the plans routed through a human. I've given substantial previous thought to describing more precisely what happens - what is being gained, and how much is being gained - when a human "approves a suggestion" made by an AI. But that would be a different topic, plus I haven't made too much progress on saying it precisely anyway.
In the transcript of Holden's conversation with Jaan Tallinn, it looked like Tallinn didn't deny the assertion that Friendly AI skills would be inapplicable if we're building a Google Maps AGI. I would deny that assertion and emphasize that denial, because to me it seems that it is exactly Friendly AI programmers who would be able to tell you whether the risk gradient for non-self-modification vs. self-modification, the risk gradient for routing plans through humans vs. acting as an agent, the risk gradient for requiring human approval vs. unapproved action, and the actual feasibility of constructing transhuman modeling-prediction-and-planning algorithms by directly designing sheerly better computations than are presently run by the human brain, had the right combination of properties to imply that you ought to go construct a non-self-modifying planning Oracle. Similarly if you wanted an AI that took a limited set of actions in the world with human approval, or if you wanted an AI that "just answered questions instead of making plans".
It is similarly implied that a "philosophical AI" might obsolete Friendly AI programmers. If we're talking about PAI that can start with a human's terrible decision theory and come up with a good decision theory, or PAI that can start from a human talking about bad metaethics and then construct a good metaethics... I don't want to say "impossible", because, after all, that's just what human philosophers do. But we are not talking about a trivial invention here. Constructing a "philosophical AI" is a Holy Grail precisely because it's FAI-complete (just ask it "What AI should we build?"), and has been discussed (e.g. with and by Wei Dai) over the years on the old SL4 mailing list and the modern Less Wrong. But it's really not at all clear how you could write an algorithm which would knowably produce the correct answer to the entire puzzle of anthropic reasoning, without being in possession of that correct answer yourself (just as we can have Deep Blue win chess games without knowing the exact moves only because we understand exactly what abstract work Deep Blue is doing to solve the problem).
Holden's post presents a restrictive view of what "Friendly AI" people are supposed to learn and know - that it's about machine learning for optimizing orange sales but not apple sales, or about producing an "agent" that implements CEV - which is something of a straw view, much weaker than the view that a Friendly AI programmer takes of Friendly AI programming. What the human species needs from an x-risk perspective is experts on This Whole Damn Problem, who will acquire whatever skills are needed to that end. The Singularity Institute exists to host such people and enable their research - once we have enough funding to find and recruit them. See also: How to Purchase AI Risk Reduction.
I'm pretty sure Holden has met people who think that having a whole institute to rate the efficiency of charities is pointless overhead, especially people who think that their own charity-solution is too obviously good to have to contend with busybodies pretending to specialize in thinking about 'marginal utility'. Which Holden knows about, I would guess, from being paid quite well to think about those economic details when he was a hedge fundie, and learning from books written by professional researchers before then; and the really key point is that people who haven't studied all that stuff don't even realize what they're missing by trying to wing it. If you don't know, you don't know what you don't know, or the cost of not knowing. Is there a problem of figuring out who might know something you don't, if Holden insists that there's this strange new stuff called 'marginal utility' you ought to learn about? Yes, there is. But is someone who trusts their philanthropic dollars to be steered just by the warm fuzzies of their heart doing something wrong? Yes, they are. It's one thing to say that SIAI isn't known-to-you to be doing it right - another thing still to say that SIAI is known-to-you to be doing it wrong - and then quite another thing entirely to say that there's no need for Friendly AI programmers and you know it, that anyone can see it without resorting to math or cracking a copy of AI: A Modern Approach. I do wish that Holden would at least credit that the task SIAI is taking on contains at least as many gotchas, relative to the instinctive approach, as optimal philanthropy compared to instinctive philanthropy, and might likewise benefit from some full-time professionally specialized attention, just as our society creates trained professionals to handle any other problem that someone actually cares about getting right.
On the other side of things, Holden says that even if a Friendly AI design were proven and carefully checked - even by 100 of the world's most intelligent and relevantly experienced people - he would still assess a 90%+ probability of disaster.
It's nice that this assessment appreciates that the problem is hard. As for associating all of the difficulty with agenty proposals and thinking that it goes away as soon as you invoke tooliness - of this I've already spoken. I'm not sure whether this irreducible-90%-doom assessment is based on a common straw version of FAI in which all the work of the FAI programmer goes into "proving" something, a carefully checked proof which then - alas, poor Spock! - turns out to be no more relevant than proving that the underlying CPU does floating-point arithmetic correctly so long as the transistors work as stated. I've repeatedly said that the idea behind proving determinism of self-modification isn't that this guarantees safety, but that if you prove the self-modification stable the AI might work, whereas if you try to get by with no proofs at all, doom is guaranteed. My mind keeps turning up Ben Goertzel as the one who invented this caricature - "Don't you understand, poor fool Eliezer, life is full of uncertainty, your attempt to flee from it by refuge in 'mathematical proof' is doomed" - but I'm not sure he was actually the inventor. In any case, the burden of safety isn't carried just by the proof; it's carried mostly by proving the right thing. If Holden is assuming that we're just running away from the inherent uncertainty of life by taking refuge in mathematical proof, then, yes, 90% probability of doom is an understatement: the vast majority of plausible-on-first-glance goal criteria you can prove stable will also kill you.
If Holden's assessment does take into account a great effort to select the right theorem to prove - and attempts to incorporate the difficult but finitely difficult feature of meta-level error-detection, as it appears in e.g. the CEV proposal - and he is still assessing 90% doom probability, then I must ask, "What do you think you know and how do you think you know it?" The complexity of the human mind is finite; there's only so many things we want or would-want. Why would someone claim to know that proving the right thing is beyond human ability, even if "100 of the world's most intelligent and relevantly experienced people" (Holden's terms) check it over? There's hidden complexity of wishes, but not infinite complexity of wishes or unlearnable complexity of wishes. There are deep and subtle gotchas but not an unending number of them. And if that were the setting of the hidden variables - how would you end up knowing that with 90% probability in advance? I don't mean to wield my own ignorance as a sword or engage in motivated uncertainty - I hate it when people argue that if they don't know something, nobody else is allowed to know either - so please note that I'm also counterarguing from positive facts pointing the other way: the human brain is complicated but not infinitely complicated; there are hundreds or thousands of cytoarchitecturally distinct brain areas, but not trillions or googols. If humanity had two hundred years to solve FAI using human-level intelligence and there was no penalty for guessing wrong, I would be pretty relaxed about the outcome. If Holden says there's 90% doom probability left over no matter what sane intelligent people do (all of which goes away if you just build Google Maps AGI, but leave that aside for now), I would ask him what he knows now, in advance, that all those sane intelligent people will miss. I don't see how you could (well-justifiedly) access that epistemic state.
I acknowledge that there are points in Holden's post which are not addressed in this reply, acknowledge that these points are also deserving of reply, and hope that other SIAI personnel will be able to reply to them.