Thoughts on the Singularity Institute (SI)

252 Post author: HoldenKarnofsky 11 May 2012 04:31AM

This post presents thoughts on the Singularity Institute from Holden Karnofsky, Co-Executive Director of GiveWell. Note: Luke Muehlhauser, the Executive Director of the Singularity Institute, reviewed a draft of this post, and commented: "I do generally agree that your complaints are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI). I am working to address both categories of issues." I take Luke's comment to be a significant mark in SI's favor, because it indicates an explicit recognition of the problems I raise, and thus increases my estimate of the likelihood that SI will work to address them.

September 2012 update: responses have been posted by Luke and Eliezer (and I have responded in the comments of their posts). I have also added acknowledgements.

The Singularity Institute (SI) is a charity that GiveWell has been repeatedly asked to evaluate. In the past, SI has been outside our scope (as we were focused on specific areas such as international aid). With GiveWell Labs we are open to any giving opportunity, no matter what form and what sector, but we still do not currently plan to recommend SI; given the amount of interest some of our audience has expressed, I feel it is important to explain why. Our views, of course, remain open to change. (Note: I am posting this only to Less Wrong, not to the GiveWell Blog, because I believe that everyone who would be interested in this post will see it here.)

I am currently the GiveWell staff member who has put the most time and effort into engaging with and evaluating SI. Other GiveWell staff currently agree with my bottom-line view that we should not recommend SI, but this does not mean they have engaged with each of my specific arguments. Therefore, while the lack of recommendation of SI is something that GiveWell stands behind, the specific arguments in this post should be attributed only to me, not to GiveWell.

Summary of my views

  • The argument advanced by SI for why the work it's doing is beneficial and important seems both wrong and poorly argued to me. My sense at the moment is that the arguments SI is making would, if accepted, increase rather than decrease the risk of an AI-related catastrophe. More
  • SI has, or has had, multiple properties that I associate with ineffective organizations, and I do not see any specific evidence that its personnel/organization are well-suited to the tasks it has set for itself. More
  • A common argument for giving to SI is that "even an infinitesimal chance that it is right" would be sufficient given the stakes. I have written previously about why I reject this reasoning; in addition, prominent SI representatives seem to reject this particular argument as well (i.e., they believe that one should support SI only if one believes it is a strong organization making strong arguments). More
  • My sense is that at this point, given SI's current financial state, withholding funds from SI is likely better for its mission than donating to it. (I would not take this view to the furthest extreme; the argument that SI should have some funding seems stronger to me than the argument that it should have as much as it currently has.)
  • I find existential risk reduction to be a fairly promising area for philanthropy, and plan to investigate it further. More
  • There are many things that could happen that would cause me to revise my view on SI. However, I do not plan to respond to all comment responses to this post. (Given the volume of responses we may receive, I may not be able to even read all the comments on this post.) I do not believe these two statements are inconsistent, and I lay out paths for getting me to change my mind that are likely to work better than posting comments. (Of course I encourage people to post comments; I'm just noting in advance that this action, alone, doesn't guarantee that I will consider your argument.) More

Intent of this post

I did not write this post with the purpose of "hurting" SI. Rather, I wrote it in the hopes that one of these three things (or some combination) will happen:

  1. New arguments are raised that cause me to change my mind and recognize SI as an outstanding giving opportunity. If this happens I will likely attempt to raise more money for SI (most likely by discussing it with other GiveWell staff and collectively considering a GiveWell Labs recommendation).
  2. SI concedes that my objections are valid and increases its determination to address them. A few years from now, SI is a better organization and more effective in its mission.
  3. SI can't or won't make changes, and SI's supporters feel my objections are valid, so SI loses some support, freeing up resources for other approaches to doing good.

Which one of these occurs will hopefully be driven primarily by the merits of the different arguments raised. Because of this, I think that whatever happens as a result of my post will be positive for SI's mission, whether or not it is positive for SI as an organization. I believe that most of SI's supporters and advocates care more about the former than about the latter, and that this attitude is far too rare in the nonprofit world.

Does SI have a well-argued case that its work is beneficial and important?

I know no more concise summary of SI's views than this page, so here I give my own impressions of what SI believes, in italics.

  1. There is some chance that in the near future (next 20-100 years), an "artificial general intelligence" (AGI) - a computer that is vastly more intelligent than humans in every relevant way - will be created.
  2. This AGI will likely have a utility function and will seek to maximize utility according to this function.
  3. This AGI will be so much more powerful than humans - due to its superior intelligence - that it will be able to reshape the world to maximize its utility, and humans will not be able to stop it from doing so.
  4. Therefore, it is crucial that its utility function be one that is reasonably harmonious with what humans want. A "Friendly" utility function is one that is reasonably harmonious with what humans want, such that a "Friendly" AGI (FAI) would change the world for the better (by human standards) while an "Unfriendly" AGI (UFAI) would essentially wipe out humanity (or worse).
  5. Unless great care is taken specifically to make a utility function "Friendly," it will be "Unfriendly," since the things humans value are a tiny subset of the things that are possible.
  6. Therefore, it is crucially important to develop "Friendliness theory" that helps us to ensure that the first strong AGI's utility function will be "Friendly." The developer of Friendliness theory could use it to build an FAI directly or could disseminate the theory so that others working on AGI are more likely to build FAI as opposed to UFAI.

From the time I first heard this argument, it has seemed to me to be skipping important steps and making major unjustified assumptions. However, for a long time I believed this could easily be due to my inferior understanding of the relevant issues. I believed my own views on the argument to have only very low relevance (as I stated in my 2011 interview with SI representatives). Over time, I have had many discussions with SI supporters and advocates, as well as with non-supporters who I believe understand the relevant issues well. I now believe - for the moment - that my objections are highly relevant, that they cannot be dismissed as simple "layman's misunderstandings" (as they have been by various SI supporters in the past), and that SI has not published anything that addresses them in a clear way.

Below, I list my major objections. I do not believe that these objections constitute a sharp/tight case for the idea that SI's work has low/negative value; I believe, instead, that SI's own arguments are too vague for such a rebuttal to be possible. There are many possible responses to my objections, but SI's public arguments (and the private arguments) do not make clear which possible response (if any) SI would choose to take up and defend. Hopefully the dialogue following this post will clarify what SI believes and why.

Some of my views are discussed at greater length (though with less clarity) in a public transcript of a conversation I had with SI supporter Jaan Tallinn. I refer to this transcript as "Karnofsky/Tallinn 2011."

Objection 1: it seems to me that any AGI that was set to maximize a "Friendly" utility function would be extraordinarily dangerous.

Suppose, for the sake of argument, that SI manages to create what it believes to be an FAI. Suppose that it is successful in the "AGI" part of its goal, i.e., it has successfully created an intelligence vastly superior to human intelligence and extraordinarily powerful from our perspective. Suppose that it has also done its best on the "Friendly" part of the goal: it has developed a formal argument for why its AGI's utility function will be Friendly, it believes this argument to be airtight, and it has had this argument checked over by 100 of the world's most intelligent and relevantly experienced people. Suppose that SI now activates its AGI, unleashing it to reshape the world as it sees fit. What will be the outcome?

I believe that the probability of an unfavorable outcome - by which I mean an outcome essentially equivalent to what a UFAI would bring about - exceeds 90% in such a scenario. I believe the goal of designing a "Friendly" utility function is likely to be beyond the abilities even of the best team of humans willing to design such a function. I do not have a tight argument for why I believe this, but a comment on LessWrong by Wei Dai gives a good illustration of the kind of thoughts I have on the matter:

What I'm afraid of is that a design will be shown to be safe, and then it turns out that the proof is wrong, or the formalization of the notion of "safety" used by the proof is wrong. This kind of thing happens a lot in cryptography, if you replace "safety" with "security". These mistakes are still occurring today, even after decades of research into how to do such proofs and what the relevant formalizations are. From where I'm sitting, proving an AGI design Friendly seems even more difficult and error-prone than proving a crypto scheme secure, probably by a large margin, and there are not decades of time to refine the proof techniques and formalizations. There's a good recent review of the history of provable security, titled Provable Security in the Real World, which might help you understand where I'm coming from.

I think this comment understates the risks, however. For example, when the comment says "the formalization of the notion of 'safety' used by the proof is wrong," it is not clear whether it means that the values the programmers have in mind are not correctly implemented by the formalization, or whether it means they are correctly implemented but are themselves catastrophic in a way that hasn't been anticipated. I would be highly concerned about both. There are other catastrophic possibilities as well; perhaps the utility function itself is well-specified and safe, but the AGI's model of the world is flawed (in particular, perhaps its prior or its process for matching observations to predictions are flawed) in a way that doesn't emerge until the AGI has made substantial changes to its environment.

By SI's own arguments, even a small error in any of these things would likely lead to catastrophe. And there are likely failure forms I haven't thought of. The overriding intuition here is that complex plans usually fail when unaccompanied by feedback loops. A scenario in which a set of people is ready to unleash an all-powerful being to maximize some parameter in the world, based solely on their initial confidence in their own extrapolations of the consequences of doing so, seems like a scenario that is overwhelmingly likely to result in a bad outcome. It comes down to placing the world's largest bet on a highly complex theory - with no experimentation to test the theory first.

So far, all I have argued is that the development of "Friendliness" theory can achieve at best only a limited reduction in the probability of an unfavorable outcome. However, as I argue in the next section, I believe there is at least one concept - the "tool-agent" distinction - that has more potential to reduce risks, and that SI appears to ignore this concept entirely. I believe that tools are safer than agents (even agents that make use of the best "Friendliness" theory that can reasonably be hoped for) and that SI encourages a focus on building agents, thus increasing risk.

Objection 2: SI appears to neglect the potentially important distinction between "tool" and "agent" AI.

Google Maps is a type of artificial intelligence (AI). It is far more intelligent than I am when it comes to planning routes.

Google Maps - by which I mean the complete software package including the display of the map itself - does not have a "utility" that it seeks to maximize. (One could fit a utility function to its actions, as to any set of actions, but there is no single "parameter to be maximized" driving its operations.)

Google Maps (as I understand it) considers multiple possible routes, gives each a score based on factors such as distance and likely traffic, and then displays the best-scoring route in a way that makes it easily understood by the user. If I don't like the route, for whatever reason, I can change some parameters and consider a different route. If I like the route, I can print it out or email it to a friend or send it to my phone's navigation application. Google Maps has no single parameter it is trying to maximize; it has no reason to try to "trick" me in order to increase its utility.

In short, Google Maps is not an agent, taking actions in order to maximize a utility parameter. It is a tool, generating information and then displaying it in a user-friendly manner for me to consider, use and export or discard as I wish.
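
To ground the analogy: I have no knowledge of Google Maps' internals, but a toy sketch of the general pattern I am describing (with made-up names and numbers) might look like the following - candidate routes are scored on factors such as distance and expected traffic, and the ranked results are simply displayed for the user to act on, or not, as they see fit.

```python
def route_score(route, traffic_delays):
    """Lower is better: combine distance with an expected traffic delay."""
    # The weight on traffic is arbitrary and purely illustrative.
    return route["distance_km"] + 2.0 * traffic_delays.get(route["name"], 0.0)

def rank_routes(routes, traffic_delays):
    """Score every candidate route and return them best-first.
    Nothing here is maximized and then acted upon; the output is
    just information for the user to consider, export, or discard."""
    return sorted(routes, key=lambda r: route_score(r, traffic_delays))

# Made-up data, purely for illustration.
routes = [
    {"name": "highway", "distance_km": 12.0},
    {"name": "surface streets", "distance_km": 9.0},
]
traffic_delays = {"highway": 4.0, "surface streets": 0.5}

for route in rank_routes(routes, traffic_delays):
    print("%s: score %.1f" % (route["name"], route_score(route, traffic_delays)))
```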

Every software application I know of seems to work essentially the same way, including those that involve (specialized) artificial intelligence such as Google Search, Siri, Watson, Rybka, etc. Some can be put into an "agent mode" (as Watson was on Jeopardy!), but all can easily be set up to be used as "tools" (for example, Watson can simply display its top candidate answers to a question, with the score for each, without speaking any of them).

The "tool mode" concept is importantly different from the possibility of Oracle AI sometimes discussed by SI. The discussions I've seen of Oracle AI present it as an Unfriendly AI that is "trapped in a box" - an AI whose intelligence is driven by an explicit utility function and that humans hope to control coercively. Hence the discussion of ideas such as the AI-Box Experiment. A different interpretation, given in Karnofsky/Tallinn 2011, is an AI with a carefully designed utility function - likely as difficult to construct as "Friendliness" - that leaves it "wishing" to answer questions helpfully. By contrast with both these ideas, Tool-AGI is not "trapped" and it is not Unfriendly or Friendly; it has no motivations and no driving utility function of any kind, just like Google Maps. It scores different possibilities and displays its conclusions in a transparent and user-friendly manner, as its instructions say to do; it does not have an overarching "want," and so, as with the specialized AIs described above, while it may sometimes "misinterpret" a question (thereby scoring options poorly and ranking the wrong one #1) there is no reason to expect intentional trickery or manipulation when it comes to displaying its results.

Another way of putting this is that a "tool" has an underlying instruction set that conceptually looks like: "(1) Calculate which action A would maximize parameter P, based on existing data set D. (2) Summarize this calculation in a user-friendly manner, including what Action A is, what likely intermediate outcomes it would cause, what other actions would result in high values of P, etc." An "agent," by contrast, has an underlying instruction set that conceptually looks like: "(1) Calculate which action, A, would maximize parameter P, based on existing data set D. (2) Execute Action A." In any AI where (1) is separable (by the programmers) as a distinct step, (2) can be set to the "tool" version rather than the "agent" version, and this separability is in fact present with most/all modern software. Note that in the "tool" version, neither step (1) nor step (2) (nor the combination) constitutes an instruction to maximize a parameter - to describe a program of this kind as "wanting" something is a category error, and there is no reason to expect its step (2) to be deceptive.
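
To make this concrete, here is a minimal sketch in Python of the two instruction sets as I have framed them - purely illustrative, with hypothetical names, and not a claim about how a real AGI would be structured. Both modes share step (1); they differ only in what step (2) does with the result.

```python
def calculate_best_action(candidate_actions, predicted_p):
    """Step (1): rank candidate actions by the predicted value of parameter P."""
    ranked = sorted(candidate_actions, key=predicted_p, reverse=True)
    return ranked[0], ranked

def run_as_tool(candidate_actions, predicted_p, report):
    """Step (2), 'tool' version: summarize the calculation for a human to use."""
    best, ranked = calculate_best_action(candidate_actions, predicted_p)
    report(best, ranked)   # display only; a human decides what, if anything, to do

def run_as_agent(candidate_actions, predicted_p, execute):
    """Step (2), 'agent' version: carry out the top-scoring action directly."""
    best, _ = calculate_best_action(candidate_actions, predicted_p)
    execute(best)
```

In this framing, the choice between "tool" and "agent" is literally a choice of which second function to write around the same underlying calculation.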

I elaborated further on the distinction and on the concept of a tool-AI in Karnofsky/Tallinn 2011.

This is important because an AGI running in tool mode could be extraordinarily useful while being far safer than an AGI running in agent mode. In fact, if developing "Friendly AI" is what we seek, a tool-AGI could likely be helpful enough in thinking through this problem as to render any previous work on "Friendliness theory" moot. Among other things, a tool-AGI would allow transparent views into the AGI's reasoning and predictions without any reason to fear being purposefully misled, and would facilitate safe experimental testing of any utility function that one wished to eventually plug into an "agent."

Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will work, given that practically all software developed to date can (and usually does) run as a tool and given that modern software seems to be constantly becoming "intelligent" (capable of giving better answers than a human) in surprising new domains. In addition, it intuitively seems to me (though I am not highly confident) that intelligence inherently involves the distinct, separable steps of (a) considering multiple possible actions and (b) assigning a score to each, prior to executing any of the possible actions. If one can distinctly separate (a) and (b) in a program's code, then one can abstain from writing any "execution" instructions and instead focus on making the program list actions and scores in a user-friendly manner, for humans to consider and use as they wish.

Of course, there are possible paths to AGI that may rule out a "tool mode," but it seems that most of these paths would rule out the application of "Friendliness theory" as well. (For example, a "black box" emulation and augmentation of a human mind.) What are the paths to AGI that allow manual, transparent, intentional design of a utility function but do not allow the replacement of "execution" instructions with "communication" instructions? Most of the conversations I've had on this topic have focused on three responses:

  • Self-improving AI. Many seem to find it intuitive that (a) AGI will almost certainly come from an AI rewriting its own source code, and (b) such a process would inevitably lead to an "agent." I do not agree with either (a) or (b). I discussed these issues in Karnofsky/Tallinn 2011 and will be happy to discuss them more if this is the line of response that SI ends up pursuing. Very briefly:
    • The idea of a "self-improving algorithm" intuitively sounds very powerful, but does not seem to have led to many "explosions" in software so far (and it seems to be a concept that could apply to narrow AI as well as to AGI).
    • It seems to me that a tool-AGI could be plugged into a self-improvement process that would be quite powerful but would also terminate and yield a new tool-AI after a set number of iterations (or after reaching a set "intelligence threshold"). So I do not accept the argument that "self-improving AGI means agent AGI." As stated above, I will elaborate on this view if it turns out to be an important point of disagreement.
    • I have argued (in Karnofsky/Tallinn 2011) that the relevant self-improvement abilities are likely to come with or after - not prior to - the development of strong AGI. In other words, any software capable of the relevant kind of self-improvement is likely also capable of being used as a strong tool-AGI, with the benefits described above.
    • The SI-related discussions I've seen of "self-improving AI" are highly vague, and do not spell out views on the above points.
  • Dangerous data collection. Some point to the seeming dangers of a tool-AI's "scoring" function: in order to score different options it may have to collect data, which is itself an "agent"-type action that could lead to dangerous behavior. I think my definition of "tool" above makes clear what is wrong with this objection: a tool-AGI takes its existing data set D as fixed (and perhaps could have some pre-determined, safe set of simple actions it can take - such as using Google's API - to collect more), and if maximizing its chosen parameter is best accomplished through more data collection, it can transparently output why and how it suggests collecting more data. Over time it can be given more autonomy for data collection through an experimental and domain-specific process (e.g., modifying the AI to skip specific steps of human review of proposals for data collection after it has become clear that these steps work as intended), a process that has little to do with the "Friendly overarching utility function" concept promoted by SI (a rough sketch of such a review-and-whitelist gate appears after this list). Again, I will elaborate on this if it turns out to be a key point.
  • Race for power. Some have argued to me that humans are likely to choose to create agent-AGI, in order to quickly gain power and outrace other teams working on AGI. But this argument, even if accepted, has very different implications from SI's view.

    Conventional wisdom says it is extremely dangerous to empower a computer to act in the world until one is very sure that the computer will do its job in a way that is helpful rather than harmful. So if a programmer chooses to "unleash an AGI as an agent" with the hope of gaining power, it seems that this programmer will be deliberately ignoring conventional wisdom about what is safe in favor of shortsighted greed. I do not see why such a programmer would be expected to make use of any "Friendliness theory" that might be available. (Attempting to incorporate such theory would almost certainly slow the project down greatly, and thus would bring the same problems as the more general "have caution, do testing" counseled by conventional wisdom.) It seems that the appropriate measures for preventing such a risk are security measures aiming to stop humans from launching unsafe agent-AIs, rather than developing theories or raising awareness of "Friendliness."
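
Returning to the "dangerous data collection" bullet above: here is a rough sketch of the kind of review gate I have in mind. The names and structure are hypothetical; the only point is that autonomy can be expanded incrementally and per-domain, with no overarching utility function involved.

```python
class ReviewGate(object):
    """Gate every proposed data-collection step behind human review,
    with the option to whitelist narrow categories later."""

    def __init__(self):
        self.whitelisted = set()   # categories approved for autonomous use

    def submit(self, category, description, rationale, human_review):
        """Return True if the proposed collection may proceed."""
        if category in self.whitelisted:
            return True                          # previously vetted category
        return human_review(category, description, rationale)

    def grant_autonomy(self, category):
        """Whitelist a narrow, well-understood category once the review
        step has repeatedly been observed to work as intended."""
        self.whitelisted.add(category)

# Illustrative use (all names are hypothetical):
gate = ReviewGate()
ok = gate.submit(
    "public_traffic_feed",
    "fetch current traffic data",
    "better traffic data improves route-time estimates",
    human_review=lambda c, d, r: True,   # stand-in for an actual human decision
)
```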

One of the things that bothers me most about SI is that there is practically no public content, as far as I can tell, explicitly addressing the idea of a "tool" and giving arguments for why AGI is likely to work only as an "agent." The idea that AGI will be driven by a central utility function seems to be simply assumed. Two examples:

  • I have been referred to Muehlhauser and Salamon 2012 as the most up-to-date, clear explanation of SI's position on "the basics." This paper states, "Perhaps we could build an AI of limited cognitive ability — say, a machine that only answers questions: an 'Oracle AI.' But this approach is not without its own dangers (Armstrong, Sandberg, and Bostrom 2012)." However, the referenced paper (Armstrong, Sandberg, and Bostrom 2012) seems to take it as a given that an Oracle AI is an "agent trapped in a box" - a computer that has a basic drive/utility function, not a Tool-AGI. The rest of Muehlhauser and Salamon 2012 seems to take it as a given that an AGI will be an agent.
  • I have often been referred to Omohundro 2008 for an argument that an AGI is likely to have certain goals. But this paper seems, again, to take it as given that an AGI will be an agent, i.e., that it will have goals at all. The introduction states, "To say that a system of any design is an 'artificial intelligence', we mean that it has goals which it tries to accomplish by acting in the world." In other words, the premise I'm disputing seems embedded in its very definition of AI.

The closest thing I have seen to a public discussion of "tool-AGI" is in Dreams of Friendliness, where Eliezer Yudkowsky considers the question, "Why not just have the AI answer questions, instead of trying to do anything? Then it wouldn't need to be Friendly. It wouldn't need any goals at all. It would just answer questions." His response:

To which the reply is that the AI needs goals in order to decide how to think: that is, the AI has to act as a powerful optimization process in order to plan its acquisition of knowledge, effectively distill sensory information, pluck "answers" to particular questions out of the space of all possible responses, and of course, to improve its own source code up to the level where the AI is a powerful intelligence. All these events are "improbable" relative to random organizations of the AI's RAM, so the AI has to hit a narrow target in the space of possibilities to make superintelligent answers come out.

This passage is vague and does not appear to address the specific "tool" concept I have defended above (in particular, it does not address the analogy to modern software, which challenges the idea that "powerful optimization processes" cannot run in tool mode). The rest of the piece discusses (a) psychological mistakes that could lead to the discussion in question; (b) the "Oracle AI" concept that I have outlined above. The comments contain some more discussion of the "tool" idea (Denis Bider and Shane Legg seem to be picturing something similar to "tool-AGI"), but the discussion is unresolved and I believe the "tool" concept defended above remains essentially unaddressed.

In sum, SI appears to encourage a focus on building and launching "Friendly" agents (it is seeking to do so itself, and its work on "Friendliness" theory seems to be laying the groundwork for others to do so) while not addressing the tool-agent distinction. It seems to assume that any AGI will have to be an agent, and to make little to no attempt at justifying this assumption. The result, in my view, is that it is essentially advocating for a more dangerous approach to AI than the traditional approach to software development.

Objection 3: SI's envisioned scenario is far more specific and conjunctive than it appears at first glance, and I believe this scenario to be highly unlikely.

SI's scenario concerns the development of artificial general intelligence (AGI): a computer that is vastly more intelligent than humans in every relevant way. But we already have many computers that are vastly more intelligent than humans in some relevant ways, and the domains in which specialized AIs outdo humans seem to be constantly and continuously expanding. I feel that the relevance of "Friendliness theory" depends heavily on the idea of a "discrete jump" that seems unlikely and whose likelihood does not seem to have been publicly argued for.

One possible scenario is that at some point, we develop powerful enough non-AGI tools (particularly specialized AIs) that we vastly improve our abilities to consider and prepare for the eventuality of AGI - to the point where any previous theory developed on the subject becomes useless. Or (to put this more generally) non-AGI tools simply change the world so much that it becomes essentially unrecognizable from the perspective of today - again rendering any previous "Friendliness theory" moot. As I said in Karnofsky/Tallinn 2011, some of SI's work "seems a bit like trying to design Facebook before the Internet was in use, or even before the computer existed."

Perhaps there will be a discrete jump to AGI, but it will be a sort of AGI that renders "Friendliness theory" moot for a different reason. For example, in the practice of software development, there often does not seem to be an operational distinction between "intelligent" and "Friendly." (For example, my impression is that the only method programmers had for evaluating Watson's "intelligence" was to see whether it was coming up with the same answers that a well-informed human would; the only way to evaluate Siri's "intelligence" was to evaluate its helpfulness to humans.) "Intelligent" often ends up getting defined as "prone to take actions that seem all-around 'good' to the programmer." So the concept of "Friendliness" may end up being naturally and subtly baked in to a successful AGI effort.

The bottom line is that we know very little about the course of future artificial intelligence. I believe that the probability that SI's concept of "Friendly" vs. "Unfriendly" goals ends up seeming essentially nonsensical, irrelevant and/or unimportant from the standpoint of the relevant future is over 90%.

Other objections to SI's views

There are other debates about the likelihood of SI's work being relevant/helpful; for example,

  • It isn't clear whether the development of AGI is imminent enough to be relevant, or whether other risks to humanity are closer.
  • It isn't clear whether AGI would be as powerful as SI's views imply. (I discussed this briefly in Karnofsky/Tallinn 2011.)
  • It isn't clear whether even an extremely powerful UFAI would choose to attack humans as opposed to negotiating with them. (I find it somewhat helpful to analogize UFAI-human interactions to human-mosquito interactions. Humans are enormously more intelligent than mosquitoes; humans are good at predicting, manipulating, and destroying mosquitoes; humans do not value mosquitoes' welfare; humans have other goals that mosquitoes interfere with; humans would like to see mosquitoes eradicated at least from certain parts of the planet. Yet humans haven't accomplished such eradication, and it is easy to imagine scenarios in which humans would prefer honest negotiation and trade with mosquitoes to any other arrangement, if such negotiation and trade were possible.)

Unlike the three objections I focus on, these other issues have been discussed a fair amount, and if these other issues were the only objections to SI's arguments I would find SI's case to be strong (i.e., I would find its scenario likely enough to warrant investment in).

Wrapup

  • I believe the most likely future scenarios are the ones we haven't thought of, and that the most likely fate of the sort of theory SI ends up developing is irrelevance.
  • I believe that unleashing an all-powerful "agent AGI" (without the benefit of experimentation) would very likely result in a UFAI-like outcome, no matter how carefully the "agent AGI" was designed to be "Friendly." I see SI as encouraging (and aiming to take) this approach.
  • I believe that the standard approach to developing software results in "tools," not "agents," and that tools (while dangerous) are much safer than agents. A "tool mode" could facilitate experiment-informed progress toward a safe "agent," rather than needing to get "Friendliness" theory right without any experimentation.
  • Therefore, I believe that the approach SI advocates and aims to prepare for is far more dangerous than the standard approach, so if SI's work on Friendliness theory affects the risk of human extinction one way or the other, it will increase the risk of human extinction. Fortunately I believe SI's work is far more likely to have no effect one way or the other.

For a long time I refrained from engaging in object-level debates over SI's work, believing that others are better qualified to do so. But after talking at great length to many of SI's supporters and advocates and reading everything I've been pointed to as relevant, I still have seen no clear and compelling response to any of my three major objections. As stated above, there are many possible responses to my objections, but SI's current arguments do not seem clear on what responses they wish to take and defend. At this point I am unlikely to form a positive view of SI's work until and unless I do see such responses, and/or SI changes its positions.

Is SI the kind of organization we want to bet on?

This part of the post has some risks. For most of GiveWell's history, sticking to our standard criteria - and putting more energy into recommended than non-recommended organizations - has enabled us to share our honest thoughts about charities without appearing to get personal. But when evaluating a group such as SI, I can't avoid placing a heavy weight on (my read on) the general competence, capability and "intangibles" of the people and organization, because SI's mission is not about repeating activities that have worked in the past. Sharing my views on these issues could strike some as personal or mean-spirited and could lead to the misimpression that GiveWell is hostile toward SI. But it is simply necessary in order to be fully transparent about why I hold the views that I hold.

Fortunately, SI is an ideal organization for our first discussion of this type. I believe the staff and supporters of SI would overwhelmingly rather hear the whole truth about my thoughts - so that they can directly engage them and, if warranted, make changes - than have me sugar-coat what I think in order to spare their feelings. People who know me and my attitude toward being honest vs. sparing feelings know that this, itself, is high praise for SI.

One more comment before I continue: our policy is that non-public information provided to us by a charity will not be published or discussed without that charity's prior consent. However, none of the content of this post is based on private information; all of it is based on information that SI has made available to the public.

There are several reasons that I currently have a negative impression of SI's general competence, capability and "intangibles." My mind remains open and I include specifics on how it could be changed.

  • Weak arguments. SI has produced enormous quantities of public argumentation, and I have examined a very large proportion of this information. Yet I have never seen a clear response to any of the three basic objections I listed in the previous section. One of SI's major goals is to raise awareness of AI-related risks; given this, the fact that it has not advanced clear/concise/compelling arguments speaks, in my view, to its general competence.
  • Lack of impressive endorsements. I discussed this issue in my 2011 interview with SI representatives and I still feel the same way on the matter. I feel that given the enormous implications of SI's claims, if it argued them well it ought to be able to get more impressive endorsements than it has.

    I have been pointed to Peter Thiel and Ray Kurzweil as examples of impressive SI supporters, but I have not seen any on-record statements from either of these people that show agreement with SI's specific views, and in fact (based on watching them speak at Singularity Summits) my impression is that they disagree. Peter Thiel seems to believe that speeding the pace of general innovation is a good thing; this would seem to be in tension with SI's view that AGI will be catastrophic by default and that no one other than SI is paying sufficient attention to "Friendliness" issues. Ray Kurzweil seems to believe that "safety" is a matter of transparency, strong institutions, etc. rather than of "Friendliness." I am personally in agreement with the things I have seen both of them say on these topics. I find it possible that they support SI because of the Singularity Summit or to increase general interest in ambitious technology, rather than because they find "Friendliness theory" to be as important as SI does.

    Clear, on-record statements from these two supporters, specifically endorsing SI's arguments and the importance of developing Friendliness theory, would shift my views somewhat on this point.

  • Resistance to feedback loops. I discussed this issue in my 2011 interview with SI representatives and I still feel the same way on the matter. SI seems to have passed up opportunities to test itself and its own rationality by e.g. aiming for objectively impressive accomplishments. This is a problem because of (a) its extremely ambitious goals (among other things, it seeks to develop artificial intelligence and "Friendliness theory" before anyone else can develop artificial intelligence); (b) its view of its staff/supporters as having unusual insight into rationality, which I discuss in a later bullet point.

    SI's list of achievements is not, in my view, up to where it needs to be given (a) and (b). Yet I have seen no declaration that SI has fallen short to date, nor any explanation of what will be changed to deal with it. SI's recently released strategic plan and monthly updates are improvements from a transparency perspective, but they still leave me feeling as though there are no clear metrics or goals by which SI is committing to be measured (aside from very basic organizational goals such as "design a new website" and very vague goals such as "publish more papers") and as though SI places a low priority on engaging people who are critical of its views (or at least not yet on board), as opposed to people who are naturally drawn to it.

    I believe that one of the primary obstacles to being impactful as a nonprofit is the lack of the sort of helpful feedback loops that lead to success in other domains. I like to see groups that are making as much effort as they can to create meaningful feedback loops for themselves. I perceive SI as falling well short on this front. Pursuing more impressive endorsements and developing benign but objectively recognizable innovations (particularly commercially viable ones) are two possible ways to impose more demanding feedback loops. (I discussed both of these in my interview linked above).

  • Apparent poorly grounded belief in SI's superior general rationality. Many of the things that SI and its supporters and advocates say imply a belief that they have special insights into the nature of general rationality, and/or have superior general rationality, relative to the rest of the population. (Examples here, here and here). My understanding is that SI is in the process of spinning off a group dedicated to training people on how to have higher general rationality.

    Yet I'm not aware of any of what I consider compelling evidence that SI staff/supporters/advocates have any special insight into the nature of general rationality or that they have especially high general rationality.

    I have been pointed to the Sequences on this point. The Sequences (which I have read the vast majority of) do not seem to me to be a demonstration or evidence of general rationality. They are about rationality; I find them very enjoyable to read; and there is very little they say that I disagree with (or would have disagreed with before I read them). However, they do not seem to demonstrate rationality on the part of the writer, any more than a series of enjoyable, not-obviously-inaccurate essays on the qualities of a good basketball player would demonstrate basketball prowess. I sometimes get the impression that fans of the Sequences are willing to ascribe superior rationality to the writer simply because the content seems smart and insightful to them, without making a critical effort to determine the extent to which the content is novel, actionable and important. 

    I endorse Eliezer Yudkowsky's statement, "Be careful … any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility." To me, the best evidence of superior general rationality (or of insight into it) would be objectively impressive achievements (successful commercial ventures, highly prestigious awards, clear innovations, etc.) and/or accumulation of wealth and power. As mentioned above, SI staff/supporters/advocates do not seem particularly impressive on these fronts, at least not as much as I would expect for people who have the sort of insight into rationality that makes it sensible for them to train others in it. I am open to other evidence that SI staff/supporters/advocates have superior general rationality, but I have not seen it.

    Why is it a problem if SI staff/supporter/advocates believe themselves, without good evidence, to have superior general rationality? First off, it strikes me as a belief based on wishful thinking rather than rational inference. Secondly, I would expect a series of problems to accompany overconfidence in one's general rationality, and several of these problems seem to be actually occurring in SI's case:

    • Insufficient self-skepticism given how strong its claims are and how little support its claims have won. Rather than endorsing "Others have not accepted our arguments, so we will sharpen and/or reexamine our arguments," SI seems often to endorse something more like "Others have not accepted our arguments because they have inferior general rationality," a stance less likely to lead to improvement on SI's part.
    • Being too selective (in terms of looking for people who share its preconceptions) when determining whom to hire and whose feedback to take seriously.
    • Paying insufficient attention to the limitations of the confidence one can have in one's untested theories, in line with my Objection 1.
  • Overall disconnect between SI's goals and its activities. SI seeks to build FAI and/or to develop and promote "Friendliness theory" that can be useful to others in building FAI. Yet it seems that most of its time goes to activities other than developing AI or theory. Its per-person output in terms of publications seems low. Its core staff seem more focused on Less Wrong posts, "rationality training" and other activities that don't seem connected to the core goals; Eliezer Yudkowsky, in particular, appears (from the strategic plan) to be focused on writing books for popular consumption. These activities seem neither to be advancing the state of FAI-related theory nor to be engaging the sort of people most likely to be crucial for building AGI.

    A possible justification for these activities is that SI is seeking to promote greater general rationality, which over time will lead to more and better support for its mission. But if this is SI's core activity, it becomes even more important to test the hypothesis that SI's views are in fact rooted in superior general rationality - and these tests don't seem to be happening, as discussed above.

  • Theft. I am bothered by the 2009 theft of $118,803.00 (as against a $541,080.00 budget for the year). In an organization as small as SI, it really seems as though theft that large relative to the budget shouldn't occur and that it represents a major failure of hiring and/or internal controls.

    In addition, I have seen no public SI-authorized discussion of the matter that I consider to be satisfactory in terms of explaining what happened and what the current status of the case is on an ongoing basis. Some details may have to be omitted, but a clear SI-authorized statement on this point with as much information as can reasonably be provided would be helpful.

A couple positive observations to add context here:

  • I see significant positive qualities in many of the people associated with SI. I especially like what I perceive as their sincere wish to do whatever they can to help the world as much as possible, and the high value they place on being right as opposed to being conventional or polite. I have not interacted with Eliezer Yudkowsky but I greatly enjoy his writings.
  • I'm aware that SI has relatively new leadership that is attempting to address the issues behind some of my complaints. I have a generally positive impression of the new leadership; I believe the Executive Director and Development Director, in particular, to represent a step forward in terms of being interested in transparency and in testing their own general rationality. So I will not be surprised if there is some improvement in the coming years, particularly regarding the last couple of statements listed above. That said, SI is an organization and it seems reasonable to judge it by its organizational track record, especially when its new leadership is so new that I have little basis on which to judge these staff.

Wrapup

While SI has produced a lot of content that I find interesting and enjoyable, it has not produced what I consider evidence of superior general rationality or of its suitability for the tasks it has set for itself. I see no qualifications or achievements that specifically seem to indicate that SI staff are well-suited to the challenge of understanding the key AI-related issues and/or coordinating the construction of an FAI. And I see specific reasons to be pessimistic about its suitability and general competence.

When estimating the expected value of an endeavor, it is natural to have an implicit "survivorship bias" - to use organizations whose accomplishments one is familiar with (which tend to be relatively effective organizations) as a reference class. Because of this, I would be extremely wary of investing in an organization with apparently poor general competence/suitability to its tasks, even if I bought fully into its mission (which I do not) and saw no other groups working on a comparable mission.

But if there's even a chance …

A common argument that SI supporters raise with me is along the lines of, "Even if SI's arguments are weak and its staff isn't as capable as one would like to see, their goal is so important that they would be a good investment even at a tiny probability of success."

I believe this argument to be a form of Pascal's Mugging and I have outlined the reasons I believe it to be invalid in two posts (here and here). There have been some objections to my arguments, but I still believe them to be valid. There is a good chance I will revisit these topics in the future, because I believe these issues to be at the core of many of the differences between GiveWell-top-charities supporters and SI supporters.

Regardless of whether one accepts my specific arguments, it is worth noting that the most prominent people associated with SI tend to agree with the conclusion that the "But if there's even a chance …" argument is not valid. (See comments on my post from Michael Vassar and Eliezer Yudkowsky as well as Eliezer's interview with John Baez.)

Existential risk reduction as a cause

I consider the general cause of "looking for ways that philanthropic dollars can reduce direct threats of global catastrophic risks, particularly those that involve some risk of human extinction" to be a relatively high-potential cause. It is on the working agenda for GiveWell Labs and we will be writing more about it.

However, I don't consider "Cause X is the one I care about and Organization Y is the only one working on it" to be a good reason to support Organization Y. For donors determined to donate within this cause, I encourage you to consider donating to a donor-advised fund while making it clear that you intend to grant out the funds to existential-risk-reduction-related organizations in the future. (One way to accomplish this would be to create a fund with "existential risk" in the name; this is a fairly easy thing to do and one person could do it on behalf of multiple donors.)

For one who accepts my arguments about SI, I believe withholding funds in this way is likely to be better for SI's mission than donating to SI - through incentive effects alone (not to mention my specific argument that SI's approach to "Friendliness" seems likely to increase risks).

How I might change my views

My views are very open to revision.

However, I cannot realistically commit to read and seriously consider all comments posted on the matter. The number of people capable of taking a few minutes to write a comment is sufficient to swamp my capacity. I do encourage people to comment and I do intend to read at least some comments, but if you are looking to change my views, you should not consider posting a comment to be the most promising route.

Instead, what I will commit to is reading and carefully considering up to 50,000 words of content that are (a) specifically marked as SI-authorized responses to the points I have raised; (b) explicitly cleared for release to the general public as SI-authorized communications. In order to consider a response "SI-authorized and cleared for release," I will accept explicit communication from SI's Executive Director or from a majority of its Board of Directors endorsing the content in question. After 50,000 words, I may change my views and/or commit to reading more content, or (if I determine that the content is poor and is not using my time efficiently) I may decide not to engage further. SI-authorized content may improve or worsen SI's standing in my estimation, so unlike with comments, there is an incentive to select content that uses my time efficiently. Of course, SI-authorized content may end up including excerpts from comment responses to this post, and/or already-existing public content.

I may also change my views for other reasons, particularly if SI secures more impressive achievements and/or endorsements.

One more note: I believe I have read the vast majority of the Sequences, including the AI-foom debate, and that this content - while interesting and enjoyable - does not have much relevance for the arguments I've made.

Again: I think that whatever happens as a result of my post will be positive for SI's mission, whether or not it is positive for SI as an organization. I believe that most of SI's supporters and advocates care more about the former than about the latter, and that this attitude is far too rare in the nonprofit world.

Acknowledgements

Thanks to the following people for reviewing a draft of this post and providing thoughtful feedback (this of course does not mean they agree with the post or are responsible for its content): Dario Amodei, Nick Beckstead, Elie Hassenfeld, Alexander Kruel, Tim Ogden, John Salvatier, Jonah Sinick, Cari Tuna, Stephanie Wykstra.

Comments (1262)

Comment author: Rain 10 May 2012 08:03:13PM * 32 points

I completely agree with the intent of this post. These are all important issues SI should officially answer. (Edit: SI's official reply is here.) Here are some of my thoughts:

  • I completely agree with objection 1. I think SI should look into doing exactly as you say. I also feel that friendliness has a very high failure chance and that all SI can accomplish is a very low marginal decrease in existential risk. However, I feel this is the result of existential risk being so high and difficult to overcome (Great Filter) rather than SI being so ineffective. As such, for them to engage this objection is to admit defeatism and millennialism, and so they put it out of mind since they need motivation to keep soldiering on despite the sure defeat.

  • Objection 2 is interesting, though you define AGI differently, as you say. Some points against it: Only one AGI needs to be in agent mode to realize existential risk, even if there are already billions of tool-AIs running safely. Tool-AI seems closer in definition to narrow AI, which you point out we already have lots of, and are improving. It's likely that very advanced tool-AIs will indeed be the first to achieve some measure of AGI capability. SI uses AGI to mean agent-AI precisely because at some point someone will move beyond narrow/tool-AI into agent-AI. AGI doesn't "have to be an agent", but there will likely be agent-AI at some point. I don't see a means to limit all AGI to tool-AI in perpetuity.

  • 'Race for power' should be expanded to 'incentivised agent-AI'. There exist great incentives to create agent-AI above tool-AI, since AGI will be tireless, ever watchful, supremely faster, smarter, its answers not necessarily understood, etc. These include economic incentives, military incentives, etc. - not even just to be first to implement, but to be better/faster on practical everyday events.

  • Objection 3, I mostly agree. Though should tool-AIs achieve such power, they can be used as weapons to realize existential risk, similar to nuclear, chemical, bio-, and nanotechnological advances.

  • I think this post focuses too much on "Friendliness theory". As Zack_M_Davis stated, SIAI should have more appropriately been called "The Singularity Institute For or Against Artificial Intelligence Depending on Which Seems to Be a Better Idea Upon Due Consideration". Friendliness is one word which could encapsulate a basket of possible outcomes, and they're agile enough to change position should it be shown to be necessary, as some of your comments request. Maybe SI should make tool-AI a clear stepping stone to friendliness, or at least a clear possible avenue worth exploring. Agreed.

  • Much agreed re: feedback loops.

  • "Kind of organization": painful but true.

However, I don't consider "Cause X is the one I care about and Organization Y is the only one working on it" to be a good reason to support Organization Y. For donors determined to donate within this cause, I encourage you to consider donating to a donor-advised fund while making it clear that you intend to grant out the funds to existential-risk-reduction-related organizations in the future. (One way to accomplish this would be to create a fund with "existential risk" in the name; this is a fairly easy thing to do and one person could do it on behalf of multiple donors.) For one who accepts my arguments about SI, I believe withholding funds in this way is likely to be better for SI's mission than donating to SI - through incentive effects alone (not to mention my specific argument that SI's approach to "Friendliness" seems likely to increase risks).

Good advice; I'll look into doing this. One reason I've been donating to them is so they can keep the lights on long enough to see and heed this kind of criticism. Maybe those incentives weren't appropriate.

This post limits my desire to donate additional money to SI beyond previous commitments. I consider it a landmark in SI criticism. Thank you for engaging this very important topic.

Edit: After SI's replies and careful consideration, I decided to continue donating directly to them, as they have a very clear roadmap for improvement and still represent the best value in existential risk reduction.

Comment author: khafra 11 May 2012 02:20:10PM 7 points

You're an accomplished and proficient philanthropist; if you do make steps in the direction of a donor-directed existential risk fund, I'd like to see them written about.

Comment author: Eliezer_Yudkowsky 15 May 2012 05:49:19PM 30 points

Reading Holden's transcript with Jaan Tallinn (trying to go over the whole thing before writing a response, due to having done Julia's Combat Reflexes unit at Minicamp and realizing that the counter-mantra 'If you respond too fast you may lose useful information' was highly applicable to Holden's opinions about charities), I came across the following paragraph:

My understanding is that once we figured out how to get a computer to do arithmetic, computers vastly surpassed humans at arithmetic, practically overnight ... doing so didn't involve any rewriting of their own source code, just implementing human-understood calculation procedures faster and more reliably than humans can. Similarly, if we reached a good enough understanding of how to convert data into predictions, we could program this understanding into a computer and it would overnight be far better at predictions than humans - while still not at any point needing to be authorized to rewrite its own source code, make decisions about obtaining "computronium" or do anything else other than plug data into its existing hardware and algorithms and calculate and report the likely consequences of different courses of action

I've been previously asked to evaluate this possibility a few times, but I think the last time I did was several years ago, and when I re-evaluated it today I noticed that my evaluation had substantially changed in the interim due to further belief shifts in the direction of "Intelligence is not as computationally expensive as it looks" - constructing a non-self-modifying predictive super-human intelligence might be possible on the grounds that human brains are just that weak. It would still require a great feat of cleanly designed, strong-understanding-math-based AI - Holden seems to think this sort of development would happen naturally with the sort of AGI researchers we have nowadays, and I wish he'd spent a few years arguing with some of them to get a better picture of how unlikely this is. Even if you write and run algorithms and they're not self-modifying, you're still applying optimization criteria to things like "have the humans understand you", and doing inductive learning has a certain inherent degree of program-creation to it. You would need to have done a lot of "the sort of thinking you do for Friendly AI" to set out to create such an Oracle and not have it kill your planet.

Nonetheless, I think after further consideration I would end up substantially increasing my expectation that if you have some moderately competent Friendly AI researchers, they would apply their skills to create a (non-self-modifying) (but still cleanly designed) Oracle AI first - that this would be permitted by the true values of "required computing power" and "inherent difficulty of solving problem directly", and desirable for reasons I haven't yet thought through in much detail - and so by Conservation of Expected Evidence I am executing that update now.

Flagging and posting now so that the issue doesn't drop off my radar.

Comment author: jsteinhardt 18 May 2012 03:05:21PM 9 points [-]

Holden seems to think this sort of development would happen naturally with the sort of AGI researchers we have nowadays, and I wish he'd spent a few years arguing with some of them to get a better picture of how unlikely this is.

While I can't comment on AGI researchers, I think you underestimate e.g. more mainstream AI researchers such as Stuart Russell and Geoff Hinton, or cognitive scientists like Josh Tenenbaum, or even more AI-focused machine learning people like Andrew Ng, Daphne Koller, Michael Jordan, Dan Klein, Rich Sutton, Judea Pearl, Leslie Kaelbling, and Leslie Valiant (and this list is no doubt incomplete). They might not be claiming that they'll have AI in 20 years, but that's likely because they are actually grappling with the relevant issues and therefore see how hard the problem is likely to be.

Not that it strikes me as completely unreasonable that we would have a major breakthrough that gives us AI in 20 years, but it's hard to see what the candidate would be. But I have only been thinking about these issues for a couple years, so I still maintain a pretty high degree of uncertainty about all of these claims.

I do think I basically agree with you re: inductive learning and program creation, though. When you say non-self-modifying Oracle AI, do you also mean that the Oracle AI doesn't get to do inductive learning? Because I suspect that inductive learning of some sort is fundamentally necessary, for reasons that you yourself nicely outline here.

Comment author: Eliezer_Yudkowsky 18 May 2012 10:11:15PM *  12 points [-]

I agree that top mainstream AI guy Peter Norvig was way the heck more sensible than the reference class of declared "AGI researchers" when I talked to him about FAI and CEV, and that estimates should be substantially adjusted accordingly.

Comment author: Eliezer_Yudkowsky 15 May 2012 05:58:32PM 13 points [-]

Jaan's reply to Holden is also correct:

... the oracle is, in principle, powerful enough to come up with self-improvements, but refrains from doing so because there are some protective mechanisms in place that control its resource usage and/or self-reflection abilities. i think devising such mechanisms is indeed one of the possible avenues for safety research that we (eg, organisations such as SIAI) can undertake. however, it is important to note the inherent instability of such system -- once someone (either knowingly or as a result of some bug) connects a trivial "master" program with a measurable goal to the oracle, we have a disaster in our hands. as an example, imagine a master program that repeatedly queries the oracle for best packets to send to the internet in order to minimize the oxygen content of our planet's atmosphere.

Obviously you wouldn't release the code of such an Oracle - given code and understanding of the code it would probably be easy, possibly trivial, to construct some form of FOOM-going AI out of the Oracle!
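
To make the fragility concrete, here is a minimal sketch (in Python, with entirely hypothetical query_oracle and send_packet functions standing in for whatever interface a real Oracle and network stack would actually expose) of the kind of trivial "master" program Jaan describes, which turns a passive question-answerer into an optimizing agent:

```python
# Hypothetical illustration only: query_oracle() and send_packet() stand in for
# whatever interface a real Oracle AI and network stack would actually expose.

def run_master_loop(query_oracle, send_packet):
    """A trivial 'master' program: repeatedly ask the oracle which packet, if
    sent to the internet, best advances a fixed measurable goal, then send it.
    The oracle itself only answers questions; this thin wrapper is what makes
    the combined system an agent."""
    goal = "minimize the oxygen content of Earth's atmosphere"
    while True:
        packet = query_oracle(
            f"Which single packet, if sent now, would most effectively {goal}?"
        )
        send_packet(packet)
```

The point of the sketch is that the dangerous part is not the Oracle's own code but the few lines of glue around it, which anyone with access could add either deliberately or through a bug.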

Comment author: kalla724 17 May 2012 01:11:41AM 7 points [-]

Hm. I must be missing something. No, I haven't read all the sequences in detail, so if these are silly, basic, questions - please just point me to the specific articles that answer them.

You have an Oracle AI that is, say, a trillionfold better at taking existing data and producing inferences.

1) This Oracle AI produces inferences. It still needs to test those inferences (i.e. perform experiments) and get data that allow the next inferential cycle to commence. Without experimental feedback, the inferential chain will quickly either expand into an infinity of possibilities (i.e. beyond anything that any physically possible intelligence can consider), or it will deviate from reality. The general intelligence is only as good as the data its inferences are based upon.

Experiments take time, data analysis takes time. No matter how efficient the inferential step may become, this puts an absolute limit to the speed of growth in capability to actually change things.

2) The Oracle AI that "goes FOOM" while confined to a server cloud would somehow have to create servitors capable of acting out its desires in the material world. Otherwise, you have a very angry and very impotent AI. If you increase a person's intelligence a trillionfold and then enclose them in a sealed concrete cell, they will never get out; their intelligence can calculate all possible escape solutions, but none will actually work.

Do you have a plausible scenario for how a "FOOM"-ing AI could - no matter how intelligent - minimize the oxygen content of our planet's atmosphere, or accomplish anything similar? After all, it's not like we have any fully-automated nanobot production factories that could be hijacked.

Comment author: Eliezer_Yudkowsky 17 May 2012 08:35:04PM 13 points [-]
Comment author: private_messaging 16 May 2012 11:01:29AM *  6 points [-]

"Intelligence is not as computationally expensive as it looks"

How sure are you that your intuitions do not arise from the typical mind fallacy, i.e. from attributing the great discoveries and inventions of mankind to the same processes that you feel running in your own skull, processes which have not yet resulted in any great novel discoveries or inventions that I know of?

I know this sounds like an ad hominem, but since your intuitions are significantly influenced by your internal understanding of your own thought process, your self-esteem stands hostage to be shot through by many of the possible counterarguments and corrections. (Self-esteem is one hell of a bulletproof hostage, though, and tends to act more as a shield for bad beliefs.)

It would still require a great feat of cleanly designed, strong-understanding-math-based AI - Holden seems to think this sort of development would happen naturally with the sort of AGI researchers we have nowadays

There are a lot of engineers working on software for solving engineering problems, including software that generates and tests possible designs and looks for ways to make better computers. Your philosophy-based, natural-language-defined, in-imagination-running Oracle AI may have to be very carefully specified so that it does not kill imaginary mankind, and it may well be very difficult to build such a specification. Just don't confuse it with the software written to solve definable problems.

Ultimately, figuring out how to make a better microchip involves a lot of testing of various designs; that's how humans do it, and that's how tools do it. I don't know how you think it is done. The performance is the result of a very complex function of the design. To build a design that performs, you need to invert this ultra-complicated function, which is done by a mixture of analytical methods and iteration over possible input values, and unless P=NP we have very little reason to expect any fundamentally better solutions (and even if P=NP there may still not be any). This means the AGI won't have any edge over practical software, and won't out-foom it.
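
As a rough illustration of "analytical methods plus iteration over possible input values", here is a minimal sketch; the toy simulate_performance function is a made-up stand-in for the expensive evaluation of a real candidate design, and simple hill-climbing stands in for the general shape of loop that both human engineers and design tools spend most of their effort in.

```python
import random

def simulate_performance(design):
    # Toy stand-in for an expensive, complex evaluation (e.g. circuit simulation).
    # In reality this is the "ultra-complicated function" of the design.
    return -sum((x - 0.5) ** 2 for x in design)

def perturb(design, step=0.05):
    # Propose a small random variation of an existing design.
    return [x + random.uniform(-step, step) for x in design]

def search_designs(initial_design, iterations=1000):
    """Hill-climbing over candidate designs: propose a variation, keep it if
    simulated performance improves. Analytical insight narrows the proposals;
    iteration and testing do the rest."""
    best = initial_design
    best_score = simulate_performance(best)
    for _ in range(iterations):
        candidate = perturb(best)
        score = simulate_performance(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Usage: start from a random 8-parameter "design" and improve it iteratively.
print(search_designs([random.random() for _ in range(8)]))
```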

Comment author: jacob_cannell 15 May 2012 09:23:08AM *  9 points [-]

I'm glad for this; LessWrong can always use more engaging critiques of substance. I partially agree with Holden's conclusions, although I reach them by a substantially different route. I'm a little surprised, then, that few of the replies have directly engaged what I find to be the more obvious flaws in Holden's argument: namely objection 2 and its inherent contradictions with objection 1.

Holden posits that many (most?) well-known current AI applications more or less operate as sophisticated knowledge bases. His tool/agent distinction draws a boundary around AI tools: systems whose only external actions consist of communicating results to humans, with the rest being agents which actually plan and execute actions with external side effects. Holden distinguishes 'tool' AI from Oracle AI, the latter really being agent AI (designed for autonomy) which is trapped in some sort of box. Accepting Holden's terminology and tool/agent distinction, he then asserts:

  1. That 'tool' AGI already is and will continue to be the dominant type of AI system.
  2. That AGI running in tool mode will "be extraordinarily useful but far more safe than an AGI running in agent mode."

I can accept that any AGI running in 'tool' mode will be far safer than an AGI running in agent mode (although perhaps still not completely safe), but I believe Holden critically overestimates the domain and potential of 'tool' AGI, given his distinction.

It is true that many well-known current AI systems operate as sophisticated knowledge tools rather than agents. Search engines such as Google are the first example Holden lists, but I haven't heard many people refer to search engines as AGIs.

In fact, having the capability to act in the world and learn from the history of such interactions is a crucial component of many AGI architectures, and perhaps of all with the potential for human-level general intelligence. One could certainly remove the AGI's capacity for action at a later date: in Holden's terminology this would be switching the AGI from agent mode to tool mode. If we were using more everyday terminology we might as well call this paralyzing the AGI.

Yes, switching an existing agent AGI into 'tool' mode (paralyzing it) certainly solves most safety issues regarding that particular agent, but this is far from a global panacea. Being less charitable, I would say it adds little of substance to discussions of AI existential risk. It's much like saying "but we can simply disable the nukes!" (And it's potentially even less effective than the analogy implies, because superpowerful unsafe agent AIs may not be so easy to 'switch' into 'tool' mode, to put it mildly.)

After Google, Holden's next examples of primarily 'tool' mode AI are Siri and Watson. Siri is actually an agent in Holden's terminology: it can execute some web tasks in its limited set of domains. This may be a small percentage of its current usage, but I don't expect that to hold true for its future descendants.

What Holden fails to mention are relevant examples of the many agent AI systems we already have today, and what tomorrow may bring.

The world of financial trading is already dominated by AI agents, and this particular history is most telling. Decades ago, when computers were very weak, they were used as simple tools to evaluate financial models which in turn were just one component of a human agent's overall trading strategy. As computers grew in power and became integrally connected to financial networks, they began to take on more and more book-keeping actions, and eventually people started using computers to execute entire simple trading strategies on their own (albeit with much hands-on supervision). Fast forward to 2012 and we now have the majority of trades executed by automated and increasingly autonomous trading systems. They are still under the supervision of human caretakers, but as they grow in complexity this increasingly becomes a nominal role.

There is a vast, profitable micro-realm where these agents trade on lightning-fast millisecond timescales; an economic niche that humans literally cannot enter. It's an alien environment, and we have been engineering and evolving alien agents to inhabit and exploit it for us.

Someone with only a basic familiarity with software development may imagine that software is something humans design, write, and build according to their specifications. That is only partially true, and more so for smaller projects.

The other truth, perhaps a deeper wisdom, is that large software systems evolve. Huge software systems are too massive to be designed or even fully understood by individual human minds, so their development follows a trajectory that is perhaps better understood in evolutionary terms. This is already the cause of much concern in the context of operating systems and security, which itself is only a small taste of the safety issues in a future world dominated by large, complex evolved agent systems.

It is true that there are many application domains where agents have had little impact as of yet, but this just shows us the niches that agents will eventually occupy.

At the end of the day, we need only compare the ultimate economic value of a tool versus an agent. What fraction of current human occupations can be considered 'tool' jobs versus 'agent' jobs? Agents which drive cars or execute financial trades are just the beginning; the big opportunities are in agents which autonomously build software systems, design aircraft, perform biology research, and so on. Systems such as Siri and Watson today mainly function as knowledge tools, but we can expect that their descendants will eventually occupy a full range of human jobs, most of which involve varying degrees of autonomous agent behavior.

Consider the current domain of software development. What does a tool-mode AGI really add here in a world that already has Google and the web? A tool-mode AGI could take a vague design and a set of business constraints and output a detailed design or perhaps even an entire program, but that program would still need to be tested, and you might as well automate that. And most large software systems consist of ecologies of interacting programs: web crawlers, databases, planning/logistics systems, and so on, where most of the 'actions' are already automated rather than assigned to humans.

As another example consider the 'lights-out' automated factory. The foundries that produce microchips are becoming increasingly autonomous systems, as is the front side design industry. If we extrapolate that to the future ...

The IBM of tomorrow may well consist of a small, lucky pool of human stockholders reaping the benefits of a vast army of Watson's future descendants, who have gone on to replace all the expensive, underperforming human employees of our time. International Business Machines, indeed: a world where everything from the low-level hardware and foundry machinery up to the software and even business strategy is designed and built by complex ecologies of autonomous software agents. That seems to be not only where we are heading, but, to some limited degree, where we already are.

Thus I find it highly unlikely that tool mode AI is and will be the dominant paradigm, as Holden asserts. Moreover, his argument really depends on tool mode being dominant by a significant margin. If agent AI makes up even 5% of the market at some future date, it could still contribute an unacceptable majority of the risk.

Comment author: lukeprog 10 May 2012 09:24:19PM *  60 points [-]

Update: My full response to Holden is now here.

As Holden said, I generally think that Holden's objections for SI "are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI)," and we are working hard to fix both categories of issues.

In this comment I would merely like to argue for one small point: that the Singularity Institute is undergoing comprehensive changes — changes which I believe to be improvements that will help us to achieve our mission more efficiently and effectively.

Holden wrote:

I'm aware that SI has relatively new leadership that is attempting to address the issues behind some of my complaints. I have a generally positive impression of the new leadership; I believe the Executive Director and Development Director, in particular, to represent a step forward in terms of being interested in transparency and in testing their own general rationality. So I will not be surprised if there is some improvement in the coming years...

Louie Helm was hired as Director of Development in September 2011. I was hired as a Research Fellow that same month, and made Executive Director in November 2011. Below are some changes made since September. (Pardon the messy presentation: LW cannot correctly render tables in comments.)

SI before Sep. 2011: Very few peer-reviewed research publications.
SI today: More peer-reviewed publications coming in 2012 than in all past years combined. Additionally, I alone have a dozen papers in development, for which I am directing every step of research and writing, and will write the final draft, but am collaborating with remote researchers so as to put in only 5%-20% of the total hours required myself.

SI before Sep. 2011: No donor database / a very broken one.
SI today: A comprehensive donor database.

SI before Sep. 2011: Nearly all work performed directly by SI staff.
SI today: Most work outsourced to remote collaborators so that SI staff can focus on the things that only they can do.

SI before Sep. 2011: No strategic plan.
SI today: A strategic plan developed with input from all SI staff, and approved by the Board.

SI before Sep. 2011: Very little communication about what SI is doing.
SI today: Monthly progress reports, plus three Q&As with Luke about SI research and organizational development.

SI before Sep. 2011: No list of the research problems SI is working on.
SI today: A long, fully-referenced list of research problems SI is working on.

SI before Sep. 2011: Very little direct management of staff and projects.
SI today: Luke monitors all projects and staff work, and meets regularly with each staff member.

SI before Sep. 2011: Almost no detailed tracking of the expense of major SI projects (e.g. Summit, papers, etc.). The sole exception seems to be that Amy was tracking the costs of the 2011 Summit in NYC.
SI today: Detailed tracking of the expense of major SI projects for which this is possible (Luke has a folder in Google docs for these spreadsheets, and the summary spreadsheet is shared with the Board).

SI before Sep. 2011: No staff worklogs.
SI today: All staff members share their worklogs with Luke, Luke shares his worklog with all staff plus the Board.

SI before Sep. 2011: Best practices not followed for bookkeeping/accounting; accountant's recommendations ignored.
SI today: Meetings with consultants about bookkeeping/accounting; currently working with our accountant to implement best practices and find a good bookkeeper.

SI before Sep. 2011: Staff largely separated, many of them not well-connected to the others.
SI today: After a dozen or so staff dinners, staff much better connected, more of a team.

SI before Sep. 2011: Want to see the basics of AI Risk explained in plain language? Read The Sequences (more than a million words) or this academic book chapter by Yudkowsky.
SI today: Want to see the basics of AI Risk explained in plain language? Read Facing the Singularity (now in several languages, with more being added) or listen to the podcast version.

SI before Sep. 2011: Very few resources created to support others' research in AI risk.
SI today: IntelligenceExplosion.com, Friendly-AI.com, list of open problems in the field, with references, AI Risk Bibliography 2012, annotated list of journals that may publish papers on AI risk, a partial history of AI risk research, and a list of forthcoming and desired articles on AI risk.

SI before Sep. 2011: A hard-to-navigate website with much outdated content.
SI today: An entirely new website that is easier to navigate and has much new content (nearly complete; should launch in May or June).

SI before Sep. 2011: So little monitoring of funds that $118k was stolen in 2010 before SI noticed. (Note that we have won stipulated judgments to get much of this back, and have upcoming court dates to argue for stipulated judgments to get the rest back.)
SI today: Our bank accounts have been consolidated, with 3-4 people regularly checking over them.

SI before Sep. 2011: SI publications exported straight to PDF from Word or Google Docs, sometimes without even author names appearing.
SI today: All publications being converted into a slick, usable LaTeX template (example), with all references checked and put into a central BibTeX file.

SI before Sep. 2011: No write-up of our major public technical breakthrough (TDT) using the mainstream format and vocabulary comprehensible to most researchers in the field (this is what we have at the moment).
SI today: Philosopher Rachael Briggs, whose papers on decision theory have been twice selected for the Philosopher's Annual, has been contracted to write an explanation of TDT and publish it in one of a select few leading philosophy journals.

SI before Sep. 2011: No explicit effort made toward efficient use of SEO or our (free) Google Adwords.
SI today: Highly optimized use of Google Adwords to direct traffic to our sites; currently working with SEO consultants to improve our SEO (of course, the new website will help).

(Just to be clear, I think this list shows not that "SI is looking really great!" but instead that "SI is rapidly improving and finally reaching a 'basic' level of organizational function.")

Comment author: lukeprog 11 May 2012 02:54:28AM *  21 points [-]

...which is not to say, of course, that things were not improving before September 2011. It's just that the improvements have accelerated quite a bit since then.

For example, Amy was hired in December 2009 and is largely responsible for these improvements:

  • Built a "real" Board and officers; launched monthly Board meetings in February 2010.
  • Began compiling monthly financial reports in December 2010.
  • Began tracking Summit expenses and seeking Summit sponsors.
  • Played a major role in canceling many programs and expenses that were deemed low ROI.
Comment author: STL 11 May 2012 04:25:54AM *  9 points [-]

Our bank accounts have been consolidated, with 3-4 people regularly checking over them.

In addition to reviews, should SI implement a two-man rule for manipulating large quantities of money? (For example, over 5k, over 10k, etc.)
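
For concreteness, a two-man rule for payments is easy to state precisely. A minimal sketch (hypothetical threshold and names, no relation to SI's actual banking arrangements):

```python
THRESHOLD_USD = 5_000  # example threshold; could be 10k, etc.

def payment_authorized(amount_usd, approvers):
    """Allow a payment only if it is below the threshold, or has been
    signed off by at least two distinct people."""
    if amount_usd < THRESHOLD_USD:
        return True
    return len(set(approvers)) >= 2

# Usage: a $12,000 payment needs two different approvers.
assert not payment_authorized(12_000, ["amy"])
assert payment_authorized(12_000, ["amy", "luke"])
```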

Comment author: JoshuaFox 17 May 2012 03:12:28PM 4 points [-]

As a supporter and donor to SI since 2006, I can say that I had a lot of specific criticisms of the way that the organization was managed. The points Luke lists above were among them. I was surprised that on many occasions management did not realize the obvious problems and fix them.

But the current management is now recognizing many of these points and resolving them one by one, as Luke says. If this continues, SI's future looks good.

Comment author: army1987 11 May 2012 08:18:32AM *  5 points [-]

I was hired as a Research Fellow that same month

Luke alone has a dozen papers in development

Why did you start referring to yourself in the first person and then change your mind? (Or am I missing something?)

Comment author: lukeprog 11 May 2012 08:20:33AM *  9 points [-]

Brain fart: now fixed.

Comment author: army1987 11 May 2012 08:27:14AM *  18 points [-]

(Why was this downvoted? If it's because the downvoter wants to see fewer brain farts, they're doing it wrong, because the message such a downvote actually conveys is that they want to see fewer acknowledgements of brain farts. Upvoted back to 0, anyway.)

Comment author: siodine 11 May 2012 01:35:22PM 4 points [-]

Isn't this very strong evidence in support of Holden's point about "Apparent poorly grounded belief in SI's superior general rationality" (excluding Luke, at least)? And especially this?

Comment author: lukeprog 11 May 2012 08:13:20PM *  18 points [-]

This topic is something I've been thinking about lately. Do SIers tend to have superior general rationality, or do we merely escape a few particular biases? Are we good at rationality, or just good at "far mode" rationality (aka philosophy)? Are we good at epistemic but not instrumental rationality? (Keep in mind, though, that rationality is only a ceteris paribus predictor of success.)

Or, pick a more specific comparison. Do SIers tend to be better at general rationality than someone who can keep a small business running for 5 years? Maybe the tight feedback loops of running a small business are better rationality training than "debiasing interventions" can hope to be.

Of course, different people are more or less rational in different domains, at different times, in different environments.

This isn't an idle question about labels. My estimate of the scope and level of people's rationality in part determines how much I update from their stated opinion on something. How much evidence for Hypothesis X (about organizational development) is it when Eliezer gives me his opinion on the matter, as opposed to when Louie gives me his opinion on the matter? When Person B proposes to take on a totally new kind of project, I think their general rationality is a predictor of success — so, what is their level of general rationality?

Comment author: ghf 11 May 2012 10:38:10PM *  5 points [-]

My hope is that the upcoming deluge of publications will answer this objection, but for the moment, I am unclear as to the justification for the level of resources being given to SIAI researchers.

Additionally, I alone have a dozen papers in development, for which I am directing every step of research and writing, and will write the final draft, but am collaborating with remote researchers so as to put in only 5%-20% of the total hours required myself.

This level of freedom is the dream of every researcher on the planet. Yet it's unclear why these resources should be devoted to your projects. While I strongly believe that the current academic system is broken, you are asking for a level of support granted to top researchers before having made any original breakthroughs yourself.

If you can convince people to give you that money, wonderful. But until you have made at least some serious advancement to demonstrate your case, donating seems like an act of faith.

It's impressive that you all have found a way to hack the system and get paid to develop yourselves as researchers outside of the academic system and I will be delighted to see that development bear fruit over the coming years. But, at present, I don't see evidence that the work being done justifies or requires that support.

Comment author: lukeprog 11 May 2012 10:48:13PM 8 points [-]

This level of freedom is the dream of every researcher on the planet. Yet, it's unclear why these resources should be devoted to your projects.

Because some people like my earlier papers and think I'm writing papers on the most important topic in the world?

It's impressive that you all have found a way to hack the system and get paid to develop yourselves as researchers outside of the academic system...

Note that this isn't uncommon. SI is far from the only think tank with researchers who publish in academic journals. Researchers at private companies do the same.

Comment author: ghf 11 May 2012 11:15:03PM *  13 points [-]

First, let me say that, after re-reading, I think that my previous post came off as condescending/confrontational which was not my intent. I apologize.

Second, after thinking about this for a few minutes, I realized that some of the reason your papers seem so fluffy to me is that they argue what I consider to be obvious points. In my mind, of course we are likely "to develop human-level AI before 2100." Because of that, I may have tended to classify your work as outreach more than research.

But outreach is valuable. And, so that we can factor out the question of the independent contribution of your research, having people associated with SIAI with the publications/credibility to be treated as experts has gigantic benefits in terms of media multipliers (being the people who get called on for interviews, panels, etc). So, given that, I can see a strong argument for publication support being valuable to the overall organization goals regardless of any assessment of the value of the research.

Note that this isn't uncommon. SI is far from the only think tank with researchers who publish in academic journals. Researchers at private companies do the same.

My only point was that, in those situations, usually researchers are brought in with prior recognized achievements (or, unfortunately all too often, simply paper credentials). SIAI is bringing in people who are intelligent but unproven and giving them the resources reserved for top talent in academia or industry. As you've pointed out, one of the differences with SIAI is the lack of hoops to jump through.

Edit: I see you commented below that you view your own work as summarization of existing research and we agree on the value of that. Sorry that my slow typing speed left me behind the flow of the thread.

Comment author: Bugmaster 11 May 2012 10:53:44PM 2 points [-]

Researchers at private companies do the same.

It's true at my company, at least. There are quite a few papers out there authored by the researchers at the company where I work. There are several good business reasons for a company to invest time into publishing a paper; positive PR is one of them.

Comment author: Eliezer_Yudkowsky 11 May 2012 05:00:20AM 4 points [-]

And note that these improvements would not and could not have happened without more funding than the level of previous years - if, say, everyone had been waiting to see these kinds of improvements before funding.

Comment author: lukeprog 11 May 2012 08:13:02AM *  53 points [-]

note that these improvements would not and could not have happened without more funding than the level of previous years

Really? That's not obvious to me. Of course you've been around for all this and I haven't, but here's what I'm seeing from my vantage point...

Recent changes that cost very little:

  • Donor database
  • Strategic plan
  • Monthly progress reports
  • A list of research problems SI is working on (it took me 16 hours to write)
  • IntelligenceExplosion.com, Friendly-AI.com, AI Risk Bibliography 2012, annotated list of journals that may publish papers on AI risk, a partial history of AI risk research, and a list of forthcoming and desired articles on AI risk (each of these took me only 10-25 hours to create)
  • Detailed tracking of the expenses for major SI projects
  • Staff worklogs
  • Staff dinners (or something that brought staff together)
  • A few people keeping their eyes on SI's funds so theft would be caught sooner
  • Optimization of Google Adwords

Stuff that costs less than some other things SI has spent money on, such as funding Ben Goertzel's AGI research or renting downtown Berkeley apartments for the later visiting fellows:

  • Research papers
  • Management of staff and projects
  • Rachael Briggs' TDT write-up
  • Best-practices bookkeeping/accounting
  • New website
  • LaTeX template for SI publications; references checked and then organized with BibTeX
  • SEO

Do you disagree with these estimates, or have I misunderstood what you're claiming?

Comment author: David_Gerard 12 May 2012 06:37:08PM *  19 points [-]

A lot of charities go through this pattern before they finally work out how to transition from a board-run/individual-run tax-deductible band of conspirators to being a professional staff-run organisation tuned to doing the particular thing they do. The changes required seem simple and obvious in hindsight, but it's a common pattern for it to take years, so SIAI has been quite normal, or at the very least not been unusually dumb.

(My evidence is seeing this pattern close-up in the Wikimedia Foundation, Wikimedia UK (the first attempt at which died before managing it, the second making it through barely) and the West Australian Music Industry Association, and anecdotal evidence from others. Everyone involved always feels stupid at having taken years to achieve the retrospectively obvious. I would be surprised if this aspect of the dynamics of nonprofits had not been studied.)

edit: Luke's recommendation of The Nonprofit Kit For Dummies looks like precisely the book all the examples I know of needed to have someone throw at them before they even thought of forming an organisation to do whatever it is they wanted to achieve.

Comment author: Eliezer_Yudkowsky 12 May 2012 04:04:19AM 18 points [-]

Things that cost money:

  • Amy Willey
  • Luke Muehlhauser
  • Louie Helm
  • CfAR
  • trying things until something worked
Comment author: lukeprog 14 May 2012 10:07:06AM 62 points [-]

I don't think this response supports your claim that these improvements "would not and could not have happened without more funding than the level of previous years."

I know your comment is very brief because you're busy at minicamp, but I'll reply to what you wrote, anyway: Someone of decent rationality doesn't just "try things until something works." Moreover, many of the things on the list of recent improvements don't require an Amy, a Luke, or a Louie.

I don't even have past management experience. As you may recall, I had significant ambiguity aversion about the prospect of being made Executive Director, but as it turned out, the solution to almost every problem X has been (1) read what the experts say about how to solve X, (2) consult with people who care about your mission and have solved X before, and (3) do what they say.

When I was made Executive Director and phoned our Advisors, most of them said "Oh, how nice to hear from you! Nobody from SingInst has ever asked me for advice before!"

That is the kind of thing that makes me want to say that SingInst has "tested every method except the method of trying."

Donor database, strategic plan, staff worklogs, bringing staff together, expenses tracking, funds monitoring, basic management, best-practices accounting/bookkeeping... these are all literally from the Nonprofits for Dummies book.

Maybe these things weren't done for 11 years because SI's decision-makers did make good plans but failed to execute them due to the usual defeaters. But that's not the history I've heard, except that some funds monitoring was insisted upon after the large theft, and a donor database was sorta-kinda-not-really attempted at one point. The history I've heard is that SI failed to make these kinds of plans in the first place, failed to ask advisors for advice, failed to read Nonprofits for Dummies, and so on.

Money wasn't the barrier to doing many of those things, it was a gap in general rationality.

I will agree, however, that what is needed now is more money. We are rapidly becoming a more robust and efficient and rational organization, stepping up our FAI team recruiting efforts, stepping up our transparency and accountability efforts, and stepping up our research efforts, and all those things cost money.

At the risk of being too harsh… When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. (And I'm not the only SIer who felt this way at the time.)

But now I do feel comfortable asking people to donate to SingInst. I'm excited about our trajectory and our team, and if we can raise enough support then we might just have a shot at winning after all.

Comment author: Eliezer_Yudkowsky 21 May 2012 04:29:45AM 30 points [-]

Luke has just told me (personal conversation) that what he got from my comment was, "SIAI's difficulties were just due to lack of funding" which was not what I was trying to say at all. What I was trying to convey was more like, "I didn't have the ability to run this organization, and knew this - people who I hoped would be able to run the organization, while I tried to produce in other areas (e.g. turning my back on everything else to get a year of FAI work done with Marcello or writing the Sequences) didn't succeed in doing so either - and the only reason we could hang on long enough to hire Luke was that the funding was available nonetheless and in sufficient quantity that we could afford to take risks like paying Luke to stay on for a while, well before we knew he would become Executive Director".

Comment author: MarkusRamikin 14 May 2012 03:41:32PM 24 points [-]

You're allowed to say these things on the public Internet?

I just fell in love with SI.

Comment author: lukeprog 26 May 2012 12:33:50AM *  18 points [-]

You're allowed to say these things on the public Internet?

Well, at our most recent board meeting I wasn't fired, reprimanded, or even questioned for making these comments, so I guess I am. :)

Comment author: Benquo 14 May 2012 02:21:30PM *  17 points [-]

This makes me wonder... What "for dummies" books should I be using as checklists right now? Time to set a 5-minute timer and think about it.

Comment author: ghf 11 May 2012 10:06:54PM *  7 points [-]

And note that these improvements would not and could not have happened without more funding than the level of previous years

Given the several-year lag between funding increases and the listed improvements, it appears that this was less the result of a prepared plan and more a process of underutilized resources attracting a mix of parasites (the theft) and talent (hopefully the more recent staff additions).

Which goes towards a critical question in terms of future funding: is SIAI primarily constrained in its mission by resources or competence?

Of course, the related question is: what is SIAI's mission? Someone donating primarily for AGI research might not count recent efforts (LW, rationality camps, etc) as improvements.

What should a potential donor expect from money invested into this organization going forward? Internally, what are your metrics for evaluation?

Edited to add: I think that the spin-off of the rationality efforts is a good step towards answering these questions.

Comment author: Wei_Dai 13 May 2012 12:31:40PM 22 points [-]

I find it unfortunate that none of the SIAI research associates have engaged very deeply in this debate, even LessWrong regulars like Nesov and cousin_it. This is part of the reason I was reluctant to accept (and ultimately declined) when SI invited me to become a research associate: I would feel less free to speak up both in support of SI and in criticism of it.

I don't think this is SI's fault, but perhaps there are things it could do to lessen this downside of the research associate program. For example, it could explicitly encourage the research associates to publicly criticize SI and to disagree with its official positions, and make it clear that no associate will be blamed if someone mistakes their statements for official SI positions or sees them as reflecting badly on SI in general. I also write this comment because just being consciously aware of this bias (in favor of staying silent) may help to counteract it.

Comment author: cousin_it 13 May 2012 01:20:16PM *  8 points [-]

Not sure about the others, but as for me, at some point this spring I realized that talking about saving the world makes me really upset and I'm better off avoiding the whole topic.

Comment author: Wei_Dai 13 May 2012 07:07:17PM 10 points [-]

Would it upset you to talk about why talking about saving the world makes you upset?

Comment author: homunq 14 May 2012 07:23:40PM 3 points [-]

It would appear that cousin_it believes we're screwed. It's tempting to argue that this would, overall, be an argument against the effectiveness of the SI program. However, that's probably not true, because we could be 99% screwed and the remaining 1% could depend on SI; this would be a depressing fact, yet it would still justify supporting SI.

(Personally, I agree with the poster about the problems with SI, but I'm just laying it out. Responding to weidai rather than cousinit because I don't want to upset the latter unnecessarily.)

Comment author: cousin_it 13 May 2012 08:06:38PM 4 points [-]

Yes.

Comment author: Vladimir_Nesov 13 May 2012 01:43:22PM *  17 points [-]

I don't usually engage in potentially protracted debates lately. A very short summary of my disagreement with the object-level argument part of Holden's post: (1) I don't see in what way the idea of a powerful Tool AI can be usefully different from that of Oracle AI, and it seems like the connotations of "Tool AI" that distinguish it from "Oracle AI" follow from an implicit sense of it not having too much optimization power, so it might be impossible for a Tool AI to both be powerful and hold the characteristics suggested in the post; (1a) the description of Tool AI denies it goals/intentionality and other such properties, but I don't see what those mean apart from optimization power, and so I don't know how to use them to characterize Tool AI; (2) the potential danger of having a powerful Tool/Oracle AI around is such that aiming at their development doesn't seem like a good idea; (3) I don't see how a Tool/Oracle AI could be sufficiently helpful to break the philosophical part of the FAI problem, since we don't even know which questions to ask.

Since Holden stated that he's probably not going to (interactively) engage the comments to this post, and writing this up in a self-contained way is a lot of work, I'm going to leave this task to the people who usually write up SingInst outreach papers.

Comment author: paulfchristiano 10 May 2012 05:16:26PM *  32 points [-]

Thanks for taking the time to express your views quite clearly--I think this post is good for the world (even with a high value on your time and SI's fundraising ability), and that norms encouraging this kind of discussion are a big public good.

I think the explicit objections 1-3 are likely to be addressed satisfactorily (in your judgment) by less than 50,000 words, and that this would provide a good opportunity for SI to present sharper versions of the core arguments---part of the problem with existing materials is certainly that it is difficult and unrewarding to respond to a nebulous and shifting cloud of objections. A lot of what you currently view as disagreements with SI's views may get shifted to doubts about SI being the right organization to back, which probably won't get resolved by 50,000 words.

Comment author: NancyLebovitz 11 May 2012 04:50:00PM 7 points [-]

I'd brought up a version of the tool/agent distinction, and was told firmly that people aren't smart or fast enough to direct an AI. (Sorry, this is from memory-- I don't have the foggiest how to do an efficient search to find that exchange.)

I'm not sure that's a complete answer-- how possible is it to augment a human towards being able to manage an AI? On the other hand, a human like that isn't going to be much like humans 1.0, so problems of Friendliness are still in play.

Perhaps what's needed is building akrasia into the world-- a resistance to sudden change. This has its own risks, but sudden existential threats are rare. [1]

At this point, I think the work on teaching rationality is more reliably important than the work on FAI. FAI involves some long inferential chains. The idea that people could improve their lives a lot by thinking more carefully about what they're doing and acting on those thoughts (with willingness to take feedback) is a much more plausible idea, even if you factor in the idea that rationality can be taught.

[1] Good enough for fiction-- we're already living in a world like that. We call the built-in akrasia Murphy.

Comment author: TheOtherDave 11 May 2012 06:05:03PM 7 points [-]

You may be thinking of this exchange, which I found only because I remembered having been involved in it.

I continue to think that "tool" is a bad term to use here, because people's understandings of what it refers to vary in relevant ways.

As for what is valuable work... hm.

I think teaching people to reason in truth-preserving and value-preserving ways is worth doing.
I think formalizing a decision theory that captures universal human intuitions about what the right thing to do is in various situations is worth doing.
I think formalizing a decision theory that captures non-universal but extant "right thing" intuitions is potentially worth doing, but requires a lot of auxiliary work to actually be worth doing.
I think formalizing a decision theory that arrives at judgments about the right thing to do in various situations where those judgments are counterintuitive for most/all humans but reliably lead, if implemented, to results that those same humans reliably endorse more than the results of their intuitive judgments is worth doing.
I think building systems that can solve real-world problems efficiently is worth doing, all else being equal, though I agree that powerful tools frequently have unexpected consequences that create worse problems than they solve, in which case it's not worth doing.
I think designing frameworks within which problem-solving systems can be built, such that the chances of unexpected negative consequences are lower inside that framework than outside of it, is worth doing.

I don't find it likely that SI is actually doing any of those things particularly more effectively than other organizations.

Comment author: NancyLebovitz 11 May 2012 06:59:24PM 2 points [-]

Thanks for the link-- that was what I was thinking of.

Do you have other organizations which teach rationality in mind? Offhand, the only thing I can think of is cognitive behavioral therapy, and it's not exactly an organization.

Comment author: Dolores1984 10 May 2012 07:26:04PM 7 points [-]

Leaving aside the question of whether Tool AI as you describe it is possible until I've thought more about it:

The idea of a "self-improving algorithm" intuitively sounds very powerful, but does not seem to have led to many "explosions" in software so far (and it seems to be a concept that could apply to narrow AI as well as to AGI).

Looking to the past for examples is a very weak heuristic here, since we have never dealt with software that could write code at a better than human level before. It's like saying, before the invention of the internal combustion engine, "faster horses have never let you cross oceans before." Same goes for the assumption that strong AI will resemble extremely narrow AI software tools that already exist in specific regards. It's evidence, but it's very weak evidence, and I for one wouldn't bet on it.

Comment author: p4wnc6 11 May 2012 10:54:05PM 6 points [-]

I am very happy to see this post and the subsequent dialogue. I've been talking with some people at Giving What We Can about volunteering (beginning in June) to do statistical work for them in trying to find effective ways to quantify and assess the impact of charitable giving specifically to organizations that work on mitigating existential risks. I hope to incorporate a lot of what is discussed here into my future work.

Comment author: Wei_Dai 11 May 2012 02:45:15AM 47 points [-]

Is it just me, or do Luke and Eliezer's initial responses appear to send the wrong signals? From the perspective of an SI critic, Luke's comment could be interpreted as saying "for us, not being completely incompetent is worth bragging about", and Eliezer's as "we're so arrogant that we've only taken two critics (including Holden) seriously in our entire history". These responses seem suboptimal, given that Holden just complained about SI's lack of impressive accomplishments and about its being too selective regarding whose feedback to take seriously.

Comment author: Will_Newsome 11 May 2012 03:47:13AM 19 points [-]

Eliezer's comment makes me think that you, specifically, should consider collecting your criticisms and putting them in Main where Eliezer is more likely to see them and take the time to seriously consider them.

Comment author: Nick_Beckstead 11 May 2012 03:56:21AM 49 points [-]

While I have sympathy with the complaint that SI's critics are inarticulate and often say wrong things, Eliezer's comment does seem to be indicative of the mistake Holden and Wei Dai are describing. Most extant presentations of SIAI's views leave much to be desired in terms of clarity, completeness, concision, accessibility, and credibility signals. This makes it harder to make high quality objections. I think it would be more appropriate to react to poor critical engagement more along the lines of "We haven't gotten great critics. That probably means that we need to work on our arguments and their presentation," and less along the lines of "We haven't gotten great critics. That probably means that there's something wrong with the rest of the world."

Comment author: ChrisHallquist 11 May 2012 04:04:08AM 24 points [-]

This. I've been trying to write something about Eliezer's debate with Robin Hanson, but the problem I keep running up against is that Eliezer's points are not clearly articulated at all. Even making my best educated guesses about what's supposed to go in the gaps in his arguments, I still ended up with very little.

Comment author: jacob_cannell 17 May 2012 09:04:05AM 4 points [-]

Have the key points of that 'debate' subsequently been summarized or clarified on LW? I found that debate exasperating in that Hanson and EY were mainly talking past each other and couldn't seem to home in on their core disagreements.

I know it generally has to do with hard takeoff / recursive self-improvement vs more gradual EM revolution, but that's not saying all that much.

Comment author: Kaj_Sotala 17 May 2012 07:13:22PM 13 points [-]

I'm in the process of writing a summary and analysis of the key arguments and points in that debate.

The most recent version runs at 28 pages - and that's just an outline.

Comment author: Nick_Beckstead 11 May 2012 05:11:05AM 5 points [-]

In fairness I should add that I think Luke M agrees with this assessment and is working on improving these arguments/communications.

Comment author: lukeprog 11 May 2012 07:21:31PM 8 points [-]

Agree with all this.

Comment author: magfrump 11 May 2012 04:50:01AM 7 points [-]

Luke's comment addresses the specific point that Holden made about changes in the organization given the change in leadership.

Holden said:

I'm aware that SI has relatively new leadership that is attempting to address the issues behind some of my complaints. I have a generally positive impression of the new leadership; I believe the Executive Director and Development Director, in particular, to represent a step forward in terms of being interested in transparency and in testing their own general rationality. So I will not be surprised if there is some improvement in the coming years, particularly regarding the last couple of statements listed above. That said, SI is an organization and it seems reasonable to judge it by its organizational track record, especially when its new leadership is so new that I have little basis on which to judge these staff.

Luke attempted to provide (for the reader) a basis on which to judge these staff members.

Eliezer's response was... characteristic of Eliezer? And also very short and coming at a busy time for him.

Comment author: lukeprog 11 May 2012 07:15:52PM 17 points [-]

Luke's comment could be interpreted as saying "for us, not being completely incompetent is worth bragging about"

Really? I personally feel pretty embarrassed by SI's past organizational competence. To me, my own comment reads more like "Wow, SI has been in bad shape for more than a decade. But at least we're improving very quickly."

Also, I very much agree with Beckstead on this: "Most extant presentations of SIAI's views leave much to be desired in terms of clarity, completeness, concision, accessibility, and credibility signals. This makes it harder to make high quality objections." And also this: "We haven't gotten great critics. That probably means that we need to work on our arguments and their presentation."

Comment author: Wei_Dai 11 May 2012 08:37:07PM 13 points [-]

Really?

Yes, I think it at least gives a bad impression to someone, if they're not already very familiar with SI and sympathetic to its cause. Assuming you don't completely agree with the criticisms that Holden and others have made, you should think about why they might have formed wrong impressions of SI and its people. Comments like the ones I cited seem to be part of the problem.

I personally feel pretty embarrassed by SI's past organizational competence. To me, my own comment reads more like "Wow, SI has been in bad shape for more than a decade. But at least we're improving very quickly."

That's good to hear, and thanks for the clarifications you added.

Comment author: ciphergoth 11 May 2012 06:34:15AM 4 points [-]

Are there other specific critiques you think should have made Eliezer's list, or is it that you think he should not have drawn attention to their absence?

Comment author: Wei_Dai 11 May 2012 07:39:41AM 26 points [-]

Are there other specific critiques you think should have made Eliezer's list, or is it that you think he should not have drawn attention to their absence?

Many of Holden's criticisms have been made by others on LW already. He quoted me in Objection 1. Discussion of whether Tool-AI and Oracle-AI are or are not safe have occurred numerous times. Here's one that I was involved in. Many people have criticized Eliezer/SI for not having sufficiently impressive accomplishments. Cousin_it and Silas Barta have questioned whether the rationality techniques being taught by SI (and now the rationality org) are really effective.

Comment author: Furcas 11 May 2012 03:15:54AM *  23 points [-]

Luke isn't bragging, he's admitting that SI was/is bad but pointing out it's rapidly getting better. And Eliezer is right, criticisms of SI are usually dumb. Could their replies be interpreted the wrong way? Sure, anything can be interpreted in any way anyone likes. Of course Luke and Eliezer could have refrained from posting those replies and instead posted carefully optimized responses engineered to send nothing but extremely appealing signals of humility and repentance.

But if they did turn themselves into politicians, we wouldn't get to read what they actually think. Is that what you want?

Comment author: Wei_Dai 11 May 2012 08:30:50AM *  27 points [-]

Luke isn't bragging, he's admitting that SI was/is bad but pointing out it's rapidly getting better.

But the accomplishments he listed (e.g., having a strategic plan, website redesign) are of the type that Holden already indicated to be inadequate. So why the exhaustive listing, instead of just giving a few examples to show SI is getting better and then either agreeing that they're not yet up to par, or giving an argument for why Holden is wrong? (The reason I think he could be uncharitably interpreted as bragging is that he would more likely exhaustively list the accomplishments if he was proud of them, instead of just seeing them as fixes to past embarrassments.)

And Eliezer is right, criticisms of SI are usually dumb.

I'd have no problem with "usually" but "all except two" seems inexcusable.

But if they did turn themselves into politicians, we wouldn't get to read what they actually think. Is that what you want?

Do their replies reflect their considered, endorsed beliefs, or were they just hurried remarks that may not say what they actually intended? I'm hoping it's the latter...

Comment author: Kaj_Sotala 11 May 2012 10:10:04AM *  36 points [-]

But the accomplishments he listed (e.g., having a strategic plan, website redesign) are of the type that Holden already indicated to be inadequate. So why the exhaustive listing, instead of just giving a few examples to show SI is getting better and then either agreeing that they're not yet up to par, or giving an argument for why Holden is wrong?

Presume that SI is basically honest and well-meaning, but possibly self-deluded. In other words, they won't outright lie to you, but they may genuinely believe that they're doing better than they really are, and cherry-pick evidence without realizing that they're doing so. How should their claims of intending to get better be evaluated?

Saying "we're going to do things better in the future" is some evidence about SI intending to do better, but rather weak evidence, since talk is cheap and it's easy to keep thinking that you're really going to do better soon but there's this one other thing that needs to be done first and we'll get started on the actual improvements tomorrow, honest.

Saying "we're going to do things better in the future, and we've fixed these three things so far" is stronger evidence, since it shows that you've already began fixing problems and might keep up with it. But it's still easy to make a few improvements and then stop. There are far more people who try to get on a diet, follow it for a while and then quit than there are people who actually diet for as long as they initially intended to do.

Saying "we're going to do things better in the future, and here's the list of 18 improvements that we've implemented so far" is much stronger evidence than either of the two above, since it shows that you've spent a considerable amount of effort on improvements over an extended period of time, enough to presume that you actually care deeply about this and will keep up with it.

I don't have a cite at hand, but it's been my impression that in a variety of fields, having maintained an activity for longer than some threshold amount of time is a far stronger predictor of keeping up with it than having maintained it for a shorter time. E.g. many people have thought about writing a novel and many people have written the first five pages of a novel. But when considering the probability of finishing, the difference between the person who's written the first 5 pages and the person who's written the first 50 pages is much bigger than the difference between the person who's written the first 100 pages and the person who's written the first 150 pages.

There's a big difference between managing some performance once, and managing sustained performance over an extended period of time. Luke's comment is far stronger evidence of SI managing sustained improvements over an extended period of time than a comment just giving a few examples of improvement.

Comment author: thomblake 11 May 2012 07:34:11PM 8 points [-]

I think it's unfair to take Eliezer's response as anything other than praise for this article. He noted already that he did not have time to respond properly.

And why even point out that a human's response to anything is "suboptimal"? It will be notable when a human does something optimal.

Comment author: faul_sname 11 May 2012 10:22:58PM 9 points [-]

We do, on occasion, come up with optimal algorithms for things. Also, "suboptimal" usually means "I can think of several better solutions off the top of my head", not "This solution is not maximally effective".

Comment author: ChrisHallquist 11 May 2012 03:58:27AM 5 points [-]

I read Luke's comment just as "I'm aware these are issues and we're working on it." I didn't read him as "bragging" about the ones that have been solved. Eliezer's... I see the problem with. I initially read it as just complimenting Holden on his high-quality article (which I agree was high-quality), but I can see it being read as a backhanded swipe at anyone else who's criticized SIAI.

Comment author: ciphergoth 11 May 2012 06:31:10AM 27 points [-]

Firstly, I'd like to add to the chorus saying that this is an incredible post; as a supporter of SI, it warms my heart to see it. I disagree with the conclusion - I would still encourage people to donate to SI - but if SI gets a critique this good twice a decade it should count itself lucky.

I don't think GiveWell making SI its top rated charity would be in SI's interests. In the long term, SI benefits hugely when people are turned on to the idea of efficient charity, and asking them to swallow all of the ideas behind SI's mission at the same time will put them off. If I ran GiveWell and wanted to give an endorsement to SI, I might break the rankings into multiple lists: the most prominent being VillageReach-like charities which directly do good in the near future, then perhaps a list for charities that mitigate broadly accepted and well understood existential risks (if this can be done without problems with politics), and finally a list of charities which mitigate more speculative risks.

Comment author: Wei_Dai 12 May 2012 08:11:19PM 5 points [-]

I don't think GiveWell making SI its top rated charity would be in SI's interests.

This seems like a good point and perhaps would have been a good reason for SI to not have approached GiveWell in the first place. At this point though, GiveWell is not only refusing to make SI a top rated charity, but actively recommending people to "withhold" funds from SI, which as far as I can tell, it almost never does. It'd be a win for SI to just convince GiveWell to put it back on the "neutral" list.

Comment author: ciphergoth 12 May 2012 08:19:15PM 2 points [-]

Agreed. Did SI approach GiveWell?

Comment author: Wei_Dai 12 May 2012 08:49:23PM 5 points [-]

Did SI approach GiveWell?

Yes. Hmm, reading that discussion shows that they were already thinking about having GiveWell create a separate existential risk category (and you may have gotten the idea there yourself and then forgotten the source).

Comment author: shminux 11 May 2012 07:29:45PM 5 points [-]

Given that much of the discussion revolves around the tool/agent issue, I'm wondering if anyone can point me to a mathematically precise definition of each, in whatever limited context it applies.

Comment author: Will_Newsome 11 May 2012 08:05:43PM *  10 points [-]

It's mostly a question for philosophy of mind, I think specifically a question about intentionality. I think the closest you'll get to a mathematical framework is control theory; controllers are a weird edge case between tools and very simple agents. Control theory is mathematically related to Bayesian optimization, which I think Eliezer believes is fundamental to intelligence: thus identifying cases where a controller is a tool or an agent would be directly relevant. But I don't see how the mathematics, or any mathematics really, could help you. It's possible that someone has mathematized arguments about intentionality by using information theory or some such; you could Google that. Even so, I think that at this point the ideas are imprecise enough that plain ol' philosophy is what we have to work with. Unfortunately, AFAIK very few people on LW are familiar with the relevant parts of philosophy of mind.

Comment author: shminux 11 May 2012 08:14:07PM *  8 points [-]

It is EY's announced intention to work toward an AI that is provably friendly. "Provably" means that said AI is defined in some mathematical framework first. I don't see how one can make much progress in that area before rigorously defining intentionality.

I guess I am getting ahead of myself here. What would a relevant mathematical framework entail, to begin with?

Comment author: Will_Newsome 11 May 2012 09:01:27PM *  7 points [-]

(It's possible that intentionality isn't the sharpest distinction between "tools" and "agents", but it's the one that I see most often emphasized in philosophy of mind, especially with regards to necessary preconditions for the development of any "strong AI".)

It seems that one could write an AI that is in some sense "provably Friendly" even while remaining agnostic as to whether the described AI is or will ultimately become a tool or an agent. It might be that a proposed AI couldn't be an agent because it couldn't solve the symbol grounding problem, i.e. because it lacked intentionality, and thus wouldn't be an effective FAI, but would nonetheless be Friendly in a certain limited sense. However if effectiveness is considered a requirement of Friendliness then one would indeed have to prove in advance that one's proposed AI could solve the grounding problem in order to prove that said AI was Friendly, or alternatively, prove that the grounding problem as such isn't a meaningful concept. I'm not sure what Eliezer would say about this; given his thinking about "outcome pumps" and so on, I doubt he thinks symbol grounding is a fundamental or meaningful problem, and so I doubt that he has or is planning to develop any formal argument that symbol grounding isn't a fundamental roadblock for his preferred attack on AGI.

I guess I am jumping the shark here. The shark in question being the framework itself. What would a relevant mathematical framework entail?

Your question about what a relevant mathematical framework would entail seems too vague for me to parse; my apologies, it's likely my exhaustion. But anyway, if minds leave certain characteristic marks on their environment by virtue of their having intentional (mental) states, then how precise and deep you can make your distinguishing mathematical framework depends on how sharp a cutoff there is in reality between intentional and non-intentional states. It's possible that the cutoff isn't sharp at all, in which case it's questionable whether the supposed distinction exists or is meaningful. If that's the case then it's quite possible that it's not possible to formulate a deep theory that could distinguish agents from tools, or intentional states from non-intentional ones. I think it likely that most AGI researchers, including Eliezer, hold the position that it is indeed impossible to do so. I don't think it would be possible to prove the non-existence of a sharp cutoff, so I think Eliezer could justifiably conclude that he didn't have to prove that his AI would be an "agent" or a "tool", because he could deny, even without mathematical justification, that such a distinction is meaningful.

I'm tired, apologies for any errors.

Comment author: dlthomas 11 May 2012 08:29:20PM 10 points [-]

I guess I am jumping the shark here.

I don't think that idiom means what you think it means.

Comment author: shminux 11 May 2012 08:35:20PM 3 points [-]

Thank you, fixed.

Comment author: quintopia 17 May 2012 06:23:07AM 3 points [-]

You were probably fishing for "jumping the gun".

Comment author: shminux 18 May 2012 12:55:36AM 3 points [-]

Yeah, should have been shooting instead of fishing.

Comment author: othercriteria 11 May 2012 09:15:16PM *  4 points [-]

Focusing on intentionality seems interesting since it lets us look at black box actors (whose agent-ness or tool-ness we don't have to carefully define) and ask if they are acting in an apparently goal-directed manner. I've just skimmed [1] and barely remember [2] but it looks like you can make the inference work in simple cases and also prove some intractability results.

Obviously, FAI can't be solved by just building some AI, modeling P(AI has goal "destroy humanity" | AI's actions, state of world) and pulling the plug when that number gets too high. But maybe something else of value can be gained from a mathematical formalization like this.

[1] I. Van Rooij, J. Kwisthout, M. Blokpoel, J. Szymanik, T. Wareham, and I. Toni, “Intentional communication: Computationally easy or difficult?,” Frontiers in Human Neuroscience, vol. 5, 2011.
[2] C. L. Baker, R. R. Saxe, and J. B. Tenenbaum, “Bayesian theory of mind: Modeling joint belief-desire attribution,” Proceedings of the Thirty-Second Annual Conference of the Cognitive Science Society, 2011.
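
As a toy illustration of the flavor of inference in [2] (the priors, likelihoods, and action names below are all invented; this is not the paper's actual model):

    # Toy belief-desire attribution: score competing goal hypotheses by how
    # well they predict an actor's observed actions. Everything here is a
    # made-up stand-in for illustration.

    PRIORS = {"fetch coffee": 0.99, "destroy humanity": 0.01}

    # P(action | goal), invented numbers.
    LIKELIHOODS = {
        "fetch coffee":     {"walk to kitchen": 0.6, "seize power plant": 0.001},
        "destroy humanity": {"walk to kitchen": 0.1, "seize power plant": 0.5},
    }

    def posterior(observed_actions):
        scores = {}
        for goal, prior in PRIORS.items():
            p = prior
            for action in observed_actions:
                p *= LIKELIHOODS[goal].get(action, 1e-9)
            scores[goal] = p
        total = sum(scores.values())
        return {goal: p / total for goal, p in scores.items()}

    print(posterior(["walk to kitchen"]))
    print(posterior(["walk to kitchen", "seize power plant"]))

This is essentially the "model P(AI has goal | actions, state of world) and pull the plug when the number gets too high" setup that the second paragraph rightly says is not, by itself, a solution.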

Comment author: Will_Newsome 11 May 2012 09:32:09PM 5 points [-]

Tenenbaum's papers and related inductive approaches to detecting agency were the first attacks that came to mind, but I'm not sure that such statistical evidence could even in principle supply the sort of proof-strength support and precision that shminux seems to be looking for. I suppose I say this because I doubt someone like Searle would be convinced that an AI had intentional states in the relevant sense on the basis that it displayed sufficiently computationally complex communication, because such intentionality could easily be considered derived intentionality and thus not proof of the AI's own agency. The point at which this objection loses its force unfortunately seems to be exactly the point at which you could actually run the AGI and watch it self-improve and so on, and so I'm not sure that it's possible to prove hypothetical-Searle wrong in advance of actually running a full-blown AGI. Or is my model wrong?

Comment author: Bugmaster 11 May 2012 11:42:31PM 3 points [-]

I am not sure if I agree with Holden that there's a meaningful distinction between tools and agents. However, one definition I could think of is this:

"A tool, unlike an agent, includes blocking human input in its perceive/decide/act loop."

Thus, an agent may work entirely autonomously, whereas a tool would wait for a human to make a decision before performing an action.

Of course, under this definition, Google's webcrawler would be an agent, not a tool -- which is one of the reasons I might disagree with Holden.
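
To make that definition concrete, here is a minimal sketch (all names and the toy "world" are invented) of the same perceive/decide/act loop run in the two modes; the only difference is a blocking human check between "decide" and "act":

    # Minimal sketch of the blocking-input definition above.

    WORLD = ["room is dark", "room is cold", "plant is dry"]

    def perceive(step):
        return WORLD[step % len(WORLD)]

    def decide(state):
        # Stand-in for arbitrarily sophisticated planning.
        return f"fix: {state}"

    def act(action):
        print(f"executing {action!r}")

    def run_as_agent(steps=3):
        for i in range(steps):
            act(decide(perceive(i)))              # acts autonomously

    def run_as_tool(steps=3):
        for i in range(steps):
            action = decide(perceive(i))
            print(f"proposed {action!r}")
            if input("approve? [y/n] ").strip().lower() == "y":
                act(action)                       # blocks on human input

    if __name__ == "__main__":
        run_as_agent()
        # run_as_tool()  # uncomment to try the blocking version interactively

On this carving, anything that runs the first loop unattended, a web crawler included, lands on the agent side, which is the awkward consequence noted above.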

Comment author: Nick_Beckstead 11 May 2012 11:29:35PM *  2 points [-]

I don't think anyone will be able to. Here is my attempt at a more precise definition than what we have on the table:

An agent models the world and selects actions in a way that depends on what its modeling says will happen if it selects a given action.

A tool may model the world, and may select actions depending on its modeling, but may not select actions in a way that depends on what its modeling says will happen if it selects a given action.

A consequence of this definition is that some very simple AIs that can be thought of as "doing something," such as some very simple checkers programs or a program that waters your plants if and only if its model says it didn't rain, would count as tools rather than agents. I think that is a helpful way of carving things up.
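
A toy rendering of that carving, using the plant-watering example from the comment (the numbers and the stand-in "model" are invented):

    # Sketch of the definitions above. The tool acts on what its model says
    # about the world; the agent selects the action whose predicted
    # consequences score best. All numbers are invented.

    def model_says_it_rained(sensor_reading):
        return sensor_reading > 0.5              # toy world-model

    def tool_waterer(sensor_reading):
        # Depends on the model's claim about the world, not on what the
        # model predicts would happen if it waters.
        return "skip" if model_says_it_rained(sensor_reading) else "water"

    def predict_outcome(action, sensor_reading):
        # Toy forward model: predicted plant health after taking `action`.
        rained = model_says_it_rained(sensor_reading)
        if action == "water":
            return 0.4 if rained else 0.9        # over-watering hurts a bit
        return 0.8 if rained else 0.2

    def agent_waterer(sensor_reading):
        # Selects the action whose *predicted consequence* scores highest.
        return max(["water", "skip"],
                   key=lambda a: predict_outcome(a, sensor_reading))

    for reading in (0.1, 0.9):
        print(reading, tool_waterer(reading), agent_waterer(reading))

In this toy the two happen to choose the same actions; the distinction lies entirely in how the choice is made, which is the point of the definition.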

Comment author: Wei_Dai 10 May 2012 10:44:59PM 13 points [-]

I agree with much of this post, but find a disconnect between the specific criticisms and the overall conclusion of withholding funds from SI even for "donors determined to donate within this cause", and even aside from whether SI's FAI approach increases risk. I see a couple of ways in which the conclusion might hold.

  1. SI is doing worse than they are capable of, due to wrong beliefs. Withholding funds provides incentive for them to do what you think is right, without having to change their beliefs. But this could lead to waste if people disagree in different directions, and funds end up sitting unused because SI can't satisfy everyone, or if SI thinks the benefit of doing what they think is optimal is greater than the value of extra funds they could get from doing what you think is best.
  2. A more capable organization already exists or will come up later and provide a better use of your money. This seems unlikely in the near future, given that we're already familiar with the "major players" in the existential risk area and based on past history, it doesn't seem likely that a new group of highly capable people would suddenly get interested in the cause. In the longer run, it's likely that many more people will be attracted to work in this area as time goes on and the threat of a bad-by-default Singularity becomes more obvious, but those people have the disadvantage of having less time for their work to take effect (which reduces the average value of donations), and there will probably also be many more willing donors than at this time (which reduces the marginal value of donations).

So neither of these ways to fill in the missing part of the argument seems very strong. I'd be interested to know what Holden's own thoughts are, or if anyone else can make stronger arguments on his behalf.

Comment author: Bugmaster 10 May 2012 11:04:10PM *  7 points [-]

Holden said,

However, I don't think that "Cause X is the one I care about and Organization Y is the only one working on it" to be a good reason to support Organization Y.

This addresses your point (2). Holden believes that SI is grossly inefficient at best, and actively harmful at worst (since he thinks that they might inadvertently increase AI risk). Therefore, giving money to SI would be counterproductive, and a donor would get a better return on investment in other places.

As for point (1), my impression is that Holden's low estimate of SI's competence is due to a combination of what he sees as wrong beliefs, as well as an insufficient capability to put even the correct beliefs into practice. SI claims to be supremely rational, but their list of achievements is lackluster at best -- which indicates a certain amount of the Dunning-Kruger effect going on. Furthermore, SI appears to be focused on growing SI and teaching rationality workshops, as opposed to their stated mission of researching FAI theory.

Additionally, Holden indicted SI members pretty strongly (though very politely) for what I will (in a less polite fashion) label as arrogance. The prevailing attitude of SI members seems to be (according to Holden) that the rest of the world is just too irrational to comprehend their brilliant insights, and therefore the rest of the world has little to offer -- and therefore, any criticism of SI's goals or actions can be dismissed out of hand.

EDIT: found the right quote, duh.

Comment author: TheOtherDave 10 May 2012 11:18:03PM 12 points [-]

If Holden believes that:
A) reducing existential risk is valuable, and
B) SI's effectiveness at reducing existential risk is a significant contributor to the future of existential risk, and
C) SI is being less effective at reducing existential risk than they would be if they fixed some set of problems P, and
D) withholding GiveWell's endorsement while pre-committing to re-evaluating that refusal if given evidence that P has been fixed increases the chances that SI will fix P...

...it seems to me that Holden should withhold GiveWell's endorsement while pre-committing to re-evaluating that refusal if given evidence that P has been fixed.

Which seems to be what he's doing. (Of course, I don't know whether those are his reasons.)

What, on your view, ought he do instead, if he believes those things?

Comment author: Wei_Dai 11 May 2012 12:36:02AM 5 points [-]

Holden must believe some additional relevant statements, because A-D (with "existential risk" suitably replaced) could be applied to every other charity, as presumably no charity is perfect.

I guess what I most want to know is what Holden thinks are the reasons SI hasn't already fixed the problems P. If it's lack of resources or lack of competence, then "withholding ... while pre-committing ..." isn't going to help. If it's wrong beliefs, then arguing seems better than "incentivizing", since that provides a permanent instead of temporary solution, and in the course of arguing you might find out that you're wrong yourself. What does Holden believe that causes him to think that providing explicit incentives to SI is a good thing to do?

Comment author: ciphergoth 11 May 2012 06:44:03AM 2 points [-]

Thanks for making this argument!

AFAICT charities generally have perverse incentives - to do what will bring in donations, rather than what will do the most good. Those incentives often cut against things like transparency, for example. So I think that when Holden says "don't donate to X yet", it's usually as part of an effort to make these incentives saner.

As it happens, I don't think this problem applies especially strongly to SI, but others may differ.

Comment author: shminux 10 May 2012 06:30:00PM *  57 points [-]

Wow, I'm blown away by Holden Karnofsky, based on this post alone. His writing is eloquent, non-confrontational and rational. It shows that he spent a lot of time constructing mental models of his audience and anticipated its reaction. Additionally, his intelligence/ego ratio appears to be through the roof. He must have learned a lot since the infamous astroturfing incident. This is the (type of) person SI desperately needs to hire.

Emotions out of the way, it looks like the tool/agent distinction is the main theoretical issue. Fortunately, it is much easier than the general FAI one. Specifically, to test the SI assertion that, paraphrasing Arthur C. Clarke,

Any sufficiently advanced tool is indistinguishable from an agent.

one ought to formulate and prove this as a theorem, and present it for review and improvement to the domain experts (the domain being math and theoretical computer science). If such a proof is constructed, it can then be further examined and potentially tightened, giving new insights to the mission of averting the existential risk from intelligence explosion.

If such a proof cannot be found, this will lend further weight to HK's assertion that SI appears to be poorly qualified to address its core mission.

Comment author: Eliezer_Yudkowsky 11 May 2012 12:06:50AM 29 points [-]

Any sufficiently advanced tool is indistinguishable from agent.

I shall quickly remark that I, myself, do not believe this to be true.

Comment author: Viliam_Bur 11 May 2012 03:07:19PM 5 points [-]

What exactly is the difference between a "tool" and an "agent", if we taboo the words?

My definition would be that "agent" has their own goals / utility functions (speaking about human agents, those goals / utility functions are set by evolution), while "tool" has a goal / utility function set by someone else. This distinction may be reasonable on a human level, "human X optimizing for human X's utility" versus "human X optimizing for human Y's utility", but on a machine level, what exactly is the difference between a "tool" that is ordered to reach a goal / optimize a utility function, and an "agent" programmed with the same goal / utility function?

Am I using a bad definition that misses something important? Or is there anything that prevents an "agent" from being reduced to a "tool" (perhaps a misconstructed tool) of the forces that created it? Or is it that all "agents" are "tools", but not all "tools" are "agents", because... why?

Comment author: chaosmage 14 May 2012 10:58:36AM 4 points [-]

How about this: An agent with a very powerful tool is indistinguishable from a very powerful agent.

Comment author: shminux 11 May 2012 12:22:18AM *  6 points [-]

Then objection 2 seems to hold:

AGI running in tool mode could be extraordinarily useful but far more safe than an AGI running in agent mode

unless I misunderstand your point severely (it happened once or twice before).

Comment author: Eliezer_Yudkowsky 11 May 2012 01:55:11AM 37 points [-]

It's complicated. A reply that's true enough and in the spirit of your original statement, is "Something going wrong with a sufficiently advanced AI that was intended as a 'tool' is mostly indistinguishable from something going wrong with a sufficiently advanced AI that was intended as an 'agent', because math-with-the-wrong-shape is math-with-the-wrong-shape no matter what sort of English labels like 'tool' or 'agent' you slap on it, and despite how it looks from outside using English, correctly shaping math for a 'tool' isn't much easier even if it "sounds safer" in English." That doesn't get into the real depths of the problem, but it's a start. I also don't mean to completely deny the existence of a safety differential - this is a complicated discussion, not a simple one - but I do mean to imply that if Marcus Hutter designs a 'tool' AI, it automatically kills him just like AIXI does, and Marcus Hutter is unusually smart rather than unusually stupid but still lacks the "Most math kills you, safe math is rare and hard" outlook that is implicitly denied by the idea that once you're trying to design a tool, safe math gets easier somehow. This is much the same problem as with the Oracle outlook - someone says something that sounds safe in English but the problem of correctly-shaped-math doesn't get very much easier.

Comment author: army1987 11 May 2012 08:22:12AM 27 points [-]

This sounds like it'd be a good idea to write a top-level post about it.

Comment author: lukeprog 11 May 2012 02:38:32AM *  9 points [-]

Though it's not as detailed and technical as many would like, I'll point readers to this bit of related reading, one of my favorites:

Yudkowsky (2011). Complex value systems are required to realize valuable futures.

Comment author: abramdemski 11 May 2012 04:53:27AM 6 points [-]

but I do mean to imply that if Marcus Hutter designs a 'tool' AI, it automatically kills him just like AIXI does

Why? Or, rather: Where do you object to the argument by Holden? (Given a query, the tool-AI returns an answer with a justification, so the plan for "cure cancer" can be checked to make sure it does not do so by killing or badly altering humans.)

Comment author: FeepingCreature 11 May 2012 12:27:08PM 4 points [-]

One trivial, if incomplete, answer is that to be effective, the Oracle AI needs to be able to answer the question "how do we build a better Oracle AI?" And in order to define "better" in that sentence in a way that causes our oracle to output a new design consistent with all the safeties we built into the original oracle, it needs to understand the intent behind the original safeties just as much as an agent-AI would.

Comment author: Cyan 11 May 2012 05:12:21PM *  15 points [-]

The real danger of Oracle AI, if I understand it correctly, is the nasty combination of (i) by definition, an Oracle AI has an implicit drive to issue predictions most likely to be correct according to its model, and (ii) a sufficiently powerful Oracle AI can accurately model the effect of issuing various predictions. End result: it issues powerfully self-fulfilling prophecies without regard for human values. Also, depending on how it's designed, it can influence the questions to be asked of it in the future so as to be as accurate as possible, again without regard for human values.

Comment author: ciphergoth 11 May 2012 05:34:49PM 7 points [-]

My understanding of an Oracle AI is that when answering any given question, that question consumes the whole of its utility function, so it has no motivation to influence future questions. However the primary risk you set out seems accurate. Countermeasures have been proposed, such as asking for an accurate prediction for the case where a random event causes the prediction to be discarded, but in that instance it knows that the question will be asked again of a future instance of itself.

Comment author: Vladimir_Nesov 11 May 2012 09:01:51PM *  10 points [-]

My understanding of an Oracle AI is that when answering any given question, that question consumes the whole of its utility function, so it has no motivation to influence future questions.

It could acausally trade with its other instances, so that a coordinated collection of many instances of predictors would influence the events so as to make each other's predictions more accurate.

Comment author: abramdemski 12 May 2012 05:53:28AM *  3 points [-]

However the primary risk you set out seems accurate.

(I assume you mean, self-fulfilling prophecies.)

In order to get these, it seems like you would need a very specific kind of architecture: one which considers the results of its actions on its utility function (set to "correctness of output"). This kind of architecture is not the likely architecture for a 'tool'-style system; the more likely architecture would instead maximize correctness without conditioning on its act of outputting those results.

Thus, I expect you'd need to specifically encode this kind of behavior to get self-fulfilling-prophecy risk. But I admit it's dependent on architecture.

(Edit-- so, to be clear: in cases where the correctness of the results depended on the results themselves, the system would have to predict its own results. Then if it's using TDT or otherwise has a sufficiently advanced self-model, my point is moot. However, again you'd have to specifically program these, and would be unlikely to do so unless you specifically wanted this kind of behavior.)
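
A toy contrast between the two architectures being described (the "world" here is invented: the audience partly herds toward whatever number is announced):

    # Sketch of the two prediction architectures discussed above.

    def outcome(announced):
        return 0.2 + 0.7 * announced   # exogenous part + herding effect

    def plain_predictor():
        # Estimates the exogenous process without modeling the effect of
        # its own announcement (the likelier 'tool'-style architecture).
        return 0.2

    def self_referential_predictor():
        # Searches for an announcement that is accurate *given that it is
        # announced*: a fixed point of `outcome`. Simple iteration suffices.
        p = 0.0
        for _ in range(200):
            p = outcome(p)
        return p

    for name, predict in [("plain", plain_predictor),
                          ("self-referential", self_referential_predictor)]:
        p = predict()
        print(f"{name}: announces {p:.3f}, realized outcome {outcome(p):.3f}")

Only the second, self-referential architecture ends up issuing a self-fulfilling prophecy, which is the comment's point: that conditioning would have to be built in deliberately.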

Comment author: Wei_Dai 13 May 2012 06:57:58PM 5 points [-]

When you say "Most math kills you" does that mean you disagree with arguments like these, or are you just simplifying for a soundbite?

Comment author: ewjordan 12 May 2012 06:21:29AM *  12 points [-]

Even if we accepted that the tool vs. agent distinction was enough to make things "safe", objection 2 still boils down to "Well, just don't build that type of AI!", which is exactly the same keep-it-in-a-box/don't-do-it argument that most normal people make when they consider this issue. I assume I don't need to explain to most people here why "We should just make a law against it" is not a solution to this problem, and I hope I don't need to argue that "Just don't do it" is even worse...

More specifically, fast-forward to 2080, when any college kid with $200 to spend (in equivalent 2012 dollars) can purchase enough computing power that even the dumbest AIXI approximation schemes are extremely effective, good enough that creating an AGI agent would be a week's work for any grad student who knew their stuff. Are you really comfortable living in that world with the idea that we rely on a mere gentleman's agreement not to make self-improving AI agents? There's a reason this is often viewed as an arms race: to a very real extent, the attempt to achieve Friendly AI is about building up a suitably powerful defense against unfriendly AI before someone (perhaps accidentally) unleashes one on us, and making sure that it's powerful enough to put down any unfriendly systems before they can match it.

From what I can tell, stripping away the politeness and cutting to the bone, the three arguments against working on friendly AI theory are essentially:

  • Even if you try to deploy friendly AGI, you'll probably fail, so why waste time thinking about it?
  • Also, you've missed the obvious solution, which I came up with after a short survey of your misguided literature: just don't build AGI! The "standard approach" won't ever try to create agents, so just leave them be, and focus on Norvig-style dumb-AI instead!
  • Also, AGI is just a pipe dream. Why waste time thinking about it? [1]

FWIW, I mostly agree with the rest of the article's criticisms, especially re: the organization's achievements and focus. There's a lot of room for improvement there, and I would take these criticisms very seriously.

But that's almost irrelevant, because this article argues against the core mission of SIAI, using arguments that have been thoroughly debunked and rejected time and time again here, though they're rarely dressed up this nicely. To some extent I think this proves the institute's failure in PR - here is someone who claims to have read most of the sequences, and yet this criticism basically amounts to a sexing-up of the gut-reaction arguments that even completely uninformed people make: AGI is probably a fantasy; even if it's not, you won't be able to control it; so let's just agree not to build it.

Or am I missing something new here?

[1] Alright, to be fair, this is not a great summary of point 3, which really says that specialized AIs might help us solve the AGI problem in a safer way, that a hard takeoff is "just a theory" and realistically we'll probably have more time to react and adapt.

Comment author: Eliezer_Yudkowsky 15 May 2012 08:01:18PM 7 points [-]

purchase enough computing power so that even the dumbest AIXI approximation schemes are extremely effective

There isn't that much computing power in the physical universe. I'm not sure even smarter AIXI approximations are effective on a moon-sized nanocomputer. I wouldn't fall over in shock if a sufficiently smart one did something effective, but mostly I'd expect nothing to happen. There's an awful lot that happens in the transition from infinite to finite computing power, and AIXI doesn't solve any of it.

Comment author: JoshuaZ 15 May 2012 08:06:09PM 3 points [-]

There isn't that much computing power in the physical universe. I'm not sure even smarter AIXI approximations are effective on a moon-sized nanocomputer.

Is there some computation or estimate behind these results? They don't seem unreasonable, but I'm not aware of any estimates of how efficient large-scale AIXI approximations are in practice. (Although attempted implementations suggest that empirically things are quite inefficient.)

Comment author: jsteinhardt 18 May 2012 02:05:21PM 4 points [-]

Naive AIXI is doing brute-force search through an exponentially large space. Unless the right Turing machine is 100 bits or less (which seems unlikely), Eliezer's claim seems pretty safe to me.

Most of mainstream machine learning is trying to solve search problems through spaces far tamer than the search space for AIXI, and achieving limited success. So it also seems safe to say that even pretty smart implementations of AIXI probably won't make much progress.
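
For a feel of the numbers involved, here is a back-of-the-envelope count of bit-string programs (not an estimate for any particular AIXI approximation):

    # Candidate programs of length <= n bits: 2^(n+1) - 1 of them.
    for n in (20, 50, 100, 200):
        count = 2 ** (n + 1) - 1
        print(f"programs up to {n:>3} bits: about {float(count):.2e}")

At 100 bits that is already on the order of 10^30 candidates before a single one has been run against the data.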

Comment author: MarkusRamikin 10 May 2012 07:59:51PM *  20 points [-]

Wow, I'm blown away by Holden Karnofsky, based on this post alone. His writing is eloquent, non-confrontational and rational. It shows that he spent a lot of time constructing mental models of his audience and anticipated its reaction. Additionally, his intelligence/ego ratio appears to be through the roof.

Agreed. I normally try not to post empty "me-too" replies; the upvote button is there for a reason. But now I feel strongly enough about it that I will: I'm very impressed with the good will and effort and apparent potential for intelligent conversation in HoldenKarnofsky's post.

Now I'm really curious as to where things will go from here. With how limited my understanding of AI issues is, I doubt a response from me would be worth HoldenKarnofsky's time to read, so I'll leave that to my betters instead of adding more noise. But yeah. Seeing SI ideas challenged in such a positive, constructive way really got my attention. Looking forward to the official response, whatever it might be.

Comment author: army1987 11 May 2012 08:34:24AM 5 points [-]

Agreed. I normally try not to post empty "me-too" replies; the upvote button is there for a reason. But now I feel strongly enough about it that I will: I'm very impressed with the good will and effort and apparent potential for intelligent conversation in HoldenKarnofsky's post.

“the good will and effort and apparent potential for intelligent conversation” is more information than an upvote, IMO.

Comment author: MarkusRamikin 11 May 2012 09:00:28AM *  2 points [-]

Right, I just meant shminux said more or less the same thing before me. So normally I would have just upvoted his comment.

Comment author: dspeyer 11 May 2012 02:47:26AM 6 points [-]

Any sufficiently advanced tool is indistinguishable from [an] agent.

Let's see if we can use concreteness to reason about this a little more thoroughly...

As I understand it, the nightmare looks something like this. I ask Google SuperMaps for the fastest route from NYC to Albany. It recognizes that computing this requires traffic information, so it diverts several self-driving cars to collect real-time data. Those cars run over pedestrians who were irrelevant to my query.

The obvious fix: forbid SuperMaps from altering anything outside of its own scratch data. It works with the data already gathered. Later a Google engineer might ask it what data would be more useful, or what courses of action might cheaply gather that data, but the engineer decides what if anything to actually do.

This superficially resembles a box, but there's no actual box involved. The AI's own code forbids plans like that.
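
A crude sketch of that "scratch data only" restriction (action names are invented, and a real restriction would have to be enforced far deeper than a plan filter):

    # Whitelist filter over proposed plan steps: the planner may read
    # existing data and write to its own scratch space, nothing else.

    PERMITTED_ACTIONS = {"read_traffic_archive", "read_map_tiles", "write_scratch"}

    def plan_is_permitted(plan):
        return all(step in PERMITTED_ACTIONS for step in plan)

    print(plan_is_permitted(["read_map_tiles", "read_traffic_archive"]))    # True
    print(plan_is_permitted(["divert_self_driving_cars", "write_scratch"])) # False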

But that's for a question-answering tool. Let's take another scenario:

I tell my super-intelligent car to take me to Albany as fast as possible. It sends emotionally manipulative emails to anyone else who would otherwise be on the road encouraging them to stay home.

I don't see an obvious fix here.

So the short answer seems to be that it matters what the tool is for. A purely question-answering tool would be extremely useful, but not as useful as a general purpose one.

Could humans with an oracular super-AI police the development and deployment of active super-AIs?

Comment author: shminux 11 May 2012 04:49:57AM 2 points [-]

I tell my super-intelligent car to take me to Albany as fast as possible. It sends emotionally manipulative emails to anyone else who would otherwise be on the road encouraging them to stay home.

I believe that HK's post explicitly characterizes anything active like this as having agency.

Comment author: Will_Sawin 11 May 2012 06:21:55AM 9 points [-]

I think the correct objection is something you can't quite see in Google Maps. If you program an AI to do nothing but output directions, it will do nothing but output directions. If those directions are for driving, you're probably fine. If those directions are big and complicated plans for something important, which you follow without really understanding why you're doing them (and this is where most of the benefits of working with an AGI will show up), then you could unknowingly take over the world using a sufficiently clever scheme.

Also note that it would be a lot easier for the AI to pull this off if you let it tell you how to improve its own design. If recursively self-improving AI blows other AI out of the water, then tool AI is probably not safe unless it is made ineffective.

This does actually seem like it would raise the bar of intelligence needed to take over the world somewhat. It is unclear how much. The topic seems to me to be worthy of further study/discussion, but not (at least not obviously) a threat to the core of SIAI's mission.

Comment author: Viliam_Bur 11 May 2012 03:16:32PM *  2 points [-]

If those directions are big and complicated plans for something important, which you follow without really understanding why you're doing them (and this is where most of the benefits of working with an AGI will show up), then you could unknowingly take over the world using a sufficiently clever scheme.

It also helps that Google Maps does not have general intelligence, so it does not include the user's reactions to its output, the user's consequent actions in the real world, etc. as variables in its model, variables which may influence the quality of the solution and therefore could (and should) be optimized (within constraints given by the user's psychology, etc.), if possible.

In short: Google Maps does not manipulate you, because it does not see you.

Comment author: drnickbone 11 May 2012 09:36:18AM 4 points [-]

This was my thought as well: an automated vehicle is in "agent" mode.

The example also demonstrates why an AI in agent mode is likely to be more useful (in many cases) than an AI in tool mode. Compare using Google maps to find a route to the airport versus just jumping into a taxi cab and saying "Take me to the airport". Since agent-mode AI has uses, it is likely to be developed.

Comment author: army1987 11 May 2012 08:13:25AM *  4 points [-]

Any sufficiently advanced tool is indistinguishable from an agent.

I have no strong intuition about whether this is true or not, but I do intuit that if it's true, the value of sufficiently for which it's true is so high it'd be nearly impossible to achieve it accidentally.

(On the other hand the blind idiot god did ‘accidentally’ make tools into agents when making humans, so... But after all that only happened once in hundreds of millions of years of ‘attempts’.)

Comment author: othercriteria 11 May 2012 01:04:24PM 3 points [-]

the blind idiot god did ‘accidentally’ make tools into agents when making humans, so... But after all that only happened once in hundreds of millions of years of ‘attempts’.

This seems like a very valuable point. In that direction, we also have the tens of thousands of cancers that form every day, military coups, strikes, slave revolts, cases of regulatory capture, etc.

Comment author: badger 10 May 2012 11:28:21PM 2 points [-]

If the tool/agent distinction exists for sufficiently powerful AI, then a theory of friendliness might not be strictly necessary, but still highly prudent.

Going from a tool-AI to an agent-AI is a relatively simple step of the entire process. If meaningful guarantees of friendliness turn out to be impossible, then security comes down to no one attempting to make an agent-AI when strong enough tool-AIs are available. Agency should be kept to a minimum, even with a theory of friendliness in hand, as Holden argues in objection 1. Guarantees are safeguards against the possibility of agency rather than a green light.

Comment author: private_messaging 11 May 2012 07:56:39AM 3 points [-]

Any sufficiently advanced tool is indistinguishable from an agent.

I do not think this is even true.

Comment author: David_Gerard 11 May 2012 02:00:03PM *  3 points [-]

I routinely try to turn sufficiently reliable tools into agents wherever possible, per this comment.

I suppose we could use a definition of "agent" that implied greater autonomy in setting its own goals. But there are useful definitions that don't.

Comment author: Wei_Dai 12 May 2012 07:35:37PM *  12 points [-]

Some comments on objections 1 and 2.

For example, when the comment says "the formalization of the notion of 'safety' used by the proof is wrong," it is not clear whether it means that the values the programmers have in mind are not correctly implemented by the formalization, or whether it means they are correctly implemented but are themselves catastrophic in a way that hasn't been anticipated.

Both (with the caveat that SI's plans are to implement an extrapolation procedure for the values, and not the values themselves).

Another way of putting this is that a "tool" has an underlying instruction set that conceptually looks like: "(1) Calculate which action A would maximize parameter P, based on existing data set D. (2) Summarize this calculation in a user-friendly manner, including what Action A is, what likely intermediate outcomes it would cause, what other actions would result in high values of P, etc."

I think such a Tool-AI will be much less powerful than an equivalent Agent-AI, due to the bottleneck of having to summarize its calculations in a human-readable form, and then waiting for the human to read and understand the summary and then make a decision. It's not even clear that the huge amounts of calculations that a Tool-AI might do in order to find optimal actions can be summarized in any useful way, or this process of summarization can be feasibly developed before others create Agent-AIs. (Edit: See further explanation of this problem here.) Of course you do implicitly acknowledge this:

Some have argued to me that humans are likely to choose to create agent-AGI, in order to quickly gain power and outrace other teams working on AGI. But this argument, even if accepted, has very different implications from SI's view. [...] It seems that the appropriate measures for preventing such a risk are security measures aiming to stop humans from launching unsafe agent-AIs, rather than developing theories or raising awareness of "Friendliness."

I do accept this argument (and have made similar arguments), except that I advocate trying to convince AGI researchers to slow down development of all types of AGI (including Tool-AI, which can be easily converted into Agent-AI), and don't think "security measures" are of much help without a world government that implements a police state to monitor what goes on in every computer. Convincing AGI researchers to slow down is also pointless without a simultaneous program to create a positive Singularity via other means. I've written more about my ideas here, here, and here.

Comment author: komponisto 12 May 2012 02:55:35AM 24 points [-]

Lack of impressive endorsements. [...] I feel that given the enormous implications of SI's claims, if it argued them well it ought to be able to get more impressive endorsements than it has. I have been pointed to Peter Thiel and Ray Kurzweil as examples of impressive SI supporters, but I have not seen any on-record statements from either of these people that show agreement with SI's specific views, and in fact (based on watching them speak at Singularity Summits) my impression is that they disagree.

This is key: they support SI despite not agreeing with SI's specific arguments. Perhaps you should, too, at least if you find folks like Thiel and Kurzweil sufficiently impressive.

In fact, this has always been roughly my own stance. The primary reason I think SI should be supported is not that their arguments for why they should be supported are good (although I think they are, or at least, better than you do). The primary reason I think SI should be supported is that I like what the organization actually does, and wish it to continue. The Less Wrong Sequences, Singularity Summit, rationality training camps, and even HPMoR and Less Wrong itself are all worth paying some amount of money for. Not to mention the general paying-of-attention to systematic rationality training, and to existential risks relating to future technology.

Strangely, the possibility of this kind of view doesn't seem to be discussed much, even though it is apparently the attitude of some of SI's most prominent supporters.

I furthermore have to say that to raise this particular objection seems to me almost to defeat the purpose of GiveWell. After all, if we could rely on standard sorts of prestige-indicators to determine where our money would be best spent, everybody would be spending their money in those places already, and "efficient charity" wouldn't be a problem for some special organization like yours to solve.

Comment author: ghf 13 May 2012 08:12:00PM *  13 points [-]

The primary reason I think SI should be supported is that I like what the organization actually does, and wish it to continue. The Less Wrong Sequences, Singularity Summit, rationality training camps, and even HPMoR and Less Wrong itself are all worth paying some amount of money for.

I think that my own approach is similar, but with a different emphasis. I like some of what they've done, so my question is how to encourage those pieces. This article was very helpful in prompting some thought into how to handle that. I generally break down their work into three categories:

  1. Rationality (minicamps, training, LW, HPMoR): Here I think they've done some very good work. Luckily, the new spinoff will allow me to support these pieces directly.

  2. Existential risk awareness (singularity summit, risk analysis articles): Here their record has been mixed. I think the Singularity Summit has been successful, other efforts less so but seemingly improving. I can support the Singularity Summit by continuing to attend and potentially donating directly if necessary (since it's been running positive in recent years, for the moment this does not seem necessary).

  3. Original research (FAI, timeless decision theory): This is the area where I do not find them to be at all effective. From what I've read, there seems a large disconnect between ambitions and capabilities. Given that I can now support the other pieces separately, this is why I would not donate generally to SIAI.

My overall view would be that, at present, there is no real organization to support. Rather there is a collection of talented people whose freedom to work on interesting things I'm supporting. Given that, I want to support those people where I think they are effective.

I find Eliezer in particular to be one of the best pop-science writers around (and I most assuredly do not mean that term as an insult). Things like the sequences or HPMoR are thought-provoking and worth supporting. I find the general work on rationality to be critically important and timely.

So, while I agree that much of the work being done is valuable, my conclusion has been to consider how to support that directly rather than SI in general.

Comment author: komponisto 13 May 2012 10:55:51PM 2 points [-]

I don't see how this constitutes a "different emphasis" from my own. Right now, SI is the way one supports the activities in question. Once the spinoff has finally spun off and can take donations itself, it will be possible to support the rationality work directly.

Comment author: ghf 13 May 2012 11:33:25PM 2 points [-]

The different emphasis comes down to your comment that:

...they support SI despite not agreeing with SI's specific arguments. Perhaps you should, too...

In my opinion, I can more effectively support those activities that I think are effective by not supporting SI. Waiting until the Center for Applied Rationality gets its tax-exempt status in place allows me to both target my donations and directly signal where I think SI has been most effective up to this point.

If they end up having short-term cashflow issues prior to that split, my first response would be to register for the next Singularity Summit a bit early since that's another piece that I wish to directly support.

Comment author: squelchtoad 12 May 2012 03:03:38AM *  11 points [-]

I furthermore have to say that to raise this particular objection seems to me almost to defeat the purpose of GiveWell. After all, if we could rely on standard sorts of prestige-indicators to determine where our money would be best spent, everybody would be spending their money in those places already, and "efficient charity" wouldn't be a problem for some special organization like yours to solve.

I think Holden seems to believe that Thiel and Kurzweil endorsing SIAI's UFAI-prevention methods would be more like a leading epidemiologist endorsing the malaria-prevention methods of the Against Malaria Foundation (AMF) than it would be like Celebrity X taking a picture with some children for the AMF. There are different kinds of "prestige-indicator," some more valuable to a Bayesian-minded charity evaluator than others.

Comment author: komponisto 12 May 2012 03:10:46AM 2 points [-]

I would still consider the leading epidemiologist's endorsement to be a standard sort of prestige-indicator. If an anti-disease charity is endorsed by leading epidemiologists, you hardly need GiveWell. (At least for the epidemiological aspects. The financial/accounting part may be another matter.)

Comment author: squelchtoad 12 May 2012 03:17:03AM *  3 points [-]

I would argue that this is precisely what GiveWell does in evaluating malaria charity. If the epidemiological consensus changed, and bednets were held to be an unsustainable solution (this is less thoroughly implausible than it might sound, though probably still unlikely), then even given the past success of certain bednet charities on all GiveWell's other criteria, GiveWell might still downgrade those charities. And don't underestimate the size of the gap between "a scientifically plausible mechanism for improving lives" and "good value in lives saved/improved per dollar." There are plenty of bednet charities, and there's a reason GiveWell recommends AMF and not, say, Nothing But Nets.

The endorsement, in other words, is about the plausibility of the mechanism, which is only one of several things to consider in donating to a charity, but it's the area in which a particular kind of expert endorsement is most meaningful.

Comment author: komponisto 12 May 2012 04:13:34AM *  3 points [-]

If the epidemiological consensus changed, and bednets were held to be an unsustainable solution...then even given the past success of certain bednet charities on all GiveWell's other criteria, GiveWell might still downgrade those charities.

As they should. But the point is that, in so doing, GiveWell would not be adding any new information not already contained in the epidemiological consensus (assuming they don't have privileged information about the latter).

And don't underestimate the size of the gap between "a scientifically plausible mechanism for improving lives" and "good value in lives saved/improved per dollar."

Indeed. The latter is where GiveWell enters the picture; it is their unique niche. The science itself, on the other hand, is not really their purview, as opposed to the experts. If GiveWell downgrades a charity solely because of the epidemiological consensus, and (for some reason) I have good reason to think the epidemiological consensus is wrong, or inadequately informative, then GiveWell hasn't told me anything, and I have no reason to pay attention to them. Their rating is screened off.

Imagine that 60% of epidemiologists think that Method A is not effective against Disease X, while 40% think it is effective. Suppose Holden goes to a big conference of epidemiologists and says "GiveWell recommends against donating to Charity C because it uses Method A, which the majority of epidemiologists say is not effective." Assuming they already knew Charity C uses Method A, should they listen to him?

Of course not. The people at the conference are all epidemiologists themselves, and those in the majority are presumably already foregoing donations to Charity C, while those in the minority already know that the majority of their colleagues disagree with them. Holden hasn't told them anything new. So, if his organization is going to be of any use to such an audience, it should focus on the things they can't already evaluate themselves, like financial transparency, accounting procedures, and the like; unless it can itself engage the scientific details.

This is analogous to the case at hand: if all that GiveWell is going to tell the world is that SI hasn't signaled enough status, well, the world already knows that. Their raison d'être is to tell people info that they can't find (or is costly to find) via other channels: such as info about non-high-status charities that may be worth supporting despite their non-high-status. If it limits its endorsements to high-status charities, then it may as well not even bother -- just as it need not bother telling a conference of epidemiologists that it doesn't endorse a charity because of the epidemiological consensus.

Comment author: squelchtoad 12 May 2012 11:37:13AM *  3 points [-]

A few points:

"Possesses expert endorsement of its method" does not necessarily equal "high-status charity." A clear example here is de-worming and other parasite control, which epidemiologists all agree works well, but which doesn't get the funding a lot of other developing world charity does because it's not well advertised. GiveWell would like SIAI to be closer to de-worming charities in that outside experts give some credence to the plausibility of the methods by which SIAI proposes to do good.

Moreover, "other high-status charities using one's method" also doesn't equal "high-status charity." Compare the number of Facebook likes for AMF and Nothing But Nets. The reason GiveWell endorses one but not the other is that AMF, unlike NBN, has given compelling evidence that it can scale the additional funding that a GiveWell endorsement promises into more lives saved/improved at a dollar rate comparable to their current lives saved/improved per dollar.

So we should distinguish a charity's method being "high-status" from the charity itself being "high-status." But if you define "high status method" as "there exists compelling consensus among the experts GiveWell has judged to be trustworthy that the proposed method for doing good is even plausible," then I, as a Bayesian, am perfectly comfortable with GiveWell only endorsing "high-status method" charities. They still might buck the prevailing trends on optimal method; perhaps some of the experts are on GiveWell's own staff, or aren't prominent in the world at large. But by demanding that sort of "high-status method" from a charity, GiveWell discourages crankism and is unlikely to miss a truly good cause for too long.

Expert opinion on method plausibility is all the more important with more speculative charity like SIAI because there isn't a corpus of "effectiveness data to date" to evaluate directly.

Comment author: NancyLebovitz 12 May 2012 09:51:27AM 4 points [-]

If a tool AI is programmed with a strong utility function to get accurate answers, is there a risk of it behaving like a UFAI to get more resources in order to improve its answers?

Comment author: Johnicholas 12 May 2012 01:27:25PM 5 points [-]

There's two uses of 'utility function'. One is analogous to Daniel Dennett's "intentional stance" in that you can choose to interpret an entity as having a utility function - this is always possible but not necessarily a perspicuous way of understanding an entity - because you might end up with utility functions like "enjoys running in circles but is equally happy being prevented from running in circles".

The second form is as an explicit component within an AI design. Tool-AIs do not contain such a component - they might have a relevance or accuracy function for evaluating answers, but it's not a utility function over the world.

Comment author: NancyLebovitz 12 May 2012 04:11:27PM 3 points [-]

because you might end up with utility functions like "enjoys running in circles but is equally happy being prevented from running in circles".

Is that a problem so long as some behaviors are preferred over others? You could have "is neutral about running in circles, but resists jumping up and down and prefers making abstract paintings".

Tool-AIs do not contain such a component - they might have a relevance or accuracy function for evaluating answers, but it's not a utility function over the world.

Wouldn't that depend on the Tool-AI? Eliezer's default no-akrasia AI does everything it can to fulfill its utility function. You presumably want it to be as accurate as possible or perhaps as accurate as useful. Would it be a problem for it to ask for more resources? To earn money on its own initiative for more resources? To lobby to get laws passed to give it more resources? At some point, it's a problem if it's going to try to rule the world to get more resources.....

Comment author: CuSithBell 12 May 2012 04:39:31PM 6 points [-]

Tool-AIs do not contain such a component - they might have a relevance or accuracy function for evaluating answers, but it's not a utility function over the world.

Wouldn't that depend on the Tool-AI?

I think this is explicitly part of the "Tool-AI" definition, that it is not a Utility Maximizer.

Comment author: private_messaging 12 May 2012 06:00:35PM *  5 points [-]

What the hell does SIAI mean by 'utility function' anyway? (math please)

Inside the agents and tools as currently implemented, there is a solver that works on a function and finds input values to that function which result in the maximum (or, usually, minimum) of that function (note that the output may be binary).

[To clarify: that function can include both a model of the world and an evaluation of the 'desirability' of properties of a state of this model. Usually, in software development, if you have f(g(x)) (where g is the world predictor and f is the desirability evaluator), and g's output is only ever used by f, this is a target for optimization: you create fg(x), which is more accurate in a given time but does not consist of nearly separable parts. Furthermore, if the output of f is only ever fed to comparison operators, that is another optimization target: you create cmp_fg(), which compares the actions directly, perhaps by calculating the difference between worlds caused by a particular action, which allows most of the processing to be culled.]

It, however, is entirely indifferent to actually maximizing anything. It doesn't even try to maximize some internal variable (it will gladly try inputs that result in a small output value, but it is usually written not to report those inputs).
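
A minimal sketch of what such a solver looks like (my own illustration in Python, with a toy world model g and evaluator f; this is not anyone's actual code):

```python
# Toy illustration of the solver described above: it composes a world model g
# with a desirability evaluator f and reports the best-scoring candidate input.
# Nothing in it "wants" anything about the external world; it just searches a list.

def g(x):
    """Toy world model: predicts an outcome for candidate input x."""
    return (x - 3.0) ** 2

def f(predicted_outcome):
    """Toy desirability evaluator: here, a lower predicted outcome is better."""
    return predicted_outcome

def solve(candidates):
    """Return the candidate with the minimal f(g(x)); f and g could be fused into fg(x)."""
    best_x, best_score = None, float("inf")
    for x in candidates:
        score = f(g(x))
        if score < best_score:
            best_x, best_score = x, score
    return best_x, best_score

print(solve([i * 0.5 for i in range(-10, 11)]))  # -> (3.0, 0.0)
```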

I think the confusion arises from defining the agent in English-language concepts, as opposed to the AI developer's practice of defining things in some logical down-to-elements way and then trying to communicate it using English. The command in English, 'bring me the best answer!', does tell you to go ahead and convert the universe to computronium to answer it (if you interpret it in a science-fiction-robot-minded way). The commands in programming languages do not. I don't think the English really specifies that either; we can just interpret it charitably enough if we feel like it (starting from some other purpose, such as 'be nice').

edit: I feel that a lot of the difficulties of making 'safe AGI', those that are not outright nonsensical, are just repackaged special cases of statements about the general difficulty of making any AGI, safe or not. That's a very nasty thing to do, to generate such special cases preferentially. edit: Also, some may be special cases of the lack (or impossibility) of a solution to symbol grounding.

Comment author: jedharris 10 May 2012 08:19:47PM *  10 points [-]

Karnofsky's focus on "tool AI" is useful but also his statement of it may confuse matters and needs refinement. I don't think the distinction between "tool AI" and "agent AI" is sharp, or in quite the right place.

For example, the sort of robot cars we will probably have in a few years are clearly agents-- you tell them to "come here and take me there" and they do it without further intervention on your part (when everything is working as planned). This is useful in a way that any amount and quality of question answering is not. Almost certainly there will be various flavors of robot cars available and people will choose the ones they like (that don't drive in scary ways, that get them where they want to go even if it isn't well specified, that know when to make conversation and when to be quiet, etc.) As long as robot cars just drive themselves and people around, can't modify the world autonomously to make their performance better, and are subject to continuing selection by their human users, they don't seem to be much of a threat.

The key points here seem to be (1) limited scope, (2) embedding in a network of other actors and (3) humans in the loop as evaluators. We could say these define "tool AIs" or come up with another term. But either way the antonym doesn't seem to be "agent AIs" but maybe something like "autonomous AIs" or "independent AIs" -- AIs with the power to act independently over a very broad range, unchecked by embedding in a network of other actors or by human evaluation.

Framed this way, we can ask "Why would independent AIs exist?" If the reason is mad scientists, an arms race, or something similar then Karnofsky has a very strong argument that any study of friendliness is beside the point. Outside these scenarios, the argument that we are likely to create independent AIs with any significant power seems weak; Karnofsky's survey more or less matches my own less methodical findings. I'd be interested in strong arguments if they exist.

Given this analysis, there seem to be two implications:

  • We shouldn't build independent AIs, and should organize to prevent their development if they seem likely.

  • We should thoroughly understand the likely future evolution of a patchwork of diverse tool AIs, to see where dangers arise.

For better or worse, neither of these lend themselves to tidy analytical answers, though analytical work would be useful for both. But they are very much susceptible to investigation, proposals, evangelism, etc.

These do lend themselves to collaboration with existing AI efforts. To the extent they perceive a significant risk of development of independent AIs in the foreseeable future, AI researchers will want to avoid that. I'm doubtful this is an active risk but could easily be convinced by evidence -- not just abstract arguments -- and I'm fairly sure they feel the same way.

Understanding the long term evolution of a patchwork of diverse tool AIs should interest just about all major AI developers, AI project funders, and long term planners who will be affected (which is just about all of them). Short term bias and ceteris paribus bias will lead to lots of these folks not engaging with the issue, but I think it will seem relevant to an increasing number as the hits keep coming.

Comment author: rhollerith_dot_com 11 May 2012 04:04:57AM *  13 points [-]

I feel that [SI] ought to be able to get more impressive endorsements than it has.

SI seems to have passed up opportunities to test itself and its own rationality by e.g. aiming for objectively impressive accomplishments.

Holden, do you believe that charitable organizations should set out deliberately to impress donors and high-status potential endorsers? I would have thought that a donor like you would try to ignore the results of any attempts at that and to concentrate instead on how much the organization has actually improved the world because to do otherwise is to incentivize organizations whose real goal is to accumulate status and money for their own sake.

For example, Eliezer's attempts to teach rationality or "technical epistemology" or whatever you want to call it through online writings seem to me to have actually improved the world in a non-negligible way and seem to have been designed to do that rather than designed merely to impress.

ADDED. The above is probably not as clear as it should be, so let me say it in different words: I suspect it is a good idea for donors to ignore certain forms of evidence ("impressiveness", affiliation with high-status folk) of a charity's effectiveness, to discourage charities from gaming donors in ways that seem to me already too common, and I was a little surprised to see that you do not seem to ignore those forms of evidence.

Comment author: faul_sname 11 May 2012 11:07:53PM 3 points [-]

Holden, do you believe that charitable organizations should set out deliberately to impress donors and high-status potential endorsers?

The obvious answer would be "Yes." GiveWell only funneled about $5M last year, as compared to the roughly $300 billion that Americans give on an annual basis. Most money still comes from people who base their decision on something other than efficiency, so targeting these people makes sense.

Comment author: JGWeissman 11 May 2012 11:16:15PM 3 points [-]

The question was not if an individual charity, holding constant the behavior of other charities, benefits from "setting out deliberately to impress donors and high-status potential endorsers", but whether it is in Holden's interests (in making charities more effective) to generally encourage charities to do so.

Comment author: rhollerith_dot_com 11 May 2012 06:36:47PM *  5 points [-]

In other words, I tend to think that people who make philanthropy their career and who have accumulated various impressive markers of their potential to improve the world are likely to continue to accumulate impressive markers, but are less likely to improve the world than people who have already actually improved the world.

And of the three core staff members of SI I have gotten to know, 2 (Eliezer and another one who probably does not want to be named) have already improved the world in non-negligible ways and the third spends less time accumulating credentials and impressiveness markers than almost anyone I know.

Comment author: ModusPonies 12 May 2012 06:11:56AM 2 points [-]

I don't think Holden was looking for endorsements from "donors and high-status potential endorsers". I interpreted his post as looking for endorsements from experts on AI. The former would be evidence that SI could go on to raise money and impress people, and the latter would be evidence that SI's mission is theoretically sound. (The strength of that evidence is debatable, of course.) Given that, looking for endorsements from AI experts seems like it would be A) a good idea and B) consistent with the rest of GiveWell's methodology.

Comment author: lukeprog 11 May 2012 10:13:23PM *  35 points [-]

This post is highly critical of SIAI — both of its philosophy and its organizational choices. It is also now the #1 most highly voted post in the entire history of LessWrong — higher than any posts by Eliezer or myself.

I shall now laugh harder than ever when people try to say with a straight face that LessWrong is an Eliezer-cult that suppresses dissent.

Comment author: Eliezer_Yudkowsky 12 May 2012 02:36:01PM *  13 points [-]

Either I promoted this and then forgot I'd done so, or someone else promoted it - of course I was planning to promote it, but I thought I'd planned to do so on Tuesday after the SIAIers currently running a Minicamp had a chance to respond, since I expected most RSS subscribers to the Promoted feed to read comments only once (this is the same reason I wait a while before promoting e.g. monthly quotes posts). On the other hand, I certainly did upvote it the moment I saw it.

Comment author: lukeprog 12 May 2012 05:23:12PM 2 points [-]

Original comment now edited; I wasn't aware anyone besides you might be promoting posts.

Comment author: JackV 12 May 2012 09:29:41AM 10 points [-]

I agree (as a comparative outsider) that the polite response to Holden is excellent. Many (most?) communities -- both online communities and real-world organisations, especially long-standing ones -- are not good at it for lots of reasons, and I think the measured response of evaluating and promoting Holden's post is exactly what LessWrong members would hope LessWrong could do, and they showed it succeeded.

I agree that this is good evidence that LessWrong isn't just an Eliezer-cult. (The true test would be if Eliezer and another long-standing poster were dismissive of the post, and then other people persuaded them otherwise. In fact, maybe people should roleplay that or something, just to avoid getting stuck in an argument-from-authority trap, but that's a silly idea. Either way, the fact that other people spoke positively, and Eliezer and other long-standing posters did too, is a good thing.)

However, I'm not sure it's as uniquely a victory for the rationality of LessWrong as it sounds. In response to srdiamond, Luke quoted tenlier saying "[Holden's] critique mostly consists of points that are pretty persistently bubbling beneath the surface around here, and get brought up quite a bit. Don't most people regard this as a great summary of their current views, rather than persuasive in any way?" To me, that suggests that Holden did a really excellent job expressing these views clearly and persuasively. However, it also suggests that people had previously tried to express something similar, but it hadn't been expressed well enough to be widely accepted, and people reading had failed to sufficiently apply the dictum of "fix your opponents' arguments for them". I'm not sure if that's true (it's certainly not automatically true), but I suspect it might be. What do people think?

If there's any truth to it, it suggests one good answer to the recent post http://lesswrong.com/lw/btc/how_can_we_get_more_and_better_lw_contrarians (whether that was desirable in general or not): as a rationalist exercise, someone familiar with/to the community and good at writing rationally could take a survey of contrarian views on a topic that people in the community may have held but not been able to express. They wouldn't need any showmanship like pretending to believe those views themselves; they could just say "I think what some people think is [well-expressed argument]. Do you agree that's fair? If so, do I and other people think they have a point?" Whether or not that argument is right, it's still good to engage with it if many people are thinking it.

Comment author: pleeppleep 12 May 2012 05:30:48PM 4 points [-]

Third highest now. Eliezer just barely gets into the top 20.

Comment author: MarkusRamikin 17 May 2012 07:56:56AM 3 points [-]

It is also now the 3rd most highly voted post

1st.

At this point even I am starting to be confused.

Comment author: amcknight 15 May 2012 09:00:51PM 8 points [-]

Holden does a great job but makes two major errors:
1) His argument about Tool-AI is irrelevant, because creating Tool-AI does almost nothing to avoid Agent-AI, which he agrees is dangerous.
2) He too narrowly construes SI's goals by assuming they are only working on Friendly AI rather than AGI x-risk reduction in general.

Comment author: kip1981 11 May 2012 05:49:51AM 7 points [-]

My biggest criticism of SI is that I cannot decide between:

A. promoting AI and FAI issues awareness will decrease the chance of UFAI catastrophe; or
B. promoting AI and FAI issues awareness will increase the chance of UFAI catastrophe

This criticism seems distinct from the ones that Holden makes. But it is my primary concern. (Perhaps the closest example is Holden's analogy that SI is trying to develop Facebook before the Internet.)

A seems intuitive. Basically everyone associated with SI assumes that A is true, as far as I can tell. But A is not obviously true to me. It seems to me at least plausible that:

A1. promoting AI and FAI issues will get lots of scattered groups around the world more interested in creating AGI
A2. one of these groups will develop AGI faster than otherwise due to A1
A3. the world will be at greater risk of UFAI catastrophe than otherwise due to A2 (i.e. the group creates AGI faster than otherwise, and fails at FAI)

More simply: SI's general efforts, albeit well intended, might accelerate the creation of AGI, and the acceleration of AGI might decrease the odds of the first AGI being friendly. This is one path by which B, not A, would be true.

SI might reply that, although it promotes AGI, it very specifically limits its promotion to FAI. Although that is SI's intention, it is not at all clear that promoting FAI will not have the unintended consequence of accelerating UFAI. By analogy, if a responsible older brother goes around promoting gun safety all the time, the little brother might be more likely to accidentally blow his face off, than if the older brother had just kept his mouth shut. Maybe the older brother shouldn't have kept his mouth shut, maybe he should have... it's not clear either way.

If B is more true than A, the best thing that SI could do would probably be to develop clandestine missions to assassinate people who try to develop AGI. SI does almost the exact opposite.

SI's efforts are based on the assumption that A is true. But it's far from clear to me that A, rather than B, is true. Maybe it is, maybe it isn't. SI seems overconfident that A is true. I've never heard anyone at SI (or elsewhere) really address this criticism.

Comment author: drethelin 10 May 2012 08:32:39PM 3 points [-]

Tool-based approaches might be a faster and safer way to create useful AI, but as long as agent-based methods are possible it seems extremely important to me to work on verifying the friendliness of artificial agents.

Comment author: jimrandomh 10 May 2012 07:26:29PM *  13 points [-]

I don't work for SI and this is not an SI-authorized response, unless SI endorses it later. This comment is based on my own understanding based on conversations with and publications of SI members and general world model, and does not necessarily reflect the views or activities of SI.

The first thing I notice is that your interpretation of SI's goals with respect to AGI are narrower than the impression I had gotten, based on conversations with SI members. In particular, I don't think SI's research is limited to trying to make AGI friendliness provable, but on a variety of different safety strategies, and on the relative win-rates of different technological paths, eg brain uploading vs. de-novo AI, classes of utility functions and their relative risks, and so on. There is also a distinction between "FAI theory" and "AGI theory" that you aren't making; the idea, as I see it, is that to the extent to which these are separable, "FAI theory" covers research into safety mechanisms which reduce the probability of disaster if any AGI is created, while "AGI theory" covers research that brings the creation of any AGI closer. Your first objection - that a maximizing FAI would be very dangerous - seems to be based on a belief, first, that SI is researching a narrower class of safety mechanisms than it really is, and second, that SI researches AGI theory, which I believe it explicitly does not.

You seem a bit sore that SI hasn't talked about your notion of Tool-AI, but I'm a bit confused by this, since it's the first time I've heard that term used, and your link is to an email thread which, unless I'm missing something, was not disseminated publicly or through SI in general. A conversation about tool-based AI is well worth having; my current perspective is that it looks like it interacts with the inevitability argument and the overall AI power curve in such a way that it's still very dangerous, and that it amounts to a slightly different spin on Oracle AI, but this would be a complicated discussion. But bringing it up effectively for the first time, in the middle of a multi-pronged attack on SI's credibility, seems really unfair. While there may have been a significant communications failure in there, a cursory reading suggests to me that your question never made it to the right person.

The claim that SI will perform better if they don't get funding seems very strange. My model is that it would force their current employees to leave and spend their time on unrelated paid work instead, which doesn't seem like an improvement. I get the impression that your views of SI's achievements may be getting measured against a metric of achievements-per-organization, rather than achievements-per-dollar; in absolute budget terms, SI is tiny. But they've still had a huge memetic influence, difficult as that is to measure.

All that said, I applaud your decision to post your objections and read the responses. This sort of dialogue is a good way to reach true beliefs, and I look forward to reading more of it from all sides.

Comment author: steven0461 10 May 2012 08:12:28PM *  6 points [-]

In particular, I don't think SI's research is limited to trying to make AGI friendliness provable, but on a variety of different safety strategies, and on the relative win-rates of different technological paths, eg brain uploading vs. de-novo AI, classes of utility functions and their relative risks, and so on.

I agree, and would like to note the possibility, for those who suspect FAI research is useless or harmful, of earmarking SI donations to research on different safety strategies, or on aspects of AI risk that are useful to understand regardless of strategy.

Comment author: rocurley 10 May 2012 10:55:19PM *  9 points [-]

This likely won't work. Money is fungible, so unless the total donations so earmarked exceeds the planned SI funding for that cause, they won't have to change anything. They're under no obligation to not defund your favorite cause by exactly the amount you donated, thus laundering your donation into the general fund. (Unless I misunderstand the relevant laws?)

EDIT NOTE: The post used to say vast majority; this was changed, but is referenced below.

Comment author: dlthomas 10 May 2012 11:03:45PM 5 points [-]

You have an important point here, but I'm not sure it gets up to "vast majority" before it becomes relevant.

Earmarking $K for X has an effect once $K exceeds the amount of money that would have been spent on X if the $K had not been earmarked. The size of the effect still certainly depends on the difference, and may very well not be large.
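
A minimal sketch of that arithmetic (hypothetical numbers, assuming the organization honors earmarks but can otherwise reallocate its general funds freely):

```python
# Fungibility in miniature: earmarking only changes spending on X once the
# earmarked total exceeds what would have been spent on X anyway.

def spending_on_x(planned_x, earmarked_for_x):
    """Final spending on X when the org can reallocate everything except the earmark."""
    return max(planned_x, earmarked_for_x)

planned_x = 50_000
print(spending_on_x(planned_x, 20_000))  # 50000: the earmark changed nothing
print(spending_on_x(planned_x, 80_000))  # 80000: only the 30000 excess had an effect
```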

Comment author: steven0461 10 May 2012 11:02:48PM *  4 points [-]

Suppose you earmark to a paper on a topic X that SI would otherwise probably not write a paper on. Would that cause SI to take money out of research on topics similar to X and into FAI research? There would probably be some sort of (expected) effect in that direction, but I think the size of the effect depends on the details of what causes SI's allocation of resources, and I think the effect would be substantially smaller than would be necessary to make an earmarked donation equivalent to a non-earmarked donation. Still, you're right to bring it up.

Comment author: Eliezer_Yudkowsky 11 May 2012 12:30:27AM 22 points [-]

Thank you very much for writing this. I, um, wish you hadn't posted it literally directly before the May Minicamp when I can't realistically respond until Tuesday. Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI.

Comment author: John_Maxwell_IV 11 May 2012 05:16:53AM *  21 points [-]

It looks to me as though Holden had the criticisms he expresses even before becoming "informed", presumably by reading the sequences, but was too intimidated to share them. Perhaps it is worth listening to/encouraging uninformed criticisms as well as informed ones?

Comment author: John_Maxwell_IV 12 May 2012 06:59:45AM 8 points [-]

Note the following criticism of SI identified by Holden:

Being too selective (in terms of looking for people who share its preconceptions) when determining whom to hire and whose feedback to take seriously.

Comment author: lukeprog 11 May 2012 08:19:04AM *  6 points [-]

[Holden's critique] already has a warm place in my heart... as the second attempt to mount informed criticism of SIAI.

To those who think Eliezer is exaggerating: please link me to "informed criticism of SIAI."

It is so hard to find good critics.

Edit: Well, I guess there are more than two examples, though relatively few. I was wrong to suggest otherwise. Much of this has to do with the fact that SI hasn't been very clear about many of its positions and arguments: see Beckstead's comment and Hallquist's followup.

Comment author: CarlShulman 11 May 2012 07:26:03PM *  40 points [-]

1) Most criticism of key ideas underlying SIAI's strategies does not reference SIAI, e.g. Chris Malcolm's "Why Robots Won't Rule" website is replying to Hans Moravec.

2) Dispersed criticism, with many people making local points, e.g. those referenced by Wei Dai, is still criticism and much of that is informed and reasonable.

3) Much criticism is unwritten, e.g. consider the more FAI-skeptical Singularity Summit speaker talks, or takes the form of brief responses to questions or the like. This doesn't mean it isn't real or important.

4) Gerrymandering the bounds of "informed criticism" to leave almost no one within bounds is in general a scurrilous move that one should bend over backwards to avoid.

5) As others have suggested, even within the narrow confines of Less Wrong and adjacent communities there have been many informed critics. Here's Katja Grace's criticism of hard takeoff (although I am not sure how separate it is from Robin's). Here's Brandon Reinhart's examination of SIAI, which includes some criticism and brings more in comments. Here's Kaj Sotala's comparison of FHI and SIAI. And there are of course many detailed and often highly upvoted comments in response to various SIAI-discussing posts and threads, many of which you have participated in.

Comment author: Wei_Dai 11 May 2012 06:26:10PM *  29 points [-]

This is a bit exasperating. Did you not see my comments in this thread? Have you and Eliezer considered that if there really have been only two attempts to mount informed criticism of SIAI, then LessWrong must be considered a massive failure that SIAI ought to abandon ASAP?

Comment author: Will_Newsome 11 May 2012 05:21:56PM 16 points [-]

Wei Dai has written many comments and posts that have some measure of criticism, and various members of the community, including myself, have expressed agreement with them. I think what might be a problem is that such criticisms haven't been collected into a single place where they can draw attention and stir up drama, as Holden's post has.

There are also critics like XiXiDu. I think he's unreliable, and I think he'd admit to that, but he also makes valid criticisms that are shared by other LW folk, and LW's moderation makes it easy to sift his comments for the better stuff.

Perhaps an institution could be designed. E.g., a few self-ordained SingInst critics could keep watch for critiques of SingInst, collect them, organize them, and update a page somewhere out-of-the-way over at the LessWrong Wiki that's easily checkable by SI folk like yourself. LW philanthropists like User:JGWeissman or User:Rain could do it, for example. If SingInst wanted to signal various good things then it could even consider paying a few people to collect and organize criticisms of SingInst. Presumably if there are good critiques out there then finding them would be well worth a small investment.

Comment author: Wei_Dai 12 May 2012 08:09:23AM *  20 points [-]

I think what might be a problem is that such criticisms haven't been collected into a single place where they can draw attention and stir up drama, as Holden's post has.

I put them in discussion, because well, I bring them up for the purpose of discussion, and not for the purpose of forming an overall judgement of SIAI or trying to convince people to stop donating to SIAI. I'm rarely sure that my overall beliefs are right and SI people's are wrong, especially on core issues that I know SI people have spent a lot of time thinking about, so mostly I try to bring up ideas, arguments, and possible scenarios that I suspect they may not have considered. (This is one major area where I differ from Holden: I have greater respect for SI people's rationality, at least their epistemic rationality. And I don't know why Holden is so confident about some of his own original ideas, like his solution to Pascal's Mugging, and Tool-AI ideas. (Well I guess I do, it's probably just typical human overconfidence.))

Having said that, I reserve the right to collect all my criticisms together and make a post in main in the future if I decide that serves my purposes, although I suspect that without the influence of GiveWell behind me it won't stir up nearly as much drama as Holden's post. :)

ETA: Also, I had expected that SI people monitored LW discussions, not just for critiques, but also for new ideas in general (like the decision theory results that cousin_it, Nesov, and others occasionally post). This episode makes me think I may have overestimated how much attention they pay. It would be good if Luke or Eliezer could comment on this.

Comment author: CarlShulman 16 May 2012 01:34:46AM *  8 points [-]

Also, I had expected that SI people monitored LW discussions, not just for critiques, but also for new ideas in general

I read most such (apparently-relevant from post titles) discussions, and Anna reads a minority. I think Eliezer reads very few. I'm not very sure about Luke.

Comment author: Wei_Dai 16 May 2012 09:48:04AM 5 points [-]

Do you forward relevant posts to other SI people?

Comment author: CarlShulman 16 May 2012 08:59:34PM *  4 points [-]

Ones that seem novel and valuable, either by personal discussion or email.

Comment author: Will_Newsome 15 May 2012 12:30:57PM 5 points [-]

Also, I had expected that SI people monitored LW discussions, not just for critiques, but also for new ideas in general (like the decision theory results that cousin_it, Nesov, and others occasionally post).

I'm somewhat confident (from directly asking him a related question and also from many related observations over the last two years) that Eliezer mostly doesn't, or is very good at pretending that he doesn't. He's also not good at reading so even if he sees something he's only somewhat likely to understand it unless he already thinks it's worth it for him to go out of his way to understand it. If you want to influence Eliezer it's best to address him specifically and make sure to state your arguments clearly, and to explicitly disclaim that you're specifically not making any of the stupid arguments that your arguments could be pattern-matched to.

Also I know that Anna is often too busy to read LessWrong.

Comment author: lukeprog 11 May 2012 07:10:30PM 4 points [-]

Good point. Wei Dai qualifies as informed criticism. Though, he seems to agree with us on all the basics, so that might not be the kind of criticism Eliezer was talking about.

Comment author: thomblake 11 May 2012 05:49:32PM 10 points [-]

I'm not sure how much he's put into writing, but Ben Goertzel is surely informed. One might argue he comes to the wrong conclusions about AI danger, but it's not from not thinking about it.

Comment author: XiXiDu 11 May 2012 10:22:18AM *  14 points [-]

To those who think Eliezer is exaggerating: please link me to "informed criticism of SIAI."

It would help if you could elaborate on what you mean by "informed".

Most of what Holden wrote, and much more, has been said by other people, excluding myself, before.

I don't have the time right now to wade through all those years of posts and comments but might do so later.

And if you are not willing to take into account what I myself wrote, for being uninformed, then maybe you will however agree that at least all of my critical comments that have been upvoted to +10 (ETA changed to +10, although there is a lot more on-topic at +5) should have been taken into account. If you do so you will find that SI could have updated some time ago on some of what has been said in Holden's post.

Comment author: Gastogh 11 May 2012 03:10:11PM *  7 points [-]

It would help if you could elaborate on what you mean by "informed".

Seconded. It seems to me like it's not even possible to mount properly informed criticism if many of the findings are just sitting unpublished somewhere. I'm hopeful that this is actually getting fixed sometime this year, but it doesn't seem fair to not release information and then criticize the critics for being uninformed.

Comment author: private_messaging 17 May 2012 08:14:49AM *  5 points [-]

It is so hard to find good critics.

if you don't have a good argument you won't find good critics. (Unless you are as influential as religion. Then you can get a good critic simply because you stepped on the good critic's foot. The critic probably ain't going to come to church to talk about it, though, and the ulterior motive (having had their foot stepped on) may make you qualify them as a bad critic.)

Much of this has to do with the fact that SI hasn't been very clear about many of its positions and arguments

When you look through matte glass and see some blurred text that looks like it has equations in it, and you are told that what you see is a fuzzy image of a proof that P!=NP (maybe you can make out the headers, which are in a bigger font, and they look like the kind of headers a valid proof might have), do you assume that it really is a valid proof, and they only need to polish the glass? What if it is P=NP instead? What if it doesn't look like it has equations in it?

Comment author: PhilGoetz 15 May 2012 12:23:41AM *  11 points [-]

I'm very impressed by Holden's thoroughness and thoughtfulness. What I'd like to know is why his post is Eliezer-endorsed and has 191 up-votes, while my many posts over the years hammering on Objection 1, and my comments raising Objection 2, have never gotten the green button, been frequently down-voted, and never been responded to by SIAI. Do you have to be outside the community to be taken seriously by it?

Comment author: metaphysicist 15 May 2012 12:35:46AM 19 points [-]

Not to be cynical, PhilGoetz, but isn't Holden an important player in the rational-charity movement? Wouldn't the ultimate costs of ignoring Holden be prohibitive?

Comment author: PhilGoetz 16 May 2012 02:25:05AM 4 points [-]

That could explain the green dot. I don't know which explanation is more depressing.

Comment author: Rain 15 May 2012 12:46:40AM 3 points [-]

You are absolutely correct. And, that's not the reason I find it engaging or informative.

Comment author: Rain 15 May 2012 12:42:08AM *  15 points [-]

I thought most of the stuff in Holden's post had been public knowledge for years, even to the point of being included in previous FAQs produced by SI. The main difference is that the presentation and solidity of it in this article are remarkable - interconnecting so many different threads which, when placed as individual sentences or paragraphs, might hang alone, but when woven together with the proper knots form a powerful net.

Comment author: Nick_Beckstead 15 May 2012 02:23:47AM 11 points [-]

I would be interested to see if you could link to posts where you made versions of these objections.

Comment author: PhilGoetz 18 May 2012 12:48:49AM 8 points [-]
Comment author: ghf 15 May 2012 01:36:05AM 6 points [-]

I think some of it comes down to the range of arguments offered. For example, posted alone, I would not have found Objection 2 particularly compelling, but I was impressed by many other points and in particular the discussion of organizational capacity. I'm sure there are others for whom those evaluations were completely reversed. Nonetheless, we all voted it up. Many of us who did so likely agree with one another less than we do with SIAI, but that has only showed up here and there on this thread.

Critically, it was all presented, not in the context of an inside argument, but in the context of "is SI an effective organization in terms of its stated goals." The question posed to each of us was: do you believe in SI's mission and, if so, do you think that donating to SI is an effective way to achieve that goal? It is a wonderful instantiation of the standard test of belief, "how much are you willing to bet on it?"

Comment author: John_Maxwell_IV 15 May 2012 12:41:12AM *  6 points [-]

Assuming what you say is true, it looks to me as though SI is paying the cost of ignoring its critics for so many years...

Comment author: p4wnc6 11 May 2012 11:09:27PM *  4 points [-]

I agree with timtyler's comment that Objections 1 and 2 are bogus, especially 2. The tool-AGI discussion reveals significant misunderstanding, I feel. Despite this, I think it is still a great and useful post.

Another sort of tangential issue is that this post fails to consider whether or not lots of disparate labs are just going to undertake AGI research regardless of SIAI. If lots of labs are doing that, it could be dangerous (if SIAI's arguments are sound). So one upside to funding an organization like SIAI is that it will draw that attention to a central point. Remember that one of SIAI's short term goals is to decelerate generic AGI research in favor of accelerating AGI safety research.

This post doesn't seem to account for the fact that by not funding SIAI you simply face the same number of counterfactual disparate labs pursuing AGI with their own willy-nilly sources of funding, but no aggregator organization to serve as a kind of steering committee. Regardless of whether SIAI's specific vision is the one that happens to come true, something should be said for the inherent danger of a bunch of labs trying to build their own stand-alone paperclip maximizers, which they may very well believe are tool-AGIs, and then bam, game over.

Comment author: jonperry 11 May 2012 08:09:02AM 4 points [-]

Let's say that the tool/agent distinction exists, and that tools are demonstrably safer. What then? What course of action follows?

Should we ban the development of agents? All of human history suggests that banning things does not work.

With existential stakes, only one person needs to disobey the ban and we are all screwed.

Which means the only safe route is to make a friendly agent before anyone else can. Which is pretty much SI's goal, right?

So I don't understand how practically speaking this tool/agent argument changes anything.

Comment author: army1987 11 May 2012 08:54:47AM 2 points [-]

Which means the only safe route is to make a friendly agent before anyone else can.

Only if running too fast doesn't make it easier to screw something up, which it most likely does.

Comment author: khafra 11 May 2012 05:23:27PM 3 points [-]

If the time at which anyone activates a uFAI is known, SI should activate their current FAI best effort (CFBE) one day before that.

If the time at which anyone activates a GAI of unknown friendliness is known, SI should compare the probability distribution function for the friendliness of the two AIs, and activate their CFBE one day earlier only if it has more probability mass on the "friendly" side.

If the time at which anyone makes a uFAI is unknown, SI should activate their CFBE when the probability that they'll improve the CFBE in the next day is lower than the probability that someone will activate a uFAI in the next day.

If the time at which anyone makes a GAI of unknown friendliness is unknown, SI should activate their CFBE when the probability that CFBE=uFAI is less than the probability that anyone else will activate a GAI of unknown friendliness, multiplied by the probability that the other GAI will be unfriendly.

...I think. I do tend to miss the obvious when trying to think systematically, and I was visualizing gaussian pdfs without any particular justification, and a 1-day decision cycle with monotonically improving CFBE, and this is only a first-order approximation: It doesn't take into account any correlations between the decisions of SI and other GAI researchers.
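
A minimal sketch of the last (most general) rule, with hypothetical probabilities and the same one-day decision cycle assumed above:

```python
# Activate the current FAI best effort (CFBE) when the chance that CFBE itself is
# unfriendly drops below the chance that someone else activates a GAI in the next
# day times the chance that that GAI is unfriendly.

def should_activate_cfbe(p_cfbe_unfriendly, p_other_gai_today, p_other_unfriendly):
    return p_cfbe_unfriendly < p_other_gai_today * p_other_unfriendly

print(should_activate_cfbe(0.02, 0.10, 0.50))  # True: 0.02 < 0.05, so activate now
print(should_activate_cfbe(0.08, 0.10, 0.50))  # False: 0.08 >= 0.05, keep improving CFBE
```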

Comment author: jonperry 11 May 2012 09:23:26AM 2 points [-]

Yes, you can create risk by rushing things. But you still have to be fast enough to outrun the creation of UFAI by someone else. So you have to be fast, but not too fast. It's a balancing act.

Comment author: Monkeymind 11 May 2012 03:10:04PM *  3 points [-]

If intelligence is the ability to understand concepts, and a super-intelligent AI has a super ability to understand concepts, what would prevent it (as a tool) from answering questions in a way so as to influence the user and affect outcomes as though it were an agent?

Comment author: hairyfigment 11 May 2012 07:41:37AM 4 points [-]

The organization section touches on something that concerns me. Developing a new decision theory sounds like it requires more mathematical talent than the SI yet has available. I've said before that hiring some world-class mathematicians for a year seems likely to either get said geniuses interested in the problem, to produce real progress, or to produce a proof that SI's current approach can't work. In other words, it seems like the best form of accountability we can hope for given the theoretical nature of the work.

Now Eliezer is definitely looking for people who might help. For instance, the latest chapter of "Harry Potter and the Methods of Rationality" mentioned

a minicamp for 20 mathematically talented youths...Most focus will be on technical aspects of rationality (probability theory, decision theory) but also with some teaching of the same mental skills in the other Minicamps.

It also says,

Several instructors of International Olympiad level have already volunteered.

So they technically have something already. And if there exists a high-school student who can help with the problem, or learn to do so, that person seems relatively likely to enjoy HP:MoR. But I worry that Eliezer is thinking too much in terms of his own life story here, and has not had to defend his approach enough.

Comment author: Kenny 12 May 2012 05:59:16PM 2 points [-]

I haven't read the entire post yet, but here are some thoughts I had after reading through about the first ten paragraphs of "Objection 2 ...". I think the problem with assuming, or judging, that tool-AI is safer than agent-AI is that a sufficiently powerful tool-AI would essentially be an agent-AI. Humans already hack other humans without directly manipulating each other's physical persons or environments, and those hacks can drastically alter their own or others' persons and (physical) environments. Sometimes the safest course is not to listen to poisoned tongues.

Comment author: kalla724 10 May 2012 11:26:58PM 2 points [-]

Very good. Objection 2 in particular resonates with my view of the situation.

One other thing that is often missed is the fact that SI assumes that development of superinteligent AI will precede other possible scenarios - including the augmented human intelligence scenario (CBI producing superhumans, with human motivations and emotions, but hugely enhanced intelligence). In my personal view, this scenario is far more likely than the creation of either friendly or unfriendly AI, and the problems related to this scenario are far more pressing.

Comment author: NancyLebovitz 11 May 2012 08:58:43PM 2 points [-]

and the problems related to this scenario are far more pressing.

Could you expand on that?

Comment author: kalla724 12 May 2012 09:45:37PM 5 points [-]

I can try, but the issue is too complex for comments. A series of posts would be required to do it justice, so mind the relative shallowness of what follows.

I'll focus on one thing. An artificial intelligence enhancement which adds more "spaces" to the working memory would create a human being capable of thinking far beyond any unenhanced human. This is not just a quantitative jump: we aren't talking someone who thinks along the same lines, just faster. We are talking about a qualitative change, making connections that are literally impossible to make for anyone else.

(This is even more unclear than I thought it would be. So a tangent to, hopefully, clarify. You can hold, say, seven items in your mind while considering any subject. This vastly limits your ability to consider any complex system. In order to do so at all, you have to construct "composite items" out of many smaller items. For instance, you can think of a mathematical formula, matrix, or an operation as one "item," which takes one space, and therefore allows you to cram "more math" into a thought than you would be able to otherwise. Alternate example: a novice chess player has to look at every piece, think about likely moves of every one, likely responses, etc. She becomes overwhelmed very quickly. An expert chess player quickly focuses on learned series of moves, known gambits and visible openings, which allows her to see several steps ahead.

One of the major failures in modern society is the illusion of understanding in complex systems. Any analysis picks out a small number of items we can keep in mind at one time, and then bases the "solutions" on them (Watts's "Everything is Obvious" book has a great overview of this). Add more places to the working memory, and you suddenly have humans who have a qualitatively improved ability to understand complex systems. Maybe still not fully, but far better than anyone else. Sociology, psychology, neuroscience, economics... A human being with a few dozen working memory spaces would be for the economy what a quantum computer with eight qubits would be for cryptography - whoever develops one first can wreak havoc as they like.)

When this work starts in earnest (ten to twelve years from now would be my estimate), how do we control the outcomes? Will we have tightly controlled superhumans, surrounded and limited by safety mechanisms? Or will we try to find "humans we trust" to become first enhanced humans? Will we have a panic against such developments (which would then force further work to be done in secret, probably associated with military uses)?

Negative scenarios are manifold (lunatic superhumans destroying the world, or establishing tyranny; lobotomized/drugged superhumans used as weapons of war or for crowd manipulation; completely sane superhumans destroying civilization due to their still present and unmodified irrational biases; etc.). Positive scenarios are comparable to Friendly AI (unlimited scientific development, cooperation on a completely new scale, reorganization of human life and society...).

How do we avoid the negative scenarios, and increase the probability of the positive ones? Very few people seem to be talking about this (some because it still seems crazy to the average person, some explicitly because they worry about the panic/push into secrecy response).

Comment author: Dustin 12 May 2012 11:26:23PM 4 points [-]

I like this series of thoughts, but I wonder about just how superior a human with 2 or 3 times the working memory would be.

Currently, do all humans have the same amount of working memory? If not, how "superior" are those with more working memory?

Comment author: TheOtherDave 13 May 2012 01:23:37AM 6 points [-]

A vaguely related anecdote: working memory was one of the things that was damaged after my stroke; for a while afterwards I was incapable of remembering more than two or three items when asked to repeat a list. I wasn't exactly stupider than I am now, but I was something pretty similar to stupid. I couldn't understand complex arguments, I couldn't solve logic puzzles that required a level of indirection, I would often lose track of the topic of a sentence halfway through.

Of course, there was other brain damage as well, so it's hard to say what causes what, and the plural of anecdote is not data. But subjectively it certainly felt like the thing that was improving as I recovered was my ability to hold things in memory... not so much number of items, as reliability of the buffers at all. I often had the thought as I recovered that if I could somehow keep improving my working memory -- again, not so much "add slots" but make the whole framework more reliable -- I would end up cleverer than I started out.

Take it for what it's worth.

Comment author: kalla724 13 May 2012 04:56:17AM 5 points [-]

It would appear that all of us have very similar amounts of working memory space. It gets very complicated very fast, and there are some aspects that vary a lot. But in general, its capacity appears to be the bottleneck of fluid intelligence (and a lot of crystallized intelligence might be, in fact, learned adaptations for getting around this bottleneck).

How superior would it be? There are some strong indications that adding more "chunks" to the working space would be somewhat akin to adding more qubits to a quantum computer: if having four "chunks" (one of the most popular estimates for an average young adult) gives you 2^4 units of fluid intelligence, adding one more would increase your intelligence to 2^5 units. The implications seem clear.
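
To make the claimed relationship concrete (the 2^n scaling is the speculation above, not established science), a small sketch:

```python
# Hypothesized scaling: each extra working-memory chunk doubles the "units"
# of fluid intelligence on this (speculative) model.

def fluid_intelligence_units(chunks):
    return 2 ** chunks

for n in (4, 5, 7):
    print(n, fluid_intelligence_units(n))  # 4 -> 16, 5 -> 32, 7 -> 128
```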

Comment author: Kaj_Sotala 14 May 2012 07:13:40AM *  4 points [-]

Although the exact relationship isn't known, there's a strong connection between IQ and working memory - apparently both in humans and animals. E.g. Matzel & Kolata 2010:

Accumulating evidence indicates that the storage and processing capabilities of the human working memory system co-vary with individuals’ performance on a wide range of cognitive tasks. The ubiquitous nature of this relationship suggests that variations in these processes may underlie individual differences in intelligence. Here we briefly review relevant data which supports this view. Furthermore, we emphasize an emerging literature describing a trait in genetically heterogeneous mice that is quantitatively and qualitatively analogous to general intelligence (g) in humans. As in humans, this animal analog of g co-varies with individual differences in both storage and processing components of the working memory system. Absent some of the complications associated with work with human subjects (e.g., phonological processing), this work with laboratory animals has provided an opportunity to assess otherwise intractable hypotheses. For instance, it has been possible in animals to manipulate individual aspects of the working memory system (e.g., selective attention), and to observe causal relationships between these variables and the expression of general cognitive abilities. This work with laboratory animals has coincided with human imaging studies (briefly reviewed here) which suggest that common brain structures (e.g., prefrontal cortex) mediate the efficacy of selective attention and the performance of individuals on intelligence test batteries. In total, this evidence suggests an evolutionary conservation of the processes that co-vary with and/or regulate “intelligence” and provides a framework for promoting these abilities in both young and old animals.

or Oberauer et al. 2005:

Hence, we might conclude—setting aside the above mentioned caveats for such analyses—that [Working Memory Capacity] and g share the largest part of their variance (72%) but are not identical. [...] Our methodological critique notwithstanding, we believe that Ackerman et al. (2005) are right in claiming that WMC is not the same as g or as gf or as reasoning ability. Our argument for a distinction between these constructs does not hinge on the size of the correlation but on a qualitative difference: On the side of intelligence, there is a clear factorial distinction between verbal and numerical abilities (e.g., Süß et al., 2002); on the side of WMC, tasks with verbal contents and tasks with numerical contents invariably load on the same factor (Kyllonen & Christal, 1990; Oberauer et al., 2000). This mismatch between WMC and intelligence constructs not only reveals that they must not be identified but also provides a hint as to what makes them different. We think that verbal reasoning differs from numerical reasoning in terms of the knowledge structures on which they are based: Verbal reasoning involves syntax and semantic relations between natural concepts, whereas numerical reasoning involves knowledge of mathematical concepts. WMC, in contrast, does not rely on conceptual structures; it is a part of the architecture that provides cognitive functions independent of the knowledge to which they are applied. Tasks used to measure WMC reflect this assumption in that researchers minimize their demand on knowledge, although they are bound to never fully succeed in that regard. Still, the minimization works well enough to allow verbal and numerical WM tasks to load substantially on a common factor. This suggests that WMC tests come closer to measuring a feature of the cognitive architecture than do intelligence tests.

Comment author: MarkusRamikin 10 May 2012 03:47:35PM *  2 points [-]

Not a big deal, but for me your "more" links don't seem to be doing anything. Firefox 12 here.

EDIT: Yup, it's fixed. :)

Comment author: HoldenKarnofsky 10 May 2012 04:12:28PM 3 points [-]

Thanks for pointing this out. The links now work, though only from the permalink version of the page (not from the list of new posts).

Comment author: Mitchell_Porter 11 May 2012 10:40:56AM 7 points [-]

Maybe I'm just jaded, but this critique doesn't impress me much. Holden's substantive suggestion is that, instead of trying to design friendly agent AI, we should just make passive "tool AI" that only reacts to commands but never acts on its own. So when do we start thinking about the problems peculiar to agent AI? Do we just hope that agent AI will never come into existence? Do we ask the tool AI to solve the friendly AI problem for us? (That seems to be what people want to do anyway, an approach I reject as ridiculously indirect.)

Comment author: Will_Newsome 11 May 2012 05:40:31PM 7 points [-]

(Perhaps I should note that I find your approach to be too indirect as well: if you really understand how justification works then you should be able to use that knowledge to make (invoke?) a theoretically perfectly justified agent, who will treat others' epistemic and moral beliefs in a thoroughly justified manner without your having to tell it "morality is in mind-brains, figure out what the mind-brains say then do what they tell you to do". That is, I think the correct solution should be just clearly mathematically and meta-ethically justified, question-dissolving, reflective, non-arbitrary, perfect decision theory. Such an approach is closest in spirit to CFAI. All other approaches, e.g. CEV, WBE, or oracle AI, are relatively arbitrary and unmotivated, especially meta-ethically.)

Comment author: taw 10 May 2012 06:04:43PM 3 points [-]

Existential risk reduction is a very worthy cause. As far as I can tell there are a few serious efforts - they have scenarios which by the outside view have non-negligible chances, and in the case of many of these scenarios the efforts make a non-negligible difference to the outcome.

Such efforts are:

  • asteroid tracking
  • seed vaults
  • development of various ways to deal with potential pandemics (early tracking systems, drugs etc.) - this actually overlaps with "normal" medicine a lot
  • arguably, global warming prevention is a borderline issue, since there is a tiny chance of massive positive feedback loops that will make Earth nearly uninhabitable. These chances are believed to be tiny by modern climate science, but all chances for existential risk are tiny.

That's about the entire list I'm aware of (are there any others?)

And then there's a huge number of efforts which claim to do something about existential risk, but where either the theories behind the risks they're concerning themselves with, or the theories behind why their efforts are likely to help, are based on assumptions not shared by the vast majority of competent people.

All FAI-related stuff suffers from both of these problems - the risk is not based on any established science, and the proposed answer is even less grounded in reality. If it suffered from only one of these problems it might be fixable, but as far as I can tell it is extremely unlikely ever to join the category of serious efforts.

The best claim those non-serious efforts can make is that a tiny chance that the risk is real * a tiny chance that the organization will make a difference * a huge risk is still a big number, but that's not a terribly convincing argument.

I'm under the impression that we're doing far less than we can with these serious efforts, and that we haven't really identified everything that could be addressed with such serious effort. We should focus there (and on a lot of things which are not related to existential risk).

Comment author: Rain 10 May 2012 09:10:51PM *  3 points [-]
Comment author: taw 10 May 2012 10:16:44PM 2 points [-]

Most of the entries on the list are not quantifiable even approximately to within an order of magnitude. Of those that are (which is pretty much only "risks from nature" in Bostrom's system), many are still bad candidates for putting significant effort into, because:

  • we have few ways to deal with them (like nearby supernova explosions)
  • we have a lot of time and the future will be better equipped to deal with them (like the eventual demise of the Sun)
  • they don't actually seem to get anywhere near civilization-threatening levels (like volcanoes)

About the only new risk I see on the list which can and should be dealt with is having some backup plans for massive solar flares, but I'm not sure what we can do about it other than putting some extra money into astrophysics departments so they can figure things out better and give us better estimates.