
SI's Summer 2012 Matching Drive Ends July 31st

13 lukeprog 20 July 2012 05:48AM

The Singularity Institute's summer 2012 matching drive ends on July 31st! Donate by the end of the month to have your gift matched, dollar for dollar.

As of this posting, SI has raised $70,000 of the $150,000 goal.

The announcement says:

Since we published our strategic plan in August 2011, we have achieved most of the near-term goals outlined therein...
In the coming year, the Singularity Institute plans to do the following: ...

If you're planning to earmark your donation to CFAR (Center for Applied Rationality), here's a preview of what CFAR plans to do in the next year:
  • Develop additional lessons teaching the most important and useful parts of rationality. CFAR has developed and tested over 18 hours of lessons so far, including classes on how to evaluate evidence using Bayesianism, how to make more accurate predictions, how to be more efficient using economics, how to use thought experiments to better understand your own motivations, and much more.
  • Run immersive rationality retreats to teach from our curriculum and to connect aspiring rationalists with each other. CFAR ran pilot retreats in May and June. Participants in the May retreat called it “transformative” and “astonishing,” and the average response on the survey question, “Are you glad you came? (1-10)” was a 9.4. (We don't have the June data yet, but people were similarly enthusiastic about that one.)
  • Run SPARC, a camp on the advanced math of rationality for mathematically gifted high school students. CFAR has a stellar first-year class for SPARC 2012; most students admitted to the program placed in the top 50 on the USA Math Olympiad (or performed equivalently in a similar contest).
  • Collect longitudinal data on the effects of rationality training, to improve our curriculum and to generate promising hypotheses to test and publish, in collaboration with other researchers. CFAR has already launched a one-year randomized controlled study tracking reasoning ability and various metrics of life success, using participants in our June minicamp and a control group.
  • Develop apps and games about rationality, with the dual goals of (a) helping aspiring rationalists practice essential skills, and (b) making rationality fun and intriguing to a much wider audience. CFAR has two apps in beta testing: one training players to update their own beliefs the right amount after hearing other people’s beliefs, and another training players to calibrate their level of confidence in their own beliefs (a sketch of one such calibration score appears after this list). CFAR is working with a developer on several more games training people to avoid cognitive biases.
  • And more!
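
As an illustration of the kind of scoring rule a calibration app like the one described above might use, here is a minimal sketch under my own assumptions (not CFAR's actual code): the standard Brier score, which compares each stated probability to the eventual outcome.

    # Hypothetical calibration tracker (illustrative sketch, not CFAR's actual code).
    # Lower Brier score = better calibrated; always guessing 50% earns 0.25.
    def brier_score(predictions):
        """predictions: list of (stated_probability, outcome) pairs,
        where outcome is 1 if the claim turned out true and 0 otherwise."""
        return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

    # Example: three predictions made at 90%, 70%, and 30% confidence.
    history = [(0.9, 1), (0.7, 0), (0.3, 0)]
    print(brier_score(history))  # ~0.197 on this tiny sample
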
continue reading »

Reply to Holden on The Singularity Institute

43 lukeprog 10 July 2012 11:20PM

Holden Karnofsky of GiveWell has objected to the Singularity Institute (SI) as a target for optimal philanthropy. As someone who thinks that existential risk reduction is really important and also that the Singularity Institute is an important target of optimal philanthropy, I would like to explain why I disagree with Holden on these subjects. (I am also SI's Executive Director.)

Mostly, I'd like to explain my views to a broad audience. But I'd also like to explain my views to Holden himself. I value Holden's work, I enjoy interacting with him, and I think he is both intelligent and capable of changing his mind about Big Things like this. Hopefully Holden and I can continue to work through the arguments together, though of course we are both busy with many other things.

I appreciate the clarity and substance of Holden's objections, and I hope to reply in kind. I begin with an overview of some basic points that may be familiar to most Less Wrong veterans, and then I reply point-by-point to Holden's post. In the final section, I summarize my reply to Holden.

Holden raised many different issues, so unfortunately this post needed to be long. My apologies to Holden if I have misinterpreted him at any point.


Contents

  • Existential risk reduction is a critical concern for many people, given their values and given many plausible models of the future. Details here.
  • Among existential risks, AI risk is probably the most important. Details here.
  • SI can purchase many kinds of AI risk reduction more efficiently than other groups can. Details here.
  • These points and many others weigh against many of Holden's claims and conclusions. Details here.
  • Summary of my reply to Holden

continue reading »

Reply to Holden on 'Tool AI'

93 Eliezer_Yudkowsky 12 June 2012 06:00PM

I begin by thanking Holden Karnofsky of GiveWell for his rare gift of his detailed, engaged, and helpfully-meant critical article Thoughts on the Singularity Institute (SI). In this reply I will engage with only one of the many subjects raised therein, the topic of, as I would term them, non-self-modifying planning Oracles, a.k.a. 'Google Maps AGI' a.k.a. 'tool AI', this being the topic that requires me personally to answer.  I hope that my reply will be accepted as addressing the most important central points, though I did not have time to explore every avenue.  I certainly do not wish to be logically rude, and if I have failed, please remember with compassion that it's not always obvious to one person what another person will think was the central point.

Luke Muehlhauser and Carl Shulman contributed to this article, but the final edit was my own, likewise any flaws.

Summary:

Holden's concern is that "SI appears to neglect the potentially important distinction between 'tool' and 'agent' AI." His archetypal example is Google Maps:

Google Maps is not an agent, taking actions in order to maximize a utility parameter. It is a tool, generating information and then displaying it in a user-friendly manner for me to consider, use and export or discard as I wish.

The reply breaks down into four heavily interrelated points:

First, Holden seems to think (and Jaan Tallinn apparently doesn't object, in their exchange) that if a non-self-modifying planning Oracle is indeed the best strategy, then all of SIAI's past and intended future work is wasted.  To me it looks like there's a huge amount of overlap in underlying processes in the AI that would have to be built and the insights required to build it, and I would be trying to assemble mostly - though not quite exactly - the same kind of team if I were trying to build a non-self-modifying planning Oracle, with the same initial mix of talents and skills.

Second, a non-self-modifying planning Oracle doesn't sound nearly as safe once you stop saying human-English phrases like "describe the consequences of an action to the user" and start trying to come up with math that says scary dangerous things like (here translated into English) "increase the correspondence between the user's belief about relevant consequences and reality".  This is why the people on the team would have to solve the same sorts of problems.
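
To make the point concrete, here is one way (illustrative notation of my own, not a formalism SI has committed to) that the innocuous-sounding English turns into an optimization criterion:

    o^* \;=\; \arg\max_{o}\; \mathbb{E}\left[\, \mathrm{corr}\left(B_{\mathrm{user}}(o),\; W\right) \right]

where o ranges over possible outputs, B_user(o) is the user's belief state after reading output o, W is the actual state of the world, and corr(.,.) is whatever measure of correspondence we pick. The quantity being maximized refers to a human mind and to the external world, which is exactly the kind of target whose specification has to be gotten right, just as with an agent's utility function.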

Appreciating the force of the third point is a lot easier if one appreciates the difficulties discussed in points 1 and 2, but the point itself is empirically verifiable independently:  Whether or not a non-self-modifying planning Oracle is the best solution in the end, it's not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it.  This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov.  At one point, Holden says:

One of the things that bothers me most about SI is that there is practically no public content, as far as I can tell, explicitly addressing the idea of a "tool" and giving arguments for why AGI is likely to work only as an "agent."

If I take literally that this is one of the things that bothers Holden most... I think I'd start stacking up some of the literature on the number of different things that just respectable academics have suggested as the obvious solution to what-to-do-about-AI - none of which would be about non-self-modifying smarter-than-human planning Oracles - and beg him to have some compassion on us for what we haven't addressed yet.  It might be the right suggestion, but it's not so obviously right that our failure to prioritize discussing it reflects negligence.

The final point at the end is looking over all the preceding discussion and realizing that, yes, you want to have people specializing in Friendly AI who know this stuff, but as all that preceding discussion is actually the following discussion at this point, I shall reserve it for later.

continue reading »

Help Fund Lukeprog at SIAI

40 Eliezer_Yudkowsky 24 August 2011 07:16AM

Singularity Institute desperately needs someone who is not me who can write cognitive-science-based material. Someone smart, energetic, able to speak to popular audiences, and with an excellent command of the science. If you’ve been reading Less Wrong for the last few months, you probably just thought the same thing I did: “SIAI should hire Lukeprog!” To support Luke Muehlhauser becoming a full-time Singularity Institute employee, please donate and mention Luke (e.g. “Yay for Luke!”) in the check memo or the comment field of your donation - or if you donate by a method that doesn’t allow you to leave a comment, tell Louie Helm (louie@intelligence.org) your donation was to help fund Luke.

Note that the Summer Challenge that doubles all donations will run until August 31st. (We're currently at $31,000 of $125,000.)

continue reading »

The $125,000 Summer Singularity Challenge

20 Kaj_Sotala 29 July 2011 09:02PM

From the SingInst blog:

Thanks to the generosity of several major donors, every donation to the Singularity Institute made now until August 31, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

Donate now!

(Visit the challenge page to see a progress bar.)

Now is your chance to double your impact while supporting the Singularity Institute and helping us raise up to $250,000 to help fund our research program and stage the upcoming Singularity Summit… which you can register for now!

$125,000 in backing for this challenge is being generously provided by Rob Zahra, Quixey, Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edström, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku.


2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of Slate have argued that “We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.” We couldn’t agree more — in fact, the Singularity Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the front cover of TIME magazine.

The last 1.5 years were our biggest ever. Since the beginning of 2010, we have:

In the coming year, we plan to do the following:

  • Hold our annual Singularity Summit, in New York City this year.
  • Publish three chapters in the upcoming academic volume The Singularity Hypothesis, along with several other papers.
  • Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
  • Publish a document of open research problems related to Friendly AI, to clarify the research space and encourage other researchers to contribute to our mission.
  • Add additional skilled researchers to our Research Associates program.
  • Publish well-researched documents making the case for existential risk reduction as optimal philanthropy.
  • Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.

We appreciate your support for our high-impact work. As PayPal co-founder and Singularity Institute donor Peter Thiel said:

“I’m interested in facilitating a forum in which there can be… substantive research on how to bring about a world in which AI will be friendly to humans rather than hostile… [The Singularity Institute represents] a combination of very talented people with the right problem space [they’re] going after… [They’ve] done a phenomenal job… on a shoestring budget. From my perspective, the key question is always: What’s the amount of leverage you get as an investor? Where can a small amount make a big difference? This is a very leveraged kind of philanthropy.”

Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed through Causes.com, Google Checkout, or PayPal. If you have questions about donating, please call Amy Willey at (586) 381-1801.

SIAI - An Examination

143 BrandonReinhart 02 May 2011 07:08AM

12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. Should be done by the end of the week or sooner.

 

Disclaimer

Notes

  • Images are now hosted on LessWrong.com.
  • The 2010 Form 990 data will be available later this month.
  • It is not my intent to propagate misinformation. Errors will be corrected as soon as they are identified.

Introduction

Acting on gwern's suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and more recently the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more.

I wrote on the mini-boot camp page a pledge that I would donate enough to send someone to rationality mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn’t sure. I read gwern’s article and realized that I could easily get more information to clarify my thinking.

So I downloaded the SIAI’s Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it.

My goal is not to convince you to donate to the SIAI. My goal is to provide you with information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.

continue reading »

Rationality Boot Camp

72 Jasen 22 March 2011 08:37AM

It’s been over a year since the Singularity Institute launched our ongoing Visiting Fellows Program and we’ve learned a lot in the process of running it.  This summer we’re going to try something different.  We’re going to run Rationality Boot Camp.

We are going to try to take ten weeks and fill them with activities meant to teach mental skills - if there's reading to be done, we'll tell you to get it done in advance.  We aren't just aiming to teach skills like betting at the right odds or learning how to take into account others' information, we're going to practice techniques like mindfulness meditation and Rejection Therapy (making requests that you know will be rejected), in order to teach focus, non-attachment, social courage and all the other things that are also needed to produce formidable rationalists.  Participants will learn how to draw (so that they can learn how to pay attention to previously unnoticed details, and see that they can do things that previously seemed like mysterious superpowers).  We will play games, and switch games every few days, to get used to novelty and practice learning.

We're going to run A/B tests on you, and track the results to find out which training activities work best, and begin the tradition of evidence-based rationality training.
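
For the statistically inclined, the analysis behind such a test can be quite simple. Here is a minimal sketch with made-up numbers (not our actual study design or data), comparing two training activities on some post-test metric:

    # Hypothetical analysis of an A/B test on two training activities
    # (made-up scores, illustrative only; assumes SciPy is available).
    from scipy import stats

    activity_a = [0.62, 0.71, 0.58, 0.66, 0.74, 0.69]  # post-test scores after activity A
    activity_b = [0.55, 0.60, 0.52, 0.63, 0.57, 0.59]  # post-test scores after activity B

    t_stat, p_value = stats.ttest_ind(activity_a, activity_b)
    print("t = %.2f, p = %.3f" % (t_stat, p_value))  # a small p suggests a real difference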

In short, we're going to start constructing the kind of program that universities would run if they actually wanted to teach you how to think.

continue reading »

Tallinn-Evans $125,000 Singularity Challenge

27 Kaj_Sotala 26 December 2010 11:21AM

Michael Anissimov posted the following on the SIAI blog:

Thanks to the generosity of two major donors; Jaan Tallinn, a founder of Skype and Ambient Sound Investments, and Edwin Evans, CEO of the mobile applications startup Quinly, every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

Interested in optimal philanthropy — that is, maximizing the future expected benefit to humanity per charitable dollar spent? The technological creation of greater-than-human intelligence has the potential to unleash an “intelligence explosion” as intelligent systems design still more sophisticated successors. This dynamic could transform our world as greatly as the advent of human intelligence has already transformed the Earth, for better or for worse. Thinking rationally about these prospects and working to encourage a favorable outcome offers an extraordinary chance to make a difference. The Singularity Institute exists to do so through its research, the Singularity Summit, and public education.

We support both direct engagement with the issues and the improvements in methodology and rationality needed to make better progress. Through our Visiting Fellows program, researchers from undergrads to Ph.D.s pursue questions on the foundations of Artificial Intelligence and related topics in two-to-three month stints. Our Resident Faculty, now four researchers, up from three last year, pursues long-term projects, including AI research, a literature review, and a book on rationality, the first draft of which was just completed. Singularity Institute researchers and representatives gave over a dozen presentations at half a dozen conferences in 2010. Our Singularity Summit conference in San Francisco was a great success, bringing together over 600 attendees and 22 top scientists and other speakers to explore cutting-edge issues in technology and science.

We are pleased to receive donation matching support this year from Edwin Evans of the United States, a long-time Singularity Institute donor, and Jaan Tallinn of Estonia, a more recent donor and supporter. Jaan recently gave a talk on the Singularity and his life at an entrepreneurial group in Finland. Here’s what Jaan has to say about us:

“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us. Since we have only one shot at getting the transition right, the importance of SIAI’s work cannot be overestimated. Not finding any organisation to take up this challenge as seriously as SIAI on my side of the planet, I conclude that it’s worth following them across 10 time zones.”
– Jaan Tallinn, Singularity Institute donor

Make a lasting impact on the long-term future of humanity today — make a donation to the Singularity Institute and help us reach our $125,000 goal. For more detailed information on our projects and work, contact us at institute@intelligence.org or read our new organizational overview.

-----

Kaj's commentary: if you haven't done so recently, do check out the SIAI publications page. There are several new papers and presentations, out of which I thought that Carl Shulman's Whole Brain Emulations and the Evolution of Superorganisms made for particularly fascinating (and scary) reading. SIAI's finally starting to get its paper-writing machinery into gear, so let's give them money to make that possible. There's also a static page about this challenge; if you're on Facebook, please take the time to "like" it there.

(Full disclosure: I was an SIAI Visiting Fellow in April-July 2010.)

The danger of living a story - Singularity Tropes

23 patrissimo 14 November 2010 10:39PM

The following should sound familiar:

A thoughtful and observant young protagonist dedicates their life to fighting a great world-threatening evil unrecognized by almost all of their short-sighted elders (except perhaps for one encouraging mentor), gathering a rag-tag band of colorful misfits along the way and forging them into a team by accepting their idiosyncrasies and making the most of their unique abilities, winning over previously neutral allies, ignoring those who just don't get it, obtaining or creating artifacts of great power, growing and changing along the way to become more powerful, fulfilling the potential seen by their mentors/supporters/early adopters, while becoming more human (greater empathy, connection, humility) as they collect resources to prepare for their climactic battle against the inhuman enemy.

Hmm, sounds a bit like SIAI!  (And while I'm throwing stones, let me make it clear that I live in a glass house, since the same story could just as easily be adapted to TSI, my organization, as well as many others.)

This story is related to Robin's Abstract/Distant Future Bias:

Regarding distant futures, however, we’ll be too confident, focus too much on unlikely global events, rely too much on trends, theories, and loose abstractions, while neglecting details and variation.  We’ll assume the main events take place far away (e.g., space), and uniformly across large regions.  We’ll focus on untrustworthy consistently-behaving globally-organized social-others.  And we’ll neglect feasibility, taking chances to achieve core grand symbolic values, rather than ordinary muddled values.

More bluntly, we seem primed to confidently see history as an inevitable march toward a theory-predicted global conflict with an alien united them determined to oppose our core symbolic values, making infeasible overly-risky overconfident plans to oppose them.  We seem primed to neglect the value and prospect of trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures, mostly getting along peacefully in vastly-varied uncoordinated and hard-to-predict local cultures and life-styles. 

Living a story is potentially risky. Tyler Cowen, for example, warns us to be cautious of stories: there are far fewer stories than there are real scenarios, so stories must oversimplify.  Our view of the future may be colored by a "fiction bias", which leads us to expect outcomes like those we see in movies (climactic battles, generally interesting events following a single plotline).  Thus stories threaten both epistemic rationality (we assume the real world is more like stories than it is) and instrumental rationality (we assume the best actions to effect real-world change are those which story heroes take).

Yet we'll tend to live stories anyway because it is fun - it inspires supporters, allies, and protagonists.  The marketing for "we are an alliance to fight a great unrecognized evil" can be quite emotionally evocative.  Including in our own self-narrative, which means we'll be tempted to buy into a story whether or not it is correct.  So while living a fun story is a utility benefit, it also means that story causes are likely to be over-represented among all causes, as they are memetically attractive.  This is especially true for the story that there is risk of great, world-threatening evil, since those who believe it are inclined to shout it from the rooftops, while those who don't believe it get on with their lives.  (There are, of course, biases in the other direction as well).

Which is not to say that all aspects of the story are wrong - advancing an original idea to greater prominence (scaling) will naturally lead to some of these tropes - most people disbelieving, a few allies, winning more people over time, eventual recognition as a visionary.  And Michael Vassar suggests that some of the tropes arise as a result of "trying to rise in station beyond the level that their society channels them towards".  For these aspects, the tropes may contain evolved wisdom about how our ancestors negotiated similar situations.

And whether or not a potential protagonist believes in this wisdom, the fact that others do will surely affect marketing decisions.  If Harry wishes to not be seen as Dark, he must care what others see as the signs of a Dark Wizard, whether or not he agrees with them.  If potential collaborators have internalized these stories, skillful protagonists will invoke them in recruiting, converting, and team-building.  Yet the space of story actions is constrained, and the best strategy may sometimes lie far outside them.

Since this is not a story, we are left with no simple answer.  Many aspects of stories are false but resonate with us, and we must guard against them lest they contaminate our rationality.  Others contain wisdom about how those like us have navigated similar situations in the past - we must decide whether the similarities are true or superficial.  The most universal stories are likely to be the most effective in manipulating others, which any protagonist must do to amplify their own efforts in fighting for their cause.  Some of these universal stories are true and generally applicable, like scaling techniques, yet the set of common tropes seems far too detailed to reflect universal truths rather than arbitrary biases of humanity and our evolutionary history.

May you live happily ever after (vanquishing your inhuman enemy with your team of true friends, bonded through a cause despite superficial dissimilarities).

The End.

What I would like the SIAI to publish

27 XiXiDu 01 November 2010 02:07PM

Major update here.

Related to: Should I believe what the SIAI claims?

Reply to: Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)

... pointing out that something scary is possible, is a very different thing from having an argument that it’s likely. — Ben Goertzel

What I ask for:

I want the SIAI or someone who is convinced of the Scary Idea [1] to state concisely and mathematically (with extensive references if necessary) the decision procedure that led them to make the development of friendly artificial intelligence their top priority. I want them to state the numbers of their subjective probability distributions [2] and to walk through their chain of reasoning, showing how they arrived at those numbers and not others by way of sober calculation.
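
For concreteness, the kind of calculation I have in mind looks something like the following (the symbols are placeholders of my own for illustration, not numbers SIAI has published):

    \mathbb{E}[\text{value of prioritizing FAI research}] \;\approx\; P(\text{AGI is built}) \cdot P(\text{catastrophe} \mid \text{AGI},\, \neg\text{safety research}) \cdot P(\text{safety research averts it}) \cdot V(\text{future saved}) \;-\; C(\text{resources diverted from other risks})

with each factor given as an explicit subjective probability distribution rather than a bare point estimate, so that readers can substitute their own numbers and see exactly where they disagree.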

The paper should also account for the following uncertainties:

  • Comparison with other existential risks and how catastrophic risks from artificial intelligence outweigh them.
  • Potential negative consequences [3] of slowing down research on artificial intelligence (a risks and benefits analysis).
  • The likelihood of a gradual and controllable development versus the likelihood of an intelligence explosion.
  • The likelihood of unfriendly AI [4] versus friendly or abulic [5] AI.
  • Whether superhuman intelligence and cognitive flexibility alone are enough to constitute a serious risk, given the absence of enabling technologies like advanced nanotechnology.
  • The feasibility of “provably non-dangerous AGI”.
  • The disagreement of the overwhelming majority of scientists working on artificial intelligence.
  • That some people who are aware of the SIAI’s perspective do not accept it (e.g. Robin Hanson, Ben Goertzel, Nick Bostrom, Ray Kurzweil and Greg Egan).
  • Possible conclusions that can be drawn from the Fermi paradox [6] regarding risks associated with superhuman AI versus other potential risks ahead.

Further, I would like the paper to lay out a formal and systematic summary of what the SIAI expects researchers who work on artificial general intelligence to do and why they should do so. I would like to see a clear logical argument for why people working on artificial general intelligence should listen to what the SIAI has to say.

Examples:

Here are two examples of what I'm looking for:

The first example is Robin Hanson demonstrating his estimation of the simulation argument. The second example is Tyler Cowen and Alex Tabarrok presenting the reasons for their evaluation of the importance of asteroid deflection.

Reasons:

I'm wary of using inferences derived from reasonable but unproven hypotheses as foundations for further speculative thinking and calls for action. Although the SIAI does a good job of stating reasons to justify its existence and monetary support, it neither substantiates its initial premises to an extent that would let an outsider draw conclusions about the probability of the associated risks, nor does it clarify its position regarding contemporary research in a concise and systematic way. Nevertheless, such estimations are given, such as the claim that there is a high likelihood of humanity's demise if we develop superhuman artificial general intelligence without first defining mathematically how to prove its benevolence. But those estimations are not derived in the open: no decision procedure is provided for how to arrive at the given numbers, and one cannot reassess the estimations without the necessary variables and formulas. This, I believe, is unsatisfactory; it lacks transparency and a foundational, reproducible corroboration of one's first principles. This is not to say that it is wrong to state probability estimates and update them given new evidence, but that, although those ideas can well serve as an urge to caution, they are not compelling without further substantiation.


1. If anyone who is actively trying to build advanced AGI succeeds, it is highly likely to cause an involuntary end to the human race.

2. "Stop taking the numbers so damn seriously, and think in terms of subjective probability distributions [...]" (Michael Anissimov, existential.ieet.org mailing list, 2010-07-11)

3. Could being overcautious itself be an existential risk that might significantly outweigh the risk(s) posed by the subject of caution? Suppose that most civilizations err on the side of caution. This might cause them either to develop much more slowly, so that the chance of a fatal natural disaster occurring before sufficient technology exists to survive it rises to 100%, or to stop developing altogether, because they are unable to prove anything 100% safe before trying it and thus never take the steps needed to become less vulnerable to naturally existing existential risks. Further reading: Why safety is not safe

4. If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low.

5. Loss or impairment of the ability to make decisions or act independently.

6. The Fermi paradox provides the only data we can analyze that amounts to empirical criticism of concepts like the paperclip maximizer, and of general risks from superhuman AIs with non-human values, short of working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then even one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering. Given the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.
