Nick_Beckstead comments on A Proposed Adjustment to the Astronomical Waste Argument - Less Wrong

Post author: Nick_Beckstead 27 May 2013 03:39AM

Comment author: Eliezer_Yudkowsky 27 May 2013 04:58:17PM 14 points

I worry that this post seems very abstract.

The specific case I've made for "just build the damn FAI" does not revolve only around astronomical waste, but also around subtheses like:

  • Stable goals in sufficiently advanced self-improving minds imply very strong path dependence up to the point where the mind becomes sufficiently advanced, and no ability to correct mistakes beyond that point
  • Friendly superintelligences negate other x-risks once developed
  • CEV (more generally, indirect normativity) implies that there exists a broad class of roughly equivalently-expectedly-good optimal states if we can pass a satisficing test (i.e., in our present state of uncertainty about the details of goodness, we would expect something like CEV to be around as good as it gets, assuming you can build a CEV-SI); there is not much gain from making FAI programmers marginally nicer people or giving them marginally better moral advice, provided that they are satisficingly non-jerks who try to build an indirectly normative AI
  • Very little path dependence of the far future on anything except the satisficing test of building a good-enough FAI, because a superintelligent singleton has enough power to correct any bad inertia going into that point
  • The Fragility of Value thesis implies that value drops off very fast short of a CEV-style FAI (the kinds of mistakes that people like to imagine leading to flawed-utopia story outcomes will actually just kill you instantly when blown up to a superintelligent scale), so there's not much point in trying to make things nicer underneath this threshold
  • FAI is hard (technically far harder than the nearly nonexistent quantity and quality of work that we've seen most current AGI people intending to put into it, mainstream leaders anticipating a need for, or current agencies funding); so most of the x-risk comes from failure to solve the technical problem
  • Trying to ensure that "Western democracies remain the most advanced and can build AI first" or that "evil corporations don't have so much power that they can influence AI-building" is missing the point (and a rather obvious attempt to map the problem onto someone's favorite mundane political hobbyhorse), because goodness is not magically sneezed into the AI by well-intentioned builders, and the favorite-good-guy-of-the-week is not making anything like a preliminary good-faith effort to do high-quality work on technical FAI problems, and probably won't do so tomorrow either

You can make a case for MIRI with fewer requirements than that, but my model of the future is that it's just a pass-fail test on building indirectly normative stable self-improving AI, before any event occurs which permanently prevents anyone from building FAI (mostly self-improving UFAI (possibly neuromorphic) but also things like nanotechnological warfare). If you think that building FAI is a done deal because it's such an easy problem (or because likely builders are already guaranteed to be supercompetent), you'd focus on preventing nanotechnological warfare or something along those lines. To me it looks more like we're way behind on our dues.

Comment author: Nick_Beckstead 27 May 2013 05:37:40PM 8 points

Eliezer, I see this post as a response to Nick Bostrom's papers on astronomical waste and not a response to your arguments that FAI is an important cause. I didn't intend for this post to be any kind of evaluation of FAI as a cause or MIRI as an organization supporting that cause. Evaluating FAI as a cause would require lots of analysis I didn't attempt, including:

  • Whether many of the claims you have made above are true

  • How effectively we can expect humanity in general to respond to AI risk absent our intervention

  • How tractable the cause of improving humanity's response is

  • How much effort is currently going into this cause

  • Whether the cause could productively absorb additional resources

  • What our leading alternatives are

My arguments are most relevant to evaluating FAI as a cause for people whose interest in FAI depends heavily on their acceptance of Bostrom's astronomical waste argument. Based on informal conversations, there seem to be a number of people who fall into this category. My own view is that whether FAI is a promising cause is not heavily dependent on astronomical waste considerations, and more dependent on many of these messy details.

Comment author: Eliezer_Yudkowsky 27 May 2013 06:28:04PM 7 points

Mm, k. I was trying more to say that I got the same sense from your post that Nick Bostrom seems to have gotten at the point where he worried about completely general and perfectly sterile analytic philosophy. Maxipok isn't derived just from the astronomical waste part; it's derived from pragmatic features of actual x-risk problems that lead to ubiquitous threshold effects that define "okayness" - most obviously Parfit's "Extinguishing the last 1000 people is much worse than extinguishing seven billion minus a thousand people", but also including things like satisficing indirect normativity and unfriendly AIs going FOOM. The question of how far x-risk thinking has properly adapted to that pragmatic landscape, rather than just being derived from very abstract a priori considerations, was what gave me that worried sense of overabstraction while reading the OP; and that triggered my reflex to start throwing out concrete examples to see what happened to the abstract analysis in each case.

Comment author: Nick_Beckstead 27 May 2013 06:48:29PM 6 points

It may be overly abstract. I'm a philosopher by training and I have a tendency to get overly abstract (which I am working on).

I agree that there are important possibilities with threshold effects, such as extinction and perhaps your point about thresholds with indirectly normative AIs. I also think that other scenarios, such as Robin Hanson's scenario, other decentralized market/democracy set-ups, and scenarios we can't yet think of, are live possibilities. More continuous trajectory changes may be very relevant in these other scenarios.

Comment author: Pablo_Stafforini 28 May 2013 03:36:31AM 8 points

For what it's worth, I loved this post and don't think it was very abstract. Then again, my background is also in philosophy.