Eliezer_Yudkowsky comments on How does MIRI Know it Has a Medium Probability of Success? - Less Wrong

Post author: peter_hurford 01 August 2013 11:42AM

Comment author: Eliezer_Yudkowsky 02 August 2013 08:51:38AM 24 points [-]

Most fundamentally, it's based on taking at face value a world in which nobody appears to be doing similar work or to care sufficiently to do so. In the world taken at face value, MIRI is the only organization running MIRI's workshops and trying to figure out things like tiling self-modifying agents and getting work started early on what is probably a highly serial, time-sensitive task.

Success is defined most obviously as actually constructing an FAI, and it would be very dangerous to have any organizational model in which we were not trying to do this (someone who conceives of themselves as an ethicist whose duty it is to lecture others, and does not intend to solve the problem themselves, is exceedingly unlikely to confront the hardest problems). But of course if our work were picked up elsewhere and reused after MIRI itself died as an organization for whatever reason, or if in any general sense the true history as written in the further future says that MIRI mattered, I should not count my life wasted, nor feel that we had let down MIRI's donors.

Comment author: Furcas 02 August 2013 02:55:33PM *  9 points [-]

OK, but that doesn't increase the probability to 'medium' from the very low initial probability that MIRI, or another organization benefiting from MIRI's work, will solve the extremely hard problem of Friendly AI before anyone else screws it up.

I've read all your posts in the threads linked by the OP, and if multiplying the high beneficial impact of Friendly AI by the low probability of success isn't allowed, I honestly don't see why I should donate to MIRI.

Comment author: Eliezer_Yudkowsky 02 August 2013 05:42:05PM 13 points [-]

If this were a regular math problem and it weren't world-shakingly important, why wouldn't you expect that funding workshops and then researchers would cause progress on it?

Assigning a very low probability to progress rests on a sort of backwards reasoning wherein you expect it to be difficult to do things because they are important. The universe contains no such rule. They're just things.

It's hard to add a significant marginal fractional pull to a rope that many other people are pulling on. But this is not a well-tugged rope!

Comment author: Furcas 02 August 2013 07:19:58PM *  10 points [-]

I'm not assigning a low probability to progress, I'm assigning a low probability to success.

Where FAI research is concerned, progress is only relevant in as much as it increases the probability of success, right?

Unlike a regular math problem, you've only got one shot at getting it right, and you're in a race with other researchers who are working on an easier problem (seed AI, Friendly or not). It doesn't matter if you're 80% of the way there if we all die first.

Edited to add and clarify: Even accounting for the progress I think you're likely to make, the probability of success remains low, and that's what I care about.

Comment author: Eliezer_Yudkowsky 02 August 2013 08:56:39PM 6 points [-]

Clarifying question: What do you think is MIRI's probability of having been valuable, conditioned on a nice intergalactic future being true?

Comment author: Furcas 02 August 2013 09:12:42PM 1 point [-]

Pretty high. More than 10%, definitely. Maybe 50%?

Comment author: CarlShulman 02 August 2013 09:56:33PM *  13 points [-]

A non-exhaustive list of some reasons why I strongly disagree with this combination of views:

  • AI which is not vastly superhuman can be restrained from crime, because humans can be so restrained, and with AI, designers have the benefit of being able to alter the mind's parameters (desires, intuitions, capability for action, duration of extended thought, etc.) and inhibitions, test copies in detail, read out its internal states, and so on, making the problem vastly easier (although control may need to be tight if one is holding back an intelligence explosion while this is going on)
  • If 10-50 humans can solve AI safety (and build AGI!) in less than 50 years, then 100-500 not very superhuman AIs at 1200x speedup should be able to do so in less than a month (see the arithmetic sketch after this list)
  • There are a variety of mechanisms by which humans could monitor, test, and verify the work conducted by such systems
  • The AIs can also work on incremental improvements to the control mechanisms being used initially, with steady progress allowing greater AI capabilities to develop better safety measures, until one approaches perfect safety
  • If a small group can solve all the relevant problems over a few decades, then probably a large portion of the AI community (and beyond) can solve the problems in a fraction of the time if mobilized
  • As AI becomes visibly closer such mobilization becomes more likely
  • Developments in other fields may make things much easier: better forecasting, cognitive enhancement, brain emulations coming first, global peace/governance
  • The broad shape of AI risk is known and considered much more widely than MIRI: people like Bill Gates and Peter Norvig consider it, but think that acting on it now is premature; if they saw AGI as close, or were creating it themselves, they would attend to the control problems
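
As arithmetic, the speedup claim in the second bullet checks out (a back-of-the-envelope sketch, using only the figures given in that bullet; illustrative, not anyone's precise model):

```python
# Back-of-the-envelope check of the speedup claim in the second bullet.
# All inputs are the illustrative figures from that bullet, not independent estimates.

human_team = 50       # 10-50 humans (take the high end)
human_years = 50      # "less than 50 years"
total_work = human_team * human_years          # ~2,500 person-years of research

ai_team = 500         # 100-500 not-very-superhuman AIs (high end)
speedup = 1200        # each runs at 1200x human speed
months = 1            # one calendar month

ai_output = ai_team * speedup * (months / 12)  # person-year equivalents delivered
print(ai_output, total_work, ai_output >= total_work)
# -> 50000.0 2500 True: a 20x margin, so "less than a month" is conservative
```
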
Comment author: Wei_Dai 03 August 2013 11:40:55AM 9 points [-]

Paul Christiano, and now you, have started using the phrase "AI control problems". I've gone along with it in my discussions with Paul, but before many people start adopting it maybe we ought to talk about whether it makes sense to frame the problem that way (as opposed to "Friendly AI"). I see a number of problems with it:

  1. Control != Safe or Friendly. An AI can be perfectly controlled by a human and be extremely dangerous, because most humans aren't very altruistic or rational.
  2. The framing implicitly suggests (and you also explicitly suggest) that the control problem can be solved incrementally. But I think we have reason to believe this is not the case, that in short "safety for superintelligent AIs" = "solving philosophy/metaphilosophy" which can't be done by "incremental improvements to the control mechanisms being used initially".
  3. "Control" suggests that the problem falls in the realm of engineering (i.e., belongs to the reference class of "control problems" in engineering, such as "aircraft flight control"), whereas, again, I think the real problem is one of philosophy (plus lots of engineering as well of course, but philosophy is where most of the difficulty lies). This makes a big difference in trying to predict the success of various potential attempts to solve the problem, and I'm concerned that people will underestimate the difficulty of the problem or overestimate the degree to which it's parallelizable or generally amenable to scaling with financial/human resources, if the problem becomes known as "AI control".

Do you disagree with this, on either the terminological issue ("AI control" suggests "incremental engineering problem") or the substantive issue (the actual problem we face is more like philosophy than engineering)? If the latter, I'm surprised not to have seen you talk about your views on this topic earlier, unless you did and I missed it?

Comment author: CarlShulman 03 August 2013 12:59:58PM 3 points [-]

Thanks for those thoughts.

Nick Bostrom uses the term in his book, and it's convenient for separating out pre-existing problems with "we don't know what to do with our society long term, nor is it engineered to achieve that" and the particular issues raised by AI.

But I think we have reason to believe this is not the case, that in short "safety for superintelligent AIs" = "solving philosophy/metaphilosophy" which can't be done by "incremental improvements to the control mechanisms being used initially".

In the situation I mentioned, the AIs are not vastly superintelligent initially (and capabilities can vary along multiple dimensions; e.g. one can have many compartmentalized copies of an AI system that collectively deliver a huge number of worker-years without any one of them possessing extraordinary capabilities).

What is your take on the strategy-swallowing point: if humans can do it, then not very superintelligent AIs can?

"Control" suggests that the problem falls in the realm of engineering (i.e., belongs to the reference class of "control problems" in engineering, such as "aircraft flight control")...I'm concerned that people will underestimate the difficulty of the problem or overestimate the degree to which it's parallelizable or generally amenable to scaling with financial/human resources, if the problem becomes known as "AI control".

There is an ambiguity there. I'll mention it to Nick. But, e.g. Friendliness just sounds silly. I use "safe" too, but safety can be achieved just by limiting capabilities, which doesn't reflect the desire to realize the benefits.

Comment author: Wei_Dai 03 August 2013 04:38:44PM 6 points [-]

What is your take on the strategy-swallowing point: if humans can do it, then not very superintelligent AIs can?

It's easy to imagine AIXI-like Bayesian EU maximizers that are powerful optimizers but incapable of solving philosophical problems like consciousness, decision theory, and foundations of mathematics, which seem to be necessary in order to build an FAI. It's possible that that's wrong, that one can't actually get to "not very superintelligent AIs" unless they possess the same level of philosophical ability that humans have, but it certainly doesn't seem safe to assume this.

BTW, what does "strategy-swallowing" mean? Just "strategically relevant", or more than that?

But, e.g. Friendliness just sounds silly. I use "safe" too, but safety can be achieved just by limiting capabilities, which doesn't reflect the desire to realize the benefits.

I suggested "optimal AI" to Luke earlier, but he didn't like that. Here are some more options to replace "Friendly AI" with: human-optimal AI, normative AI (rename what I called "normative AI" in this post to something else), AI normativity. It would be interesting and useful to know what options Eliezer considered and discarded before settling on "Friendly AI", and what options Nick considered and discarded before settling on "AI control".

(I wonder why Nick doesn't like to blog. It seems like he'd want to run at least some of the more novel or potentially controversial ideas in his book by a wider audience, before committing them permanently to print.)

Comment author: torekp 04 August 2013 03:20:19AM 1 point [-]

it's convenient for separating out pre-existing problems with "we don't know what to do with our society long term, nor is it engineered to achieve that" and the particular issues raised by AI.

I don't think that separation is a good idea. Not knowing what to do with our society long term is a relatively tolerable problem until an upcoming change raises a significant prospect of locking in some particular vision of society's future. (Wei Dai raises similar points in your exchange of replies, but I thought this framing might still be helpful.)

Comment author: Vladimir_Nesov 03 August 2013 12:38:16PM *  1 point [-]

If we are talking about a goal-definition-evaluating AI (and Paul was probably thinking in the context of some sort of indirect normativity), "control" seems like a reasonable fit. The primary philosophical issue for that part of the problem is decision theory.

(I agree that it's a bad term for referring to FAI itself, if we don't presuppose a method of solution that is not Friendliness-specific.)

Comment author: Kawoomba 03 August 2013 08:29:52AM 5 points [-]

What do you think is MIRI's probability of having been valuable, conditioned on a nice intergalactic future being true?

More than 10%, definitely. Maybe 50%?

A non-exhaustive list of some reasons why I strongly disagree with this combination of views

Not that it should be used to dismiss any of your arguments, but reading your other comments in this thread I thought you must be playing devil's advocate. Your phrasing here seems to preclude that possibility.

If you are so strongly convinced that while AGI is a non-negligible x-risk, MIRI will probably turn out to have been without value even if a good AGI outcome were to be eventually achieved, why are you a research fellow there?

I'm puzzled. Let's consider an edge case: even if MIRI's actual research turned out to be strictly non-contributing to an eventual solution, there's no reasonable doubt that it has raised awareness of the issue significantly (in relative terms).

Would the current situation with the CSER or FHI be unchanged or better if MIRI had never existed? Do you think those have a good chance of being valuable in bringing about a good outcome? Answering 'no' to the former and 'yes' to the latter would transitively imply that MIRI is valuable as well.

I.e. that alone -- never mind actual research contributions -- would make it valuable in hindsight, given an eventual positive outcome. Yet you're strongly opposed to that view?

Comment author: CarlShulman 03 August 2013 01:17:26PM *  7 points [-]

The "combination of views" includes both high probability of doom, and quite high probability of MIRI making the counterfactual difference given survival. The points I listed address both.

If you are so strongly convinced that while AGI is a non-negligible x-risk, MIRI will probably turn out to have been without value even if a good AGI outcome were to be eventually achieved, why are you a research fellow there?

I think MIRI's expected impact is positive and worthwhile. I'm glad that it exists, and that it and Eliezer specifically have made the contributions they have relative to a world in which they never existed. A small share of the value of the AI safety cause can be quite great. That is quite consistent with thinking that "medium probability" is a big overestimate for MIRI making the counterfactual difference, or that civilization is almost certainly doomed from AI risk otherwise.

Lots of interventions are worthwhile even if a given organization working on them is unlikely to make the counterfactual difference. Most research labs working on malaria vaccines won't invent one, most political activists won't achieve big increases in foreign aid or immigration levels or swing an election, most counterproliferation expenditures won't avert nuclear war, asteroid tracking was known ex ante to be far more likely to discover we were safe than that there was an asteroid on its way and ready to be stopped by a space mission.

The threshold for an x-risk charity of moderate scale to be worth funding is not a 10% chance of literally counterfactually saving the world from existential catastrophe. Annual world GDP is $80,000,000,000,000, and wealth including human capital and the like will be in the quadrillions of dollars. A 10% chance of averting x-risk would be worth trillions of present dollars.

We've spent tens of billions of dollars on nuclear and bio risks, and even $100,000,000+ on asteroids (averting dinosaur-killer risk on the order of 1 in 100,000,000 per annum). At that exchange rate again a 10% x-risk impact would be worth trillions of dollars, and governments and philanthropists have shown that they are ready to spend on x-risk or GCR opportunities far, far less likely to make a counterfactual difference than 10%.
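
The implied exchange rate can be made explicit (a back-of-the-envelope sketch using only the figures from this comment):

```python
# Sketch of the exchange-rate argument above. All figures are the
# illustrative ones from this comment, not independent estimates.

asteroid_spend = 100e6     # $100,000,000+ spent on asteroid tracking
asteroid_risk = 1e-8       # dinosaur-killer risk ~1 in 100,000,000 per annum

# Implied willingness-to-pay per unit of annual existential-risk reduction:
dollars_per_unit_risk = asteroid_spend / asteroid_risk   # $1e16

x_risk_impact = 0.10       # a 10% chance of averting existential catastrophe
print(f"${x_risk_impact * dollars_per_unit_risk:,.0f}")
# -> $1,000,000,000,000,000 -- comfortably "trillions of present dollars"
```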

Comment author: Kawoomba 03 August 2013 08:18:44PM 1 point [-]

I see. We just used different thresholds for "valuable": you used "high probability of MIRI making the counterfactual difference given survival", while for me, even just speeding Norvig/Gates/whoever a couple of years along the path until they devote efforts to FAI would be valuable, even if it were unlikely to Make The Difference (tm).

Whoever would turn out to have solved the problem, it's unlikely that their AI safety evaluation process ("Should I do this thing?") would work in a strict vacuum, i.e. whoever will one day have evaluated the topic and made up their mind to Save The World will be highly likely to have stumbled upon MIRI's foundational work. Given that at least some of the steps in solving the problem are likely to be quite serial (sequential) in nature, the expected scenario would be that MIRI's legacy would at least provide some speed-up; a contribution which, again, I'd call valuable, even if it were unlikely to make or break the future.

If the Gates Foundation had someone evaluate the evidence for AI-related x-risk right now, you probably wouldn't expect MIRI research, AI researcher polls, philosophical essays etc. to be wholly disregarded.

Comment author: Dr_Manhattan 05 August 2013 05:14:03PM *  0 points [-]

combination of views

Sorry, it's hard to tell from the thread which combination of views is meant. Eliezer's?

Comment author: CarlShulman 05 August 2013 07:06:03PM 2 points [-]

The view presented by Furcas, of probable doom, and "[m]ore than 10%, definitely. Maybe 50%" probability that MIRI will be valuable given the avoidance of doom, which in the context of existential risk seems to mean averting the risk.

Comment author: Eliezer_Yudkowsky 02 August 2013 09:35:41PM 5 points [-]

...um.

It seems to me that if I believed what I infer you believe, I would be donating to MIRI while frantically trying to figure out some way to have my doomed world actually be saved.

Comment author: Furcas 02 August 2013 10:09:15PM *  3 points [-]

It seems to me that if I believed what I infer you believe, I would be donating to MIRI

Why? You (and everybody else) will almost certainly fail anyway, and you say I shouldn't multiply this low probability by the utility of saving the world.

while frantically trying to figure out some way to have my doomed world actually be saved.

The only way I see is what MIRI is doing.

Edited to add: While this is interesting, what I was really asking in my first post is, if you think the odds of MIRI succeeding are not low, why do you think so?

Comment author: Eliezer_Yudkowsky 02 August 2013 10:29:21PM 11 points [-]

Because sometimes the impossible can be done, and I don't know how to estimate the probability of that. What would you have estimated in advance, without knowing the result, was the chance of success for the AI-Box Experiment? How about if I told you that I was going to write the most popular Harry Potter fanfiction in the world and use it to recruit International Mathematical Olympiad medalists? There may be true impossibilities in this world. Eternal life may be one such, if the character of physical law is what it appears to be, to our sorrow. I do not think that FAI is one of those. So I am going to try. We can work out what the probability of success was after we have succeeded. The chance which is gained is not gained by turning away or by despair, but by continuing to engage with and attack the problem, watching for opportunities and constantly advancing.

If you don't believe me about that aspect of heroic epistemology, feel free not to believe me about not multiplying small probabilities either.

Comment author: Wei_Dai 05 August 2013 05:47:24AM *  9 points [-]

If you don't believe me about that aspect of heroic epistemology, feel free not to believe me about not multiplying small probabilities either.

Multiplying small probabilities seems fine to me, whereas I really don't get "heroic epistemology".

You seem to be suggesting that "heroic epistemology" and "multiplying small probabilities" both lead to the same conclusion: support MIRI's work on FAI. But this is the case only if working on FAI has no negative consequences. In that case, "small chance of success" plus "multiplying small probabilities" warrants working on FAI, just as "medium probability of success" and "not multiplying small probabilities" does. But since working on FAI does have negative consequences, namely shortening AI timelines and (in the later stages) possibly directly causing the creation of a UFAI, just allowing multiplication by small probabilities is not sufficient to warrant working on FAI if the probability of success is low.
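
To make the asymmetry concrete (a toy expected-value comparison; the numbers are purely illustrative, not anyone's actual estimates):

```python
# Toy illustration of the point above: once working on FAI carries a
# downside risk, permitting multiplication of small probabilities is no
# longer enough on its own. All numbers are made up for illustration.

U_WIN = 1.0     # utility of achieving FAI (normalized)
U_LOSS = -1.0   # utility of shortening timelines / enabling a UFAI (normalized)

def expected_value(p_success: float, p_backfire: float) -> float:
    return p_success * U_WIN + p_backfire * U_LOSS

print(expected_value(0.5, 0.01))    # medium success odds: clearly positive
print(expected_value(0.001, 0.01))  # small success odds, same downside: negative
```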

I am really worried that you are justifying your current course of action through a novel epistemology of your own invention, which has not been widely vetted (or even widely understood). Most new ideas are wrong, and I think you ought to treat your own new ideas with deeper suspicion.

Comment author: CarlShulman 04 August 2013 04:43:40AM *  9 points [-]

heroic epistemology

Could you give a more precise statement of what this is supposed to entail?

Comment author: Furcas 02 August 2013 11:37:09PM 0 points [-]

The most charitable way I can interpret this is:

"Yeah, the middle point of my probability interval for a happy ending is very low, but the interval is large enough that its upper bound isn't that low, so it's worth my time and your money trying to reach a happy ending."

Am I right?

feel free not to believe me about not multiplying small probabilities either.

I don't. :)

Comment author: roystgnr 02 August 2013 06:48:16PM 3 points [-]

a world in which nobody appears to be doing similar work or care sufficiently to do so.

This is astonishingly good evidence that MIRI's efforts will not be wasted via redundancy, de facto "failure" only because someone else will independently succeed first.

But it's actually (very weak) evidence against the proposition that MIRI's efforts will not be wasted because you've overestimated the problem, and it isn't evidence either way concerning the proposition that you haven't overestimated the problem but nobody will succeed at solving it.

Comment author: Eliezer_Yudkowsky 02 August 2013 08:58:48PM 2 points [-]

You're asking about the probability of having some technical people get together and solve basic research problems. I don't see why anyone else should expect to know more about that than MIRI workshop participants. Besides backward reasoning from the importance of a good result (which ordinarily operates through implying already-well-tugged ropes), is there any reason why you should be more skeptical of this than of any other piece of basic research on an important problem?

Comment author: roystgnr 03 August 2013 03:40:10PM 8 points [-]

I'm concerned about the probability of having some technical people get together and solve some incredibly deep research problems before some perhaps-slightly-less-technical people plough ahead and get practical results without the benefit of that research. I'm skeptical that we'll see FAI before UFAI for the same reason I'm skeptical that we'll see a Navier-Stokes existence proof before a macroscale DNS (direct numerical simulation) solution, and I'm skeptical that we'll prove P!=NP or even find a provably secure encryption scheme before making the world's economy dependent on unproven schemes, etc.

Even some of the important subgoals of FAI, being worked on with far more resources than MIRI has yet, are barely showing on the radar. IIRC someone only recently produced a provably correct C compiler (and in the process exposed a bunch of bugs in the industry-standard compilers) -- wouldn't we feel foolish if provably Friendly, human-readable code turned UnFriendly simply because a bug was automatically introduced in the compilation? Or if a cosmic ray or slightly-out-of-tolerance manufacturing defect affected one of the processors? Fault-tolerant MPI is still leading-edge research, because although we've never needed it before, at exascale and above the predicted mean time between hardware failures on some node goes down to hours.

One of the reasons UFAI could be such an instant danger is the current ubiquitous nature of exploitable bugs on networked computers... yet "how do we write even simple high-performance software without exploitable bugs" seems to be both a much more popular research problem than, and a prerequisite to, "how do we write an FAI", and it's not yet solved.

Comment author: jsteinhardt 03 August 2013 08:54:55PM 1 point [-]

I'm skeptical that we'll prove P!=NP or even find a provably secure encryption scheme before making the world's economy dependent on unproven schemes, etc.

Nitpick, but finding a provably secure encryption scheme is harder than proving P!=NP, since if P=NP then no secure encryption scheme can exist.

Comment author: Kawoomba 03 August 2013 09:20:34PM 0 points [-]

if P=NP then no [provably] secure encryption scheme can exist.

What? Why? Just because RSA would be broken? Shor's algorithm would also do so, even in a proven P!=NP world. There may be other substitutes for RSA, using different complexity classes. There are other approaches altogether. Not to mention one-time pads.

Comment author: gwern 03 August 2013 10:32:09PM 3 points [-]

As I understand it, if P=NP in a practical sense, then almost all cryptography is destroyed as P=NP destroys one-way functions & secure hashes in general. So RSA goes down, many quantum-proof systems go down, and so on and so forth, and you're left with basically just http://en.wikipedia.org/wiki/Information-theoretic_security

http://www.karlin.mff.cuni.cz/~krajicek/ri5svetu.pdf discusses some of this.

Not to mention one-time pads.

And life was so happy with just one-time pads?
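
In sketch form, the standard argument behind "P=NP destroys one-way functions" (informal, and eliding the search-to-decision reduction for NP):

```latex
% Informal sketch of the standard argument. Inverting a polynomial-time f
% is an NP search problem: a preimage x' with f(x') = y is a witness that
% can be checked in polynomial time by computing f(x'). Hence:
\[
  \mathrm{P} = \mathrm{NP}
  \;\Longrightarrow\; \text{no one-way functions}
  \;\Longrightarrow\; \text{no computationally secure encryption, PRGs, or hashes.}
\]
% Contrapositively, a provably secure (computational) scheme would yield a
% proof that P != NP, which is jsteinhardt's nitpick. Information-theoretic
% schemes such as the one-time pad are untouched, which is Kawoomba's point.
```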

Comment author: Kawoomba 03 August 2013 11:35:42PM 1 point [-]

Really, if P=NP, then encrypting your messages would be quite low on the priority list ... however, we're not debating the practical impact here, but the claim that "finding a provably secure encryption scheme is harder than proving P!=NP", which was raised as a nitpick, and is clearly not the case.

Happiness or unhappiness of life with one-time pads notwithstanding.

Comment author: shminux 02 August 2013 09:48:45PM *  0 points [-]

Suppose some new rich sponsor wanted to donate a lot to MIRI, subject to an independent outside group of experts evaluating the merits of some of its core claims, like that AGI is a near-term (under 100 years) x-risk and that MIRI has non-negligible odds (say, a few percent or more) of mitigating it. Who would you suggest s/he engage for review?

Comment author: CarlShulman 02 August 2013 10:13:33PM *  3 points [-]

like that AGI is a near-term (under 100 years) x-risk

FHI sent a survey to the top 100 most-cited authors in AI and got a response rate of ~1/3, and the median estimates backed this (although this needs to be checked for response bias). Results will be published in September at PT-AI.

x-risk and that MIRI has non-negligible odds (say, a few percent or more) of mitigating it.

I.e. a probability of a few percent that there is AI risk, MIRI solves it, and otherwise it wouldn't have been solved and existential catastrophe would have resulted? That would not happen with non-gerrymandered criteria for the expert group.

But if such a credible group did deliver that result believably, then one could go to Gates or Buffett (who has spent hundreds of millions on nuclear-risk efforts with much lower probability of averting nuclear war) or to national governments, and get billions in funding. All the work in that scenario is coming from the independent panel concluding the thing is many orders of magnitude better than almost any alternative use of spending, way past the threshold for funding.

The rich guy who says he would donate based on it is an irrelevancy in the hypothetical.

Comment author: Eliezer_Yudkowsky 02 August 2013 09:58:21PM 3 points [-]

Damned if I know. Oddly enough, anyone who chooses to spend a bunch of their life becoming an expert on these issues tends to be sympathetic to the claims, and most random others tend to make up crap on the spot and stick with it. If they could manage to pay Peter Norvig enough money to spend a lot of time working through these issues I'd be pretty optimistic, but Peter Norvig works for Google and would be hard to pay sufficiently.

Comment author: drethelin 02 August 2013 10:55:24PM 1 point [-]

Do you guys deliberately go out of your way to evangelize to Jaan Tallinn and Thiel, or is that source of funds a lucky break?

Comment author: lukeprog 03 August 2013 09:58:22PM *  7 points [-]

I agree with Eliezer that the main difficulty is in getting top-quality, relatively rational people to spend hundreds of hours being educated, working through the arguments, etc.

Jaan has done a surprising amount of that and also read most or all of the Sequences. Thiel has not yet decided to put in that kind of time.

Here's a list of people I'd want on that committee if they were willing to put in hundreds of hours catching up and working through the arguments with us: Scott Aaronson, Peter Norvig, Stuart Russell, Michael Nielsen.

I'd probably be able to add lots more names to that list if I could afford to spend more time becoming familiar with the epistemic standards and philosophical sophistication of more high-status CS people. I would trust Carl Shulman, Paul Christiano, Jacob Steinhardt, and a short list of others to add to my list with relatively little personal double-checking from me.

But yeah; the main problem seems to me that I don't know how to get 400 hours of Andrew Ng's time.

Although with Ng in particular it might not take 400 hours. When Louie and I met with him in Nov. '12 he seemed to think AI was almost certainly a century or more away, but by May '13 (after getting to do his deep learning work on Google's massive server clusters for a few months) he changed his tune, saying "It gives me hope -- no, more than hope -- that we might be able to [build AGI]... We clearly don't have the right algorithms yet. It's going to take decades. This is not going to be an easy one, but I think there's hope." (On the other hand, maybe he just made himself sound more optimistic than he anticipates inside because he was giving a public interview on behalf of pro-AI Google.)

Comment author: drethelin 04 August 2013 05:14:55AM 3 points [-]

This is a great answer, but actually a little tangential to my question; sorry for being vague. Mine was actually about the part of shminux's proposal that involved finding potential mega-donors. Relatedly, how much convincing do you think it would take to get Tallinn or Thiel to increase their donations by an order of magnitude, something they could easily afford? This seems like a relatively high-leverage plan if you can swing it. With X million dollars you can afford to actually pay to hire people like Google can, if on a much smaller scale.

Comment author: wedrifid 02 August 2013 01:56:38PM 1 point [-]

Success is defined most obviously as actually constructing an FAI, but of course if our work were picked up elsewhere and reused after MIRI itself died as an organization for whatever reason,

(Or if it were picked up elsewhere and MIRI is merely overtaken. Dying isn't necessary.)