Followup to: Engelbart: Insufficiently Recursive

The computer revolution had cascades and insights aplenty.  Computer tools are routinely used to create tools, from using a C compiler to write a Python interpreter, to using theorem-proving software to help design computer chips.  I would not yet rate computers as being very deeply recursive - I don't think they've improved our own thinking processes even so much as the Scientific Revolution did.  But some of the ways that computers are used to improve computers verge on being repeatable (cyclic).

Yet no individual, no localized group, nor even country, managed to get a sustained advantage in computing power, compound the interest on cascades, and take over the world.  There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2.  In computing there was no equivalent of "We've just crossed the sharp threshold of criticality, and now our pile doubles its neutron output every two minutes, so we can produce lots of plutonium and you can't."

Will the development of nanotechnology go the same way as computers - a smooth, steady developmental curve spread across many countries, no one project taking into itself a substantial fraction of the world's whole progress?  Will it be more like the Manhattan Project, one country gaining a (temporary?) huge advantage at huge cost?  Or could a small group with an initial advantage cascade and outrun the world?

Just to make it clear why we might worry about this for nanotech, rather than, say, car manufacturing - if you can build things from atoms, then the environment contains an unlimited supply of perfectly machined spare parts.  If your molecular factory can build solar cells, it can acquire energy as well.

So full-fledged Drexlerian molecular nanotechnology can plausibly automate away much of the manufacturing in its material supply chain.  If you already have nanotech, you may not need to consult the outside economy for inputs of energy or raw material.

This makes it more plausible that a nanotech group could localize off, and do its own compound interest, away from the global economy.  If you're Douglas Engelbart building better software, you still need to consult Intel for the hardware that runs your software, and the electric company for the electricity that powers your hardware.  It would be a considerable expense to build your own chip fab (one that makes chips as good as Intel's) and your own power station (one that supplies electricity as cheaply as the utility company).

It's not just that this tends to entangle you with the fortunes of your trade partners, but also that - as an UberTool Corp keeping your trade secrets in-house - you can't improve the hardware you get, or drive down the cost of electricity, as long as these things are done outside.  Your cascades can only go through what you do locally, so the more you do locally, the more likely you are to get a compound interest advantage.  (Mind you, I don't think Engelbart could have gone FOOM even if he'd made his chips locally and supplied himself with electrical power - I just don't think the compound advantage on using computers to make computers is powerful enough to sustain k > 1.)

In general, the more capabilities are localized into one place, the less people will depend on their trade partners, the more they can cascade locally (apply their improvements to yield further improvements), and the more a "critical cascade" / FOOM sounds plausible.
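To make the compound-interest intuition concrete, here is a toy model - every number in it is an illustrative assumption, nothing more:

```python
# Toy model of a "critical cascade": each improvement cycle multiplies
# capability by an effective factor k_eff, where the underlying return k
# is discounted by the fraction of the improvement loop you control locally.
# All parameter values are illustrative assumptions, not estimates.

def cascade(k: float, local_fraction: float, cycles: int) -> float:
    """Capability after `cycles` rounds of reinvesting improvements."""
    k_eff = k * local_fraction
    capability = 1.0
    for _ in range(cycles):
        capability *= k_eff
    return capability

# Same underlying k = 1.2; only the locally controlled fraction differs.
print(cascade(1.2, 0.5, 10))   # ~0.006: k_eff = 0.6 < 1, the cascade fizzles
print(cascade(1.2, 1.0, 10))   # ~6.2:  k_eff = 1.2 > 1, it compounds
```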

Yet self-replicating nanotech is a very advanced capability.  You don't get it right off the bat.  Sure, lots of biological stuff has this capability, but this is a misleading coincidence - it's not that self-replication is easy, but that evolution, for its own alien reasons, tends to build it into everything.  (Even individual cells, which is ridiculous.)

In the run-up to nanotechnology, it seems not implausible to suppose a continuation of the modern world.  Today, many different labs work on small pieces of nanotechnology - fortunes entangled with their trade partners, and much of their research velocity coming from advances in other laboratories.  Current nanotech labs are dependent on the outside world for computers, equipment, science, electricity, and food; any single lab works on a small fraction of the puzzle, and contributes small fractions of the progress.

In short, so far nanotech is going just the same way as computing.

But it is a tad premature - I would even say that it crosses the line into the "silly" species of futurism - to exhale a sigh of relief and say, "Ah, that settles it - no need to consider any further."

We all know how exponential multiplication works:  1 microscopic nanofactory, 2 microscopic nanofactories, 4 microscopic nanofactories... let's say there are 100 different groups working on self-replicating nanotechnology and one of those groups succeeds one week earlier than the others.  Rob Freitas has calculated that some species of replibots could spread through the Earth in 2 days (even given what seem to me like highly conservative assumptions in a context where conservatism is not appropriate).
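The arithmetic behind that kind of estimate is just repeated doubling.  Here is the back-of-envelope version - the doubling time and target count are my own illustrative assumptions, not Freitas's actual figures:

```python
import math

# Repeated doubling from one microscopic nanofactory to planetary scale.
# Both parameters are illustrative assumptions, not Freitas's model.
doubling_time_minutes = 100   # hypothetical replication time per generation
target_count = 1e30           # rough order of magnitude for "everywhere"

doublings = math.log2(target_count)                    # ~100 generations
total_days = doublings * doubling_time_minutes / (60 * 24)
print(f"{doublings:.0f} doublings -> {total_days:.1f} days")  # ~7 days
```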

So, even if the race seems very tight, whichever group gets replibots first can take over the world given a mere week's lead time -

Yet wait!  Just having replibots doesn't let you take over the world.  You need fusion weapons, or surveillance bacteria, or some other way to actually govern.  That's a lot of matterware - a lot of design and engineering work.  A replibot advantage doesn't equate to a weapons advantage, unless, somehow, the planetary economy has already published the open-source details of fully debugged weapons that you can build with your newfound private replibots.  Otherwise, a lead time of one week might not be anywhere near enough.

Even more importantly - "self-replication" is not a binary, 0-or-1 attribute.  Things can be partially self-replicating.  You can have something that manufactures 25% of itself, 50% of itself, 90% of itself, or 99% of itself - but still needs one last expensive computer chip to complete the set.  So if you have twenty-five countries racing, sharing some of their results and withholding others, there isn't one morning where you wake up and find that one country has self-replication.
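A quick way to see why the fraction matters: if a bot manufactures a fraction f of itself, each new bot still requires (1 - f) worth of external inputs, so 90% and 99% closure differ by a factor of ten in outside dependence - yet neither is the 0-or-1 self-replication of the naive story.  A sketch:

```python
# External input needed per new bot, as a function of the fraction f
# of itself the bot can manufacture. Purely illustrative.
for f in (0.25, 0.50, 0.90, 0.99):
    print(f"manufactures {f:.0%} of itself -> needs {1 - f:.0%} external input")
# 90% -> 10% external; 99% -> 1% external: a tenfold drop in dependence,
# but still not "wake up one morning and find one country has self-replication".
```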

Bots become successively easier to manufacture; the factories get successively cheaper.  By the time one country has bots that manufacture themselves from environmental materials, many other countries have bots that manufacture themselves from feedstock.  By the time one country has bots that manufacture themselves entirely from feedstock, other countries have produced some bots using assembly lines.  The nations also have all their old conventional arsenal, such as intercontinental missiles tipped with thermonuclear weapons, and these have deterrent effects against crude nanotechnology.  No one ever gets a discontinuous military advantage, and the world is safe.  (?)

At this point, I do feel obliged to recall the notion of "burdensome details", that we're spinning a story out of many conjunctive details, any one of which could go wrong.  This is not an argument in favor of anything in particular, just a reminder not to be seduced by stories that are too specific.  When I contemplate the sheer raw power of nanotechnology, I don't feel confident that the fabric of society can even survive the sufficiently plausible prospect of its near-term arrival.  If your intelligence estimate says that Russia (the new belligerent Russia under Putin) is going to get self-replicating nanotechnology in a year, what does that do to Mutual Assured Destruction?  What if Russia makes a similar intelligence assessment of the US?  What happens to the capital markets?  I can't even foresee how our world will react to the prospect of the various nanotechnological capabilities as they successively promise to arrive in the near future.  Let alone envision how society would actually change as full-fledged molecular nanotechnology was developed, even if it were developed gradually...

...but I suppose the Victorians might say the same thing about nuclear weapons or computers, and yet we still have a global economy - one that's actually a lot more interdependent than theirs, thanks to nuclear weapons making small wars less attractive, and computers helping to coordinate trade.

I'm willing to believe in the possibility of a smooth, gradual ascent to nanotechnology, so that no one state - let alone any corporation or small group - ever gets a discontinuous advantage.

The main reason I'm willing to believe this is because of the difficulties of design and engineering, even after all manufacturing is solved.  When I read Drexler's Nanosystems, I thought:  "Drexler uses properly conservative assumptions everywhere I can see, except in one place - debugging.  He assumes that any failed component fails visibly, immediately, and without side effects; this is not conservative."

In principle, we have complete control of our computers - every bit and byte is under human command - and yet it still takes an immense amount of engineering work on top of that to make the bits do what we want.  This, and not any difficulties of manufacturing things once they are designed, is what takes an international supply chain of millions of programmers.

But we're still not out of the woods.

Suppose that, by a providentially incremental and distributed process, we arrive at a world of full-scale molecular nanotechnology - a world where designs, if not finished material goods, are traded among parties, in a global economy large enough that no one actor, or even any one state, is doing more than a fraction of the total engineering.

It would be a very different world, I expect; and it's possible that my essay may have already degenerated into nonsense.  But even if we still have a global economy after getting this far - then we're still not out of the woods.

Remember those ems?  The emulated humans-on-a-chip?  The uploads?

Suppose that, with molecular nanotechnology already in place, there's an international race for reliable uploading - with some results shared, and some results private - with many state and some nonstate actors.

And suppose the race is so tight, that the first state to develop working researchers-on-a-chip, only has a one-day lead time over the other actors.

That is - one day before anyone else, they develop uploads sufficiently undamaged, or capable of sufficient recovery, that the ems can carry out research and development.  In the domain of, say, uploading.

There are other teams working on the problem, but their uploads are still a little off, suffering seizures and having memory faults and generally having their cognition degraded to the point of not being able to contribute.  (NOTE:  I think this whole future is a wrong turn and we should stay away from it; I am not endorsing this.)

But this one team, though - their uploads still have a few problems, but they're at least sane enough and smart enough to start... fixing their problems themselves?

If there's already full-scale nanotechnology around when this happens, then even with some inefficiency built in, the first uploads may be running at ten thousand times human speed.  Nanocomputers are powerful stuff.

And in an hour, or around a year of internal time, the ems may be able to upgrade themselves to a hundred thousand times human speed, and fix some of the remaining problems.

And in another hour, or ten years of internal time, the ems may be able to get the factor up to a million times human speed, and start working on intelligence enhancement...
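The conversion behind those figures is just wall-clock time multiplied by the assumed speedup factor:

```python
# Subjective time available to an upload per hour of wall-clock time,
# at the (assumed) speedup factors used in the scenario above.
HOURS_PER_YEAR = 24 * 365

for speedup in (1e4, 1e5, 1e6):
    years_inside = speedup / HOURS_PER_YEAR
    print(f"{speedup:.0e}x: 1 hour outside = {years_inside:.1f} years inside")
# 1e4x -> ~1.1 years; 1e5x -> ~11.4 years; 1e6x -> ~114 years.
```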

One could, of course, voluntarily publish the improved-upload protocols to the world, and give everyone else a chance to join in.  But you'd have to trust that not a single one of your partners was holding back a trick that lets them run uploads at ten times your own maximum speed (once the bugs were out of the process).  That kind of advantage could snowball quite a lot, in the first sidereal day.

Now, if uploads are gradually developed at a time when computers are too slow to run them quickly - meaning, before molecular nanotech and nanofactories come along - then this whole scenario is averted; the first high-fidelity uploads, running at a hundredth of human speed, will grant no special advantage.  (Assuming that no one is pulling any spectacular snowballing tricks with intelligence enhancement - but they would have to snowball fast and hard, to confer advantage on a small group running at low speeds.  The same could be said of brain-computer interfaces, developed before or after nanotechnology, if running in a small group at merely human speeds.  I would credit their world takeover, but I suspect Robin Hanson wouldn't at this point.)

Now, I don't really believe in any of this - this whole scenario, this whole world I'm depicting.  In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world.  But that's a separate issue.  And this whole world seems too much like our own, after too much technological change, to be realistic to me.  World government with an insuperable advantage?  Ubiquitous surveillance?  I don't like the ideas, but both of them would change the game dramatically...

But the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable.

If you sent a modern computer back in time to 1950 - containing many modern software tools in compiled form, but no future history or declaratively stored future science - I would guess that the recipient could not use it to take over the world.  Even if the USSR got it.  Our computing industry is a very powerful thing, but it relies on a supply chain of chip factories.

If someone got a future nanofactory with a library of future nanotech applications - including designs for things like fusion power generators and surveillance bacteria - they might really be able to take over the world.  The nanofactory swallows its own supply chain; it incorporates replication within itself.  If the owner fails, it won't be for lack of factories.  It will be for lack of ability to develop new matterware fast enough, and apply existing matterware fast enough, to take over the world.

I'm not saying that nanotech will appear from nowhere with a library of designs - just making a point about concentrated power and the instability it implies.

Think of all the tech news that you hear about once - say, an article on Slashdot about yada yada 50% improved battery technology - and then you never hear about again, because it was too expensive or too difficult to manufacture.

Now imagine a world where the news of a 50% improved battery technology comes down the wire, and the head of some country's defense agency is sitting down across from engineers and intelligence officers and saying, "We have five minutes before all of our rival's weapons are adapted to incorporate this new technology; how does that affect our balance of power?"  Imagine that happening as often as "amazing breakthrough" articles appear on Slashdot.

I don't mean to doomsay - the Victorians would probably be pretty surprised we haven't blown up the world with our ten-minute ICBMs, but we don't live in their world - well, maybe doomsay just a little - but the point is:  It's less stable.  Improvements cascade faster once you've swallowed your manufacturing supply chain.

And if you sent back in time a single nanofactory, and a single upload living inside it - then the world might end in five minutes or so, as we bios measure time.

The point being, not that an upload will suddenly appear, but that now you've swallowed your supply chain and your R&D chain.

And so this world is correspondingly more unstable, even if all the actors start out in roughly the same place.  Suppose a state manages to get one of those Slashdot-like technology improvements - only this one lets uploads think 50% faster - and they get it fifty minutes before anyone else, at a point where uploads are running ten thousand times as fast as humans (50 mins = ~1 subjective year) - and in that extra year, the uploads manage to find another couple of 50% improvements...
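Spelled out, with the same illustrative numbers:

```python
# The fifty-minute lead, converted to subjective time. Illustrative numbers.
speedup = 1e4                         # uploads at 10,000x human speed
lead_minutes = 50
lead_years_inside = lead_minutes * speedup / (60 * 24 * 365)
print(f"{lead_minutes} min lead = {lead_years_inside:.2f} subjective years")  # ~0.95

# If that head start yields two further 50% speed improvements before
# anyone else catches up, the research-velocity gap compounds:
print(f"velocity advantage so far: {1.5 ** 2:.2f}x")  # 2.25x, and growing
```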

Now, you can suppose that all the actors are all trading all of their advantages and holding nothing back, so everyone stays nicely synchronized.

Or you can suppose that enough trading is going on, that most of the research any group benefits from comes from outside that group, and so a 50% advantage for a local group doesn't cascade much.

But again, that's not the point.  The point is that in modern times, with the modern computing industry, where commercializing an advance requires building a new computer factory, a bright idea that has gotten as far as showing a 50% improvement in the laboratory is merely one more article on Slashdot.

If everything could instantly be rebuilt via nanotech, that laboratory demonstration could precipitate an instant international military crisis.

And if there are uploads around, so that a cute little 50% advancement in a certain kind of hardware, recurses back to imply 50% greater speed at all future research - then this Slashdot article could become the key to world domination.

As systems get more self-swallowing, they cascade harder; and even if all actors start out equivalent, races between them get much more unstable.  I'm not claiming it's impossible for that world to be stable.  The Victorians might have thought that about ICBMs.  But that subjunctive world contains additional instability compared to our own, and would need additional centripetal forces to end up as stable as our own.

I expect Robin to disagree with some part of this essay, but I'm not sure which part or how.

There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2.

Only if you completely ignore Colossus - the computer whose impact on the war was so great that the British destroyed the machines afterwards rather than risk them falling into enemy hands.

"By the end of the war, 10 of the computers had been built for the British War Department, and they played an extremely significant role in the defeat of Nazi Germany, by virtually eliminating the ability of German Admiral Durnetz to sink American convoys, by undermining German General Irwin Rommel in Northern Africa, and by confusing the Nazis about exactly where the American Invasion at Normandy France, was actually going to take place."

I.e., 10 computers rendered the entire German navy essentially worthless. I'd call that a 'supreme advantage' in naval military terms.

http://www.acsa2000.net/a_computer_saved_the_world.htm

"The Colossus played a crucial role in D-Day. By understanding where the Germans had the bulk of their troops, the Allies could decide which beaches to storm and what misinformation to spread to keep the landings a surprise."

http://kessler.blogs.nytimes.com/tag/eniac/

Sure, it didn't blow people up into little bits like an atomic bomb, but who cares? It stopped OUR guys getting blown up into little bits, and also devastated the opposing side's military intelligence and command/control worldwide. It's rather difficult to measure the lives that weren't killed, and the starvation and undersupply that didn't happen.

Arguably, algorithmic approaches had a war-winning level of influence even earlier:

http://en.wikipedia.org/wiki/Zimmermann_Telegram

Anonymous.

So nanotechnology can plausibly automate away much of the manufacturing in its material supply chain. If you already have nanotech, you may not need to consult the outside economy for inputs of energy or raw material.

Why would you not make use of resources from the outside world?

IMO, the issue in this area is with folks like Google - who take from the rest of the world, but don't contribute everything they build back again - so they develop their own self-improving ecosystem that those outside the company have no access to. Do that faster than your competitors in a suitably-diverse range of fields and eventually you find yourself getting further and further ahead - at least until the monopolies commission takes notice.

Well, 6 years later, Google is in everything from self-driving cars to thermostats. You might just be right.

Apparently no.

You might already have addressed this, but it seems to me that you have an underlying assumption that potential intelligence/optimization power is unbounded. Given what we currently know of the rules of the universe: the speed of light, the second law of thermodynamics, Amdahl's law etc., this does not seem at all obvious to me.

Of course, the true upper limit might be much higher than current human intelligence. But if there exists any upper bound, it should influence the "FOOM" scenario. Then a 30-minute head start would only mean arriving at the upper bound 30 minutes earlier.

If a self-replicating microbot has the same computing power as a 2020 computer chip half its size, and if it can get energy from sugar/oil while transforming soil into copies of itself, modular mobile supercomputers of staggering ability could be built from these machines very quickly at extremely low cost. Due to Amdahl's law and the rise of GP-GPUs, not to mention deep learning, there has already been a lot of research into parallelizing various tasks that were once done serially, and this can be expected to continue.

But also, I would guess that a self-replicating nanofabricator that can build arbitrary molecules at the atomic scale will have the ability to produce computer chips that are much more efficient than today's chips because it will be able to create smaller features. It should also be possible to decrease power consumption by building more efficient transistors. And IIUC quantum physics doesn't put any bound on the amount of computation that can be performed with a unit of energy, so there's lots of room for improvement there too.

How much of current R&D time is humans thinking, and how much is compiling projects, running computer simulations or doing physical experiments?

E.g., would having faster-than-human-speed uploads speed up getting results from the LHC by the ratio of their speed to ours?

Also, do you have any FLOPS-per-cubic-centimeter estimates for nanocomputers? I looked at this briefly, and I couldn't find anything. It references a previous page that I can't find.

Why is Eliezer assuming that sustainable cycles of self-improvement are necessary in order to build an UberTool that will take over most industries? The Japanese Fifth Generation Computing Project was a credible attempt to build such an UberTool, but it did not much rely on recursive self-improvement (apart from such things as using current computer systems to design next-generation electronics). Contrary to common misconceptions, it did not even rely on human level AI, let alone superhuman intelligence.

If this was a credible project (check the contemporary literature and you'll find extensive discussions about its political implications and the like), why not Douglas Engelbart's set of tools?

Well at long last you finally seem to be laying out the heart of your argument. Dare I hope that we can conclude our discussion by focusing on these issues, or are there yet more layers to this onion?

Will

"Also do you have some FLOPS per cubic centimeter estimations for nanocomputers? I looked at this briefly, and I couldn't find anything. It references a previous page that I can't find."

FLOPS are not a good measure of computing performance, since floating-point calculations are only one small aspect of what computers have to do. Further, the term "nanocomputers" as used is misleading, since all of today's processors could be classified as nanocomputers - the current ones using the 45nm process and moving to the 32nm process.

Eliezer

"Just to make it clear why we might worry about this for nanotech, rather than say car manufacturing - if you can build things from atoms, then the environment contains an unlimited supply of perfectly machined spare parts. If your moleculary factory can build solar cells, it can acquire energy as well."

Ignoring the other obvious issues in your post, this is of course not true. One cannot just bond any atom to any atom and have something useful; this is well known. I would also like to point out that everyone tosses around the term nano, including the Foresight Institute, but the label has been so abused by projects that don't deserve it that it seems a bit meaningless.

The other issue is the concept, which you seem to imply, that we will build everything from atoms in the future. This is of course silly, since building a 747 from atoms up is much harder than just doing it the way we do it now. Nano engineering has to be applied to the right aspects to make it useful.

"I don't think they've improved our own thinking processes even so much as the Scientific Revolution - yet. But some of the ways that computers are used to improve computers, verge on being repeatable (cyclic)."

This is not true either: current computers are designed using the previous generation. If we look at how things are done on current processors compared with how they used to be done, we see large improvements. The computing industry has made huge leaps forward since the early days.

Finally, I have trouble with the assumption that once we have advanced nanotech (whatever that means), we will all of a sudden have access to tremendously more computing power. Nanotech as such will not do this; regardless of whether we ever have molecular manufacturing, we will have 16nm processors in a few years. Computing power should continue to follow Moore's law until processor components are measured in angstroms. That being the case, the computing power to run the average estimates of the human brain's computational power already exists - the IBM Roadrunner system is one example. The current issue is the software: there is no end to possible hardware improvement, but unless software keeps pace, who cares?

By nanocomputer I meant rod-logic or whatever the state of the art in hypothetical computing is. I want to see how it compares to standard computing.

I think the lure of nanocomputing is supposed to be low power consumption and the easy 3D stackability that entails. It is not sufficient to have small components if they are laid out in 2D and you can't pack too many together without overheating.

Some numbers would be nice though.

An interesting modern analogy is the invention of the CDO in finance.

Its development led to a complete change of the rules of the game.

If you had asked a bank manager 100 years ago to envisage ultimate consequences assuming the availability of a formula/spreadsheet for splitting up losses over a group of financial assets, so there was a 'risky' tier and a 'safe' tier, etc., I doubt they would have said 'The end of the American Financial Empire'.

Nonetheless it happened. The ability to sell tranches of debt at arbitrary risk levels led to the banks lending more. That led to mortgages becoming more easily available. That led to dedicated agents making commission from the sheer volume of lending that became possible. That led to reduction of lending standards, more agents, more lending. That led to higher profits which had to be maintained to keep shareholders happy. That led to increased use of CDOs, more agents, more lending, lower standards... a housing boom... which led to more lending... which led to excessive spending... which has left the US over-borrowed and talking about the second great depression.

etc.

It's not quite the FOOM Eliezer talks about, but it's a useful example of the laws of unintended consequences.

Anonymous.

Robin: Well at long last you finally seem to be laying out the heart of your argument. Dare I hope that we can conclude our discussion by focusing on these issues, or are there yet more layers to this onion?

It takes two people to make a disagreement; I don't know what the heart of my argument is from your perspective!

This essay treats the simpler and less worrisome case of nanotech. Quickie preview of AI:

When you upgrade to AI there are harder faster cascades because the development idiom is even more recursive, and there is an overhang of hardware capability we don't understand how to use;

There are probably larger development gaps between projects due to a larger role for insights;

There are more barriers to trade between AIs, because of the differences of cognitive architecture - different AGI projects have far less in common today than nanotech projects, and there is very little sharing of cognitive content even in ordinary AI;

Even if AIs trade improvements among themselves, there's a huge barrier to applying those improvements to human brains, uncrossable short of very advanced technology for uploading and extreme upgrading;

So even if many unFriendly AI projects are developmentally synchronized and mutually trading, they may come to their own compromise, do a synchronized takeoff, and eat the biosphere; without caring for humanity, humane values, or any sort of existence for themselves that we regard as worthwhile...

But I don't know if you regard any of that as the important part of the argument, or if the key issue in our disagreement happens to be already displayed here. If it's here, we should resolve it here, because nanotech is much easier to understand.

"In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."

indeed.

There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2.

Did atomic bombs give the US "a supreme military advantage" at the end of WW2?

If Japan had got the bomb in late 1945 instead of the US, could it have conquered the world? Or Panama, if it were the sole nuclear power in 1945?

If not, then did possession of the bomb give "a supreme military advantage"?

If Japan had had the bomb when we did - with us in their position research-wise - and in the numbers we had, they could easily have done a number on our navy, thus converting certain imminent overwhelming defeat into... uncertain, non-immediate overwhelming defeat. Simply on account of our wiping out everything on mainland Japan - we already had them in checkmate.

If they'd gotten this in 1943, though, things would have been... rather different. It's difficult to say what they couldn't have done.

Panama... well, they'd certainly have a local supreme military advantage. No one at all would go after them. There were probably too few Panamanians with too little delivery capability to take over the whole world.

The very limitations of these analogies amplify Eliezer's points - swallowing your supply chain makes you care less about the annihilation of your industrial infrastructure. Gray goo doesn't need occupation troops, and it can deliver itself.

"In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."

If you believe this you should be in favor of slowing down AI research and speeding up work on enhancing human intelligence. The smarter we are, the more likely we are to figure out Friendly AI before we have true AI.

Also, if you really believe this shouldn't you want the CIA to start assassinating AI programmers?

I can accelerate the working-out of FAI theory by applying my own efforts and by recruiting others. Messing with macro tech developmental forces to slow other people down doesn't seem to me to be something readily subject to my own decision.

I don't trust that human intelligence enhancement can beat AI of either sort into play - it seems to me to be running far behind at the moment. So I'm not willing to slow down and wait for it.

Regarding the CIA thing, I have ethics.

It's worth noting that even if you consider, say, gentle persuasion, in a right-tail problem, eliminating 90% of the researchers doesn't get you 10 times as much time, just one standard deviation's worth of time or whatever.
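(A quick Monte Carlo makes the right-tail point concrete - assuming, purely for illustration, that project completion times are roughly normally distributed:)

```python
import random

# If completion times are ~normal, the earliest of N projects finishes a few
# sigmas into the left tail; eliminating 90% of the projects moves that
# point by well under one sigma. Distribution assumed for illustration.
random.seed(0)

def mean_earliest(n_projects: int, trials: int = 2000) -> float:
    """Average finish time (in sigmas) of the earliest of n projects."""
    return sum(min(random.gauss(0.0, 1.0) for _ in range(n_projects))
               for _ in range(trials)) / trials

print(mean_earliest(1000))  # ~ -3.2 sigma: earliest of 1000 projects
print(mean_earliest(100))   # ~ -2.5 sigma: earliest after 90% are gone
# Cutting 90% of the field buys ~0.7 sigma of time, nowhere near 10x.
```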

The sort of theory that goes into hacking up an unFriendly AI and the sort of theory that goes into Friendly AI are pretty distinct as subjects.

In your one upload team a day ahead scenario, by "full-scale nanotech" you apparently mean oriented around very local production. That is, they don't suffer much efficiency reduction by building everything themselves on-site via completely automated production. The overall efficiency of this tech with available cheap feedstocks allows a doubling time of much less than one day. And in much less than a day this tech plus feedstocks cheaply available to this one team allow it to create more upload-equivalents (scaled by speedups) than all the other teams put together. Do I understand you right?

As I understand nanocomputers, it shouldn't really take all that much nanocomputer material to run more uploads than a bunch of bios - like, a cubic meter of nanocomputers total, and a megawatt of electricity, or something like that. The key point is that you have such-and-such amount of nanocomputers available - it's not a focus on material production per se.

Also, bear in mind that I already acknowledged that you could have a slow runup to uploading such that there's no hardware overhang when the very first uploads capable of doing their own research are developed - the one-day lead and the fifty-minute lead are two different scenarios above.

EY: "In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."

I'm not convinced that any realistic amount of computing power will let you "brute force" AI. If you've written a plausibility argument for this, then do link me...

Of course, the true upper limit might be much higher than current human intelligence. But if there exists any upper bound, it should influence the "FOOM" scenario. Then a 30-minute head start would only mean arriving at the upper bound 30 minutes earlier.

Rasmus Faber: plausible upper limits for the ability of intelligent beings include such things as destroying galaxies and creating private universes.

What stops an Ultimate Intelligence from simply turning the Earth (and each competitor) into a black hole in those 30 minutes of nigh-omnipotence? Even a very weak intelligence could do things like just analyze the OS being used by the rival researchers and break in. Did they keep no backups? Oops; game over, man, game over. Did they keep backups? Great, but now the intelligence has just bought itself a good fraction of an hour (it just takes time to transfer large amounts of data). Maybe even more, depending on how untried and manual their backup system is. And so on.

Interesting article. I would add two other reasons why replication is not a binary phenomenon:

  1. The speed of replication. I somehow imagine that nanotech will be able to replicate itself very fast (in a matter of minutes), but that may not be the case; there may be reasons (like the need to store enough energy before doing some operations, with the energy only arriving slowly) which would make it much slower.

  2. Most importantly, the conditions in which the nanobot can self-replicate. Between a nanobot able to replicate itself under very carefully controlled conditions of temperature, humidity, light exposure, and concentration of the supplies of the different atoms it requires, and one able to reproduce itself in the rainforest, the crater of a volcano, or the moon (not even speaking of deep space), there is a large margin - both in terms of the breakthroughs required to make the bot, and in terms of the advantage gained. I would give a high confidence level (above 90%) that the first fully self-replicating nanobot will only replicate under carefully controlled conditions, not in the "wild".

On another topic, I don't agree with "thanks to nuclear weapons making small wars less attractive". Nuclear weapons made big wars (direct clashes of major powers, or world wars) much less attractive, but they didn't make "small" wars less attractive. Quite the opposite: they made the major powers struggle with each other by fighting "small wars" in foreign countries, and they didn't (at least not significantly) lower the number of "small wars" between non-major countries.