In this essay I argue the following:

Brain emulation requires enormous computing power; enormous computing power requires further progression of Moore's law; further progression of Moore's law relies on large-scale production of cheap processors in ever more advanced chip fabs; cutting-edge chip fabs are both expensive and vulnerable to state actors (but not to non-state actors such as terrorists). Therefore: the advent of brain emulation can be delayed by global regulation of chip fabs.

Full essay: http://www.gwern.net/Slowing%20Moore%27s%20Law


Predictions that improvements in manufacturing will lead to lower prices are made ceteris paribus; rising prices caused by a temporary disruption cannot be used to conclude that manufacturing costs have gone up until the original conditions have been restored, or shown to be unrestorable. Since R&D has largely gone on unmodified, there is no particular reason yet to expect that hard drive prices per unit capacity will be any higher in 2013, after most manufacturing facilities are restored and the market has had time to readjust, than an extrapolation made from 1990-2010 data would have predicted.
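To make the extrapolation logic concrete, here is a minimal sketch in Python. The halving time and prices below are illustrative assumptions standing in for the actual 1990-2010 fit, not measured data:

```python
# Illustrative sketch, not real data: project the pre-flood price trend
# forward and compare it with a temporarily disrupted spot price.
halving_months = 14  # assumed: price per GB halves roughly every 14 months
annual_factor = 0.5 ** (12 / halving_months)  # multiplicative decline per year

price_2011 = 0.05                  # $/GB just before the floods (assumed)
extrapolated_2013 = price_2011 * annual_factor ** 2
disrupted_2012 = price_2011 * 2.0  # floods roughly doubled spot prices (assumed)

print(f"trend-extrapolated 2013 price: ${extrapolated_2013:.3f}/GB")
print(f"disrupted 2012 spot price:     ${disrupted_2012:.3f}/GB")
# The ceteris paribus point: the 2012 spike tells us nothing about the
# trend line unless it persists after manufacturing capacity is rebuilt.
```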

And the relevant question as to whether a facility is too expensive to rebuild is not the current size of firms in that business, but the expected rate of return on the capital. Sunk costs in the form of destroyed fabs will not prevent new capital from coming in to build new fabs (though the losses might bankrupt specific firms). For sabotage to actually have a long-term effect, it would have to happen regularly enough and effectively enough to significantly drive down the expected rate of return on capital invested in building fabs.
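A toy model of that last sentence, with every figure an assumption chosen for illustration: how frequent would successful sabotage have to be before the expected return on new fab capital turns negative?

```python
# Sketch: expected return on fab capital under an annual risk of destruction.
# All numbers are illustrative assumptions, not industry data.
CAPITAL = 5e9             # cost of a new fab, $
ANNUAL_CASH_FLOW = 1.2e9  # operating profit per year, $
LIFETIME = 8              # productive years of a process node

def expected_annual_return(p_destroyed: float) -> float:
    """Expected annualized return if, each year, the fab is destroyed
    with probability p_destroyed (losing all subsequent cash flows)."""
    total, survival = 0.0, 1.0
    for _ in range(LIFETIME):
        survival *= 1 - p_destroyed
        total += survival * ANNUAL_CASH_FLOW
    return (total - CAPITAL) / CAPITAL / LIFETIME

for p in (0.0, 0.05, 0.15, 0.30):
    print(f"P(destroyed)/year = {p:.2f}: "
          f"expected return = {expected_annual_return(p):+.1%}/yr")
```

Under these assumptions, occasional attacks merely dent returns; the break-even point is around a 15%-per-year chance of destruction, which is the "regularly enough and effectively enough" condition in quantitative form.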

Related (about memory chips, but probably still relevant): A 0.07-Second Power Problem at Toshiba Chip-Plant May Affect Digital Device Availability/Prices.

There was a Wall Street Journal news story (among others) this morning reporting that "there was a sudden drop in voltage that caused a 0.07-second power interruption at Toshiba's Yokkaichi memory-chip plant in Mie prefecture" causing a problem which Toshiba said will reduce its shipments of NAND flash memory by 20% for the next two months. This, the Journal article says, would translate into a 7.5% reduction in world-wide shipments through February.

NAND memory is used in everything from USB flash drives to MP3 players to digital cameras to smartphones and tablet PCs. [...]

The WSJ article also says that apparently the uninterruptible power supply system at the Toshiba plant failed when the region was hit by a drop in voltage, causing the chips being fabricated to be ruined.

http://www.evolvingexcellence.com/blog/2010/12/the-value-of-007-seconds.html:

Toshiba's troubles started early Wednesday when, according to power supplier Chubu Electric Power Co., there was a sudden drop in voltage that caused a 0.07-second power interruption [...]
gwern (4 points, 12y)
That's a very good example. I'll add that and the Philips fire as additional examples next to the floods.

Norwegian construction reports (drawing on that country's long experience with underground construction) for power stations and waste treatment plants indicate that underground construction premiums are more like 1.25x, and may even save money in the long run. These hardenings, however, are not robust against nuclear or bunker-busting bombs; protection against those requires overburdens of up to 2000 feet.

Ok, but what kind of actor are you envisioning as wanting to blow up the fabs? Nukes and bunker-busters seems to me to indicate nation-states, which - if genuinely convinced of the dangers of Moore's Law - have all kinds of other options available to them, like regulation. (Just look at what they've done to nuclear power plants...) If you decided to go terrorist and had access to a nuke, would a chip fab really be your highest-priority target?

gwern (7 points, 12y)
The usual argument made against the strategy of regulation is that the economic pressures would drive production underground, possibly literally, frustrating any attempt at regulation or enforcement. My point is that other economic pressures force consolidation into a few efficient and extremely expensive & vulnerable facilities, which can be affected by ordinary military mechanisms - hence, regulation + military-level enforcement may well work in contrast to the usual pessimism.
RolfAndreassen (1 point, 12y)
I have got to say that when you're talking about capital requirements in the billions, and construction of the sort where you use the world's largest cranes, and a need for clean-room facilities, I have a really hard time seeing how production could go underground. Explosives, drugs, and weaponised bacteria, yes; these are all technologies that were known well before 1800, you'll note. Chips? I really don't see it.

Did you perhaps mean that the fabricators will go for unregulated markets, but build openly there? Possible, but they still have to sell their products. I suggest that the usual smuggling paths are not going to be very useful here, because where is the demand for forbidden hardware? Drugs go through porous borders because they'll make money on the other side. But high-performance chips? Are you suggesting a black market of AI researchers, willing to deal with criminal syndicates if only they can get their fix of ever higher performance? And even if you are, note again the capital requirements. Any dirt farmer can set up to grow opium, and many do. If drugs required as much capital, and all in one place at that, as a modern fab does, I suggest that the War On Drugs would be over pretty quickly.

Really, hardening against nukes seems like a completely wrong approach unless you're suggesting that the likes of China would be hiding this fab from the US. For private actors the problem is in finding, not blowing up. If you have military-level enforcement, nukes (!) are just total overkill; send in a couple of platoons of infantry and be done. What are the illegal fabbers going to do, hire the Taliban as plant security? (I mean, maybe they would, and we all know how that fight would go, right?) I think you've got, not the wrong end of the stick, but the wrong stick entirely, when you start talking about nuclear hardening.
gwern (9 points, 12y)
It does seem implausible that any non-state actor could set up an underground chip fab; but someone is going to suggest just that, despite the idiocy of the suggestion when you look at what cutting-edge chip fabs entail, so I have to argue against it. With that argument knocked out of the way, the next issue is whether a state actor could set up a cutting-edge chip fab hardened or underground somehow, which is not quite so moronic and also needs to be argued against.
RolfAndreassen (0 points, 12y)
Ok, but then why not point out what I just did, which is that anyone but a state building such a thing is quite implausible? Pointing out that it takes a lot of hardening to protect against nukes just seems like a misstep. I think perhaps you need to make clearer who you envision as wanting to stop Moore's Law. Are we talking about private actors, i.e. basically terrorists with the resulting budget limitations, or states with actual armies?

A variety of links are broken: these include the link about suppression of firearms, the Cheyenne Mountain link, the 2011 Thailand flood link, and the experience curve effect. It appears that something has messed up all the links that pointed to Wikipedia.

This piece seems to be proposing a solution to something that isn't obviously the thing to worry about. There are a variety of other threats to long-term human survival that require technological improvement to combat. Three obvious issues are asteroid impacts, large-scale disease (due to modern infrastructure allowing the fast spread of otherwise localized diseases), and resource limitations (such as running out of easily accessible fossil fuels). Some of these are not directly connected to chip improvements: the Apollo program happened with mid-1960s-level technology, and it is likely that the technological barriers to dealing with an annoying asteroid or comet are not strongly connected to computer tech level. However, others are not so limited: better computers mean better treatment of disease from better drug design and detection. Similarly, more efficient chips mean less use of oil (since less energy cost for the same comput...

gwern (5 points, 12y)
This is true. I'm not claiming that ending Moore's law via regulating or attacking chip fabs would only affect brain uploads & de novo AGI, without affecting any other existential threat. The question here is whether the chip fabs are vulnerable and whether they would affect uploads, which I think I've established fairly well. It's not clear to me how the latter would go: nanotech and bioterrorism both seem to be encouraged by widespread cheap computing power, and forcing research onto highly supervised grant-paid-for supercomputers would both slow it down and make it harder for a rogue researcher (as compared to running it on his own laptop), but the slowdown in global economic growth has horrific opportunity costs involved. Hence, whether this is a strategy anyone would ever actually want to use depends on some pretty difficult utilitarian calculuses.

Yes, I've read about that. Even the contrarians admit that guns were hardly used and locally manufactured guns were far behind the local state of the art.
Thomas (0 points, 12y)
I think it is. To answer the question "What is the minimal action to avert all the near-Earth objects for a long time?", a lot of computing would be needed. And the computed answer might be: "Just send a rocket with mass M, at the time T, from the location L, in the direction D, with the speed S - and it will meet enough of those objects and redirect them for Earth to be safe at least for the next 100 years." If such a trajectory exists at all, it could be calculated with enough computing power at hand. If it doesn't exist, there is a minimal number of rockets that would suffice, and that could be calculated as well.
JoshuaZ (6 points, 12y)
Even if one had near-indefinite computing power, making such a calculation would be extremely difficult simply due to the lack of precision of observations. Small changes in the estimated size or trajectory of an asteroid can have drastic effects on its long-term behavior. Comets are even more ill-behaved. The limiting factor in such calculations would be at least as much observational as computational. Moreover, since large impacts are extremely rare threats, dealing with individual impact threats as they arise is a much more optimal strategy.
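A toy illustration of that sensitivity, using a 2D two-body model (assumed units: AU and years, with the Sun's GM = 4π²; this is a sketch of the principle, not a real ephemeris calculation). A tiny tangential velocity error grows into a separation of tens of thousands of kilometers within a century:

```python
import math

GM = 4 * math.pi ** 2  # Sun's gravitational parameter in AU^3/yr^2

def propagate(dv: float, years: float, dt: float = 0.001):
    """Semi-implicit Euler integration of a heliocentric orbit starting
    at 1 AU on a circular orbit, with tangential speed perturbed by dv."""
    x, y = 1.0, 0.0
    vx, vy = 0.0, 2 * math.pi + dv
    for _ in range(int(years / dt)):
        r3 = (x * x + y * y) ** 1.5
        vx -= GM * x / r3 * dt  # kick
        vy -= GM * y / r3 * dt
        x += vx * dt            # drift
        y += vy * dt
    return x, y

for years in (10, 50, 100):
    x0, y0 = propagate(0.0, years)
    x1, y1 = propagate(1e-6, years)  # ~1-in-6-million relative velocity error
    sep_km = math.hypot(x1 - x0, y1 - y0) * 1.496e8  # AU -> km
    print(f"after {years:3d} yr: separation ~ {sep_km:,.0f} km")
```

The error grows roughly linearly with time because the perturbation changes the orbital period, so the two trajectories drift apart in phase; real asteroid observations carry far larger uncertainties than this.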

TBH I'd rather share my planet with the mind-uploaded folks than the folks that bomb factories. Both of those folks are potentially non-friendly non-me intelligences, except the latter type is for certain non-friendly, while the former type might end up friendly enough, in the sense in which corporations and governments - as meta-organisms - are friendly enough.

Looking at the essay more, I would say: the chip fabrication labs are amazingly cheap, and even highly coordinated efforts by all terrorist groups cannot make even a small dent in the progress (excluding scenarios such as a big nuclear war).

The several billion dollars apiece for a fab is cheap. The margins may be thin when everyone else, too, is making chips. But I can pay 10x what I am paying for my electronic equipment, if need be, and all chances are you can too if you are posting here. The income distribution being what it is (power law; see the Pareto distribution), the costs can be raised massively while retaining the revenue, as long as cheap alternatives are not available. More than 50% of your customers can and would pay >2x (if the alternatives weren't available). Think about it: 20% own 80% of everything (or even more skewed), and it's like this all the way to the top. With this kind of income distribution the luxury market alone can support the entire industry along with progress. Look at Apple, and how successful it is even though its products are not all that popular worldwide - they make huge margins.
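A sketch of that power-law claim. The Pareto exponent here is an assumption (α ≈ 1.16 is the value that reproduces the 80/20 rule), not a measured price elasticity for chips:

```python
# Sketch: if willingness to pay is Pareto-distributed above the current
# price, what fraction of buyers survives a price hike, and what happens
# to revenue? ALPHA is assumed, not estimated from chip-market data.
ALPHA = 1.16  # Pareto exponent reproducing the 80/20 rule

def fraction_retained(price_multiple: float) -> float:
    # P(willingness-to-pay > m * current price), given everyone buys at 1x
    return price_multiple ** -ALPHA

for m in (2, 5, 10):
    kept = fraction_retained(m)
    revenue = kept * m  # revenue relative to today
    print(f"{m:2d}x price: {kept:5.1%} of buyers remain, revenue = {revenue:.2f}x")
```

With this α, about 45% of buyers would pay 2x (close to the "more than 50%" claimed above), but revenue falls as price rises whenever α > 1; the argument only works as stated if the exponent is below 1, so the exact shape of the distribution matters.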

Anyhow, with regards to the de-novo AGI, really, all we have is extre...

gwern (2 points, 12y)
I can't. And as already pointed out, costs for large tech companies or datacenters are already dominated by such apparently trivial factors as energy costs - a 10x increase in raw processor price would be catastrophic.

/sees a lot of vague economic buzzwords, no specific arguments or data on price elasticity

You will notice that the processor companies are only able to engage in price discrimination up to around $1000 a processor or so. (The most expensive Intel processor I could find at Newegg was $1050.) This suggests you are wrong.
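For what it's worth, a back-of-the-envelope comparison of hardware and energy costs, with all figures assumed: at roughly current prices, lifetime electricity is comparable to the purchase price, but a 10x processor price would make hardware dominate the total rather than being absorbable.

```python
# Sketch with assumed figures: purchase price vs. lifetime electricity.
CPU_PRICE = 1000.0  # $ per processor (roughly the retail ceiling noted above)
WATTS = 150.0       # average draw incl. cooling overhead (assumed)
KWH_PRICE = 0.10    # $/kWh (assumed)
YEARS = 4           # service life (assumed)

energy_cost = WATTS / 1000 * 24 * 365 * YEARS * KWH_PRICE
for multiple in (1, 10):
    hardware = CPU_PRICE * multiple
    share = hardware / (hardware + energy_cost)
    print(f"{multiple:2d}x CPU price: hardware ${hardware:,.0f}, "
          f"energy ${energy_cost:,.0f}, hardware share {share:.0%}")
```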
Dmytry (2 points, 12y)
I'm not sure this would apply if the 10x increase in price applies to everyone else as well. We wouldn't pay Google real money if there's Bing for free, but we would have to if there were no competition.

edit: also, for the energy cost: a) it's by no means trivial, and b) it only goes to show that if the CPUs get 10x more expensive it won't be such a huge rise in final operational costs. Competition from the lower-range products.

edit: actually, I want to somewhat clarify "paying 10x of what I am paying". I would probably upgrade home hardware less often, especially at first (but a factor of 10 is only 5 years of Moore's law or so). The hardware for business is another matter entirely. It is not at all expensive to pay Google >>10x of what it makes off you via showing you ads. The ads are amazingly bad at generating income; they still provide quite enough for Google. I pay ~$20/month for 100 megabits/second up and down (Lithuania, the world's #1 internet speed). From what I know (my girlfriend is from the US), typical costs in the US are $50 for perhaps 10 megabits on a typical day; God knows how much equally reliable 100 megabits costs there. And this ain't hardware costs, because hardware is not any cheaper here than in the US. It's probably the labour costs. My brother lives in Russia. They make way less money there; they still buy computers; if the computers were to cost 10x more, they wouldn't be able to afford them; but people in the developed countries would merely have to save some money.

edit: And with regards to purely-luxury spending: if diamond prices became 10x larger, people would spend about the same on diamonds while buying 1/10 the diamonds, and the diamond industry would have about the same income (actually, I could expect the income of the diamond industry to rise, because more expensive diamonds are then possible in rings without mechanical considerations). The diamonds in jewellery are inherently only worth as much as you pay for them. Now apply this to the chips. Pic...

Are you actually suggesting that people attack chip fab plants in an attempt to prevent WBE from occurring before de novo AGI?

I think if you were successful, you'd be more likely to prevent either from occurring than to prevent WBE from occurring first. It takes a whole lot of unfounded technological optimism to estimate that friendly de novo AGI is simple enough that an action like this would make it occur first, when we don't even know what the obstacles really are.

gwern (5 points, 12y)
Maybe. What do your probability estimates and expected value calculations say? It would take a lot of optimism. Good thing I never suggested that.
CasioTheSane (7 points, 12y)
I agree with your assessment that this would effectively delay WBE, and therefore increase the chances of it occurring last, but I can't even guess at how likely that is to actually be effective without at least a rough estimate of how long it will take to develop de novo AGI.

Your idea is very interesting, but I'll admit I had a strong negative emotional reaction. It's hard for me to imagine any situation under which I would sit down and do some calculations, check them over a few times, and then go start killing people and blowing stuff up over the result. I'm not prepared to join the Rational Liberation Front just yet.

//edit: I also really want to see where brain emulation research goes, out of scientific curiosity.
John_Maxwell (5 points, 12y)
This seems understandable, but I hope it isn't a significant factor in your decision making...

This is haunting the site. I see that your perspective is: "Does this imply that regulation could be accomplished by any modestly capable group, such as a Unabomber imitator or a souped-up ITS? No (reasons)" and that your position is that terrorism is not effective. However, I have found several mentions of people being creeped out by this article around the site. Here is the last mention of someone being creeped out I noticed. I think there is a serious presentation problem with this piece that goes like this:

  1. Person clicks article titl...

I wasn't sure that this was worth acting on, but I see that another person seems to be taking it the wrong way, so I guess you are right. I've done the following:

  • Substantially edited the summary here and there to make the logic clearer and mention upfront that terrorism won't work
  • Changed the title of the essay
  • Deleted the body here so people will have to read the full up-to-date version and not whatever old version is here
  • Reworked the intro sections to give more background and a hopefully more logical flow
Epiphany (0 points, 11y)
Oh thank goodness you did something about this! I guess you didn't read every comment on your thread, or you just didn't take rwallace seriously at first, but rwallace actually decided to quit LessWrong because of your essay. You can tell for sure because that's the last thing they said here and they haven't posted anything since March: http://lesswrong.com/user/rwallace/ Maybe somebody should let them know... since they don't come to the site anymore, that would be hard, but if you know who the person's friends are, you could ask if they'll pass the message on.

You know, it's really hard to tell how people will take one's writing before it is posted. If you'd like, I will read your stuff before you post it if you'll read mine - we can trade each other pages 1 for 1. That should reduce the risk of this happening to a much lower level.
gwern (7 points, 11y)
I think that would be pretty pointless; if he could think that after reading the original, the amendments aren't going to impress him. If he's that careless a reader, LW may be better off without him. (I did read his comment: I subscribe via RSS to the comments on every article I post.)

If you were to track him down and ask him to re-read, I'd give <35% that he'd actually come back and participate (where participate is defined as e.g. posting >=1 comment a month for the next 6 months).

Nah, I'm fine with #lesswrong as 'beta readers', as it were.
Epiphany (3 points, 11y)
I don't think the problem was careless reading. When you open with a comment about attacking chip fabs without specifying that you mean a government-level military, and your audience is mainly in a country where everyone has been bathed in the fear of terrorism for years, this is bound to trigger mindkill reactions. You could argue "Good LW'ers should stay rational while thinking about terrorism." but aside from the fact that everyone has flaws and that's a pretty common one, more importantly, you seem to overlook the fact that the entire rest of the world can see what you wrote.

Humans aren't known for being rational. They're known for doing things like burning "witches" and poisoning Socrates. In this time and place, you're less likely to get us killed by an angry mob, but you could easily provoke the equivalent of that in negative attention.

Reddit and The Wall Street Journal have both done articles on LessWrong recently. Big fish are starting to pay attention to LessWrong. That means people are paying attention to YOU. Yes, you. This post has gotten 1,344 page views. One of Luke's older posts got 200,000 views (Google Analytics). For contrast, a book is considered a bestseller if it sells 100,000 copies. YOU could end up getting that much attention on this site, Gwern, and the attention is not just from us. There are only about 13,000 registered users in the user database, and only 500-1000 of them are active in a given month. That just doesn't account for all of the traffic to the posts.

Even if it were true that all the people who misread this are schmoes, choosing to leave the site over it may be a perfectly valid application of one's intelligence. Not associating one's reputation with a group that is mistakenly thought to be in favor of terrorism is a perfectly sane survival move. I wondered if I should quit myself, before deciding to suggest that you edit the post. Considering the amount of attention that LessWrong is getting, and the fact that you are a...
gwern (1 point, 11y)
Mm, maybe. It is difficult for me to see such things; as I pointed out in another comment, before I wrote this, I spent scores of hours reading up on and researching terrorism and indeed posted that to LW as well; to me, terrorism is such an obviously useless approach - for anything but false flag operations - that nothing needs to be said about it.

Page views are worth a lot less than an entire book sale, IMO - one will spend much more time on a book than even a long essay. 1,344 page views doesn't impress me. For example, for this October, gwern.net had 51x more, or 69,311 total page views. The lifetime total for this essay on my site is already at 7,515, and most of that is from before I deleted the version here, so I expect that will boost the numbers a bit in the future.
Epiphany (0 points, 11y)
Agreed, especially if deciding things like whether to invest in publishing a particular author's new book. However, my purpose was just to make the number seem more real. Humans have problems with that - "One death is a tragedy, a million deaths is a statistic." as they say. I think it was an okay metaphor for that purpose. I'm not trying to say Luke's article is a "bestseller" (in fact it has a bounce rate of about 90%), just that LW posts can get a lot of exposure so even if it is the standard that LW members should be rational enough not to mindkill on posts like that one, we should probably care about it if non-rationalists from the world at large are mind-killing on stuff written here.
[anonymous] (0 points, 11y)
So, much less traffic than gwern.net gets in a month, on an arguably less controversial topic than the usual gwern.net fare. He's using the IRC channel, #lesswrong, as his beta testers. #lesswrong is a different thing from LessWrong.
Epiphany (0 points, 11y)
Then it's very odd that he doesn't seem to care that people are mistaking him as being in support of terrorism.

Oh dear. I assumed from the context (the fact that this thing got out onto the site without him appearing to know / care that people would think it was pro-terrorism) that he was referring to the website. Does everyone here have Asperger's or something?

Note: I removed the part in my post that referred to using LW as beta testers.
wedrifid (7 points, 11y)
No, almost certainly less than 90% of the people here have Asperger's!
Epiphany (0 points, 11y)
I'm confused about how this happened. Edit: I think I figured it out.
[anonymous] (2 points, 11y)
It would be incredibly improbable. Not-so-subtly suggesting your interlocutors aren't neurotypical is such a wonderful debate tactic, though; it'd be a pity to let the base rate get in the way.
Epiphany (6 points, 11y)
I'm genuinely confused at this point and just trying to figure out how this happened. From my point of view, the fact that this got posted without him realizing that it was going to be mistaken as a pro-terrorism piece is, by itself, surprising. That it was beta tested by other LW'ers first and STILL made it out like this is even more surprising. I'm not trying to convince you of anything, paper-machine. This isn't a debate. I am just going WTF.
Kindly (4 points, 11y)
I think you should consider the hypothesis that you are over-reacting before the hypothesis that lots of different beta readers are all under-reacting. (Which in turn is more likely than the hypothesis that the beta readers have a neurological condition that causes them to under-react.)
Epiphany (4 points, 11y)
Except that I didn't over-react. I wasn't upset. I just went "Is this a piece endorsing terrorism?", looked into it further, realized this interpretation is false, and wandered away for a while. Then I saw mention after mention around the site saying that people were creeped out by this piece. I came back and saw that someone had left because of it - like, for real, as in they haven't posted since they said they were leaving due to the piece. And then I went "Wow, a lot of people are creeped out by this. This is making LessWrong look bad. Even if it IS a misinterpretation, thinking that this post supports terrorism could be a serious PR problem."

My position is still that beta testers should ideally catch any potential PR disasters, and I don't think that's an over-reaction. At all.

For the record, even though it did occur to me for a moment as a possible explanation, I didn't say that because I really believed it was likely that everyone here has Asperger's. That would be stupid. I said it as an expression of surprise. I figured it would be obvious that it was an expression of surprise and not a rational assessment. I think my surprise was due to hindsight bias.
Kindly (0 points, 11y)
To be specific, the hypothesis I am suggesting is that you are now, currently, over-reacting by calling this a "potential PR disaster".
Epiphany (1 point, 11y)
I really didn't expect that. As I see it, a post that multiple people took as being in support of terrorism, and that somebody quit over, is definitely sensational enough to generate a buzz. Surely you have seen reporters take things out of context. Eliezer has already been targeted for a hatchet job by one reporter. RationalWiki sometimes takes stuff out of context. For instance, the Eliezer facts thread has a "fact" where an LWer edited a picture of him speaking beside a diagram that shows a hierarchy of increasingly more intelligent entities including animals, Einstein and God. The LW'er added Eliezer to the diagram, at a level well beyond God. You can see below this that Eliezer had to add a note for RationalWiki because they had apparently made the mistake of taking this photoshopped diagram out of context.

If some idiot who happens to have a RationalWiki account dropped by, or a reporter who was hard up for a scoop discovered this, don't you think it's likely they would take it out of context, either to make it more sensational or because of mindkill? I, for one, do not think there was anything special about the original post that would prevent it from becoming the subject of a hatchet job.

People act crazy when they're worried about being attacked. I have a friend who has dual citizenship. He came to visit America and was harassed at the airport simply because he lives in a different country and the security guard was paranoid about terrorism. I don't see this post getting LW shut down by the government or anything, but it could result in something really disappointing like Eliezer being harassed at airports, or something bad in between.

Considering all this, do you still think the risk of bad publicity is insignificant?
Kindly (1 point, 11y)
Pretty much, yeah. The opinion of RationalWiki is probably worth somewhere in between the opinion of 4chan and the opinion of Conservapedia. And people quit forums all the time, that's not something to worry about. I see this as a case of "the original version of the article was unclear, and has been edited to make it clearer". Not a scandal of any kind.
Epiphany (0 points, 11y)
So do I, to all of the above, so you apparently have more faith in humanity than I do in regards to people taking things out of context and acting stupid about it.
cata (4 points, 11y)
There's a strong feeling in the culture here that it's virtuous to be able to discuss weird and scary ideas without feeling weirded out or scared. See: torture and dust specks, AI risk, uploading, and so on.

Personally, I agree with you now about this article, because I can see that you and the fellow above and probably others feel strongly about it. But when I read it originally, it never occurred to me to feel creeped out, because I've made myself just think calmly about ideas, at least until they turn into realities -- I think many other readers here are the same.

Since I don't feel it automatically, quantifying "how weird" or "how scary" these things are to other people takes a real conscious effort; I forget to do it and I'm not good at it either. So that's how it happens.
Epiphany (0 points, 11y)
I like entertaining ideas that others find weird and scary, too, and I don't mind that they're "weird". I have nothing against it. Even though my initial reaction was "Does this guy support terrorism?", I was calm enough to investigate and discover that no, he does not support terrorism.

Yeah, I relate to this. Not on this particular piece, though. I'm having total hindsight bias about it, too. I am like "But I see this, how the heck is it not obvious to everyone else!?"

You know what? I think it might be the amount of familiarity with Gwern. I'm new, and I've read some of Gwern's stuff, but I hadn't encountered his "Terrorism isn't effective" piece, so I didn't have any reason to believe Gwern is against terrorism. Maybe you guys automatically interpreted Gwern's writing within the context of knowing him, and I didn't...

Isn't the first rule of Fight Club that you don't talk about Fight Club?

It seems to me that the key point is Moore's second law, leading to centralization. Centralized facilities are easy to sabotage. But if this law keeps going, it might end Moore's law all on its own.

If the capital expense of a fab keeps growing exponentially, pretty soon there will be a monopoly on state of the art silicon. What happens to the economics then? It seems to remove much of the incentive to build better fabs. Even if pricing keeps on as normal, the exponentially increasing cost of the fabs seems hard to finance.

The obvious solution is not to ma...
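A sketch of that financing squeeze under Moore's second law (also called Rock's law); the $5B fab cost, four-year doubling time, and ~$300B annual industry revenue are all illustrative assumptions:

```python
# Sketch: extrapolate exponentially growing fab capital costs until a
# single fab would exceed the whole industry's annual revenue.
FAB_COST_2012 = 5e9       # $ for a leading-edge fab (assumed)
DOUBLING_YEARS = 4        # Rock's law cadence (assumed)
INDUSTRY_REVENUE = 300e9  # annual semiconductor revenue, held flat (assumed)

year, cost = 2012, FAB_COST_2012
while cost < INDUSTRY_REVENUE:
    year += DOUBLING_YEARS
    cost *= 2
print(f"by ~{year}, one fab would cost ${cost/1e9:,.0f}B, "
      f"more than the assumed ${INDUSTRY_REVENUE/1e9:,.0f}B annual industry revenue")
```

On these assumptions the crossover arrives within a few decades, which is the sense in which the trend "might end Moore's law all on its own".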

Great analysis. I am skeptical, though, that a campaign of targeted disruption could permanently derail Moore's law as long as the economy as a whole remains strong. Markets are really good at responding to dislocations like the destruction of a chip fabrication plant: other facilities would increase production in response to higher prices, unnecessary consumption would be curtailed (think of all those college kids using their fabulously advanced computers to surf Facebook), and future facility development would incorporate the threat of attack into their designs. We might even see companies start to trade special forms of insurance against such attacks.

I have some ideas for slowing Moore's law as well and, I'm wondering what you guys think of them (Gwern, too). I'm thinking of making these into post/s of their own and am curious about whether they'd be well-received or what, if anything, should be done first.

Upvoted; very interesting.

the advent of brain emulation can be delayed by global regulation of chip fabs

I think it might be a hard sell to convince governments to intentionally retard their own technological progress. Any country that willingly does this will put itself at a competitive disadvantage, economically and defense-wise.

Nukes are probably an easier sell because they are specific to war - there's no other good use for them.

I think this might be more like Eliezer's "let it out of the box" experiments: The prospect of using the technology is too appealing to...

gwern (0 points, 11y)
Yes, I've pointed out most of those as reasons effective regulation would not be done (especially in China).
Epiphany (0 points, 11y)
Oh, sorry about that! After this dawned on me, I just kind of skimmed the rest and the subtitle "The China question" did not trigger a blip on my "you must read this before posting that idea" radar. What did you think of my ideas for slowing Moore's law?
gwern (4 points, 11y)

* Patents is a completely unworkable idea.
* Convincing programmers might work, if we think very few programmers or AI researchers are the ones making actual progress. Herding programmers is like herding cats, so this works only in proportion to how many key coders there are - if you need to convince more than, say, 100,000, I don't think it would work.
* PR nightmare seems to be the same thing.
* Winning the race is a reasonable idea but I'm not sure the dynamic actually works that way: someone wanting to produce and sell an AI period might be discouraged by an open-source AI, but a FLOSS AI would just be catnip to anyone who wants to throw it on a supercomputer and make $$$.
Epiphany (-2 points, 11y)
I wish this was on the idea comment rather than over here... I'm sorry but I think I will have to relocate my response to you by putting it on the other thread where my comment is. This is because discussing it here will result in a bunch of people jumping into the conversation on this thread when the comment we're talking about is on a different thread. So, for the sake of keeping it organized, my response to you regarding the feasibility of convincing programmers to refuse risky AI jobs is on the other thread.

If we wanted to shift probability towards de novo AGI (the desirability of uploads is contentious, with pro and con), then we might withhold support from hardware development or actively sabotage it.

Sabotage? Isn't that going to be illegal?

Depends on whether the ones doing it are also the ones making the laws.

see (2 points, 12y)
And there is the really big flaw in a regulatory scheme. Can you really think of a way to arrange international coordination against making better and faster chips? How well has international coordination on, say, carbon emissions worked?

If some countries outlaw making better chips, the others are likely to see that as a place where they can get a competitive advantage. If some countries outlaw importing better chips, that too will be seen by others as a place to get advantage, by using the chips themselves. And smuggling in high-capability chips from places where they are legal will be tempting for anyone who needs computing power.

The presence of the facilities in US allies is not itself particularly useful to overcoming the problem of coordination. The power of the US over, say, Taiwan is limited to the point where seeking shelter under the protection of the nuclear arsenal of the People's Republic of China would seem a better option. There doesn't seem to me to be any plausible scenario where current governments would weigh existential risk from Moore's Law higher than existential risk of nuclear war from bombing a country that has a nuclear umbrella.
gwern (4 points, 12y)
Looking at the pre-requisites and requirements for keeping a chip fab going, and then considering how much additional input is necessary to improve on the state of the art, I think I can safely say that stopping Moore's law is easier than nuclear nonproliferation. And note that no rogue state has ever developed H-bombs as opposed to merely A-bombs; nor has any nation (rogue or otherwise) ever improved on the Russian and American state of the art.
see (1 point, 12y)
I didn't suggest it would be physically harder than stopping nuclear proliferation. I suggested it would be politically harder. The success of the scheme "depends on whether the ones doing it are also the ones making the laws", and that means it depends on international politics.

Nuclear proliferation is a very, very bad analogy for your proposal because the NPT imposed no controls on any of the five countries that were already making nuclear weapons at the time the treaty was adopted, who between them were militarily dominant throughout the Earth. It was in the immediate self-interest of the five nuclear powers to organize a cartel on nuclear weapons, and they had the means to enforce one, too. Further, of the countries best situated to defect many were already getting the benefit of having virtually their own nuclear weapons without the expense of their own R&D programs through the NATO Nuclear Sharing program.

On the other hand, organizing to end Moore's Law is to the immediate disadvantage of any country that signs on (much like agreeing to cuts in carbon emissions), and even any country that simply wants to buy better computers. The utterly predictable result is not a world consensus with a handful of rogue nations trying to build fabs in the dark lest they get bombed by the US. Rather, it's the major current manufacturers of semiconductors never agreeing to end R&D and implementation. And then even if they do, a country like the People's Republic of China blowing off the existential risk, then going ahead with its own domestic efforts.
gwern (2 points, 12y)
Actually, it's a fantastic analogy, because everyone predicted that nuclear nonproliferation would fail abysmally within decades of starting and practically every nation would possess nukes, which couldn't have been more wrong.

Chip fabs and more advanced chips are even harder than nukes, because to continue Moore's law, we can translate into atomic bomb terms: "another country must not just create A-bombs, not just create H-bombs, but actually push the atomic bomb frontier exponentially, to bombs orders of magnitude more powerful than the state of the art, to bombs not just in the gigaton range but in the teraton range." This dramatically brings out just how difficult the task is.

It may theoretically be in a country's interest to make chip fabs, but chips are small & hugely fungible, so they will capture little of the surplus, in contrast to atomic bombs, which never leave their hands.

How many decades would it take the PRC to catch up? Their only existing designs are based on industrial espionage of Intel & AMD processors, my understanding was. How many decades with the active opposition of either the US or a world government, such that they can use no foreign suppliers for any of the hundreds and thousands of extremely specialized steps and technologies required to make merely a state-of-the-art chip, never mind advancing the state of the art as Moore's law requires?
timtyler (-5 points, 12y)

Maybe there's an intermediate possibility between WBE and de novo AI - upload an animal brain (how about a crow? a mouse?) and set it to self-improvement. You'd still have to figure out Friendliness for it, but Friendliness might be a hard problem even for an uploaded human brain. How would you identify sufficient Friendliness when you're choosing humans to upload? I'm assuming that human ems would self-improve, even if it's a much harder problem than improving a de novo AI.

Moore's Second Law reminds me of a good enough for sf notion I've got. Chip fabs k...

gwern (7 points, 12y)
It's possible, but I don't know of any reason we would expect an animal brain could recursively improve. As matters stand, all we know is that groups of humans can self-improve - so we don't even know if a single human is smart enough to self-improve as an upload. (Maybe human brains can fall into ruts or inevitably degenerate after enough subjective time.) This isn't too optimistic about crows or mice or lobsters, however excellent stories they make for.

It's been operating since at least the '80s as far as I can tell, and is stable out to 2015 or so. That's a pretty good run.
Dmytry (1 point, 12y)
A single human, or cat, or other mammal, is smart enough to self-improve without being uploaded (learning and training; we do start off pretty incapable). Think about it. It may well be enough to upload a brain and then just add more cortical columns over time, or apply some other simple, dumb process that does not require understanding, to make use of the built-in self-improvement ability.
billswift (-3 points, 12y)
Or the lobsters in Accelerando.

The real question is how to accelerate it, not to stop it. And more likely to happen, too.

In some homes, the electricity used to power computers is already a significant fraction of total household power used. If a carbon tax were applied to the natural gas bonanza from fracking, prices would discourage buying just straight additional CPUs, in favour of making better use of the CPU speed we have already - it would simply be uneconomic for a company to run 100 times the CPU power it currently does.

EDITED TO ADD:

I guess I should expand on my reasoning a little.

Moore's law continues, in part, because there is the demand for additional computing...

ZankerH (1 point, 12y)
Actually, the current trend in CPU development is minor (10%-20%) performance increases at same or lower power usage levels. The efficiency is improving dramatically.

There are two main routes to the Singularity: brain emulation/upload and de novo AGI.

Brain emulations are a joke. Intelligence augmentation seems much more significant - though it is not really much of an alternative to machine intelligence.

CasioTheSane (1 point, 12y)
Why would you think they're a joke? We seem to be on a clear path to achieving it in the near future.
timtyler (0 points, 12y)
As a route to machine intelligence they don't make sense - because they will become viable too late - they will be beaten.
CasioTheSane (4 points, 12y)
How do you know that?
timtyler (0 points, 12y)
Multiple considerations are involved. One of them is to do with bioinspiration. To quote from my Against Whole Brain Emulation essay:
CasioTheSane (4 points, 12y)
The existence of non-biomimetic technology does not prove that biomimetics are inherently impractical. There are plenty of recent examples of successful biomimetics:

* Biomimetic solar: http://www.youtube.com/watch?v=sBpusZSzpyI
* Anisotropic dry adhesives: http://bdml.stanford.edu/twiki/bin/view/Rise/StickyBot
* Self-cleaning paints: http://www.stocorp.com/blog/?tag=lotusan
* Genetic algorithms: http://gacs.sourceforge.net/

The reason we didn't have much historical success with biomimetics is that biological systems are far too complex to understand with a cursory look. We need modern bioinformatics, imaging, and molecular biology techniques to begin understanding how natural systems work, and to be able to manipulate things on a small enough scale to replicate them. It's just now becoming possible. Engineers didn't look at biology before, because they didn't know anything about biology, and lacked tools to manipulate molecular systems. Bioengineering itself is a very new field, and a good portion of the academic bioengineering departments that exist now are less than 5 years old! Bioengineering now is in a similar situation as physics was in the 19th century.

I looked at your essay, and don't see that you have any evidence showing that WBE is infeasible, or will take longer to develop than de novo AI. I would argue there's no way to know how long either will take to develop, because we don't even know what the obstacles really are. WBE could be as simple as building a sufficiently large network with neuron models like the ones we have already, or we could be missing some important details that make it far more difficult than that.

It's clear that you don't like WBE, and you have some interesting reasons why we might not want to use WBE.
timtyler (0 points, 12y)
That seems as though it is basically my argument. Biomimetic approaches are challenging and lag behind engineering-based ones by many decades. I don't think WBE is infeasible - but I do think there's evidence that it will take longer. We already have pretty sophisticated engineered machine intelligence - while we can't yet create a WBE of a flatworm. Engineered machine intelligence is widely used in industry; WBE does nothing and doesn't work. Engineered machine intelligence is in the lead, and it is much better funded. If one is simpler than the other, absolute timescales matter little - but IMO, we do have some idea about timescales.
CasioTheSane (3 points, 12y)
Polls of "expert" opinions on when we will develop a technology are not predictors of when we will actually develop it. Their opinions could all be skewed in the same direction by missing the same piece of vital information. For example, they could all be unaware of a particular hurdle that will be difficult to solve, or of an upcoming discovery that makes it possible to bypass problems they assumed to be difficult.
CarlShulman (3 points, 12y)
This is an important generalization, but there are also many counterexamples in our use of biotech in agriculture, medicine, chemical production, etc. We can't design a custom cell, but Craig Venter can create a new 'minimal' genome from raw feedstuffs by copying from nature, and then add further enhancements to it. We produce alcohol using living organisms rather than a more efficient chemical process, and so forth.

It looks like humans will be able to radically enhance human intelligence genetically through statistical study of human variation rather than mechanistic understanding of different pathways. Creating an emulation involves a lot of further work, but one might put it in a reference class with members like the extensive work needed to get DNA synthesis, sequencing, and other biotechnologies to the point of producing Craig Venter's 'minimal genome' cells.
timtyler (-2 points, 12y)
Sure - but again, it looks as though that will mostly be relatively insignificant and happen too late. We should still do it. It won't prevent a transition to engineered machine intelligence, though it might smooth the transition a little.

As I argue in my Against Whole Brain Emulation essay, the idea is more wishful thinking and marketing than anything else. Whole brain emulation as a P.R. exercise is a pretty stomach-churning idea from my perspective - but that does seem to be what is happening.

Possibly biotechnology will result in nanotechnological computing substrates. However, that seems to be a bit different from "whole brain emulation".
CarlShulman (1 point, 12y)
People like Kurzweil (who doesn't think that WBE will come first) may talk about it in the context of "we will merge with the machines, they won't be an alien outgroup" as a P.R. exercise to make AI less scary. Some people also talk about whole brain emulation as an easy-to-explain loose upper bound on AI difficulty. But people like Robin Hanson who argue that WBE will come first do not give any indications of being engaged in PR, aside from their disagreement with you on the difficulty of theoretical advances in AI and so forth.
timtyler (-6 points, 12y)