I think your predictions about where Moore's Law will stop are wildly pessimistic. You quote EETimes saying that "28nm is actually the last node of Moore's Law", but Intel is already shipping processors at 22nm! Meanwhile on an axis entirely orthogonal to transistor size and count, there's a new architecture in the pipeline (Mill) which credibly claims an order of magnitude improvement in perf/power and 2x in single-threaded speed. Based on technical details which I can't really get into, I think there's another 2x to be had after that.
I think continued progress of Moore's law is quite plausible, and that was one of the scenarios I considered (Scenario #2). That said, it's interesting that you express high confidence in this scenario relative to the other scenarios, despite the considerable skepticism of computer scientists, engineers, and the McKinsey report.
Would you like to make a bet for a specific claim about the technological progress we'll see? We could do it with actual money if you like, or just an honorary bet. Since you're claiming more confidence than I am, I'd like the odds in my favor, at somewhere between 2:1 and 4:1 (details depend on the exact proposed bet).
My suggestion to bet (that you can feel free to ignore) isn't intended to be confrontational. cf.
http://econlog.econlib.org/archives/2012/05/the_bettors_oat.html
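For what it's worth, here is the arithmetic behind asking for odds (my own sketch, not part of the original offer): receiving k:1 odds is a fair bet exactly when your probability of winning is 1/(k+1), so odds between 2:1 and 4:1 correspond to break-even win probabilities between roughly 33% and 20%.

```python
def breakeven_prob(k: float) -> float:
    """Win probability at which receiving k:1 odds is exactly fair."""
    return 1.0 / (k + 1)

def expected_value(p_win: float, k: float, stake: float = 1.0) -> float:
    """Expected profit for the side that stakes `stake` to win k*stake."""
    return p_win * k * stake - (1 - p_win) * stake

for k in (2, 3, 4):
    # 2:1 -> 33%, 3:1 -> 25%, 4:1 -> 20%
    print(f"{k}:1 odds -> break-even win probability {breakeven_prob(k):.0%}")
```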
Off the top of my head, other rare events worth anticipating:
Assassination of a head of state and/or coup d'état
War between and/or within highly developed countries
A new pandemic
Unavoidable meteorite impact

Extraterrestrial invasion
Thanks! I added pandemics (though not in the depth I should have). I'll look at some of the others.
Is this what it comes down to, that Gore refused to bet, so they presumed to make a pretend bet for him?
Boo. Lame. Worse than lame. Deceptive. (On their part.)
Tell me it ain't so.
http://www.theclimatebet.com/?p=206&cpage=1#comment-229
“Now, assume that Armstrong and Gore made a gentleman’s bet (no money) and that the ten years of the bet started on January 1, 2008. Armstrong’s forecast was that there would be no change in global mean temperature over the next ten years. Gore did not specify a method or a forecast. Nor did searches of his book or the Internet reveal any quantitative forecasts or any methodology that he relied on. He did, however, imply that the global mean temperature would increase at a rapid rate – presumably at least as great as the IPCC’s 1992 projection of 0.03°C-per-year. Thus, the IPCC’s 1992 projection is used as Gore’s forecast.”
The full correspondence is here:
http://www.theclimatebet.com/?page_id=4
Maybe it's lame (?) but I don't think they're being deceptive -- they're quite explicit that Gore refused to bet.
The fact that he refused to bet could be interpreted either as evidence that the bet was badly designed and didn't reflect the fundamental point of disagreement between Gore and Armstrong, or as evidence that Gore was unwilling to put his money where his mouth is.
I'm not sure what interpretation to take.
btw, here's a bet that was actually properly entered into by both parties (neither of them a climate scientist):
http://econlog.econlib.org/archives/2014/06/bauman_climate.html
I hope some high profile people start challenging big talkers with public bets. Put up or shut up, publicly.
Have you looked at http://www.theclimatebet.com (mentioned in an UPDATE at the end of Critique #1 in my post)?
(Your quote is mangled: you probably have four spaces at the beginning, which makes the rendering engine format it as code, i.e. with no linebreaks.)
When you're done with this sequence, you should really make a summary post in Main laying out links to them in order, along with brief descriptions of each. I'd hate to see these posts disappear into the abyss of old open threads and links.
Thanks for both the appreciation and the suggestion.
I intend to do a concluding post on the MIRI blog, linking to all of these; if Luke agrees, I can cross-post that to LessWrong and accompany that with a full listing of blog posts.
I'll also put a list of all my posts on my personal website later on.
A big hole in your list is forecasting of financial markets, which is highly lucrative (when it works) and so attracts a considerable amount of effort and talent.
Good point. I'd looked at financial market forecasting along with macroeconomic forecasting, when I was investigating survey-based macroeconomic forecasting. I have some of the collected material, but I don't think I ever wrote it up. Thanks for reminding me! I'll add it to this post later.
Thanks for a comprehensive summary - that was helpful.
It seems that A&G contacted the working scientists to identify papers which (in the scientists' view) contained the most credible climate forecasts. Not many responded, but 30 referred to the then-recent IPCC WG1 report, which in turn referenced and attempted to summarize over 700 primary papers. There also appear to have been a bunch of other papers cited by the surveyed scientists, but the site has lost them. So we're somewhat at a loss to decide which primary sources climate scientists find most credible/authoritative. (Which is a pity, because those would surely be worth rating?)
However, A&G did their rating/scoring on the IPCC WG1 report, Chapter 8. But they didn't contact the climate scientists to help with this rating (or they did, but none of them answered?). They didn't attempt to dig into the 700 or so underlying primary papers, identify which of them contained climate forecasts and/or had been identified by the scientists as containing the most credible forecasts, and then rate those. Or even pick a random sample and rate those? All that does sound just a tad superficial.
What I find really bizarre is their site's conclusion that because IPCC got a low score by their preferred rating principles, then a "no change" forecast is superior, and more credible! That's really strange, since "no change" has historically done much worse as a predictor than any of the IPCC models.
Actually, it's somewhat unclear whether the IPCC scenarios did better than a "no change" model -- it is certainly true over the short time period, but perhaps not over a longer time period where temperatures had moved in other directions.
Co-author Green wrote a paper later claiming that the IPCC models did not do better than the no change model when tested over a broader time period:
http://www.kestencgreen.com/gas-improvements.pdf
But it's just a draft paper and I don't know if the author ever plans to clean it up or have it published.
I would really like to see more calibrations and scorings of the models from a pure outside view approach over longer time periods.
Armstrong was (perhaps wrongly) confident enough of his views that he decided to make a public bet claiming that the No Change scenario would beat the alternative. The bet is described at http://www.theclimatebet.com (linked above).
Overall, I have high confidence in the view that models of climate informed by some knowledge of climate should beat the No Change model, though a lot depends on the details of how the competition is framed (Armstrong's climate bet may have been rigged in favor of No Change). That said, it's not clear how well climate models can do relative to simple time series forecasting approaches or simple (linear trend from radiative forcing + cyclic trend from ocean currents) type approaches. The number of independent out-of-sample validations does not seem to be enough and the predictive power of complex models relative to simple curve-fitting models seems to be low (probably negative). So, I think that arguments that say "our most complex, sophisticated models show X" should be treated with suspicion and should not necessarily be given more credence than arguments that rely on simple models and historical observations.
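To make the "simple models" comparison concrete, here is a toy out-of-sample scoring of a no-change forecast against a fitted linear trend. The data are entirely synthetic (trend plus cycle plus noise of my own choosing); this illustrates the scoring procedure only and says nothing about the real climate record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual "temperature anomaly" series: linear trend plus a
# slow cycle plus noise.  Parameters are arbitrary, chosen only so the
# out-of-sample comparison below has something to chew on.
years = np.arange(1960, 2010)
t = years - years[0]
temps = 0.015 * t + 0.1 * np.sin(2 * np.pi * t / 60) \
        + rng.normal(0, 0.08, len(years))

# Hold out the last 10 years for out-of-sample validation.
train, test = temps[:40], temps[40:]
train_years, test_years = years[:40], years[40:]

# Forecast 1: "no change" -- carry the last observed value forward.
no_change = np.full(len(test), train[-1])

# Forecast 2: linear trend fit only on the training window.
slope, intercept = np.polyfit(train_years, train, 1)
linear = slope * test_years + intercept

mae = lambda forecast: np.abs(forecast - test).mean()
print(f"no-change MAE:    {mae(no_change):.3f}")
print(f"linear-trend MAE: {mae(linear):.3f}")
```

The same skeleton extends to scoring any competing forecast (e.g. trend plus fitted cycle) on the same held-out window, which is the kind of outside-view calibration exercise discussed above.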
See the last sentence in my longer quote:
We sent out general calls for experts to use the Forecasting Audit Software to conduct their own audits and we also asked a few individuals to do so. At the time of writing, none have done so.
It's not clear how much effort they put into this step, and whether e.g. they offered the Forecasting Audit Software for free to people they asked (if they were trying to sell the software, which they themselves created, that might have seemed bad).
My guess is that most of the climate scientists they contacted just labeled them mentally along with the numerous "cranks" they usually have to deal with, and didn't bother engaging.
I also am skeptical of some aspects of Armstrong and Green's exercise. But a first outside-view analysis that doesn't receive much useful engagement from insiders can only go so far. What would have been interesting is if, after Armstrong and Green published their analysis and it was somewhat clear that their critique would receive attention, climate scientists had offered a clearer and more direct response to the specific criticisms, and perhaps even read up more on the forecasting principles and the evidence cited for them. I don't think all climate scientists should have done so; I just think at least a few should have been interested enough to do it. Even something similar to Nate Silver's response would have been nice. And maybe that did happen -- if so, I'd like to see links. Schmidt's response, on the other hand, seems downright careless and bad.
My focus here is the critique of insularity, not so much the effect it had on the factual conclusions. Basically, did climate scientists carefully consider forecasting principles (or statistical methods, or software engineering principles) and then reject them? Had they never heard of the relevant principles? Did they hear about the principles, but dismiss them as unworthy of investigation? Armstrong and Green's audit may have been sloppy (though perhaps a first pass shouldn't be expected to be better than sloppy), but even if the audit itself wasn't much use, did it raise questions or general directions of inquiry worthy of investigation (or a simple response pointing to past investigation)? Schmidt's reaction seems like evidence in favor of the dismissal hypothesis. And in this particular instance, maybe he was right, but it does seem to fit the general idea of insularity.
To be blunt, I don't believe Dally. A while back, in the context of technological stagnation, I compared a 2012 Ford Focus to a 1970 Ford Maverick -- both popular midrange compact cars for their time -- and found that the Focus beat the pants off the Maverick on every metric but price (it cost about twice what the Maverick did, adjusted for inflation). Roughly twice the engine power with 1.5 to 2x the gas mileage; more interior room; far safer and more reliable; vastly better amenities.
It's not scaling as fast as Moore's Law by any means, but progress is happening. That might be tempered a bit by the price point, but reliability alone would be a strong counter to that once you amortize over the lifetime of the car.
My scenario #1 explicitly says that even in the face of a slowdown, we'll see doubling times of 10-25 years: "If the doubling time reverts to the norm seen in other cutting-edge industrial sectors, namely 10-25 years, then we'd probably see the introduction of revolutionary new product categories only about once a generation."
So I'm not predicting complete stagnation, just a slowdown where computing power gains aren't happening fast enough for us to see new products every few years.
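To put numbers on the gap between the scenarios (my arithmetic, not a claim from the post): a doubling time of T years implies a total improvement of 2**(n/T) over n years, so a 10-25 year doubling time yields only about 2x to 5.7x over a generation, versus thousands-fold at a Moore's-Law-like 2-year pace.

```python
def growth_over(years: float, doubling_time: float) -> float:
    """Total improvement factor after `years` at the given doubling time."""
    return 2 ** (years / doubling_time)

for T in (2, 10, 25):
    # roughly 5793x, 5.7x, and 2.0x over 25 years, respectively
    print(f"doubling time {T:>2} yr -> {growth_over(25, T):7.1f}x over 25 years")
```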