Wow, yes of course thinking the future of intelligence can't go foom would be bad. Since your non-profit plans on doing just that and it's really dangerous. I get that, except the foom part.
I don't see how you could argue this without a better understanding of how the brain is so smart.
All right, so it's your job to be extremely abstract. Well, abstract away. :P
If you look at the whole list, the predictions aren't terribly bad. I think the main problem is that they are far too concrete--Kurzweil got the gist of trends in electronics. He should have simply stated the trends he sees that justified the predictions he made. Then, instead of these obviously flawed predictions, he would have some worthwhile observations about trends in consumer electronics.
I don't know about the "Luddite movement" prediction, though. He should leave those kinds of predictions to the social scientists.
I looked at the list and thought it strange. As you said, some items have more details than others. Why? Did Ray see stronger reasons for less likely predictions, to put them on par with the vaguer ones? What role does his Law of Accelerating Returns play in this? As the more detailed claims are more wrong than the vague ones, has he become more skeptical of his ability to make predictions using his rationale?
I also agree that (this sort of) futurism isn't about prediction. Many of the claims aren't useful. Worse still, not only are some of the predictions vague, but some are difficult to interpret specifically in ways that introduce bias. For example: what is a "growing" Luddite movement? Is (2007AD, 5,000ppl) -> (2008AD, 10,000ppl) growing? Is "dramatically lighter and thinner" meant to be wearable computing (which seems to be implied from the theme of the other predictions), netbooks (which actually exist, and seem to be the default assumption of people looking at it today), or something else entirely?
If exponential growth means you get 90% of the progress in the last 10% of the time, I've got to say, that list looks surprisingly doable given that we've got another year to do it in. We're certainly more than 10% of the way there now! The iPhone alone checks off about a quarter of the list...
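As a quick sanity check (my own arithmetic, not from the comment): the "90% of the progress in the last 10% of the time" figure only holds for a specific growth rate. A minimal sketch, assuming progress grows exponentially from a starting level of 1 to `total_growth` over the period:

```python
def last_fraction(total_growth, tail=0.1):
    """Fraction of total exponential progress that occurs in the last `tail` of the time.

    Models progress(t) = total_growth ** t for t in [0, 1], measured above the
    starting level of 1.
    """
    return (total_growth - total_growth ** (1 - tail)) / (total_growth - 1)

# Doubling every year for a decade gives 2**10 ~ 1024-fold total growth:
print(round(last_fraction(2 ** 10), 3))   # ~0.5: only half the progress in the final year

# Getting 90% of the progress in the last 10% of the time requires
# roughly 10**10-fold growth over the whole period:
print(round(last_fraction(10 ** 10), 3))  # ~0.9
```

So the 90/10 framing implicitly assumes an enormously faster growth rate than a yearly doubling; with more ordinary exponentials, the last year contributes closer to half.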
Prediction can't be anything but a naive attempt at extrapolating past & current trends out into the future. Most of Kurzweil's accurate predictions are just trends about technology that most people can easily notice. Social trends are much more complex, and those predictions of Kurzweil's are off. Also, the occasional black swan is unpredictable by definition, and is usually what causes the most significant changes in our societies.
I like how sci-fi authors talk about their writings as not predicting what the future is going to look like (that's impossible and getting more so), but as using the future to critique the present.
Lastly, Alan Kay's quote always comes in handy when talking about future forecasting: "The best way to predict the future is to invent it."
It might be useful to put a little check or X mark next to these items, to indicate which were right vs. wrong, so the eye could quickly scan down the list to see the overall trend. But yes, it won't look good for Kurzweil, and checking such track records is very important.
Quite a number of the incorrect predictions are related to one development only: speech recognition. The speech recognition software we have nowadays is not yet sophisticated enough to encourage wider use. (And it won't become sophisticated enough until it incorporates (near-)human-level context knowledge for disambiguation.) So, all the predictions of speech being the main interaction channel with our computers are not borne out.
A connected set of predictions deals with machine translation, and is not borne out at this time for the same reasons.
The first set (about portable computers) can be seen as mainly correct, except for the objects which are listed as portable computers for daily use. Instead of earrings and other jewelry we have cell phones and PDAs, mp3 players, other media players, and GPS nav systems in cars. The trend here is towards incorporating all of it into one object, rather than distributing it into several and connecting them with a LAN. Look at how cell phones/PDAs have evolved in the last few years: web browsers, email, media players, document editors, calculators, converters, GPS or other (Google Maps) navigation, etc. I would look to them becoming the "portable computers" of Kurzweil's prediction.
The rest is not that far from the current state of technology.
Interpretation is a significant issue: what qualifies as a computer, and whether you fulfill terms like "routinely," "commonly," and "growing." Karo has covered that first bit.
As I score it, 9/31 are right by a generous interpretation of terms, maybe a few more depending on areas I do not know well or with very broad uses of "portable computer." Do cell phones outnumber land lines yet? By a stricter interpretation, 2/31 (lighter and thinner portables, many digital objects).
Rather than belabor Karo's point, I will note that we keep defining "computer" up, such that most of us do not think of MP3 players and cell phones as computers. You can, however, buy reasonably priced cell phones that are more powerful computers than the desktop I had in 1999.
Publishers choose from a wide range of authors who want to make predictions. They choose those that are most exciting. These come from the "fast change" and "strange assumptions" end of the distribution. Any prediction you actually hear about is therefore likely to be wrong.
I would agree with Karo, I think. I'm actually surprised by how accurate this list of predictions is; it's not at 50% but I'm not sure why we would expect it to be with predictions this specific. (I'm not saying he was epistemically justified, just that he's more accurate than I would have expected).
Following up on Eliezer's point, it seems like the core of his claims are: 1) computers will become smaller and people will have access to them basically 24/7. If you remember that even my cell phone, which is a total piece of crap and cost $10, would look like a computer to someone from 1999, this seems fairly accurate. 2) Keyboards will cease to be the primary method of interaction with computers. We're getting there--slowly--between improved voice recognition and the growth of tablet PCs. But we're not there yet. I wouldn't be surprised at all if this were true by 2020 (wouldn't be surprised if it weren't, either. I don't know how far off we are from speech recognition good enough that people can assume it will work). 3) People will start using computers for things that in 1999 had to be done in hard copy. This is starting to happen, but we're not there yet. Again, wouldn't surprise me either way in 2020. 4) People will be able to use computers as their primary means of interaction with the world. Some basement-dwelling geeks like myself aside, not quite true. People like dealing with other people. I think this is the least likely to be true ten years from now.
I'd say most of those things have in fact come to pass. The ones that haven't are primarily non-technological matters of preference (ie, keyboards) or highly specialized/low profit market niches (services for the disabled, primary education).
Rotating memory is on the way out, we've all got (or can easily have) high resolution wireless telephony/internet, and printed newspapers and magazines are essentially dead.
He's also very right about the growing neo-luddite movement. Case in point the organic anti-GM lobby and the anti-nuclear coal fetishists.
If you remember that even my cell phone, which is a total piece of crap and cost $10, would look like a computer to someone from 1999, this seems fairly accurate.
If I can't do my homework on it, it probably doesn't count as a personal computer. :P
Of course, if you go to, say, Japan, you see people using their phones for a lot more than you do in the United States. We have the world's crappiest cell phones.
Karo, regarding speech recognition being one of the least accurate areas, what does it say that this is the one field in which Kurzweil is an acknowledged expert? It ties into Tetlock's work where he found this same effect, at least among so-called "hedgehogs", people who focus on one single style of cognitive analysis and prediction (as compared to bringing in a wide range of viewpoints and strategies to get a handle on complex data). I don't know if Kurzweil really matches the hedgehog personality but it does seem that he tries to get a lot of mileage out of exponential curve fitting at the expense of other tools.
Tallying these, it looks like roughly one in six have actually come true. Another one in six seems likely to come true in the readily foreseeable future (say, five to eight years). Note that many of these depend on what you're willing to call a "computer". I contend that just because something has a microcontroller running it doesn't make it count as a computer; e.g., a traffic light doesn't qualify. But, should a cheap-ass dumb cellphone count? I think a certain amount of user-mediated flexibility should be a requirement, but ultimately it's a semantic argument anyway...
One weakness is pretty clear -- excessive optimism in the speed of development/adoption. There's no technological barrier to doing most of these things today, or in 1999 for that matter (although the robocar seems to be following the flying car along the path to perennial futurism). The most obvious problem is economic: at what point does the price come down to the point where it's worth bothering?
However, the less obvious problem is that many of the predicted technologies are simply not as practically useful as they sounded like. Speech recognition (the topic of multiple predictions) is in fact the perfect example of this; dictation software has in fact improved immensely since 1999, and extremely accurate commercial software is available today. However, the market for it is small (outside of niche markets like phone hells), and shows no signs of explosive growth.
The sad fact of the matter is that, technological wizardry notwithstanding, when you actually try out speech recognition, it is less useful for everyday tasks than a keyboard, for most people and purposes. The same kind of problem is encountered, to varying degrees, by virtual reality, haptic interfaces, educational software, and e-books, among others.
Finally, I don't contend that all of these functional deficits are irremediable; just that there is no particular evidence that they will ever become practical, and at any rate a great deal more work would have to be done to make them so. And the moral of this story, I think, is that it's easy to grossly underestimate the friction that engineering problems generate. So, when you're worrying about a hard takeoff from nanotech or whatever, bear in mind the modest fate of Dragon NaturallySpeaking.
In order to score forecasts, what we really want is:
As far as I can tell, Kurzweil's methodology explicitly predicts a vaguely defined "rate of change" that is currently roughly doubling every fifteen years. Within that methodology, he extracts fairly straightforward predictions, assumes instant adoption of technologies once they are feasible, ignores black swans, and largely ignores basic science, focusing on technology instead. In addition, he adds a few idiosyncratic predictions that match his interests and which are not calibrated to the general rate of change being predicted. In particular, these predictions tend to accurately estimate near-future computer pattern recognition capabilities in natural domains but grossly underestimate human pattern recognition in such domains, and to be grossly overoptimistic about market enthusiasm for computer systems which underperform relative to humans in pattern recognition functions. The predictions on the above list generally seem to fit this methodology.
...and everyone has heard of Ray Kurzweil, who has done a lot of very productive things and is rich, and no one has heard of you, who have not. Irritating, isn't it?
His predictions were much better than I expected. Your headline is misleading given this data point.
Most of his failures on that list seem to be a disconnection from social trends and/or a misunderstanding of the ways that people will actually want to use technology, as well as overestimating adoption rates. He also seems to think that as soon as all the component parts of a device have been made commonly available, inventions that combine them in straightforward ways (e.g., GPS navigators for the blind) will appear instantly. In short, were it not for sociology, he'd be doing a lot better.
Then again, my understanding of the way he does business is that he fills in those gaps himself. The products his company makes seem to be focused on combining new technologies, and he encourages people to do the same.
He was wrong about voice recognition, but once that's there, suddenly about a third of that list will fall into the same category: just a matter of overestimating the initiative that the market would take. It looks good to me.
A few things: There are rumors of Google planning to do speech recognition on all YouTube videos, to make them searchable. The tech is probably closer than we give it credit for. Perhaps Kurzweil is more guilty of overestimating how much people would want to adopt speech recognition. Who would have predicted that people would prefer to tap out messages on numeric keypads?
If your cellphone has bluetooth, you can probably hook up a standard bluetooth keyboard and mouse. Possibly a display, for example: http://www.celiocorp.com/
Given that I'm lying in bed with my iPhone commenting on this post, I'd say Ray did OK.
His extrapolations of computer hardware seem to be pretty good.
His extrapolations of computer software are far too optimistic. He clearly made the mistake of vastly underestimating how much work our brains do when we translate natural language or turn speech into text.
In 1999 we hadn't experienced the dotcom crash or 9/11. Those events may have slowed consumer technology application by a few years. On the other hand, military robots, remotely piloted aircraft, and "social" software for tracking terrorist groups have seen accelerated development. Concerns about global warming and oil shortages will likely accelerate nanotech and biotech associated with energy production while reducing the pace of development in other fields. Computational power continues to increase by a factor of a thousand per decade. Biotech continues to advance exponentially.
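For what it's worth, a factor of a thousand per decade pins down the implied doubling time; a quick check of the arithmetic (my own, not from the comment):

```python
import math

# A 1000x increase per decade implies a doubling time of roughly one year,
# since 2**10 is about 1024:
doubling_time_years = 10 / math.log2(1000)
print(round(doubling_time_years, 2))  # ~1.0
```

So "a thousandfold per decade" and "doubling about every year" are the same claim stated two ways.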
If humanity were really approaching a technological singularity I'd expect to see rapid increases in average wealth. Stock market performance in the last decade doesn't reflect a growth in real wealth. Also death rates for common diseases aren't showing a significant decline.
Kurzweil predicted good machine translation by 2004/2005 on NPR in 1999. This is close (I predicted 2006 back in 1996, by just throwing ten years of computational power on top of what I saw the internet could do). If you look at MT for many languages, he was a few years off--yet the vast majority thought 25 years or never. I think these predictions will look good by 2011 or 2012, and that isn't so far off since he made them in 1999.
Same for a possible luddite movement. So far, top social scientists are almost completely ignoring technological change as one sees with the health care projections out to 2025 or 2040, the GAO social security scare of 2040, etc.
I'm not sure if this criticism is correct, but the most opposed seem to be cocky baby boomers in their 50s like John Horgan who imply they know there won't be much change because it didn't land exactly on 2001 as they expected.
By the way, many deaths from disease have been in decline for decades including heart disease and stroke. Death from cancer is also likely to see a sharp decline from 2010.
This is interesting. Kurzweil did pretty well on technological predictions - I count him 6 for 7 on purely technical predictions. He did very badly on socio-economic predictions. I give him 5 out of 24 based on a lot of half points. In many cases however the technological component of these predictions was available (except on speech recognition which is interesting in itself for HalFinney's reasons) and he was just wrong on his predictions of how society would adopt it. I score him 12/18 where he was right about the technology but wrong about the socio-economics. Ignoring speech recognition he's 12 for 13 on the tech (I granted a couple of half points).
I note that this was posted in December 2008 so given Kurzweil's exponential growth theory it seems a bit premature to pass judgement at that point - you would expect quite a few of the claims to only come true in the last year of the decade if his theory were correct.
Actually, that looks like a reasonable assessment from a few years later, given that more of these things do appear to be coming about.
From 2023 Ray Kurzweil's list doesn't look all that bad, just ahead of reality by some 20 years. The main point of this post (futurism is mostly entertainment) seems true though.
This seems worth posting around now... As I've previously observed, futuristic visions are produced as entertainment, sold today and consumed today. A TV station interviewing an economic or diplomatic pundit doesn't bother to show what that pundit predicted three years ago and how the predictions turned out. Why would they? Futurism Isn't About Prediction.
But someone on the ImmInst forum actually went and compiled a list of Ray Kurzweil's predictions in 1999 for the years 2000-2009. We're not out of 2009 yet, but right now it's not looking good...
Now, just to be clear, I don't want you to look at all that and think, "Gee, the future goes more slowly than expected - technological progress must be naturally slow."
More like, "Where are you pulling all these burdensome details from, anyway?"
If you looked at all that and said, "Ha ha, how wrong; now I have my own amazing prediction for what the future will be like, it won't be like that," then you're really missing the whole "you have to work a whole lot harder to produce veridical beliefs about the future, and often the info you want is simply not obtainable" business.