comingstorm


Tallying these, it looks like roughly one in six have actually come true. Another one in six seems likely to come true in the readily foreseeable future (say, five to eight years). Note that many of these depend on what you're willing to call a "computer". I contend that just because something has a microcontroller running it doesn't make it count as a computer; e.g., a traffic light doesn't qualify. But should a cheap-ass dumb cellphone count? I think a certain amount of user-mediated flexibility should be a requirement, but ultimately it's a semantic argument anyway...

One weakness is pretty clear -- excessive optimism about the speed of development and adoption. There's no technological barrier to doing most of these things today, or in 1999 for that matter (although the robocar seems to be following the flying car along the path to perennial futurism). The most obvious problem is economic: when does the price come down enough to be worth bothering with?

However, the less obvious problem is that many of the predicted technologies are simply not as practically useful as they sounded. Speech recognition (the topic of multiple predictions) is the perfect example of this: dictation software has improved immensely since 1999, and extremely accurate commercial software is available today. However, the market for it is small (outside of niche markets like phone hells), and shows no signs of explosive growth.

The sad fact of the matter is that, technological wizardry notwithstanding, when you actually try out speech recognition, it is less useful for everyday tasks than a keyboard, for most people and purposes. The same kind of problem is encountered, to varying degrees, by virtual reality, haptic interfaces, educational software, and e-books, among others.

Finally, I don't contend that all of these functional deficits are irremediable; just that there is no particular evidence they will ever be remedied, and at any rate a great deal more work would have to be done to make these technologies practical. And the moral of this story, I think, is that it's easy to grossly underestimate the friction that engineering problems generate. So, when you're worrying about a hard takeoff from nanotech or whatever, bear in mind the modest fate of Dragon NaturallySpeaking.

@michael e sullivan: re "Monte Carlo methods can't buy you any correctness" -- actually, they can. If you have an exact closed-form solution (or a rapidly converging series, or whatever) for your numbers, you really want to use it. However, not all problems have such a thing; generally, you either simplify (giving a precise, incorrect number that is readily computable and hopefully close), or you do a numerical evaluation, which can approach the correct solution arbitrarily closely depending on how much computational power you devote to it.

Quadrature (the straightforward way to do numerical integration using regularly spaced samples) is a numerical evaluation method that is efficient for smooth, low-dimensional problems. For higher-dimensional problems, however, the number of samples required becomes impractical. For such difficult problems, Monte Carlo integration actually converges faster, and can sometimes be the only feasible method.
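To make the dimensional scaling concrete, here's a minimal Python sketch (my own toy example, not from the original discussion): it estimates the average of a smooth product of cosines over the unit hypercube, where a regular grid costs n^d evaluations in d dimensions, while plain Monte Carlo's error shrinks like 1/sqrt(samples) regardless of dimension.

```python
import itertools
import math
import random

def f(x):
    """Smooth toy integrand over the unit hypercube; its exact mean is sin(1)**d."""
    p = 1.0
    for xi in x:
        p *= math.cos(xi)
    return p

def midpoint_quadrature(d, n_per_axis):
    """Midpoint rule on a regular grid: costs n_per_axis**d evaluations."""
    pts = [(i + 0.5) / n_per_axis for i in range(n_per_axis)]
    total = sum(f(x) for x in itertools.product(pts, repeat=d))
    return total / n_per_axis ** d

def monte_carlo(d, n_samples, rng=random.Random(0)):
    """Plain Monte Carlo: error ~ 1/sqrt(n_samples), independent of d."""
    total = sum(f([rng.random() for _ in range(d)]) for _ in range(n_samples))
    return total / n_samples

d = 10
exact = math.sin(1.0) ** d
grid = midpoint_quadrature(d, 3)   # already 3**10 = 59,049 evaluations for a crude grid
mc = monte_carlo(d, 59_049)        # same budget; cost does not explode with d
print(f"exact={exact:.4f}  grid={grid:.4f}  monte_carlo={mc:.4f}")
```

At d = 20 the same crude three-points-per-axis grid would already need about 3.5 billion evaluations, while the Monte Carlo sample budget wouldn't need to change at all.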

Somewhat ironically, one field where Monte Carlo buys you correctness is numeric evaluation of Bayesian statistical models!
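For concreteness, here's a minimal random-walk Metropolis sketch (a standard textbook setup I'm adding for illustration; the coin-flip numbers are made up): it samples the posterior over a coin's bias given a uniform prior and 7 heads in 10 flips, which has the known exact answer Beta(8, 4), so the Monte Carlo estimate can be checked against it.

```python
import math
import random

HEADS, FLIPS = 7, 10   # made-up data: 7 heads out of 10 flips

def log_posterior(theta):
    """log p(theta | data) up to a constant: uniform prior times binomial likelihood."""
    if not 0.0 < theta < 1.0:
        return -math.inf
    return HEADS * math.log(theta) + (FLIPS - HEADS) * math.log(1.0 - theta)

def metropolis(n_samples, step=0.1, rng=random.Random(0)):
    """Random-walk Metropolis: propose a Gaussian step, accept with prob min(1, ratio)."""
    theta, samples = 0.5, []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)
        log_accept = log_posterior(proposal) - log_posterior(theta)
        if rng.random() < math.exp(min(0.0, log_accept)):
            theta = proposal
        samples.append(theta)
    return samples

draws = metropolis(50_000)[5_000:]   # discard a crude burn-in
print(f"posterior mean ~ {sum(draws) / len(draws):.3f}  (exact Beta(8,4) mean: {8/12:.3f})")
```

This is about the simplest possible MCMC sampler; the point is just that the posterior mean comes out of random sampling rather than a closed-form integral, which is exactly the "Monte Carlo buys you correctness" situation for models where no closed form exists.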

What about Monte Carlo methods? There are many problems for which Monte Carlo integration is the most efficient method available.

(you are of course free to suggest and to suspect anything you like; I will, however, point out that suspicion is no substitute for building something that actually works...)

This "perfectly rational" game-theoretic solution seems to be fragile, in that the threshold of "irrationality" necessary to avoid N out of N rounds of defection seems to be shaved successively thinner as N increases from 1.

Also, though I don't remember the details, I believe that slight perturbations of the rules can change the exact game-theoretic solution into something more interesting. Note that adding uncertainty about the number of rounds removes the induction premise: e.g., a 1% chance of ending the iteration each round makes the hanging genuinely unexpected.
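Here's a quick back-of-the-envelope check in Python (my illustration, using the standard textbook payoffs T=5, R=3, P=1, S=0, which aren't from the original comment): with per-round continuation probability delta, grim trigger against grim trigger earns R every round, while a one-shot defection earns T once and P thereafter, so cooperation is an equilibrium whenever delta >= (T-R)/(T-P).

```python
# Assumed standard Prisoner's Dilemma payoffs (not from the comment): T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def cooperate_forever(delta):
    """Expected payoff of mutual cooperation under grim trigger: R every round."""
    return R / (1.0 - delta)

def defect_now(delta):
    """Expected payoff of defecting against grim trigger: T once, then P forever."""
    return T + delta * P / (1.0 - delta)

for delta in (0.3, 0.5, 0.99):   # 0.99 ~ "a 1% chance of ending each round"
    c, d = cooperate_forever(delta), defect_now(delta)
    verdict = "cooperation sustainable" if c >= d else "defection pays"
    print(f"delta={delta:.2f}  cooperate={c:7.2f}  defect={d:7.2f}  {verdict}")
```

With these numbers the threshold is (5-3)/(5-1) = 0.5, so a 99% continuation probability is far more than enough to break the backward-induction argument.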

Anyway, the iterated prisoner's dilemma is a better approximation of our social intuition: in a social context, we expect at least the possibility of having to deal repeatedly with others. The alternate framing in the previous article seems to have been designed to remove that social context, but in the interests of Overcoming Bias, we should probably avoid such spin-doctoring in favor of an explicit, above-board articulation of the problem.