http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky. Audio version and past talks here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
Deutsch argues that the future is fundamentally unpredictable: expected utility considerations, for example, can't be applied to it, because we are ignorant of the possible outcomes, of the intermediate steps leading to those outcomes, and of the options that will be available; and there is no way to get around this ignorance. The very use of the concept of probability in this context, Deutsch says, is invalid.
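For reference, here is a standard statement of the expected-utility calculation Deutsch is objecting to (the formula is textbook decision theory; he doesn't write it down in the talk):

\[ \mathbb{E}[U \mid a] \;=\; \sum_{o \in O} P(o \mid a)\, U(o) \]

His claim is that the set of outcomes O, and even the set of actions a that will be available, are themselves unknown, so the sum can't even be written down, let alone evaluated.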
As illustration, among other things, he lists some failed predictions made by smart people in the past, attributing their failure to the unavailability of the ideas relevant to those predictions, ideas that would only be discovered much later.
(If the future is unknowable, how can we know that a certain prediction strategy is going to be systematically biased in a known direction? Biased with respect to what knowable standard?)
Deutsch explains:
On a more constructive, if not clearly argued, note:
(Possibly an example of the halo effect: the good guys are good, progress is good, so the good guys will make faster progress than the bad guys. Quite probably there was better reasoning behind this argument, but Deutsch neither gives it nor hints at its existence, probably because he considers the conclusion obvious, which is in any case a flaw of the talk.)
For the next 10 minutes or so he argues for the possibility of essentially open-ended technological progress.
Here Deutsch seemingly makes the same mistake he discussed at the beginning of the talk: he makes detailed predictions about future technology that depend on the set of technology-defining ideas presently available (which, by his own argument, can lead to underestimating progress).
The conclusion is basically a better version of Kurzweil's view of the Singularity: that ordinary technological progress is going to continue indefinitely (though Deutsch's progress is exponential in subjective time, not in physical time). Yudkowsky wrote in 2002:
Deutsch considers Popper's views on the growth of knowledge, pointing out that there are no reliable sources of knowledge, and that we should therefore turn instead to finding and correcting errors. From this he concludes:
(This doesn't help much with existential risks. Also, this optimism seems to be one magically reliable source of knowledge after all, strong enough to override whatever best conclusions can be drawn using the best tools currently available, however poor those tools seem on the great cosmic scale.)
This was addressed in Knowability of Friendly AI and in many of Yudkowsky's later writings, most recently in his joint paper with Bostrom. Basically, you can't predict the moves of a good chess AI (otherwise you'd be at least that good a chess player yourself), and yet you can know that it's going to win the game.
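One way to make the chess point precise (my formulation, not a quote from the paper): let \(G_{\text{win}}\) be the set of complete move sequences ending in a win for the AI; then

\[ P(\text{AI wins}) \;=\; \sum_{g \in G_{\text{win}}} P(g) \]

and you can know that this sum is close to 1 without being able to predict any individual sequence g. Confidence about a coarse-grained outcome is compatible with near-total ignorance of the fine-grained path leading to it.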
Deutsch continues:
(Or rather, presumably, so Optimism demands, since AIs are unpredictable, and so is technology.)
Finally, Deutsch summarizes the meaning of the overarching notion of "optimism" he has been using throughout the talk:
(No good questions in the quite long Q&A session. No LWers in the audience, I guess, or only the shy ones.)
This is surely a real effect. The government is usually stronger than the mafia. The army is stronger than the terrorists. The cops usually beat the robbers, etc.