Deutsch argues that the future is fundamentally unpredictable: for example, expected-utility considerations can't be applied to the future, because we are ignorant of the possible outcomes, of the intermediate steps leading to those outcomes, and of the options that will be available; and there is no way around this. The very use of the concept of probability in this context, Deutsch says, is invalid.
As illustration, among other things, he lists some failed predictions made by smart people in the past, attributing the failures to the unavailability of the ideas relevant to those predictions, ideas that would only be discovered much later.
[Science can't] predict any phenomenon whose course is going to be affected by the growth of knowledge, by the creation of new ideas. This is the fundamental limitation on the reach of scientific explanation and prediction.
[Predictions that are serious attempts to extract unknowable answers from existing knowledge] are going to be biased towards bad outcomes.
(If it's unknowable, how can we know that a certain prediction strategy is going to be systematically biased in a known direction? Biased with respect to what knowable standard?)
Deutsch explains...
It's worth noting that most of the Nazi superiority in technology wasn't actually due to Nazi efforts, but rather to a previous focus on technological and scientific development; for example, Germans won 14 of the first 31 Nobel Prizes in Chemistry, the vast majority of the initial research into quantum mechanics was done by Germans, etc. Nazi policies actually slowed progress down, e.g. by causing the emigration of free-thinking scientists like John von Neumann, Hans Bethe, Leo Szilard, Max Born, Erwin Schrödinger, and Albert Einstein, and by replacing empirically based science with inaccurate political ideology. (Hitler personally believed that the stars were balls of ice, tried to avoid harmful "earth-rays" mapped out for him with a dowsing rod, and drank a toxic gun-cleaning fluid for its supposed health benefits, not to mention his bizarre racial theories.) Membership in the Society of German Natural Researchers and Physicians shrank by nearly half between 1929 and 1937; during World War II, nearly half of German artillery came from its conquered neighbors, its supply system relied in part on 700,000 to 2,800,000 horses, its tanks and aircraft were in many...
But different conditions hold today. The Gothic armies were virtually identical to the armies of the earlier Celts/Gauls whom the Romans had crushed; even the Magyars (~900s CE) used more or less the same tactics and organization as the Cimmerians (~700 BCE), though they did have stirrups, solid saddle trees, and stiff-tipped composite bows. Similarly, IIRC, the Roman armies didn't make use of any major recent technological innovations. This no longer holds today: the idea of an army using technology hundreds of years old being a serious military threat to any modern nation is frankly ludicrous. Technological and scientific development has become much, much more important than it was in Roman times.
(And, by the way, it's not really accurate to say that, in practice, the barbarians were all that much worse than the Romans in terms of development and innovation: technological development in Europe didn't really slow down all that much during the Dark Ages, and the Romans had very few scientific (as opposed to engineering) advances anyway; most of their scientific knowledge (not to mention their mythology, art, architecture, etc.) was borrowed from the Greeks.)
Similarly, in WW2, Japan did quite well for itself, and if a handful of major battles had gone slightly differently, the outcome would have been very different.
You are wrong about this. Even if every single American ship magically got sunk at some point in 1941 or 1942, and if every single American soldier stationed outside of the U.S. mainland magically dropped dead at the same time, it would only have taken a few years longer for the U.S. to defeat Japan. Once the American war production was up and running, the U.S. could outproduce Japan by at least two orders of magnitude and soon overwhelm the Japanese navy and air force no matter what their initial advantage. Starting the war was a suicidal move for the Japanese leadership, and even the sane people among them knew it.
I think you're also overestimating the chances the Germans had, and underestimating how well Hitler did given the circumstances, though that's more controversial. Also, Germany lost the technological race in pretty much every theater of war where technology was decisive: submarine warfare, cryptography, radar and air defense, and nuclear weapons all come to mind. The only exceptions I can think of are jet aircr...
It's inconsistent to expect the future to be better than you expect. If you think your probability estimates are too pessimistic, adjust them until you don't know whether they are too optimistic or too pessimistic. No one stops you from assigning probability mass to outcomes like "technological solution that does away with problem X" or "scientific insight that makes the question moot". Claimed knowledge that the best possible probability estimate is biased in a particular direction cannot possibly be correct.
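The point can be made concrete with a toy sketch (all numbers made up by me, not from the comment): a bias you can name is a bias you can remove, so after correcting it your estimates can no longer be knowably biased in either direction.

```python
# Toy illustration (made-up numbers) of "adjust until you don't know
# which way you're off": a bias you can name is a bias you can remove.
raw_estimates = [0.2, 0.5, 0.7]   # stated probabilities of good outcomes
known_bias = -0.1                 # "I know I'm 0.1 too pessimistic"

# Subtract the known bias, clamping to the valid probability range;
# round to sidestep floating-point noise in the display.
corrected = [round(min(1.0, max(0.0, p - known_bias)), 2)
             for p in raw_estimates]
print(corrected)  # -> [0.3, 0.6, 0.8]
```

Whatever residual error remains after the correction is, by construction, of unknown sign, which is exactly the state the comment says a coherent estimator should be in.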
I stopped listening fairly quickly, after determining that it was rubbish from a Bayesian perspective. Specifically, I stopped listening when he said that the future of humanity is different from Russian roulette because the future can't be modeled by probability. This is the belief that there is a basic "probability-ness" that dice and gun chambers have but people don't, and that things with "probability-ness" can be described by probability while things without it can't. But of course, we're all fermions and bosons in the end: there is no such thing as "probability-ness"; probability is simply what happens when you reason from incomplete information.
This should not have been made as a top-level post without some more explanation to let people evaluate whether to watch the video.
It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky.
To clarify what I originally misinterpreted on reading this description: according to this page, Yudkowsky was giving a talk on 25 Jan 2011, while Deutsch on 10 Mar 2011, so "previous speaker" doesn't refer to giving talks in succession.
Thanks for posting this. I would definitely enjoy seeing a debate between Deutsch and Yudkowsky.
The part that dealt with ethics was incredibly naive. About 47 minutes in, for example, he is counseling us not to fear ET, because ET's morality will inevitably be superior to our own. And the slogan: "All evils are due to lack of knowledge". Why does this kind of thing remind me of George W. Bush?
But I agreed with some parts of his argument for the superiority of a Popperian approach over a Bayesian one when 'unknown unknowns' regarding the gro...
From the very beginning of the talk:
I don't have to persuade you that, for instance, life is better than death; and I don't have to explain exactly why knowledge is a good thing, and that the alleviation of suffering is good, and communication, and travel, and space exploration, and ever-faster computers, and excellence in art and design, all good.
One of these things is not like the others.
It's slow loading for me due to a slow internet connection, but if the questions at the end are included, I was the one who asked about insurance companies.
I don't think his response was very satisfactory, though I have a better version of my question.
Suppose I give you some odds p:q and force you to bet on some proposition X (say, Democrats win in 2012) being true, but I let you pick which side of the bet you take; a payoff of p if X is true, or a payoff of q if X is false. For some (unique) value of p/q, you'll switch which side you want to take.
It seems this can force you to assign probabilities to arbitrary hypotheses.
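For concreteness, here is a minimal sketch of the mechanism (my own construction, not from the talk; function names are mine): an expected-payoff maximizer prefers the "X" side exactly when p·P(X) > q·(1−P(X)), so the switch point pins down P(X) = q/(p+q).

```python
# Sketch of eliciting a subjective probability from a forced bet:
# one side pays p if X is true, the other pays q if X is false,
# and the bettor chooses which side to take.

def preferred_side(p, q, prob_x):
    """Which side an expected-payoff maximizer takes, given P(X)=prob_x."""
    ev_x_side = p * prob_x            # payoff p only if X is true
    ev_not_x_side = q * (1 - prob_x)  # payoff q only if X is false
    return "X" if ev_x_side > ev_not_x_side else "not-X"

def implied_probability(p, q):
    """The P(X) at which the bettor is exactly indifferent: q / (p + q)."""
    return q / (p + q)
```

For example, if someone only switches sides when the odds reach 3:1, their implied P(X) is 1/4, whether or not they would ever have volunteered a number.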
My first reaction to his unlimited progress riff was "every specific thing I care about will be gone". The answer is presumably that there will be more things to care about. However, that initial reaction is probably common enough that it might be worth working on replies to people who are less inclined to abstraction than I am.
I'll take the edge off his optimism somewhat by pointing out that individuals and cultures can be rolled over by change, even if the whole system is becoming more capable, and we care about individuals and cultures (especi...
Deutsch gives Malthus as an example of a failed pessimistic prediction, at 23:00. However, it still looks as though Malthus is likely to have been correct: populations increase exponentially, while resources expand at most polynomially, due to the light cone. Deutsch discusses this point at 38:00, claiming relativistic time dilation changes this conclusion, which I don't think it really does: you still wind up with most organisms being resource-limited, just as Malthus described.
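A quick numeric sketch of that claim (growth rates and constants are illustrative only, chosen by me): a doubling population overtakes resources growing with the cubic volume of the light cone, no matter how generous the constant factor on the resources.

```python
# Illustrative only: population doubling each period (2^t) vs. resources
# growing like the volume of the light cone (~ coeff * t^3).
def first_shortfall(resource_coeff=1000, horizon=10_000):
    """First period t at which population 2**t exceeds resources
    resource_coeff * t**3, or None if it never happens before horizon."""
    for t in range(1, horizon):
        if 2 ** t > resource_coeff * t ** 3:
            return t
    return None

print(first_shortfall())        # crossover happens within a few dozen periods
print(first_shortfall(10**9))   # a billion-fold head start only delays it
```

The exponential wins for any polynomial degree and any constant; the head start only shifts the crossover date.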
Martin Rees is misrepresented 4:04 in. What Rees actually said was:
the odds are no better than 50-50 that our present civilisation on Earth will survive to the end of the present century without a serious setback
...whatever a "serious setback" is supposed to mean.
Considering that Deutsch was talking about new knowledge, and I use the same terminology as him, it is relevant.
Then define the term.
I know that? And if I played Kasparov I might win. It's not a 100% guaranteed prediction.
So what? How is that at all relevant? It isn't 100% guaranteed that if I jump off a tall building I will then die. That doesn't mean I'm going to try it. You can't use the fact that something isn't certain as an argument for ignoring the issue wholesale.
Deutsch doesn't think AGIs will undergo fast recursive self-improvement. They can't, because the first ones will already be universal, so there's nothing much left to improve besides their knowledge (not their design, aside from making it faster).
Ok. So I'm someone who finds extreme recursive self-improvement to be unlikely, and I find this to be a really unhelpful argument. Improvements in speed matter. A lot. Imagine, for example, that our AI finds a proof that P=NP, that this proof gives an O(n^2) algorithm for solving your favorite NP-complete problem, and that the constant in the O is really small. That means that the AI will do pretty much everything faster, and the more computing power it gets, the more disparity there will be between it and the entities that don't have access to this algorithm. It wants to engineer a new virus? Oh what luck, protein folding is NP-complete under many models. The AI decides to improve its memory design? Well, that involves graph coloring and the traveling salesman problem, also NP-complete. The AI decides that it really wants access to all the world's servers and to add them to its computational power? Well, most of those have remote access capability that is based on cryptographic problems which are much weaker than NP-complete. So, um, yeah. It got those too.
Now, this scenario seems far-fetched. After all, most experts consider it unlikely that P=NP, and extremely unlikely that there's any sort of fast algorithm for NP-complete problems. So let's instead just assume that the AI tries to make itself a lot faster. Well, let's see what our AI can do. It could give itself some nice quantum computing hardware and then use Shor's algorithm to factor large integers in polynomial time, at which point the AI can just take over lots of computers and have fun that way.
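Just to put rough numbers on the gap being described (constants are entirely hypothetical and chosen by me for illustration): at a modest instance size, the difference between exhaustive search and a small-constant polynomial algorithm is astronomical.

```python
# Back-of-the-envelope comparison (hypothetical constants): steps for
# exhaustive search over an NP-complete instance vs. a hypothetical
# small-constant O(n^2) algorithm for the same problem.
def steps_exhaustive(n):
    return 2 ** n          # brute force over all 2^n assignments

def steps_hypothetical(n, c=10):
    return c * n ** 2      # the assumed fast polynomial algorithm

n = 100                    # a modest instance size
ratio = steps_exhaustive(n) // steps_hypothetical(n)
print(f"speedup factor at n={n}: about 10^{len(str(ratio)) - 1}")
```

At n = 100 the ratio is already around 10^25, which is the kind of capability gap the paragraph above is gesturing at.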
Improving knowledge with intelligence is the same process for AGI and humans. It won't magically get super fast.
This is not at all obvious. Humans can't easily modify our own hardware. We have no conscious access to most of our computational capability, and our computational capability is very weak. We're pathetic sacks of meat that can't even multiply four- or five-digit numbers in our heads. We also can't save states and swap out cognitive modules. An AGI can potentially do all of that.
Don't underestimate the dangers of a recursively self-improving entity or the value of speed.
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
Audio version and past talks here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks