David Deutsch on How To Think About The Future

Post author: curi, 11 April 2011 07:08AM (4 points)

http://vimeo.com/22099396

What do people think of this, from a Bayesian perspective?

It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky. Audio version and past talks here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks

Comments (197)

Comment author: CarlShulman 09 April 2011 07:10:08PM *  7 points

This should not have been made as a top-level post without some more explanation to let people evaluate whether to watch the video.

Comment author: curi 09 April 2011 07:13:46PM -1 points

I don't want to bias the reactions.

Comment author: JoshuaZ 09 April 2011 09:17:14PM 5 points

Unfortunately, this attitude and your decision to put this in main rather than the discussion section are getting it downvoted. That will likely continue. Moreover, downvotes on main-section articles hurt a lot more than downvotes in the discussion section. I strongly urge you to move this to the discussion section, where it will be considered a much more reasonable post.

Comment author: PhilGoetz 10 April 2011 12:44:35AM *  0 points

To everyone who downvotes links posted in the main section because they think it's a cheap way to get karma: you can simply choose not to vote for them. Trying to discourage people from posting to the main page for karma reasons amounts to making karma voting decisions for other LWers.

Karma is supposed to indicate which articles and comments are worth reading. Karma doesn't function to tell people whose opinions to respect, so people should stop worrying that other people are getting easy karma. Trust me - I have 15,000 karma, and people don't cut me any more slack than when I had none.

Comment author: JoshuaZ 10 April 2011 12:51:26AM 2 points

Curi's karma has repeatedly dropped low enough that his posting rate is moderated. If that's going to happen, it should be based on the quality of his posts, not on his being socially tone-deaf about community norms of where to post things.

(Incidentally, there's another reason to downvote short link posts and the like in the main section: some people just have the RSS feed for the main posts and don't want every little link to show up.)

Comment author: PhilGoetz 10 April 2011 12:56:32AM *  2 points

Curi's karma has repeatedly dropped low enough that his posting rate is moderated. If that's going to happen, it should be based on the quality of his posts, not on his being socially tone-deaf about community norms of where to post things.

That's Curi's decision.

(Incidentally, there's another reason to downvote short link posts and the like in the main section: some people just have the RSS feed for the main posts and don't want every little link to show up.)

Okay - a valid reason.

I would still like to say that, when considering whether to impose a social norm against posting certain things on the main page, "they're unworthy of karma" is not a good reason, because (a) karma accumulation beyond the minimum needed to post gives users no advantage, and (b) you can choose not to vote; so the only remaining objection is that you don't trust the judgement of other LW users and would like to deprive them of the freedom to vote for such articles.

This may not have been your reason, but this seemed like a good place to make my point.

Comment author: pjeby 09 April 2011 07:30:11PM 13 points

You might want to move this to the discussion section, then; unadorned links like this are generally not considered appropriate to the main LW section.

(You can move it by editing the article and changing where it is published.)

Comment author: Dorikka 09 April 2011 08:33:41PM 0 points

Yep. I would downvote this, but it's already invisible on the top-level page.

Comment author: Larks 09 April 2011 11:21:36PM 0 points

The main page is for things we think of sufficient quality that they're worth the time and cognitive effort of reading. Is this worth an hour of your time to read? If not, it should be downvoted to invisibility.

Comment author: Dorikka 09 April 2011 11:32:38PM 1 point

As of now and when I first saw the post appear on the sidebar, it is/was invisible on the main page and visible only through the sidebar.

Comment author: Larks 09 April 2011 11:34:46PM 1 point

Yup.

It's worth noting in general that the 'main page' is actually the 'promoted' page, which requires an admin to move you there. But you're right, the article is not visible on the 'new' page either.

Comment author: Vladimir_Nesov 09 April 2011 08:14:40PM *  27 points

Deutsch argues that the future is fundamentally unpredictable, that for example expected utility considerations can't be applied to the future, because we are ignorant of the possible outcomes and intermediate steps leading to those outcomes, and the options that will be available; and there is no way to get around this. The very use of the concept of probability in this context, Deutsch says, is invalid.

As illustration, he lists, among other things, some failed predictions made by smart people in the past, attributing the failures to the unavailability of ideas relevant to the predictions, ideas that would only be discovered much later.

<18:57> [Science can't] predict any phenomenon whose course is going to be affected by the growth of knowledge, by the creation of new ideas. This is the fundamental limitation on the reach of scientific explanation and prediction.

<20:33> [Predictions that are serious attempts to extract unknowable answers from existing knowledge] are going to be biased towards bad outcomes.

(If it's unknowable, how can we know that a certain prediction strategy is going to be systematically biased in a known direction? Biased with respect to what knowable standard?)

Deutsch explains:

And the basic reason for that is that, as I said, the growth of knowledge is good, so that kind of prophesy, which can't imagine it, is going to be biased against prophesying good.

<24:54> Reason and science are the means to progress. They are not means to prophesy.

On a more constructive if not clearly argued note:

<25:33> Merely pulling the trigger less often doesn't change the inevitability of doom. [...] One of the most important uses of technology is to counteract disasters and to recover from disasters, both from foreseen and unforeseen evil. Therefore, the speed of progress itself is one of the things that is a defense against catastrophe.

<26:53> The speed of progress is one of the things that gives the good guys the edge over the bad guys, because good guys make faster progress.

(Possibly an example of the halo effect: the good guys are good, the progress is good, so the good guys will make faster progress than the bad guys. Quite probably, there was better reasoning behind this argument, but Deutsch doesn't give it, and doesn't hint at its existence, probably because he considers the conclusion obvious, which is in any case a flaw of the talk.)

For the next 10 minutes or so he argues for the possibility of essentially open-ended technological progress.

<39:27> The amount of knowledge in an environment of rational thought that allows it to grow, grows exponentially relative to the speed of computation.

[...] It's a mistake to think of the so-called singularity as being a shock, where we find that we can't cope with life, because iPhone updates are coming [...] every second. That's a mistake, because when progress reaches that speed, our technologically enhanced speed of thinking will have increased in proportion, and so subjectively again we will experience mere exponential growth.

Here, Deutsch seemingly makes the same mistake he discussed at the beginning of the talk: making detailed predictions about future technology that depend on the set of technology-defining ideas presently available (which, by his own argument, can lead to underestimation of progress).

The conclusion is basically a better version of Kurzweil's view of Singularity, that ordinary technological progress is going to continue indefinitely (Deutsch's progress is exponential in subjective time, not in physical time). Yudkowsky wrote in 2002:

I've come to the conclusion that what Kurzweil calls the "Singularity" is what we would call "the ordinary progress of technology." In Kurzweil's world, the Grinding Gears of Industry churn out AI, superhuman AI, uploading, brain-computer interfaces and so on, but these developments do not affect the nature of technological progress except insofar as they help to maintain Kurzweil's curves exactly on track.

Deutsch considers Popper's views on the process of development of knowledge, pointing out that there are no reliable sources of knowledge, and so instead we should turn to finding and correcting errors. From this he concludes:

<44:48> Optimism demands that we not try to extract prophesies of everything that could go wrong in order to forestall it from our scanty and misconception-laden existing knowledge. Instead, we need policies and institutions that are capable of correcting mistakes and recovering from disasters when they happen. When, not if.

(This doesn't terribly help with existential risks. Also, this optimism thing seems to be one magically reliable source of knowledge, strong enough to ignore whatever best conclusions it is possible to draw using the best tools currently available, however poor they seem on the great cosmic scale.)

<46:00> The way to prevent that nightmare of rogue AI apocalypse is not try to enslave our AIs, because if the AIs are creating new knowledge (and that's a definition of AI), then successfully enslaving them would require foretelling (prophesying) the ideas that they could have, and the consequences of those ideas, which is impossible.

This was addressed in Knowability of Friendly AI and many of Yudkowsky's later writings, most recently in his joint paper with Bostrom. Basically, you can't predict the moves of a good chess AI, otherwise you'd be at least that good a chess player yourself, and yet you know it's going to win the game.

Deutsch continues:

So instead, just as for our fellow humans, and for the same reason, we must allow AIs to integrate into the institutions of our open society.

(Or, presumably, so Optimism demands, since the AIs are unpredictable, and technology.)

<47:55> The only moral values that permit sustained progress are the objective values of an open society and more broadly of the enlightenment. No doubt, the [extraterrestrials'] morality would not be the same as ours, but nor will it be the same as that of 16th century conquistadors. It will be better than ours.

Finally, Deutsch summarizes the meaning of the overarching notion of "optimism" he has been using throughout the talk:

<49:50> Optimism in this sense that I have argued for is not a feeling, is not a bias or spin that we put on facts, like, you know, half-full instead of half-empty, nor on predictions, it's not hope for the best, nor blind expectation of the best (in some sense it's quite the contrary, we expect errors). It is a cold, hard, far-reaching implication of rejecting irrationality, nothing else. Thank you for listening.

(No good questions in the quite long Q&A session. No LWers in the audience, I guess, or only the shy ones.)

Comment author: timtyler 15 April 2011 02:31:58PM 2 points

Deutsch argues that the future is fundamentally unpredictable, that for example expected utility considerations can't be applied to the future, because we are ignorant of the possible outcomes and intermediate steps leading to those outcomes, and the options that will be available; and there is no way to get around this. The very use of the concept of probability in this context, Deutsch says, is invalid.

This bit starts about 12 minutes in. It is complete nonsense - Deutsch does not have a clue about the subject matter he is talking about :-(

Comment author: XiXiDu 10 April 2011 11:02:18AM *  2 points

Basically, you can't predict the moves of a good chess AI, otherwise you'd be at least that good a chess player yourself, and yet you know it's going to win the game.

This is a really good point. When I read it, I first thought I would have to disagree; after all, we designed the chess AI and therefore do understand it. But since I am currently reading Daniel Dennett's 'Darwin's Dangerous Idea', my next thought was that disagreeing with it reflects a general bias: assuming that a design is always inferior to its designer. But it should be obvious that our machines are faster and stronger than us, so why not better thinkers too?

Unlike the blind idiot God, we can pinpoint our own flaws and devise solutions, but we are also unable to apply those solutions to ourselves effectively; that will be achieved by the next level of self-redesigning things. But even now our designs can be superior to us, as they mirror our own improved-upon capabilities: our skills minus our flaws. We are still able to understand our machines but unable to mimic their capabilities, since we've been able to recreate some of our skills but haven't been able to benefit from the improvements we devised. We know that steel is tougher than bones; beware "steel" that knows this fact as well.

Comment author: curi 09 April 2011 10:00:04PM *  2 points

Merely pulling the trigger less often doesn't change the inevitability of doom. [...] One of the most important uses of technology is to counteract disasters and to recover from disasters, both from foreseen and unforeseen evil. Therefore, the speed of progress itself is one of the things that is a defense against catastrophe.

The idea is: if you pull the trigger once every 100 years instead of once every 5, and there's a 2% chance of doom each time, you're still doomed eventually. Any static society is doomed in that way. The delays don't help because nothing changes in the meantime, so eventually doom happens.

The attitude of not making progress, of just trying to sustain a fixed lifestyle forever, cannot work. Even if the chance of doom per year is made low, there is still some chance, so doom must arrive eventually. There's nothing to stop it.

It's only in a dynamic society creating new knowledge and progress that lasting longer affects whether you're doomed eventually, because in that extra time more progress is made.
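The arithmetic behind "doomed eventually" can be sketched in a few lines (a toy model using the hypothetical numbers from the comment above, not anything from the talk): with a fixed per-pull doom probability p, the chance of surviving n pulls is (1-p)^n, which goes to zero however rarely the trigger is pulled.

```python
# Toy model: a static society faces a fixed chance of doom each time the
# "trigger" is pulled. Surviving n pulls has probability (1 - p)**n, which
# tends to zero as n grows; pulling less often stretches the timeline but
# does not change the limit.

def survival_probability(p: float, n: int) -> float:
    """Probability of surviving n independent pulls with doom chance p each."""
    return (1.0 - p) ** n

# 2% doom per pull: survival after 500 pulls is about 0.004%, whether those
# pulls are spread over 2,500 years or 50,000.
print(survival_probability(0.02, 500))
```

Slowing the pull rate only stretches the timeline; only reducing the per-pull probability (i.e. creating new knowledge) changes the conclusion.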

Comment author: curi 09 April 2011 08:55:15PM *  1 point

(If it's unknowable, how can we know that a certain prediction strategy is going to be systematically biased in a known direction? Biased with respect to what knowable standard?)

I forget how much detail there is on this later in this talk, but it is in his book. The systematic bias towards pessimism is due to the method of trying to imagine the future using today's knowledge (which is less than the future's knowledge).

Quoting Deutsch from The Beginning of Infinity:

Trying to know the unknowable leads inexorably to error and self-deception. Among other things, it creates a bias towards pessimism. For example, in 1894, the physicist Albert Michelson made the following prophecy about the future of physics:

The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote. … Our future discoveries must be looked for in the sixth place of decimals. (Albert Michelson, address at the opening of the Ryerson Physical Laboratory, University of Chicago, 1894)

What exactly was Michelson doing when he judged that there was only an ‘exceedingly remote’ chance that the foundations of physics as he knew them would ever be superseded? He was prophesying the future. How? On the basis of the best knowledge available at the time. But that consisted of the physics of 1894! Powerful and accurate though it was in countless applications, it was not capable of predicting the content of its successors. It was poorly suited even to imagining the changes that relativity and quantum theory would bring – which is why the physicists who did imagine them won Nobel prizes. Michelson would not have put the expansion of the universe, or the existence of parallel universes, or the non-existence of the force of gravity, on any list of possible discoveries whose probability was ‘exceedingly remote’. He just didn’t conceive of them at all.

Comment author: FAWS 10 April 2011 01:51:08AM *  8 points

It's inconsistent to expect the future to be better than one expects. If you think your probability estimates are too pessimistic adjust them until you don't know whether they are too optimistic or too pessimistic. No one stops you from assigning probability mass to outcomes like "technological solution that does away with problem X" or "scientific insight that makes the question moot". Claimed knowledge that the best possible probability estimate is biased in a particular direction cannot possibly ever be correct.
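FAWS's point can be illustrated with a toy simulation (all numbers hypothetical, not from the thread): the moment a directional bias in your estimates becomes known, you can fold it back in, after which the corrected estimates are no longer predictably wrong in either direction.

```python
import random

random.seed(0)

# A forecaster whose raw estimates are systematically 0.5 too low (pessimistic).
true_values = [random.gauss(0.0, 1.0) for _ in range(10_000)]
raw_estimates = [v - 0.5 + random.gauss(0.0, 0.1) for v in true_values]

# Measuring the bias is the same thing as "knowing the estimates are too
# pessimistic"...
n = len(true_values)
bias = sum(t - e for t, e in zip(true_values, raw_estimates)) / n

# ...and once it is known, the fix is trivial: add it back in.
corrected = [e + bias for e in raw_estimates]
residual = sum(t - c for t, c in zip(true_values, corrected)) / n

print(round(bias, 2))        # close to 0.5
print(abs(residual) < 1e-9)  # True: no remaining directional bias
```

The corrected forecast is the one you actually endorse; by construction you no longer know which direction it errs in, which is FAWS's point about claimed knowledge of bias.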

Comment author: curi 09 April 2011 11:24:59PM 0 points

(Presumably, since the AIs are unpredictable, and technology, Optimism demands that we all live happily ever after.)

No. Deutsch's "principle of optimism" states:

All evils are caused by insufficient knowledge.

Optimism demands that they can live happily ever after if they learn how. It does not predict that they will.

Comment author: Vladimir_Nesov 09 April 2011 11:57:26PM *  0 points

Agreed. The "we all live happily ever after" inference does contradict Deutsch's idea, which I noticed a little after writing this, and so corrected the wording (before seeing your comment) thusly:

(Or, presumably, so Optimism demands, since the AIs are unpredictable, and technology.)

Comment author: curi 09 April 2011 10:12:45PM 0 points

(Possibly an example of the halo effect: the good guys are good, the progress is good, so the good guys will make faster progress than the bad guys. Quite probably, there was better reasoning behind this argument, but Deutsch doesn't give it, and doesn't hint at its existence, probably because he considers the conclusion obvious, which is in any case a flaw of the talk.)

He doesn't consider it obvious. He considers nothing obvious in general (in a serious, not vacuous way). This in particular he has thought about, not because it is obvious but because it isn't.

The basic reason "good guys" make progress faster than "bad guys" (in the sense of: immoral guys, like prone to violence) is that they have more stable, peaceful, cooperative societies that are better suited to making progress. It's because good values are more effective in real life.

There's discussion of this stuff in his book The Beginning of Infinity.

Comment author: JoshuaZ 09 April 2011 10:40:46PM *  4 points

The basic reason "good guys" make progress faster than "bad guys" (in the sense of: immoral guys, like prone to violence) is that they have more stable, peaceful, cooperative societies that are better suited to making progress. It's because good values are more effective in real life.

This sort of claim seems to run into historical problems. A lot of major expansionist violent empires have done quite well for themselves. In modern times, some of the most "bad" groups have done well as well. The Nazis in many ways had much better technology than the Allies. If they hadn't been ruled by an insane dictator they would have done much better. Similarly, if they had expanded just as much but waited to start the serious discrimination and genocide until after they already had won they would have likely won. Similarly, in WW2, Japan did quite well for itself, and if a handful of major battles had gone slightly differently, the outcome would have been very different.

Or to use a different, but potentially more controversial example, in North America and in Australia, the European colonizers won outright, despite having extremely violent, expansionist policies. In North America, you actually had multiple different European groups fighting amongst themselves as well and yet they still won.

Overall, this is a pleasant, optimistic claim that seems to be depressingly difficult to reconcile with actual history.

Comment author: Randaly 10 April 2011 01:10:46AM *  11 points

It's worth noting that most of the Nazi superiority in technology wasn't actually due to Nazi efforts, but rather due to a previous focus on technological and scientific development; for example, Germans won 14 of the first 31 Nobel Prizes in Chemistry, the vast majority of initial research into quantum mechanics was done by Germans, etc. But Nazi policies actually did actively slow down progress, by e.g. causing the emigration of free-thinking scientists like John von Neumann, Hans Bethe, Leo Szilard, Max Born, Erwin Schrödinger, and Albert Einstein, and by replacing empirically based science with inaccurate political ideology. (Hitler personally believed that the stars were balls of ice, tried to avoid harmful "earth-rays" mapped out for him with a dowsing rod, and drank a toxic gun-cleaning fluid for its supposed health benefits, not to mention his bizarre racial theories.) Membership in the Society of German Natural Researchers and Physicians shrank by nearly half between 1929 and 1937; during World War II, nearly half of German artillery came from its conquered neighbors, its supply system relied in part on 700,000-2,800,000 horses, its tanks and aircraft were in many ways technologically inferior to those of many of its neighbors, etc.

"If they hadn't been ruled by an insane dictator they would have done much better. Similarly, if they had expanded just as much but waited to start the serious discrimination and genocide until after they already had won they would have likely won."

But that's Deutsch's entire point: that that's what the "bad guys" do; it's what makes them the "bad guys". Sure, if Hitler hadn't been Hitler, or somehow not been human, German science wouldn't have been at a massive disadvantage. But I don't see much evidence that the "bad guys" have an advantage; at best, if you assume best-case conditions and that the "bad guys" don't act like humans, you get an equal playing field.

(And we see similar things among the other "bad guys" of history- Lysenkoism, the Great Leap Forwards, etc.)

"Or to use a different, but potentially more controversial example, in North America and in Australia, the European colonizers won outright, despite having extremely violent, expansionist policies."

Conditions then no longer hold; nations are no longer isolated, the ideas of science/democracy/capitalism are fairly generally known, etc. And it's also worth noting that the colonizers have generally been transformed into "good guys".

Comment author: Vladimir_M 12 April 2011 03:18:11AM 7 points

during World War II, nearly half of German artillery came from its conquered neighbors, its supply system relied in part on 7,000 horses,

According to this article published by the German Federal Archives, 2.8 million horses served in the German armed forces in WW2. The article also notes how successfully the German wartime propaganda portrayed the Wehrmacht as a high-tech motorized army, an image widely held in the public to this day, while in reality horses were its main means of transport.

Comment author: JoshuaZ 10 April 2011 01:15:29AM *  4 points

You make a very strong case that the Nazi example does go in the other direction. I withdraw that example. If anything it goes strongly in favor of Deutsch's point.

I'm not convinced of the relevance of your point about the historical situation during the colonization of North America. The point is not whether someone eventually transformed; the point is that violent, expansionist groups can win over less expansionist groups.

Comment author: curi 10 April 2011 01:23:50AM *  2 points

Deutsch's definition of "the bad guys" is not the most expansionist groups.

He would regard the colonizers as the good guys (well, better guys) because their society was less static, more open to improvement, more tolerant of non-conformist people, more tolerant of new ideas, more free, and so on. There's a reason the natives had worse technology and their culture remained static for so long: they had a society that squashes innovation.

Comment author: JoshuaZ 10 April 2011 01:27:26AM 4 points

You'd have to convince me that they were more open to non-conformists. A major cause of the European colonization was flight of non-conforming groups (such as the Puritans) to North America where they then proceeded to persecute everyone who disagreed with them.

There's a reason the natives had worse technology and their culture remained static for so long: they had a society that squashes innovation.

I'm curious what you think of "Guns, Germs, and Steel" or similar works. What causes one society or another to adopt or even make innovations can be quite complicated.

Comment author: Randaly 10 April 2011 09:09:15PM 7 points

The Renaissance/much of modern science originated in Italy, not in England (thus, e.g. Galileo, da Vinci, etc.) And the Italian city-states of the time were fairly free: Pisa, Milan, Arezzo, Lucca, Bologna, Siena, Florence, and Venice were all at some point governed by elected officials. They were also remarkably meritocratic: as the influential Neapolitan defender of atomism Francesco D'Andrea put it, describing Naples:

There is no city in the world where merit is more recognized and where a man who has no other asset than his own worth can rise to high office and great wealth.

(Even if he's only boasting about his own city-state, it's significant that meritocracy was considered worth boasting about.)

Similarly, merchants, not priests, politicians, etc. were considered the highest status group: nobles up to and including national leaders (e.g. the Doge of Venice) dressed like merchants.

(Incidentally, the other factors you mentioned below also played a role: competition between city-states and the influence of outside science from Byzantium and the Islamic world showing what could be done. Nevertheless, Italian freedoms were also necessary: e.g. Galileo was only able to publish his ideas because he lived in the free Republic of Venice, where Jesuits were banned and open inquiry encouraged; he was persecuted and forced to recant his theories when he moved to Tuscany.)

Comment author: curi 10 April 2011 01:46:29AM *  -1 points

read The Beginning of Infinity by Deutsch. It discusses that Diamond book and other similar works.

Yes European society was not favorable to non-conformists. One period I've studied, which is later (so, i think, better in this regard) is around 1790 ish. At that time, to take one example, the philosopher william godwin's wife died in childbirth and he published memoirs and people got really pissed off because she had had sex out of wedlock and stuff along those lines. when godwin's daughter ran off with shelley there were rumors he had sold her. meanwhile, for example, there was lots of discrimination against irish catholics. i know some stuff about how biased and intolerant people can be.

but what i also know is a bit about static societies (again, see the book for more details, or at least check out my website, e.g. http://fallibleideas.com/tradition).

when a society doesn't change for thousands of years that means it's even harsher than the european society i was talking about. preventing change for such a long period is hard. stuff is done to prevent it. the non-conformists don't even get off the ground. everyone's spirits are squashed in childhood -- thoroughly -- and so the adults don't rebel at all. if there were adults who were eccentric then the society simply wouldn't stay the same so long. european society was already getting fairly near fairly rapid changes (e.g. industrial revolution) when it started colonizing the new world.

Comment author: JoshuaZ 10 April 2011 02:01:12AM *  4 points

when a society doesn't change for thousands of years that means it's even harsher than the european society i was talking about.

This doesn't follow. (Incidentally, I don't know why you sometimes drop back to failing to capitalize but it makes what you write much harder to read.) For example, if one doesn't have good nutrition then people won't be as smart and so won't innovate. Similarly, if one doesn't have free time people won't innovate. Some technologies and cultural norms also reinforce innovation. For example, having a written language allows a much larger body of ideas, and having market economies gives market incentives to coming up with new technologies.

Moreover, innovation can occur directly through competition. When you are convinced that your religion or tribe is the best and that you need to beat the others by any means necessary you'll do a lot better at innovating.

There's also a self-reinforcing spiral: the more you innovate the more people think that innovation is possible. If your society hasn't changed much then there's no reason to think that new technologies are easy to find.

There's no reason to think that Native American populations were systematically preventing change. There's a very large difference between having infrastructural and systemic issues that make the development of new technologies unlikely and the claim that "everyone's spirits are squashed in childhood -- thoroughly".

Comment author: JoshuaZ 10 April 2011 02:16:33AM *  0 points

Minor remark: Your essay about tradition is much more readable than a lot of the other material on your site. I'm not sure why but if you took a different approach to writing/thinking about it, you might want to apply that approach elsewhere.

Comment author: curi 10 April 2011 03:59:28AM 1 point

I think the difference is you. I wrote that entire site in a short time period. I regard it as all being broadly similar in style and quality. I attempted to use the same general approach to the whole site; I didn't change my mind about something midway. I think it's a subject you understand better than epistemology directly (it is about epistemology, indirectly. traditions are long lived knowledge). The response I've had from other readers has varied a lot, not matched your response.

I do know how to write in a variety of different styles, and have tried each in various places. The one I've used here in the last week is not the best in various senses. But it serves my purpose.

Comment author: Desrtopa 10 April 2011 04:05:05PM *  3 points

The first example that comes to mind for me is the collapse of the Roman empire. The Romans might have been "bad", being aggressive and expansionist, but the people they fell to were markedly worse from the perspective of truth seeking and pursuit of enlightenment, the standard Deutsch and curi are applying, and their replacements ushered in the Dark Ages.

Comment author: Randaly 10 April 2011 08:25:58PM *  6 points

But different conditions hold today. The Gothic armies were virtually identical to the armies of the earlier Celts/Gauls whom the Romans had crushed; even the Magyars (~900s CE) used more or less the same tactics and organization as the Cimmerians (~700 BCE), though they did have stirrups, solid saddle trees, and stiff-tipped composite bows. Similarly, IIRC, the Roman armies didn't make use of any major recent technological innovations. This no longer holds today; the idea of an army using technology hundreds of years old being a serious military threat to any modern nation is frankly ludicrous. Technological and scientific development has become much, much more important than it was during Roman times.

(And, btw, it's not really accurate to say that, in practice, the barbarians were all that much worse than the Romans in terms of development and innovation; technological development in Europe didn't really slow down all that much during the Dark Ages, and the Romans had very few scientific (as opposed to engineering) advances anyway; most of their scientific knowledge (not to mention their mythology, art, architecture, etc.) was borrowed from the Greeks.)

Comment author: Desrtopa 10 April 2011 08:31:54PM 0 points [-]

Yes, but the culture of enlightenment and innovation within Greek and Roman culture had already been falling apart from within. The culture of Classical Antiquity was outcompeted by less enlightened memes.

Comment author: Randaly 10 April 2011 09:27:11PM 1 point [-]

How so? I'm not sure when, specifically, you're talking about, but the post-expansion Roman Empire still produced such noted philosophers as Marcus Aurelius, Apuleius, Boethius, St. Augustine, etc.

Comment author: Desrtopa 10 April 2011 10:46:10PM 2 points [-]

I'm thinking of the decline of Hellenist philosophy, especially the mathematical and empirical outlooks propounded by those such as Hypatia.

Comment author: Jayson_Virissimo 11 April 2011 07:13:01PM *  2 points [-]

I'm thinking of the decline of Hellenist philosophy, especially the mathematical and empirical outlooks propounded by those such as Hypatia.

As far as I know, Hypatia was a Neoplatonist like Saint Augustine. What evidence do you know of that she had an empirical outlook?

Comment author: Randaly 11 April 2011 04:34:50PM 1 point [-]

Well of course the previously dominant branch of philosophy declined- that happens all the time in philosophy. But I don't think that there's grounds for proclaiming Hellenist philosophy to be significantly better than its successors: it was hardly empirical (Hypatia herself was an anti-empirical Platonist) and typically more concerned with e.g. confused explanations of the world in terms of a single property (all is fire! no, water!) or confusion regarding words (e.g. the Sorites paradox) than any kind of research valuable/relevant today.

And the group which continued the legacy of Hellenist/Roman thought, the Islamic world, did in fact continue and, IMHO, vastly augment the level of empirical thought; for example, it's widely believed that the inventor of the Scientific Method was an Arab scientist, Alhazen. Even though Europe saw a drop in learning due to the collapse of the unsustainable centralized Roman economy and the resulting wars and deurbanization, all that occurred was that its knowledge was passed onto new civilizations large/wealthy/secure enough to support science/math/philosophy. (Specifically, Persia and Byzantium, and later the Caliphates.)

Comment author: Vladimir_M 12 April 2011 01:38:54AM *  10 points [-]

Similarly, in WW2, Japan did quite well for itself, and if a handful of major battles had gone slightly differently, the outcome would have been very different.

You are wrong about this. Even if every single American ship magically got sunk at some point in 1941 or 1942, and if every single American soldier stationed outside of the U.S. mainland magically dropped dead at the same time, it would only have taken a few years longer for the U.S. to defeat Japan. Once the American war production was up and running, the U.S. could outproduce Japan by at least two orders of magnitude and soon overwhelm the Japanese navy and air force no matter what their initial advantage. Starting the war was a suicidal move for the Japanese leadership, and even the sane people among them knew it.

I think you're also overestimating the chances Germans had, and underestimating how well Hitler did given the circumstances, though that's more controversial. Also, Germany lost the technological race in pretty much all theaters of war where technology was decisive -- submarine warfare, cryptography, radars and air defense, and nuclear weapons all come to mind. The only exceptions I can think of are jet aircraft and long-range missiles, but even in these areas, they produced mostly flashy toys rather than strategically relevant weapons.

Overall, I think it's clear that the insanity of the regimes running Germany and Japan hampered their technological progress and also led to their suicidal aggressiveness. At the same time, the relative sanity of the regimes running the U.K. and the U.S. did result in significant economic and technological advantages, as well as somewhat saner strategy. Of course, that need not have been decisive -- after all, the biggest winner of the war was Stalin, who was definitely closer to the defeated sides in all the relevant respects, if not altogether in the same league with them.

Comment author: JoshuaZ 12 April 2011 01:45:47AM 4 points [-]

Ok. So all my World War 2 examples have now decisively been shown to be wrong. I don't have any other modern examples to give that go in this direction. All other modern examples go pretty strongly in the other direction. I withdraw the claim wholesale and am updating to accept the claim for post-enlightenment human societies.

Comment author: curi 09 April 2011 10:50:37PM *  -1 points [-]

This sort of claim seems to run into historical problems

Athens lost to Sparta. But it was a close call. Sparta excelled at nothing but war. Athens spread its efforts around and was good at everything. And it was close! That's how much more powerful Athens was: it did tons of other stuff and nearly won the war anyway.

If Athens had had an extra 100 years to improve, it would have gotten a big lead on Sparta. Long term, that kind of society wins.

A lot of major expansionist violent empires have done quite well for themselves.

Not long term.

Or to use a different, but potentially more controversial example, in North America and in Australia, the European colonizers won outright, despite having extremely violent, expansionist policies.

They were up against closed societies that were much worse than they themselves were in pretty much every respect including morally. The natives were not non-violent philosophers.

Comment author: XiXiDu 10 April 2011 11:17:27AM *  1 point [-]

Basically, you can't predict the moves of a good chess AI, otherwise you'd be at least that good chess player yourself, and yet you know it's going to win the game.

I just realized you tried to make a different point here. That one can prove the behavior of computationally unpredictable systems. Reminds me of the following:

6) Disproving mathematical proofs within the terms of their own definitions. This falls within the realm of self-contradiction. No transapient has disproved the Pythagorean Theorem for Euclidean spaces as defined by classical Greek mathematicians, for instance, or disproved Godel's Incompleteness Theorem on its own terms. (Encyclopedia Galactica - Limits of Transapient Power)

Sounds reasonable, but I have no idea to what extent one could prove "friendliness" while retaining a degree of freedom that would allow a seed AI to recursively self-improve towards superhuman intelligence quickly. Intuitively it seems to me that the level of abstraction of a definition of "friendliness" will be somehow correlated with the capability of an AGI.

Comment author: timtyler 15 April 2011 06:29:56PM *  0 points [-]

Possibly an example of the halo effect: the good guys are good, the progress is good, so the good guys will make faster progress than the bad guys.

This is surely a real effect. The government is usually stronger than the mafia. The army is stronger than the terrorists. The cops usually beat the robbers, etc.

Comment author: Perplexed 10 April 2011 07:51:56PM *  5 points [-]

Thanks for posting this. I would definitely enjoy seeing a debate between Deutsch and Yudkowsky.

The part that dealt with ethics was incredibly naive. About 47 minutes in, for example, he is counseling us not to fear ET, because ET's morality will inevitably be superior to our own. And the slogan: "All evils are due to lack of knowledge". Why does this kind of thing remind me of George W. Bush?

But I agreed with some parts of his argument for the superiority of a Popperian approach over a Bayesian one when 'unknown unknowns' regarding the growth of knowledge are involved. For example, at 42:30, when he quotes Popper advising us to drop the hopeless search for an inerrant source of knowledge, and to instead search for a fairly reliable method of eliminating error once it has become established. Maybe a good idea.

I have mixed feelings, though, about his advocacy of optimism. He argues that Malthus's pessimistic predictions failed simply because Malthus had no way of foreseeing the positive effects of the growth of knowledge. But by the same token, optimistic predictions of a positive future for mankind are also liable to fail because they attempt to predict that the growth of knowledge will include specific breakthroughs.

Comment author: Eugine_Nier 10 April 2011 10:54:24PM 6 points [-]

And the slogan: "All evils are due to lack of knowledge". Why does this kind of thing remind me of George W. Bush?

Well, it reminds me of Plato, which is much more damning.

Comment author: curi 10 April 2011 08:09:20PM 1 point [-]

And the slogan: "All evils are due to lack of knowledge".

You should read his book, The Beginning of Infinity. It's not a slogan but a philosophical position which he explains at length. Learn why he thinks it. He's not an idiot.

Since you partly agree with him, and have mixed feelings, I think it'd be worth looking into for you, so I wanted to let you know it's much more than a slogan! And "optimism" to DD does not mean "predicting a positive future", it's not about wearing rose colored glasses.

Comment author: timtyler 15 April 2011 03:41:59PM *  0 points [-]

The part that dealt with ethics was incredibly naive. About 47 minutes in, for example, he is counseling us not to fear ET, because ET's morality will inevitably be superior to our own.

This seems pretty daft to me too. It looks like a kind of moral realism - according to which being eaten by aliens might well be "good" - since it leads to more "goodness".

Comment author: Perplexed 15 April 2011 03:54:43PM 1 point [-]

Right. But moral realism is not necessarily daft. It only becomes so when you add in universalism and a stricture against self-indexicality.

Comment author: timtyler 15 April 2011 06:21:58PM *  1 point [-]

I have some sympathies for the idea that convergent evolution is likely to eventually result in a universal morality - rather than, say, pebble sorters and baby eaters. If true, that might be considered to be a kind of moral realism.

Comment author: Perplexed 15 April 2011 06:49:59PM 2 points [-]

It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later. Plus you probably need some kind of argument that the limit of the convergence is pretty much independent of the starting point.

My own viewpoint on morality is closely related to this. I think that what one morally ought to do now is the same as what one prudentially and pragmatically ought to do in an ideal world in which all agents are rational, communication between agents is cheap, there are few, if any, secrets, and lifetimes are long. In such a society, a strongly enforced "social contract" will come into existence, which will have many of the characteristics of a universal morality. At least within a species. And to some degree, between species.

Comment author: timtyler 16 April 2011 01:14:41PM *  1 point [-]

It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later.

...or if you think what we ought to be doing is helping to create the thing with the universal moral values.

I'm not really convinced that the convergence will be complete, though. If two advanced alien races meet, they probably won't agree on all their values - perhaps due to moral spontaneous symmetry breaking - and small differences can become important.

Comment author: Vladimir_Nesov 10 April 2011 01:51:32AM *  5 points [-]

It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky.

To clarify what I originally misinterpreted on reading this description: according to this page, Yudkowsky was giving a talk on 25 Jan 2011, while Deutsch on 10 Mar 2011, so "previous speaker" doesn't refer to giving talks in succession.

Comment author: Vladimir_Nesov 09 April 2011 11:45:22PM 13 points [-]

I think this talk motivates a Yudkowsky-Deutsch debate on bloggingheads.

Comment author: alexflint 10 April 2011 05:17:11PM 1 point [-]

Oh boy oh boy oh boy that would rock my socks

Comment author: Manfred 10 April 2011 02:00:55AM *  10 points [-]

I stopped listening fairly quickly, after determining that it was rubbish from a Bayesian perspective. Specifically, I stopped listening when he says that the future of humanity is different from Russian roulette because the future can't be modeled by probability. This is the belief that there is a basic "probability-ness" that dice have and gun chambers have but people don't, and that things with "probability-ness" can be described by probability, but things without "probability-ness" can't be. But of course, we're all fermions and bosons in the end - there is no such thing as "probability-ness," probability is simply what happens when you reason from incomplete information.

Comment author: NancyLebovitz 10 April 2011 09:39:45AM 5 points [-]

Deutsch is arguing (and I think correctly) that there's a difference between knowing the full range of possibilities in a system and not knowing it.

Comment author: Manfred 10 April 2011 04:05:23PM *  4 points [-]

That seems pretty reasonable. "What will the future be like" is a pretty undetermined question.

However, he was applying this same logic to "will civilization be destroyed," where "destroyed" and "not destroyed" are a pretty complete range of possibilities.

Unless maybe he meant that you have to know every possible way civilization could be destroyed in order to estimate a probability, which seems like searching for a reason that civilization doesn't have probability-ness.

Comment author: NancyLebovitz 10 April 2011 09:37:36AM 2 points [-]

My first reaction to his unlimited progress riff was "every specific thing I care about will be gone". The answer is presumably that there will be more things to care about. However, that initial reaction is probably common enough that it might be worth working on replies to people who are less inclined to abstraction than I am.

I'll take the edge off his optimism somewhat by pointing out that individuals and cultures can be rolled over by change, even if the whole system is becoming more capable, and we care about individuals and cultures (especially if they're us or ours) as well as the whole system. Taking European diseases to the New World happened by accident.

Still, the pursuit of knowledge and competence may well be the least bad strategy the vast majority of the time (rather than a guarantee of things becoming more wonderful for what we personally care about), and I'm intrigued by the idea of explicitly intending to increase the returns for cooperation.

Comment author: Vladimir_Nesov 09 April 2011 07:29:57PM *  1 point [-]

From the very beginning of the talk:

I don't have to persuade you that, for instance, life is better than death; and I don't have to explain exactly why knowledge is a good thing, and that the alleviation of suffering is good, and communication, and travel, and space exploration, and ever-faster computers, and excellence in art and design, all good.

One of these things is not like the others.

Comment author: lukeprog 09 April 2011 07:50:33PM 3 points [-]

Ever-faster computers jumped out at me when I first heard that sentence.

Comment author: Matt_Simpson 11 April 2011 12:50:28AM 0 points [-]

me too. Instrumental vs terminal values.

Comment author: JoshuaZ 09 April 2011 09:14:35PM 0 points [-]

Really? The comment about art and design jumped out at me.

Comment author: curi 09 April 2011 09:17:00PM *  1 point [-]

FYI DD's talk on why flowers are beautiful:

http://193.189.74.53/~qubitor/people/david/index.php?path=Video/Why%20Are%20Flowers%20Beautiful

That URL is weird. In case it breaks, it's on youtube in parts:

http://www.youtube.com/watch?v=56o2n8sVvM8

Comment author: curi 09 April 2011 07:31:25PM 5 points [-]

Which?

Comment author: Larks 09 April 2011 11:18:52PM 1 point [-]

It's slow loading for me due to a slow internet connection, but if the questions at the end are included, I was the one who asked about insurance companies.

I don't think his response was very satisfactory, though I have a better version of my question.

Suppose I give you some odds p:q and force you to bet on some proposition X (say, Democrats win in 2012) being true, but I let you pick which side of the bet you take; a payoff of p if X is true, or a payoff of q if X is false. For some (unique) value of p/q, you'll switch which side you want to take.

It seems this can force you to assign probabilities to arbitrary hypotheses.
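This switch-point scheme can be sketched in a few lines of Python. (A toy illustration: the function names are mine, and it assumes you choose bets by linear expected payoff, which is what makes the switch point reveal your credence.)

```python
def preferred_side(belief, p, q):
    """Pick the bet with the higher expected payoff, given credence `belief` in X.
    Taking the 'X' side pays p if X is true; the 'not-X' side pays q if X is false."""
    return "X" if belief * p > (1 - belief) * q else "not-X"

def implied_probability(p, q):
    """The switch point: the credence at which both sides have equal expected
    payoff, i.e. belief * p == (1 - belief) * q, which gives belief = q / (p + q)."""
    return q / (p + q)
```

Offering odds of 3:1, say, makes anyone with credence above 1/4 take the 'X' side and anyone below it take 'not-X'; sweeping the odds until the chooser switches sides pins down their probability.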

Comment author: Eugine_Nier 10 April 2011 04:31:30PM 2 points [-]

Suppose I give you some odds p:q and force you to bet on some proposition X (say, Democrats win in 2012) being true, but I let you pick which side of the bet you take; a payoff of p if X is true, or a payoff of q if X is false. For some (unique) value of p/q, you'll switch which side you want to take.

It seems this can force you to assign probabilities to arbitrary hypotheses.

So, how precise should these probabilities be? And why can't I apply this argument to force the probabilities to have arbitrarily high precision?

Comment author: Larks 10 April 2011 06:57:27PM 1 point [-]

Not that I can think of, besides memory/speed constraints, and how much updating you can have done with the evidence you've received.

Comment author: Eugine_Nier 10 April 2011 07:31:53PM 2 points [-]

and how much updating you can have done with the evidence you've received.

Why can't it happen that you have so little and/or such weak evidence, that the amount of precision you should have is none at all?

Comment author: Manfred 10 April 2011 08:01:44PM *  0 points [-]

Imagine that you had to give a probability density to each probability estimate you could make of Obama winning in 2012 being the correct one. You'd end up with something looking like a bell curve over probabilities, centered somewhere around "Obama has a 70% (or something) chance of winning." Then to make a decision based on that distribution using normal decision theory, you would average over the possible results of an action, weighted by the probability. But this is equivalent to taking the mean of your bell curve - no matter how wide or narrow the bell curve, all that matters to your (standard decision theory) decision is the location of the mean.

Less evidence is like a wider bell curve, more evidence like a sharper one. But as long as the mean stays the same, the average result of each decision stays the same, so your decision will also be the same.

So there are two kinds of precision here: the precision of the mean probability given your current (incomplete) information, which can be arbitrarily high, and the precision with which you estimate the true answer, which is the width of the bell curve. So when you say "precision," there is a possible confusion. Your first post was about the "how precise can these probabilities be," which was the first (and boring, since it's so high) kind of precision, while this post seems to be talking about the second kind, the kind that is more useful because it reflects how much evidence you have.

Comment author: Eugine_Nier 10 April 2011 08:48:20PM 2 points [-]

So there are two kinds of precision here: the precision of the mean probability given your current (incomplete) information, which can be arbitrarily high, and the precision with which you estimate the true answer, which is the width of the bell curve.

I'm not sure what you mean by the "true answer". After all, in some sense the true probability is either 0 or 1; it's just that we don't know which.

Comment author: Manfred 10 April 2011 09:09:11PM 1 point [-]

That's a good point. So I guess the second kind of precision doesn't make sense in this case (like it would if the bell curve were over, say, the number of beans in a jar), and "precision" should only refer to "precision with which we can extract an average probability from our information," which is very high.

Comment author: [deleted] 11 April 2011 02:46:14PM 0 points [-]

Imagine that you had to give a probability density to each probability estimate you could make of Obama winning in 2012 being the correct one. You'd end up with something looking like a bell curve over probabilities

Bell curves prefer to live on unbounded intervals! It would be less jarring (and less convenient for you?) if he ended up with something looking like a uniform distribution over probabilities.

Comment author: Manfred 11 April 2011 06:07:57PM 0 points [-]

It's equally convenient, since the mean doesn't care about the shape. I don't think it's particularly jarring - just imagine it going to 0 at the edges.

The reason you'll probably end up with something like a bell curve is a practical one - the central limit theorem. For complicated problems, you very often get what looks something like a bell curve. Hardly watertight, but I'd bet decent amounts of money that it is true in this case, so why not use it to add a little color to the description?
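The central-limit-theorem effect Manfred invokes is easy to demonstrate. (A toy sketch, standing in for "many small independent pieces of evidence"; the sample sizes are arbitrary.)

```python
import random

random.seed(0)

# Each overall estimate aggregates many small, roughly independent contributions;
# by the central limit theorem their sum is approximately normally distributed.
estimates = [sum(random.uniform(-1.0, 1.0) for _ in range(50))
             for _ in range(20_000)]

mean = sum(estimates) / len(estimates)
sd = (sum((x - mean) ** 2 for x in estimates) / len(estimates)) ** 0.5
# For a normal distribution, about 68.3% of the mass lies within one
# standard deviation of the mean:
within_one_sd = sum(abs(x - mean) <= sd for x in estimates) / len(estimates)
```

Even though each contribution is flat (uniform), the aggregate matches the normal benchmark closely, which is why "something like a bell curve" is a safe guess for complicated problems.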

Comment author: Larks 10 April 2011 08:03:45PM 0 points [-]

Well, your prior gives you a unique value, and Bayes' theorem is a function, so it gives you a unique value for every input.

Comment author: Eugine_Nier 10 April 2011 08:50:33PM 2 points [-]

Well, your prior gives you a unique value,

So the claim is that you have arbitrary precision priors. What are they, and where are they stored?

Comment author: Larks 10 April 2011 09:38:21PM 0 points [-]

Sorry, I haven't been very clear. A perfect Bayesian agent would have a unique real number to represent its level of belief in every hypothesis.

The betting-offer system I described above can force people (and force any hypothetical agent) to assign unique values.

Of course, an actual person won't be capable of this level of precision or coherence.

Comment author: Eugine_Nier 10 April 2011 08:17:05PM 1 point [-]

Yes, but actually computing that function is computationally intractable in all but the simplest examples.

Comment author: timtyler 15 April 2011 02:50:51PM *  0 points [-]

Deutsch gives Malthus as an example of a failed pessimistic prediction - at 23:00. However, it still looks as though Malthus is likely to have been correct. Populations increase exponentially, while resources expand at most in a polynomial fashion - due to the light cone. Deutsch discusses this point 38 minutes in, claiming relativistic time dilation changes this conclusion, which I don't think it really does: you still wind up with most organisms being resource-limited, just as Malthus described.
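The light-cone argument can be sketched numerically. (A toy illustration only: the 1% growth rate is arbitrary and this is not a demographic model, just the standard fact that any exponential eventually overtakes any polynomial.)

```python
def population(t, rate=1.01):
    """Exponential population growth: 1% per period."""
    return rate ** t

def resources(t):
    """Resources reachable inside a light cone grow at most like the volume
    swept out, i.e. polynomially (here cubically) in time."""
    return (t + 1) ** 3

# However slow the exponential, it eventually crosses the polynomial:
crossover = next(t for t in range(1, 10_000) if population(t) > resources(t))
```

For a long stretch resources stay comfortably ahead, which is the Malthus-was-refuted appearance; but past the crossover the exponential wins permanently, which is timtyler's point about most organisms ending up resource-limited.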

Comment author: timtyler 15 April 2011 02:21:50PM *  0 points [-]

Martin Rees is misrepresented 4:04 in. What Rees actually said was:

'the odds are no better than 50-50 that our present civilisation on Earth will survive to the end of the present century without a serious setback'

...whatever a "serious setback" is supposed to mean.

Comment author: vallinder 14 July 2011 05:56:07PM 1 point [-]

Do you have a reference for that? My copy of Our Final Hour contains the same sentence minus "without a serious setback".

Comment author: timtyler 14 July 2011 10:35:07PM *  1 point [-]

Our Final Century, page 8 line 4.

It seems as though Rees - rather confusingly - said different things on the topic in Our Final Century and Our Final Hour.

Comment author: vallinder 18 July 2011 10:52:06AM 1 point [-]

Ah, that's interesting. Thanks for clarifying.

Comment author: JGWeissman 09 April 2011 07:17:00PM 0 points [-]

How was Curi able to post this without having 20 karma?

Comment author: curi 09 April 2011 07:18:34PM *  3 points [-]

I had 20 karma. I don't anymore. My karma has had a lot of fluctuations.

edit: see. back to 21 now.