Deutsch is arguing (correctly, I think) that there's a difference between knowing the full range of possibilities in a system and not knowing it.
That seems pretty reasonable: "What will the future be like?" is a badly underdetermined question.
However, he applies the same logic to "will civilization be destroyed?", where "destroyed" and "not destroyed" already exhaust the range of possibilities.
Unless he means that you would have to know every possible way civilization could be destroyed before you could estimate a probability, which seems like searching for a reason that civilization's destruction can't be assigned a probability at all.
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists; their previous speaker was Eliezer Yudkowsky. An audio version and past talks are here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks