(Presumably, since the AIs are unpredictable, and technology, Optimism demands that we all live happily ever after.)
No. Deutsch's "principle of optimism" states:
All evils are caused by insufficient knowledge.
Optimism says that they can live happily ever after if they learn how; it does not predict that they will.
Agreed. The "we all live happily ever after" inference does contradict Deutsch's idea, which I noticed shortly after writing this, so I corrected the wording (before seeing your comment) to:
(Or, presumably, so Optimism demands, since the AIs are unpredictable, and technology.)
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?

It is a talk given to the Oxford Transhumanists; their previous speaker was Eliezer Yudkowsky. The audio version and past talks are here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks