Even though your audience is familiar with the singularity, I would still emphasize the potential power of an AGI.
You could say something about the AI spreading over the Internet (a 1,000- to 1,000,000-fold increase in processing power), bootstrapping nanotech and rewriting its own source code, and note that all of this could happen very quickly.
Ask them what they think such an AI would do, and if they show signs of anthropomorphism, explain to them that they are biased (the mind projection fallacy, for example).
You can also ask them what goal they would give such an AI, and show what kind of disaster might follow.
That can lead you to the complexity of wishes (a computer does not have common sense) and the complexity of human values.
I would also choose a nice set of links to lesswrong.org and singinst.com that they could read after the presentation.
It would be great if you could give us some feedback after your presentation: what worked, what they found odd, what their reactions were, and what questions they asked.
It sounds like you just needed something to convince yourself with. TDT isn't special in this regard. With some inventiveness you could also have used quantum mechanics, evolutionary biology, extrapolated volition, or any number of other LW topics :-)
Assuming that the effects of a single day of dieting are very small, the utility of not eating knots today is likely lower than the utility of eating them, for every possible future behavior.
A CDT agent only decides what it does now, so a CDT agent chooses to eat knots. But an EDT, TDT, or UDT agent would choose to diet.
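The contrast above can be sketched numerically. This is only a toy model with made-up numbers: the per-day pleasure of eating, the tiny per-day health effect, and the long-term payoff of consistent dieting are all assumptions chosen to illustrate the structure of the argument, not anyone's actual utilities.

```python
# Toy model: CDT evaluates one day's action with future behavior held fixed,
# while a TDT/UDT-style agent evaluates the whole policy its decision implies.
# All numbers below are illustrative assumptions.
EAT_UTILITY_PER_DAY = 1.0   # pleasure from eating knots on one day
DIET_EFFECT_PER_DAY = 0.001 # health effect of a single day of dieting: tiny
LONG_TERM_BONUS = 500.0     # health payoff of dieting consistently
DAYS = 365                  # horizon over which the decision repeats

def cdt_choice():
    # CDT: one day of dieting has a negligible effect on its own,
    # so eating today dominates.
    return "eat" if EAT_UTILITY_PER_DAY > DIET_EFFECT_PER_DAY else "diet"

def tdt_choice():
    # TDT/UDT: today's decision is the output of the same algorithm
    # that decides every day, so compare the two whole policies.
    always_eat = DAYS * EAT_UTILITY_PER_DAY
    always_diet = LONG_TERM_BONUS
    return "eat" if always_eat > always_diet else "diet"

print(cdt_choice())  # eat
print(tdt_choice())  # diet
```

With these numbers, 365 days of eating (utility 365) loses to the long-term bonus (500), so the agent that reasons over linked decisions diets, while the agent that reasons over today alone eats.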