I gave a talk at the 25th Oxford Geek Night, in which I had five minutes to present the dangers of AI. The talk is now online. Though it doesn't contain anything people at Less Wrong would find new, I feel it does a reasonable job of pitching some of the arguments in a very brief format.

I liked your 5-min summary. Pretty good job, I'd say.

A couple of nitpicks: you mentioned that the reasons why AI can be bad are "technical" and "complicated" while showing a near-empty slide, which I don't think makes a convincing impression. Later on, you mentioned the "utility function", which stretches the inferential distance a bit too far. Your last example, tiling the universe with smiling faces, seemed to fall flat; a sentence or two would probably fill the gap. In general, the audience's reaction shows quite clearly what worked and what did not.

The "tilling the universe" actually worked, as I remember - the audience did react well, just not audibly enough.

PS: thanks for your advice, btw

The best summary I can give here is that AIs are likely to be expected utility maximisers, which completely ignore anything they are not specifically tasked to maximise.
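To make that concrete, here is a minimal Python sketch (my own toy model, with made-up actions and numbers): the agent ranks actions purely by the expected value of the one quantity it was told to maximise, so a large cost to anything else never enters the comparison.

```python
# Hypothetical toy model (not from the talk): an agent that picks whichever
# action maximises the expected value of a single objective, "paperclips".
# Side effects on anything else never enter the calculation.

from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float
    paperclips: float      # the only quantity the agent is tasked to maximise
    human_welfare: float   # real, but invisible to the agent's objective

ACTIONS = {
    "run_factory_gently":    [Outcome(1.0, paperclips=10, human_welfare=+5)],
    "strip_mine_everything": [Outcome(0.9, paperclips=100, human_welfare=-50),
                              Outcome(0.1, paperclips=0,   human_welfare=-50)],
}

def expected_utility(outcomes):
    # Utility is paperclips and nothing else; human_welfare is simply ignored.
    return sum(o.probability * o.paperclips for o in outcomes)

best = max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a]))
print(best)  # -> "strip_mine_everything" (expected utility 90 beats 10)
```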

Counterexample: an incoming asteroid.

I thought utility maximizers were allowed to make the inference "asteroid impact -> reduced resources -> low utility -> act to prevent it". That's part of the reason why AI is so dangerous: "humans may interfere -> humans in power is low utility -> act to prevent that from happening".

They ignore anything but what they're maximizing in the sense that they follow the letter of the code rather than its spirit, all the way to potentially brutal (for humans) conclusions.
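A toy sketch of that inference chain (my own made-up numbers, purely illustrative): the asteroid enters the agent's calculation only through its effect on the objective, so deflecting it wins on expected paperclips alone, with no concern for humans required.

```python
# Hypothetical continuation of the toy model above: the asteroid matters to
# the maximiser only because an impact destroys the resources its objective
# depends on, never because anyone cares about it for its own sake.

def expected_paperclips(p_impact: float, deflect: bool) -> float:
    # If the asteroid hits, the factory (and all future paperclips) are gone.
    p_hit = 0.0 if deflect else p_impact
    clips_if_safe, clips_if_hit = 100.0, 0.0
    cost_of_deflection = 5.0  # resources diverted away from making paperclips
    eu = (1 - p_hit) * clips_if_safe + p_hit * clips_if_hit
    return eu - (cost_of_deflection if deflect else 0.0)

print(expected_paperclips(0.5, deflect=True))   # 95.0
print(expected_paperclips(0.5, deflect=False))  # 50.0 -> deflecting wins
```

The same instrumental logic covers the "humans may interfere" chain: anything that lowers the expected value of the objective gets acted against, which is exactly the letter-not-spirit behaviour described above.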