Stuart_Armstrong comments on The Octopus, the Dolphin and Us: a Great Filter tale - Less Wrong
To be absolutely certain that you obtain that sandwich, that it's a genuine sandwich, that no-one steals it from you, that you can always make a replacement if this one goes bad or quantum tunnels, etc., you need to grab the universe.
Grabbing the universe adds only a tiny, tiny bit of extra expected utility, but since there is no utility drawback to doing so, AIs will often be motivated to do it. Bounded utility doesn't save you (though bounded satisficing would, but that's not stable: http://lesswrong.com/lw/854/satisficers_want_to_become_maximisers/ ).
OK. Replace "efficient" with "quick". Getting me a sandwich within a short amount of time precludes being able to take over the universe.
That seems safer (and is one of the methods we recommended in our paper on Oracles). There are ways to make this misbehave as well, but they're more complex and less intuitive.
E.g., the easiest way this would go wrong is if the AI is still around after the deadline, and now spends its effort taking over the universe in order to probe basic physics and maybe discover time travel, so it can go back and accomplish its function.