Stuart_Armstrong comments on The Octopus, the Dolphin and Us: a Great Filter tale - Less Wrong

48 Post author: Stuart_Armstrong 03 September 2014 09:37PM




Comment author: Stuart_Armstrong 05 September 2014 11:38:17AM 7 points

To be absolutely certain that you obtain that sandwich, that it's a genuine sandwich, that no one steals it from you, that you can always make a replacement if this one goes bad or quantum tunnels, etc., you need to grab the universe.

Grabbing the universe only adds a tiny, tiny bit of extra expected utility, but since there is no utility drawback to doing so, AIs will often be motivated to do it. Bounded utility doesn't save you (bounded satisficing would, but that's not stable: http://lesswrong.com/lw/854/satisficers_want_to_become_maximisers/ ).
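The argument can be made concrete with a minimal numeric sketch. The probabilities and action names below are purely illustrative assumptions, not anything from the original discussion: even with utility bounded at 1, an expected-utility maximizer prefers the action with the marginally higher success probability, while a satisficer accepts anything clearing its threshold.

```python
# Illustrative sketch: bounded utility (capped at 1) does not stop a
# maximizer from preferring the action with a tiny edge in success
# probability. All numbers here are made-up assumptions.

def expected_utility(p_success, utility_of_sandwich=1.0):
    # Bounded utility: at most 1 for getting the sandwich, 0 otherwise.
    return p_success * utility_of_sandwich

actions = {
    "just make the sandwich": 0.999999,      # tiny chance of theft, decay, etc.
    "grab the universe first": 0.999999999,  # removes almost every failure mode
}

# A maximizer picks the argmax of expected utility: the universe-grab wins,
# even though the gain is minuscule, because there is no utility cost.
maximizer_choice = max(actions, key=lambda a: expected_utility(actions[a]))

# A satisficer accepts any action clearing a threshold: both qualify,
# so it has no reason to grab the universe.
threshold = 0.99
satisficer_options = [a for a in actions
                      if expected_utility(actions[a]) >= threshold]

print(maximizer_choice)    # "grab the universe first"
print(satisficer_options)  # both actions clear the threshold
```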

Comment author: pinyaka 05 September 2014 01:22:28PM 3 points [-]

OK. Replace "efficient" with "quick". Getting me a sandwich within a short amount of time precludes taking over the universe.

Comment author: Stuart_Armstrong 05 September 2014 02:38:50PM *  1 point [-]

That seems safer (and is one of the methods we recommended in our paper on Oracles). There are ways to make this misbehave as well, but they're more complex and less intuitive.

E.g., the easiest way this could go wrong is if the AI is still around after the deadline, and now spends its effort taking over the universe in order to probe basic physics and maybe discover time travel, so it can go back and accomplish its function.