Destroying the robot greatly diminishes its future ability to shoot, but it would also greatly diminish its future ability to see blue. The robot doesn't prefer 'shooting blue' to 'not shooting blue'; it prefers 'seeing blue and shooting' to 'seeing blue and not shooting'.
So the original poster was right.
Edit: I'm wrong, see below
You are assuming that the Turing machine needs to halt. In a universe much simpler than ours (?), namely the one where a single Turing machine runs, if you subscribe to Pattern Identity Theory, there's a simple way to host an infinite hierarchy of increasing intelligences: Simply run all Turing machines in parallel. (Using diagonalization from Hilbert's Hotel to give everyone infinite steps to work with.) The machine won't ever halt, but it doesn't need to. If an AGI in our universe can figure out a way to circumvent the heat death, it could do something similar.
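The "run all Turing machines in parallel" trick is standard dovetailing. A minimal sketch in Python, with generators standing in for the enumerated machines (the names and the `dovetail` scheduler are illustrative, not any particular formalization):

```python
from itertools import count

def machine(i):
    """Stand-in for the i-th Turing machine: a generator that yields
    (machine index, step number) each time it is advanced one step.
    A real construction would enumerate actual programs here."""
    for step in count():
        yield (i, step)

def dovetail(rounds):
    """Dovetailing schedule: in round n, admit machine n-1 and advance
    every admitted machine by one step. In the limit, every machine
    receives infinitely many steps, even though the schedule as a
    whole never halts -- which is the point of the argument above."""
    machines = []
    trace = []
    for n in range(rounds):
        machines.append(machine(n))   # admit one new machine per round
        for m in machines:
            trace.append(next(m))     # advance each admitted machine
    return trace

# First three rounds of the schedule:
# round 1: (0,0)
# round 2: (0,1), (1,0)
# round 3: (0,2), (1,1), (2,0)
```

Each machine's step count grows without bound, so no machine is ever starved; that is the diagonal "Hilbert's Hotel" bookkeeping the comment alludes to.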