@Will: The point is not that you should necessarily run the algorithm that would be optimal if you had unlimited computational resources. The point is that by understanding what that algorithm does, you have a better chance of coming up with a good approximation that you can run in a reasonable amount of time. If you are trying to build a locomotive, it helps to understand Carnot engines.

There are other scenarios in which running the "optimal" algorithm is considered harmful. Consider a nascent sysop vaporising the oceans purely in the course of learning how to deal with humanity (assuming that much compute power is needed, of course).

Probability theory was not designed to tell you how to win; it was designed as a way to arrive at accurate statements about the world, assuming an observer whose computations have no impact on that world. This is a reasonable formalism for science, but it covers only a fraction of what winning in the real world requires, and is sometimes antithetical to it. So if you want your system to win, don't necessarily approximate probability theory to the best of your ability.

Ideally we want a theory of how to turn energy into winning, not a theory of how to turn information and a prior into accurate hypotheses about the world, which is what probability theory gives us and is very good at.

You might pull together a good message just based on the original question: "What advice would you give to Archimedes, and how would you say it into the chronophone?" Yudkowsky's question was designed to make us think non-obvious thoughts, after all.

"Would you be able to ask anything meaningful through the chronophone?"

(My construction might not be quite right. I'm feeling all smug and Gödelian, but it's 1 AM, so I've probably missed something.)