"To compute a consistent universe with a low-entropy terminal condition and high-entropy initial condition, you have to simulate lots and lots of universes, then throw away all but a tiny fraction of them that end up with low entropy at the end. With a low-entropy initial condition, you can compute it out locally, without any global checks. So I am not yet ready to throw out the arrowheads on my arrows."
Here's the problem with this argument. Your simulations are occurring as a sub-history of a universe where the second law of thermodynamics already holds. A simulation of a universe with increasing entropy will be a sub-history where entropy increases, and will therefore be more likely to occur than a simulation of a universe with decreasing entropy (i.e. a subhistory where entropy decreases.)
That is, unless your simulation finds a way to dump entropy into the environment. The usual way to do this is by erasing information cf: http://en.wikipedia.org/wiki/Von_Neumann-Landauer_limit . Throwing out the simulations where entropy increased would be one example. Likewise, simulating non-information preserving rules (e.g. Conway's game of life) will also allow entropy to decrease within the simulation -- for example, most random fields of a reasonably small size will settle into a pattern of stable oscillators. This can happen because it is perfectly possible for two ancestor states to go to the same descendent state within the rules of Conway's game, and when this happens, entropy must leak into the environment according to the second law.
The symmetry corresponding to conservation of energy is time translation, not time reversal. That said, I have other reasons to be doubtful of Yudkowsky's thesis here. In particular, I think it really does come down to entropy.
"To compute a consistent universe with a low-entropy terminal condition and high-entropy initial condition, you have to simulate lots and lots of universes, then throw away all but a tiny fraction of them that end up with low entropy at the end. With a low-entropy initial condition, you can compute it out locally, without any global checks. So I am not yet ready to throw out the arrowheads on my arrows."
Here's the problem with this argument. Your simulations occur as a sub-history of a universe where the second law of thermodynamics already holds. A simulation of a universe with increasing entropy is a sub-history in which entropy increases, and is therefore more likely to occur than a simulation of a universe with decreasing entropy (i.e. a sub-history in which entropy decreases).
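To make that statistical point concrete, here is a minimal sketch (my own toy example, not anything from the original comment): an exactly reversible, deterministic dynamics (non-interacting particles drifting around a ring) started from a low-entropy configuration. The model, bin count, and step sizes are arbitrary assumptions; the point is only that a coarse-grained entropy rises toward its maximum even though the microscopic rule is invertible, which is the sense in which entropy-increasing histories are the overwhelmingly likely ones.

```python
import math
import random

# Toy "universe": particles move ballistically on a ring of circumference 1.
# The update x -> x + v*dt (mod 1) is deterministic and exactly invertible,
# yet the coarse-grained entropy of the bin occupancies climbs from zero
# toward its maximum of log2(N_BINS) bits as the low-entropy start spreads out.
N_PARTICLES = 10_000
N_BINS = 20
DT = 0.01
STEPS = 60

def coarse_entropy(positions):
    """Shannon entropy (in bits) of the coarse-grained bin occupancies."""
    counts = [0] * N_BINS
    for x in positions:
        counts[int(x * N_BINS) % N_BINS] += 1
    h = 0.0
    for c in counts:
        if c:
            p = c / len(positions)
            h -= p * math.log2(p)
    return h

random.seed(0)
# Low-entropy initial condition: every particle packed into 5% of the ring.
positions = [random.uniform(0.0, 0.05) for _ in range(N_PARTICLES)]
velocities = [random.uniform(-1.0, 1.0) for _ in range(N_PARTICLES)]

for step in range(STEPS + 1):
    if step % 10 == 0:
        print(f"step {step:3d}: coarse-grained entropy = {coarse_entropy(positions):.2f} bits")
    positions = [(x + v * DT) % 1.0 for x, v in zip(positions, velocities)]
```

Reversing every velocity at the end would march the coarse-grained entropy back down to zero, but that is exactly the kind of finely tuned terminal condition the quoted passage says you cannot arrange without global checks.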
All of this holds, however, only as long as your simulation does not find a way to dump entropy into the environment. The usual way to do that is by erasing information (cf. the Landauer limit: http://en.wikipedia.org/wiki/Von_Neumann-Landauer_limit). Throwing out the simulations in which entropy increased would be one example of such erasure. Likewise, simulating non-information-preserving rules (e.g. Conway's Game of Life) will also allow entropy to decrease within the simulation: most random fields of reasonably small size, for example, settle into a pattern of stable oscillators. This can happen because it is perfectly possible for two ancestor states to map to the same descendant state under the rules of Conway's Game of Life, and when that happens, entropy must leak out of the simulation into its environment, in accordance with the second law.
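To see that many-to-one behavior directly, here is a minimal sketch (again my own example; the grid size, ensemble size, and step count are arbitrary choices). It evolves an ensemble of random Life fields on a small wrapped grid and counts how many distinct states survive: because the rule is deterministic but not injective, that count can only stay flat or fall, and every merger of two ancestors into one descendant is an erased bit for which the simulating computer must pay at least k_B T ln 2 of heat, per the Landauer limit linked above.

```python
import random

# Conway's Game of Life on a small toroidal grid: a deterministic but
# non-injective ("non-information-preserving") rule. An ensemble of random
# fields therefore collapses onto fewer and fewer distinct states over time.
SIZE = 6         # 6x6 wrapped grid
ENSEMBLE = 2000  # number of random initial fields
STEPS = 12

def step(grid):
    """One synchronous Life update with wraparound boundaries."""
    new = []
    for r in range(SIZE):
        row = []
        for c in range(SIZE):
            live = sum(
                grid[(r + dr) % SIZE][(c + dc) % SIZE]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Birth on exactly 3 live neighbors; survival on 2 or 3.
            row.append(1 if live == 3 or (grid[r][c] and live == 2) else 0)
        new.append(tuple(row))
    return tuple(new)

def random_grid():
    return tuple(tuple(random.randint(0, 1) for _ in range(SIZE)) for _ in range(SIZE))

random.seed(0)
states = [random_grid() for _ in range(ENSEMBLE)]
for t in range(STEPS + 1):
    # Under a deterministic map the number of distinct states never rises,
    # and it falls whenever two ancestors map to the same descendant.
    print(f"after {t:2d} steps: {len(set(states))} distinct states out of {ENSEMBLE}")
    states = [step(s) for s in states]
```

Nothing here depends on the particular grid size: any non-injective rule gives the same monotone collapse of the ensemble, only the rate changes.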