You can't make PA complete by adding new axioms via a deterministic algorithm. But what if you used a randomized algorithm? Richard Lipton wrote a post about this idea: generate a random bitstring S, then add to PA the axiom that the Kolmogorov complexity of S is above some high threshold. That axiom is true with probability close to 1, yet for long enough S it's always unprovable in PA. Clearly this gives you a stronger theory, but how much stronger? In particular, is there any hope at all that you can approach completeness in some suitable probabilistic sense?
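Here's a minimal Python sketch of that step, just to make the counting argument concrete (the function name and parameters are mine; Kolmogorov complexity is uncomputable, so the code can only state the axiom, never verify it):

```python
import secrets

def random_axiom(n_bits: int = 10_000, slack: int = 100) -> str:
    """Lipton-style step: pick a uniformly random n-bit string S and adopt
    the axiom K(S) >= n - slack.

    Fewer than 2**(n - slack) strings have complexity below n - slack, so a
    random S violates the axiom with probability under 2**(-slack) -- yet
    by Chaitin's incompleteness theorem PA can't prove it once the bound
    exceeds a fixed constant.
    """
    s = format(secrets.randbits(n_bits), f"0{n_bits}b")   # random bitstring S
    return f"K({s[:16]}...) >= {n_bits - slack}"          # the new axiom, as a string

print(random_axiom())  # e.g. "K(0110101110010001...) >= 9900"
```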
Nah, don't get your hopes up. In the comments to Lipton's post, Alexander Shen (amusingly, one of my former schoolteachers) and Harvey Friedman show that most true statements remain unreachable by this procedure. Leonid Levin proved a weaker but more general result, roughly saying that randomized algorithms cannot complete PA with positive probability.
So the idea doesn't seem to work. But it was a very nice try.
Here's an example that's similar to AMD in this respect, but a lot simpler. We make two copies of you and ask each to output a bit. If the bits differ, you win a million dollars; otherwise nothing. Any deterministic algorithm loses, because both copies run the same code and output the same bit, but a randomized one can win with probability 50%. Does it overthrow Eliezer's principle? What exactly is Eliezer's principle? I don't know :-)
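A quick simulation of the game, as a sketch (assuming each copy gets its own independent coin flips, which is the whole point):

```python
import random

def play(strategy) -> bool:
    """One round: both copies run the same strategy; you win iff the bits differ."""
    return strategy() != strategy()   # second call stands in for the second copy

deterministic = lambda: 0                   # any fixed rule: both copies agree, always lose
randomized = lambda: random.getrandbits(1)  # independent fair coin per copy

trials = 100_000
print(sum(play(deterministic) for _ in range(trials)) / trials)  # 0.0
print(sum(play(randomized) for _ in range(trials)) / trials)     # about 0.5
```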
I wasn't trying to criticize it -- I think it's a great heuristic that touches on a very fundamental, non-obvious aspect of reality. I just want to understand better what kind of exception AMD and your game are.
In cases where you don't want to "improve" something for someone, randomness is, in a sense, good. For example, when hiding messages from an adversary, adding randomness helps -- but only because it's bad for someone else. So this is still consistent with the anti-randomness heuristic.
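The one-time pad is the textbook case of this (a sketch, not anyone's particular proposal): the random key makes the ciphertext useless to the eavesdropper while costing the legitimate receiver nothing, since they hold the key.

```python
import secrets

def one_time_pad(message: bytes) -> tuple[bytes, bytes]:
    """XOR the message with a uniformly random key of the same length.
    The ciphertext alone is uniformly distributed whatever the message is,
    so the randomness only 'hurts' the adversary who lacks the key."""
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

ct, key = one_time_pad(b"meet at noon")
print(bytes(c ^ k for c, k in zip(ct, key)))  # b'meet at noon' -- the receiver decrypts with the key
```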
I once phrased it as: "Randomness is like poison: yes, it can help you, but only if someone else takes it."