You can't make PA complete by adding new axioms with a deterministic algorithm. But what if you used a randomized algorithm? Richard Lipton wrote a post about this idea: generate a random bitstring S, then add to PA the axiom that the Kolmogorov complexity of S is at least some high threshold. That axiom is true with very high probability, but for long enough S it is unprovable in PA even when true (Chaitin's incompleteness theorem). Clearly this gives you a stronger theory, but how much stronger? In particular, is there any hope at all that you can approach completeness in some suitable probabilistic sense?
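Here is a minimal sketch of that procedure in Python, with illustrative parameters n and c of my own choosing; it just emits the new axiom as a string, since of course nothing can actually compute K(S):

```python
import secrets

def random_complexity_axiom(n: int = 1000, c: int = 100) -> str:
    """Sketch of Lipton's randomized-axiom idea (parameters are illustrative).

    Sample a uniformly random n-bit string S and return, as text, the
    candidate axiom "K(S) >= n - c".  Fewer than 2^(n-c) strings of
    length n have Kolmogorov complexity below n - c, so the axiom is
    false with probability under 2^-c -- yet for a large enough
    threshold it is unprovable in PA even when true.
    """
    s = ''.join('1' if secrets.randbits(1) else '0' for _ in range(n))
    return f"K({s}) >= {n - c}"

# Example: an axiom that is true with probability greater than 1 - 2^-100.
axiom = random_complexity_axiom()
print(axiom[:40], "...")
```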
Nah, don't get your hopes up. In the comments to Lipton's post, Alexander Shen (amusingly, one of my former schoolteachers) and Harvey Friedman show that most true statements remain unreachable by this procedure. Leonid Levin proved a weaker but more general result, roughly saying that randomized algorithms cannot complete PA with positive probability.
So the idea doesn't seem to work. But it was a very nice try.
No, but sometimes you can use randomness as a loophole when a theorem doesn't apply to randomized algorithms. And when that does or doesn't help is often unclear. That's why, for example, it is a deep open problem in computational complexity whether randomness gives you anything new. There are two natural randomized generalizations of polynomial time. One is BPP, which is comparatively small and widely conjectured to equal P. But with only a tiny tweak of the definition, requiring a bare majority instead of a bounded error gap, one gets the class PP, which is known to be large enough to contain all of NP. So if P != NP, then this is a context where randomness really does help. (Although note that some people see this as an argument for believing that P=NP, since their intuition strongly says that randomness shouldn't help.)
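For concreteness, here is the "tiny tweak" spelled out, using the standard textbook definitions (amsmath assumed; M ranges over polynomial-time randomized machines):

```latex
% The only change from BPP to PP is relaxing the bounded error gap
% (accept with prob >= 2/3 vs <= 1/3) to a bare majority.
\begin{align*}
L \in \mathsf{BPP} &\iff \exists M:\;
  x \in L \Rightarrow \Pr[M(x)=1] \ge \tfrac{2}{3}, \quad
  x \notin L \Rightarrow \Pr[M(x)=1] \le \tfrac{1}{3} \\
L \in \mathsf{PP} &\iff \exists M:\;
  x \in L \Rightarrow \Pr[M(x)=1] > \tfrac{1}{2}, \quad
  x \notin L \Rightarrow \Pr[M(x)=1] \le \tfrac{1}{2}
\end{align*}
```

With the bounded gap you can amplify correctness by majority voting over repeated runs, which is why BPP feels "small"; with a bare majority the gap can be exponentially thin, and that is what lets PP swallow all of NP.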