You can't make PA complete by adding new axioms via any deterministic algorithm; that's Goedel's first incompleteness theorem. But what if you used a randomized algorithm? Richard Lipton wrote a post about this idea: generate a long random bitstring S, then add to PA the axiom that the Kolmogorov complexity of S exceeds some threshold. Such an axiom is true with overwhelming probability, yet unprovable in PA once the threshold is large enough (by Chaitin's incompleteness theorem). Clearly this gives you a stronger theory, but how much stronger? In particular, is there any hope at all that you can approach completeness in some suitable probabilistic sense?
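
Here's a minimal sketch of the generation step, with illustrative parameters of my own choosing rather than Lipton's exact ones (K itself is uncomputable, but the procedure never needs to compute it):

```python
import secrets

def random_complexity_axiom(n: int, c: int = 100) -> str:
    """Lipton-style axiom generator (hypothetical names): sample a random
    n-bit string s and assert K(s) > n - c. Fewer than 2^(n-c+1) of the
    2^n strings of length n satisfy K(s) <= n - c, so the axiom is true
    with probability greater than 1 - 2^(1-c)."""
    s = ''.join(secrets.choice('01') for _ in range(n))
    return f'K("{s}") > {n - c}'

# For large enough n, the asserted bound exceeds the constant above which
# PA cannot prove any lower bound on K, so the axiom is never provable.
print(random_complexity_axiom(1000))
```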

Nah, don't get your hopes up. In the comments to Lipton's post, Alexander Shen (amusingly, one of my former schoolteachers) and Harvey Friedman show that most true statements remain unreachable by this procedure. Leonid Levin proved a weaker but more general result, roughly saying that randomized algorithms cannot complete PA with positive probability.

So the idea doesn't seem to work. But it was a very nice try.

[-][anonymous]

I can't understand Harvey Friedman's reasoning. Maybe because I don't know what the notation:

if phi(n*) then |{m: 1 <= m = 2^p-1

and

T + {m: 1 <= m = 2^p-1

is supposed to mean. Could you explain?

Oh fuck. I can't parse it either. Maybe the blog software ate some characters? (I only glanced at Friedman's conclusion before writing my post. Sorry about that, won't happen again.)

You can't trick a theorem by adding randomness. It's not like a person who can be surprised by the direction you went and admit, "oh maybe I hadn't thought of that".

The outlined approach wouldn't contradict Goedel's theorem if it worked. (The ideal outcome would be a randomized method for generating axiom systems that have a high probability of being "true" over the standard integers and a high probability of resolving some concrete math question. Goedel's theorem doesn't prohibit that.) I'm going to interpret your comment charitably, as saying you already understand that, but you think there's some general law of nature that adding randomness can never help. That's not always true, I think.

What was the reconciliation between that principle and the optimal solution to the Absent-Minded Driver (AMD) problem?

Here's an example that's similar to AMD in this respect, but a lot simpler. We make two copies of you and ask each to output a bit. If the bits are different, you win a million dollars, otherwise nothing. Any deterministic algorithm loses, but a randomized one can win with probability 50%. Does it overthrow Eliezer's principle? What exactly is Eliezer's principle? I don't know :-)
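
A quick simulation of that game, as a sketch (function names are mine; each copy of the randomized strategy is assumed to flip its own independent fair coin):

```python
import random

def deterministic_copy() -> int:
    # Both copies run identical code on identical inputs,
    # so they always output the same bit and always lose.
    return 0

def randomized_copy() -> int:
    # Each copy flips its own fair coin.
    return random.randint(0, 1)

def play(strategy) -> bool:
    return strategy() != strategy()  # win iff the two output bits differ

trials = 100_000
print(sum(play(deterministic_copy) for _ in range(trials)) / trials)  # 0.0
print(sum(play(randomized_copy) for _ in range(trials)) / trials)     # ~0.5
```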

I wasn't trying to criticize it -- I think it's a great heuristic and I think it touches on a very fundamental, non-obvious aspect of reality. I just want to better understand what kind of exception AMD and your game are.

In cases where you don't want to "improve" something for someone, randomness is, in a sense, good. When hiding messages from an adversary, for instance, adding randomness helps you -- though only because it's bad for someone else. This is consistent with the anti-randomness heuristic.

I phrased it one time as, "Randomness is like poison: yes, it can help you, but only if someone else takes it."

[-]saturn

It seems like there's some word-trickery going on here. A randomized algorithm is just a deterministic algorithm plus a source of randomness, but the randomness source isn't counted as an "input" but instead "part of the algorithm". AMD is a situation where you want two copies of the same algorithm with the same inputs to have a different output, which is impossible. Using a "randomized" algorithm allows you to sneak around this limitation by giving each copy a possibly different input without calling it an input.
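
To put that point in code (a tiny sketch, with names of my own): promote the coin flip to an explicit argument and the "randomized" strategy becomes an ordinary deterministic function.

```python
import random

def copy_strategy(coin: int) -> int:
    # Deterministic: the output is fully determined by the inputs,
    # once the randomness source is counted as an input.
    return coin

# Two copies with identical explicit inputs necessarily agree and lose:
assert copy_strategy(1) == copy_strategy(1)

# But each copy's own coin is a possibly different, uncounted input,
# which is exactly what lets the pair win half the time:
a = copy_strategy(random.getrandbits(1))
b = copy_strategy(random.getrandbits(1))
print("win" if a != b else "lose")
```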

Good point, but how is that a case of word-trickery, and what would you call this category of exception to the anti-randomness heuristic?

> I wasn't trying to criticize it -- I think it's a great heuristic and I think it touches on a very fundamental, non-obvious aspect of reality. I just want to better understand what kind of exception AMD and your game are.

Here's an old comment thread where I tried to explain how I think about this.

The short version is this: Adding randomness is only useful when you are trying to obfuscate. Otherwise, adding randomness per se is always bad or neutral. However, many cases that are described as "adding randomness" are really about adding some information that turns out to be just what the agent needs, plus some randomness that turns out not to do any harm.

For example, in the AMD problem, the optimal strategy is often described as "exit with probability 1/3". Now, what this really means is the following: The agent is given an input channel C, together with the knowledge that the input from C will belong to a set S such that some known set T contains 1/3 of the elements of S (but no additional information). The agent then implements the deterministic algorithm of exiting iff the input from C belongs to the set T.

People often explain why this agent is able to do better than an agent without a "mixed" strategy by saying, "This agent has a source of randomness." But I think that it's better to say that the agent has an input channel about which it knows something, but not everything. In contrast, the agent employing a "non-mixed" strategy doesn't have this information about the channel. So, naturally, the agent with the "mixed" strategy does better, because it knows more.
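
A rough sketch of that reframing (S and T here are stand-ins I picked so that T covers exactly 1/3 of S; the agent knows the proportion but not the channel's actual value):

```python
import random

S = range(300)        # possible channel values; the agent can't predict which arrives
T = set(range(100))   # known to contain exactly 1/3 of S, and nothing more

def exit_decision(channel_input: int) -> bool:
    # Purely deterministic: exit iff the observed input lands in T.
    return channel_input in T

# From the agent's epistemic state this behaves like "exit with
# probability 1/3", since the channel value is uniform over S:
samples = [exit_decision(random.choice(S)) for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~0.333
```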

Thanks. I had forgotten that a clearer resolution of those heuristics had eventually been offered as that thread developed, and I appreciate you summarizing it here.

For onlookers: also see this old thread, especially Scott Aaronson's comments.

See my comments there too: I think that's the only time I'll ever outwit Aaronson on computer science (if only because he kept cheating by changing the question).

Edit: Okay, that may be overstating it; let's just say that's the best I'll probably ever do against him on comp-sci.

> You can't trick a theorem by adding randomness. It's not like a person who can be surprised by the direction you went and admit, "oh maybe I hadn't thought of that".

No, but sometimes you can use randomness as a loophole when a theorem doesn't apply to the randomized situation. And when randomness does and doesn't help is often far from clear. That's why, for example, it is a deep open problem in computational complexity whether randomness gives you anything new. There are two standard generalizations of polynomial time using randomness. One, BPP, is comparatively small and is widely believed to be equal to P. But with only a tiny tweak to the definition, one gets the class PP, which is large enough to contain all of NP. So if P != NP, then PP is a context where randomness really does help. (Although note that some people see this as an argument for believing that P = NP, since their intuition strongly says that randomness shouldn't help.)
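
To make the BPP side concrete, here is a sketch of the textbook example of a randomized polynomial-time algorithm, the Miller-Rabin primality test. (Primality was later derandomized outright by the AKS algorithm, which some take as mild evidence for the P = BPP intuition.)

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin test: a composite n survives one round with
    probability at most 1/4, so the chance of a wrong "prime" answer
    after `rounds` independent rounds is at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: n is definitely composite
    return True  # no witness found: prime with overwhelming probability

print(is_probable_prime(2**89 - 1))  # True: 2^89 - 1 is a Mersenne prime
```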

[-]djcb

Well, not if you want to prove mathematical theorems.

Still, in some cases it can be useful to trade absolute certainty for merely probabilistic certainty (think Bloom filters); and for some game-theoretic strategies it can be beneficial to add actual randomness.

[-]loqi

Indeed. For me, cryptographic hashing is the most salient example of this. Software like git builds entire castles on the probabilistic certainty that SHA-1 hash collisions never happen.
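
For instance, git's object ids are just the SHA-1 of a short header plus the contents. A sketch (adversarially constructed SHA-1 collisions have since been demonstrated, but for honest, non-adversarial data the no-collision assumption remains astronomically safe):

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Compute the object id git assigns to a file's contents:
    SHA-1 of the header "blob <size>\\0" followed by the bytes."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Two distinct files share an id only on a SHA-1 collision.
print(git_blob_hash(b"hello world\n"))  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```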

Yes, but how do we recognize true statements, and does this process contradict itself?