Khoth comments on Open thread for December 17-23, 2013 - Less Wrong
There's been some prior discussion here about the problem of uncertainty of mathematical statements. Since most standard priors (e.g. Solomonoff) assume that one can do an unbounded amount of arithmetic, it is difficult to assign confidence to a statement like "53 is prime", and likewise for open mathematical problems (e.g. how does one estimate how likely it is that the Riemann hypothesis is provable in ZFC?). The problem of bounded rationality here seems serious.
I've run across something that may be related, and at minimum seems hard to formalize. For a mathematical statement A, let F(A) be "A is provable in ZFC" (you could use some other axiomatic system, but this seems fine for now). Let G(A) be "A will be proven in ZFC by 2050". Then one can give examples of statements A and B where it seems like P(F(A)) is larger than P(F(B)) but the reverse holds for P(G(A)) and P(G(B)).
The example that originally came to mind is technical: let A be the statement "ZPP is contained in P^X where X is an oracle for graph isomorphism" and let B be the statement "ZPP is contained in P^Y where Y is an oracle that answers whether Ackermann(n)+1 has an even or odd number of distinct prime factors." The intuition here is that one expects Ackermann(n)+1 to be essentially random in the parity of its number of distinct prime factors, and a strong source of pseudorandom bits forces collapse of ZPP. However, actually proving that Ackermann(n)+1 acts this way looks completely intractable. In contrast, there's no strong prior reason to think graph isomorphism has anything to do with making ZPP type problems easier (aside from some very minor aspects) but there's a lot of machinery out there that involves graph isomorphism and people thinking about it.
So, is this sort of thing meaningful? And are there other more straightforward, less complicated or less technical examples? I do have an analog involving not math but space exploration. P(Life on Mars) might be lower than P(Life on Europa) even though P(We discover life on Mars in the next 20 years) might be higher than P(We discover life on Europa in the next 20 years) simply because we send so many more probes to Mars. Is this a helpful analog or is it completely different?
How about the statements:
A: "The number of prime factors of 4678946132165798721321 is divisible by 3"
B: "The number of prime factors of 98762169873265789687326789684321268778498415465468543215987845321365987319876541641634158749873674145748126589681321826878792168765168576516454968796216546876563213218591357468461321355 7 is divisible by 2"
P(F(A)) is about 1/3 and P(F(B)) is about 1/2.
But it's far more likely that someone will bother to prove A, just because the number is much smaller.
ETA: To clarify, I don't expect it to be particularly hard to prove or disprove, I just don't think anyone will bother.
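(The heuristic probabilities above can be checked empirically: for a "random" integer, the count of prime factors, taken with multiplicity, is divisible by 3 roughly a third of the time and divisible by 2 roughly half the time. Here is a minimal sketch; the function name and the cutoff N = 100,000 are my own choices, and the fractions only approach 1/3 and 1/2 asymptotically, so small samples will deviate a bit.)

```python
def prime_factor_count(n):
    """Number of prime factors of n, counted with multiplicity (big Omega)."""
    count = 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:  # leftover factor is prime
        count += 1
    return count

N = 100_000
counts = [prime_factor_count(n) for n in range(2, N)]
frac_div3 = sum(1 for c in counts if c % 3 == 0) / len(counts)
frac_div2 = sum(1 for c in counts if c % 2 == 0) / len(counts)
print(frac_div3, frac_div2)  # roughly a third and roughly a half
```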
Whether someone will bother really depends on why someone wants to know. You can simply type "primefactors of 9876216987326578968732678968432126877" into Wolfram Alpha and get your answer. It's not harder than typing "primefactors of 4678946132165798721321" into Wolfram Alpha.
I don't know if this was due to an edit, but the second number in Khoth's post is far larger than 9876216987326578968732678968432126877, and indeed Alpha won't factor it.
To be honest, I'm sort of surprised that Alpha is happy to factor 4678946132165798721321; I'd have thought that was already too large.
The reason nobody will bother is that it's just one 200-digit number among 10^200 similar numbers. Even if you care about one of them enough to ask Wolfram Alpha, it's vanishingly unlikely to be that particular one.
Technically it is harder, since there are more digits; apart from the additional typing involved, this also creates more opportunities for mistakes. And of course the computer at the other end will have to do more work.