Well that sucks. The last thing I want to do is post a subtly wrong paper. Just to make sure it actually is subtly wrong:
On closer look, the theorem in Appendix A is simply wrong. Trying to emulate polynomial multiplication inside the real line pays off.
I don't understand this. Can you elaborate a bit?
All in all, some of the "proven" formulas are actually pre-assumed in the paper.
This was done on purpose. From the beginning, the author is trying to find nice axioms that will prove the things he wants them to. I'm not sure this is a fair criticism (if I'm understanding you correctly).
"The last thing I want to do is post a subtly wrong paper."
With a correct disclaimer (a nice construction, but skip the proofs as they are wrong) it could still be useful.
"On closer look, the theorem in Appendix A is simply wrong. Trying to emulate polynomial multiplication inside the real line pays off." "I don't understand this. Can you elaborate a bit?"
"a + b = [a] + [b] + arctg(tg(0.5 π {a}) + tg(0.5 π {b}))/(0.5 π)"
"[a]" is the floor of a, or the integer part of a, i.e. maximum integer no more than a. "{a}" is the fractional...
I've recently been getting into all of this wonderful Information Theory stuff and have come across a paper (thanks to John Salvatier) that was written by Kevin H. Knuth:
Foundations of Inference
The paper sets up some intuitive, minimal axioms for quantifying power sets and then (seemingly) uses them to derive Bayesian probability theory, information gain, and Shannon entropy. The paper also claims to use fewer assumptions than both Cox and Kolmogorov when choosing its axioms. This seems like a significant foundation/unification. I'd like to hear whether others agree and which parts of the paper you think are the significant contributions.
If a 14-page paper is too long for you, I recommend skipping to the conclusion (starting at the bottom of page 12), where there is a nice pictorial representation of the axioms and a quick summary of what they imply.
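For reference, the derived results are of roughly this shape (my paraphrase and notation, not the paper's; m is the valuation on the lattice and p the derived conditional bivaluation):

```latex
% Rough shape of the results the paper derives (my notation, not the paper's):
\begin{align}
  m(x \vee y) &= m(x) + m(y) - m(x \wedge y)                 && \text{(sum rule)} \\
  p(x \wedge y \mid z) &= p(x \mid z)\, p(y \mid x \wedge z) && \text{(product rule)} \\
  p(y \mid x) &= \frac{p(x \mid y)\, p(y)}{p(x)}             && \text{(Bayes' theorem)} \\
  H(p) &= -\textstyle\sum_i p_i \log p_i                     && \text{(Shannon entropy)}
\end{align}
```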