All of Chris G's Comments + Replies

Still not entirely convinced. If 0.A > 0.9 then surely 0.A... > 0.9...?

Or does the fact that this is true only when we halt at an equal number of digits after the point make a difference? 0.A = 10/11 and 0.9 = 9/10, so 0.A > 0.9, but 0.A < 0.99.
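A quick exact-arithmetic check of that comparison (a sketch in Python using `fractions.Fraction`; the variable names are illustrative, not from the thread):

```python
from fractions import Fraction

point_A  = Fraction(10, 11)                      # 0.A  in base 11
point_9  = Fraction(9, 10)                       # 0.9  in base 10
point_99 = Fraction(9, 10) + Fraction(9, 100)    # 0.99 in base 10

print(point_A > point_9)    # True: 0.A > 0.9
print(point_A < point_99)   # True: 0.A < 0.99
```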

1Slider
I think you are still treating infinite decimals with some approximation, when the question you are pursuing relies on the finer details.

**Appeal to graphical asymptotes** Make a plot of the value of the series after x terms, so that one plot F is 0.9, 0.99, 0.999, ... and another G is 0.A, 0.AA, 0.AAA, .... Now it is true that every point of G has a point of F below it, and that F never crosses "over" above G. Now consider the asymptotes of F and G (i.e. draw the line that F and G approach). My claim is that the asymptotes of F and G are the same line. It is not the case that G has a higher line than F; they are of exactly the same height, which happens to be 1. The meaning of an infinite decimal is more closely connected to the asymptote than to what happens "to the right" in the graph. There is a possibly surprising "taking of a limit" which might not be totally natural.

**Construction of wedges that don't break the limit** It might be illuminating to take the reverse approach. Start with an asymptote of 1 and ask which series have it as their asymptote. Note that among the candidates some might be strictly greater than others. If per-term value domination forced a different limit, such "wedgings" would be pushed to have a different limit. But given some series that has 1 as its limit, it is always possible to have another series that fits between 1 and the original series, and the new series' limit will also be 1. Thus there should be series which are per-term dominating but end up summing to the same thing.

**Rate mismatch between accuracy and digits** If you have 0.9 and 0.99, the latter is more precise. This is also true of 0.A and 0.AA. However, between 0.9 and 0.A, 0.A is a bit more precise. In general, if the bases are not nice multiples of each other, the levels of accuracy won't be the same. However, there are critical numbers of digits where the accuracy ends up being exactly the same. If you write out the sums as fractions and want to have a common denominator, one lazy way to ...
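The "appeal to graphical asymptotes" can be tabulated rather than plotted: compute the partial sums F (repeated nines, base ten) and G (repeated As, base eleven) exactly, and watch both gaps to 1 vanish even though G sits above F at every step. A sketch using Python's `fractions.Fraction`; the names `partial_sum`, `F`, `G` are mine:

```python
from fractions import Fraction

def partial_sum(digit, base, n_terms):
    """Exact value of 0.ddd...d (n_terms repeated digits) in the given base."""
    return sum(Fraction(digit, base**k) for k in range(1, n_terms + 1))

for n in range(1, 8):
    F = partial_sum(9, 10, n)    # 0.9, 0.99, 0.999, ...  (base 10)
    G = partial_sum(10, 11, n)   # 0.A, 0.AA, 0.AAA, ...  (base 11)
    print(n, G > F, float(1 - F), float(1 - G))
```

Every row reports G > F, yet 1 − F is 10⁻ⁿ and 1 − G is 11⁻ⁿ, both shrinking to 0: the "asymptote" of both sequences is the same line, y = 1.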

So would you say that 0.999...(base10) = 0.AAA...(base11) = 0.111...(base2)= 1?

1Slider
Yes, it happens to be that way.

I think I see your first point.

0.A{base11} = 10/11

0.9 = 9/10

0.A - 0.9 = 0.0_09...

0.AA = 10/11 + 10/121

0.99 = 9/10 + 9/100

0.AA - 0.99 = 0.00_1735537190082644628099...

Does this mean that because the difference, or "lateness", gets smaller (tending to zero) each time a single identical digit is added to 0.A and 0.9 respectively, then 0.A... = 0.9...?

(Whereas the difference we get when we do this to, say, 0.8 and 0.9 gets larger each time, so we can't say 0.8... = 0.9...)
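The two differences worked out above (1/110 and 21/12100) can be checked exactly, and the contrast with the 0.8 vs 0.9 case shows up directly: one gap shrinks toward 0 while the other grows toward 1/9. A sketch, with `rep` being my own helper name:

```python
from fractions import Fraction

def rep(digit, base, n):
    # 0.ddd...d with n repeated digits in the given base, as an exact fraction
    return sum(Fraction(digit, base**k) for k in range(1, n + 1))

for n in range(1, 6):
    gap_A_vs_9 = rep(10, 11, n) - rep(9, 10, n)   # 0.A... minus 0.9..., n digits each
    gap_9_vs_8 = rep(9, 10, n) - rep(8, 10, n)    # 0.9... minus 0.8..., n digits each
    print(n, gap_A_vs_9, float(gap_A_vs_9), float(gap_9_vs_8))
```

The first two rows reproduce 1/110 and 21/12100 and keep shrinking, while the 0.9-vs-0.8 gap climbs toward 1/9.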

1Slider
No, I believe you are reaching for a different concept. It is true that the difference squashes towards 0, but that would be a different line of thinking. In a context where infinitesimals are allowed (i.e. non-reals) we might associate the series with different amounts and indeed find that they differ by a "minuscule amount". But as we normally operate on reals, we only get a "real precision" result. For example, if you had to say which integers 3/4, 1 and 5/4 name, probably your best bet would be that all of them name the same integer, 1, if you are restricted to integer precision. In the same way, 1 and 1 − epsilon might be different numbers when infinitesimal accuracy is allowed, but a real plus anything infinitesimal is going to be the same real regardless of the infinitesimal (1 and 1 − epsilon are the same real at real precision).

What I was actually going for is that, for any r < 1, you can ask how many terms you need to get up to that level, and both series will give a finite answer. I.e. to get to the same "depth" as 0.999999... gets with 6 digits, you might need a bit less with 0.AAAAA.... It's a "horizontal" difference instead of a "vertical" one. However, there is no number that one of the series reaches but the other does not (and the number that both series fail to reach is 1; it might be helpful to remember that a supremum is the smallest upper bound). If one series reaches a sum with 10 terms and the other reaches the same sum in 10000 terms, it's equally good; we are only interested in what happens "eventually", after all terms have been accounted for. The way we have come up with what the repeating-digit sign means refers to limits, and it's pretty much guaranteed to produce reals.
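The "horizontal difference" can be made concrete: for any target r < 1, count how many repeated digits each expansion needs before it exceeds r. Both counts are always finite (base eleven typically needing a bit fewer), and no r < 1 separates the two series; only 1 itself is out of reach for both. A sketch, with `digits_needed` being my own name:

```python
from fractions import Fraction

def digits_needed(digit, base, target):
    """How many repeated digits of 0.ddd... (in `base`) are needed to exceed `target` < 1."""
    s, n = Fraction(0), 0
    while s <= target:
        n += 1
        s += Fraction(digit, base**n)
    return n

for target in [Fraction(9, 10), Fraction(99, 100), Fraction(999999, 10**6)]:
    print(float(target),
          digits_needed(9, 10, target),    # digits of 0.999... needed
          digits_needed(10, 11, target))   # digits of 0.AAA... needed
```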
Chris G*00

0.9{base10}<0.99{base10} but 0.9...{base10}=0.99...{base10}

0.9{base10}<0.A{base11} but 0.9...{base10}=0.A...{base11}

0.8{base10}<0.9{base10} and 0.8...{base10}<0.9...{base10}

0.9{base10}<0.A{base11} and 0.9...{base10}<0.A...{base11}


I'm not trying to prove that "0.999...{base10}=1" is false, nor that "0.111...(base2)=1" is, either - in fact it's an even more fascinating result.

Also "not(not(true))=true" is good enough for me as well.


1Slider
You are assuming that there is a link between the per-term values and the whole-series value. The connection just isn't there, and if you think it should be, it would be important to show why. I could have two small finite series A = 10 and B = 2 + 3 + 5, compare 2 < 10, 3 < 10 and 5 < 10, and then be surprised when A = B. When the number of terms is not finite, it's harder to verify that you haven't made this kind of error.
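A bare-bones illustration of that point (the infinite analogue at the end is my own addition, not Slider's):

```python
from fractions import Fraction

# Finite case from the comment: every term of B is smaller than A's single term,
# yet the totals agree.
A = [10]
B = [2, 3, 5]
print(sum(A) == sum(B))             # True

# Infinite analogue: every partial sum of 1/2 + 1/4 + 1/8 + ... stays below 1,
# yet the tail 1 - s shrinks geometrically, so the series' limit is exactly 1.
s = Fraction(0)
for k in range(1, 21):
    s += Fraction(1, 2**k)
print(s, float(1 - s))              # tail is 1/2**20 after 20 terms
```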

It's instructive to set out the proof you give for 0.999... = 1 in number bases other than ten. For example base eleven, in which the maximum-value single digit is conventionally represented as A and amounts to 10 (base ten), while 10 (base eleven) amounts to 11 (base ten). So

Let x = 0.AAA...

10x = A.AAA...

10x - x = A

Ax = A (since, in base eleven, 10 - 1 = A)

x = 1

0.AAA... = 1
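As a cross-check in base-ten terms: the n-digit truncation of 0.AAA... works out to exactly 1 − 11⁻ⁿ, which pins the limit at 1, just as the algebra says. A sketch using Python's `fractions.Fraction` (the loop and names are mine):

```python
from fractions import Fraction

for n in range(1, 8):
    # 0.AA...A with n digits, written out in base-ten fractions: sum of 10/11**k
    s = sum(Fraction(10, 11**k) for k in range(1, n + 1))
    print(n, s, s == 1 - Fraction(1, 11**n))   # prints True on every row
```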

But 0.A (base eleven) = 10/11 (base ten), which is bigger than 0.9 (base ten) = 9/10 (base ten). So shouldn't that inequality apply to 0.AAA... (base eleven) and 0.999... (base ten) as well? (A debatable point maybe...

2Slider
f(x) = 2/x and g(x) = 1/x: f(x) > g(x) for all x > 0, but lim f(x) = lim g(x) = 0 as x → ∞. Just because f gets there "later" does not mean it gets any less deep. Repeating decimals are far enough removed from decimals that it's like mixing rationals and integers.
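The same phenomenon in numbers, as a tiny sketch (the sample points are arbitrary):

```python
for x in [10, 100, 1000, 10**6]:
    f, g = 2 / x, 1 / x
    print(x, f, g, f > g)   # f is always the larger, yet both shrink toward 0
```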
2rossry
Not debatable, just false. Formally, the fact that x_k < y_k for all k does not imply that lim_{k→∞} x_k < lim_{k→∞} y_k. If I were to poke a hole in the (proposed) argument that 0.[k 9s]{base 10} < 0.[k As]{base 11} (0.9<0.A; 0.99<0.AA; ...), I'd point out that 0.[2*k 9s]{base 10} > 0.[k As]{base 11} (0.99>0.A; 0.9999>0.AA; ...), and that this gives the opposite result when you take k→∞ (in the standard sense of those terms). I won't demonstrate it rigorously here, but the faulty link here (under the standard meanings of real numbers and infinities) is that carrying the inequality through the limit just doesn't create a necessarily-true statement. 0.111...{binary} is 1, basically for the Dedekind cut reason in the OP, which is not base-dependent (or representation-dependent at all) -- you can define and identify real numbers without using Arabic numerals or place value at all, and if you do that, then 0.999...=1 is as clear as not(not(true))=true.
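The counter-pairing (2k nines against k As) is easy to verify with exact fractions; a sketch, with `rep` being an assumed helper name:

```python
from fractions import Fraction

def rep(digit, base, n):
    # 0.ddd...d with n repeated digits in the given base, as an exact fraction
    return sum(Fraction(digit, base**k) for k in range(1, n + 1))

for k in range(1, 6):
    nines = rep(9, 10, 2 * k)    # 0.[2k nines]  (base 10)
    ays   = rep(10, 11, k)       # 0.[k As]      (base 11)
    print(k, float(nines), float(ays), nines > ays)   # True on every row
```

So the term-wise comparison can be arranged to point either way, which is exactly why it says nothing about the limits themselves.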