Comment author:taryneast
30 December 2010 12:01:44PM
-3 points

Except that 1.9999... < 2

Edit: here's the proof that I'm wrong mathematically (from the provided Wikipedia link):
"Multiplication of 9 times 1 produces 9 in each digit, so 9 × 0.111... equals 0.999... and 9 × 1⁄9 equals 1, so 0.999... = 1"
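For what it's worth, the Wikipedia argument can be checked with exact rational arithmetic. A small Python sketch (the use of `fractions.Fraction` and the loop are my own illustration, not part of the quoted proof): 9 × (1/9) is exactly 1 with no rounding, and each finite truncation 0.9, 0.99, … falls short of 1 by exactly 10^(-n).

```python
from fractions import Fraction

# Exact arithmetic: 9 * (1/9) is exactly 1, no rounding involved.
assert 9 * Fraction(1, 9) == 1

# Each extra 9 in 0.99...9 brings the partial value closer to 1;
# the shortfall after n digits is exactly 10^(-n).
for n in range(1, 8):
    partial = Fraction(10**n - 1, 10**n)   # 0.9, 0.99, 0.999, ...
    assert 1 - partial == Fraction(1, 10**n)
```

The shortfall shrinks below any positive bound, which is precisely what "0.999... = 1" asserts.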

Comment author:taryneast
30 December 2010 04:22:19PM
2 points

Ok. Interesting.

I can see and agree that 1.999... can, in the limit, equal 2, whereas any finite truncation would still be less than 2.

I don't consider them to be "the same number" in that sense... even though they algebraically equate (once the limit is reached) in a theoretical framework that can encompass infinities.

ie, in maths, I'd equate them but in the "real world" - I'd treat them separately.

Edit: and reading further... it seems I'm wrong again.
Of course, the whole point of putting "..." is to represent the fact that this is the limit of the decimal expansion of 0.999... to infinity.

therefore yep, 1.999... = 2

Where my understanding failed me is that 1.999... does not in fact represent the infinite series 1 + 0.9 + 0.09 + ..., whose summation could, in fact, simply not be taken to its full limit. The representation "1.999..." can only represent either the series or the limit of the series, and mathematical convention has it as the latter, not the former.
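The series-versus-limit distinction above can be made concrete. A sketch in Python (the `partial_sum` helper is hypothetical, introduced purely for illustration): every finite partial sum of 1 + 0.9 + 0.09 + ... falls short of 2 by exactly 10^(-n), and it is the limit of these sums that the notation names.

```python
from fractions import Fraction

def partial_sum(n):
    """Sum of the first n+1 terms of 1 + 0.9 + 0.09 + ...,
    i.e. 1 + 9/10 + 9/100 + ... + 9/10^n, computed exactly."""
    return 1 + sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in range(10):
    s = partial_sum(n)
    assert s < 2                          # every finite partial sum is below 2
    assert 2 - s == Fraction(1, 10**n)    # but the gap is exactly 10^(-n)
```

No finite partial sum equals 2; "1.999..." denotes the limit the sums converge to, which does.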

Comment author:byrnema
30 December 2010 06:14:29PM
2 points

Another argument that may be more convincing on a gut level:

9x(1/9) is exactly equal to 1, correct?

Find the decimal representation of 1/9 using long division:
1/9=0.11111111... (note there is no different or superior way to represent this number as a decimal)

9x(1/9) = 9x(0.11111111...)=0.9999999... which we already agreed was exactly equal to 1.
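The long division byrnema describes can be mechanized. A sketch in Python (the `decimal_digits` helper is my own illustration, not anything from the comment): after each division step the remainder for 1/9 is again 1, so the digits repeat forever.

```python
def decimal_digits(numerator, denominator, n):
    """First n digits after the decimal point of numerator/denominator
    (assuming 0 <= numerator < denominator), by schoolbook long division."""
    digits = []
    remainder = numerator
    for _ in range(n):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

# 1/9 = 0.111... : the remainder is always 1, so every digit is 1.
assert decimal_digits(1, 9, 8) == [1] * 8
# Likewise 2/9 = 0.222...
assert decimal_digits(2, 9, 8) == [2] * 8
```

The same routine reproduces any repeating decimal, e.g. 1/7 = 0.142857142857....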

Comment author:Sniffnoy
30 December 2010 08:51:57PM
1 point

Note also that it has to denote the limit, because we want it to denote a number, and the other object you describe (a sequence rather than a set, strictly speaking) isn't a number, just, well, a sequence of numbers.

Comment author:taryneast
30 December 2010 10:20:42PM
-1 points

it has to denote the limit, because we want it to denote a number,

This is the part I take issue with.

It does not have to denote a number, but we choose to let it denote a number (rather than a sequence) because that is how mathematicians find it most convenient to use that particular representation.

That sequence is also quite useful mathematically - just not as useful as the number-that-represents-the-limit. Sequences in general don't see much use in algebra, but they are extremely useful in calculus. In fact I'd say that in calculus "just a sequence" is perhaps even more useful than "just a number".

My first impression (and thus what I originally got wrong) was that 1.999... represented the sequence and not the limit because, really, if you meant 2, why not just say 2? :)

Comment author:JoshuaZ
30 December 2010 10:30:16PM
0 points

We want it to denote a number for simple consistency. .11111... is a number. It is a limit. 3.14159... should denote a number. Why should 1.99999... be any different? If we are going to be at all consistent in our notation, they should all represent the same sort of thing: the limit of a series. Otherwise this is extremely irregular notation to no end.

Comment author:taryneast
30 December 2010 11:47:44PM
-1 points

Yes, I totally agree with you: consistency and convenience are why we have chosen to use 1.9999... notation to represent the limit, rather than the sequence.

Consistency and convenience tend to drive most mathematical notational choices (with occasional other influences), for reasons that should be extremely obvious.

It just so happened that, on this occasion, I was not aware enough of either the actual convention, or of other "things that this notation would be consistent with", before I guessed at the meaning of this particular item of notation.

And so my guessed meaning was one of the two things that I thought would be "likely meanings" for the notation.

In this case, my guess was for the wrong one of the two.

I seem to be getting a lot of comments implying that I should have somehow naturally realised which of the two meanings was "correct"... and I have tried very hard to explain why it is not obvious, and not somehow inevitable.

Both of my possible interpretations were potentially valid, and I'd like to insist that the sequence-interpretation is wrong only by convention (ie maths has to pick one meaning or the other, and it happens to have picked the one most convenient for mathematicians, which in this case is the limit-interpretation). But as is clearly evidenced by the amount of confusion around the subject (ref the Wikipedia page), it is not intuitively obvious which one is "correct" and which is "not correct".

I maintain that without knowledge of the convention, you cannot know which is the "correct" interpretation. Any assumption otherwise is simply hindsight bias.

Comment author:Sniffnoy
30 December 2010 10:55:02PM
0 points

OK, let me put it this way: If we are considering the question "Is 1.999...=2?", the context makes it clear that we must be considering the left hand side as a number, because the RHS is a number. (Would you interpret 2 in that context as the constant 2 sequence? Well then of course they're not equal, but this is obvious and unenlightening.) Why would you compare a number for equality against a sequence? They're entirely different sorts of objects.

Comment author:taryneast
30 December 2010 11:18:03PM
-1 points

"Is x-squared = 2?" is a perfectly valid question to ask in mathematics, even though the LHS is not obviously a number.

In this case, it is a formula that can equate to a number... just as the sequence is a (very limited) formula that can equate to 2 - if we take the sequence to its limit; or that falls just shy of 2 - if we try to represent it in any finite/limited way.

In stating that 1.9999... is a number, you are assuming the usage of the limit/number, rather than the other potential usage ie, you are falling into the same assumption-trap that I fell into...
It's just that your assumption happens to be the one that matches with common usage, whereas mine wasn't ;)

Using 1.9999... to represent the limit of the sequence (ie the number) is certainly true by convention (ie "by definition"), but it is by no means the only way to interpret the symbols. It could just as easily represent the sequence itself... we just don't happen to do that. We define what mathematical symbols refer to... they're just the words/pointers to what we're talking about, yes?

Comment author:Sniffnoy
31 December 2010 12:39:48AM
1 point

"Is x-squared = 2?" is a perfectly valid question to ask in mathematics, even though the LHS is not obviously a number

Er... yes it is? In that context, x^2 is a number. We just don't know what number it might be. By contrast, the sequence (1, 1.9, 1.99, ...) is not a number at all.

Furthermore, even if we insist on regarding x^2 as a formula with a free variable, your analogy doesn't hold. The sequence (1, 1.9, 1.99, ...) has no free variables; it's one specific sequence.

You are correct that the convention could have been that 1.999... represents the sequence... but as I stated before, in that case, the question of whether it equals 2 would not be very meaningful. Given the context you can deduce that we are using the convention that it designates a number.

Comment author:taryneast
31 December 2010 08:53:38AM
-2 points

By contrast, the sequence (1, 1.9, 1.99, ...) is not a number at all

Yes, I agree: a sequence is not a number, it's a sequence... though I wonder if we're getting confused, because we're talking about the sequence instead of the infinite series (1 + 0.9 + 0.09 + ...), which is actually what I had in my head when I was first thinking about 1.999...

Along the way, somebody said "sequence" and that's the word I started using... when really I've been thinking about the infinite series.... anyway

The infinite series has far less freedom than x^2, but that doesn't mean that it's a different thing entirely from x^2.

Let's consider "x - 1".

"x - 1" is not a number, until we equate it to something that lets us determine what x is...

If we use "x - 1 = 4", however, we can solve for x and there are no degrees of freedom.

If we use "1.9 < x - 1 < 2", we have some minor degree of freedom... only just a few more than the infinite series in question.

Admittedly, the only degree of freedom left to 1.9999... (the series) is to either be 2 or an infinitesimal away from 2. But I don't think that makes it different in kind from x - 1 = 4.

anyway - I think we're probably just in "violent agreement" (as a friend of mine once used to say) ;)

All the bits that I was trying to really say we agree over... now we're just discussing the related maths ;)

the question of whether it equals 2 would not be very meaningful

Ok, let's move into hypothetical land and pretend that 1.9999... represents what I originally thought it represented.

The comparison with the number 2 provides the meaning that what you want to do is to evaluate the series at its limit.

It's totally supportable for you to equate 1.9999... = 2 and determine that this is a statement that is:
1) true when the infinite series has been evaluated to the limit
2) false when it is represented in any finite/limited way

Edit: ah... that's why you can't use stars for to-the-power-of ;)

Comment author:Sniffnoy
06 January 2011 06:03:29AM
1 point

anyway - I think we're probably just in "violent agreement" (as a friend of mine once used to say) ;)

Er, no... there still seems to be quite a bit of confusion here...

All the bits that I was trying to really say we agree over... now we're just discussing the related maths ;)

Well, if you really think that's not significant... :P

yes I agree, a sequence is not a number, it's sequence... though I wonder if we're getting confused, because we're talking about the sequence, instead of the infinite series (1 + 0.9 + 0.09 +...) which is actually what I had in my head when I was first thinking about 1.999...

Along the way, somebody said "sequence" and that's the word I started using... when really I've been thinking about the infinite series.... anyway

It's not clear to me what distinction you're drawing here. A series is a sequence, just written differently.

The infinite series has far less freedom than x^2, but that doesn't mean that it's a different thing entirely from x^2.

It's not at all clear to me what notion of "degrees of freedom" you're using here. The sequence is an entirely different sort of thing than x^2, in that one is a sequence, a complete mathematical object, while the other is an expression with a free variable. If by "degrees of freedom" you mean something like "free variables", then the sequence has none. Now it's true that, being a sequence of real numbers, it is a function from N to R, but there's quite a difference between the expression 2-10^(-n), and the function (i.e. sequence) n |-> 2-10^(-n) ; yes, normally we simply write the latter as the former when the meaning is understood, but under the hood they're quite different. In a sense, functions are mathematical, expressions are metamathematical.

When I say "x^2 is a number", what I mean is essentially, if we're working under a type system, then it has the type "real number". It's an expression with one free variable, but it has type "real number". By contrast, the function x |-> x^2 has type "function from reals to reals", the sequence (1, 1.9, 1.99, ...) has type "sequence of reals"... (I realize that in standard mathematics we don't actually technically work under a type system, but for practical purposes it's a good way to think, and I'm pretty sure it's possible to sensibly formulate things this way.) To equate a sequence to a number may technically in a sense return "false", but it's better to think of it as returning "type error". By contrast, equating x^2 to 2 - not equating the function x |-> x^2 to 2, which is a type error! - allows us to infer that x^2 is also a number.
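Sniffnoy's type distinction can be illustrated in code. A sketch in Python (dynamically typed, so the "type error" surfaces as the objects simply being different kinds of thing; the names `term`, `sequence`, and `limit` are my own):

```python
from fractions import Fraction

def term(n):
    """The n-th term of the sequence (1, 1.9, 1.99, ...): 2 - 10^(-n)."""
    return 2 - Fraction(1, 10**n)

sequence = term   # a function from N to Q: the sequence itself, as an object
limit = 2         # a number: what the notation "1.999..." denotes

# Comparing the sequence-object to a number is a category mistake,
# not a near-miss: they are unequal because they are different kinds of thing.
assert sequence != limit

# Each individual term, being a number, CAN be compared to 2 - and falls short.
assert all(term(n) < limit for n in range(20))
```

Only the terms (numbers) can meaningfully be compared to 2; the sequence as a whole is the wrong type of object for that question.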

Admittedly, the only degree of freedom left to 1.9999... (the series) is to either be 2 or an infinitesimal away from 2. But I don't think that makes it different in kind to x -1 = 4

Note, BTW, that the real numbers don't have any infinitesimals (save for 0, if you count it).

It's totally supportable for you to equate 1.9999... = 2 and determine that this is a statement that is: 1) true when the infinite series has been evaluated to the limit 2) false when it is represented in any finite/limited way

Sorry, what does it even mean for it to be "represented in a finite/limited way"? The alternative to it being a number is it being an infinite sequence, which is, well, infinite.

I am really getting the idea you should go read the standard stuff on this and clear up any remaining confusion that way, rather than try to argue this here...


Comment author:[deleted]
30 December 2010 10:58:22PM
1 point

If we wanted to talk about the sequence we would never denote it 1.999... We would write {1, 1.9, 1.99, 1.999, ...} and perhaps give the formula for the Nth term, which is 2 - 10^-N.

Comment author:taryneast
30 December 2010 11:32:23PM
0 points

Hi Misha, I might also turn that argument back on you and repeat what I said before:
"if you meant 2, why not just say 2?" It's as valid as "if you meant the sequence, why not just write {1, 1.9, 1.99, 1.999, ...}"?

Clearly there are other reasons for using something that is not the usual convention. There are definitely good reasons for representing infinite series or sequences... as you have pointed out. However - there is no particular reason why mathematics has chosen to use 1.999... to mean the limit, as opposed to the actual infinite series. Either one could be equally validly used in this situation.

It is only by common convention that mathematics uses it to represent the actual limit (as n tends to infinity), instead of the other possibility - "the limit as n tends to infinity, if we actually take it to infinity, or an infinitesimal less than the limit if we don't" - which is how I incorrectly assumed it was to be used.

However, the other thing you say - that "we would never denote it 1.999..." - brings out an interesting thought, and if I grasp what you're saying correctly, then I disagree with you.

As I've mentioned in another comment now - mathematical symbolic conventions are the same as "words" - they are map, not territory. We define them to mean what we want them to mean. We choose what they mean by common consensus (motivated by convenience). It is a very good idea to follow that convention - which is why I decided I was wrong to use it the way I originally assumed it was being used... and from now on, I will use the usual convention...

However, you seem to be saying that you think the current way is "the one true way" and that the other way is not valid at all... ie that "we would never denote it 1.9999..." as being some sort of basis of fact out there in reality, when really it's just a convention that we've chosen, and is therefore non-obvious from looking at the symbol without the prior knowledge of the convention (as I did).

I am trying to explain that this is not the case - without knowing the convention, either meaning is valid. It's only now, having been shown the convention, that I know what is generally meant "by definition" by the symbol... and it happened to be different from the meaning I automatically picked without prior knowledge.

So yes, I think we would never denote the sequence as 1.999... - not because the sequence is not representable by 1.999..., but simply because convention dictates otherwise.


Comment author:[deleted]
31 December 2010 02:47:38AM
1 point

You have a point. I tend to dislike arguments about mathematics that start with "well, this definition is just a choice" because they don't capture any substance about any actual math. As a result, I tried to head that off by (perhaps poorly) making a case for why this definition is a reasonable choice.

In any case, I misunderstood the nature of what you were saying about the convention, so I don't think we're in any actual disagreement.

I might also turn that argument back on you and repeat what I said before: "if you meant 2, why not just say 2?"

If I meant 2, I would say 2. However, our system of writing repeating decimals also allows us to (redundantly) write the repeating decimal 1.999... which is equivalent to 2. It's not a very useful repeating decimal, but it sometimes comes out as a result of an algorithm: e.g. when you multiply 2/9 = 0.222... by 9, you will get 1.999... as you calculate it, instead of getting 2 straight off the bat.
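The algorithm mentioned here can be sketched: schoolbook single-digit multiplication with carries, applied to a finite truncation of 0.222.... The `multiply_digits` helper below is hypothetical, purely for illustration; each extra digit of 2/9 turns into another 9 of 1.999... as you calculate.

```python
def multiply_digits(int_part, frac_digits, factor):
    """Schoolbook multiplication of the decimal int_part.frac_digits by a
    single-digit factor, carrying right to left.
    Returns (new integer part, new fractional digits)."""
    carry = 0
    out = []
    for d in reversed(frac_digits):
        carry, digit = divmod(d * factor + carry, 10)
        out.append(digit)
    out.reverse()
    return int_part * factor + carry, out

# Truncate 2/9 = 0.2222... to eight digits and multiply by 9: the algorithm
# produces 1.99999998 - i.e. "1.999..." in the limit - rather than landing
# on 2 directly.
whole, digits = multiply_digits(0, [2] * 8, 9)
assert (whole, digits) == (1, [9, 9, 9, 9, 9, 9, 9, 8])
```

The trailing 8 is the truncation artifact; with infinitely many 2s, every output digit would be a 9.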

Comment author:taryneast
31 December 2010 09:02:15AM
1 point

You have a point. I tend to dislike arguments about mathematics that start with "well, this definition is just a choice"

Me too! Especially as I've just been reading that sequence here about "proving by definition" and "I can define it any way I like"... that's why I tried to make it very clear I wasn't saying that... I also needed to head off the heading-off ;)

Anyway - I believe we are just in violent agreement here, so no problems ;)

Comment author:Houshalter
12 June 2013 07:48:26AM
-1 points

For what it's worth (and why do I have to pay karma to reply to this comment, I don't get it) there is an infinitesimal difference between the two. An infinitesimal is just like infinity in that it's not a real number. For all practical purposes it is equal to zero, but just like infinity, it has useful mathematical purposes in that it isn't exactly equal to zero. You could plug an infinitesimal into an equation to show how close you can get to zero without actually getting there. If you just replaced it with zero the equation could come out undefined or something.

Likewise, using 1.999... for the property that it isn't exactly equal to 2 but is practically equal to 2 could be useful.

Comment author:ialdabaoth
12 June 2013 07:57:57AM
2 points

er... I'm not sure if this is the right way to look at it.

1.999999... is 2. Exactly 2. The thing is, there is an infinitesimal difference between '2' and '2'. 1.999999... isn't "Two minus epsilon", it's "The limit of two minus epsilon as epsilon approaches zero", which is two.

EDIT: And to explain the following objection:

Weird things happen when you apply infinity, but can it really change a rule that is true for all finite numbers?

Yes, absolutely. That's part of the point of infinity. One way of looking at certain kinds of infinity (note that there are several kinds of infinity) is that infinity is one of our placeholders for where rules break down.

Comment author:Houshalter
14 June 2013 05:55:56AM
2 points

This is one of those things that isn't worth arguing over at all, but I will anyways because I'm interested. I'm probably wrong because people much smarter than me have thought about this before, but this still doesn't make any sense to me at all.

1.9 is just 2 minus 0.1, right? And 1.99 is just 2 minus 0.01. Each time you add another 9, you are dividing the number you are subtracting by 10. No matter how many times you divide 0.1 by ten, you will never exactly reach zero. And if it's not exactly zero, then two minus the number isn't exactly two.

Even if you do it 3^^^3 times, it will still be more than zero. Weird things happen when you apply infinity, but can it really change a rule that is true for all finite numbers? You can say it approaches 2 but that's not the same as it ever actually reaching it. Does this make any sense?
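Houshalter's observation - that each finite stage falls short of 2 - is correct, and exact arithmetic confirms it; the resolution is that "the limit is 2" only requires the gaps to shrink below every positive tolerance, not that any finite stage reaches zero. A sketch in Python, assuming `fractions.Fraction` for exactness:

```python
from fractions import Fraction

# Every finite stage: 2 minus (0.1 divided by ten, n times) is strictly
# less than 2, and the gap is strictly positive - exactly the rule
# described above, and it never fails at any finite stage.
gap = Fraction(1, 10)
for _ in range(50):
    assert 0 < gap and 2 - gap < 2
    gap /= 10

# But the gaps have no positive lower bound: for ANY tolerance eps > 0,
# some finite stage is already within eps of 2. That is what "the limit
# is 2" means - no finite stage needs to *reach* 2.
eps = Fraction(1, 10**1000)
assert any(Fraction(1, 10**n) < eps for n in range(1, 2000))
```

So both statements coexist: every finite term is below 2, yet the limit - the number the notation "1.999..." names - is exactly 2.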
