Romashka comments on Is Scott Alexander bad at math? - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (219)
I am not sure how many people this is true for, but my own bad-at-math-ness is largely about being bad at reading really terse, dense, succinct text, because my mind is used to verbose text and thus to filtering half of it out rather than paying close attention.
I hate the living guts out of notation: Greek variables, single-letter variables. Even Bayes' theorem is too terse, too succinct, too information-dense for me. I find it painful that in something like P(B|A) all three bloody letters mean a different thing. It is just too zipped. I would far prefer something more natural-language-like, such as Probability(If-True(Event1), Event2) (this looks like software code - and for a reason).
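To make the comparison concrete, here is a minimal Python sketch of Bayes' theorem written in that verbose style. The function and argument names are my own illustration, not any standard library:

```python
def bayes_posterior(prior_of_hypothesis,
                    likelihood_of_evidence_given_hypothesis,
                    probability_of_evidence):
    """Bayes' theorem spelled out in words instead of P(B|A) = P(A|B)P(B)/P(A)."""
    return (likelihood_of_evidence_given_hypothesis
            * prior_of_hypothesis
            / probability_of_evidence)

# Example: a 1% prior, a 90% likelihood, and a 5% evidence probability
posterior = bayes_posterior(0.01, 0.9, 0.05)
print(posterior)  # 0.18
```

Every quantity that the compressed notation packs into a single letter gets a full name, so a reader can skim and still recover the meaning.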
This is actually a virtue when writing programs: I am never the guy who uses single-letter variables; my programs always look like MarginPercentage = DivideWODivZeroError((SalesAmount - CostAmount), SalesAmount) * 100. Never too succinct, always clearly readable.
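That one-liner can be sketched as runnable Python; the divide_without_div_zero_error helper is my own hypothetical stand-in for the DivideWODivZeroError function the comment names:

```python
def divide_without_div_zero_error(numerator, denominator, fallback=0.0):
    # Hypothetical helper: return a fallback instead of raising on division by zero
    if denominator == 0:
        return fallback
    return numerator / denominator

sales_amount = 200.0
cost_amount = 150.0
margin_percentage = divide_without_div_zero_error(
    sales_amount - cost_amount, sales_amount) * 100
print(margin_percentage)  # 25.0
```

The point stands either way: the long names carry the redundancy that terse math notation strips out.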
Let's stick with Bayes' theorem. My brain is screaming: don't give me P, A, B. Give me "proper words" like Probability, Event1, and Event2, so that my mind can read "Pro...", then zone out and rest while reading "bability", and turn back on again with the next word.
This is basically an inability to focus 100%: a need for "fillers", for the low information density of natural-language text that lets my brain zone out and rest for fractions of a second. Notation that is too dense and too terse has no slack, and losing a single letter means not understanding the problem.
This is largely a redundancy problem. Natural language is redundant: you can say "probably" as "prolly" and people still understand it, so your mind can zone out during half of a text and you still get its meaning. Math notation is not redundant at all: miss one single tiny itty-bitty letter and you don't understand the proof.
So I guess I could be better at math if there were an inflated, more redundant, not-single-letter-variables, more natural-language-like version of it.
I guess programming fills that gap well.
I figure Scott does not like terse, dense notation either; he just seems to be good at doing the work of inflating it into something more readable for himself.
I guess I am not reinventing warm water here. There is probably a reason why a programmer would more likely write Probability(If-True(Event1), Event2) than P(B|A): it is more understandable for many people. I guess it should be part of math education to learn to cope with the denser, terser, less redundant notation. My teachers did not really manage to impart that to me.
That resembles what happens to me when I have to 'read some math'. I hated integrals and multivariate equations because solving them always took so long, and somewhere halfway through I invariably began thinking: 'okay, we did this and then broke off this piece to pretend it's actually like that, though we'll have to weld it back on someday... is there really no meaning to what we've already done, except that in some distant future we'll write the final "="?'
In high-school physics, when they gave you an equation, you really could use it as a kind of lever to turn the model Earth.