pure-awesome comments on Is Scott Alexander bad at math? - Less Wrong

31 Post author: JonahSinick 04 May 2015 05:11AM




Comment author: [deleted] 04 May 2015 11:50:22AM *  22 points [-]

I am not sure how many people this is true for, but my own bad-at-mathness is largely about being bad at reading really terse, dense, succinct text, because my mind is used to verbose text and thus filters out half of it without really paying close attention.

I hate the living guts out of notation, Greek variables, and single-letter variables. Even Bayes' theorem is too terse, too succinct, too information-dense for me. I find it painful that in something like P(B|A) all three bloody letters mean a different thing. It is just too zipped. I would far prefer something more natural-language-like, such as Probability(If-True(Event1), Event2) (this looks like software code, and for a reason).

This is actually a virtue when writing programs; I am never the guy who uses single-letter variables. My programs are always like MarginPercentage = DivideWODivZeroError((SalesAmount - CostAmount), SalesAmount) * 100. Never too succinct, clearly readable.
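The verbose style described here could be sketched in Python roughly as follows (the function and variable names are illustrative, mirroring the example in the comment; the behavior on division by zero is an assumption):

```python
def divide_without_div_zero_error(numerator, denominator):
    # Safe division: return 0.0 instead of raising when the denominator is zero.
    if denominator == 0:
        return 0.0
    return numerator / denominator

def margin_percentage(sales_amount, cost_amount):
    # Every quantity spelled out in full words, never a single-letter variable.
    return divide_without_div_zero_error(sales_amount - cost_amount, sales_amount) * 100

print(margin_percentage(200.0, 150.0))  # 25.0
```

The point is that the reader can reconstruct what each line does from the names alone, without holding a symbol table in their head.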

Let's stick with Bayes' theorem. My brain is screaming: don't give me P, A, B. Give me "proper words" like Probability, Event1, and Event2, so that my mind can read "Pro...", then zone out and rest while reading "bability", and turn back on again with the next word.
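Bayes' theorem spelled out with "proper words" instead of single letters might look like this (a minimal sketch; all the long identifier names are invented for illustration):

```python
def probability_of_event1_given_event2(
    probability_of_event2_given_event1,
    probability_of_event1,
    probability_of_event2,
):
    # Bayes' theorem, P(A|B) = P(B|A) * P(A) / P(B), with every symbol
    # written out as a full word rather than a single letter.
    return (probability_of_event2_given_event1
            * probability_of_event1) / probability_of_event2

print(probability_of_event1_given_event2(0.5, 0.4, 0.25))  # 0.8
```

Compare this to P(A|B) = P(B|A)P(A)/P(B): the same formula, but here a reader who zones out for a word can still recover which quantity is which from the names.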

This is basically an inability to focus 100%: I need the "fillers", the low information density of natural-language text, to let my brain zone out and rest for fractions of a second. In notation that is too dense and too terse, losing a single letter means not understanding the problem.

This is largely a redundancy problem. Natural language is redundant: you can say "probably" as "prolly" and people still understand it, so your mind can zone out during half of a text and you still get its meaning. Math notation is highly non-redundant; miss one single tiny itty bitty letter and you don't understand the proof.

So I guess I could be better at math if there were an inflated, more redundant, more natural-language-like version of it, without single-letter variables.

I guess programming fills that gap well.

I figure Scott does not like terse, dense notation either; however, he seems to be good at doing the work of inflating it into something more readable for himself.

I guess I am not reinventing the wheel here. There is probably a reason a programmer would more likely write Probability(If-True(Event1), Event2) than P(A|B): it is more understandable for many people. I guess it should be part of math education to learn to cope with the denser, terser, less redundant notation; my teachers did not really manage to impart that to me.

Comment author: pure-awesome 08 May 2015 06:23:22PM 1 point [-]

I find that what helps for me is re-writing maths as I'm learning it.

When I glance at an equation or formula (especially an unfamiliar one), I usually can't take it in because my mind is trying to grasp it all at once. I have to force myself to scan it slowly, either by re-writing it, writing out its definition, or by holding a ruler under it and scanning one symbol at a time.

Then again, I'm currently studying for a postgraduate degree in maths, and I'm not someone who's ever considered themselves 'bad at math'.