Comment author: Lumifer 10 March 2016 10:00:13PM 3 points [-]

RTS is a bit of a special case because a lot of the skill involved is micromanagement and software is MUCH better at micromanagement than humans.

I don't expect to see highly sophisticated AI in games (at least adversarial, battle-it-out games) because there is no point. Games have to be fun, which means that the goal of the AI is to gracefully lose to the human player after making him exert some effort.

You might be interested in Angband Borg.

Comment author: richard_reitz 10 March 2016 11:03:32PM *  4 points [-]

And yet, humans currently have the edge in Brood War. Humans are probably doomed once StarCraft AIs get AlphaGo-level decision-making, but flawless micro—even on top of flawless* macro—won't help you if you only have zealots when your opponent does a muta switch. (Zealots can only attack ground and mutalisks fly, so zealots can't attack mutalisks; mutalisks are also faster than zealots.)

*By flawless, I mean macro doesn't falter because of micro elsewhere; often, even at the highest levels, players won't build new units because they're too busy controlling a big engagement or heavily multitasking (dropping at one point, defending a poke elsewhere, etc). If you look at it broadly, making the correct units is part of macro, but that's not what I'm talking about when I say flawless macro.

Comment author: Strangeattractor 29 January 2016 11:27:54AM 2 points [-]

Your comment made me think, and I'll look up some of the recommendations. I like the analogy with musicians and also the part where you talked about how the analogy breaks down.

However, I'd like to offer a bit of a different perspective to the original poster on this part of what you said.

To summarize: the math I think you're looking to learn is proofy, not computational, in nature.

Your advice is good, given this assumption. But this assumption may or may not be true. Given that the post says:

I don't care what field it is.

I think there's the possibility that the original poster would be interested in computational mathematics.

Also, it's not either/or; that's a false dichotomy. Learning both is possible and useful. You likely know this already, and perhaps the original poster does as well, but since the original poster is not familiar with much math, I thought I'd point it out in case it wasn't obvious. It's hard to tell, writing on a computer and imagining a person at the other end.

If the word "computational" is being used to mean following instructions by rote without really understanding why, or doing the same thing over and over with no creativity or insight, then it does not seem to be what the original poster is looking for. However, if it is used to mean creatively understanding real world problems, and formulating them well enough into math that computer algorithms can help give insights about them, then I didn't see anything in the post that would make me warn them to steer clear of it.

There are whole fields of human endeavor that use math and include the term "computational" and I wouldn't want the original poster to miss out on them because of not realizing that the word may mean something else in a different context, or to think that it's something that professional mathematicians or scientists or engineers don't do much. Some mathematicians do proofs most of the time, but others spend time on computation, or even proofs about computation.

Fields include computational fluid dynamics, computational biology, computational geometry...the list goes on.

Speaking of words meaning different things in different contexts, that's one thing that tripped me up when I was first learning some engineering and math beyond high school. When I read more advanced books, I knew when I was looking at an unfamiliar word that I had to look it up, but I hadn't realized that some words that I already was familiar with had been redefined to mean something else, given the context, or that the notation had symbols that meant one thing in one context and another thing in another context. For example, vertical bars on either side of something could mean "the absolute value of" or it could mean "the determinant of this matrix", and "normal forces" meant "forces perpendicular to the contact surface". Textbooks are generally terribly written and often leave out a lot.

In other words, the jargon can be sneaky and sound exactly like words that you already know. It's part of why mathematical books seem so nonsensical to outsiders.

Comment author: richard_reitz 29 January 2016 01:35:53PM *  2 points [-]

Excellent points; "rigorous" would have been a better choice. I haven't yet had the time to study any computational fields, but I'm assuming the ones you list aren't built on the "fuzzy notions, and hand-waving" that Tao talks about.

I should also add that I don't necessarily agree 100% with everything in Lockhart's Lament; I do think, however, that he does an excellent job of identifying problems in how secondary school math is taught, and a better job than I could of contrasting "follow the instructions" math with "real" math for a lay person.

Comment author: richard_reitz 29 January 2016 10:17:01AM *  7 points [-]

I once took a math course where the first homework assignment involved sending the professor an email that included what we wanted to learn in the course (this assignment was mostly for logistical reasons: professor's email now autocompletes, eliminating a trivial inconvenience of emailing him questions and such, professor has all our emails, etc). I had trouble answering the question, since I was after learning unknown unknowns, thereby making it difficult to express what exactly it was I was looking to learn. Most mathematicians I've talked to agree that, more or less, what is taught in secondary school under the heading of "math" is not math, and it certainly bears only a passing resemblance to what mathematicians actually do. You are certainly correct that the thing labelled in secondary schools as "math" is probably better learned differently, but insofar as you're looking to learn the thing that mathematicians refer to as "math" (and the fact you're looking at Spivak's Calculus indicates you, in fact, are), looking at how to better learn the thing secondary schools refer to as "math" isn't actually helpful. So, let's try to get a better idea of what mathematicians refer to as math and then see what we can do.

The two best pieces I've read that really delve into the gap between secondary school "math" and mathematician's "math" are Lockhart's Lament and Terry Tao's Three Levels of Rigour. The common thread between them is that secondary school "math" involves computation, whereas mathematician's "math" is about proof. For whatever reason, computation is taught with little motivation, largely analogously to the "intolerably boring" approach to language acquisition; proof, on the other hand, is mostly taught by proving a bunch of things which, unlike computation, typically takes some degree of creativity, meaning it can't be taught in a rote manner. In general, a student of mathematics learns proofs by coming to accept a small set of highly general proof strategies (to prove a theorem of the form "if P then Q", assume P and derive Q); they first practice them on the simplest problems available (usually set theory) and then on progressively more complex problems. To continue Lockhart's analogy to music, this is somewhat like learning how to read the relevant clef for your instrument and then playing progressively more difficult music, starting with scales. [1] There's some amount of symbol-pushing, but most of the time, there's insight to be gleaned from it (although, sometimes, you just have to say "this is the correct result because the algebra says so", but this isn't overly common).
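As a concrete illustration of one of these general proof strategies (assume P, derive Q; let an arbitrary element stand for "for all"), here is the kind of first proof a transition course might ask for. The specific claim is my own choice of a standard set-theory warm-up, not something from the pieces cited above:

```latex
\textbf{Claim.} If $A \subseteq B$ and $B \subseteq C$, then $A \subseteq C$.

\textbf{Proof.} Assume $A \subseteq B$ and $B \subseteq C$. Let $x \in A$ be
arbitrary. Since $A \subseteq B$, we have $x \in B$; since $B \subseteq C$,
we have $x \in C$. As $x$ was arbitrary, every element of $A$ is an element
of $C$, i.e.\ $A \subseteq C$. \qed
```

Note how the structure of the proof ("assume the hypotheses, take an arbitrary element, chase it through the definitions") is dictated by the shape of the statement, which is exactly the skill the transition course drills.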

Proofs themselves are interesting creatures. In most schools, there's a "transition course" that takes aspiring math majors who have heretofore only done computation and trains them to write proofs; any proofy math book written for any other course just assumes this knowledge but, in my experience (both personally and working with other students), trying to make sense of what's going on in these books without familiarity with what makes a proof valid or not just doesn't work; it's not entirely unlike trying to understand a book on arithmetic that just assumes you understand what the + and * symbols mean. This transition course more or less teaches you to speak and understand a funny language mathematicians use to communicate why mathematical propositions are correct; without taking the time to learn this funny language, you can't really understand why the proof of a theorem actually does show the theorem is correct, nor will you be able to glean any insight as to why, on an intuitive level, the theorem is true (this is why I doubt you'd have much success trying to read Spivak, absent a transition course). After the transition course, this funny language becomes second nature, it's clear that the proofs after theorem statements, indeed, prove the theorems they claim to prove, and it's often possible, with a bit of work [2], to get an intuitive appreciation for why the theorem is true.

To summarize: the math I think you're looking to learn is proofy, not computational, in nature. This type of math is inherently impossible to learn in a rote manner; instead, you get to spend hours and hours by yourself trying to prove propositions [3], which isn't dull, but may take some practice to appreciate (as noted below, if you're at the right level, this activity should be flow-inducing). The first step is the transition, which will teach you how to write proofs and how to tell correct proofs from incorrect ones; there will probably be some set theory.

So, you want to transition; what's the best way to do it?

Well, super ideally, the best way is to have an experienced teacher explain what's going on, connecting the intuitive with the rigorous, who is available to answer questions. For most things mathematical, assuming a good book exists, I think the material can be learned entirely from a book, but this is an exception. That said, How to Prove It is highly rated, I had a good experience with it, and others I've recommended it to have done well. If you do decide to take this approach and have questions, pm me your email address and I'll do what I can.


  1. This analogy breaks down somewhat when you look at the arc musicians go through. The typical progression for musicians I know is (1) start playing in whatever grade the school's music program starts, (2) focus mainly on ensemble (band, orchestra) playing, (3) after a high (>90%) attrition rate, we're left with three groups: those who are in it for easy credit (orchestra doesn't have homework!); those who practice a little, but are too busy or not interested enough to make a consistent effort; and those who are really serious. By the time they reach high school, everyone in this third group has private instructors and, if they're really serious about getting good, goes back and spends a lot of time practicing scales. Even at the highest level, musicians review scales, often daily, because they're the most fundamental thing: I once had the opportunity to ask Gloria dePasquale what the best way to improve general ability was, and she told me that there are 12 major scales and 36 minor scales and, IIRC, that she practices all of them every day. Getting back to math, there's a lot here that's not analogous. Most notably, there's no analogue to practicing scales, no fundamental-level thing that you can put large amounts of time into practicing and get general returns to mathematical ability: there's just proofs, and once you can tell a valid proof from an invalid proof, there's almost no value in studying set theory proofs very closely. There's certainly an aesthetic sense that can be refined, but studying whatever proofs happen to be at or slightly above your current level is probably the most helpful (as in flow): if it's too easy, you're just bored and learn nothing (there's nothing there to learn), and if it's too hard, you get frustrated and still learn nothing (since you're unable to understand what's going on).

  2. "With a bit of work", used in a math text, means that a mathematically literate reader who has understood everything up until the phrase's invocation should be able to come up with the result themselves, that it will require no real new insight; "with a bit of work, it can be shown that, for every positive integer n, (1 + 1/n)^n < e < (1 + 1/n)^(n+1)". This does not preclude needing to do several pages of scratch work or spending a few minutes trying various approaches until you figure out one that works; the tendency is for understatement. Related, most math texts will often leave proofs that require no novel insights or weird tricks as exercises for the reader. In Linear Algebra Done Right, for instance, Axler will often state a theorem followed by "as you should verify", which should require some writing on the reader's part; he explicitly spells this out in the preface, but this is standard in every math text I've read (and I only bother reading the best ones). You cannot read mathematics like a novel; as Axler notes, it can often take over an hour to work through a single page of text.
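The inequality quoted above is also easy to sanity-check numerically before attempting the scratch work; a minimal sketch of my own (not from the original comment):

```python
import math

# Check (1 + 1/n)^n < e < (1 + 1/n)^(n+1) for a range of positive integers n.
# The lower bound increases toward e and the upper bound decreases toward e.
for n in range(1, 1001):
    lower = (1 + 1 / n) ** n
    upper = (1 + 1 / n) ** (n + 1)
    assert lower < math.e < upper, f"inequality fails at n={n}"
```

Of course, a numeric check is not a proof; it just tells you the statement is worth the few minutes of trying various approaches.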

  3. Most math books present definitions, state theorems, and give proofs. In general, you definitely want to spend a bit of time pondering definitions: noticing why they're correct, how they match your intuition, and why other definitions weren't used. When you come to a theorem, you should always take a few minutes to try to prove it before reading the book's proof. If you succeed, you'll probably learn something about how to write proofs better by comparing what you have to what the book has; if you fail, you'll be better acquainted with the problem and thus have more of an idea as to why the book's doing what it's doing. It's just an empirical result (which I read ages ago and cannot find) that you'll understand a theorem better by trying to prove it yourself, successful or not. It's also good practice. There's some room for Anki (I make cards for definitions—word on front, definition on back—and theorems—for which reviews consist of outlining enough of a proof that I'm confident I could write it out fully if I so desired) but I spend the vast majority of my time trying to prove things.

Comment author: richard_reitz 04 December 2015 12:52:26PM 3 points [-]

It has happened more than once that a professor has assigned a textbook, which I bought, only for the professor to say in the first class that the only reason they assigned one is that they were required to, and that they will never use it. Holding off on buying textbooks until after the first class (or, I guess, emailing the professor to ask if they plan on using the textbook) would have saved me several hundred dollars. (Having textbooks to study from is nice—they are, to me, the most efficient way of getting up to speed in math or science—but the ones professors assign because they need to put something down tend not to be the best ones.)

Comment author: Gram_Stone 07 October 2015 12:47:18AM *  0 points [-]

Thanks for the feedback. I think you can construct all graphs and use it to prove the theorem if you prove that you can add an arbitrary number of additional edges and nodes to an arbitrary graph and keep the sum of the degrees of all nodes even, instead of just one additional node and one additional edge. I also see what you mean about this:

I can't quite tell if you actually rely on that construction.

I think the inductive hypothesis in the rest of that paragraph might be enough, and I just wrote down how I intuitively visualized the proof before that without realizing that it wasn't necessary (nor sufficient, I now know) for the argument to carry through.

If you have an idea of how you would write the proof, I'd be interested in seeing it. I looked at the book and the proof is actually even less formal there.

Comment author: richard_reitz 07 October 2015 12:45:18PM *  2 points [-]

Lemma: sum of the degrees of the nodes is twice the number of edges.

Proof: We proceed by induction on the number of edges. If a graph has 0 edges, then the sum of the degrees of the nodes is 0 = 2(0). Now, by way of induction, assume that, for all graphs with n edges, the sum of the degrees of the nodes is 2n; we wish to show that, for all graphs with n+1 edges, the sum of the degrees of the nodes is 2(n+1). Given a graph with n+1 edges, remove any one edge: the resulting graph has n edges, so by the inductive hypothesis the sum of its degrees is 2n. Adding the edge back increases the degree of each of its two endpoints by one, so the sum of the degrees of the nodes is (2n)+2 = 2(n+1). ∎

The theorem follows as a corollary.
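A quick empirical check of the lemma, in case it helps intuition (my sketch, not part of the proof):

```python
from itertools import combinations

def degree_sum(nodes, edges):
    """Sum of degrees: each node's degree is the number of edges touching it."""
    return sum(sum(1 for e in edges if node in e) for node in nodes)

# Example: the complete graph on 4 nodes has 6 edges and every node has degree 3.
nodes = [0, 1, 2, 3]
edges = list(combinations(nodes, 2))
assert degree_sum(nodes, edges) == 2 * len(edges)  # 12 == 2 * 6
```

Each edge contributes exactly 1 to the degree of each of its two endpoints, which is the same observation the inductive step rests on.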


If you want practice proving things and haven't had much experience so far, I'd recommend Mathematics for Computer Science, a textbook from MIT distributed under a free license, along with the associated video lectures *. To use Terry Tao's words, Sipser is writing at both levels 1 and 3: he's giving arguments an experienced mathematician is capable of filling in the details to form a rigorous argument, but also doing so in such a way that a level 1 mathematician can follow along. Critically, however, from what I understand from reading Sipser's preface, he's definitely not writing a book to move level 1 mathematicians to level 2, which is a primary goal of the MIT book. If you're looking to prove things because you haven't done it much before, I infer you're essentially looking to transition from level 1 to 2, hence the recommendation.

A particular technique I picked up from the MIT book, which I used here, was that, for inductive proofs, it's often easier to prove a stronger theorem, since it gives you stronger assumptions in the inductive step.

PM me if you want someone to look over your solutions (either for Sipser or the MIT book). In general, I'm a fan of learning from textbooks and believe that working things out for yourself without an instructor's help makes you stronger, but I'm also convinced that you need feedback from a human when you're first learning how to prove things.

* The lectures follow an old version of the book, which is ~350 pages shorter and, crucially, lacks exercises.

In response to Two Growth Curves
Comment author: RomeoStevens 02 October 2015 01:43:31AM *  15 points [-]

Related: The Valley of Bad X. Learning new skills is especially hard in domains in which your first few attempts are likely to fall far short of your mental picture of improvement or even make you worse initially. I find it helps to explicitly visualize people who I perceive as being skilled in X failing at it over and over again when they were first learning. Rather than think of myself as wanting to affiliate with the end result I think of myself as wanting to affiliate with the process.

Also related: Punctuated equilibrium skill growth vs linear skill growth (ht Ethan Dickinson). You will be especially discouraged if you are expecting linear growth and instead get lumpy growth.

Comment author: richard_reitz 02 October 2015 12:18:52PM 3 points [-]

It helps to explicitly visualize people who I perceive as being skilled in X failing at it over and over again

Some of the greatest value I've gotten out of attending math lectures comes from seeing math Ph.Ds (particularly good ones) make mistakes or even forget exactly how a proof works and have to dismiss class early. It never happened often, but just often enough to keep me from getting discouraged.

Comment author: FrameBenignly 28 September 2015 10:29:05PM 4 points [-]

This video talks about high school curriculum design issues, advocating a greater focus on concrete life skills and less focus on classes with "intangible" value like history or more advanced mathematics. If I recall correctly, he doesn't say anything about science class, though I think there's a lot to criticize there too. A lot of common counter-arguments to his point do not seem scientific. The argument that history teaches critical thinking, for instance, is very popular, but there's no good definition of critical thinking and the research seems to be all over the place. I generally agree that education should provide more direct value. The commentary I saw on it kept bringing up the issue of standardized testing, which is unrelated. I don't hold out much hope for improvements when the average person can't even stay on topic.

Comment author: richard_reitz 30 September 2015 08:50:27AM 4 points [-]

Paul Graham writes that studying fields with hard, solved problems (eg mathematics) is useful because it gives you practice solving hard problems, and the approaches and habits of mind you develop solving those problems are useful when you set out to tackle new (technical) problems. This claim seems at least plausible to me and seems to line up with my personal experience, but you seem like a person who might know why I shouldn't believe it, so I ask: is there any reason to doubt that the problem-solving approaches and habits of mind I develop studying mathematics will help me as I run into novel technical problems?

Comment author: Lumifer 28 September 2015 05:37:15PM 1 point [-]

not to give detailed feedback to bad answers

If your goal is to foster understanding instead of giving canned answers, this seems counterproductive.

Comment author: richard_reitz 30 September 2015 08:15:24AM *  4 points [-]

If you're after feedback-for-understanding, providing a student with a list of questions they got wrong and a good solutions manual (which you only have to write once) works most of the time (my guess is around 90% of the time, but I have low confidence in my estimates because I'm capable of successfully working through entire textbooks' worth of material and needing no human feedback, which I'm told is not often the case). Doing this should be more effective than having the error explained outright a la generation effect.

Another interesting result is that the best feedback for fostering understanding often comes not from experts, who have such a deep degree of understanding and automaticity that it impairs their ability to simulate and communicate with minds struggling with new material, but from students who just learned the material. There's a risk of students who believe the right thing for the wrong reason propagating their misunderstanding, but I think that pairing up a student who's struggling with some concept (i.e., throwing a solutions manual at them hasn't helped them bridge the conceptual gap that caused them to get the question wrong) with a student who understands it is often helpful. IIRC, Sal Khan described using this technique with some success with a team after the season had ended and could only describe its efficacy as "definitely witchcraft".

I think there's a place for graders to give detailed feedback to bad answers, but most of the time, it's better to force students to do the work themselves and locate their own errors/conceptual gaps, and in most of the remaining cases, to pawn off the responsibility to students (this could be construed as teachers being lazy, but it's also what, to my knowledge, produces the best learning outcomes). Since detailed feedback is only desirable after two rounds of other approaches that (in my deeply nonrepresentative experience) usually work, I don't think it makes sense to produce detailed feedback to every wrong answer.

Then again, I don't fully understand what context you're thinking in. In my original post, I was thinking about purely diagnostic math tests given to postsecondary students for employers that wouldn't so much as tell students which questions they got wrong, along the lines of the Royal Statistical Society's Graduate Diploma (five three-hour tests which grant a credential equivalent to a "good UK honours degree"). In writing this, I'm mostly imagining standardized math tests for secondary students in America (which, I'm given to understand, already have written components), which currently don't give per-question feedback, but changing that is much less of a pipe dream than creating tests that effectively test understanding. Come to think of it, I think the above approach applies even better to classroom instructors giving their own tests, at either the secondary or postsecondary level.

Tangentially related: the best professor I ever had would type 3–4 pages of general commentary (common errors and why they were wrong and how to do them better, as well as things the class did well) for the class after every problem set and test, generally by the next class. I found this commentary was extraordinarily helpful, not just because of feedback, but because (a) it helped dispel the misperception that everyone else understood everything and I was struggling because I was stupid, (b) taught us to discriminate between bad, mediocre, and good work, and (c) comments like "most of you did [x], which was suboptimal because of [y], but one of you did [z], which takes a bit more work but is a better approach because [~y]" really drove me to not do the minimum amount of work to get an answer when I could do a bit more work to get a stronger solution. (The course was in numerical methods so, as an example, we once had a problem where we had to use some technique where error exploded (I've now forgotten since I didn't have Anki back then) to locate a typo in some numeric data. A sufficient answer would have been to identify the incorrect entry; a stronger answer was to identify the incorrect entry, figure out the error (two digits typed in the wrong order), and demonstrate that fixing the error caused explosions to not happen.)

Comment author: NancyLebovitz 28 September 2015 04:34:48PM *  6 points [-]

How do you identify people who can grade answers to questions which show deep understanding?

Comment author: richard_reitz 30 September 2015 07:12:15AM *  4 points [-]

If we assume that the questions are designed such that a student can answer them upon initial exposure if and only if they deeply understand the material, then the question of identifying graders turns into the much easier question of identifying people who can discriminate between valid and invalid answers. I'm told that being able to discriminate between valid and invalid responses is a necessary condition for subject expertise, so anyone who's a relevant expert works. One way to demonstrate expertise is by building something that requires expertise. In an extreme example, I'm confident that Grigori Perelman understands topology because he proved the Poincare conjecture, and, for similar reasons, I'm (mostly) confident that Ph.Ds are experts. If we have well-designed tests, we can set the set of people qualified to grade tests as "has built something requiring expertise or has passed a well-designed test graded by someone already in this set."

Comment author: richard_reitz 28 September 2015 01:59:31PM *  6 points [-]

It seems conventional wisdom that tests are generally gameable in the sense that an (most?) effective way to produce the best scores involves teaching password guessing rather than actually learning material deeply, i.e. such that the student can use it in novel and useful ways. Indeed, I think this is the case for many (most, even) tests, but also think it possible to write tests that are most easily passed by learning the material deeply. In particular, I don't see how to game questions like "state, prove, and provide an intuitive justification for Pascal's combinatorial identity" or "Under what conditions does f(x) = ax^3 + bx^2 + cx + d have only one critical point?", but that's more a statement about my mind than the gameability of tests. I would greatly appreciate learning how a test consisting of such questions could be gamed, thereby unlearning an untrue thing; and if no one here can (or, at least, is willing to take the time to) explain how such a thing could be done, well, that's useful to know, too.
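For what it's worth, the cubic question above reduces to counting distinct real roots of f'(x) = 3ax^2 + 2bx + c; a short sketch of my own (assuming a ≠ 0 so f is genuinely cubic):

```python
def num_critical_points(a, b, c, d):
    """Count distinct real roots of f'(x) = 3ax^2 + 2bx + c (assumes a != 0)."""
    disc = (2 * b) ** 2 - 4 * (3 * a) * c  # = 4 * (b^2 - 3*a*c)
    if disc > 0:
        return 2
    if disc == 0:
        return 1  # exactly one critical point iff b^2 == 3*a*c
    return 0

assert num_critical_points(1, 0, 0, 0) == 1   # f(x) = x^3: double root at x = 0
assert num_critical_points(1, 0, -3, 0) == 2  # f(x) = x^3 - 3x: local max and min
assert num_critical_points(1, 0, 3, 0) == 0   # f(x) = x^3 + 3x: f' never zero
```

The point of such a question, of course, is that computing the discriminant is the easy part; explaining why the b^2 = 3ac case gives exactly one critical point (a repeated root of f', so an inflection with horizontal tangent) is what demonstrates understanding, and that part resists password guessing.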

View more: Prev | Next