Dagon comments on Open thread, Sep. 28 - Oct. 4, 2015 - Less Wrong Discussion

Post author: MrMind, 28 September 2015 07:13AM

Comment author: richard_reitz, 28 September 2015 01:59:31PM, 6 points

It seems to be conventional wisdom that tests are generally gameable, in the sense that an effective (perhaps the most effective) way to produce the best scores involves teaching password-guessing rather than having students actually learn the material deeply, i.e. such that they can use it in novel and useful ways. Indeed, I think this is the case for many (most, even) tests, but I also think it possible to write tests that are most easily passed by learning the material deeply. In particular, I don't see how to game questions like "state, prove, and provide an intuitive justification for Pascal's combinatorial identity" or "under what conditions does f(x) = ax^3 + bx^2 + cx + d have only one critical point?", but that's more a statement about my mind than about the gameability of tests. I would greatly appreciate learning how a test consisting of such questions could be gamed, thereby unlearning an untrue thing; and if no one here can (or, at least, is willing to take the time to) explain how such a thing could be done, well, that's useful to know, too.
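[Editor's note: for reference, the second example question has a compact answer. This working is mine, not from the thread, and assumes the standard first-derivative characterization of critical points:]

```latex
% Critical points of f(x) = ax^3 + bx^2 + cx + d are the roots of
\[
f'(x) = 3ax^2 + 2bx + c.
\]
% For a genuine cubic (a \neq 0), f' is quadratic, so f has exactly
% one critical point iff the discriminant of f' vanishes:
\[
(2b)^2 - 4(3a)c = 0 \quad\Longleftrightarrow\quad b^2 = 3ac.
\]
% (If a = 0 and b \neq 0, f is quadratic and always has exactly one
% critical point, at x = -c/(2b).)
```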

Comment author: Dagon, 28 September 2015 02:54:50PM, 3 points

Testing and credentialism are a mess. The basic problem is that it's unclear what the result should measure: how much the student knows, how much the student has learned, how intelligent the student is, how conscientious, or how well the student's capabilities line up with the topic. The secondary problem is that in most settings, the test should be both hard-to-game AND perfectly objective, such that there is no argument about the correctness of an answer (and such that grading can be done quickly).

I spend a lot of time interviewing and training interviewers for tech jobs. This doesn't have the first problem: we have a clear goal (determine whether the candidate is likely to perform well in the role, usually tested by having them solve problems similar to those they would face in the role). The second difficulty is similar — a good interview generates actual evidence of the candidate's likely success, not just domain knowledge. This takes a lot of interviewing skill to get the best from the candidate, and a lot of judgement in how to evaluate the approach and weigh the various aspects tested. We put a lot of time into this, and accept the judgement aspect rather than trying to reduce the time spent, automate the results, or be purely objective in assessment.