sixes_and_sevens comments on Open thread, 11-17 August 2014 - Less Wrong

5 Post author: David_Gerard 11 August 2014 10:12AM




Comment author: sixes_and_sevens 12 August 2014 01:52:22PM 1 point

I've often considered a self-assessment system where the sitter is prompted with a series of terms from the topic at hand, and asked to rate their understanding on a scale of 0-5, with 0 being "I've never heard of this concept", and 5 being "I could build one of these myself from scratch".

The terms are presented in a random order, and include red-herring terms that have nothing to do with the topic at hand but sound plausible. Whoever provides the dictionary of terms should have some idea of the relative difficulty of each term, but you could refine it further and calibrate it against a sample of known, diverse users (novices, high-schoolers, undergrads, etc.).
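The term bank and the randomized prompting could look something like this minimal sketch (all term names and difficulty values are invented for illustration; nothing here comes from a real calibration):

```python
import random

# Illustrative term bank for one topic. Real terms carry a 1-5 difficulty
# assigned by a domain expert; red herrings carry 0 and should be rated 0.
terms = {
    "linked list": 1,
    "hash collision": 2,
    "amortized analysis": 3,
    "persistent treap": 4,
    "cache-oblivious B-tree": 5,
    "inverted stack heap": 0,  # red herring: plausible-sounding, not real
}

# Present the terms in a random order so difficulty gives no positional cue.
quiz = list(terms)
random.shuffle(quiz)
for term in quiz:
    print(f"Rate your understanding of '{term}' on a scale of 0-5")
```

The shuffle matters: if terms appeared in difficulty order, a sitter could fake a plausible-looking profile just by rating each term lower than the last.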

When someone sits the test, you report their overall score relative to your calibrated sitters ("You scored 76, which puts you at undergrad level"), but you also report something like the Spearman rank correlation coefficient of their answers against the difficulty of the terms. This provides a consistency check for their answers. If they frequently claim greater understanding of advanced concepts than basic ones, their understanding of the topic is almost certainly off-kilter (or they're lying). The presence of red-herring terms (which should all have a canonical score of 0) means the rank-correlation consistency check is still meaningful for domain experts, or for people hitting the same value for every term.
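The consistency check above can be sketched in a few lines. Here Spearman's coefficient is computed by rank-transforming both vectors and taking their Pearson correlation; the "canonical" vector is a hypothetical expected-score profile (basic terms high, advanced terms low, red herrings 0), and the sample ratings are invented:

```python
def ranks(xs):
    """Average 1-based ranks, with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Canonical expected scores: basic -> advanced, red herring last (0).
canonical    = [5, 4, 3, 2, 1, 0]
# A consistent sitter: strong on basics, weaker on advanced, 0 on the herring.
consistent   = [5, 4, 4, 3, 1, 0]
# An off-kilter sitter: claims advanced mastery, falls for the red herring.
inconsistent = [1, 2, 3, 5, 5, 4]

print(round(spearman(canonical, consistent), 2))    # ≈ 0.99
print(round(spearman(canonical, inconsistent), 2))  # ≈ -0.81
```

In practice you'd likely use `scipy.stats.spearmanr` rather than hand-rolling the ranks; the point is that a strongly negative coefficient flags exactly the "advanced rated above basic" pattern described above.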

Actually, this seems like a very good learning-a-new-web-framework dev project. I might give this a go.

Comment author: somnicule 13 August 2014 11:10:00PM 2 points

Look up Bayesian Truth Serum. It's not exactly what you're talking about, but it's a generalized way to elicit subjective data. Not certain of its viability for individual rankings, though.

Comment author: sixes_and_sevens 14 August 2014 09:12:01AM 1 point

This is all sorts of useful. Thanks.

Comment author: Luke_A_Somers 12 August 2014 02:57:59PM 2 points

One problem that could crop up, if you're not careful, is a red-herring term actually being used in some educational source you didn't consider: a class, say, or a nonstandard textbook. I have a non-Euclidean geometry book that uses names for Euclidean geometry features that I certainly never encountered in geometry class. If those terms had been placed as red herrings, I would have given them a non-zero rating.

Comment author: NancyLebovitz 12 August 2014 03:25:08PM 0 points

Who's going to do the rather substantial amount of work needed to put the system together?

Comment author: sixes_and_sevens 12 August 2014 04:59:06PM 2 points

Do you mean building the system, or populating it with content? The former would be "me, unless I get bored or run out of time and impetus", and the latter "whichever domain experts I can convince to list and rank terms from their discipline".

Comment author: NancyLebovitz 12 August 2014 07:57:46PM 1 point

I was thinking about the work involved in populating it.