The social bookmarking site MetaFilter has a sister site called MetaTalk, which works the same way but is devoted entirely to talking about MetaFilter itself: arguments about arguments, discussions about discussions, proposals for changes in site architecture, and so on.
Arguments about arguments are often less productive than the arguments they are about, but they CAN be quite productive, and there's certainly a place for them. The only thing wrong with them is when they obstruct the discussion that spawned them, so splitting MetaTalk off into its own site is really quite a clever idea.
LessWrong's problem is a peculiar one. It is ENTIRELY devoted to meta-arguments, to the extent that people have to shoehorn anything else they want to talk about into a cleverly (or not so cleverly) disguised example of some more meta topic. It's a kite without a string.
Imagine you had been all over the internet, trying to have a rational discussion about topic X but unable to find an intelligent venue, and then stumbled upon LessWrong. "Aha!" you say. "Finally, a community making a concerted effort to be rational!"
But to your dismay, you find that the ONLY thing they talk about is being rational, plus a few other subjects that have apparently been grandfathered in. It's not that they have no interest in topic X; there's just no place on the site where they're allowed to talk about it.
What I propose is a "non-meta" sister site, where people can talk and think about anything BESIDES talking and thinking. Well, you know what I mean.
Yes?
Another advantage of this type of CAPTCHA is that it doesn't discriminate against intelligent computer programs that aren't very good at visual character recognition.
How would you feel about a less quantitative, more philosophical reasoning test?
A simpler idea might be just to have a karma filter, so that no one below a threshold karma value on LW could post on the meta site.
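A minimal sketch of what such a filter might look like, assuming a hypothetical per-user karma value and an arbitrary threshold chosen purely for illustration (nothing here reflects how LW actually stores karma):

```python
# Minimal sketch of a karma filter, assuming a hypothetical numeric
# karma value per user. The threshold is arbitrary and would be a
# site configuration setting, not something LW actually defines.

META_KARMA_THRESHOLD = 100  # hypothetical cutoff

def can_post_on_meta(user_karma: int, threshold: int = META_KARMA_THRESHOLD) -> bool:
    """Return True if the user's karma meets the threshold for meta posting."""
    return user_karma >= threshold

# Example: a user with 42 karma is filtered out; one with 250 passes.
print(can_post_on_meta(42))   # False
print(can_post_on_meta(250))  # True
```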
A more philosophical reasoning test would not feature scary, scary math. It would probably, of necessity, be more subjective, which could create a bottleneck in processing test results unless we arbitrarily limited the range of possible responses and thereby missed out on some of the nuance.