1 min read · 22nd Dec 2009 · 31 comments

The social bookmarking site MetaFilter has a sister site called MetaTalk, which works the same way but is devoted entirely to talking about MetaFilter itself: arguments about arguments, discussions about discussions, proposals for changes in site architecture, and so on.

Arguments about arguments are often less productive than the arguments they are about, but they CAN be quite productive, and there's certainly a place for them. The only thing wrong with them is when they obstruct the discussion that spawned them, and so the idea of splitting off metatalk into its own site is really quite a clever one.

LessWrong's problem is a peculiar one. It is ENTIRELY devoted to meta-arguments, to the extent that people have to shoehorn anything else they want to talk about into a cleverly (or not so cleverly) disguised example of some more meta topic. It's a kite without a string.

Imagine you had been around the internet, trying to have a rational discussion about topic X, but unable to find an intelligent venue, and then you stumbled upon LessWrong. "Aha!" you say. "Finally, a community making a concerted effort to be rational!"

But to your dismay, you find that the ONLY thing they talk about is being rational, and a few other subjects that have been apparently grandfathered in. It's not that they have no interest in topic X, there's just no place on the site they're allowed to talk about it.

What I propose is a "non-meta" sister site, where people can talk and think about anything BESIDES talking and thinking. Well, you know what I mean.

Yes?

Would've been a nice comment in the December 2009 Meta Thread.

I wouldn't have read it then.

I wouldn't have read it then.

This is perhaps not entirely divorced from the point.

This reminds me of the prospect of rationalist arbitration that was brought up a while ago. I like the idea of a place where above-averagely rational folks congregate for purposes that aren't related exclusively to the pursuit of ever-greater rationality. However, Less Wrong's only barrier to entry (in the comments, at any rate, which I find comprise about 60% of the site's value) is obscurity and esotericism. Anybody can make an account and post comments (if they suck, they become invisible to people with certain preference settings, but that doesn't prevent further posting). Our high signal-to-noise ratio comes about because people who spout noise mostly have no interest in us; if we start a sister site where we talk about D&D and kittens and how to get cranberry juice out of upholstery, where's the wall around our garden?

If that genuinely became a problem, we could require people to solve a simple Bayesian problem before registering.

Sort of a rationalism troll CAPTCHA. I'd like it, but solving those problems requires math - I probably would never have joined if I'd had to do it.

Another advantage of this type of CAPTCHA is that it doesn't discriminate against intelligent computer programs who aren't very good at visual character recognition.

How would you feel about a less quantitative, more philosophical reasoning test?

A simpler idea might be to just have a karma filter, so no one below a threshold karma value on LW could post on meta.

A more philosophical reasoning test would not feature scary scary math. It would probably, of necessity, be more subjective, which could create a bottleneck in processing test results if we didn't want to arbitrarily limit possible responses and miss out on some possible nuance.

But now that you've actually been here for a while, you probably wouldn't find it as much of a barrier. Right? So it wouldn't be so much of a math filter, as a having-read-LW filter, which is what we want.

I didn't learn about Bayes' theorem for the first time on LW; I learned it in my epistemology class when I was a sophomore in college. Having read LW has not made me better at or more affectionate towards or more enthusiastic about spending time on math. (It probably has contributed towards convincing me that if I devoted a lot of time to it, I could become good at math, but hasn't motivated me to do so.) I've come to value participating on LW enough that I'd solve a simple Bayes problem to stay. (Or at least goad a friend into giving me the answer.)

But my point wasn't about me so much - it was about future possible contributors. Assuming people here think it's good to have me around, introducing barrier conditions that would have deterred me may be unwise, because they could deter people like me.

I'm curious to see an example or two of what these Bayesian problems might look like, if anybody has any ideas. I mean, it may be relevant to know just what difficulty level this test would be. Of course, what's simple for some LessWrong contributors is probably not simple for everyone.

The standard one goes something like, "The dangerous disease itchyballitis has a frequency of 1% in the general population of men. The test for the disease has an accuracy of 95% (for both false positives and false negatives). A randomly selected dude gets tested and the result is positive. What's the probability he has the disease?"

But most people get that wrong. A correct answer is more likely when the problem is phrased in equivalent but more concrete terms as follows: "The dangerous disease itchyballitis affects 100 out of 10,000 men. The test for the disease gives the correct answer 95 times out of 100. A randomly selected dude gets tested and the result is positive. What's the chance he has the disease?"

Or, for the approximate answer, just compare the base rate with the false positive rate (multiplying by .9something has small impacts that mostly cancel out). About 1% of people test positive due to having the disease (a bit less, actually), about 5% of people test positive because of an inaccurate test (a bit less, actually), so a person with a positive test has about a 1 in 6 chance of having the disease.

p(test_positive|itchyballitis) = 0.95

p(test_positive|!itchyballitis) = 0.05

p(itchyballitis) = 0.01

p(test_positive) = p(test_positive|itchyballitis) * p(itchyballitis) + p(test_positive|!itchyballitis) * p(!itchyballitis)

= 0.95 * 0.01 + 0.05 * 0.99

= 0.059

p(itchyballitis|test_positive) = (p(test_positive|itchyballitis) * p(itchyballitis)) / p(test_positive)

= (0.95 * 0.01) / 0.059

= 0.161

Edit: If anyone else is thinking of writing math stuff in a comment, don't do what I did. Read http://wiki.lesswrong.com/wiki/Comment_formatting first! Also, thanks Vladimir_Nesov.
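For anyone who wants to check the arithmetic, here is a minimal sketch of the same computation in Python (the numbers are the ones from the example above; the variable names are my own):

```python
# Checking the worked Bayes computation: 1% base rate, 95% test accuracy.
p_disease = 0.01                  # base rate of the disease
p_pos_given_disease = 0.95        # true positive rate
p_pos_given_healthy = 0.05        # false positive rate

# Total probability of a positive test (law of total probability)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(disease | positive test)
posterior = p_pos_given_disease * p_disease / p_pos

print(round(p_pos, 3))      # 0.059
print(round(posterior, 3))  # 0.161
```

So even with a positive result, the man only has about a 16% chance of having the disease, because the false positives from the healthy 99% swamp the true positives from the sick 1%.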

It's more intuitive to use odds. Prior odds are 1:99, likelihood ratio (strength of evidence) given by a positive test is 95:5, so posterior odds are (1:99)*(95:5)=19:99, or probability of 19/118 (about 16%).
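The odds calculation above can be sketched with exact fractions (a hypothetical check, not part of the original comment):

```python
# Posterior via odds: posterior odds = prior odds * likelihood ratio.
from fractions import Fraction

prior_odds = Fraction(1, 99)        # 1:99 odds of having the disease
likelihood_ratio = Fraction(95, 5)  # positive test is 19x more likely if sick

posterior_odds = prior_odds * likelihood_ratio          # (1:99)*(95:5) = 19:99
posterior_prob = posterior_odds / (1 + posterior_odds)  # odds -> probability

print(posterior_odds)  # 19/99
print(posterior_prob)  # 19/118
```

Using exact fractions makes it obvious that 19:99 odds and a probability of 19/118 are the same answer as the 0.161 computed above.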

Use a backslash before stars (\*) to have them appear in the comment (*) without turning the text into italics.

I agree with Alicorn. Unless you want an echo chamber, math problems seem like a bad filter. Diversity is valuable.

You don't think there is diversity of thought among trained mathematicians?

Is there a small Bayesian quiz of sorts lying around here somewhere? I would certainly benefit from such a thing while learning the ropes.

Another possibility for a wall: If this is built, maybe it should have a robots.txt specifying that it doesn't show up in search results, and it could bounce links from all but a handful of sites (LessWrong proper, the wiki, OvercomingBias, etc). That'll make it less likely that someone will wander in because they're interested in getting kitten juice out of their upholstery, become upset at our godless utility-maximizing contents, and make a big fuss about it. To make sure people who are interested in the main contents of the LW site are aware of it, we can have a (collapsible) banner advertising the new site for registered users, or show the same to people whose first visit was x-many days ago (determined via cookie).
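As a rough sketch, a robots.txt that asks all well-behaved crawlers to skip the entire site would look like this (note that robots.txt only affects crawlers; bouncing incoming links would need a separate server-side referrer check):

```
# Ask all crawlers to skip the whole site
User-agent: *
Disallow: /
```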

So, basically, it could be an "open secret," invisible to the majority of the Internet.

I like to think there is such a site and I just haven't impressed the right people enough, or accumulated enough karma, to warrant an invitation.

I haven't either, if there is.

Of course you would say that. It's a secret. :-)

Don't think us journeymen rationalists haven't heard the rumors.

He says, creating the rumor.

I could have just ignored you. I think the obligation to produce a reply here is sufficiently low that if there were such a site and such a secret (to which I was party), I would not find my only option to be to lie.

I agree that applied rationality is important, but I'm not sure that there needs to be another site for that to happen. This recent post, for example, seems like an example of exactly what the OP wants to see. Perhaps what should be done is creating an official "Applied Rationality" tag for all such posts and an easy way to filter them. That way, if a bad scenario happens where new readers more interested in politicized fighting than rationality are drawn to this site because there's a discussion on gun control, they can be easily quarantined. But if this site maintains its high signal/noise ratio, the community benefits from trying out its tools in action.

All we need is a message board.

Give it a rest, please.

Kites without strings are fun if you've ever ridden in one... They're called airplanes... (Trying not to sound like a dick when I say this)... I get the meaning of meta-talk though...

My point is that meta-talk can take one places just as well as regular talk may... It's just that meta-talk might just be the ride to every location (at least that IS the implication that I see)... Or is that just me... If it's just me... I'll shut up...

Please, don't abuse the ellipses.

I see that people don't like my ellipses. Check that.