
nshepperd comments on On the importance of Less Wrong, or another single conversational locus - Less Wrong

84 points · Post author: AnnaSalamon 27 November 2016 05:13PM


Comment author: nshepperd 27 November 2016 07:06:01PM 14 points

I think you're right that wherever we go next needs to be a clear Schelling point. But I disagree on some details.

  1. I do think it's important to have someone clearly "running the place". A BDFL, if you like.

  2. Please no. The comments on SSC are for me a case study in exactly why we don't want to discuss politics.

  3. Something like reddit/hn involving humans posting links seems ok. Such a thing would still be subject to moderation. "Auto-aggregation" would be bad however.

  4. Sure. But if you want to replace the karma system, be sure to replace it with something better, not worse. SatvikBeri's suggestions below seem reasonable. The focus should be on maintaining high standards and certainly not encouraging growth in new users at any cost.

  5. I don't believe that the basilisk is the primary reason for LW's brand rust. As I see it, we squandered our "capital outlay" of readers interested in actually learning rationality (which we obtained due to the site initially being nothing but the sequences) by doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November). I, personally, stopped commenting almost completely quite a while ago, because doing so is no longer rewarding.

Comment author: Sniffnoy 30 November 2016 08:39:31AM * 12 points

doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November).

This is important. One of the great things about LW is/was the "LW consensus", so that we don't constantly have to spend time rehashing the basics. (I dunno that I agree with everything in the "LW consensus", but then, I don't think anyone entirely did except Eliezer himself. By "the basics", I mean, I guess, a more universally agreed-on, stripped-down core of it.) When someone shows up saying "But what if nothing is real?", we don't have to debate them. That's the sort of thing it's useful to just downvote (or otherwise discourage, if we're making a new system), no matter how nicely it may be said, because no productive discussion can come of it. People complained about how people would say "read the sequences", but seriously, it saved a lot of trouble.

There were occasional interesting and original objections to the basics. I can't find it now, but there was an interesting series of posts responding to this post of mine on Savage's theorem; that response argued that no, we shouldn't use probability (something others had often asserted, but with much less reason). It is indeed possible to come up with intelligent objections to what we consider the basics here. But most of the objections that came up were unoriginal and uninformed, and could, in fact, correctly be answered with "read the sequences".

Comment author: TheAncientGeek 04 December 2016 01:12:34PM * 5 points

That's the sort of thing it's useful to just downvote (or otherwise discourage, if we're making a new system), no matter how nicely it may be said, because no productive discussion can come of it.

When it's useful, it's useful; when it's damaging, it's damaging. It's damaging when the sequences don't actually solve the problem. The outside view is that all too often one is directed to the sequences only to find that the selfsame objection one has made has also been made in the comments and has not been answered. It's just too easy to silently downvote, or to write "read the sequences". In an alternative universe there is an LW where people don't say RTFS unless they have carefully checked that the problem has really been resolved, rather than superficially pattern-matching. And the overuse of RTFS is precisely what feeds the impression that LW is a cult...that's where the damage is coming from.

Unfortunately, although all of that is fixable, it cannot be fixed without "debating philosophy".

ETA

Most of the suggestions here have been about changing the social organisation of LW, or changing the technology. There is a third option which is much bolder than either of those: redoing rationality. Treat the sequences as a version 0.0 in need of improvement. That's a big project which will provide focus, and send a costly signal of anti-cultishness, because cults don't revise doctrine.

Comment author: Alexei 05 December 2016 11:19:19PM 2 points

Good point. I actually think this can be fixed with software. StackExchange features are part of the answer.

Comment author: TheAncientGeek 06 December 2016 08:54:26AM 1 point

I'm not sure what you mean. Developing Sequences 0.1 can be done with the help of technology, but it can't be done without community effort, or without a rethink of the status of the sequences.

Comment author: gwillen 27 November 2016 10:59:00PM * 7 points

I think the basilisk is at least a very significant contributor to LW's brand rust. In fact, guilt by association with the basilisk via LW is the reason I don't like to tell people I went to a CFAR workshop (because rationality -> "those basilisk people, right?")

Comment author: John_Maxwell_IV 28 November 2016 03:26:11AM * 2 points

Reputations seem to be very fragile on the Internet. I wonder if there's anything we could do about that? The one crazy idea I had was (rot13'd so you'll try to come up with your own idea first): znxr n fvgr jurer nyy qvfphffvba vf cevingr, naq gb znxr vg vzcbffvoyr gb funer perqvoyr fperrafubgf bs gur qvfphffvba, perngr n gbby gung nyybjf nalbar gb znxr n snxr fperrafubg bs nalbar fnlvat nalguvat.

Comment author: ingres 28 November 2016 09:22:05PM 2 points

Ooh, your idea is interesting. Mine was to perngr n jro bs gehfg sbe erchgngvba fb gung lbh pna ng n tynapr xabj jung snpgvbaf guvax bs fvgrf/pbzzhavgvrf/rgp, gung jnl lbh'yy xabj jung gur crbcyr lbh pner nobhg guvax nf bccbfrq gb univat gb rinyhngr gur perqvovyvgl bs enaqbz crbcyr jvgu n zrtncubar.
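
For anyone who wants to decode the rot13'd ideas above after coming up with their own, Python's standard `codecs` module ships a built-in rot13 text transform (a minimal sketch; the sample string is my own, not from either comment):

```python
import codecs

def rot13(text: str) -> str:
    """Apply the self-inverse rot13 letter substitution to a string."""
    return codecs.encode(text, "rot13")

sample = "Uryyb, Yrff Jebat!"
print(rot13(sample))                    # → Hello, Less Wrong!
print(rot13(rot13(sample)) == sample)   # rot13 is its own inverse → True
```

Because each letter is shifted by exactly half the alphabet, encoding and decoding are the same operation, which is what makes rot13 a politeness convention rather than real encryption.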

Comment author: TheAncientGeek 28 November 2016 02:05:23PM 3 points

"debating philosophy"

As opposed to what? Memorising the One True Philosophy?

Comment author: Vaniver 28 November 2016 05:07:44PM 5 points

As opposed to what? Memorising the One True Philosophy?

The quotes signify that they're using that specifically as a label; in context, it looks like they're pointing to the failure mode of preferring arguments as verbal performance to arguments as issue resolution mechanism. There's a sort of philosophy that wants to endlessly hash out the big questions, and there's another sort of philosophy that wants to reduce them to empirical tests and formal models, and we lean towards the second sort of philosophy.

Comment author: TheAncientGeek 28 November 2016 06:16:14PM 2 points

How many problems has the second sort solved?

Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?

Comment author: Vaniver 28 November 2016 08:04:10PM * 5 points

How many problems has the second sort solved?

Too many for me to quickly count?

Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?

Yes. It seems to me that both of those factors drive discussions, that most conversations about philosophical problems can easily be classified as mostly driven by one or the other, and that it makes sense to separate conversations where the difficulty is natural from those where it is manufactured.

I think a fairly large part of the difference between LWers and similarly intelligent people elsewhere is the sense that it is possible to differentiate conversations based on the underlying factors, and that it isn't always useful to manufacture difficulty as an opportunity to display intelligence.

Comment author: Kaj_Sotala 29 November 2016 10:44:47AM 2 points

Too many for me to quickly count?

Name three, then. :)

Comment author: Vaniver 29 November 2016 04:18:16PM 3 points

What I have in mind there is basically 'approaching philosophy like a scientist', and so under some views you could chalk up most scientific discoveries there. But focusing on things that seem more 'philosophical' than not:

How to determine causality from observational data; where the perception that humans have free will comes from; where human moral intuitions come from.
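
As a concrete illustration of the first item (a sketch under my own assumptions, not something from the thread): conditional-independence patterns in purely observational data can distinguish a causal chain from a collider, which is the kind of result Pearl-style causal discovery formalizes.

```python
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def residuals(ys, xs):
    """Residuals of ys after simple linear regression on xs
    (used to estimate dependence conditional on xs)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)
    return [b - my - beta * (a - mx) for a, b in zip(xs, ys)]

random.seed(0)
N = 20000

# Chain X -> Y -> Z: X and Z are correlated, but independent given Y.
x = [random.gauss(0, 1) for _ in range(N)]
y = [a + random.gauss(0, 1) for a in x]
z = [b + random.gauss(0, 1) for b in y]
chain_marginal = corr(x, z)                                    # ~0.58
chain_given_y = corr(residuals(x, y), residuals(z, y))         # ~0.0

# Collider X -> Y <- Z: X and Z are independent, but dependent given Y.
x2 = [random.gauss(0, 1) for _ in range(N)]
z2 = [random.gauss(0, 1) for _ in range(N)]
y2 = [a + c + random.gauss(0, 1) for a, c in zip(x2, z2)]
collider_marginal = corr(x2, z2)                               # ~0.0
collider_given_y = corr(residuals(x2, y2), residuals(z2, y2))  # ~-0.5

print(chain_marginal, chain_given_y, collider_marginal, collider_given_y)
```

Both structures are "just correlations" marginally, yet conditioning on the middle variable produces opposite signatures, so some causal structure really is recoverable without intervention.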

Comment author: TheAncientGeek 04 December 2016 01:01:10PM * 2 points

Approaching philosophy as science is not new. It has had a few spectacular successes, such as the wholesale transfer of cosmology from philosophy to science, and a lot of failures, judging by the long list of unanswered philosophical questions (about 200, according to Wikipedia). It also has the special pitfall of philosophically uninformed scientists answering the wrong question:

How to determine causality from observational data;

What causality is is the correct question.

where the perception that humans have free will comes from;

Whether humans have the power of free will is the correct question.

where human moral intuitions come from.

Whether human moral intuitions are correct is the correct question.

Comment author: Vaniver 04 December 2016 09:46:56PM 2 points

What causality is is the correct question.

Oh, if you count that one as a question, then let's call that one solved too.

Whether humans have the power of free will is the correct question.

Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.

Whether human moral intuitions are correct is the correct question.

Disagree for roughly the same reason; the question of where the word "correct" comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.

Comment author: TheAncientGeek 05 December 2016 07:34:00PM * 1 point

What causality is is the correct question.

Oh, if you count that one as a question, then let's call that one solved too.

Solved where?

Whether humans have the power of free will is the correct question.

Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.

How can philosophers be systematically wrong about the nature of their questions? And what makes you right?

Of course, inasmuch as you agree with Y., you are going to agree that the only question to be answered is where the perception comes from, but this is about truth, not opinion: the important point is that he never demonstrated that.

Whether human moral intuitions are correct is the correct question.

Disagree for roughly the same reason; the question of where the word "correct" comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.

If moral intuitions come from God, that might underpin correctness, but things are much less straightforward under naturalistic explanations.

Comment author: WalterL 01 December 2016 07:44:23PM 0 points

Scientists don't approach philosophy though, they run screaming in the other direction.

The Scientific Method doesn't work on untestable stuff.

Comment author: MugaSofer 29 November 2016 12:02:42PM * 3 points

Off the top of my head: Fermat's Last Theorem, whether slavery is licit in the United States of America, and the origin of species.

Comment author: TheAncientGeek 29 November 2016 01:59:44PM -2 points

Is that a joke?

Comment author: TheAncientGeek 29 November 2016 03:16:48PM 1 point

Too many for me to quickly count?

The last time I counted I came up with two and a half.

Comment author: eagain 23 January 2017 08:25:07PM 0 points

Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?

I've considered that view and found it wanting, personally. Not every problem can be solved right now with an empirical test or a formal model. However, most that can be solved right now can be solved in such a way, and most that can't be solved in such a way right now can't be solved at all right now. Adding more "hashing out of big questions" doesn't seem to actually help; it just results in someone eventually going meta and questioning whether philosophy is even meant to make progress towards truth and understanding anyway.

Comment author: TheAncientGeek 23 January 2017 10:22:27PM 0 points

Can you tell which problems can never be solved?

Comment author: eagain 02 February 2017 05:13:16AM 0 points

Only an ill-posed problem can never be solved, in principle.

Comment author: TheAncientGeek 03 February 2017 01:40:53PM 0 points

Is there a clear, algorithmic way of determining which problems are ill posed?

Comment author: Cloakless 16 July 2017 05:03:28PM 0 points

Yeah, you just need a halting oracle and you're sorted.
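
The joke is load-bearing: no halting oracle can exist, by the standard diagonalization argument. A minimal sketch (the function names here are mine, purely illustrative):

```python
def spite(halts):
    """Given a purported total halting decider `halts(f) -> bool`
    for zero-argument programs, build a program that does the
    opposite of whatever `halts` predicts about it."""
    def g():
        if halts(g):
            while True:   # predicted to halt, so loop forever
                pass
        # predicted to loop forever, so halt immediately
    return g

# Any total decider is wrong about its own spite program.
# For example, a decider that claims nothing halts:
pessimist = lambda f: False
g = spite(pessimist)
g()  # halts immediately, contradicting the decider's verdict
```

Running the same construction against a decider that claims everything halts would loop forever, so every candidate decider fails on some input; that is why "you just need a halting oracle" concedes the point that no general algorithm for well-posedness exists.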