gjm comments on What subjects are important to rationality, but not covered in Less Wrong? - Less Wrong Discussion

Post author: casebash 27 February 2015 11:57AM (20 points)

Comment author: gjm 27 February 2015 12:26:21PM 25 points

I have felt for a long time that LW is short of discussion of what you might call "collective rationality": the art of effective collaborative truth-seeking. Of course LW is itself an attempt at collective rationality; but most of us, much of the time, are engaged in activities that (1) involve multiple people, (2) would benefit from better truth-finding techniques, and (3) are not Less Wrong.

Comment author: Vaniver 27 February 2015 02:20:30PM 13 points

the art of effective collaborative truth-seeking.

It seems to me that industrial organization and industrial psychology have put quite a bit of effort into asking how to get committees and groups to think together effectively. Perhaps someone could do a literature survey / find some good books to review for LW?

Comment author: TheAncientGeek 28 February 2015 03:50:04PM 1 point

If Mercier and Sperber's theory is correct, people are already optimized for arguing things out in groups, which would mean that rationality training is really solo rationality training, and perhaps not that useful for many people.

Comment author: Vaniver 28 February 2015 04:26:57PM 4 points

If Mercier and Sperber's theory is correct, people are already optimized for arguing things out in groups

Not really, no. People are optimized for winning arguments against untrained humans. The point of group rationality training is figuring out what norms, individual training, etc. make it so that the best ideas (by some external metric) are most likely to win in a group discussion, rather than the best-championed ideas. Even if, say, I can identify why someone's argument is not helping push towards truth, there needs to be a group norm under which I can call them out on that and have the call-out be effective. (Think of "Objection!" or pointing out fallacies in debate club; both of those rest on common acceptance of what things are worth objecting to or calling fallacious.)

Comment author: TheAncientGeek 01 March 2015 08:58:00AM 0 points

The average person isn't as well optimized for group debate as the best debaters, but people are still better optimized for group debate than for individual pondering.

Comment author: evand 02 March 2015 04:17:13AM 2 points

Definitely agreed.

Of course LW is itself an attempt at collective rationality

In particular, it seems like it is a remarkably unexamined, unplanned attempt. Surely we've learned some ways to improve it. Surely there are better approaches out there than "hey, Reddit seems to work ok, let's modify a couple things, call it good, and leave it alone for a while".

Not that I know how to improve it. Predictably, I have a few complaints and a few minor tweaks to suggest, but I'd really prefer a more evidence-based approach than that. Actually, I don't even really know what process I would advocate for improving LW, let alone what the actual improvements would be that would come from that process.

Comment author: ChristianKl 07 March 2015 04:41:15PM 0 points

In particular, it seems like it is a remarkably unexamined, unplanned attempt.

As far as I can see, we have plenty of meta discussions that examine LW.

Comment author: evand 09 March 2015 01:31:46AM 0 points

There is plenty of talk, less data, and only a very small number of tested changes. Surely the rationalist approach to solving a problem like this should involve empirical examination, not just armchair discussion.
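To make "tested changes" concrete: one minimal form of empirical examination would be to ship a site change to a random subset of visitors and compare an engagement metric against the control group with a simple significance test. The sketch below is purely illustrative — the metric, counts, and threshold are all made-up assumptions, not data about LW.

```python
# Hypothetical sketch of evaluating a site change empirically.
# All counts below are fabricated for illustration only.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided p-value for the difference between two proportions
    (e.g. fraction of visits that result in a comment)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Fabricated example: 120/1000 commenting visits under the old design
# vs. 160/1000 under the variant.
p = two_proportion_z(120, 1000, 160, 1000)
print(round(p, 4))
```

On a site as small as LW the sample sizes would be modest, so many plausible changes would be statistically indistinguishable from no effect — which is itself a relevant data point for the "returns to experimenting" question raised below.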

Comment author: ChristianKl 09 March 2015 12:00:53PM 0 points

LW isn't very big, so it's not clear whether there are strong returns to experimenting with software changes.