Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Gram_Stone 14 May 2017 09:27:36PM 2 points

Tangentially, I thought you might find repair theory interesting, if not useful. Briefly, when students make mistakes while doing arithmetic, these mistakes are rarely the effect of a trembling hand; rather, most such mistakes can be explained via a small set of procedural skills that systematically produce incorrect answers.
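For concreteness, here is a minimal Python sketch of one well-known bug from the repair-theory literature, the "smaller-from-larger" subtraction bug (the function name and examples are my own illustration, and it assumes the minuend has at least as many digits as the subtrahend):

```python
def smaller_from_larger(minuend, subtrahend):
    """Column-wise subtraction with the "smaller-from-larger" bug:
    in each column the smaller digit is subtracted from the larger,
    regardless of which number it belongs to, so borrowing never happens."""
    top = str(minuend)
    bottom = str(subtrahend).rjust(len(top), "0")  # pad to equal width
    return int("".join(str(abs(int(t) - int(b))) for t, b in zip(top, bottom)))

# The point is that the error is systematic, not random noise:
# the buggy procedure gives the same wrong answer every time.
print(smaller_from_larger(52, 38))  # 26, not the correct 14
print(smaller_from_larger(81, 38))  # 57, not the correct 43
```

Each such "repair" is a coherent (but wrong) procedure, which is why these mistakes are predictable rather than trembling-hand noise.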

Comment author: hamnox 27 April 2017 07:27:27PM 1 point

I think there's a consistent epistemic failure that leads to throwing away millennia of instrumental optimization of group dynamics in favor of a clever idea that someone had last Thursday. The narrative of extreme individual improvement borders on insanity: you think you can land on a global optimum with 30 years of one-shot optimization?

Academia may have a better process, and individual intelligence may be more targeted, but natural + memetic selection has had a LOOOT more time and data to work with. We'll be much stronger for learning how to leverage already-existing processes than for learning how to reinvent the wheel really quickly.

Comment author: Gram_Stone 27 April 2017 08:19:20PM 0 points

Do you think I disagree with that?

Comment author: Gram_Stone 27 April 2017 01:15:08AM 1 point

I've had a strong urge to ask about the relation between Project Hufflepuff and group epistemic rationality since you started writing this sequence. This also seems like a good time to ask, because your criticism of the essays that you cite (with the caveat that you believe them to contain grains of truth) seems fundamentally to be an epistemological one. Your final remarks amount to an uncontroversial epistemological prescription: "We have time and we should use it, because, other things being equal, taking more time increases the reliability of our reasoning."

So, if I take it that your criticism of the lack of understanding in this area is an epistemological one, then I can imagine this sequence going one of two ways. One way is that you'll solve the problem, or some of it, with your individual epistemological abilities, or at least start on this and have others assist. The other way is that before discussing culture directly, you'll discuss group epistemic rationality, bootstrapping the community's ability to reason reliably about itself. But I don't really like to press people on what's coming later in their sequence. That's what the sequence is for. Maybe I can ask some pointed questions instead.

Do you think group epistemic rationality is prior to the sort of group instrumental rationality that you're focusing on right now? I'm not trying to stay hyperfocused on epistemic rationality per se. I'm saying that you've demonstrated that the group has historically done poorly, in an epistemological sense, at understanding the open problems in this area of group instrumental rationality, and now I'm wondering whether you, or anyone else, think that's just a failure thus far that can be corrected by individual epistemological means alone and distributed to the group, or whether it's a systemic failure of the group to arrive at accurate collective judgments. Of course, it's hardly a sharp dichotomy. If one thought the latter, one might conclude that it is important to recurse to social epistemology for entirely instrumental reasons.

If group epistemic rationality is not prior to the sort of instrumental rationality that you're focusing on right now, then do you think it would be nevertheless more effective to address that problem first? Have you considered that in the past? Of course, it's not entirely necessary that these topics be discussed consecutively, as opposed to simultaneously.

How common do you think knowledge of academic literature relevant to group epistemic rationality is in this group? Like, as a proxy, what proportion of people do you think know about shared information bias? The only sort of thing like this I've seen as common knowledge in this group is informational cascades. Just taking an opportunity to try and figure out how much private information I have, because if I have a lot, then that's bad.

How does Project Hufflepuff relate to other group projects like LW 2.0/the New Rationality Organization, and all of the various calls for improving the quality of our social-epistemological activities? I now notice that all of those seem quite closely focused on discussion media.

Comment author: Gram_Stone 23 April 2017 06:21:51PM 3 points

I enjoyed this very much. One thing I really like is that your interpretation of the evolutionary origin of Type 2 processes and their relationship with Type 1 processes seems a lot more realistic to me than what I usually see. Usually the two are made to sound very adversarial, with Type 2 processes having some kind of executive control. I've always wondered how you could actually get this setup through incremental adaptations. It doesn't seem like Azathoth's signature. I wrote something relevant to this in correspondence:

If Type 2 just popped up in the process of human evolution, and magically got control over Type 1, what are the chances that it would amount to anything but a brain defect? You'd more likely be useless in the ancestral environment if a brand new mental hierarch had spontaneously mutated into existence and was in control of parts of a mind that had been adaptive on their own for so long. It makes way more sense to me to imagine that there was a mutant who could first do algorithmic cognition, and that there were certain cues that could trigger the use of this new system, and that provided the marginal advantage. Eventually, you could use that ability to make things safe enough to use the ability even more often. And then it would almost seem like it was the Type 2 that was in charge of the Type 1, but really Type 1 was just giving you more and more leeway as things got safer.

Comment author: Gram_Stone 05 April 2017 11:40:37AM 7 points

Thank you for following up after all this time. Longitudinal studies seem important.

Comment author: Alicorn 17 March 2017 01:46:56AM 21 points

If you like this idea but have nothing much to say, please comment under this comment so there can be a record of interested parties.

Comment author: Gram_Stone 17 March 2017 01:29:19PM 0 points

This is neat.

Comment author: Gram_Stone 09 March 2017 05:28:26PM 0 points

I find the metaphor plausible. Let's see if I understand where you're coming from.

I've been looking into predecision processes as a means of figuring out where human decisionmaking systematically goes wrong. One such process is hypothesis generation. I found an interesting result in this paper; the researchers compared the hypothesis sets generated by individuals, natural groups, and synthetic groups. In this study, a synthetic group's hypothesis set is agglomerated from the hypothesis sets of individuals who never interact socially. They found that natural groups generate more hypotheses than individuals, and that synthetic groups generate more hypotheses than either. It appears that social interaction somehow reduces the number of alternatives that a group considers, relative to what the sum of their considerations would be if they were not a group.
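A toy sketch of the synthetic-group construction, with made-up hypothesis labels (assuming "agglomerated" means a simple union of the individuals' sets):

```python
# Hypothetical hypothesis sets from three individuals who never interact.
individuals = [{"H1", "H2"}, {"H2", "H3"}, {"H1", "H4"}]

# A synthetic group's hypothesis set is just the pooled (union) set.
synthetic_group = set().union(*individuals)
print(sorted(synthetic_group))  # ['H1', 'H2', 'H3', 'H4']

# The reported finding is that a natural (interacting) group tends to end
# up with fewer hypotheses than this pool, e.g. because a hypothesis
# spoken aloud crowds out the alternatives.
natural_group = {"H1", "H2", "H3"}
print(len(synthetic_group) > len(natural_group))  # True
```

The comparison is between the size of the pooled set and the size of what an interacting group actually produces; the specific sets here are invented for illustration.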

Now, this could just be biased information search: one person poses a hypothesis aloud, and then the alternatives become less available to the entire group. But information search itself could be mediated by motivational factors; if I instead write "one high-status person poses a hypothesis aloud...", then this becomes a hypothesis about biased information search with a zero-sum social-control component. It does seem worth noting that biased search is currently a sufficient explanation by itself, so we might prefer it by parsimony, but at this level of the world model, it seems like things are often multiply determined.

Importantly, creating synthetic groups doesn't look like punishing memetic-warfare/social-control at all. It looks like preventing it altogether. This seems like an intervention that would be difficult to generate if you thought about the problem in the usual way.

In response to Am I Really an X?
Comment author: Zack_M_Davis 06 March 2017 03:18:36PM 13 points

I don't think it's too controversial to propose that at least some of the transgender self-reports might result from the same mechanism as cisgender self-reports. Again, the idea is that there is some 'self-reporting algorithm', that takes some input that we don't yet know about, and outputs a gender category, and that both cisgender people and transgender people have this

I claim that this is knowably false. Rather than there being any sort of gender-identity switch or self-reporting mechanism in the brain, there are two distinct classes of psychological conditions that motivate the development of a "gender identity" inconsistent with anatomic sex.

One of these etiologies is indeed a brain intersex condition (sufficiently behaviorally-masculine girls or behaviorally-feminine boys, who are a better fit for the gender role of the other anatomical sex).

The other etiology, far more common in natal males than females, is actually more like a sexual orientation (termed autogynephilia, "love of oneself as a woman") than a gender identity: we used to call these people "transvestites", men who derived emotional comfort and sexual pleasure from pretending to be women (and who sometimes availed themselves of feminizing hormones), but who typically didn't insist that they were literally an instance of the same natural category as biologically-female people.

Comment author: Gram_Stone 07 March 2017 12:58:50AM 0 points

I was familiar with this.

I find the first etiology similar to my model. Did you mean to imply this similarity by use of the word 'indeed'? I can see how one might interpret my model as an algorithm that outputs a little 'gender token' black box that directly causes the self-reports, but I really didn't mean to propose anything beyond "Once gendered behavior has been determined, however that occurs, cisgender males don't say 'I'm a boy!' for cognitive reasons that are substantially different from the reasons that transgender males say 'I'm a boy!'" Writing things like "behaviorally-masculine girls" just sounds like paraphrase to me. Should it not? On the other hand, as I understand it, the second etiology substantially departs from this. In that case it is proposed that transgender people who transition later in life perform similar behaviors for entirely different cognitive reasons.

I'll reiterate that I admit to the plausibility of other causes of self-report. I do find your confidence surprising, however. I realize the visible controversy is not much evidence that you're wrong, because we would expect controversy either way. Do you have thoughts on "Thoughts on The Blanchard/Bailey Distinction"? I'd just like to read them if they exist.

In response to Am I Really an X?
Comment author: bogus 05 March 2017 04:13:26AM 6 points

"Am I really an [insert self-reported gender category here]?"

What work is the word "really" actually doing here? ISTM that it refers to an implied assessment of typicality, in which case the actual question these folks are trying to ask is "Am I a typical [insert gender category here]?" And of course, it's quite sensible to answer "no" to this question, no matter what gender we're even talking about in the first place! The person is most likely not a typical male or female, for what it's worth - and any other question about their gender is probably highly confused. But this point of view has at least the undeniable benefit of quickly dissolving the potential feedback/resonance between "typicality judgments" and "gender judgments", simply by acknowledging the "typicality judgment" as such.

In response to comment by bogus on Am I Really an X?
Comment author: Gram_Stone 05 March 2017 04:42:25PM 0 points

This is an excellent summary of my argument! Thank you so much for compressing this into a soundbite!

In response to comment by Dagon on Am I Really an X?
Comment author: Articulator 05 March 2017 08:04:18AM 4 points

I think this topic is really only as political as you make it. Enough of the top voices in the LessWrong/Rationality community are in (apparent) concurrence on transgender identity as a whole that this seems to be a reasonably uncontroversial premise to take.

In my opinion, it's nice to see rationality applied to more real-world understanding problems.

Comment author: Gram_Stone 05 March 2017 04:41:10PM 1 point

I think this topic is really only as political as you make it.

I did in fact decide not to reply to the grandparent because I estimated that not replying would cause less harm in this respect. This article is intended to be a contribution to the philosophy of gender identity in the style of EY's executable philosophy, and it is more directly a reply to lucidfox's Gender Identity and Rationality. This topic was perfectly acceptable in 2010.
