For what it's worth, I have three years' experience with university-level competitive debating, specifically with the debate format known as British Parliamentary (the style used by the World Universities Debating Championship, or WUDC). Since many people are unfamiliar with it, I'll briefly explain the rules: one BP debate comprises four teams of two members each. All four teams are ranked against each other, but two of them must argue for the affirmative ("government") side of the issue and the other two for the negative ("opposition") side. The objective, put simply, is to persuade the adjudicators that your team deserves to win. In this format you do not get to research the topic beforehand, and you don't even know what you are going to debate until 15 minutes before the debate starts -- which means it requires a lot of quick brainstorming and improvisation. And since each individual speaker gets only 7 minutes to make their case, you have to prioritize the most important content and structure it coherently.
In our training sessions we actually do not study classical rhetoric, so I'm not familiar with terms like elocutio, dispositio, or pronuntiatio -- although I can certainly recognize clear delivery, organized structure, and appeals to logic as important principles of varsity debating. I think there are skills one can learn from this kind of public speaking:
Of course one could also criticize this type of debating. Firstly, it inculcates a competitive spirit rather than a spirit of collaborative truth-seeking. Secondly, as a game it is in some ways detached from the nuances and practicalities of persuasion in the "real world", where things like statistical figures and budgetary limits and constitutionality do matter. Finally, one might become too adept at constructing plausible-seeming justifications for any conclusion one likes regardless of the actual evidence -- a danger Eliezer warned us about:
And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich's "dysrationalia" sense of stupidity.
You can think of people who fit this description, right? People with high g-factor who end up being less effective because they are too sophisticated as arguers?
When you start with a given position on a topic (let's say you have to argue against legalizing recreational drugs) and construct arguments in its favor, you are essentially engaging in rationalization instead of rationality.
So do the benefits outweigh these risks? I don't know.
In that case, in what sense does he dislike his professor? From your example, his disliking the professor seems to be a free-floating XML tag.
I suppose it can be explained by the liking/wanting vs. approving distinction (you can have a feeling that you disapprove of) or Alicorn's idea of repudiating one's negative characteristics. The cognitive dissonance created by giving an apple to someone you dislike may then be resolved by shifting your attitude toward the person in a positive direction -- so in this sense, Undoing is a strategy to reduce disapproved/repudiated properties.
This is especially notable in the way projection/reaction formation is discussed in practice: "He's opposing position X because he secretly supports it."
Interestingly, there is research showing that some people who oppose homosexuality or gay marriage do in fact show an unconscious attraction to the same sex -- see e.g. Weinstein et al. (2012). However, in this case I would agree that "he overtly opposes X because he covertly supports X" is the wrong way of looking at it; rather, he (the ego in Freudian terms) disapproves of his desire (the id). Of course, this doesn't imply that everybody who opposes X is doing so as a defense mechanism.
Edit: To clarify, I'm certainly not implying that homosexuality is a negative characteristic; just that some people are raised in a culture where it is stigmatized, and so they internalize the belief that it is. The specific claim made by the Weinstein et al. paper is as follows:
I decided to edit this comment instead of replying directly to tempus' comment below, as I did not perceive that commenter to be acting charitably.
Vernor doesn't give the professor an apple because he dislikes the professor per se, but because he feels guilty about his dislike for the professor, which he tries to "fix" by giving a gift -- this works precisely because giving a gift usually indicates liking someone (putting aside other motives, such as ingratiation).
A different example of the "Undoing" defense mechanism would be an abusive alcoholic father who buys his kids lots of Christmas presents (see the sources here and here).
In psychoanalytic theory, these various phenomena are related in that they serve the function of protecting one's ego. But if you think that's a poor way of conceptualizing them, I'd be curious how you think we could do better.
Edit: For example, gworley's comment conceptualizes them as defending one's prior probability.
Thanks for pointing it out. I've fixed it and updated the link.
Thanks, I'm glad you found it useful!
The reason I didn't link to LW 2.0 is that it's still officially in beta, and I assumed that the URL (lesserwrong.com) will eventually change back to lesswrong.com (but perhaps I'm mistaken about this; I'm not entirely sure what the plan is). Besides, the old LW site links to LW 2.0 on the frontpage.
I'm wildly speculating here, but perhaps enforcing norms is a costly signal to others that you are a trustworthy person, meaning that in the long term you gain more resources than others who don't behave similarly.
I cannot say much about CFAR techniques, but I'd nominate the following as candidates for LW "hammers":
Of course, the list is not exhaustive.
Thanks a lot for doing this!
Indeed ;)
For me, contemplating Zen koans for too long can make my brain "hurt".
Does a dog have Buddha-nature or not? Show me your original face before your mother and father were born. If you meet the Buddha, kill him. Look at the flower and the flower also looks.
I find it interesting because, unlike writing a program or solving an equation or playing chess, koans don't seem to have a well-defined problem/goal structure or a clear set of rules and axioms. Some folks might even call them nonsensical. So I'm not sure to what extent the notion of (in)efficient optimization is applicable here; and yet contemplating a koan also appears to be an example of "thinking hard". (Of course, a Zen instructor would probably tell you not to think about it too hard.)