The Hostile Arguer
“Your instinct is to talk your way out of the situation, but that is an instinct born of prior interactions with reasonable people of good faith, and inapplicable to this interaction…” – Ken White
One of the Less Wrong Study Hall denizens has been having a bit of an issue recently. He became an atheist some time ago. His family was in denial about it for a while, but in recent days they have 1. stopped with the denial bit, and 2. been less than understanding about it. In the course of discussing the issue during break, this line jumped out at me:
“I can defend my views fine enough, just not to my parents.”
And I thought: Well, of course you can’t, because they’re not interested in your views. At all.
I never had to deal with the Religion Argument with my parents, but I did spend my fair share of time failing to argumentatively defend myself. I think I have some useful things to say to those younger and less the-hell-out-of-the-house than me.
A clever arguer is someone who has already decided on their conclusion and is making the best case they possibly can for it. A clever arguer is not necessarily interested in what you currently believe; they are arguing for proposition A and against proposition B. But there is a specific sort of clever arguer, one that I have difficulty defining explicitly but can characterize fairly easily. I call it, as of today, the Hostile Arguer.
It looks something like this:
When your theist parents ask you, “What? Why would you believe that?! We should talk about this,” they do not actually want to know why you believe anything, despite the form of the question. There is no genuine curiosity there. They are instead looking for ammunition. Which, if they are cleverer arguers than you, you are likely to provide. Unless you are epistemically perfect, you believe things that you cannot, on demand, come up with an explicit defense for. Even important things.
In accepting that the onus is solely on you to defend your position – which is what you are implicitly doing, in engaging the question – you are putting yourself at a disadvantage. That is the real point of the question: to bait you into an argument that your interlocutor knows you will lose, whereupon they will expect you to acknowledge defeat and toe the line they define.
Someone in the chat compared this to politics, which makes sense, but I don’t think it’s the best comparison. Politicians usually meet each other as equals. So do debate teams. This is more like a cop asking a suspect where they were on the night of X, or an employer asking a job candidate how much they made at their last job. Answering can hurt you, but can never help you. The question is inherently a trap.
The central characteristic of a hostile arguer is the insincere question. “Why do you believe there is/isn’t a God?” may be genuine curiosity from an impartial friend, or righteous fury from a zealous authority, even though the words themselves are the same. What separates them is the response to answers. The curious friend updates their model of you with your answers; the Hostile Arguer instead updates their battle plan.[1]
So, what do you do about it?
Advice often fails to generalize, so take this with a grain of salt. It seems to me that argument in this sense has at least some of the characteristics of the Prisoner’s Dilemma. Cooperation represents the pursuit of mutual understanding; defection represents the pursuit of victory in debate. Once you are aware that they are defecting, cooperating in return is highly non-optimal. On the other hand, mutual defection – a flamewar online, perhaps, or a big fight in real life in which neither party learns much of anything except how to be pissed off – kind of sucks, too. Especially if you have reason to care, on a personal level, about your opponent. If they’re family, you probably do.
It seems to me that getting out of the game is the way to go, if you can do it.
Never try to defend a proposition against a hostile arguer.[2] They do not care. Your best arguments will fall on deaf ears. Your worst will be picked apart by people who are much better at this than you. Your insecurities will be exploited. If they have direct power over you, it will be abused.
This is especially true for parents, where obstinate disagreement can be viewed as disrespect, and where their power over you is close to absolute. I’m somewhat of the opinion that all parents should be considered epistemically hostile until one moves out, as a practical application of the SNAFU Principle. If you find yourself wanting to acknowledge defeat in order to avoid imminent punishment, this is what is going on.
If you have some disagreement important enough for this advice to be relevant, you probably genuinely care about what you believe, and you probably genuinely want to be understood. On some level, you want the other party to “see things your way.” So my second piece of advice is this: Accept that they won’t, and especially accept that it will not happen as a result of anything you say in an argument. If you must explain yourself, write a blog or something and point them to it a few years later. If it’s a religious argument, maybe write the Atheist Sequences. Or the Theist Sequences, if that’s your bent. But don’t let them make you defend yourself on the spot.
The previous point, incidentally, was my personal failure through most of my teenage years (although my difficulties stemmed from school, not religion). I really want to be understood, and I really approach discussion as a search for mutual understanding rather than an attempt at persuasion, by default. I expect most here do the same, which is one reason I feel so at home here. The failure mode I’m warning against is adopting this approach with people who will not respect it and will, in fact, punish your use of it.[3]
It takes two to have an argument, so don’t be the second party, ever, and they will eventually get tired of talking to a wall. You are not morally obliged to justify yourself to people who have pre-judged your justifications. You are not morally obliged to convince the unconvinceable. Silence is always an option. “No comment” also works well, if repeated enough times.
There is the possibility that the other party is able and willing to punish you for refusing to engage. Aside from promoting them from “treat as Hostile Arguer” to “treat as hostile, period”, I’m not sure what to do about this. Someone in the Hall suggested supplying random, irrelevant justifications, as requiring minimal cognitive load while still subverting the argument. I’m not certain how well that will work. It sounds plausible, but I suspect that if someone is running the algorithm “punish all responses that are not ‘yes, I agree and I am sorry and I will do or believe as you say’”, then you’re probably screwed (and should get out sooner rather than later if at all possible).
None of the above advice implies that you are right and they are wrong. You may still be incorrect on whatever factual matter the argument is about. The point I’m trying to make is that, in arguments of this form, the argument is not really about correctness. So if you care about correctness, don’t have it.
Above all, remember this: Tapping out is not just for Less Wrong.
(thanks to all LWSH people who offered suggestions on this post)
After reading the comments and thinking some more about this, I think I need to revise my position a bit. I’m really talking about three different characteristics here:
- People who have already made up their mind.
- People who are personally invested in making you believe as they do.
- People who have power over you.
For all three together, I think my advice still holds. MrMind puts it very concisely in the comments. In the absence of 3, though, JoshuaZ notes some good reasons one might argue anyway; to which I think one ought to add everything mentioned under the Fifth Virtue of Argument.
But one thing that ought not to be added to it is the hope of convincing the other party – either of your position, or of the proposition that you are not stupid or insane for holding it. These are cases where you are personally invested in what they believe, and all I can really say is “don’t do that; it will hurt.” Even if you are correct, you will fail for the reasons given above and more besides. It’s very much a case of Just Lose Hope Already.
-
I’m using religious authorities harshing on atheists as the example here because that was the immediate cause of this post, but atheists take caution: If you’re asking someone “why do you believe in God?” with the primary intent of cutting their answer down, you’re guilty of this, too. ↩
-
Someone commenting on a draft of this post asked how to tell when you’re dealing with a Hostile Arguer. This is the sort of micro-social question that I’m not very good at and probably shouldn’t opine on. Suggestions requested in the comments. ↩
-
It occurs to me that the Gay Talk might have a lot in common with this as well. For those who’ve been on the wrong side of that: Did that also feel like a mismatched battle, with you trying to be understood, and them trying to break you down? ↩
Credence Calibration Icebreaker Game
The Aussie mega-meetup took place this past weekend. For it, a new kind of icebreaker was needed: one which was not merely fun and sociable, but also instilled with the Way. Thus was the Credence Calibration Icebreaker forged.
A marriage of the credence game and the classic icebreaker, ‘Say three things about yourself, one of them a lie’, the game allows players to learn about each other, test their ability to deceive and detect deception, and discover just how calibrated they are.
How to play
Playing instructions here: docx pdf. Scoring spreadsheet.
Each turn a player makes three statements about themselves. One and only one of the statements must be intentionally untrue. All other players assign to each statement a probability of being false. These probabilities sum to 1: P(A′) + P(B′) + P(C′) = 1. The game is scored in the same manner as the credence game, but with reference to 33% rather than 50%.
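The post doesn't spell out the scoring formula, but a plausible sketch, assuming the credence game's logarithmic score taken relative to the 1/3 chance baseline (the function name and example probabilities are illustrative):

```python
import math

def round_score(p_assigned_to_lie):
    """Log score relative to the 1/3 chance baseline: guessing
    uniformly (p = 1/3) scores 0; putting high probability on the
    actual lie scores positive, low probability heavily negative."""
    return 100 * math.log2(p_assigned_to_lie * 3)

# A guesser assigns P(lie) to each of the three statements (summing
# to 1); only the probability placed on the actual lie is scored.
probs = {"A": 0.6, "B": 0.3, "C": 0.1}
actual_lie = "A"
print(round(round_score(probs[actual_lie]), 1))  # → 84.8
```

A log score like this rewards honest probabilities: overconfident wrong guesses cost more than confident right ones gain, which is exactly the calibration pressure the game is after.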
The way we played it, a player would reveal which was the lie immediately after everyone else had assigned probabilities. The immediate feedback is more fun and allows players to recalibrate as they learn about their performance. Revealing which statements were lies at the end would require reminding everyone what the other statements were.
Many meetup groups have played the Aumann agreement game, in which groups collectively assign credences to a collection of statements. That game, however, requires the statements to be collected in advance, and once played, new statements must be gathered for the next game. The credence calibration icebreaker has the advantage that players generate the statements themselves, allowing for easy replay.
Improvements
Restrictions should be placed on the nature of the lies in order to control which skills are tested. We played without restrictions and most players generated a lie by altering a minor detail of a true statement which didn’t affect its plausibility, e.g. ‘My father’s brain is frozen’[1] vs. ‘My uncle’s brain is frozen’. This resulted in the game being less about appraising the plausibility of statements and more about detecting deception by tells and other clues.
Following the original icebreaker game, three statements were used. Reducing the number of statements to two would have the following benefits:
- The game is currently data entry intensive, requiring two numbers per question per player to be entered. Two statements would halve this number.
- Assigning probabilities of falsehood is counter-intuitive to many; two statements would allow the more typical direct assignment of probabilities of truth.
- People find generating three statements difficult; two statements would reduce the effort.
Statistics
Various statistics are computed in the scoring spreadsheet. Results from our game showed a high correlation (0.72) between number correct and score, and that players improved over the course of the game as their overconfidence diminished.
[1] True statement. As was 'I have three kidneys'.
Rethinking Education
Problems
Problems have bottlenecks. To solve problems, you need to overcome each bottleneck. If you fail to overcome just one bottleneck, the problem will go unsolved, and your effort will have been fruitless.
In reality, it’s a little bit more complicated than that. Some bottlenecks are tighter than others, and some progress might leak through, but it usually isn’t anything notable.
Education
There is a lot wrong with education. Attempts are being made to improve it, but they gloss over important bottlenecks, so progress only drips through. I think it would be a better use of our effort to think through each bottleneck and how it can be addressed.
I have a theory of how we can overcome enough bottlenecks such that progress will fall through, instead of drip through.
Consider how we learn. Say that you want to learn parent concept A. To do this, you first need to understand a bunch of child concepts: A1…An.
My groundbreaking idea: make sure that students know A1…An before teaching them A.
https://www.dropbox.com/s/4gnwamufalg5gqo/learning.jpg
The bottlenecks to understanding A are A1…An. Some of these bottlenecks are tighter than others, and in reality, there are constraints on our ability to teach, so it’s probably best to focus on the tighter bottlenecks. Regardless, this is the approach we’ll need to take if we want to truly change education.
How would this work?
1) Create a dependency tree.
2) Explain each cell in the tree.
3) Devise a test of understanding for each cell in the tree.
4) Teach accordingly.
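The four steps above can be sketched in miniature. This is a hypothetical illustration, not a real curriculum; the concept names and the `mastered` set stand in for the per-concept tests:

```python
# Step 1: dependency tree (concept -> prerequisite child concepts).
dependencies = {
    "A": ["A1", "A2"],
    "A1": [],
    "A2": ["A3"],
    "A3": [],
}

# Step 3: results of the tests of understanding for each cell.
mastered = {"A1", "A3"}

def ready_to_learn(concept):
    """Step 4: a concept may be taught only once every one of its
    prerequisites has been mastered."""
    return all(p in mastered for p in dependencies[concept])

def next_lessons():
    """Concepts not yet mastered whose prerequisites are all met."""
    return sorted(c for c in dependencies
                  if c not in mastered and ready_to_learn(c))

print(next_lessons())  # → ['A2']  (A is still blocked on A2)
```

This is the mastery-based gating described below: the student is routed to A2 first, and only reaches A once A2's test is passed.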
Where does our system fail us?
- When you’re in class and the teacher is explaining A while you still don’t get, say, A2 and A5.
- When you’re in class and the teacher is explaining A, when she never thought to explain A2 and A5.
- When you’re reading the textbook and you’re confused, but you don’t even know what child concepts you’re confused about.
- When you memorize for the test/assignment instead of properly filling out your dependency tree.
- When being too far ahead or behind the class leads to a lack of motivation.
- When lack of interest in the material leads to lack of motivation.
- When physical distractions divert your attention (tired, uncomfortable, hungry…).
My proposal
I propose that we pool all of our resources and make a perfect educational web app. It would have the dependency trees, have explanations for each cell in each tree, and have a test of understanding for each cell in each tree. It would test the user to establish what it is that he does and doesn’t know, and would proceed with lessons accordingly.
In other words, usage of this web app would be mastery-based: you’d only proceed to a parent concept when you’ve mastered the child concepts.
Motivation
Motivation would be another thing to optimize.
One way to do this would be to teach things to students at the right times. Lack of interest is often due to lack of understanding of child concepts, and thus lack of appreciation for the beauty and significance of a parent concept. By teaching things to students when they’re able to appreciate them, we could increase students’ motivation.
Another way to optimize motivation would be to do a better job of teaching students things that are useful to them (or things that are likely to be useful to them). In today’s system, students are often forced to memorize lots of details that are unlikely to ever be useful to them.
By making teaching more effective, I think motivation will naturally increase as well (it’ll eliminate the lack of motivation that comes with the frustration of bad teaching).
Pooling of resources
The pooling of resources to create this web app is analogous to how resources were pooled for Christopher Nolan to make a really cool movie. When you pool resources, a lot more becomes possible. When you don’t pool resources, the product often sucks. Imagine what would happen if you tried to reproduce Batman at a local high school. This is analogous to what we’re trying to do with education now.
How would this look?
I’m not quite sure. Technically, kids could just sit at home on their computers and work through the lessons that the web app gives them… but I sense that that wouldn’t be such a good idea. It’d probably be best to require kids to go to a “school-like institution”. Kids could work through the lessons by themselves, ask each other for help, work together on projects, compete with each other on projects etc.
Certificates
I envision that credentials would be certificate-based. You’d get smaller certificates that indicate that you have mastered a certain subject. Today, the credentials you get are for passing a grade, or passing a class, or getting a degree. They’re too big and inflexible. For example, maybe the plant unit in intro to biology isn’t necessary for you. Smaller certificates allow for more flexibility.
Deadlines
Deadlines are a tough issue. If they exist, there’s a possibility that you have to cram to meet the deadline, and cramming isn’t optimal for learning. However, if they don’t exist, students probably won’t have the incentive to learn. For this reason, I think that they probably do have to exist.
My first thought is that deadlines should be personalized. For example, if I moved 50 steps and the deadline was at 100 steps, the next deadline should be based on where I am now (step 50), not where the deadline was (step 100).
My second thought is that deadlines should be rather loose, because I think that flexibility and personalization are important, and that deadlines sacrifice those things.
My third thought, is that students should be given credit for going faster. In our one-size-fits-all system now, you can’t get credit for moving faster than your class. I think that if you want to work harder and make faster progress, you should be able to and you should be given credentials for the knowledge that you’ve acquired. Given the chance, I think that many students would do this. I think this would allow students to really thrive and pursue their interests.
Tutoring
I think that it’d be a good idea to require tutoring. Say, in order to get a certificate, after passing the tests, you’d have to tutor for x hours.
Tutoring helps you to master the concept, because having to explain something will expose the holes in your understanding. See The Feynman Technique.
Tutoring allows for social interaction, which is important.
Social Atmosphere
The social atmosphere in these “schools” would also be something to optimize. It's not something that people think too much about, but it has a huge impact on how people develop, and thus on how society develops.
I’m not sure exactly what would be best, but I have a few thoughts:
The idea of social value is horrible. In schools today, you grow up caring way too much about how you look, who you’re friends with, how athletic you are, how smart you are, how much success you have with the opposite sex… how “good” you are. This bleeds into our society, and does a lot to cause unhappiness. It should be avoided, if possible.
Relationships are based largely on repeated, unplanned interactions + an environment that encourages you to let your guard down. I think that schools should actively provide these situations to students, and should allow you to experience these situations with a variety of types of people (right now you only get these repeated, unplanned interactions with the cohort of students you happen to be with, which limits you in a lot of ways).
Rationality
I propose that rationality be a core part of the curriculum (the benefits of making people better at reasoning would trickle down into many aspects of life). I think that this should be done in two ways: the first is by teaching the ideas of rationality, and the second is by using them.
The ideas of rationality can be found right here. Some examples:
- Your beliefs have to be about anticipated experiences.
- Don’t commit the fallacy of gray.
- Understanding that you should optimize the terminal value.
- Don’t treat arguments like war.
- Disagree by refuting the central point.
- Be specific.
After the ideas are taught, they should be practiced. The best way that I could think of to do this is to have kids write and critique essays (writing is just thought on paper, and it’s often easier to argue in writing than it is in verbal conversation). Students could pick a topic that they want to talk about, make claims, and argue for them. And then they could read each others’ essays, and point out what they think are mistakes in each others’ reasoning (this should all be supervised by a teacher, who should probably be more of a benevolent dictator, and who should also contribute points to the discussions).
I think that some competition and social pressure could be useful too; maybe it’d be a good idea to divide students into classes, where the most insightful points are voted upon, and the number of mistakes committed would be tallied and posted.
Writing
Right now, essays in schools are a joke. No one takes them seriously. Students b.s. them, and teachers barely read them and hardly give any feedback. And they’re also always on English literature, which sends a bad message to kids about what an essay really is. Good writing isn’t taught or practiced, and it should be.
Levels of Action
Certain levels of action have impacts that are orders of magnitude bigger than others. I think that improving education this much would be a high level action, and have many positive effects that’ll trickle down into many aspects of society. I’ll let you speculate on what they are.
On Straw Vulcan Rationality
There's a core meme of rationalism that I think is fundamentally off-base. It's been bothering me for a long time — over a year now. It hasn't been easy for me, living this double life, pretending to be OK with propagating an instrumentally expedient idea that I know has no epistemic grounding. So I need to get this off my chest now: Our established terminology is not consistent with an evidence-based view of the Star Trek canon.
According to TV Tropes, a straw Vulcan is a character used to show that emotion is better than logic. I think a lot of people take "straw Vulcan rationality" to mean something like, "Being rational does not mean being like Vulcans from Star Trek."
This is not fair to Vulcans from Star Trek.
Central to the character of Spock — and something that it's easy to miss if you haven't seen every single episode and/or read a fair amount of fan fiction — is that he's being a Vulcan all wrong. He's half human, you see, and he's really insecure about that, because all the other kids made fun of him for it when he was growing up on Vulcan. He's spent most of his life resenting his human half, trying to prove to everyone (especially his father) that he's Vulcaner Than Thou. When the Vulcan Science Academy worried that his human mother might be an obstacle, it was the last straw for Spock. He jumped ship and joined Starfleet. Against his father's wishes.
Spock is a mess of poorly handled emotional turmoil. It makes him cold and volatile.
Real Vulcans aren't like that. They have stronger and more violent emotions than humans, so they've learned to master them out of necessity. Before the Vulcan Reformation, they were a collection of warring tribes who nearly tore their planet apart. Now, Vulcans understand emotions and are no longer at their mercy. Not when they apply their craft successfully, anyway. In the words of the prophet Surak, who created these cognitive disciplines with the purpose of saving Vulcan from certain doom, "To gain mastery over the emotions, one must first embrace the many Guises of the Mind."
Successful application of Vulcan philosophy looks positively CFARian.
There is a ritual called "kolinahr" whose purpose is to completely rid oneself of emotion, but it was not developed by Surak, nor, to my knowledge, was it endorsed by him. It's an extreme religious practice, and I think the wisest Vulcans would consider it misguided.[1] Spock attempted kolinahr when he believed Kirk had died, which I take to be a great departure from cthia (the Vulcan Way) — not because he ultimately failed to complete the ritual,[2] but because he tried to smash his problems with a hammer rather than applying his training to sort things out skillfully. If there ever were such a thing as a right time for kolinahr, that would not have been it.
So Spock is both a straw Vulcan and a straw man of Vulcans. Steel Vulcans are extremely powerful rationalists. Basically, Surak is what happens when science fiction authors try to invent Eliezer Yudkowsky without having met him.
[1] I admit that I notice I'm a little confused about this. Sarek, Spock's father and a highly influential diplomat, studied for a time with the Acolytes of Gol, who are the masters of kolinahr. If I've ever known what came of that, I've forgotten. I'm not sure whether that's canon, though.
[2] "Sorry to meditate and run, but I've gotta go mind-meld with this giant space crystal thing. ...It's complicated."
Worse than Worthless
There are things that are worthless-- that provide no value. There are also things that are worse than worthless-- things that provide negative value. I have found that people sometimes confuse the latter for the former, which can carry potentially dire consequences.
One simple example of this is in fencing. I once fenced with an opponent who put a bit of an unnecessary twirl on his blade when recovering from each parry. After our bout, one of the spectators pointed out that there wasn't any point to the twirls and that my opponent would improve by simply not doing them anymore. My opponent claimed that, even if the twirls were unnecessary, at worst they were merely an aesthetic preference that was useless but not actually harmful.
However, the observer explained that any unnecessary movement is harmful in fencing, because it spends time and energy that could be put to better use-- even if that use is just recovering a split second faster! [1]
During our bout, I indeed scored at least one touch because my opponent's twirling recovery was slower than a less flashy standard movement. That touch could well be the difference between victory and defeat; in a real sword fight, it could be the difference between life and death.
This isn't, of course, to say that everything unnecessary is damaging. There are many things that we can simply be indifferent towards. If I am about to go and fence a bout, the color of the shirt that I wear under my jacket is of no concern to me-- but if I had spent significant time before the bout debating over what shirt to wear instead of training, it would become a damaging detail rather than a meaningless one.
In other words, the real damage is dealt when something is not only unnecessary, but consumes resources that could instead be used for productive tasks. We see this relatively easily when it comes to matters of money, but when it comes to wastes of time and effort, many fail to make the inductive leap.
[1] Miyamoto Musashi agrees:
The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him. You must thoroughly research this.
One Sided Policy Debate - The Science of Literature
On HackerNews, this article was linked. The general idea is that companies are studying what people like to read, to help authors produce books that people like to read.
Now, for me, when I look at this idea, I see some down sides, but I certainly see some benefits as well.
Almost none of the commenters on NYTimes seemed to see any benefit whatsoever to studying reader behaviour. There were a few who saw the downsides as more mild than the other commenters. But most of the commenters basically saw this technology as some sort of 1984-esque idea that will turn all books into uninteresting, unimaginative pieces of paper that would better serve as a door stopper than as something for literary consumption. Out of 50 comments that I've read, only one person has said something along the lines of, 'This technology can possibly offer something to help authors improve their books'.
Is this just technophobia? Or am I missing something, and this really is a horrible, evil technology that should be avoided at all costs? [That's a rhetorical question -- I'd be surprised if even one LWian held that position]
I guess what I'm asking is, what are the psychological roots for the almost-unanimous aversion to this attempt at gathering and using information about what people want?
Stable and Unstable Risks
Related: Existential Risk, 9/26 is Petrov Day
Existential risks—risks that, in the words of Nick Bostrom, would "either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential"—are a significant threat to the world as we know it. In fact, they may be one of the most pressing issues facing humanity today.
The likelihood of some risks may stay relatively constant over time—a basic view of asteroid impact is that there is a certain probability that a "killer asteroid" hits the Earth and that this probability is more or less the same every year. This is what I refer to as a "stable risk."
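A stable risk still compounds: even a tiny constant annual probability accumulates over long horizons. A minimal sketch, with purely illustrative numbers rather than any real asteroid estimate:

```python
def cumulative_risk(annual_p, years):
    """Probability of at least one catastrophe over `years`, given a
    constant ("stable") annual probability `annual_p` of occurrence."""
    return 1 - (1 - annual_p) ** years

# Illustrative only: a 1-in-a-million annual risk compounds to
# roughly 1-in-10,000 over a century.
print(cumulative_risk(1e-6, 100))
```

This is precisely what makes stable risks tractable to analyze, and what the "unstable" risks below lack: for them, `annual_p` itself swings with geopolitics and technology.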
However, the likelihood of other existential risks seems to fluctuate, often quite dramatically. Many of these "unstable risks" are related to human activity.
For instance, the likelihood of a nuclear war at sufficient scale to be an existential threat seems contingent on various geopolitical factors that are difficult to predict in advance. That said, the likelihood of this risk has clearly changed throughout recent history. Nuclear war was obviously not an existential risk before nuclear weapons were invented, and was fairly clearly more of a risk during the Cuban Missile Crisis than it is today.
Many of these unstable, human-created risks seem based largely on advanced technology. Potential risks like gray goo rely on theorized technologies that have yet to be developed (and indeed may never be developed). While this is good news for the present day, it also means that we have to be vigilant for the emergence of potential new threats as human technology increases.
GiveWell's recent conversation with Carl Shulman contains some arguments as to why the risk of human extinction may be decreasing over time. However, it strikes me as perhaps more likely that the risk of human extinction is increasing over time—or at the very least becoming less stable—as technology increases the amount of power available to individuals and civilizations.
After all, the very concept of human-created unstable existential risks is a recent one. Even if Julius Caesar, Genghis Khan, or Queen Victoria for some reason decided to destroy human civilization, it seems almost certain that they would fail, even given all the resources of their empires.
The same cannot be said for Kennedy or Khrushchev.
Supplementing memory with experience sampling
If you asked me how happy I've been, I'd think back over my recent life and synthesize my memories into a judgement. Since I'm the one experiencing my life you would think this would be accurate, but our memories aren't fair. For example, people who had their hand in 57° water for 60 seconds rated the experience as less pleasant than people who had their hand in the same 57° water for the same 60 seconds, followed by 30 seconds with the water slowly rising to 59°. (Kahneman 1993, pdf) This is the peak-end rule: when we look back at an experience, we don't really consider its duration, and instead evaluate it based on how it was at its peak and how it ended.
This disagreement between emotion as it is experienced and emotion as it is remembered is called the memory-experience gap, and the peak-end rule is only one of the causes. The problem is, generally we only have access to memories of our emotion, which means if you're given the ice-water choice you'll repeatedly choose the option with more suffering. How can we get around this?
When psychologists want to get at experiential emotion they give people little timers. Every time the timer goes off the person writes down how happy/sad they are at that moment. This is an external sampling method that lets us use any sort of aggregation we would like, and it's fair in a way our internal methods are not. When I first read about this I thought "neat" and moved on, but recently I realized that with a computer in my pocket I could do this myself. After asking around I ended up with the TagTime Android app, which is the only way I've found to do this that (a) works without an internet connection and (b) gives every moment an equal probability of being sampled.
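The "equal probability at every moment" property comes from drawing the gaps between pings from an exponential distribution, which is memoryless: knowing when the last ping fired tells you nothing about when the next one is due. A minimal sketch of that kind of schedule (TagTime's actual algorithm and default gap length may differ; the 45-minute mean is an assumption here):

```python
import random

def ping_times(start_minute, mean_gap=45.0, horizon=24 * 60):
    """Yield ping times (in minutes) over the next `horizon` minutes.

    Exponentially distributed gaps make this a Poisson process:
    every moment is equally likely to be sampled, so pings can't
    be anticipated or gamed.
    """
    t = start_minute + random.expovariate(1.0 / mean_gap)
    while t < start_minute + horizon:
        yield t
        t += random.expovariate(1.0 / mean_gap)
```

With a 45-minute mean gap you'd expect roughly 32 pings per day, each landing at an unpredictable moment.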
The response screen looks like:

[screenshot: TagTime response screen with tappable tags]
You tap tags to say which ones currently apply. I have them sorted by frequency. To add new tags you turn the phone sideways and type text:

[screenshot: landscape tag-entry screen]
That's a little annoying, but most of the time I'm not entering a new tag.
I have tags for happiness (numbers 0-9, added as I need them), for aspects of activities, and for people I'm with. Every so often I email the data to myself and add it to my full log, which backs a graph of my happiness over time.
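Once the samples are in one log, any aggregation is possible. Here's a sketch of the kind of daily averaging that could back such a graph (the `(timestamp, tags)` data format below is hypothetical, not TagTime's actual log format):

```python
from collections import defaultdict
from datetime import datetime

def daily_happiness(samples):
    """Average the numeric (0-9) happiness tags for each day sampled.

    `samples` is a list of (ISO timestamp, list-of-tag-strings) pairs;
    non-numeric tags (activities, people) are ignored.
    """
    by_day = defaultdict(list)
    for stamp, tags in samples:
        day = datetime.fromisoformat(stamp).date()
        by_day[day].extend(int(t) for t in tags if t.isdigit())
    return {day: sum(scores) / len(scores)
            for day, scores in by_day.items() if scores}
```

Because every sample carries equal weight, this average is duration-fair in exactly the way remembered judgements are not.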
Retrospective happiness still matters; you want to be happy with your life looking back. But because retrospection is the view we already have, we're already aware of it and already optimize for it. Adding sampled data should let us adjust that optimization to fix the things that are important but hidden by our biased memories.
What truths are actually taboo?
LessWrong has been having fun lately with posts about sexism, racism, and academic openness. And here just like everywhere else, somebody inevitably claims taboo status for any number of entirely obvious truths, e.g. "top level mathematicians and physicists are almost invariably male," "black people have lower IQ scores than white people," and "black people are statistically more criminal than whites." In my experience, these are not actually taboo, and I think my experience is generalizable. I'll illustrate.
You're at a bar and you meet a fellow named Bill. Bill's a nice guy, but somehow the conversation strayed Hitler-game style to World War II. Bill thinks the war was avoidable. Bill thinks the Holocaust would not have happened were it not for the war, and that some of the Holocaust was a reaction to actual Jewish subterfuge and abuse. Bill thinks that the Holocaust was not an essential, early plan of the Nazis, because it only happened after the war began. Bill thinks that the number of casualties has been overestimated. Bill claims that Allied abuses, e.g. the bombing of Dresden, have been glossed over and ignored, while fantastic lies about Jews being systematically turned into soap have propagated. Bill thinks that the Holocaust has become a sort of national religion, abused by self-interested Jews and defenders of Zionist foreign policy, and that the freedom of those who doubt it is under serious attack. Bill starts listing other things he's not allowed to say. Bill doesn't think that the end of slavery was all that good for "the blacks," and that the negatives of busing and forced integration have often outweighed the positives. Bill has personally been the victim of black-on-white crimes and racism. Bill is a hereditarian. Bill doesn't think that dropping an n-bomb should ruin a public career.
Here's the problem: everything Bill has said is either true, a matter of serious debate, or otherwise a matter of high likelihood and reasonableness. Yet you feel nervous. Perhaps you're upset. That's the power of taboo, right? Society is punishing truth-telling! First they came for the realists... Rationalists, to arms!
Or.
We can recognize that statements like these correlate with certain false beliefs and nasty sentiments of the sort that actually are taboo. It's just like when somebody says, "well science doesn't know everything." To this, I think, "duh, and you're probably a creationist or medical quack or something similarly credible." Or when somebody says, "the government lies to us." To this, I think, "obviously, and you're likely a Truther or something." Bill is probably an anti-Semite, but Bill doesn't just say, "I'm an anti-Semite," because that really is taboo. He might even believe that he shouldn't be considered something awful like an anti-Semite. Bill probably doesn't think Bill so unpleasant.
That's the paradox: "taboo" statements like black crime statistics are to some extent "taboo" for sound, rationalist reasons. But the "taboo" is not absolute: it's about context. People who think that such statements are taboo are probably bad at communicating, and when listeners suspect such speakers of racism or misogyny, they are often right, on good rationalist grounds. If you want to talk about statistics on the topic of race, be ready for your listeners to draw background inferences about the other views you might hold.
All this is the leadup to my question: what highly probable or effectively certain truths are genuinely taboo? I'm trying to avoid answers like "there are fewer women in mathematics" or "the size of my penis," since these are context sensitive, but not really taboo within a reasonable range of circumstances. I'm also not particularly interested in value commitments or ideologies. Yes, employers will punish labor organizers and radical political views can get you filtered. But these aren't clear matters of fact. I also don't mean sensitive topics like abortion or religion, nor do I mean "taboo within a political party."
Is there really anything true that we simply cannot say? I have the US in mind especially, but I'm interested in other countries as well. I'm sure there are things that deserve the label, but I've found that the most frequently given examples don't hold water. I think hereditarianism is a close contender, but it's not an "obvious truth." Rather, my understanding is that it is a serious position. It's also only contextually taboo. If it were a definitive finding, it could perhaps become taboo, though I think it more likely that it would be somewhat reluctantly accepted.
Any suggestions? If we find some really serious examples, we might figure out a way to talk about them.
Four Tips for Public Speaking
TL;DR: as offered and promised in the Post Request Thread, here is a guide to the four highest-value tips I know for public speaking. Here they are, with explanations below:
- Fortissimo! Don't apologize for talking
- Know the first and last line of your comment before you open your mouth
- Think about speeches/comments as having a narrative arc
- Look for additional emotional tones to layer on the content
“I have only one word of advice to give you.”
“Give already.”
“That word is fortissimo… it’s Italian for loud. When in doubt, shout, that’s what I’m telling you.”
“I should shout? Everyone will hear for sure how bad I am.”
“But, my dear brother, if you sing loud and clear, it will be easier on the audience. You’re making it doubly hard on them. Hard to listen to and hard to hear.”
- Thesis
- Evidence 1
- Evidence 2
- Evidence 3
- Thesis restated
X is the main guy; he wants to do:
Y is the bad guy; he wants to do:
They meet at Z and all L breaks loose.
If they don’t resolve Q, then R starts; if they do, it’s L squared.
Ever notice how you always X when you'd really like to Y? So did I! I tried Z and it turned out to work, but I wasn't sure why! I poked around in the literature and found A,B, and C, which caused me to tweak my solution to Z' and now I Y all the time, and you can too!
Frustration: [Ever notice how you always X when you'd really like to Y?] Shared identity, all of us looking at the frustration together: [So did I!] I tried Z and it turned out to work, but pleased but perplexed: [I wasn't sure why!] I poked around in the literature and surprise, but increasing feeling of catharsis: [found A,B, and C, which caused me to tweak my solution to Z'] and triumph: [now I Y all the time], return of fellow feeling and pleasure at sharing something cool: [and you can too!]
But there's more you can add. One friend of mine was explaining a counterintuitive study in a fairly matter of fact way, but it was a lot more enjoyable and memorable to hear about if she shared her surprise at how it turned out. A lot of the time, it's simplest to just make sure you're letting your honest reactions to what you're saying come across.
But, if you're not sure what those are, or want to explore other options, you can try dividing what you're saying into beats. (Beats is a phrase used in theatre for subdivisions within scenes. In one conversation or story, the dominant emotional tone can change, and that transition is the start of a new beat). So, try dividing up your notes or your outline into sections and just experiment with the dominant tone for the section. Here's a reworking of the emotional beats in my teaching outline:
Sadness, regret: [Ever notice how you always X when you'd really like to Y?] Shame shared as vulnerability: [So did I!] I tried Z and it turned out to work, but tentative, a little uncertain: [I wasn't sure why!] I poked around in the literature and feeling of tinkering and assembly: [found A,B, and C, which caused me to tweak my solution to Z'] and peace, tranquility: [now I Y all the time], warmth, joy: [and you can too!]
Try looking at this list of some possible emotional tones, and see what it's like when you use them as you talk through your outline. Try reading wrong tones to a friend, to notice why they're wrong or to catch yourself unnecessarily restricting your options. Sometimes the tone changes several times in one passage (as in the marked-up example above); pay attention to what prompts each shift. You can also pick an existing speech or sentence and read it deliberately with a different tone each time, to get some practise and comfort using them.
So, if you work on these tips, people will be more comfortable listening to what you say (1), you'll open and close strongly (2), with a narrative arc that keeps you on track and makes your points memorable (3), and enough emotional variation to keep your audience engaged with you and your content (4). Huzzah!