A model of arguments
Original post: http://bearlamp.com.au/a-model-of-arguments/
Why do we argue, when we could be discussing things in a productive manner? Arguments often occur because the parties involved simply don't have the tools to transmit their ideas clearly. In this kind of situation, the whole conversation can completely break down. It’s easy to spend a lot of time saying "You're wrong", without accomplishing anything.
Let's imagine two people having an argument, represented by a Venn diagram: a black circle for Person A's opinion and a blue circle for Person B's. They each see the issue slightly differently. The concept of "You're wrong" falls into the area that describes the other person's ideas.
Person A says, "You're wrong" to Person B. This is a description of the state of B's ideas, not one that represents A's own. Naturally, Person B says, "No, you're wrong" back, making an equally unfounded claim on A's conceptual real estate. What's hard to demonstrate is the conflict accidentally generated by crossing into each other's territory to declare things.
Doing so creates a crossing-over of ideas.
We don’t even need to know what the argument is about, but we can expect something like this to happen:
Now suppose instead of Person A saying, "You're wrong", where they place the burden of argument (and proof) on the opposition, they now say, "We disagree".
Person B can now continue to make the same argument of "You're wrong". But so long as Person A shrugs and replies "We disagree", there is no conflict in the argument.
For some Person Bs, Person A might get lucky, and the two could end up with a happy middle ground of "Yes, we disagree". This is already a step in the right direction, and will let the pair continue to sort out precisely where and how they disagree. On the other hand, a stubborn Person B will still present a problem.
Hey, that's the internet for you! You win some, you lose some.
Nonetheless, the shared ground offered by "We disagree" will often spur constructive discussion.
As it turns out, there is another way. When you go to understand someone else's idea, instead of starting with "You are wrong", consider starting with, "I am wrong". Right from the start, this gives you an advantage. Rather than starting off from a position of conflict, you start off in a position of equality.
Sometimes the other party won't accept your peace offering. They will bristle and rage and prepare for the offensive.
But it's far more common to see an offer of equality met by an acceptance of that equality. Instead of things going downhill, this usually happens:
Or this:
And a pleasant discussion can ensue.
Why is this so great? Because what we're aiming for here - what we really want out of discussions - is this:
What we are aiming for is to trade knowledge until we converge on answers.
This style of measured, polite and constructive conversation can only occur when parties meet each other on equal terms.
If there's one lesson to take home from this post, it's that the way you deliver your argument can easily be what makes it powerful. If you come in throwing punches, ready to take your opponent down a notch or two, you might enjoy yourself - but don't expect to have a constructive discussion. Whereas if you approach your opponent as an equal from the shared ground of "We disagree", or even from the vulnerable position of "I am wrong" - well, what reasonable opponent could disagree with that?
This post took me weeks of thinking about, and only 3 hours to write down and draw the first time. But it was rubbish; it didn't make sense. The rewrite, contributed by the Captain and the slack, took another 2 hours. This version gets the point across. I sent the original post to Tim@waitbutwhy, but he is very busy and declined to draw pictures to go along with it.
Cross-posted to LessWrong.
The ladder of abstraction and giving examples
Original post: http://bearlamp.com.au/examples/
When we talk about a concept or make a point, it's important to understand the ladder of abstraction. It has been covered before on LessWrong and elsewhere, as advice for communicators on how to bridge a knowledge gap.
Knowing, understanding and feeling the ladder of abstraction prevents things like this:
- Speakers who bury audiences in an avalanche of data without providing the significance.
- Speakers who discuss theories and ideals, completely detached from real-world practicalities.
When you talk to old and wise people, they will sometimes tell you stories from their lives: "Back in my day...". Seen in perspective, this may be their way of moving around the ladder of abstraction. As an agenty-agent of agenty goodness, your job is to make sense of this occurrence. The ladder of abstraction is very powerful when used effectively, and very frustrating when you find yourself on the wrong side of it.
The flipside to this example is when people talk at a highly theoretical level. I suspect this happens to philosophers, as well as hippies. They are very good at telling you about the connections between things like "energy" or "desire", but lack the grounding to explain how that applies to real life. I don't blame them. One day I will be able to think completely abstractly; today is not that day. Since today is not that day, it is my duty and yours to ask and specify: explain what the ladder of abstraction is, and then tell them you have no idea what they are talking about. Or, as for the example above, ask them to go up a level on the ladder of abstraction: "If I were to learn something from your experiences, what would it be?"
Lesswrong doing it wrong
I care about adding the conceptual ladder of abstraction to the repertoire for a reason. LWers are very good at paying attention to details, a really powerful and important ability. After all, the fifth virtue is argument and the tenth is precision. If you can't be precise about what you are communicating, you fail to value what we value.
Which is why it's great to see critical objections to the examples OPs provide.
But I object when defeating an example does not defeat the rule. Our delightful OP may survey their territory, stride forth and proclaim a map for it and a few similar mountains or valleys. Correcting the map of those mountains and valleys changes neither the rest of the territory nor the rest of the map.
This does matter. Recently a copy of this dissertation came around the slack - https://cryptome.org/2013/09/nolan-nctc.pdf. It is a report detailing the ridiculous culture inside the CIA and other US government security institutions. One of the biggest problems within that culture can be shown through this example (page 34 of the report):
The following exchange is a good example, told to me by a CIA analyst who was explaining the rules of baseball to visitors who didn’t know the game:
Analyst A: So there are four bases--
Analyst B: -- Well, no, it’s really three bases plus home plate.
Analyst A: ... Okay, three bases plus home plate. The batter hits the ball and advances through the bases one by one—
Analyst C: -- Well, no, it doesn’t have to be one base at a time.
And these examples, from page 35:
The following excerpts from stories people have told me or that I witnessed further illustrate this concept:
John: I see you’ve drawn a star on that draft.
Bridget: Yeah, that’s just my doodle of choice. I just do it unconsciously sometimes.
John: Don’t you mean subconsciously?
Scott: Good morning!
Employee in the parking lot: Well, I don’t know if it’s good, but here we are.
Helene: I am so thirsty today! I seriously have a dehydration problem.
Lucy: Actually, you have a hydration problem.
Victoria: My hopes have been squashed like a pancake.
James: Don’t you mean flattened like a pancake?
For those of us who don't have time to read 215 pages: the point is that analyst culture does this, a lot. From the outside it might seem ridiculous. We can confidently say that analysts A, B and C in the first example were all right, and that if they had paid attention to the object of the situation, they would have skipped the interruptions and got to the point of explaining how baseball works. But that's not what it feels like when you are on the inside.
The report argues that habits like these make analyst culture a difficult one to be part of or engage with.
We do the same thing. We nitpick at examples, and fight over irrelevant things. If I were to change everyone's mind, I would rather see something like this:
Statements including "no one denies that ..." are usually false. Regardless, my goal here was to...
Taken literally, yes. However these statements are not intended to be taken literally...
Turn into:
(*Yes, this is not a very good example of an example; it is an example of a turn of phrase being challenged, but the same effect of nitpicking irrelevant details is present.)
Nitpicking is not necessary.
Sometimes we forget that we are all in the same boat together, racing down the river at the rate at which we can uncover truth. Sometimes it feels like we are in different boats, racing each other. If that were the case, it would make sense to compete, accusing each other of our failures along the journey in order to get ahead. But we do not want to do that.
It's in our nature to compete: the human need to be right! But we don't need to compete against each other; we need to support each other in competing against Moloch, akrasia, entropy, fallacies and biases (among others).
I am guilty myself. In my personal life as well as on LW. If I am laying blame, I blame myself for failing to point this out sooner, more than I blame anyone else for nitpicking examples.
The plan of action.
Next time you go to comment, next time I go to comment: think very carefully about whether you can improve, whether I can improve, the post being commented on, before levelling objections at it. We want to make the world a better place. People wiser, older, sharper and wittier than me have already said it: "If you are looking for where to start... you need only look in the mirror."
Meta: this took 3 hours to write.
Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction
Cross Posted at the EA Forum
At Event Horizon (a Rationalist/Effective Altruist house in Berkeley) my roommates yesterday were worried about Slate Star Codex. Their worries also apply to the Effective Altruism Forum, so I'll extend them.
The Problem:
Lesswrong was for many years the gravitational center for young rationalists worldwide, and it permits posting by new users, so good new ideas had a strong incentive to emerge.
With the rise of Slate Star Codex, the incentive for new users to post content on Lesswrong went down. Posting at Slate Star Codex is not open, so potentially great bloggers are not incentivized to write up their own ideas, only to comment on the ones there.
The Effective Altruism forum doesn't have that particular problem. It is however more constrained in terms of what can be posted there. It is after all supposed to be about Effective Altruism.
We thus have three different strong attractors for the large community of people who enjoy reading blog posts online and are nearby in idea space.
Possible Solutions:
(EDIT: By possible solutions I merely mean "these are some bad solutions I came up with in 5 minutes; the reason I'm posting them here is that if I post bad solutions, other people will be incentivized to post better solutions.")
If Slate Star Codex became an open blog like Lesswrong, more people would consider transitioning from passive lurkers to actual posters.
If the Effective Altruism Forum got as many readers as Lesswrong, there could be two gravity centers at the same time.
If the moderation and self-selection of Main were changed into something that attracts those who have been on LW for a long time, and Discussion were changed into something like a newcomers' discussion, LW could go back to being the main space, with a two-tier system (perhaps one modulated by karma as well).
The Past:
In the past there was Overcoming Bias, and Lesswrong in part became a stronger attractor because it was more open. Eventually LessWrongers migrated from Main to Discussion, and from there to Slate Star Codex, the 80k blog, the Effective Altruism Forum, back to Overcoming Bias, and Wait But Why.
It is possible that Lesswrong had simply exhausted its capacity.
It is possible that a new higher tier league was needed to keep post quality high.
A Suggestion:
I suggest two things should be preserved:
1. Interesting content being created by those with more experience and knowledge, who have interacted in this memespace for longer (part of why Slate Star Codex is powerful), and
2. The opportunity (and total absence of trivial inconveniences) for new people to try creating their own new posts.
If these two properties are kept, there is a lot of value to be gained by everyone.
The Status Quo:
I feel like we are living in a very suboptimal blogosphere. On LW, Discussion is more read than Main, which means what is being promoted to Main is not attractive to the people who are actually reading Lesswrong. The top tier quality for actually read posting is dominated by one individual (a great one, but still), disincentivizing high quality posts by other high quality people. The EA Forum has high quality posts that go unread because it isn't the center of attention.
Some secondary statistics from the results of the LW Survey
Global LW (N=643) vs USA LW (N=403) vs. Average US Household (Comparable Income)

| Income Bracket | LW Mean Contributions | USA LW Mean Contributions | US Mean Contributions** [1] | LW Mean Income | USA LW Mean Income | US Mean*** Income [1] | LW Contributions/Income | USA LW Contributions/Income | US Contributions/Income [1] |
|---|---|---|---|---|---|---|---|---|---|
| $0-$25,000 (41% of LW) | $1,395.11 | $935.47 | $1,177.52 | $11,241.14 | $11,326.18 | $15,109.85 | 12.41% | 8.26% | 7.79% |
| $25,000-$50,000 (17% of LW) | $438.25 | $571.00 | $1,748.08 | $34,147.14 | $32,758.06 | $38,203.79 | 1.28% | 1.74% | 4.58% |
| $50,000-$75,000 (12% of LW) | $1,757.77 | $1,638.59 | $2,191.58 | $60,387.69 | $61,489.30 | $62,342.05 | 2.91% | 2.66% | 3.52% |
| $75,000-$100,000 (9% of LW) | $1,883.36 | $2,211.81 | $2,624.81 | $84,204.09 | $83,049.54 | $87,182.68 | 2.24% | 2.66% | 3.01% |
| $100,000-$200,000 (16% of LW) | $3,645.73 | $3,372.84 | $3,555.02 | $123,581.28 | $124,577.88 | $137,397.03 | 2.95% | 2.71% | 2.59% |
| >$200,000 (5% of LW) | $14,162.35 | $15,970.67 | $15,843.97 | $296,884.63 | $299,444.44 | $569,447.35 | 4.77% | 5.33% | 2.78% |
| Total | $2,265.56 | $2,669.85 | $3,949.26 | $62,285.72 | $75,130.37 | $133,734.60 | 3.64% | 3.55% | 2.95% |
| All < $200,000 | $1,689.36 | $1,649.32 | $2,515.29 | $51,254.43 | $58,306.81 | $81,207.03 | 3.30% | 2.83% | 3.10% |
Global LW (N=643) vs USA LW (N=403) vs. Average US Citizen (Comparable Age)

| Age Bracket* | LW Median | US LW Median | US Median*** [2] |
|---|---|---|---|
| 15-24 | $17,000.00 | $20,000.00 | $26,999.13 |
| 25-34 | $50,000.00 | $60,504.00 | $45,328.70 |
| All <35 | $40,000.00 | $58,000.00 | $40,889.57 |
Global LW (N=407) vs USA LW (N=243) vs. Average US Citizen (Comparable IQ)

| | Average LW** | US LW | US Between 125-155 IQ [3] |
|---|---|---|---|
| Median Income | $40,000.00 | $58,000.00 | $60,528.70 |
| Mean Contributions | $2,265.56 | $2,669.85 | $2,016.00 |
Note: Three data points were removed from the sample due to my subjective opinion that they were fake. Any self-reported IQs of 0 were removed. Any self-reported income of 0 was removed.
*89% of the LW population is between the age of 15 and 34.
**88% of the LW population has an IQ between 125 and 155, with an average IQ of 138.
***Median numbers were adjusted down by a factor of 1.15 to account for the fact that the source data reported household median income rather than individual median income.
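The derived columns in the tables are simple arithmetic. As an illustrative sketch (the function names are mine, not part of the original survey analysis), the Contributions/Income percentages and the 1.15 household-to-individual adjustment described in the notes can be reproduced like this:

```python
# Sketch of how the tables' derived figures can be computed.
# Helper names are illustrative; the inputs below come from the
# "Total" row of the first table.

def contributions_ratio(mean_contributions: float, mean_income: float) -> float:
    """Contributions as a percentage of income."""
    return 100 * mean_contributions / mean_income

def household_to_individual(household_median: float, factor: float = 1.15) -> float:
    """Adjust a household median down to an approximate individual median."""
    return household_median / factor

# LW "Total" row: $2,265.56 mean contributions on $62,285.72 mean income.
ratio = contributions_ratio(2265.56, 62285.72)
print(f"{ratio:.2f}%")  # 3.64%, matching the table
```

The adjustment factor of 1.15 is the one stated in the notes; it is a rough correction, not an exact household-size model.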
[1] Internal Revenue Service, Charitable Giving by Households that Itemize Deductions (AGI and Itemized Contributions Summary by Zip, 2012), The Urban Institute, National Center for Charitable Statistics
[2] U.S. Census Bureau, Current Population Survey, 2013 and 2014 Annual Social and Economic Supplements.
[3] Jay L. Zagorsky, "Do you have to be smart to be rich? The impact of IQ on wealth, income and financial distress", Intelligence, Vol. 35, No. 5 (September 2007).
Update 1: Updated charts 1 and 2 to account for the fact that the source data was calculating household median income rather than individual income.
Update 2: Reverted Chart 1 back to original because I realized that the purpose was to compare LWers to those in similar income brackets. So in that situation, whether it's a household or an individual is not as relevant. It does penalize households to an extent because they have less money available to donate to charity because they're splitting their money three ways.
Update 3: Updated all charts to include data that is filtered for US only.
Less Wrong Sequences+Website feed app for Android
I use my Android phone much more than my computer, and reading the Sequences on a mobile device is a pain. I needed an easy way to access the Sequences, but since there are no apps for this website, I had to create one myself. Since I'm no app developer, I used the IBuildApp.com website (trustworthy according to my research) to make the application.
Features:
* Read ALL of the main Sequences and most of the minor ones
* RSS feed to LessWrong.com for latest articles
* No ads!
Drawbacks:
* Requires an Internet connection: I individually copy-pasted each Sequence (from the compilations of posts that many people have made) into the app. Unfortunately, the app development website does not store these in the app itself but on its server, so accessing a Sequence requires an Internet connection.
* The home screen doesn't look good, because I couldn't get an appropriately sized logo that the website would accept. The Index (where you access the Sequences) looks pretty neat, though.
If there are any mobile app developers here, please try to make a better version (hopefully one where data is saved offline). I made this for personal use, so it's functional but could be done much better by a professional. I'm posting it here for other Android users (especially newbies like me) who might find it useful.
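For anyone attempting a rewrite, the RSS side of the app is easy to prototype. Here is a minimal sketch in Python of parsing an RSS 2.0 feed like the one the app reads; the sample XML is illustrative, and a real app would first download the live feed over HTTP:

```python
# Minimal sketch of extracting article titles and links from an RSS 2.0 feed.
# The sample feed is a stand-in; a real app would fetch the live LessWrong
# feed with urllib before parsing.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>LessWrong</title>
    <item>
      <title>Example post</title>
      <link>http://lesswrong.com/example</link>
    </item>
  </channel>
</rss>"""

def latest_articles(feed_xml: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs for each item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [
        (item.findtext("title", ""), item.findtext("link", ""))
        for item in root.iter("item")
    ]

print(latest_articles(SAMPLE_FEED))  # [('Example post', 'http://lesswrong.com/example')]
```

A native rewrite would cache the parsed items locally, which would also fix the app's biggest drawback: requiring an Internet connection for every read.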
Pictures: (screenshots of the app, not reproduced here)
Download Link: http://174.142.192.87/builds/00101/101077/apps/LessWrongSequences.apk