
A model of arguments

-7 Elo 01 September 2016 12:41PM

Original post: http://bearlamp.com.au/a-model-of-arguments/

Why do we argue, when we could be discussing things in a productive manner? Arguments often occur because the parties involved simply don't have the tools to transmit their ideas clearly.  In this kind of situation, the whole conversation can break down completely. It's easy to spend a lot of time saying "You're wrong" without accomplishing anything.

Let's imagine two people having an argument, represented by a Venn diagram: a black circle for Person A's opinion and a blue circle for Person B's.  They each see the issue slightly differently.  The concept of "You're wrong" falls into the area describing the other person's ideas.

Person A says, "You're wrong" to Person B.  This is a claim about the state of B's ideas, not one that represents A's own ideas.  Naturally, Person B says, "No, you're wrong" back, making an equally unfounded claim on A's conceptual real estate.  What is hard to see is the conflict accidentally generated by crossing into each other's territory to declare things.

[diagram]

Doing this creates a crossing-over of ideas, with each party making claims inside the other's circle.

[diagram]

We don’t even need to know what the argument is about, but we can expect something like this to happen:

[diagram]

Now suppose instead of Person A saying, "You're wrong", where they place the burden of argument (and proof) on the opposition, they now say, "We disagree".  

[diagram]

Person B can now continue to make the same argument of "You're wrong".  But so long as Person A shrugs and replies "We disagree", there is no conflict in the argument.  

[diagram]

For some Person Bs, Person A might get lucky, and the two could end up at the happy middle ground of "Yes, we disagree".  This is already a step in the right direction, and will let the pair continue to sort out precisely where and how they disagree.  On the other hand, a stubborn Person B will still present a problem.

[diagram]

Hey, that's the internet for you! You win some, you lose some.

[diagram]

Nonetheless, the shared ground offered by "We disagree" will often spur constructive discussion.

[diagram]


As it turns out, there is another way.  When you go to understand someone else's idea, instead of starting with "You are wrong", consider starting with, "I am wrong". Right from the start, this gives you an advantage. Rather than starting off from a position of conflict, you start off in a position of equality.

[diagram]

Sometimes the other party won't accept your peace offering. They will bristle and rage and prepare for the offensive.

[diagram]

But it's far more common to see an offer of equality met by an acceptance of that equality. Instead of things going downhill, this usually happens:

[diagram]

Or this:

[diagram]

And a pleasant discussion can ensue.

Why is this so great? Because what we're aiming for here - what we really want out of discussions - is this:

[diagram]

What we are aiming for is to trade knowledge until we can reach conclusions together.

This style of measured, polite and constructive conversation can only occur when parties meet each other on equal terms.

If there's one lesson to take home from this post, it's that the way you deliver your argument can easily be what makes it powerful. If you come in throwing punches, ready to take your opponent down a notch or two, you might enjoy yourself - but don't expect to have a constructive discussion. Whereas if you approach your opponent as an equal from the shared ground of "We disagree", or even from the vulnerable position of "I am wrong" - well, what reasonable opponent could disagree with that?


This post took me weeks of thinking, and only 3 hours to write down and draw the first time.  But it was rubbish; it didn't make sense.  The rewrite, with contributions from the Captain and the Slack, took another 2 hours.  This version gets the point of the idea across.  I sent the original post to Tim@waitbutwhy, but he is very busy and declined to draw pictures to go along with it.

Cross posted to lesswrong: 

[Stub] The problem with Chesterton's Fence

4 Stuart_Armstrong 05 January 2016 05:10PM

Chesterton's meta-fence: "in our current system (democratic market economies with large governments) the common practice of taking down Chesterton fences is a process which seems well established and has a decent track record, and should not be unduly interfered with (unless you fully understand it)".

[LINK] AI risk summary published in "The Conversation"

8 Stuart_Armstrong 14 August 2014 11:12AM

A slightly edited version of "AI risk - executive summary" has been published in "The Conversation", titled "Your essential guide to the rise of the intelligent machines":

The risks posed to human beings by artificial intelligence in no way resemble the popular image of the Terminator. That fictional mechanical monster is distinguished by many features – strength, armour, implacability, indestructability – but Arnie’s character lacks the one characteristic that we in the real world actually need to worry about – extreme intelligence.

Thanks again for those who helped forge the original article. You can use this link, or the Less Wrong one, depending on the audience.

The failure of counter-arguments argument

14 Stuart_Armstrong 10 July 2013 01:38PM

Suppose you read a convincing-seeming argument by Karl Marx, and get swept up in the beauty of the rhetoric and clarity of the exposition. Or maybe a creationist argument carries you away with its elegance and power. Or maybe you've read Eliezer's take on AI risk, and, again, it seems pretty convincing.

How could you know if these arguments are sound? Ok, you could whack the creationist argument with the scientific method, and Karl Marx with the verdict of history, but what would you do if neither was available (as they aren't available when currently assessing the AI risk argument)? Even if you're pretty smart, there's no guarantee that you haven't missed a subtle logical flaw, a dubious premise or two, or haven't got caught up in the rhetoric.

One thing should make you believe the argument more strongly: if the argument has been repeatedly criticised, and the criticisms have failed to puncture it. Unless you have the time to become an expert yourself, this is the best way to evaluate arguments where evidence isn't available or conclusive. After all, opposing experts presumably know the subject intimately, and are motivated to identify and illuminate the argument's weaknesses.

If counter-arguments seem incisive, pointing out serious flaws, or if the main argument is being continually patched to defend it against criticisms - well, this is strong evidence that the main argument is flawed. Conversely, if the counter-arguments continually fail, then this is good evidence that the main argument is sound. Not logical evidence - a failure to find a disproof doesn't establish a proposition - but good Bayesian evidence.

In fact, the failure of counter-arguments is much stronger evidence than whatever is in the argument itself. If you can't find a flaw, that just means you can't find a flaw. If counter-arguments fail, that means many smart and knowledgeable people have thought deeply about the argument - and haven't found a flaw.

And as far as I can tell, critics have constantly failed to counter the AI risk argument. To pick just one example, Holden recently provided a cogent critique of the value of MIRI's focus on AI risk reduction. Eliezer wrote a response to it (I wrote one as well). The core of Eliezer's and my response wasn't anything new; they were mainly a rehash of what had been said before, with a different emphasis.

And most responses to critics of the AI risk argument take this form. Thinking for a short while, one can rephrase essentially the same argument, with a change in emphasis to take down the criticism. After a few examples, it becomes quite easy, a kind of paint-by-numbers process of showing that the ideas the critic has assumed do not actually make the AI safe.

You may not agree with my assessment of the critiques, but if you do, then you should adjust your belief in AI risk upwards. There's a kind of "conservation of expected evidence" here: if the critiques had succeeded, you'd have reduced the probability of AI risk, so their failure must push you in the opposite direction.
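Conservation of expected evidence is just arithmetic on probabilities: your prior must equal the probability-weighted average of your possible posteriors, so if one outcome would raise your belief, the other outcome must lower it. A minimal sketch with made-up illustrative numbers (none of these figures come from the post):

```python
# Conservation of expected evidence: prior = E[posterior].
# All numbers below are hypothetical, chosen only for illustration.
prior = 0.5            # P(argument is sound), before seeing the critique's fate
p_crit_fails = 0.7     # P(the critique fails)
post_if_fails = 0.6    # P(sound | critique fails) -- belief goes up

# The identity prior = post_if_fails * P(fails) + post_if_succeeds * P(succeeds)
# pins down the remaining posterior:
post_if_succeeds = (prior - post_if_fails * p_crit_fails) / (1 - p_crit_fails)
print(round(post_if_succeeds, 3))  # belief must go down (to roughly 0.27)

# Check: the expected posterior recovers the prior exactly.
expected_posterior = (post_if_fails * p_crit_fails
                      + post_if_succeeds * (1 - p_crit_fails))
print(abs(expected_posterior - prior) < 1e-12)
```

Because `post_if_fails` is above the prior, `post_if_succeeds` is forced below it: you cannot coherently expect a critique's failure to confirm your view while its success leaves your view unchanged.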

In my opinion, the strength of the AI risk argument derives 30% from the actual argument, and 70% from the failure of counter-arguments. This would be higher, but we haven't yet seen the most prominent people in the AI community take a really good swing at it.

Writing feedback requested: activists should pursue a positive Singularity

3 michaelcurzi 16 November 2011 09:14PM

I managed to turn an essay assignment into an opportunity to write about the Singularity, and I thought I'd turn to LW for feedback on the paper. The paper is about Thomas Pogge, a German philosopher who works on institutional efforts to end poverty and is a pledger for Giving What We Can.

I offer a basic argument that he and other poverty activists should work on creating a positive Singularity, sampling liberally from well-known Less Wrong arguments. It's more academic than I would prefer, and it includes some loose talk of 'duties' (which bothers me), but for its goals, these things shouldn't be a huge problem. But maybe they are - I want to know that too.

I've already turned the assignment in, but when I make a better version, I'll send the paper to Pogge himself. I'd like to see if I can successfully introduce him to these ideas. My one conversation with him indicates that he would be open to actually changing his mind. He's clearly thought deeply about how to do good, and may simply have not been exposed to the idea of the Singularity yet.

I want feedback on all aspects of the paper  - style, argumentation, clarity. Be as constructively cruel as I know only you can.

If anyone's up for it, feel free to add feedback using Track Changes and email me a copy - mjcurzi[at]wustl.edu. I obviously welcome comments on the thread as well.

You can read the paper here in various formats.

Upvotes for all. Thank you!