In response to comment by Jotto999 on Bayesian Judo
Comment author: juliawise 13 February 2012 02:50:16AM 21 points

If I were the host I would not like it if one of my guests tried to end a conversation with "We'll have to agree to disagree" and the other guest continued with "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree." In my book this is obnoxious behavior.

Having fun at someone else's expense is one thing, but holding it up in an early core sequences post as a good thing to do is another. Given that we direct new Less Wrong readers to the core sequence posts, I think they indicate what the spirit of the community is about. And I don't like seeing the community branded as being about how to show off or how to embarrass people who aren't as rational as you.

What gave me an icky feeling about this conversation is that Eliezer didn't seem to really be aiming to bring the man round to what he saw as a more accurate viewpoint. If you've read Eliezer being persuasive, you'll know that this was not it. He seemed more interested in proving that the man's statement was wrong. It's a good thing for people to learn to lose graciously when they're wrong, and to learn from the experience. But that's not something you can force someone to learn from the outside. I don't think the other man walked away from this experience improved, and I don't think that was Eliezer's goal.

I, like you, love a good argument with someone who also enjoys it. But to continue arguing with someone who's not enjoying it feels sadistic to me.

If I were in this conversation, I would try to frame it as a mutual exploration rather than a mission to discover which of us was wrong. At the point where the other tried to shut down the conversation, I might say, "Wait, I think we were getting to something interesting, and I want to understand what you meant when you said..." Then proceed to poke holes, but in a curious rather than professorial way.

In response to comment by juliawise on Bayesian Judo
Comment author: Jotto999 13 February 2012 05:04:09PM 1 point

Interesting. Do we have any good information on the attributes of discussions or debates that are most likely to educate the other person when they disagree? In hindsight this has been a large shortcoming of mine: I have debated for years now but never invested much in trying to optimize my approach with people.

Something I've noticed: when someone takes the "conquer the debate" adversarial approach, a typical-minded audience appears more likely to be interested and to side with the "winner" than if the person takes a much more reserved and cooperative approach with equally well-supported arguments. Maybe the first works well for typical audiences and the second for above-typical ones? Or maybe it doesn't matter if we can foster the second in "typical" minds. Given my uncertainty, it seems highly unlikely that my approach with people is optimal.

Do you have any tips for someone interested in making a mental habit out of cooperative discussion as opposed to adversarial debate? I find it very difficult; I'm an aggressive and vigorous person. Maybe if I could see a video of someone using the better approach, I could try to emulate them.

Comment author: beoShaffer 28 January 2012 03:38:29AM 20 points

Less Wrong

The Modern Rationality Institute.

The institute for Bayesian Reasoning

Center for applied rationality.

The Applied Rationality Institute

Comment author: Jotto999 12 February 2012 11:19:38PM 0 points

I like how Center for Applied Rationality sounds, though it might be too long. Or maybe that isn't a problem, and the number of times I type the acronym CAR would simply increase.

How about Colligate Institute? Though maybe Colligate is too obscure a word (Google Chrome's spell-checker has it underlined in red).

In response to Bayesian Judo
Comment author: juliawise 08 August 2011 07:37:07PM 9 points

This post's presence so early in the core sequences is the reason I nearly left LW after my first day or two. It gave me the impression that a major purpose of rationalism was to make fun of other people's irrationality rather than to change or improve either party. In short, to act like a jerk.

I'm glad I stuck around long enough to realize this post wasn't representative. Eliezer, at one point you said you wanted to know if there were characteristically male mistakes happening that would deter potential LWers. I can't speak for all women, but this post exemplifies a kind of male hubris that I find really off-putting. Obviously the woman in the penultimate paragraph appreciated it in someone else, but I don't know if it made her think, "This is a community I want to hang out with so I, too, can make fools of other people at parties."

In response to comment by juliawise on Bayesian Judo
Comment author: Jotto999 12 February 2012 09:44:56PM 2 points

Before I say anything, I should mention that this is my first post on LW, and being only partway through the sequences I am hesitant to comment yet. But I am curious about positions like yours.

What I find peculiar about your position is that Yudkowsky did not, as he presented it here, initiate the argument. The other person did, by asserting "only God can make a soul," implying that Yudkowsky's profession is impossible or nonsensical. Vocalizing any assertion should, in my opinion, be viewed as a two-way street that invites potential criticism. In this particular case, the assertion concerned a subject the man knew would be of great interest to Yudkowsky, certainly far more than, say, whether the punch being served had mango juice in it.

I'd like to know what you think Yudkowsky should have done in that situation. Do you expect him not to give his own opinion, given the other person's challenge? Or was it instead something in particular about the way he did it? Isn't arguing inevitable, and all we can do is try to build better dialogue? (That has been my conclusion for the last few years.) Either way, I don't see the hubris you seem to. My usual complaint about discussions is that they are not well informed enough, and that people tend to say things that are too vague to be useful, or outright unsupported. But I rarely see a discussion and think, "Well, the root problem here is that they are too arrogant," so I'd like to know what your reasoning is.

It may be relevant that in real life I am known by some as "aggressive" and "argumentative." You probably could have inferred that from my position, but I'd like to keep everything about it as transparent as possible.

Thank you for your time.

In response to Dying Outside
Comment author: Jotto999 25 April 2010 10:08:31PM 1 point

This is very inspiring for me! It makes me appreciate having such a mobile and agile body.

Have you seen Aubrey de Grey's TED talk? Or looked into organ printing, or other life-extension technologies speculated to be available within ten or twenty years?

I'm not entirely sure how they could be applied to ALS patients, but they certainly would offer a chance of not just living longer, but maybe some day regaining some function.

By choosing death, you will be forfeiting any chance of being helped by these potential new technologies. By choosing life, if you can just live long enough, you might see the days of indefinite lifespan.

Either way though, your story is very uplifting, and I hope you do live long enough to see indefinite lifespan. I hope everyone does. :)
