Follow-up to: Every Cause Wants To Be A Cult, Cultish Countercultishness

One of the classic demonstrations of the Fundamental Attribution Error is the 'quiz study' of Ross, Amabile, and Steinmetz (1977). In the study, subjects were randomly assigned to either ask or answer questions in quiz-show style, and were observed by other subjects who were asked to rate them for competence/knowledge. Even knowing that the assignments were random did not prevent the raters from rating the questioners higher than the answerers. Of course, when we rate individuals highly the affect heuristic comes into play, and if we're not careful that can lead to a super-happy death spiral of reverence. Students can revere teachers or science popularizers (even devotion to Richard Dawkins can get a bit extreme at his busy web forum) simply because their interactions occur in domains where the students know less. This is certainly a problem with blogging, where the blogger chooses to post in domains of expertise.

Specifically, Eliezer's writing at Overcoming Bias has provided nice introductions to many standard concepts and arguments from philosophy, economics, and psychology: the philosophical compatibilist account of free will, utility functions, standard biases, and much more. These are great concepts, and many commenters report that they have been greatly influenced by their introductions to them at Overcoming Bias, but the psychological default will be to overrate the messenger. This danger is particularly great given his engaging writing style, and given that posts often don't note when a point is already extant in the literature and is being relayed or reinvented. To address a few cases of the latter: Gary Drescher covered much of the content of Eliezer's Overcoming Bias posts (mostly very well), from timeless physics to Newcomb's Problem to quantum mechanics, in his book Good and Real back in May 2006, while Eliezer's irrealist meta-ethics would be very familiar to modern philosophers like Don Loeb or Josh Greene, and isn't so far from the 18th-century philosopher David Hume.

If you're feeling a tendency toward cultish hero-worship, reading such independent prior analyses is a noncultish way to defuse it, and the history of science suggests that this procedure will be applicable to almost anyone you're tempted to revere. Wallace invented the idea of evolution through natural selection independently of Darwin, and Leibniz and Newton independently developed calculus. With respect to our other host, Hans Moravec came up with the probabilistic Simulation Argument long before Nick Bostrom became known for reinventing it (possibly with forgotten influence from reading Moravec's book, or from its influence on interlocutors). When we post here we can make an effort to find and explicitly acknowledge such influences or independent discoveries, to recognize the contributions of Rational We, as well as Me.

Even if you resist revering the messenger, a well-written piece that purports to summarize a field can leave you ignorant of your ignorance. If you only read the National Review or The Nation you will pick up a lot of political knowledge, including knowledge about the other party/ideology, at least enough to score well on political science surveys. However, that very knowledge means that missing pieces favoring the other side can be more easily ignored: someone who didn't believe the other side was made up of Evil Mutants with no reasons at all might be tempted to investigate, but ideological media can forestall this by supplying reasons for the other side that are plausible, yet not so plausible as to tempt their audience. For a truth-seeker, the lesson is: beware explanations of the speaker's opponents.

This sort of intentional slanting and misplaced trust is less common in more academic sources, but it does occur. For instance, top philosophers of science have been caught failing to beware of Stephen Jay Gould, copying his citations and misrepresentations of work by Arthur Jensen without having read either the work in question or the more scrupulous treatments in the writings of Jensen's leading scientific opponents, the excellent James Flynn and Richard Nisbett. More often, space constraints mean that a work will spend more words and detail on the view being advanced (Near) than on those rejected (Far), and limited knowledge of the rejected views will lead to omissions. Unless you read the major alternatives to the views of whoever introduced you to a field, in their proponents' own words or, even better, in neutral textbooks, you will underrate opposing views.

What do LW contributors recommend as the best articulations of alternative views to OB/LW majorities or received wisdom, or neutral sources to put them in context? I'll offer David Chalmers' The Conscious Mind for reductionism, this article on theistic modal realism for the theistic (not Biblical) Problem of Evil, and David Cutler's Your Money or Your Life for the average (not marginal) value of medical spending. Across the board, the Stanford Encyclopedia of Philosophy is a great neutral resource for philosophical posts.

Offline Reference:

Ross, L. D., Amabile, T. M. & Steinmetz, J. L. (1977). Social roles, social control, and biases in social-perceptual processes. Journal of Personality and Social Psychology, 35, 485-494.

72 comments

I was actually quite amazed to find how far Gary Drescher had gotten, when someone referred me to him as a similar writer - I actually went so far as to finish my free will stuff before reading his book (am actually still reading) because after reading the introduction, I decided that it was important for the two of us to write independently and then combine independent components. Still ended up with quite a lot of overlap!

But I think it probably is unfair to judge Drescher as being at all representative of what ordinary philosophers can do. Drescher is an AI guy. And the comments on the back of his book seem to indicate that he was writing in a mode that philosophical readers found startling and new.

Drescher is not alternative mainstream philosophy. Drescher is alternative Yudkowsky.

I've referred to Drescher and SEP a few times. The main reason I don't refer more to conventional philosophy is that it doesn't seem very good as a field at distinguishing good ideas from bad ones. If you can't filter your good ideas and present them in a non-needlessly-complicated fashion, is there much point in pointing a reader to it?

But I've taken into account that Greene was able to rescue Roko where I could not, and I've promoted him on my list of things to read.

"But I think it probably is unfair to judge Drescher as being at all representative of what ordinary philosophers can do."

I agree, and I didn't do so (I used Dennett-type compatibilism in my list of representative views that you conveyed). Even when you do something exceptionally good independently, it can help to defuse affective death spirals to make clear that it's not quite unique.

"If you can't filter your good ideas and present them in a non-needlessly-complicated fashion, is there much point in pointing a reader to it?"

This is an authorial point of view. Readers need heuristics to confirm that this is what is going on, and not something less desirable, for particular authors and topics. If they can randomly check some of your claims against the leading rival views and see that the latter are weak, that's useful.

TheAncientGeek (-3 points, 10y)
Alternatively, philosophy is good at finding arguments for a wide variety of ideas, which is evidence against the meta-level idea that there is a simplistic distinction between good and bad ideas. What is the evidence for that meta-level idea?

I don't think I agree with your conclusion. It seems to assume that ideas are somehow representation-independent -- and in practical programming as well as practical psychology, that idea is a non-starter.

Or to put it another way, someone who can state a point more eloquently than its originator knows something that its originator does not. Sure, the communicator shouldn't get all the credit... but more than a little is due.

wedrifid (5 points, 14y)
It could be 'How to write eloquently in a context-independent manner'.
A1987dM (3 points, 12y)
Cf. Feynman stating that since he wasn't able to explain the spin-statistics theorem at a freshman level, he concluded he hadn't actually fully understood it himself.

How much non-Eliezer stuff is there on the practical "how to" of rationality, e.g. techniques for improving one's accuracy in the manner of Something to Protect, Leave a Line of Retreat, The Bottom Line, and taking care not to rehearse the evidence?

EDIT: Sorry to add to the comment after Carl's response. I had the above list in there already, but omitted an "http://", which caused the second half of my sentence to somehow not show up.

Eliezer Yudkowsky (9 points, 12y)
There's Robyn Dawes's Rational Choice in an Uncertain World which is highly similar in spirit, intent, and style to the kind of defeat-the-bias writing in the Sequences. (And it's quoted accordingly when I borrow something.)
CarlShulman (7 points, 15y)
It's balkanized, but a lot of psychologists have written on overcoming the biases they study, e.g. the Implicit Association Test researchers suggesting that people with a negative IAT for some group keep pictures of outstanding individuals from that group around, or Cialdini talking about how to avoid being influenced by the social psychology effects he discusses in Influence.
AnnaSalamon (8 points, 15y)
Okay, yes, I've read some of that. But how much of rationality were you practicing before you ran into Eliezer's work? And where did you learn it? Also, are there other attempts at general textbooks? Double also, are there sources of rationality "how to" content you'd recommend whose content I probably haven't learned from Eliezer's posts, besides the academic heuristics and biases literature?

I read decision theory, game theory, economics, evolutionary biology, epistemology, and psychology (including the heuristics and biases program), then tried to apply them to everyday life.

I'm not aware of any general rationality textbooks or how-to guides, although there are sometimes sections discussing elements in guides for other things. There are pop science books on rationality research, like Dan Ariely's Predictably Irrational, but they're rarely 'how-to' focused to the same extent as OB/LW.

Paul Crowley (1 point, 15y)
Yes - where else is there an attempt to put together a coherent and in some sense complete account of what rationality is and how to reach it?

The article on theistic modal realism is ingenious. (One-sentence summary: God's options when creating should be thought of as ensembles of worlds, and most likely he'd create every world that's worth creating, so the mere fact that ours is far from optimal isn't strong evidence that it didn't arise by divine creation.)

I don't find the TMR hypothesis terribly plausible in itself -- my own intuitions about what a supremely good and powerful being would do don't match Kraay's -- but of course a proponent of TMR could always just reject my intuitions as I'd reject theirs.

However, I think the TMR hypothesis should be strongly rejected on empirical grounds.

  1. It is notable -- and this is one element of a typical instance of the Argument From Evil -- that our world appears to be governed by a bunch of very strict laws, which it obeys with great precision in ways that make substantial divine intervention almost impossible. It seems that there are many many many more possible worlds in which this property fails than in which it holds, simply because the more scope there is for intervention the more ways there are for things to happen. Therefore, unless the sort of lawlikeness we observe is

... (read more)
Alexei (3 points, 14y)
Granting that god exists, cares about us, and can change the world even in tiny ways, it's very likely god would use those small interventions as a base to create the perfect world (kind of like AI FOOM). It follows that any world in which god has some minimum level of control will converge to the perfect world. Given that we are not in the perfect world, we can conclude that god does not have that minimum level of control.

Excellent post. Having just read The Adapted Mind (and, earlier, The Moral Animal), I can see where Eliezer got a lot of his material on evolutionary psychology.

However, all authors must walk a tightrope between appeasing the Carl Shulmans of the world, who have read everything, and introducing some background for naive readers beyond telling them to simply catch up on their own. I think he in general does a good job of erring on the side of more complexity, which is what I appreciate, so I of course forgive him. :)

A niche that a good author might consider filling is actually including the numbers from the experiments they reference, i.e., the experimental scores and their standard errors, etc. It might turn off the innumerate, but I think that raw numbers and effect sizes are grossly underreported by science writers.

"However, all authors must walk a thin rope between appeasing the Carl Shulman's of the world who have read everything and introducing some background for naive readers beyond telling them to simply catch up on their own."

I don't need to be appeased, and I strongly endorse the project of providing that introduction. My post was about ways for readers and authors to manage some of the drawbacks of success.

I agree that sample and effect sizes are grossly under-reported, often concealing that an experiment with a sexy conclusion had almost no statistical power, or that an effect was statistically significant but too small to matter. It seems possible for this to become a general science journalism norm, but only if a word-efficient and consistent format for in-text description can be devised, like conflict-of-interest acknowledgments.

pnrjulius (0 points, 12y)
How about this? "The study had 27 participants, 15 women and 12 men. The difference between men and women was on average 2 points on a questionnaire ranging from 0 to 100 points." This clearly explains the (small) sample size and (weak) effect size without requiring any complicated statistics.
Benya (5 points, 12y)
Actually, it doesn't tell you the effect size, since it doesn't include information about how much individuals in each group differ from each other. If the difference between the group means is 2 points and the standard deviation in each group is 5 points, that's the same effect size (in the technical Cohen sense) as if the difference is 10 points and the standard deviation is 25 points.

I think a useful way to report data like this might be a variation on: "If you chose one of the women at random and one of the men at random, the probability that the woman would have a higher score would be 53%."

Aaaand, in order not to completely miss the point of the original article, ETA: I'm not sure how much of that suggestion is my own thinking, but I was certainly influenced by reading about the binomial effect size display, which solves a related problem in a similar way, and after I had the idea myself I came across something very similar in Rebecca Jordan-Young's Brainstorm (p. 52, and endnote 4 on p. 299): mental rotation ability is "considered to be the largest and most reliable gender difference in cognitive ability"; using data from a meta-analysis, she notes that if you tried to guess someone's gender based on their score in a mental rotation test, using your best strategy you'd get it right 60% of the time. (I checked that math a while ago and got the same result, assuming normal distributions with equal variances in each group, with Cohen's d = .56; the meta-analysis is Voyer, Voyer & Bryden, 1995.)

It's annoying that IIRC, "guess the gender" and "in a random pair, who has the higher score" don't give the same number, though. Average readers will probably just see a percentage in each case and derive some measure of affect from the number, whichever interpretation you give.
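For concreteness, here's a minimal sketch of the arithmetic behind both interpretations, assuming (as above) two normal distributions with equal variances separated by Cohen's d; the function names are just illustrative:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_higher_in_random_pair(d):
    """P(a random member of the higher-mean group outscores a random member
    of the other group): in SD units, X - Y ~ N(d, 2), so Phi(d / sqrt(2))."""
    return phi(d / sqrt(2.0))

def best_guess_accuracy(d):
    """Accuracy of guessing group membership from a score, thresholding
    at the midpoint between the two group means: Phi(d / 2)."""
    return phi(d / 2.0)

d = 0.56  # mental rotation, per Voyer, Voyer & Bryden (1995)
print(f"random pair:     {p_higher_in_random_pair(d):.1%}")  # ~65%
print(f"guess the group: {best_guess_accuracy(d):.1%}")      # ~61%, roughly the 60% figure
```

For d = 0.56 these come out to roughly 65% and 61%, which is exactly the annoyance noted above: the two interpretations yield different percentages for the same effect size.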
[anonymous] (0 points, 14y)
It is easier to read comments that include quoted paragraphs when they are quoted with Markdown syntax. Instead of using quotation marks for isolated paragraphs, use a greater-than sign (>) at the start of the intended quote.

EDIT: I agree with your conclusion, but...

(Checks Don Loeb reference.)

While, unsurprisingly, we end up adding to the same normality, I would not say that these folks have the same metaethics I do. Certainly Greene's paper title "The Terrible, Horrible, No Good, Very Bad Truth About Morality" was enough to tell me that he probably didn't have exactly the same metaethics and interpretation I did. I would not feel at all comfortable describing myself as a "moral irrealist" on the basis of what I've seen so far.

Drescher one-boxes on Newcomb's Problem, but doesn't seem to have invented quite the same decision theory I have.

I don't think Nick ever claimed to have invented the Simulation Argument - he would probably be quite willing to credit Moravec.

On many other things, I have tried to use standard terminology where I actually agree with standard theories, and provide a reference or two. Where I am knowingly being just a messenger, I do usually try to convey that. But you may be reading too much into certain similarities that also have important points of difference or further development.

EDIT2: I occasionally notice the problem you point to, and write a blog post telling people to read more textbooks. Probably this is not enough. I'll try to reach a higher standard in any canonicalized versions.

I think the biggest issue here is your tendency to not cite sources other than yourself, which is an immediate turn-off to academics. To an academic, it suggests the following questions (amongst others): If your ideas are so good, why hasn't anyone else thought of them? Doesn't anyone else have an opinion on this - do you have a response to their arguments? Are you actually doing work in your field without having read enough to cite those who agree or disagree with you?

(I know this isn't a new issue, but it seems it bears repeating.)

Other questions that are implicitly asked:

  • Why are you not signalling in-group status?
  • Why are you not signalling alliance with me or my allies by inventing excuses to refer to us?
  • Are you an outsider trying to claim our territory in cognitive space?
  • Are you talking about topics that are reserved for those with higher status in our group than we assign you?
TheAncientGeek (2 points, 10y)
This point could count against any amateur philosopher. What is more pertinent: why insist you are doing better than the professionals? You should assume you are making mistakes and reinventing wheels. Why not learn the standard jargon? You may not have the time or inclination to learn the whole subject, but the jargon is the most valuable thing to learn, because it enables you to communicate with professionals who can help you, if you are able to admit to yourself that, as an amateur, you might need help. There are some failure modes that are part and parcel of being an amateur, and some further ones that take you into crank territory.
steven0461 (8 points, 15y)
I took a look at Greene's dissertation when Roko started pushing it, but I don't think Greene's views are much like Eliezer's at all. Specifically he doesn't seem to emphasize what Eliezer calls the "subjectively objective" quality of morality, or the fact that people may be mistaken as to what their morality says. Correct me if I'm wrong. I agree with the rest of the original post.
CarlShulman (2 points, 15y)
I agree about the difference of emphasis but I don't think they have a major substantive disagreement on those issues. You can check with Owain Evans, who knows him.
CarlShulman (3 points, 15y)
Greene doesn't really think it's horrible, just that people mistakenly think it's horrible and recoil from irrealism about XML 'rightness tags' on actions because they think it would mean that they should start robbing and murdering. Nick does acknowledge Moravec on his website now, after being informed about it (he wasn't aware before that). Perhaps I shouldn't have covered both being a messenger and acknowledgment of related independent work in the same post.
Roko (1 point, 15y)
Yes, I detected a hint of irony in the title. The thesis is that it isn't actually that horrible, rather that people don't want to face up to the truth, and it is because of this somewhat irrational fear that even considering the possibility of antirealism is avoided.
Roko (2 points, 15y)
Have you read it? It takes about a day and a half to read, and I think that he points out an error in the position that you took in the "p-right" etc. discussions on OB. Would it be off topic for me to do a post on this? Other than that, he takes the same position you do. I recommend that you read his dissertation, and then email him to discuss the application of this set of ideas to transhumanism/singularity. He would probably be interested.
Eliezer Yudkowsky (3 points, 15y)
In this case, I'd actually say email me first with a quickie description.
CarlShulman (9 points, 15y)
Roko exaggerates. It's only 377 pages and written in an accessible style. It summarizes the ethical literature on moral realism, and takes the irrealist view that XML tags on actions don't exist, and that even if they did exist we wouldn't care about them. It then goes into the psychology literature (Greene does experimental philosophy, e.g. finding that people misinterpret utility as having diminishing marginal utility in contravention to experimental instructions), e.g. Haidt's work on social intuitionism, to explain why it is that we think there are these moral properties 'out there' when there aren't any. Lastly, he argues that we can get on with pursuing our concerns (reasoning about conflicts between our concerns, implications, instrumental questions, etc), but suggests that awareness of the absence of XML tags can help us to better understand and deal with those with differing moral views.
Eliezer Yudkowsky (4 points, 15y)
This explains a LOT.
CarlShulman (5 points, 15y)
http://www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-Baron-JBDM-01.pdf Enjoy.
PhilGoetz (0 points, 15y)
?
Vladimir_Nesov (9 points, 15y)
There is no objective truth about which actions are the right ones, no valuation inherent in the actions themselves. And even if there was, even if you could build a right-a-meter and check which actions are good, you won't care about what it says, since it's still you that draws the judgment.

Is the idea here to counsel us against some sort of halo effect? Eliezer Yudkowsky has told me a lot of interesting things about heuristics and biases, and about how intelligence works, but I shouldn't let this affect my judgement too much if he recommends a movie?

Or is it more than that - just that I should be careful when reading anything by Eliezer, and take into account the fact that I'm probably slightly too inclined to trust it, because I've liked what came before? Because then of course, we have the issue that I should be more likely to trust an au... (read more)

For much of what EY is setting out, trust isn't an appropriate relationship to have with it. You trust that he's not misrepresenting the research or his knowledge of it, and you have a certain confidence that it will be interesting, so if an article doesn't seem rewarding at first you're more likely to put work in to squeeze the goodness out. But most of it is about making an argument for something, so the caution is not to trust it at all but to properly evaluate its merits. To trust it would be to fail to understand it.

CarlShulman (5 points, 15y)
"Because then of course, we have the issue that I should be more likely to trust an author who is usually right - and this just says that I should be careful not to trust them too much more." Right.

I like EY's writings, but don't hold them up as gospel. For instance, I think this guy's summary of Bayes Theorem (http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem) is much more readable and succinct than EY's much longer (http://yudkowsky.net/rational/bayes) essay.

I recently read Greene's essay and I thought it was a nice buttressing of ideas that I was originally exposed to in 2001 while reading "Beyond anthropomorphism". The challenge with Eliezer's earlier writing is that it is too injected with future shock to be comfortable for most non-transhumanists to read. The challenge with Eliezer's more recent writing is that it is too long for a blog format and much more suited for a book, which forces people to focus on the one thing.

The title of Greene's thesis is tongue-in-cheek. Based on my understanding of Eliezer's conception of morality, I would definitely call him irrealist.

[anonymous] (0 points, 12y)
Well, inasmuch as mathematicians are irrealists.

The work of Jon Haidt is very enlightening.

This evening I had the pleasure of reading his Edge article on the benefits of religion, where he takes on some prominent new atheists - Myers, Sam Harris, etc. I quote:

When hurricane Katrina struck, religious groups across the country organized quickly to send volunteers and supplies. Like fraternities, religions may generate many positive externalities, including charity, social capital (based on shared trust), and even team spirit (patriotism). If all religious people lost their faith overnight and abandone

... (read more)

No, Enlightenment 2.0 requires rationalist task forces as tightly-knit, dedicated, and fast-responding as religious task forces, better coordinated and better targeted, maybe even more strongly motivated, to do every good thing that religion ever did and more.

IMHO.

Roko (4 points, 15y)
I think that Haidt underestimates the power of irrationality as a force for evil and chaos, which is a point that people like you make very well. The point he makes well is the power of religion to bring out our "inner bee" and just make us co-operate. This underlines a point I made earlier about the power of generalists. If Richard Dawkins, Josh Greene, Jon Haidt, Marvin Minsky, Gary Drescher, and Tversky and Kahneman could put all of their brains together into one big head, they'd have all of your insights plus more. But they're separate, isolated specialists, so the world had to wait for a generalist. IMO modern academia's largest problem is its specialization.

It's really helpful to have good info borne to me, though, in a readable and engaging fashion. For some reason I never wound up reading the Stanford Encyclopedia of Philosophy, but I did read Eliezer's philosophical zombie movie script.

CarlShulman (0 points, 15y)
You can appreciate the communication of the info, just don't blur your valuation of the info itself with your valuations of the communication and communicator.

That pointer to Gary Drescher is much appreciated. Eliezer's explanations about determinism and QM make me feel "aha, now it's obvious, how could it be any other way", but I hate single-sourcing knowledge.

Just a brief mention since we're supposed to avoid AI for a while, but it is too relevant to this post to totally ignore: I just finished J. Storrs Hall's "Beyond AI"; the overlap and differences with Eliezer's FAI are very interesting, and it is a very readable book.

EDIT: You all might notice I did write "overlap and differences"; I noticed the differences, but I do think they are interesting; not least because they seem similar to some of Robin's criticisms of Eliezer's FAI.

MichaelGR (3 points, 15y)
I've read it too, but made the mistake of reading it right after Gödel, Escher, Bach. Hard to compare. What surprised me most was how much of what was written in a book published in 2007 was more or less the same as in a book published in 1979. I expected more new promising developments since then, and that was a bit of a downer.
Eliezer Yudkowsky (2 points, 15y)
I think I see a lot more difference between my own work and others' work than some of my readers may.
PhilGoetz (5 points, 15y)
I think that's inevitable, if for no other reason that someone reading two treatments of one subject that they don't completely understand is likely to interpret them in a correlated way. They may make similar assumptions in both cases; or they may understand the one they read first, and try to interpret the one they read second in a similar way.
CarlShulman (2 points, 15y)
Hall gives a passable history of AI, acts as a messenger for a lot of standard AI ideas, including the Dennett compatibilist account of free will and some criticisms of nonreductionist accounts of consciousness, and acts as a messenger for a stew of social science ideas, e.g. social capital and transparent motivations, although the applicability of the latter is often questionable. Those sections aren't bad. It's only when he gets to considering the dynamics of powerful intelligences and offers up original ideas that he makes glaring errors. Since that's your specialty, those mistakes stand out as horribly egregious, while casual readers might miss them or think them outweighed by the other sections of the book.

I see differences between you and Drescher, or you and Greene, both in substance (e.g. some clear errors in Drescher's book when he discusses the ethical value of rock-minds, neglecting the possibility that happy experiences of others could figure in our utility functions directly, rather than only through game-theoretic interactions with powerful agents) and in presentation/formalization/frameworks. We could try to quantify percentage overlap in views on specific questions.
[anonymous] (1 point, 15y)
This is a good example of why I don't bother to cite what others perceive as "related work", frankly.

I think you have some great points here.

The "textbooks" link is broken.

CarlShulman (2 points, 15y)
Fixed.

The Don Loeb and theistic modal realism links are broken. Also, the Stanford Encyclopedia of Philosophy link seems to "point" to a passage from another LW post rather than a URL.

[anonymous] (0 points, 12y)

I have never really regarded EY as anything other than the guy who wrote a bunch of good ideas in one place. The ideas are good on their own merits, and after being made aware that Quine(?) invented that "Philosophy = Psychology" thing, I have had some healthy sense of 'he's right, but probably not original.' And really, who cares? He is right, but don't shoot the messenger; ad hominem is still ad hominem even if it is positive. Empty agreements are as bad as empty dismissals.

Isn't this intuitively obvious? Or am I just very, very rational?

[anonymous] (0 points, 14y)

Terry Pratchett is another good person who seems to want to go out on his own terms.

[anonymous] (0 points, 15y)

There's a large overlap between the ideas on LW and OB, and a book by J. Storrs Hall called "Beyond AI". That book is a popularization. But so is OB/LW, most of the time - at least the posts, if not the comments.

The reason i love Elizier is how many people he must have attracted this art of rationality, and that without him and this site i wouldn't even know where to begin or where to look, and how he is one of surprisingly few people able to convey the information in such tasty little bits. He may not be the smartest in his field, and may 'just' be passing on things he learned from others, but his work is super valuable, for he does what the others don't. Also, Methods of Rationality happens to be on my top 3 list of Greatest Pieces of Writing IMO, so that adds a... (read more)

Lumifer (1 point, 10y)
Do you love him enough to spell his name right..?
Entraya (0 points, 10y)
I would like it if you didn't linger so much on a mere spelling mistake. I had no muscle memory for how to spell this entirely foreign name. Eliezer Yudkowski; i hope you are satisfied, for thy name is surely glorious and worthy of praise. I've also first discovered the mail-notification system, hence why it took me so long to respond
polymathwannabe (0 points, 10y)
I Googled "Elizier Yudkowsky" and the first suggestion was an OKCupid profile. Talk about loving him!
Vulture (-1 point, 10y)
I might have found your comment witty if you hadn't also downvoted. Don't be a jerk. (My apologies if it wasn't your downvote. Obviously I'm pretty confident, though)
Lumifer (0 points, 10y)
Sigh. I very rarely downvote any comments in the subthreads I participate in. I did not vote, up or down, on any comment in this subthread. Want to recalibrate your confidence? :-P
Vulture (2 points, 10y)
Gladly. In retrospect, my comment was obnoxious even if it had been right. In the future I'll try to realize this without being wrong first. edit: Out of curiosity, why do you usually not vote in threads you're participating in?
Lumifer (8 points, 10y)
If I am already talking to people, I can explain my likes and dislikes in words -- without using the crude tool of votes. For me comments and votes are two alternate ways of expressing my attitude, it is rare that I want to use both. Besides, it feels more "proper", in the vaguely ethical way, to not up- or down-vote people with whom I am conversing. Not that I think it should be a universal rule, that's just a quirk of mine.
Vaniver (1 point, 10y)
I have no qualms about upvoting people that I'm responding to or who have responded to me, but I have a much higher threshold for downvoting responses to my comments and posts, both to try to compensate for the human tendency to get defensive and to increase the probability that the conversation is pleasant.
TheOtherDave (3 points, 10y)
Agreed with all of this, but the last bit makes me curious... does downvoting someone who is involved in an exchange with a third party decrease the probability that the conversation is pleasant for the two of them?
Vulture (2 points, 10y)
Well, being downvoted, especially when it puts one in the negatives, stirs up bad feelings, which might make someone less likely to behave pleasantly in a conversation.