
How can we get more and better LW contrarians?

56 Post author: Wei_Dai 18 April 2012 10:01PM

I'm worried that LW doesn't have enough good contrarians and skeptics, people who disagree with us or like to find fault in every idea they see, but do so in a way that is often right and can change our minds when they are. I fear that when contrarians/skeptics join us but aren't "good enough", we tend to drive them away instead of improving them.

For example, I know a couple of people who occasionally had interesting ideas that were contrary to the local LW consensus, but were (or appeared to be) too confident in their ideas, both good and bad. Both people ended up being repeatedly downvoted and left our community a few months after they arrived. This must have happened more often than I have noticed (partly evidenced by the large number of comments/posts now marked as written by [deleted], sometimes with whole threads written entirely by deleted accounts). I feel that this is a waste that we should try to prevent (or at least think about how we might). So here are some ideas:

  • Try to "fix" them by telling them that they are overconfident and give them hints about how to get LW to take their ideas seriously. Unfortunately, from their perspective such advice must appear to come from someone who is themselves overconfident and wrong, so they're not likely to be very inclined to accept the advice.
  • Create a separate section with different social norms, where people are not expected to maintain the "proper" level of confidence and niceness (on pain of being downvoted), and direct overconfident newcomers to it. Perhaps through no-holds-barred debate we can convince them that we're not as crazy and wrong as they thought, and then give them the above-mentioned advice and move them to the main sections.
  • Give newcomers some sort of honeymoon period (marked by color-coding of their usernames or something like that), where we ignore their overconfidence and associated social transgressions (or just be extra nice and tolerant towards them), and take their ideas on their own merits. Maybe if they see us take their ideas seriously, that will cause them to reciprocate and take us more seriously when we point out that they may be wrong or overconfident.
I guess these ideas sounded better in my head than written down, but maybe they'll inspire other people to think of better ones. And it might help a bit just to keep this issue in the back of one's mind and occasionally think strategically about how to improve the person you're arguing against, instead of only trying to win the particular argument at hand or downvoting them into leaving.
P.S., after writing most of the above, I saw this post:
OTOH, I don’t think groupthink is a big problem. Criticism by folks like Will Newsome, Vladimir Slepnev and especially Wei Dai is often upvoted. (I upvote almost every comment of Dai or Newsome if I don’t forget it. Dai always makes very good points, and Newsome is often wrong but also hilariously funny or just brilliant and right.) Of course, folks like this Dmytry guy are often downvoted, but IMO with good reason.
To be clear, I don't think "groupthink" is the problem. In other words, it's not that we're refusing to accept valid criticisms, but more like our group dynamics (and other factors) cause there to be fewer good contrarians in our community than is optimal. Of course what is optimal might be open to debate, but from my perspective, it can't be right that my own criticisms are valued so highly (especially since I've been moving closer to the SingInst "inner circle" and my critical tendencies have been decreasing). In the spirit of making oneself redundant, I'd feel much better if my occasional voice of dissent were just considered one amongst many.

Comments (328)

Comment author: pragmatist 19 April 2012 09:25:29AM *  25 points [-]

I disagree with quite a lot of the LW consensus, but I haven't really expressed my criticisms in the few comments I've made. I differ substantially from the Sequences' line on metaethics, reductionism, materialism, epistemology, and even the concept of truth. My views on these things are similar in many respects to those of Hilary Putnam and even Richard Rorty. Those of you familiar with the work of these gentlemen will know how far off the reservation this places me. For those of you who are not familiar with this stuff, I guess it wouldn't be a stretch to describe me as a postmodernist.

I initially avoided voicing my disagreements because I suspect that my collection of beliefs is not only regarded as false by this community, but also as a fairly reliable indicator of woolly thinking and a lack of technical ability. I didn't want to get branded right off the bat as someone not worth engaging with. The thought was that I should first establish some degree of credibility within the community by restricting myself to topics where the inferential distance between the average LWer and me is small. I think wannabe contrarians entering into any intellectual community should be encouraged to expend some initial effort on credibility-building by talking about stuff on which they by and large agree with the community. I haven't been following LessWrong for that long, but I gather that there was a time when Will Newsome's comments were a lot more.... orthodox. I'm guessing that fact has a lot to do with the way his criticisms are received now.

Another big reason I avoid talking about my disagreements is that they are sufficiently fundamental that I expect a large amount of pushback. I know I find it very hard to disengage from argument, and I suspect that's also true of a significant proportion of the posters here, so I'm worried that the discussion will be a horrible time suck. I really can't afford that right now. Perhaps at some time in the future, when I have a little more time, I'll write a discussion post detailing some of my objections.

Comment author: wedrifid 19 April 2012 09:42:27AM 1 point [-]

I haven't been following LessWrong for that long, but I gather that there was a time when Will Newsome's comments were a lot more.... orthodox. I'm guessing that fact has a lot to do with the way his criticisms are received now.

He can still be found on the SingInst about us page.

Another big reason I avoid talking about my disagreements is that they are sufficiently fundamental that I expect a large amount of pushback. I know I find it very hard to disengage from argument, and I suspect that's also true of a significant proportion of the posters here, so I'm worried that the discussion will be a horrible time suck. I really can't afford that right now. Perhaps at some time in the future, when I have a little more time, I'll write a discussion post detailing some of my objections.

You do your name justice.

Comment author: Will_Newsome 19 April 2012 10:28:51AM 6 points [-]

He can still be found on the SingInst about us page.

(In case it's not obvious the description is not at all currently accurate. I am currently in the process of doing nothing. At some point I firmly decided that doing things is evil, so I try not to do things anymore, at least as a stopgap solution till I better understand the relevant motivational dynamics and moral philosophy. I still talk to people sometimes though, obviously, but to some extent I feel guilty about that too.)

Comment author: TheOtherDave 19 April 2012 03:40:37PM 8 points [-]

Would it help you behave more morally by your lights if nobody replied to you?

Comment author: michaelsullivan 20 April 2012 07:22:44PM 3 points [-]

After a long hiatus from deep involvement in comment threads here -- I actually can't tell if this is serious, or a brilliant mockery of Eliezer's decisions around creating AGI [*]

Comment author: wedrifid 19 April 2012 01:31:33PM 5 points [-]

At some point I firmly decided that doing things is evil, so I try not to do things anymore

I still act socially as a Christian in much of my social life so in a certain (not epistemically literal) sense hearing this from 'another believer' strikes me as sacrilege. The Parable of the Talents has a clear point to make on this subject! You are defying His will and teachings.

Comment author: Will_Newsome 19 April 2012 01:51:26PM 1 point [-]

If only it were so easy to tell righteous exploration from liberal folly. But anyway, it's just a stopgap solution. Likely preparation for a sojourn in the desert, and after that, God knows.

Comment author: steven0461 18 April 2012 10:42:49PM *  16 points [-]

If we have less contrarianism than is optimal, it seems like the root of the problem is that people often vote for agreement rather than for expected added value. I would start looking there for a solution.

Also, the site would be able to absorb more contrarians if their bad contributions didn't cause as much damage. It would help if we exercised better judgment in deciding when a criticism is worth engaging with and when we should just stop feeding the trolls.

Comment author: David_Gerard 18 April 2012 10:45:43PM 12 points [-]

Change the mouseovers on the thumbs-up/thumbs-down icons from "Vote up"/"Vote down" to "More like this"/"Less like this". I've suggested this before and it got upvotes, I suggest now it might be time to implement it.

Comment author: Unnamed 19 April 2012 12:16:58AM 8 points [-]

I think of it as "Pay more attention to this" / "Pay less attention to this." Communicating primarily to other readers rather than to posters.

Comment author: Will_Newsome 19 April 2012 08:36:44AM 16 points [-]

Stupid alternative: Instead of up/down, have blue/green. Let chaos reign as people arbitrarily assign meaning.

Comment author: pedanterrific 19 April 2012 11:46:07AM *  19 points [-]

Classic Will_Newsome. Greenvoted.

Comment author: David_Gerard 19 April 2012 02:21:47PM *  3 points [-]

BLUE!!

... well, it said blue when I clicked on it ...

Comment author: Nornagest 19 April 2012 08:53:19AM 9 points [-]

Predicted outcome: within a couple of weeks, blue/green will have understood but undocumented positive/negative associations. Votes will be noisier, though, thanks mostly to confused newcomers and the occasional contrarian pursuing an idiosyncratic interpretation. Complaints about downvotes, and color politics jokes, will both become more common.

p = 0.7 contingent on implementation for the core claim, .5-.6 range for corollaries.

Comment author: Will_Newsome 19 April 2012 09:23:10AM *  8 points [-]

0.7 strikes me as low.

Proposed chaotic refinement: Blue/green, but switch them every 18 to 30 hours (randomly sampled, uniform distribution).

(ETA: Upon reflection days or weeks would be better, to increase chaos/noise ratio. Would also work better with prominent "top contributors for last 30 days" lists for both blue and green, and more adulation/condemnation based on those lists.)

Comment author: shokwave 20 April 2012 02:21:24AM 0 points [-]

Other refinements: each person is randomly and permanently assigned one of two conditions: either they see blue/green as labeled, or they see blue/green while it's actually green/blue behind the scenes. This makes any explicit discussion of blue/green more difficult.

Or: Each person actually has grue and bleen buttons. At some time t, they are suddenly voting for the other colours. An extended form of this looks similar to your ETA.

Comment author: faul_sname 20 April 2012 02:06:12AM 1 point [-]

Sort by greenest.

Comment author: Multiheaded 14 May 2012 10:20:23AM 1 point [-]

Let chaos reign as people arbitrarily assign meaning.

And you call yourself an anti-liberal traditionalist? :)

Comment author: thomblake 19 April 2012 02:32:08PM 2 points [-]

Frankly I think we should reconsider the early suggestion that karma on comments should be between 0 and 1, starting at 0.5.

Comment author: David_Gerard 19 April 2012 03:23:11PM 1 point [-]

1 and 999. No doubt someone will write a script to render the number in decibels ...
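
A minimal sketch in Python of what such a script might look like, assuming thomblake's 0-to-1 scale above and treating the score as a probability; the Laplace-style +1/+2 smoothing is an illustrative choice, not part of either proposal:

    import math

    def karma_score(upvotes, downvotes):
        # Score in (0, 1) that starts at 0.5 for a fresh comment
        # (Laplace-style smoothing; the +1/+2 constants are illustrative).
        return (upvotes + 1) / (upvotes + downvotes + 2)

    def decibels(score):
        # Render a probability-like score as decibels of evidence,
        # 10 * log10(odds): 0.5 is 0 dB, and 0.999 (odds of 999:1,
        # the top of the "1 and 999" range) is ~30 dB.
        return 10 * math.log10(score / (1 - score))

    print(round(decibels(karma_score(20, 1)), 1))  # 10.2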

Comment author: John_Maxwell_IV 18 April 2012 11:02:19PM 3 points [-]

I think this would discourage me from writing contrary stuff. Right now if I get voted down, I explain it to myself as me having an unpopular but possibly correct opinion. Hearing that people want "less like this" seems harsh somehow.

Comment author: Larks 19 April 2012 05:02:43AM 5 points [-]

This is the pro-airbrushing argument; airbrushing in magazines decreases body neurosis because it gives girls plausible deniability for why they don't look like models.

I say this not to pass judgement either way on your argument.

Comment author: TimS 19 April 2012 07:01:21PM 2 points [-]

Hearing that people want "less like this" seems harsh somehow.

Isn't that the point? A stimulus that is insufficiently strong to change behavior is pointless to use for behavior modification.

Comment author: steven0461 18 April 2012 11:01:34PM *  2 points [-]

Hmm. Or "Reward"/"Punish"? "Incent"/"Disincent"? "Carrot"/"Stick"?

"I like your comment, so I more like thissed it" doesn't roll off the tongue.

Comment author: Alicorn 18 April 2012 11:07:40PM 4 points [-]

"Carrot"/"Stick"?

I want to go around carroting things.

Comment author: paper-machine 19 April 2012 12:15:59AM 2 points [-]

All I could think of was this. (deep link, ten seconds long).

(Warning: Homestuck fandom, implausibly unsafe for work, unless your boss is into Homestuck.)

Comment author: RichardKennaway 18 April 2012 11:07:20PM 0 points [-]

"Reward"/"Punish"?

Please, no. As far as I'm concerned, an upvote or downvote, by me or on my posts, is not a reward or a punishment. Not even slightly.

"I like your comment, so I more like thissed it" doesn't roll off the tongue.

So much the better. I am not interested in who has upvoted or downvoted me, and I never mention my own votes.

Comment author: David_Gerard 19 April 2012 07:22:35PM *  4 points [-]

Please, no. As far as I'm concerned, an upvote or downvote, by me or on my posts, is not a reward or a punishment. Not even slightly.

I think you're wrong there. Humans are exquisitely sensitive to status, anywhere they see anything that looks even slightly like it. Upvotes/downvotes are precisely rewards/punishments, whatever else they may be or whatever you may intend yours to be.

Comment author: A4FB53AC 19 April 2012 04:12:38AM -1 points [-]

You should call it black and white. Because that's what it is, black and white thinking.

Just think about it: you compress the opinions of people who use wildly variable judgement criteria, drawn from variable populations (different people care about and vote on different topics), into nothing more than one bit of non-normalized information.

Then you're going to tell me it "works nonetheless", that it self-corrects because several people (how many do you really need to obtain such a self-correction effect?) are aggregating their opinions, and that people usually mean it to say "more / less of this please". But what's your evidence for it working? The quality of the discussion here? How much of that stems from the quality of the audience, and the quality of the base material such as Eliezer's Sequences?
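
(The "several people aggregating their opinions" mechanism at issue here is essentially the Condorcet jury theorem: aggregation self-corrects only if individual votes track quality at all, which is exactly the premise being disputed. A toy simulation, with the per-voter accuracy of 0.6 purely illustrative:)

    import random

    # Toy model of the self-correction claim: each voter independently
    # casts the "right" one-bit vote with probability p (illustrative).
    def majority_right(n, p=0.6, trials=10000):
        wins = 0
        for _ in range(trials):
            right = sum(random.random() < p for _ in range(n))
            wins += right > n - right  # strict majority gets it right
        return wins / trials

    for n in (1, 5, 25):
        print(n, majority_right(n))  # ~0.6, ~0.68, ~0.85: rises with n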

Do you realize that judgements like "more / less of this" may well optimize less than you think for content, insight, or epistemic hygiene, and more than it should for stuff that just amuses and pleases people? Jokes, famous quotes, group-think, ego grooming, etc.

People optimizing for "more like this" will eventually downgrade content into lolcats and porn. It's crude wireheading. I'm not saying this community isn't somewhat above going that deep, but we're still human beings and therefore still susceptible to it.

Comment author: NancyLebovitz 19 April 2012 06:24:24AM 6 points [-]

I've noticed that humor gets a lot of upvotes compared to good but non-funny comments. However, humor hasn't taken over, probably because being funny can take some thought.

I don't think karma conveys a lot of information at this point, though heavily upvoted articles tend to be good, and I've given up on reading down-voted articles, with the possible exception of those that get a significant number of comments.

Comment author: David_Gerard 19 April 2012 06:58:50AM 2 points [-]

People optimizing for "more like this" eventually downgrades content into lolcats and porn.

More so than "vote up"? You've made a statement here that looks like it should be supported by evidence. On what sites have you seen this happen after a change from "vote up" to "more of this"?

Comment author: John_Maxwell_IV 18 April 2012 11:17:57PM *  0 points [-]

Edited wiki.

Comment author: orthonormal 18 April 2012 11:32:03PM *  14 points [-]

One relevant dynamic is the following: if an idea is considered "absurd" to the mainstream, there will be very few people who take the idea seriously yet disagree with it. Social pressure forces polarization: if you're going to disagree with it, you might as well agree with all your normal friends that the idea is kooky.

Thus it's especially hard to find good contrarians for a forum that takes several "absurd" positions.

Comment author: prase 18 April 2012 11:03:58PM 61 points [-]

I have significantly decreased my participation in LW discussions recently, partly for reasons unrelated to whatever is going on here, but I have a few issues with the present state of this site, and perhaps they are relevant:

  • LW seems to be slowly becoming self-obsessed. "How do we get better contrarians?" "What should be our debate policies?" "Should discussing politics be banned on LW?" "Is LW a phyg?" "Shouldn't LW become more of a phyg?" Damn. I am not interested in endless meta-debates about community building. Meta-debates could be fine, but only if they are rare - else I feel the site is losing its purpose. Object-level topics should form an overwhelming majority both in the main section and in the discussion.
  • Too narrow a set of topics. Somewhat ironically, the explicitly forbidden politics is debated quite frequently, but many potentially interesting areas of inquiry are left out completely. You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted. But there is only so much one can say about AI and ethics and Bayesian epistemology and self-improvement on a level accessible to a general internet audience. When I discovered Overcoming Bias (half of which later evolved into LW), it was overflowing with revolutionary and inspiring (from my point of view) ideas. Now I feel saturated, as the majority of new articles seem to be devoid of new insights (again from my point of view).

If you are afraid that LW could devolve into a dogmatic narrow community without enough contrarians to maintain a high level of epistemic hygiene, don't try to spawn new contrarians by methods of social engineering. Instead try to encourage debates on a diverse set of topics, mainly those which haven't been addressed by 246 LW articles already. If there is no consensus, people will disagree naturally.

Comment author: orthonormal 19 April 2012 04:00:32AM 7 points [-]

LW seems to be slowly becoming self-obsessed.

It waxes and wanes. Try looking at all articles labeled "meta"; there were 10(!) in April of 2009 that fit your description of meta-debates (arguing about the karma system, the proper use of the wiki, the first survey, and an Eliezer post about getting less meta).

Granted, that was near the beginning of Less Wrong... but then there was another burst with 5 such articles in April 2010 as well. (I don't know what it is about springtime...) Starting the Discussion area in September 2010 seems to have siphoned most of it off of Main; there have been 3-5 meta-ish posts per month since then (except for April 2011, in which there were 9... seriously, what the hell is going on here?)

Comment author: JenniferRM 19 April 2012 05:34:23AM 5 points [-]

Maybe April Fools day gets people's juices going?

Comment author: Wei_Dai 19 April 2012 08:55:46AM 14 points [-]

I'm not trying to spawn new contrarians for the sake of having more contrarians, nor do I want to encourage debate for the sake of having more disagreements. What I care about is (me personally, as well as this community as a whole) having correct beliefs on the topics that I think are most important, namely the core rationality and Singularity-related topics, and I think having more contrarians who disagree about these core topics would help with that. Your suggestion doesn't seem to help with my goals, or at least it's not obvious to me how it would.

(BTW, I note that you've personally made 2 meta/community posts out of 7, whereas I've only made about 3 out of 58 (plus or minus a few counting errors). So maybe you can give me a pass on this one? :)

Comment author: prase 19 April 2012 05:09:40PM *  7 points [-]

I note that you've personally made 2 meta/community posts out of 7, whereas I've only made about 3 out of 58

I plead guilty and promise to avoid making meta posts in the future. (Edit: I don't object specifically to your meta-posts but to the overall relative number of meta discussions lately.)

Nevertheless, I doubt calling for more contrarians is helpful with respect to your purposes. The question of how to increase the number of contrarians is naturally answered by proposals to create a more contrarian-friendly environment, which, if implemented, would attract a disproportionately high number of people who like to be contrarians, whatever the local orthodoxy is. My suggestion is, instead, to try to attract a more diverse set of people, even those who are not interested in topics you consider important. You would profit indirectly, since some of them would eventually get engaged in your favourite discussions and bring fresh ideas. Incidentally they will also somewhat lower the level of discourse, but I am afraid that is an inevitable side effect of any anti-cult policy.

Comment author: Viliam_Bur 19 April 2012 01:34:46PM 1 point [-]

Do you also think that having more contrarians who disagree that "2+2=4" would increase our likelihood of having correct beliefs? I mean, if they are wrong, we will see the weakness in their arguments and refuse to update, so there is no harm; but if they are right and we are wrong, it could be very helpful.

More generally, what is your algorithm for deciding for which values of X we need more contrarians who disagree with X?

Comment author: TimS 19 April 2012 02:13:40PM 5 points [-]

If people come to LessWrong thinking "2+2 != 4" or "computer manufacturing isn't science", is saying "You're stupid" really raising the sanity line in any way? In short, we should distinguish between punishing disagreement and punishing obstinate behavior/contrarianism.

Comment author: Eugine_Nier 20 April 2012 03:42:10AM *  4 points [-]

"computer manufacturing isn't science"

Well, computer manufacturing isn't science, it's engineering.

Comment author: thomblake 19 April 2012 12:42:00AM 7 points [-]

LW seems to be slowly becoming self-obsessed.

I don't see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it's what we're best at, and that's always been so.

You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted.

Whut?

Links or it didn't happen.

Comment author: JenniferRM 19 April 2012 05:26:58AM *  14 points [-]

LW seems to be slowly becoming self-obsessed.

I don't see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it's what we're best at, and that's always been so.

Yes, but the real question is why we love going meta. What is it about going meta that makes it worthwhile to us? Some have postulated that people here are actually addicted to going meta because it is easier to go meta than to actually do stuff, and yet despite the lack of real effort, you can tell yourself that going meta adds significant value because it helps change some insight or process once but seems to deliver recurring payoffs every time the insight or process is used again in the future...

...but I have a sneaking suspicion that this theory was just a pat answer that was offered as a status move, because going meta on going meta puts one in a position of objective examination of mere object level meta-ness. To understand something well helps one control the thing understood, and the understanding may have required power over the thing to learn the lessons in the first place. Clearly, therefore, going meta on a process would pattern match to being superior to the process or the people who perform it, which might push one's buttons if, for example, one were a narcissist.

I dare not speculate on the true meaning and function of going meta on going meta on going meta, but if I were forced to guess, I think it might have something to do with a sort of ironic humor over the appearance of mechanical repetitiveness as one iterates a generic "going meta" operation that some might naively have supposed to be the essence of human mental flexibility. Mental flexibility from a mechanical gimmick? Never!

Truly, we should all collectively pity the person who goes meta on going meta on going meta on going meta, because their ironically humorous detachment is such a shallow trick, and yet it is likely to leave them alienated from the world, and potentially bitter at its callous lack of self-aware appreciation for that person's jokes.

Comment author: Will_Newsome 19 April 2012 05:51:19AM *  5 points [-]

Related question: If the concept of meta is drawn from a distribution, or is an instance of a higher-level abstraction, what concept is best characterized by that distribution itself / that higher-level abstraction itself? If we seek whence cometh "seek whence", is the answer just "seek whence"? (Related: Schmidhuber's discussion about how Goedel machines collapse all the levels of meta-optimization into a single level. (Related: Eliezer's Loebian critique of Goedel machines.))

Comment author: JenniferRM 19 April 2012 05:44:17PM *  4 points [-]

I laughed this morning when I read this, and thought "Yay! Theism!" which sort of demands being shortened to yaytheism... which sounds so much like atheism that the handful of examples I could find mostly occur in the context of atheism.

It would be funny to use the word "yaytheism" for what could be tabooed as "anthropomorphizing meta-aware computational idealism", because it frequently seems that humor is associated with the relevant thoughts :-)

But going anthropomorphic seems to me like playing with fire. Specifically: I suspect it helps with some emotional reactions and pedagogical limitations, but it seems able to cause non-productive emotional reactions and tenacious confusions as a side effect. For example, I think most people are better off thinking about "natural selection" (mechanistic) over either "Azathoth, the blind idiot god" (anthropomorphic with negative valence) or "Gaia" (anthropomorphic with positive valence).

Edited To Add: You can loop this back to the question about contrarians, if you notice how much friction occurs around the tone of discussion of mind-shaped stuff. You need to talk about mind-shaped things when talking about cogsci/AI/singularity topics, but it's a "mindfield" of lurking faux pas and tribal triggers.

Comment author: Will_Newsome 23 April 2012 07:18:52AM 4 points [-]

The following was hastily written, apologies for errors.

But going anthropomorphic seems to me like playing with fire. Specifically: I suspect it helps with some emotional reactions and pedagogical limitations, but it seems able to cause non-productive emotional reactions and tenacious confusions as a side effect. For example, I think most people are better off thinking about "natural selection" (mechanistic) over either "Azathoth, the blind idiot god" (anthropomorphic with negative valence) or "Gaia" (anthropomorphic with positive valence).

(I would go further, and suggest not even thinking about "natural selection" in the abstract, but about specific ecological contingencies and selection pressures, and especially the sorts of "pattern attractors" from complex systems. If I think about "evolution" I get this idea of a mysterious propelling force, rather than of how the optimization pressure comes from the actual environment. Alternatively, Vassar has previously emphasized thinking of evolution as a mere statistical tendency, not an optimizer as such - or something like that.)

I think one thing to keep in mind is that there is a reverse case of the anthropomorphic error, which is the pantheistic/Gnostic error, and that Catholic theologians were often striving hard to carefully distinguish their conception of God from mystical or superstitious conceptions, or conceptions that assigned God no direct role in the physical universe. But yeah, at some point this emphasis seems to have hurt the Church, 'cuz I see a lot of atheists thinking that Christians think that God is basically Zeus, i.e. a sky father that is sometimes a slave to human passions, rather than a Being that takes game theoretic actions which are causally isomorphic to the outputs of certain emotions to the extent that those emotions were evolutionarily selected for (i.e. given to men by God) for rational game theoretic reasons. The Church traditionally was good at toeing this line and appealing to people of very different intelligences, having a more anthropomorphic God for the commoners and a more philosophical God for the monks and priests, but I guess somewhere along the way this balance was lost. I'm tempted to blame the Devil working on the side of the Reformation and the Enlightenment but I suppose realistically some blame must fall on the temporal Church.

Alternatively, maybe you do accept Neoplatonist or Catharian thinking where we have infinitely meta-aware computational agents as abstractions without any direct physical effect that isn't screened off by the Demiurge (or cosmological natural selection or what have you). In that case I tentatively disagree, but my thoughts aren't organized well enough for me to concisely explain why.

Comment author: orthonormal 19 April 2012 02:45:03AM 5 points [-]

Links or it didn't happen.

I thought of this Mitchell Porter post on MWI and this puzzle post by Thomas. As it happens, I downvoted both (though after a while, I dropped the downvote from the latter) and would defend those downvotes, but I can see how prase gets the impression that we only upvote articles on a narrow subset of topics.

Comment author: thomblake 19 April 2012 02:08:39PM 1 point [-]

Yeah, both of those are low-quality.

Comment author: prase 19 April 2012 05:51:16PM 3 points [-]

As for physics, I was thinking more about this whose negative karma I have already commented on. In the meantime I have forgotten that the post managed to return to zero afterwards.

"Low-quality" is too general a justification to recognise the detailed reasons of downvotes. Among the more concrete criticisms I recall many "this is off-topic, hence my voting down" reactions. My memories may be subject to bias, of course, and I don't want to spend time making a more reliable statistics. What I am feeling more certain about is, however, that there are many people who wish to keep all debates relevant to rationality, which effectively denotes an accidental set of topics, roughly {AI, charity donations, meta-ethics, evolution psychology, self-improvement, cognitive biases, Bayesian probability}. No doubt those topics are interesting, even for me. But not so much to keep me engaged after three (or how much exactly) years of LW's existence. And since I disagree with many standard LW memes, I suppose there may be other potential "contrarians" (perhaps more willing to voice their disagreements than I am) becoming slowly disinterested for reasons similar to mine.

Comment author: John_Maxwell_IV 18 April 2012 11:26:33PM *  5 points [-]

LW seems to be slowly becoming self-obsessed.

This is a good point. Maybe future meta-discussions could be on talk pages for wiki articles, about specific changes to those articles, especially the about page and the FAQ? These actually represent how LW culture is being codified for new users, but unfortunately none of the recent debates seem to have resulted in substantial modifications to them.

It's too bad that automatic wiki editing privileges don't come with a certain level of karma; would remove a trivial inconvenience and eliminate wiki spam.

Comment author: matt 25 April 2012 08:31:03PM 1 point [-]

It's too bad that automatic wiki editing privileges don't come with a certain level of karma

Hmmm... you know that wouldn't be too hard to arrange. Keeping the passwords in sync after a change to one account would be much more work, but might be ignorable.

Comment author: John_Maxwell_IV 28 April 2012 08:57:44PM *  1 point [-]

Ideally it seems like you would get your wiki authentication cookie automatically after logging into Less Wrong, so you could log in once and use both. I don't know if that changes things regarding passwords.

Comment author: John_Maxwell_IV 18 April 2012 11:20:51PM 4 points [-]

You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted.

Do you have examples of this sort of stuff so I can go vote it up?

Comment author: prase 19 April 2012 06:45:00PM 2 points [-]

For example, there are many posts tagged "physics", most of which hover around zero. A moderately interesting puzzle now stands at -7.

Comment author: DanielVarga 19 April 2012 12:08:46PM *  13 points [-]

Others already noted that we need contrary opinions more than contrarian people per se. Let me make another distinction. Is the goal a community with a diverse set of opinions, or more people who are vocal and articulate about some minority opinion? Maybe the latter goal is worth working on, but I suspect the former has already been reached. Let me go with myself as an example. I don't think anybody ever saw any of my comments as contrarian, and I am sure nobody associates my nick with contrarianism. The thing is: I would bet against Many Worlds. I am not a consequentialist. I am not really interested in cryonics. I think the flavor of decision theory practiced here is just cool math without foreseeable applications. I give very low probability to FOOM. I think FAI as a goal is unfeasible, for more than one reason.

I am not vocal at all about these positions, and you will very rarely see me engage in loud debates. But I state my position when I feel like it, and I was never punished for that. (I don't have any negatively voted comments out of a few hundred.) I think we would see a similar pattern when checking the positions of other individual "non-contrarian" commenters.

Comment author: byrnema 19 April 2012 11:03:50PM *  14 points [-]

Me too:

I would bet against Many Worlds. I am not a consequentialist. I am not really interested in cryonics. I think the flavor of decision theory practiced here is just cool math without foreseeable applications. I give very low probability to FOOM. I think FAI as a goal is unfeasible, for more than one reason.

I used to be very active on Less Wrong, posting one or two comments every day, and a large fraction of my comments (especially at first) expressed disagreement with the consensus. I very much enjoyed the training in arguing more effectively (I wanted to learn to be more comfortable with confrontation) and I even more enjoyed assimilating the new ideas and perspectives of Less Wrong that I came to agree with.

But after a long while (about two years), I got really, really bored. I visit from time to time just to confirm that, yes, indeed, there is nothing of interest for me here. Well, I'm sure that's no big deal: people have different interests and they are free to come and go.

This is the first post that has interested me in a while, because it gives me a reason to analyze why I find Less Wrong so boring. I would consider myself the type of "reasonable contrarian" the author of this post seems to be looking for -- I am motivated to argue if I disagree, and I have the correct attitude in that I'm quite willing to think counter-arguments through and change my position if I'm persuaded. If only, alas, I disagreed about anything.

On all the topics that I used to enjoy being contrary about, I've either been assimilated into Less Wrong (for example, I'm no longer a theist) or I have identified that either (a) the reason for the difference in opinion was a difference in values or (b) the argument in question had no immediate material meaning, and so arguing about either was completely pointless. My disinterest in cryonics is an example of (a), and belief or disbelief in many worlds is an example of (b).

I do wish Less Wrong was more interesting, because I used to enjoy spending time here. I realize this is a completely self-centered perspective, because presumably many do continue to find Less Wrong entertaining. But I want to learn things, and be challenged and stretched as much as possible, and now that I'm already an atheist that challenge isn't there. I'd like to understand how the "world works", and now that I've got materialism under my belt, what's next? I wish Less Wrong would try to tackle taboo topics like politics, because this is an area where I observe I'm completely clueless. On the other hand, I also understand that these questions are probably just too difficult to tackle, and such a conversation would have a large probability of being fruitless.

Still, I agree with prase, currently the top comment, that Less Wrong topics tend to be too narrow. My secondary criticism would be that for me (just my opinion) the posts are kind of bland. Maybe people are too reasonable (!?), but there doesn't seem to be anything to argue with.

Comment author: khafra 23 April 2012 12:43:59PM 5 points [-]

Over a year ago, Michael Vassar spoke about writing a rationalist's guide to politics. Seems like the sort of thing Steve Rayhawk would also be good at. Perhaps we could all get together and bribe somebody who could do it well to do it.

Comment author: Konkvistador 01 May 2012 07:38:24PM *  2 points [-]

Perhaps we could all get together and bribe somebody who could do it well to do it.

You have my sword.

Comment author: wedrifid 23 April 2012 02:07:15PM 2 points [-]

I used to be very active on Less Wrong, posting one or two comments every day

One or two comments every day is very active?

Oops.

Comment author: thomblake 19 April 2012 08:51:15PM 5 points [-]

You should make some discussion posts about your reasons for disagreeing with the perceived consensus on each of those issues. If they are articulate, specific, and use the techniques of epistemic rationality, they should be well-received. (If you have good reasons for disagreeing with the techniques of epistemic rationality themselves, then that's an even better post.)

Comment author: vi21maobk9vp 20 April 2012 06:05:03PM 1 point [-]

Having seen the replies to well-written comments expressing some opinions, I find it unlikely that I would get new information from replies to a discussion post.

And I may have some hard-to-share reasons and personal red flags, so I do not know whether I would do anyone any good.

So, why bother?

Maybe the original poster wouldn't agree with this approach, but his behaviour is consistent with it.

Comment author: vi21maobk9vp 19 April 2012 08:42:09PM 3 points [-]

A perfect example of the problem, I guess.

Many pro-LW-mainstream arguments are weak if you have significantly different priors. People with minority views quickly learn the difference in priors and learn to express their views less often and defend them less.

I also consider FOOM-as-described-on-LW quite improbable, and the writings of Eliezer on the topic simply raise a few red flags for me; I see that it is a popular position here, but most people don't find it worth the effort to fight the mainstream.

There are still many topics on LW where no relevant values or priors are part of the LW majority's collective identity, and I get some entertainment and information from reading these discussions and participating in them. There are also topics close to things that are accessible to science, with all its rigidity (but also stability) compared to Bayesian inference. These are very informative too.

Comment author: TheOtherDave 19 April 2012 12:09:14AM 10 points [-]

Perhaps we have this backwards?

If there is something intrinsically valuable about controversy (and I'm not really sure that there is, but I'm willing to accept the premise for the sake of discussion), and we're not getting the optimal level of controversy on the topics we normally discuss (again, not sure I agree, but stipulated), then perhaps what we should be doing is not looking for "more and better contrarians" who will disagree with us on the stuff we have consensus on, but rather starting to discuss more difficult topics where there is less consensus.

One problem is, of course, that some of us are already worried that LW is too weird-sounding and not sufficiently palatable to the mainstream, for example, and would probably be made uncomfortable if we explore more controversial stuff... it would feel too much like going to school in a clown suit. And moving from areas of strength to areas of weakness is always a little scary, and some of us will resist the transition simply for that reason. And many more.

Still, if you can make a case for the value of controversy, you might find enough of us convinced by that case to make that transition.

Comment author: [deleted] 22 April 2012 02:36:15PM *  4 points [-]

Controversial doesn't necessarily mean weird-sounding. For example, we could talk more about medicine, an area with a great deal of disagreement, without seeming like clown-suit wearing crazies. Mainstream topics should be more than enough to fill the controversy quota.

Comment author: TheOtherDave 22 April 2012 02:50:18PM 1 point [-]

(nods) Fair point.

Comment author: roystgnr 19 April 2012 01:57:31AM 11 points [-]

Here's a case for the value of controversy.

  • LessWrong orthodoxy includes a large number of propositions (over a hundred posts in just core sequences, at least one thesis per post)
  • The deductions that lead to each claim are largely independent (if post B was an obvious corollary of post A, it would have saved writer's and readers' time not to write it)
  • Reasoning is error-prone, especially when not formalized (this is a point made in the sequences; if it's wrong then q.e.d.)
  • Even if each deduction is overwhelmingly likely (let's say 99%) to be correct, it would still be likely (63% in this case; see the one-line check after this list) that at least one out of a hundred would be incorrect
  • Because these are deductive chains of reasoning (they're "the sequences", not just "the set"), one false deduction can invalidate any number of conclusions which follow from it. The Principle of Explosion has been defeating brilliant people for millennia.
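
(The 63% figure in the fourth point is just 1 - 0.99^100; a one-line check using the numbers stated in the list:)

    # Chance that at least one of 100 independent deductions,
    # each 99% likely to be correct, is wrong.
    p_correct, n = 0.99, 100
    print(1 - p_correct ** n)  # ~0.634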

In other words, even if you believe that each item of LessWrong consensus is almost certain to be correct, you should still be doubtful that every item of LessWrong consensus is likely to be correct. And if there are significant errors, then how else will they be found and publicized other than via a controversial discussion?

Comment author: TheOtherDave 19 April 2012 02:18:29PM 6 points [-]

I agree that there are errors in the "LW consensus."
I agree that a cost-effective mechanism for identifying those errors would be a valuable thing.

By your estimation, how many controversial discussions have occurred on LW in the last year?
How many of them have contributed to identifying any of those errors?

Comment author: David_Gerard 22 April 2012 02:04:03PM 2 points [-]

This wouldn't be an issue except that it's entirely unclear to me that LessWrong is making much in the way of progress of whatever sort. There are the meetup groups, which sometimes look good and sometimes sputter.

But perhaps I'm wrong and there's a list of things that are reasonable evidence of progress of whatever sort.

Comment author: Will_Newsome 19 April 2012 11:12:26AM 1 point [-]

See Wei Dai's comment here—he doesn't value controversy qua controversy.

Comment author: gRR 19 April 2012 12:26:12AM 9 points [-]

I would prefer an increase in 'question' (problem) posts, as opposed to 'statement' (solution) posts, contrarian or no.

Comment author: orthonormal 18 April 2012 11:51:49PM 19 points [-]

There's one tactic that's worked well to get LW posts on neglected topics: having a competition for the best post on a subject. A $100 prize resulted in some excellent posts on efficient charity, and the Quantified Health Prize (substantially more money) led to some good analyses of the data on dietary supplementation.

What about having a contest for the best contrarian post on topic X? Personally, I'd chip in a few bucks for a good contrarian post on intelligence explosion, the mathematical universe, the expected value of x-rationality, and other topics.

(I had this idea after reading this comment, and now that I think of it I'm reminded of ciphergoth's survey of anti-cryonics writing as well.)

Comment author: Onelier 19 April 2012 05:06:19AM 8 points [-]

Stream of consciousness. Judge me that ye may be judged. If you judge it by first-level Less Wrong standards, it should be downvoted (vague unjustified assertions, thoughtlessly rude), but maybe the information is useful. I look first for the heavily downvoted posts and enjoy the responses to them best.

I found the discussion on dietary supplementation interesting, in your link and elsewhere. As I recall, the tendency was for the responses (not entrants, but people's comments around town) to be both crazy and stupid (with many exceptions, e.g., Yvain, Xacharaiah). I recall another thread on the topic where the correct comment ("careful!") was downvoted and its obvious explanation ("evolution works!"), offered afterward, was upvoted. Since I detected no secondary reasons for this, it was interesting in implying Less Wrongians did not see the obvious. Low certainties attached, since I know I know nothing about this place. I'm deliberately being vague.

In general, Less Wrongians strike me as a group of people of impaired instrumental rationality who are working to overcome it. Give or take, most of you seem to be smarter than average but also less trustworthy, less able to exhibit strong commitments, etc. Probably this has been written somewhere hereabouts, but a lot of irrationalities are hard-to-overcome local optima; have you really gone far enough onto the other side? Incidentally, that could be a definition for x-rationality (if never actually done): actually epistemically rational enough that it's instrumentally useful. Probably a brutally hard threshold to achieve, and it seems untrue of here, as I believe I've seen threads comment.

I was curious about the background of the people offering lessons at the rationality bootcamp, and saw some blog entry by one of them against, oh, being conservative in outlook (re: risk aversion). It was incredibly stupid; I mean, almost exclusively circular reasoning. You obviously deviate from the norm in your risk aversion. You're not obviously more successful than the norm (or are you? perhaps I'm mistaken). Maybe it's just a tough row to hoe, but that's the real task.

Personal comment: I realize Dmytry has been criticized a bit elsewhere, and the voting trend doesn't support generalization to the community at large, but my conversation with him illustrates what I generally believe about this place. I knew more than he did. I said enough that he should have realized this. He didn't realize it and shoehorned his response into a boring framework. I had specific advice to give, which I didn't get to, and realized I was reluctant to give (most Less Wrong stuff seems weak to me).

A whole lot of Less Wrong seems to be going for less detail, less knowledge, more use of frameworks of universal applicability and little precision. The sequences seem similar to me: Boring where I can judge meaning, meaningless where I can't. And always too long. I've read about four paragraphs of them in total. The quality of conversation here is high for a blog, of course, but low for a good academic setting. Some of the mild sneering at academics around here sounds ridiculous (an AI researcher believes in God). AI's a weak field. All round, papers don't quite capture any field and are often way way behind what people roughly feel.

Real question: Do you want me here?

I like you guys. I agree with you philosophically. I have nothing much to offer unless I put some effort into it (e.g., actually read what people write, etc). No confusion: You should be downvoting posts like this in general. You might want to make an exception 'cause it's worth hearing a particular rambling mindset once. My effort is better spent elsewhere (I can't imagine you'd disagree). I can't see anything that can be offered to me. I feel like I was more rational at age 7 than you are now (I wrote a pro and con list for castrating myself for the longevity and potential continuity of personality gains; e.g., maintaining the me of 7). A million other things. I'm working on real problems in other areas now.

Comment author: Viliam_Bur 19 April 2012 09:41:03AM 8 points [-]

I like your style of writing. Though: too many ideas, difficult to rate and respond to.

Karma always has a random component. The karma of one comment is not significant; the karma of 10 comments shows a trend. I once received negative karma for a comment showing an obvious error in others' reasoning; but it only happened once in maybe a hundred comments, so I don't make a drama of it. But yeah, it might be painful if that happened to someone's first comment on LW.

Instrumental rationality is a known problem of intelligent people. My worst experience was Mensa: huge signalling, almost nothing ever done; and if something is done, it's almost always done by the same two or three people, who could just as well have done it on their own. Compared with that, people at LW are relatively high in instrumental rationality -- they have a working website, they write good articles, they do research, they organize meetups and seminars. But yes, we could do a lot better. Instead of going meta, people could focus and write about things they care about. Not doing this in a web discussion is probably a symptom of not doing it in real life.

Yes, being convinced of one's own rationality can lead to overconfidence. I don't know a cure. Perhaps repeated exposure to the disagreement of other rational people will eventually move one to update. Another reason for people to focus on what they are good at -- providing more evidence for their rationalist friends.

Re: last three paragraphs -- the choice to stay or leave is on you. Don't participate in the discussions you consider worthless, write something about the real things you work on. (And perhaps I should do the same.) But this is not a new idea -- we have regular threads "what are you working on" here.

Comment author: twolier 20 April 2012 04:46:33AM 2 points [-]

Same dude here, despite the name. Hypothetical: Should a prof at, say, Harvard working on the genetics of longevity post and spend time here?

Discussing his own work would identify him and probably not be very productive. Let's further say he's pre-tenure. Top places have a very different tenure success rate than even very good places, so it's an iffy point in his career.

Does Less Wrong have anything to offer him? And doesn't he serve Less Wrong best by staying away and working? (or even "playing" elsewhere)

My central criticism of this place may well be that some of you won't see that there really is no question about what the right answer is.

Incidentally, I perfectly agree with your comment, TimS, but the point is that I internalized those ideas independently of LessWrong. Viliam_Bur, you misunderstood my karma point. I was merely acknowledging that my comment's being upvoted and Dmytry's downvoted means I can't use it to indict the community at large (and instead was offering it as an illustration of my mindset). Luke: yup. But I did skim through the papers from the institute. Not very good. I suspect I can mostly infer the sequences from very basic background knowledge in game theory, philosophy, physics, neuroscience, psych, etc., and from reading current comment threads. I don't see anything too fancy implied by the secondary sources (I enjoy reading the back-and-forth more).

Uh, what else. I enjoy HPMOR. What I like about it, however, is bad about me: Basically what Robin feared in his comment on OvercomingBias. I should (and will) go. It goes without saying that you wish me well. I just felt like saying hello because I like you. And if you can make it so I can talk to you profitably, I'd like that. Not your fault and I'm sorry to have said it, but I thought you should know.

Comment author: orthonormal 20 April 2012 06:30:49PM 1 point [-]

You should reply to different commenters individually, since then it will send them each notifications that you're replying. Few readers check all branches of the thread that they replied to.

Comment author: Viliam_Bur 20 April 2012 06:43:21AM *  1 point [-]

Hypothetical: Should a prof at, say, Harvard working on the genetics of longevity post and spend time here? [...] Does Less Wrong have anything to offer him?

He could discuss the less critical parts of his work. If there is a meetup near his home, he could go there and try to find someone to cooperate with. Or if he is an expert at genetics but less expert at math, he could ask someone to help him with statistics.

Also, he could just spend his free time here, if he prefers the company of rational people and has trouble finding it outside of his work.

And doesn't he serve Less Wrong best by staying away and working?

That question is relevant for all of us, experts or not. Even for me there are many things I should be doing rather than procrastinating on LW. However, I know myself -- I spend a lot of time online, so given that, at least I can choose a site that gives me intelligent discussions.

If you spend your time better, keep doing what works for you. Maybe visiting LW once a month and reading the articles in the "Main" part would be a reasonable compromise, if you want to participate. (I don't know if there is an RSS feed for "Main".)

Comment author: asr 20 April 2012 07:00:28AM *  3 points [-]

He could discuss the less critical parts of his work. If there is a meetup near his home, he could go there and try to find someone to cooperate with. Or if he is an expert at genetics but less expert at math, he could ask someone to help him with statistics.

Suppose you were a professional researcher looking for statistical help. Would you (A) go to a LessWrong meetup, (B) give a talk at the Statistics department of your hypothetical university, or (C) ask your colleagues which statisticians or statistically-literate graduate students they have collaborated with recently?

I'm sure the LessWrong community believes in statistics, which is good. But I don't believe the average member of this crowd is any better at the humdrum practicalities of statistical hypothesis testing than your average working scientist. I would guess LessWrong skews younger and less expert.

Also, he could just spend here his free time, if he prefers company of rational people and has problem finding it outside of his work.

You will not have a hard time finding smart rational people on the Harvard campus! Or, for that matter, near any major university.

I'm with twolier -- LessWrong is fun, but I don't see it being all that professionally valuable for people in most technical fields.

Comment author: TimS 19 April 2012 02:20:48PM *  8 points [-]

A whole lot of Less Wrong seems to be going for less detail, less knowledge, more use of frameworks of universal applicability and little precision. The sequences seem similar to me: Boring where I can judge meaning, meaningless where I can't. And always too long. I've read about four paragraphs of them in total. The quality of conversation here is high for a blog, of course, but low for a good academic setting. Some of the mild sneering at academics around here sounds ridiculous (an AI researcher believes in God). AI's a weak field. All round, papers don't quite capture any field and are often way way behind what people roughly feel.

This. A thousand times this. As a lawyer, LessWrong pattern-matches to people outside a complicated field who are convinced that those inside it are idiots, because from the outside "the field is not that complicated."

That said, "Boring where I can judge meaning, meaningless where I can't." is an unfair criticism. Lots of really excellent ideas seem boring if you had already internalized the core ideas.

Comment author: Will_Newsome 20 April 2012 02:46:42AM *  8 points [-]

Reminds me of part of a comment on Moldbug's blog, by Nick Szabo:

[legal reasoning]

It's a disciplined and competitive (dialectic, in the true original sense of that term) use of analogies, precedents, and emergent rules, far more sophisticated than normal use of analogy and metaphor. I learned it my first year of law school and it's a radically different kind of thinking I had never encountered before in school. The Bayesian bloggers seem to be completely oblivious to it, and to the tremendous value of tradition generally. That makes them, from my POV, culturally illiterate and incompetent to opine on law or politics. Yes, legal training also made me stuck up. :-)

If you can't afford law school, you can learn most of what you need to know from Legal Method and Writing by Charles R. Calleros and a first year law school common law casebook (Torts, Property, or Contracts).

The extremely short description of legal or scholastic reasoning is to think of a proposition or dispute as Schrodinger's Cat, both true and false at the same time, or each party at fault or not at the same time, or the appropriate dichotomy. Then gather all the moral or legal disputes that are similar to this one. Argue by analogy for each side both from the facts of those prior disputes and from the informal rules ("holdings") implied by the decisions resolving those disputes. This kind of reasoning allows a lawyer to anticipate an opponent's as well as their own argument in a case, and allows a judge to appreciate both sides of an argument, the latter also crucial, but often absent, in reasoning about politics, morals, and the more complex areas of science, which in absence of this kind of discipline is dominated by confirmation bias and lack of understanding of other points of view.

Law also has a sophisticated set of qualitative probabilities I've blogged on, which imply not just degrees of truth but various aspects of gathering evidence, burdens of proof, and so on. The scientific method derived in large part from the Continental law of evidence, with which Galileo, Leibniz, etc. were intimately familiar having studied law. But legal reasoning, or scholastic reasoning as it used to be known, is still capable of covering a far wider swath of the human experience than scientific reasoning which is really just a special case and applies well only to hard evidence or the hard sciences.

I've been studying the history of common law lately due to Nick's influence, after which I'm gonna read the book he recommended. I notice that his description of legal reasoning is very similar to how I use my chess subskills for rationality.

Comment author: TimS 20 April 2012 07:51:59PM 2 points [-]

The extremely short description of legal or scholastic reasoning is to think of a proposition or dispute as Schrodinger's Cat, both true and false at the same time, or each party at fault or not at the same time, or the appropriate dichotomy. Then gather all the moral or legal disputes that are similar to this one. Argue by analogy for each side both from the facts of those prior disputes and from the informal rules ("holdings") implied by the decisions resolving those disputes. This kind of reasoning allows a lawyer to anticipate an opponent's as well as their own argument in a case, and allows a judge to appreciate both sides of an argument, the latter also crucial, but often absent, in reasoning about politics, morals, and the more complex areas of science, which in absence of this kind of discipline is dominated by confirmation bias and lack of understanding of other points of view.

This is a moderately reasonable model of litigation, but it isn't complete. For example, Thurgood Marshall litigated separate-but-equal in the law school context specifically because every judge has a gut feeling for how to compare law schools, which just isn't true of other educational institutions. In law school, I heard the apocryphal story that the lawyer for the State of Texas argued that the new segregated law school was just as good as UT Law School, and Justice Clark - a graduate of UT - passed a note to a colleague that read "Bullshit." That's clever lawyering and has nothing to do with arguing from precedent.

Further, not all law is litigation. The legislature is empowered to make new laws that have no relationship to old laws. In short, there's a fair amount more to the practice of law than reasoning by analogy, even if reasoning by analogy is an important skill for a lawyer.

Comment author: Luke_A_Somers 19 April 2012 04:58:25PM 4 points [-]

I've read about four paragraphs of them in total.

??? Seriously?

Comment author: jsalvatier 20 April 2012 11:48:04PM 1 point [-]

I like this idea and am even willing to put money towards it, but some other similar experiments (of mine; maybe others would be better at this) didn't turn out so well (this one got no entries; spaced repetition turned out okay, but it only got one good submission). Let me know if you're interested in putting effort into this (it wouldn't be hard to convince me to also do so, but I probably need someone else to help).

Comment author: buybuydandavis 19 April 2012 09:15:04AM *  6 points [-]

Any extreme minority position would take a long time to win converts. People are generally wrong because they have bad concepts, not because they have clear concepts but mistakenly think 2+2=5.

It takes a while to penetrate poor concepts, and the people with poor concepts have to be willing to put in the effort to justify their argument, and not just take it as a given that it is up to someone else to refute their nonsense, because you can't refute gibberish. Most people here are intellectually confident. Add to that the consensus of the group, and who is going to expend the effort to honestly defend and justify the consensus?

On the contrarian side, the contrarian is also probably intellectually confident. Unless he finds a productive engagement, he'll eventually just shrug and move on. I've done as much. On one thread, I found the views about clinical trial data thoroughly wrongheaded. I was downvoted a lot, but persisted, being the ornery coot that I am. But eventually I moved on, because I have a day job, and other things to do.

And there's something about the "comments after blog post" format that isn't conducive to sustained debate for me. Maybe because it's one long page, it feels inappropriate to have 20 back-and-forths, while a serious discussion would probably require that.

Comment author: billswift 19 April 2012 02:16:34PM 2 points [-]

I think this is the best comment, at least the one that best captures my own views, on this thread.

Another way of looking at the problem expressed in buybuydandavis's first two paragraphs is that most people are so busy signalling, rather than thinking, that their concepts are usually "not even wrong".

Comment author: daenerys 18 April 2012 10:58:19PM *  15 points [-]

Upvote if you generally no longer post or discuss opinions that disagree with LW consensus.

Feel free to leave a comment on your experiences and reasons for this.

(If you would like to downvote this poll, please downvote the karma balance below instead, so that we can still get an accurate idea of the number of people who have this reaction.)

Comment author: pedanterrific 19 April 2012 05:01:19AM 3 points [-]

with LW census

(consensus)

And what do you mean "no longer"? Is the idea "upvote if your contrarianism has been downvoted out of you", or what?

Comment author: Multiheaded 19 April 2012 04:38:27AM 3 points [-]

I'm curious, do you? If you do, why?

Comment author: Larks 19 April 2012 04:58:50AM 1 point [-]

This poll is poorly designed; karma balances often get downvoted less than the vote options get upvoted, so this will tend to over-estimate how many people no longer dissent.

For example, when I loaded this page, this comment was at 5 and the karma balance was at -3

Comment author: daenerys 19 April 2012 02:37:36PM 4 points [-]

karma balances often get downvoted less than the vote options get upvoted, so this will tend to over-estimate how many people no longer dissent.

To me, when a karma balance is downvoted less than poll options are upvoted, it means that people think running the poll deserves some karma. This does not overestimate the people who have reacted to voting patterns, since that number does not come from the karma balance. If someone (who has NOT reacted to voting patterns) wants to give karma for running the poll, they would upvote the karma balance, not the voting comment.

Also, the purpose of the poll is to see whether a relatively high or relatively low amount of people have reacted to the voting patterns this way. Exact numbers are not needed.

Comment author: Random832 19 April 2012 01:00:35PM 2 points [-]

I have a proposal for a new structure for poll options:

The top-level post is just a statement of the idea, and voting has nothing to do with the poll. This can be omitted if the poll is an article.

A reply to this post is a "positive karma balance" - it should get no downvotes, and its score should be equal to the number of participants in the poll.

There are two replies to the "positive karma balance" post; you downvote one to select that option in the poll.

This way voting either way in the poll has the same cost (one downvote), the enclosing post will have a high score (keeping it from being lost), and the only way to "corrupt" the poll results without leaving a trace [downvote the count post and upvote one of the option posts] simply cancels someone's vote without allowing you to make your own.
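To make the accounting concrete, here is a minimal sketch of how the tallying and the consistency check might work. (The code is illustrative only; the function name and the idea of scripting it are mine, not part of Random832's proposal.)

    def tally_poll(balance_score, option_scores):
        # balance_score: score of the "positive karma balance" post,
        # one upvote per participant.
        # option_scores: score of each option post; every vote is a
        # downvote, so these should be zero or negative.
        votes = {name: -score for name, score in option_scores.items()}
        # If the balance doesn't match the vote total, someone has
        # skipped the balance upvote or tampered with an option post.
        consistent = (balance_score == sum(votes.values()))
        return votes, consistent

    # Example: 10 participants, 7 picked A and 3 picked B.
    print(tally_poll(10, {"A": -7, "B": -3}))  # ({'A': 7, 'B': 3}, True)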

Comment author: cousin_it 18 April 2012 11:16:35PM *  21 points [-]

Having more contrarians would be bad for the signal to noise ratio on LW, which is already not as high as I'd like it to be. Can we obtain contrarian ideas more cheaply? For example, one could ask Carl Shulman for a list of promising counterarguments to X, rated by strength, and start digging from there. I'd be pretty interested to hear his responses for X=utilitarianism, the Singularity, FAI, or UDT.

Comment author: CarlShulman 08 May 2012 09:35:04AM *  12 points [-]

I made a post on a personal blog on one of the more significant points against utilitarianism in my view. It's very rough, but I could cross-post it to Discussion if people wanted.

Comment author: cousin_it 08 May 2012 10:40:46AM 5 points [-]

I really like how you frame the choice between altruism and selfishness as a range of different "original positions" an agent may assume. Thanks a lot, and please do more of this kind of work!

Comment author: Vladimir_Nesov 19 April 2012 06:20:22AM 4 points [-]

To generalize, this suggests re-purposing existing LWers to the role of contrarians, rather than looking for new people.

Comment author: Will_Newsome 19 April 2012 06:53:54AM *  12 points [-]

Or designing a mechanism or environment that makes it easier for existent LW contrarians to express their ideas.

(My personal experience is that trying to defend a contrarian position on LW results in a lot of personal cheap shots, unnecessarily-aggressively-phrased counter-affirmations, or needless re-affirmations of the LW consensus. (E.g., I remember one LWer said he was trying to "tar and feather [me] with low-status associations". He was probably exaggerating, but still.) This stresses me out a lot and causes me to make errors in presentation and communication, and needlessly causes me to become adversarial. Now when discussing contrarian topics I start out adversarial in anticipation of personal cheap shots et cetera. Most of the onus is on me, but still, I think higher general standards or some sideways change in the epistemic environment could make constructive contrarianism a less stressful role for LWers to take up.)

Comment author: lukeprog 18 April 2012 11:27:59PM 4 points [-]

Yes, a list of Carl's best arguments against standard positions is going to be of vastly higher quality than anything we would be likely to get from the best contrarians we can find.

Comment author: Will_Newsome 19 April 2012 04:22:30AM 8 points [-]

(FWIW Vassar, Carl, and Rayhawk (in ascending order of apparent neuroticism) are traditionally most associated with constructing steel men. (Or as I think Vassar put it, "steel men, adamantium men, magnetic monopolium men", respectively.))

Comment author: paper-machine 18 April 2012 10:36:15PM *  5 points [-]

I would love to be better at contrarianism, but I don't know where to begin.

I got where I am today mostly through trial and error.

Comment author: Will_Newsome 19 April 2012 02:12:26AM *  16 points [-]

The General Contrarian Heuristic:

  • Assume such-and-such people who claim to be right actually are at-least-somewhat-straightforwardly right, and they have good evidence or arguments that you're just not aware of. (There are many plausible reasons for your ignorance; e.g. for the longest time I thought Christianity and ufology were just obviously stupid, because I'd only read atheist/skeptic/scientismist diatribes. What evidence filtered evidence?) What is the most plausible evidence or argument that can be found while searching in good faith? This often splits in two directions:

    • The Vassarian steel method: E.g., you hear lots of stuff about fairies, so you go digging around and find Charles Bonnet syndrome. This might be akin to constructing steel men, but beware!, for it is often a path to sophistry & syncretism. You know how in Dan Brown novels he keeps constructing these shallow connections between spirituality and science in order to show that they're not actually at odds? Don't be Dan Brown.
    • The Newsomelike schizophrenic method: You find Charles Bonnet syndrome but decide that even that isn't enough—you postulate that daimons are taking advantage of any plausible excuse (e.g. stroke, optical damage, sleep paralysis) to manipulate people into delusion. (You then independently re-derive justifications for burning witches or whatever, 'cuz why not?) This might be akin to paranoid schizophrenia, but beware!, for it is often the path to, um, paranoid schizophrenia.

Some contrarian topics I've had fun exploring:

  • Assume UFO phenomena and Marian apparitions are legit, i.e. caused by some transhumanly powerful process. E.g., the Miracle at Fatima. What would be the mechanism? More pertinently, what would be the motivations?

  • Assume legit retrocausal psi effects in parapsychology: What would be the mechanism?

  • Assuming psi is legit, i.e. the retrocausal results are real, why is psi capricious?

  • Assume intelligent life isn't fantastically unlikely. Why no signs of intelligent life? (Related to "why is psi capricious" question.)

Remember, skepticism is easy, it's the default position: if the phenomenon you're modeling is actually complex, your explanation will have to be subtle. It's always too easy to shout "confirmation bias", "mass hallucination", "memetic selection pressures", and what have you. Don't fall for that trap; it's just as much of an error as the Dan Brown trap—maybe more so, because at least the Dan Brown trap doesn't tell you to ignore important evidence.

If you make an argument along the lines of "the prior probability of that hypothesis is low", deduct 10 of your contrarian points. If you make a reference to the universal prior, deduct 20 points and feel guilty for the next few weeks.

Note that I think I'm a decent contrarian but I'm bad at communicating contrarian ideas; I'm not sure to what extent this is a personal quirk or a general problem when talking to people who start out assuming that you're crazy/deluded/trolling/whatever. If there is a General Contrarian Heuristic that's more amenable to communicating resultant insights then maybe that heuristic is better.

"May we not forget interpretations consistent with the evidence, even at the cost of overweighting them."

Comment author: komponisto 19 April 2012 02:35:45AM 10 points [-]

May we not forget interpretations consistent with the evidence, even at the cost of overweighting them.

Upvoted. The easiest way to get the wrong answer is to never have considered the right answer.

I've always thought that imagination belonged on the list of rationalist virtues.

Comment author: NancyLebovitz 19 April 2012 06:18:33AM 6 points [-]

The easiest way to get the wrong answer is to never have considered the right answer.

I like that a lot.

Comment author: Will_Newsome 19 April 2012 03:44:08AM 5 points [-]

I've always thought that imagination belonged on the list of rationalist virtues.

"What do you think are the rationalist virtues?" might be an interesting discussion post.

Comment author: Will_Newsome 19 April 2012 02:55:01AM *  10 points [-]

For comparison, the General Chess Heuristic: Think about a move you could make, think about the moves your opponent could make in reply, think about what moves you could make if they replied with any of those candidate moves, &c.; evaluate all possible resultant positions, subject to search heuristics and time constraints.
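For concreteness, this heuristic is roughly what a depth-limited negamax search does. The sketch below is mine, not part of the comment, and assumes a generic game interface: moves, play, and evaluate are placeholders for game-specific code.

    def negamax(state, depth, moves, play, evaluate):
        # Score `state` for the side to move, searching `depth` plies ahead.
        # moves(state) lists legal moves, play(state, m) returns the
        # resulting position, and evaluate(state) scores a position from
        # the side-to-move's point of view.
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)
        # Each candidate move is scored by assuming the opponent then
        # makes *their* best reply -- the step novices tend to skip.
        return max(-negamax(play(state, m), depth - 1, moves, play, evaluate)
                   for m in legal)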

What's interesting is that novice chess players reliably forget to even consider what moves their opponent could make; their thought process barely includes the opponent's possible thought process as a fundamental subroutine. I think novice rationalists make the same error (where "opponent" is "person or group of people who disagree with me"), and unfortunately, unlike in chess, they don't often get any feedback alerting them to their mistake.

(Interestingly, Roko once almost defeated me in chess despite having significantly less experience than me, because he just thought really hard and reliably calculated a ton of lines. I'd never seen anyone do that successfully, and was very impressed. I would've lost except he made a silly blunder in the endgame. He who has ears to hear, let him hear.)

Comment author: Larks 19 April 2012 07:16:08AM 4 points [-]

We need a handy way of saying "Yes I understand the standard arguments for P but I still think it's worth your while considering this argument for ¬P rather than just telling me the standard arguments for P."

Unfortunately it may be that the only credible signal of this is to first outline the standard arguments for P.

Comment author: Will_Newsome 19 April 2012 09:01:37AM 7 points [-]

We need a handy way of saying "Yes I understand the standard arguments for P but I still think it's worth your while considering this argument for ¬P rather than just telling me the standard arguments for P."

Agreed. In my experience this problem of standard-argument-affirming shows up a lot during debates about uFAI risks. If I try to suggest some non-obvious argument against the Eliezerian position then I tend to mostly get re-assertions or re-phrasings of the standard Eliezerian arguments, which is distracting and a tad insulting. It seems some people identify me as a mainstream-view-loving enemy who is trying to unfairly marginalize the Eliezerian position, and thus don't bother to carefully check if my argument might be reasonable on its own terms.

In the last few months I've been averaging like 5 to 10 karma on my anti-Eliezerian AI risk arguments, and I think that's because I've expressed them more clearly and redundantly. But they're the same arguments that were getting downvoted to -5 or so back a year or two ago when I wasn't taking special care not to trigger local immune responses. (Weirdly, even saying that I'd spent a year or so with the Visiting Fellows talking to a lot of SingInst people who didn't think I was clearly stupid or insane didn't dissuade people from thinking I was clearly mistaken about basic SingInst arguments. I still don't really understand that... maybe I was interpreted as making an unjustified claim to authority that shouldn't be taken as evidence, or something.)

Comment author: Rain 20 April 2012 02:17:05PM 2 points [-]

The majority of your comments which I've downvoted have been for improper vocabulary. That is, you repurpose words in unconventional ways, which makes translating them into something I can understand extremely difficult, if not impossible.

Lately, you seem to have been taking more care to use words with their dictionary definitions.

Comment author: daenerys 19 April 2012 01:58:43PM *  9 points [-]

Idea- Using Contrary Opinions as a Group Rationality Exercise

Sometimes when I'm discussing issues one-on-one with someone of a different opinion, I will find myself treating arguments as soldiers (I am getting better at catching myself in this, I think). I can also have difficulty verbalizing what is wrong with an argument when put on the spot.

Maybe we can use "Devil's Advocating" posts as a group exercise in rationality. Someone can read or summarize a specific opposing viewpoint that they do not necessarily agree with (maybe subjectivism, or Kuhn's scientific revolutions). They could hopefully even get completely new material, in order to provide practice in a field we haven't discussed.

They will present the strongest summary they can in a post, writing as if they fully supported the idea. The tag [Devil's Advocating] can be used to show that this is what they are doing.

One comment thread can be devoted to finding questions that the viewpoint handles strongly (i.e. maybe subjectivism handles a specific question a little better than most other philosophies, or maybe Kuhn's revolutions provide a better explanation of the different types of science that scientists engage in than other philosophies of science). This can help us fight our "arguments as soldiers" inclinations.

Another comment thread can be devoted to finding specific fallacies in the argument. NOT just "This is silly, <Idea X> is better", but actual "This doesn't work because of <Reason Y>".

Of course, for this to be interesting, it has to be an opposing idea that hasn't been discussed to death. For example, I know in history there are all sorts of competing theories, some of which work better than others. I bet other fields are the same.

Comment author: thescoundrel 19 April 2012 03:07:25PM 5 points [-]

This reminds me of days in cross-x debate, where the topic was set in advance and you were assigned to oppose or affirm each round. Learning to find persuasive arguments for ideas you don't actually support is not an intuitive skill, but certainly one that can be learned with practice. I, for one, would greatly enjoy cross-x debate over issues in the Less Wrong community.

Comment author: timtyler 19 April 2012 12:41:40AM *  8 points [-]

Most of the machine intelligence folk don't seem to be on "your" side. I think they see you as potential competitors who don't share their values.

I tend to be more sympathetic to their position than yours. In particular I don't seem to share your values, and don't much like your PR - or your "end of the world" propaganda. I think that developing in secret is a pretty dubious plan - and that the precautionary principle sucks.

Probably the best thing about you is that you have Eliezer on your side - and he's a smart cookie. However, that aspect also appears to have its downsides.

Comment author: orthonormal 19 April 2012 03:03:06AM 13 points [-]

It took me much longer than it should have to mentally move you from the "troll" category to the "contrarian" one. That's my fault, but it makes for an interesting case study:

I quickly got irritated that you made the same criticisms again and again, without acknowledging the points people had argued against you each time. To a reader who disagrees with you, that style looks like the work of a troll or crank; to a reader who agrees with you, it's the best that you can do when arguing against someone more eloquent, with a bigger platform, who's gone wrong at some key step.

It should be noted that I don't instinctively think any more highly of contrarians who constantly change their line of attack; it seems to be a "damned if you do, damned if you don't" tribal response.

The way I changed my mind was that you made an incisive comment about something that wasn't part of your big disagreement with the Less Wrong community, and I was forced to update. For any would-be respected contrarians out there, this might be a good tactic to circumvent our natural impulse towards closing ranks.

Comment author: Will_Newsome 19 April 2012 06:08:59AM *  5 points [-]

It took me much longer than it should have to mentally move you from the "troll" category to the "contrarian" one.

I still find it tricky to tell whether timtyler realizes what he's saying is going to be misinterpreted but just doesn't care (e.g. doesn't want to cave in to the general resource-intensive norm of rephrasing things so as not to set off politics detectors), or whether he doesn't realize it. E.g. he makes a lot of descriptive claims that look suspiciously like political claims and thus gets downvoted, even when upon being queried he says they were intended purely as descriptive claims. I've started to think he generally just doesn't notice when he's making claims that could easily be interpreted as unnecessarily political.

Comment author: timtyler 19 April 2012 11:14:56AM 4 points [-]

Politics? This might, perhaps, be to do with the whole plan of unilaterally taking over the world? If so, that is a plan with a few political implications, and maybe it's hard to discuss it while avoiding seeming political.

Comment author: Will_Newsome 19 April 2012 11:30:03AM *  6 points [-]

Yes, and because the Eliezerian doom/world-takeover position is somewhat marginalized by the mainstream, people around here are quick to assume that stating simple facts or predictions about it, unless the facts are implicitly in favor of the marginalized position, is instead implicitly a vote in favor of further marginalization, and thus readers react politically even to simple observations or predictions. E.g., your anti-doom predictions are taken as a political move with the intent of further marginalizing the fund-us-to-help-fight-doom political position, even in the absence of explicit evidence that that's your intent, and so people downvote you. That's my model anyway.

Comment author: timtyler 19 April 2012 12:12:25PM *  5 points [-]

E.g., your anti-doom predictions are taken as a political move with the intent of further marginalizing the fund-us-to-help-fight-doom political position, even in the absence of explicit evidence that that's your intent

Of course, from my point of view, the "doom exaggeration" looks like a crude funding move based on exploiting people using superstimuli - or, at best, a source of low-relevance noise from a bunch of self-selected doom enthusiasts who have clubbed together.

You do have a valid point about my intentions. I derive some value from the existence of the SI, but the overall effect seems to be negative. I'm not on "your side". I think "your side" currently sucks - and I don't see much sign of reform. I plan to join another group.

Comment author: Will_Newsome 19 April 2012 12:20:35PM 1 point [-]

I plan to join another group.

Me too. Probably the Catholics.

Comment author: khafra 19 April 2012 01:41:05PM 5 points [-]

Is there a Dominican community blog I should watch? Also, would you surreptitiously palm some small dry ice granules right before you dip your fingers in the water during confirmation? I've always wanted to see that.

Comment author: Will_Newsome 19 April 2012 01:58:55PM *  2 points [-]

I know basically nothing about modern Catholics, actually, which is a big reason why I haven't yet converted. E.g. I have serious doubts about the goodness of the Second Vatican Council. If the Devil has seriously tainted the temporal Church then I want no part in it.

Also, would you surreptitiously palm some small dry ice granules right before you dip your fingers in the water during confirmation? I've always wanted to see that.

That would be really cool. But I think God would be displeased. ...I'm not sure about that, I'll ask Him. (FWIW I rather doubt He'll give an unambiguous answer.)

Comment author: drethelin 19 April 2012 05:14:37PM 3 points [-]

If you had to specify a historical year in which Catholicism seems most correct to you, which would it be?

Comment author: RichardKennaway 20 April 2012 07:00:01AM 2 points [-]

But I think God would be displeased. ...I'm not sure about that, I'll ask Him. (FWIW I rather doubt He'll give an unambiguous answer.)

How do you go about asking God, and how do you experience His answers?

Comment author: NancyLebovitz 20 April 2012 06:09:09AM 0 points [-]

Why do you think the Devil might have tainted the temporal Church through the Second Vatican Council?

Comment author: NancyLebovitz 19 April 2012 10:20:46PM 0 points [-]

There is no such thing as "modern Catholics". There are a number of subgroups, but I don't know enough to be usefully more specific.

Comment author: timtyler 19 April 2012 10:49:36AM *  4 points [-]

I quickly got irritated that you made the same criticisms again and again, without acknowledging the points people had argued against you each time.

That doesn't sound great! Was I right? If you think there's a case where I should have updated - but didn't - perhaps it can be revisited? Of course, I don't mean to put pressure on you to trawl through my comments - but it would be nice for me to know if you have any specific cases in mind.

Comment author: orthonormal 19 April 2012 11:06:12PM 4 points [-]

I couldn't find them in a quick search, but the arguments that got me frustrated were a cluster that you've stated a lot but never written up at length. Let me summarize roughly:

All new technological developments are just continuations of evolution; there are no relevant differences between evolution of genes, memes, corporations, etc; and therefore the Singularity couldn't be an existential crisis, just a faster continuation of evolution.

(Apologies if I've mangled it.) It seemed to me that every time a relevant topic was mentioned, back in the days of the Sequences, you merely stated one of these opinions rather than argued for it. But again, it's difficult for me to recognize good arguments when I disagree with their conclusions.

Comment author: timtyler 20 April 2012 01:44:31AM *  1 point [-]

I couldn't find them in a quick search, but the arguments that got me frustrated were a cluster that you've stated a lot but never written up at length.

Hmm. Thanks. I did write a whole book about that one - I think.

Your objection also makes me think of this material:

Even with regular evolution there can still be existence "failures" - for particular species.

Also, I do think one of these is coming: http://alife.co.uk/essays/memetic_takeover/

...leading to this: http://alife.co.uk/essays/engineered_future/ - apparently a future where humans as we know them play a pretty insignificant role.

I do think that the trend towards increased destructive power needs to be considered in the light of the simultaneous trend towards greater levels of cooperation, moral behaviour, and peacefulness.

Comment author: orthonormal 20 April 2012 02:11:40AM 3 points [-]

Ah— you have written it up at great length, just not in Less Wrong posts.

I think you claim too strong a predictive power for the patterns you see, but that's a discussion for a different thread. (One particular objection: the fact that evolution has gotten us here contains a fair bit of anthropic bias. We don't know exactly how narrow the bottlenecks we've already survived were.)

Comment author: timtyler 20 April 2012 11:21:05AM *  1 point [-]

One particular objection: the fact that evolution has gotten us here contains a fair bit of anthropic bias. We don't know exactly how narrow the bottlenecks we've already survived were.

Well, I don't want to appear to endorse the thesis that you associated me with - but it appears that while we don't know much about the past exactly, we do have some idea about past risks to our own existence. We can look at the distribution of smaller risks among our ancestors, and gather data from a range of other species. What Joshua Zelinsky said about genetic data is also a guide to recent bottleneck narrowness.

Occam's razor also weighs against some anthropic scenarios that imply a high risk to our existence. The idea that we have luckily escaped 1000 asteroid strikes by chance has to compete with the explanation that these asteroids were never out there in the first place. The higher the supposed risk, the bigger the number of "lucky misses" that are needed - and the lower the chances are of that being the correct explanation.
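A toy version of that comparison, with invented numbers purely for illustration (and setting aside anthropic selection, which is the complication under discussion):

    # H1: 1000 dangerous asteroids existed and we survived each one by luck.
    # H2: the asteroids were never out there. Our survival is the evidence.
    p_survive_one = 0.99       # assumed survival chance per asteroid under H1
    prior_odds_h1 = 100.0      # suppose H1 starts out favoured 100:1

    likelihood_ratio = p_survive_one ** 1000   # ~4.3e-5 (H2's likelihood is 1)
    posterior_odds_h1 = prior_odds_h1 * likelihood_ratio
    print(posterior_odds_h1)   # ~0.0043 -- "never out there" now dominates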

Not that the past is necessarily a good guide - but rather we can account for anthropic effects quite well.

Comment author: JoshuaZ 20 April 2012 02:31:32AM 1 point [-]

We don't know exactly how narrow the bottlenecks we've already survived were.

We can estimate this for a lot of the major bottlenecks. For example, we can look at how likely other intelligent species are to survive and in what contexts. We have a fair bit of data for that. We also now have detailed genetic data so we can look at historical genetic bottlenecks in the technical sense for humans and for other species.

Comment author: siodine 20 April 2012 02:21:47AM 1 point [-]

What's the current state of memetics in science (universities, journals, and so on)? I thought it turned out to be a dead end.

Comment author: timtyler 20 April 2012 11:06:33AM *  3 points [-]

Susan Blackmore recently described the current state of memetics as a science as being "pathetic".

A few pages on the general topic:

What we do have is a lot of modern work on "cultural evolution". It's not quite the same - but it's close, and it has many of the basics down.

Statistically, memetics may not be doing too well - but memes are going crazy - through the roof. It bodes well for the subject, I think.

Comment author: siodine 20 April 2012 02:04:11PM *  2 points [-]

Nice, I was impressed by the video and your page on the criticisms of memetics. But I think you'd come across better to more prejudiced people (i.e., most everyone) if you made some stylistic changes; would you care to see some criticisms?

Comment author: timtyler 20 April 2012 03:33:57PM 1 point [-]

Any feedback you care to offer would be more than welcome.

Comment author: metaphysicist 15 May 2012 05:10:09AM *  3 points [-]

I don't like contrarians, but I think honest and fundamental dissent is vital.

A recent finding in applied psychology is that small incentives can have large consequences. I think the importance of the upvote/downvote ratio is underestimated. The ratio is currently obviously greater than 1; I don't know how much greater. (Who does?) This creates an asymmetry in which, below zero, each downvote has disproportionate stigmatizing power, creating an atmosphere of apprehension among dissenters. The complexion of postings might change if downvoting and upvoting rights were issued so that the numbers tended to be equal. A downvote should simply mean the opposite of an upvote; it shouldn't be the rare failing mark. Then the outcome is truly more like a vote than a blackballing.
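One way to put a rough number on that asymmetry (the base rates below are invented for illustration): if upvotes greatly outnumber downvotes site-wide, a single downvote is the rarer and therefore more informative signal.

    import math

    # Suppose 80% of all votes cast are upvotes (made-up figure).
    p_up, p_down = 0.8, 0.2
    print(-math.log2(p_up))    # ~0.32 bits of surprise per upvote
    print(-math.log2(p_down))  # ~2.32 bits of surprise per downvote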

Comment author: anotherblackhat 19 April 2012 07:01:30PM 3 points [-]

I think the kind of people you're looking for are rare in general, so it shouldn't be a surprise that they are rare on LW.

That said, there's room for improvement. The karma system only allows for one kind of vote. It could be more like Slashdot and allow tagging of the vote, or better yet allow up/down voting in several different categories. If a comment is IMO well worded, clear, logical, and dead wrong, then it's probably worth reading, but not worth believing. Right now all I can do is vote it up or down. I'd like to be able to vote for clarity and against content at the same time. And as long as I'm wishing, I'd also like to be able to vote just to vote, so we can have user-generated polls without needing a karma dump. And humor - that deserves its own category. Better feedback, better results. Or at least, so I believe, never having had better feedback.
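A sketch of what such a multi-category vote record could look like; the category names are illustrative, not an existing LessWrong feature:

    from collections import Counter

    CATEGORIES = {"clarity", "content", "humor", "poll"}

    class Votes:
        # Keep a separate running tally per category instead of one number.
        def __init__(self):
            self.tallies = Counter()

        def vote(self, category, direction):
            assert category in CATEGORIES and direction in (-1, +1)
            self.tallies[category] += direction

    v = Votes()
    v.vote("clarity", +1)   # well worded, clear, logical...
    v.vote("content", -1)   # ...and dead wrong
    print(dict(v.tallies))  # {'clarity': 1, 'content': -1}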

Comment author: private_messaging 14 May 2012 10:51:06AM *  2 points [-]

I think we can see now how the situation evolved: SI ignored what 'contrarians' (the mainstream) said, the views they formed after reading SI's arguments, etc.

SI then went to talk to GiveWell, and the presentation resulted in Holden forming the same view - if you strip his statement down to bare bones, he says that he thinks giving money to SI results in either no change or an increase in risk, as the approach SI advocates is more dangerous than the current direction, and the rationale given had already been available (but was ignored).

Ultimately, it may be the case that SI arguments, when examined in depth by a random outsider, typically result in a strongly negative opinion of SI, but sometimes result in a positive opinion. The people who form a positive opinion seem to be a significant fraction at LW - ultimately, if you examine the AI-related arguments here and form a negative opinion, you'll be far less interested in trying to learn rationality from those people.

Comment author: Viliam_Bur 25 June 2012 10:40:45AM 1 point [-]

Is Holden's view really the same as the mainstream view, or is it just a surface similarity?

For example, a typical outsider would doubt SIAI's abilities, because a typical outsider thinks intelligent machines belong to sci-fi, not real life. Holden worries about lack of credentials. Among those who think intelligent machines are possible, a typical person thinks it will be OK, because obviously the machines will do only what we tell them to do. Holden worries that a (supposedly) Friendly AI is more risky than a "Tool AI". Etc.

Comment author: private_messaging 25 June 2012 11:44:38AM *  0 points [-]

Mainstream meaning the people with credentials that Holden was referring to (whose views are somewhat echoed by everyone else). The kind of folk who will not be swayed by some sort of mental confusion between common discourse ("the function of the AI is to make paperclips") and technical discourse, where a utility function is a mathematical function that is part of the specific design of a specific AI architecture. The same kind of folk who, if they came across the Russian mathematician name-dropping that's going on here, and after they politely exhausted the possibility that they had misunderstood, would be convinced that this is some complete pile of manure arising from an utterly incompetent person reporting his awesome misunderstandings of advanced mathematics he read off a popularization book. Second-order bad science popularization. I don't even care about AI any more. It boggles my mind that there's an entire community of people who just go around having such a gross lack of understanding of the things they are talking about.

edit: This stuff is only tolerated because it sort of promotes interest in mathematics. To be fair, even a very gross misunderstanding of mathematics may serve a good function if a person passionately talks of the importance of the mathematics he misunderstood. But once you start seriously pushing nonsense forward - you're out. This whole thing reminds me of an experience with the entirely opposite but equally dumb point: some guy with good verbal skills read Godel, Escher, Bach, thought he understood Godel's incompleteness theorem, and imagined that understanding the theorem implied that humans are capable of hypercomputation (beyond a Turing machine). It's literally impossible to talk sense into such cases. They don't understand the basics but jump ahead to the highly advanced topics, which they understand metaphorically. Not having properly studied mathematics, they do not understand how much care is required not to screw up (especially when bordering on philosophy). That can serve a good function, yes: someone sees the One Truth in, say, Solomonoff induction, and someone else actually learns the mathematics, which is interesting in its own right even though it doesn't disprove God or accomplish anything equally interesting.

Comment author: John_Maxwell_IV 18 April 2012 10:07:37PM *  5 points [-]

Maybe we could have a "contrarian of the month" award? This could also encourage normally agreeable Less Wrong users to argue against consensus positions in hopes of winning the award.

Comment author: David_Gerard 18 April 2012 10:50:24PM *  11 points [-]

Awarded to a nonconformist in black or a nonconformist in a clown suit? The latter is likely to get the tone argument (where someone's stated reason for rejecting a statement is its tone rather than its content).

Suggestion: whenever you're tempted to respond with a tone argument ("stop being so rude/dismissive/such a flaming arsehole/etc"), try really hard to respond to the substance as if the tone is lovely. The effort will net you upvotes ;-)

Comment author: cousin_it 18 April 2012 11:21:40PM *  9 points [-]

Seconding your suggestion because it's worked well for me every time I found the strength to use it. Also, when you feel really aggravated at your opponent's tone, fogging is a useful and civil-sounding technique.

Comment author: thomblake 19 April 2012 12:34:55AM 5 points [-]

That took forever for me to figure out. Wikipedia:Fogging.

Comment author: thomblake 21 April 2012 04:54:46PM 4 points [-]

Hmm... I just realized my standard for "taking forever" to find a piece of information is about 30 seconds. I love the future.

Comment author: David_Gerard 18 April 2012 11:38:07PM *  5 points [-]

For a good example, note how wonderful Wei Dai's tone consistently is, even when responding to comments where "go away you idiot" would be a quite reasonable reaction.

Comment author: Wei_Dai 19 April 2012 08:21:13AM 3 points [-]

Worked well in what sense? David talked about netting upvotes, but surely that's not a main consideration for you at this point. I'm hoping that being nice and responding just to substance might make the other person less belligerent and a better contributor to the community. I tried this on Dmytry and it didn't work, but I wonder if it has worked in the past on others. Do you or anyone else have any anecdotes in this regard?

Comment author: cousin_it 19 April 2012 06:14:00PM *  5 points [-]

Hmm, you're right, I just checked and it has never worked on rude people for me either. I must've been thinking about my exchanges with some people who were confident and confused about an issue, but not rude. Sorry.

Comment author: David_Gerard 19 April 2012 07:27:22PM 3 points [-]

It nets upvotes because it produces a useful response post for the onlookers, who have the votes. This is why it's work, because it involves turning an annoying post into something of value.

Comment author: Will_Newsome 20 April 2012 03:11:09AM *  2 points [-]

(I remember being sort of rude or at least mildly-aggressively-uncharitable to you about a year ago and you responded saying we could clear up any misunderstandings via chat. I subsequently issued some mea culpas and was probably more charitable towards you from then on. Not sure if that counts, IIRC I was only being mildly rude.)

Comment author: wedrifid 19 April 2012 08:56:59AM 2 points [-]

Worked well in what sense?

Avoiding flame wars. Leaving the 'contrarian' at least with the sense that some of their ideas have been heard and validated. Reducing the extent to which you yourself get caught up in negative spirals. All without enabling them or encouraging more undesired behavior.

Comment author: Wei_Dai 20 April 2012 12:11:55AM *  1 point [-]

Both you and David_Gerard seem to have taken my question as asking about the general benefits of "ignoring tone", when I was trying to figure out what cousin_it meant by "worked well", specifically whether he had succeeded in making a rude commenter less belligerent and a better contributor to the community, and also explaining why I wasn't sure what he meant.

Did you really misinterpret my question, or did you just use it as an opportunity to go off on a tangent and write something of general interest? (I'm trying to figure out if I need to be more careful about how to express myself.)

Comment author: ahartell 18 April 2012 10:39:12PM 4 points [-]

I'm not sure how much I like this idea (or the version I'm about to propose), but I think it would be better to treat it as a "Contrarian Quotes of the Month" type thing, kind of like the Rationality Quotes thread but using contrarian LessWrong comments.

Comment author: wedrifid 18 April 2012 10:21:44PM 13 points [-]

Maybe we could have a "contrarian of the month" award?

Can we please not do this? I already feel a pre-emptive contrarian outrage against whatever consensus is arrived at when awarding this "official contrarian" award. Then I start thinking of court jesters. This is a way to get people to think in the predetermined 'outside the box' box and change their 'mainstream' uniform to the 'rebel' uniform. That's not the way to get useful contrarians.

This could also encourage normally agreeable Less Wrong users to argue against consensus positions in hopes of winning the award.

You're advocating this as a good thing?

Comment author: John_Maxwell_IV 18 April 2012 10:39:47PM 1 point [-]

Are you suggesting folks can't be trusted to reliably identify genuinely high-quality opinions that disagree with theirs?

What can we learn from this thread?

http://lesswrong.com/lw/2sl/the_irrationality_game/

You're advocating this as a good thing?

The OP talks about folks who "like to find fault in every idea they see". Assuming this is valuable, there are two ways to get this kind of person: someone can be this kind of person naturally, or act like one in order to win an award.

Keep in mind that the award's specifications can be changed, for example, "best civil disagreement with LW majority" or "changed the most minds among LW users".

Comment author: thomblake 18 April 2012 10:05:05PM 3 points [-]

It's so difficult to find someone who will communicate on our level and yet disagrees on object-level things.

Probably the best way to get more contrarians is for folks from Less Wrong to learn from people outside the community, change their own beliefs because of it, and come back to share their wisdom with the masses.

Okay, that sounded better in my head too.

Comment author: RichardKennaway 18 April 2012 11:03:18PM *  7 points [-]

It's so difficult to find someone who will communicate on our level and yet disagrees on object-level things.

Is this because people smart enough to communicate on our level largely agree with a lot of what is generally agreed on here, for the same reason that most people all agree that 2+2=4?

Or is it because LessWrong is, for reasons unconnected with rationality, largely drawn from a certain very narrow demographic range, who grab onto this constellation of ideas like an enzyme to its substrate, and "communicating on our level" just means being that sort of person?

Comment author: vi21maobk9vp 24 April 2012 06:51:16AM 1 point [-]

It is not just about demographics.

You are supposed to be familiar with many standard arguments; but many of them make no sense if you have different priors, because they have too little evidence on their side (the AI researcher interview series seems to illustrate well that some kinds of experience can give you evidence against a few key points).

If you find Hanson's arguments about the core of the FOOM concept stronger than Eliezer's, you will have less incentive to familiarize yourself with everything you would need to communicate on what you called "our level", because it makes no sense without that key point.

So disagreement on the object level in the very beginning leads to unfamiliarity with the required things. Nothing too strange here.

Comment author: Will_Newsome 19 April 2012 04:13:48AM 3 points [-]

Some advice for wannabe contrarians and trolls, here. (Muflax seems to be in the middle of re-designing his blog so the link might not be 100% stable.)

Comment author: Manfred 19 April 2012 02:09:23AM *  2 points [-]

This could be rephrased more positively :D

If someone has something they may well be right about, and you don't learn it, that's a problem. Or if they make an argument that you know is wrong from parallel lines of evidence but can't say why it's wrong, that's a slightly smaller problem. And it's a problem with you, not with them. This is a general principle of disagreement. This post is the charge that we are bad at learning from people.

Hmm. Or maybe that's not right. We could be learning from them (on average), but still driving them away because what seemed like constructive argument from one side didn't from the other. In which case, that's fine and you shouldn't listen to this comment :P

Comment author: billswift 19 April 2012 02:33:34PM 2 points [-]

but still driving them away because what seemed like constructive argument from one side didn't from the other.

Or still driving them away because the comment stream petered out before people got around to expressing their changed viewpoint, and the contrarian left because he never realized he was having an impact. The post-and-comment format isn't really very good for a serious back-and-forth discussion. Especially since posts remain so briefly on the front page; note that this is another good reason for getting meet-up announcements OFF of the discussion page.

Comment author: RichardKennaway 18 April 2012 10:53:51PM 2 points [-]

I don't see a problem with driving "contrarians" away. That is what we should be doing.

To be a "contrarian" is to have written a bottom line already: disagree with everything everyone else agrees with.

To be a "contrarian" among smart people is to adopt reversed intelligence as a method of intelligence.

To be a "contrarian" among stupid people is, like American football, something that you have to be smart enough to do but stupid enough to think worth doing.

To be a "contrarian" is to limit oneself to writing against. I am not interested in what anyone is against until I have seen what they are for.

To be a "contrarian" is the safe and easy path. It is easy, because you can find good arguments against everything, as nothing is perfect. It is safe, for you can take agreement and disagreement alike as confirmation. Like most safe and easy paths, nothing is achieved along it.

To style oneself a "contrarian" is a giant red warning light that the person has nothing useful to say. That rule has not failed me yet.

Comment author: Wei_Dai 18 April 2012 11:12:41PM *  9 points [-]

Yes, being a "contrarian" is irrational for the individual, but may be good for the group. I wouldn't try to turn someone into a "contrarian" for my own benefit, but I don't feel qualms about making better use of people who already are.

Comment author: Khoth 18 April 2012 11:04:55PM 5 points [-]

I think there's a difference between "contrarian about X" and "contrarian". The former has (hopefully) looked at the evidence around X and come to a position on X that differs from the mainstream. The latter values being different over being right.

I think the first sort can be valuable, and shouldn't be driven away.

Comment author: RichardKennaway 18 April 2012 11:15:33PM *  0 points [-]

Wei Dai's first sentence only talks about the second sort, and I wouldn't call someone who has come to a position on X that differs from the mainstream a "contrarian about X". If they call themselves that, then instead of simply being able to present their arguments, they have tied their identity to being in opposition, and the whole downward spiral I described comes into play.

Comment author: chaosmosis 19 April 2012 12:37:33AM *  1 point [-]

There's no problem with identifying with arguments and wanting to defend certain positions if you are open to arguments and evidence against your position. It's actually convenient to do so for the purposes of discussion and advocacy.

Most people here are probably "transhumanists", which connects their beliefs to their identity, but that doesn't mean they wouldn't change their mind or alter their beliefs if they see evidence against transhumanism. Describing specific traits that apply to you and your positions shouldn't make you reluctant to change your positions, and also identifying with specific advocacy groups is probably inevitable.

I don't think you're really addressing what Wei Dai's original post is actually discussing. I think it should be apparent that Wei Dai isn't advocating having more closeminded commenters, but is advocating a more diverse set of viewpoints and advocacies. You're dismissing the overall point being made, based on an interpretation of "contrarian" that doesn't make sense when viewed in the context of the advocacy statement within the original post. Even if you're right about what "contrarian" means, please mentally replace every instance of "contrarian" with "person advocating something unpopular", and that will make this discussion much more productive.

I agree that tying one's identity to opposition specifically is bad, though. That's political paralysis as a consequence of misguided cynicism. If you reject every position then you can advocate nothing. That's not just ineffective, it's a horrible way to live. Affirmation is good.

Comment author: wedrifid 19 April 2012 12:43:26AM 0 points [-]

I don't think you're really addressing what Wei Dai's original post is actually discussing. I think that it should be apparent that she isn't advocating having more closeminded commenters.

As far as I know Wei Dai is male.

Comment author: Alicorn 19 April 2012 12:44:17AM 2 points [-]

As far as I know Wei Dai is male.

I've met him in person, and this is the case.

Comment author: chaosmosis 19 April 2012 12:57:57AM 0 points [-]

I realized while writing the post that I didn't know his gender and proceeded to edit as fast as I could, but you people still caught the mistake before I fixed it; I'm embarrassed. At least it's better to use "she" than "he" as my default assumption (it balances against gendered language in favor of men, etc). Although on second thought it probably indicates that I associate civility with females, which is stupid and unfair and can't be intentionally controlled by me anyway, so it's not really worth lamenting.

But, sorry, Wei Dai, although it was just an accident and I doubt you'll care much.

Comment author: wedrifid 19 April 2012 01:05:27AM 0 points [-]

Although on second thought it probably indicates that I associate civility with females which is stupid and unfair and can't be intentionally controlled by me anyways so it's not really worth lamenting.

It makes a difference that there are some Wei Dais that are female.

I probably wouldn't default to associating anti-consensus advocacy with female. That goes against a notorious (and as far as I know reasonably well founded) stereotype.

Comment author: Eugine_Nier 19 April 2012 12:50:00AM *  3 points [-]

I sometimes argue in favor of positions I don't really believe (i.e., assign p<.5 to) if I think the probability is higher than general consensus and I suspect at least Will Newsome frequently does the same.

Comment author: Will_Newsome 19 April 2012 01:05:46AM 4 points [-]

Yes, but it's often a hassle. You risk being accused of trolling, overconfidence, &c., and it's difficult to claim that such accusations don't have some tinge of truth.

I suspect it's not overall a very good habit and that I bring it to LessWrong mostly because it happens to work well in my personal rationality practice. On LessWrong it's probably better to put in a little extra work to find a way to go meta—don't support a side, but show clear not-introspectively-obvious reasons why someone could hold a belief that was to them introspectively obvious and thus difficult to explain. I generally like the anti-democracy LW commenters because they seem to have practiced this skill.

Comment author: Viliam_Bur 19 April 2012 11:06:55AM *  0 points [-]

This comment should have 99 upvotes and should be moved to "Main" as a separate article. Then we should link to it whenever the same topic appears again.

Reversing group-think is like reversing stupidity, or like underconfidence at the group level. It can be done. It can be interesting. But I prefer reading rational people's best estimates of reality. And I prefer disagreement based on genuine experience and belief, not on someone's felt duty to artificially maintain diversity.

If you disagree with something, for example the many-worlds interpretation, say it. Say "I disagree because of X and Y". Or say "I disagree because it feels wrong, and because many people disagree, including some experts in the field (which is good Bayesian evidence)". That's all OK. But don't say or imply things like "we should attract more people who disagree with the many-worlds interpretation, to keep our discussion balanced". That is manipulating evidence.

If anything, we should discuss a wider range of topics. Then we will naturally attract people who agree on N-1 topics and disagree on 1 topic; and they will say so, and we will know they mean it.

Comment author: duckduckMOO 19 April 2012 12:19:56PM *  2 points [-]

Haven't read it yet, but you can start by not calling everyone who disagrees with the established view a contrarian. It implies that anyone who disagrees is doing so to play out a role rather than out of actual disagreement.

edit: so it seems that people playing out a role are exactly what you want more of. I assumed you were using "how can we get more contrarians" as codespeak for "how can we get more disagreement". If you just want more actual "contrarians", well, I'm not sure "contrarian" is a real category. In any case it's not the relevant category. What you want is people who like criticising things, not people who like disagreeing with established opinion. (Again, I really have to emphasise how ridiculous the way "contrarian" is used is; it's blatantly a story someone made up to ad-hominem away criticisms of standard ideas.)

For my part, I would not feel comfortable finding fault in everything I see here. I know I can do it; I just don't think it would go down well. Not that it tends to go down well in many other places either. Part of the problem is something like people being too comfortable talking in terms of, e.g., evolution's intentions, so that good criticisms can be dismissed as pedantry.

I might make a contrarian account though and see how well that goes down.

Comment author: Incorrect 18 April 2012 11:30:31PM 0 points [-]

I completely disagree. The optimal number of contrarians is 0.

Comment author: orthonormal 19 April 2012 02:48:15AM *  3 points [-]

It's unlikely that the "LW mainstream position" is currently right about all of its weird beliefs, though I wouldn't be surprised if we're right to take each of the ideas more seriously than the normal mainstream does.

EDIT: never mind, I didn't catch that you were doing this.

Comment author: TimS 18 April 2012 11:48:33PM 3 points [-]

What is the optimal number of people who are intelligent but, on reflection, don't agree with the LessWrong consensus?

Comment author: Incorrect 18 April 2012 11:51:48PM 2 points [-]

Give me your answer to that question before I answer.

Comment author: TimS 18 April 2012 11:58:33PM 3 points [-]

I'd guess that somewhere between a quarter and a third of the current active LessWrong community should be willing to intelligently disagree with the consensus, if our goal is to improve our theories of how society does and should work.

Comment author: Incorrect 19 April 2012 12:00:29AM 1 point [-]

I completely disagree.

Comment author: chaosmosis 20 April 2012 07:58:39PM *  1 point [-]

Tangentially related: I was in the HPMOR thread and noticed that there's a strong tendency to reward good answers but only a weak tendency to reward good questions. The questions are actually more important than the answers, since they're a prerequisite to the answers, but they don't seem to be treated as such. They have roughly half as much karma as the popular answers do, which seems unfair.

I would guess that this extends to the rest of the site as well, since it's a fairly common thing for humans to do. Things would probably be better here if we tried to change it. As a rough rule of thumb, we should make it our general policy to upvote a question whenever the question itself is not stupid and it results in an answer that is insightful and deserves an upvote.

I tried to not use "we" in this comment but then it was grammatically incoherent and it wasn't worth the effort of fixing it.

Comment author: Nornagest 20 April 2012 08:09:19PM *  1 point [-]

Disagree. Insightful-sounding questions are much, much easier to come up with than genuinely insightful answers, so despite the fact that the former is a prerequisite to the latter, rewarding them equally would create perverse incentives.

At least, that's true if our goal is to maximize the number of insightful results we generate -- which seems like a pretty reasonable assumption to me.

Comment author: chaosmosis 23 April 2012 02:25:32AM *  1 point [-]

You cheated. You're comparing "insightful-sounding questions" to "genuinely insightful answers"; of course the genuine answers come out ahead. That's completely unfair to the suggestion. But assuming that people on LessWrong actually have the ability to distinguish insightful-sounding questions from genuinely insightful questions (which seems just as easy as distinguishing insightful-sounding answers from genuinely insightful answers, btw), the proposal makes sense.

Your comment does not contain an argument. It contains a blatantly flawed framing of the proposal I put forward and a catchphrase, "perverse incentives", without any explanation of the thought that goes into that catchphrase. You never articulate what the actual impact of these perverse incentives would look like or how they would arise. Do you anticipate that if more people upvoted questions we would end up with fewer good results? I see no reason to believe that outcome would occur.

There's a huge tendency within academia to ignore anything with partial solutions or doubts or blank spaces, and to undervalue questioning. Questions are inherently low status because they explicitly reveal a large gap in knowledge that the asker cannot easily overcome, and because they have an element of submission to the "more intelligent" person who will answer the question. My suggestion is designed to counterbalance that. That the best way to maximize the number of insightful thoughts and results you produce is to ask insightful questions seems like a very reasonable assumption to me.

Moreover, putting forth the question, which comes at an earlier point in the thought process, allows others to more easily understand whatever conclusions you may or may not reach. It also allows people to take that question down different avenues of thought and reach useful conclusions that you would not have even considered.

Now, clearly we don't want to ask questions merely for the sake of asking questions. But good questions are extremely important and should be encouraged. Upvoting more questions than usual, and asking more questions as a general rule, is therefore a good idea. The proposal can be applied selectively by the intelligent commenters of LessWrong, and none of the "perverse incentives" you envision will arise or do any damage to the site.

Comment author: Nornagest 23 April 2012 03:18:02AM 2 points [-]

"Perverse incentives" isn't a LW catchphrase. It's a term from economics, used to describe situations where external changes in the incentive structure around some good you want to maximize actually end up maximizing something else at its expense. This often happens when the thing you wanted to maximize is hard to quantify or has a lot of prerequisites, making it easier to encourage things by proxy -- which sometimes works, but can also distort markets. Goodhart's law is a special case. I'd assumed this was a ubiquitous enough concept that I wouldn't have to explain it; my mistake.

In this case, we've got an incentive (karma) and a goal to maximize (insightful results, which require both a question and a promising answer to it). In my experience, which you evidently disagree with, judging the fruitfulness of questions (other than the trivial or obviously flawed) is difficult without putting effort into analyzing them: effort which is unproductive if expended on a dead-end question. Also in my experience, questions are cheap if you're already closely familiar with the source material, which most of the people posting in the MoR threads probably are. If I'm right about both of these points, valuing insightful-sounding questions on par with insightful-sounding answers creates a karma disincentive to spend time in analysis of open questions (you could spend the same time writing up new questions and be rewarded more), and a proportionally lower number of results.

There are a number of ways this could fail in practice: the question or answer space might be saturated, or people's inclinations in this area might be insensitive to karma (in which cases no amount of incentives either way would help). One of the premises could be wrong. But as marginal reasoning, it's sound.
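To make that marginal reasoning concrete, here's a minimal toy model in Python. Everything in it is an illustrative assumption rather than a claim about actual LW behavior: the hour costs, the karma payoffs, and especially the rule that commenters split their time in proportion to karma-per-hour, which is a deliberately soft stand-in for karma maximization.

    HOURS = 10             # assumed time budget per commenter
    COST_Q, COST_A = 1, 4  # assumed hours to write a question vs. an answer
    KARMA_A = 4            # assumed karma earned per answer

    def results_per_commenter(karma_q):
        """An insightful result needs one question plus one promising answer,
        so output is capped by whichever side ends up scarcer."""
        rate_q, rate_a = karma_q / COST_Q, KARMA_A / COST_A
        hours_q = HOURS * rate_q / (rate_q + rate_a)  # time follows karma
        questions = hours_q / COST_Q
        answers = (HOURS - hours_q) / COST_A
        return min(questions, answers)

    print(results_per_commenter(karma_q=1))  # modest reward for questions: 1.25
    print(results_per_commenter(karma_q=4))  # karma parity with answers: 0.5

Under these toy numbers, rewarding questions at parity pulls effort toward the cheap activity, and since each result needs both halves, output falls from 1.25 to 0.5 results per commenter. Change the assumptions and the conclusion can flip, which is exactly the failure space described in the paragraph above.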

Comment author: chaosmosis 23 April 2012 02:28:20PM *  1 point [-]

This is all reasoning that should have been made explicit in your comment. Your objection has good thought behind it, but I had no way of knowing that from your previous comment. I knew that "perverse incentives" was an economic catchphrase, but I thought you were referencing it without reason, because you made no attempt to describe why the perverse incentives would arise or why LessWrong commenters would have a difficult time distinguishing intelligent questions from dumb ones. I thought you were treating the economic catchphrase like phlogiston. If the thought process above had been described in your comment, it would have made much more sense.

In my experience, which you evidently disagree with, judging the fruitfulness of questions (other than the trivial or obviously flawed) is difficult without putting effort into analyzing them: effort which is unproductive if expended on a dead-end question.

Isn't this the same with answers? I don't see why it wouldn't be.

Also in my experience, questions are cheap if you're already closely familiar with the source material, which most of the people posting in the MoR threads probably are.

Isn't this the same with answers? I don't see why it wouldn't be.

If I'm right about both of these points, valuing insightful-sounding questions on par with insightful-sounding answers creates a karma disincentive to spend time in analysis of open questions (you could spend the same time writing up new questions and be rewarded more), and a proportionally lower number of results.

This only makes sense if people are rational agents. Given that you've already conceded that we irrationally undervalue good questions and questioners, doesn't it make more sense that actively trying to be kinder to questioners would return the question/answer market to its objective equilibrium, thus maximizing utility?

I note the irony of asking questions here but I couldn't manage to express my thoughts differently.

Comment author: siodine 20 April 2012 01:13:10AM 0 points [-]

I've noticed there have been a dozen or more threads and suggestions like this one; has anything ever come of them? These suggestions are starting to look like simple opportunities for circle-jerking. Who would even decide on and implement these things? Yudkowsky?

Comment author: MixedNuts 20 April 2012 09:29:31AM 1 point [-]

Matt.