Crocker's Rules: How far to take it?

7 lsparrish 20 January 2012 02:46PM

Recently I've been considering declaring Crocker's Rules. The wiki page and source document don't suggest any particular time limit or training period, and also don't provide any empirical results of testing it, positive or negative. It sounds good in theory, but how does it affect people in the real world?

  • If you operate under the Rules for an extended period, does your social status diminish due to behaving like a pushover when insulted?
  • Does it usually become unbearable after a particular period of time? Or is there a temporary discomfort that you get over quickly?
  • Is there a list of signatories who have declared Crocker's Rules on an indefinite or time-limited basis?
  • Where can I find examples of dialogue that has benefited (or suffered) from this?

It seems like an "obviously cool" idea, but the risk to one's reputation is worth taking into consideration. If it is clear that the risk is low, and if the value to be gained is clearly very high, we should probably be doing more to encourage it as an explicit norm.

On the other hand, if it is just one of those ideas that sounds better in theory than it is in practice (because the theory does not correctly model reality), or is just yet another signaling game with a net negative value, that is worth knowing as well.

I haven't seen anyone argue against Crocker's Rules or claim it ruined their life, so my estimation is that the risk is low (although there is a small sample size to start with). Also, I have seen at least one statement from lukeprog implying that it has been instrumental in triggering updates during live conversations he has observed, indicating that the value is high (though its causal role is not firmly established in that example).

Does anyone have further data points to add?

A question about Eliezer

33 perpetualpeace1 19 April 2012 05:27PM

I blew through all of MoR in about 48 hours, and in an attempt to learn more about the science and philosophy that Harry espouses, I've been reading the sequences and Eliezer's posts on Less Wrong. Eliezer has written extensively about AI, rationality, quantum physics, singularity research, etc. I have a question: how correct has he been?  Has his interpretation of quantum physics predicted any subsequently-observed phenomena?  Has his understanding of cognitive science and technology allowed him to successfully anticipate the progress of AI research, or has he made any significant advances himself? Is he on the record predicting anything, either right or wrong?   

Why is this important: when I read something written by Paul Krugman, I know that he has a Nobel Prize in economics, and I know that he has the best track record of any top pundit in the US in terms of making accurate predictions.  Meanwhile, I know that Thomas Friedman is an idiot.  Based on this track record, I believe things written by Krugman much more than I believe things written by Friedman.  But if I hadn't read Friedman's writing from 2002-2006, then I wouldn't know how terribly wrong he has been, and I would be too credulous about his claims.  

Similarly, reading Mike Darwin's predictions about the future of medicine was very enlightening.  He was wrong about nearly everything.  So now I know to distrust claims that he makes about the pace or extent of subsequent medical research.  

Has Eliezer offered anything falsifiable, or put his reputation on the line in any way?  "If X and Y don't happen by Z, then I have vastly overestimated the pace of AI research, or I don't understand quantum physics as well as I think I do," etc etc.

The Fallacy of Gray

97 Eliezer_Yudkowsky 07 January 2008 06:24AM

Followup to: Tsuyoku Naritai, But There's Still A Chance Right?

    The Sophisticate:  "The world isn't black and white.  No one does pure good or pure bad. It's all gray.  Therefore, no one is better than anyone else."
    The Zetet:  "Knowing only gray, you conclude that all grays are the same shade.  You mock the simplicity of the two-color view, yet you replace it with a one-color view..."
      —Marc Stiegler, David's Sling

I don't know if the Sophisticate's mistake has an official name, but I call it the Fallacy of Gray.  We saw it manifested in yesterday's post—the one who believed that odds of two to the power of seven hundred and fifty million to one, against, meant "there was still a chance".  All probabilities, to him, were simply "uncertain" and that meant he was licensed to ignore them if he pleased.

"The Moon is made of green cheese" and "the Sun is made of mostly hydrogen and helium" are both uncertainties, but they are not the same uncertainty.

Everything is shades of gray, but there are shades of gray so light as to be very nearly white, and shades of gray so dark as to be very nearly black.  Or even if not, we can still compare shades, and say "it is darker" or "it is lighter".

continue reading »

Diseased thinking: dissolving questions about disease

236 Yvain 30 May 2010 09:16PM

Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses

Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood, and moral dignity.

      —George Will, townhall.com

Sandy is a morbidly obese woman looking for advice.

Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while?

Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass.

Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down.

When she tells each of her friends about the opinions of the others, things really start to heat up.

Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet.

Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead.

Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma.

Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people.

Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers around the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband.

The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue.

continue reading »

Mike Darwin on Steve Jobs's hypocritical stance towards death

25 Synaptic 08 October 2011 03:32AM

First, Darwin describes Jobs's (far mode) stance towards death: 

As Aschwin points out Jobs is on record (his Stanford Commencement Speech) as saying that death is the best thing that ever happened to life - that it clears out the old, and makes way for the new.

But these are Jobs's actual (near mode) actions regarding his own death: 

The really big story, so far largely unexploited by the media, is that Jobs got a liver transplant and got it here in the US. This just does not happen in patients with his Dx and prognosis - not since Mickey Mantle, anyway. And his outcome was exactly as was predicted. This infuriates those 'in the know' in the transplant community, because you have only to look to guys like Jim Neighbors, Larry Hagman, or even Larry Kramer who got livers many years or even a decade or two ago, and who continue not only to survive, but to do well. To put the liver of a 25-year old into a ~54 year old man with metastatic neuroendocrine pancreatic cancer violates the established protocols of just about every transplant center in the US.

The conclusion:

I find it more than a little hypocritical that Jobs, who spoke so glowingly of the utility of death for others, used every bit of medical technology AND his considerable wealth and influence, to postpone it for it himself, including the expedient of taking a GIFT, given with the sole intention of its being used to provide genuinely life saving benefit (not a futile exercise in medical care) and squandering it on a doomed attempt to save his own life. If you have the temerity to stand before the entire population of this planet and proclaim the goodness of death, then you should have the balls to accept it - especially when your own warped, erroneous and IRRATIONAL decision making was the proximate cause of your own dying. Instead, Jobs chose to grasp at straws, take a gift from a dead man and his family, given in good faith, and squander it on his own lust for more of the very thing (life) that he has publicly proclaimed it is a second best to "Death (which) is very likely the single best invention of life." 

 

I hate TL;DR

21 MarkusRamikin 20 September 2011 09:23AM

It's a minor annoyance but perhaps I am not the only one who feels this way.

I dislike it when we summarize our posts and articles with a "tl;dr". There's a perfectly good English word for it, namely "summary".

"tl;dr", besides being an ugly internetism, seems to me to convey a certain additional meaning, over the neutral "summary". If, as happens on the rest of the web, a commenter responds to a post with "tl;dr", it expresses an expectation to be entertained without exercising the reader's attention span or making him think. It's also an easy and insulting way to respond to someone you disagree with, avoiding having to process their argument and maybe change your mind.

If an author uses it in their own article, it seems to me to be pandering to the same expectation, apologising for actually having something to say that takes a few paragraphs to explore properly. Less Wrong, a community consisting largely of above-average people in terms of intelligence and ability to follow detailed arguments, is the last place I'd like or expect to see that attitude validated. If your post has substance and says something I didn't know/think before, of course it will take work - apparently even in the thermodynamic sense - to process it...

It's particularly jarring to see a tl;dr appended to posts that took me only a few seconds to read in full anyway.

Or maybe it's just me. I don't know.

/rant

Polyhacking

75 Alicorn 28 August 2011 08:35AM

This is a post about applied luminosity in action: how I hacked myself to become polyamorous over (admittedly weak) natural monogamous inclinations.  It is a case history about me and, given the specific topic, my love life, which means gooey self-disclosure ahoy.  As with the last time I did that, skip the post if it's not a thing you desire to read about.  Named partners of mine have given permission to be named.

1. In Which Motivation is Acquired

When one is monogamous, one can only date monogamous people.  When one is poly, one can only date poly people.1  Therefore, if one should find oneself with one's top romantic priority being to secure a relationship with a specific individual, it is only practical to adapt to the style of said individual, presuming that's something one can do.  I found myself in such a position when MBlume, then my ex, asked me from three time zones away if I might want to get back together.  Since the breakup he had become polyamorous and had a different girlfriend, who herself juggled multiple partners; I'd moved, twice, and on the way dated a handful of people to no satisfactory clicking/sparking/other sound effects associated with successful romances. So the idea was appealing, if only I could get around the annoying fact that I was not, at that time, wired to be poly.

Everything went according to plan: I can now comfortably describe myself and the primary relationship I have with MBlume as poly.  <bragging>Since moving back to the Bay Area I've been out with four other people too, one of whom he's also seeing; I've been in my primary's presence while he kissed one girl, and when he asked another for her phone number; I've gossiped with a secondary about other persons of romantic interest and accepted his offer to hint to a guy I like that this is the case; I hit on someone at a party right in front of my primary.  I haven't suffered a hiccup of drama or a twinge of jealousy to speak of and all evidence (including verbal confirmation) indicates that I've been managing my primary's feelings satisfactorily too.</bragging>  Does this sort of thing appeal to you?  Cross your fingers and hope your brain works enough like mine that you can swipe my procedure.

continue reading »

A History of Bayes' Theorem

53 lukeprog 29 August 2011 07:04AM

Sometime during the 1740s, the Reverend Thomas Bayes made the ingenious discovery that bears his name but then mysteriously abandoned it. It was rediscovered independently by a different and far more renowned man, Pierre Simon Laplace, who gave it its modern mathematical form and scientific application — and then moved on to other methods. Although Bayes’ rule drew the attention of the greatest statisticians of the twentieth century, some of them vilified both the method and its adherents, crushed it, and declared it dead. Yet at the same time, it solved practical questions that were unanswerable by any other means: the defenders of Captain Dreyfus used it to demonstrate his innocence; insurance actuaries used it to set rates; Alan Turing used it to decode the German Enigma cipher and arguably save the Allies from losing the Second World War; the U.S. Navy used it to search for a missing H-bomb and to locate Soviet subs; RAND Corporation used it to assess the likelihood of a nuclear accident; and Harvard and Chicago researchers used it to verify the authorship of the Federalist Papers. In discovering its value for science, many supporters underwent a near-religious conversion yet had to conceal their use of Bayes’ rule and pretend they employed something else. It was not until the twenty-first century that the method lost its stigma and was widely and enthusiastically embraced.

So begins Sharon McGrayne's fun new book, The Theory That Would Not Die, a popular history of Bayes' Theorem. Instead of reviewing the book, I'll summarize some of its content below. I skip the details and many great stories from the book, for example the (Bayesian) search for a lost submarine that inspired The Hunt for Red October. Also see McGrayne's Google Talk here. She will be speaking at the upcoming Singularity Summit, too, which you can register for here (price goes up after August 31st).

continue reading »

My Bayesian Enlightenment

25 Eliezer_Yudkowsky 05 October 2008 04:45PM

Followup to: The Magnitude of His Own Folly

I remember (dimly, as human memories go) the first time I self-identified as a "Bayesian".  Someone had just asked a malformed version of an old probability puzzle, saying:

If I meet a mathematician on the street, and she says, "I have two children, and at least one of them is a boy," what is the probability that they are both boys?

In the correct version of this story, the mathematician says "I have two children", and you ask, "Is at least one a boy?", and she answers "Yes".  Then the probability is 1/3 that they are both boys.

But in the malformed version of the story—as I pointed out—one would common-sensically reason:

If the mathematician has one boy and one girl, then my prior probability for her saying 'at least one of them is a boy' is 1/2 and my prior probability for her saying 'at least one of them is a girl' is 1/2.  There's no reason to believe, a priori, that the mathematician will only mention a girl if there is no possible alternative.

So I pointed this out, and worked the answer using Bayes's Rule, arriving at a probability of 1/2 that the children were both boys.  I'm not sure whether or not I knew, at this point, that Bayes's rule was called that, but it's what I used.

And lo, someone said to me, "Well, what you just gave is the Bayesian answer, but in orthodox statistics the answer is 1/3.  We just exclude the possibilities that are ruled out, and count the ones that are left, without trying to guess the probability that the mathematician will say this or that, since we have no way of really knowing that probability—it's too subjective."

I responded—note that this was completely spontaneous—"What on Earth do you mean?  You can't avoid assigning a probability to the mathematician making one statement or another.  You're just assuming the probability is 1, and that's unjustified."

To which the one replied, "Yes, that's what the Bayesians say.  But frequentists don't believe that."

And I said, astounded: "How can there possibly be such a thing as non-Bayesian statistics?"
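Both answers in this exchange can be checked mechanically. The sketch below (my own illustration, not from the post) enumerates the four equally likely two-child families and computes the orthodox answer of 1/3, which conditions only on the event "at least one boy", alongside the Bayesian answer of 1/2, which also models the probability that a mixed-sex mother would choose to mention the boy rather than the girl:

```python
from fractions import Fraction
from itertools import product

families = list(product("BG", repeat=2))  # BB, BG, GB, GG
prior = Fraction(1, 4)                    # each family equally likely

# Orthodox conditioning: among families with at least one boy,
# what fraction have two boys?
with_boy = [f for f in families if "B" in f]
orthodox = Fraction(sum(1 for f in with_boy if f == ("B", "B")), len(with_boy))
assert orthodox == Fraction(1, 3)

# Bayesian answer: model the statement itself.  A BB mother says
# "at least one is a boy" with probability 1; a mixed-sex mother says
# it with probability 1/2 (she might equally well have mentioned the
# girl); a GG mother never says it.
def p_says_boy(family):
    return {2: Fraction(1), 1: Fraction(1, 2), 0: Fraction(0)}[family.count("B")]

evidence = sum(prior * p_says_boy(f) for f in families)
posterior_bb = prior * p_says_boy(("B", "B")) / evidence
assert posterior_bb == Fraction(1, 2)
```

The two answers differ only in whether the likelihood of the statement is modeled or implicitly set to 1, which is exactly the point made in the dialogue.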

continue reading »

Competent Elites

46 Eliezer_Yudkowsky 27 September 2008 12:07AM

Followup to: The Level Above Mine

(Anyone who didn't like yesterday's post should probably avoid this one.)

I remember what a shock it was to first meet Steve Jurvetson, of the venture capital firm Draper Fisher Jurvetson.

Steve Jurvetson talked fast and articulately, could follow long chains of reasoning, was familiar with a wide variety of technologies, and was happy to drag in analogies from outside sciences like biology—good ones, too.

I once saw Eric Drexler present an analogy between biological immune systems and the "active shield" concept in nanotechnology, arguing that just as biological systems managed to stave off invaders without the whole community collapsing, nanotechnological immune systems could do the same.

I thought this was a poor analogy, and was going to point out some flaws during the Q&A.  But Steve Jurvetson, who was in line before me, proceeded to demolish the argument even more thoroughly.  Jurvetson pointed out the evolutionary tradeoff between virulence and transmission that keeps natural viruses in check, talked about how greater interconnectedness led to larger pandemics—it was very nicely done, demolishing the surface analogy by correct reference to deeper biological details.

I was shocked, meeting Steve Jurvetson, because from everything I'd read about venture capitalists before then, VCs were supposed to be fools in business suits, who couldn't understand technology or engineers or the needs of a fragile young startup, but who'd gotten ahold of large amounts of money by dint of seeming reliable to other business suits.

One of the major surprises I received when I moved out of childhood into the real world, was the degree to which the world is stratified by genuine competence.

continue reading »
