Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Brian_Jaress2 18 May 2008 09:07:00AM 2 points

When they taught me about the scientific method in high school, the last step was "go back to the beginning and repeat." There was also a lot about theories replacing other theories and then being replaced later, new technologies leading to new measurements, and new ideas leading to big debates.

I don't remember if they explicitly said, "You can do science right and still get the wrong answer," but it was very strongly (and logically) implied.

I don't know what you were taught, but I expect it was something similar.

All this "emotional understanding" stuff sounds like your personal problem. I don't mean that it isn't important or that I don't have sympathy for any pain you suffered. I just think it's an emotion issue, not a science issue.

Comment author: neuromancer92 18 April 2012 06:42:42PM 0 points

I understand the point you're raising, because it caught me for a while, but I think I also see the remaining downfall of science. It's not that science leads you to the wrong thing, but that it cannot lead you to the right one. You never know if your experiments actually brought you to the right conclusion - it is entirely possible to be utterly wrong, and completely scientific, for generations and centuries.

Not only this, but you can be obviously wrong. We look at people trusting in spontaneous generation, or a spirit theory of disease, and mock them - rightfully. They took "reasonable" explanations of ideas, tested them as best they could, and ended up with unreasonable confidence in utterly illogical ideas. Science has no step in which you say "and is this idea logically reasonable?", and that step is unattainable even if you add it. Science offers two things - gradual improvement, and safety from being wrong with certainty. The first is a weak reward - there is no schedule to science, and by practicing it there's a reasonable chance that you'll go your entire life with major flaws in your worldview. The second is hollow - you are defended from taking a wrong idea and saying "this is true" only inasmuch as science deprives you of any certainty. You are offered a qualifier to say, not a change in your ideas.

In response to Conjunction Fallacy
Comment author: J_Thomas 19 September 2007 10:36:38AM 8 points

I think this might possibly be explained if they looked at it in reverse. Not "how likely is it that somebody with this description would be A-F", but "how likely is it that somebody who's A-F would fit this description".

When I answered it I started out by guessing how many doctors there were relative to accountants -- I thought fewer -- and how many architects there were relative to doctors -- much fewer. If there just aren't many architects out there, then it would take a whole lot of selection for somebody to be more likely to be one.

But if you look at it the other way around then the number of architects is irrelevant. If you ask how likely is it an architect would fit that description, you don't care how many architects there are.

So it might seem unlikely that a jazz hobbyist would be unimaginative and lifeless. But more likely if he's also an accountant.

Comment author: neuromancer92 17 April 2012 10:45:54PM 1 point

I think this is a key point - given a list of choices, people compare each one to the original statement and ask "how well does this fit?" I certainly started that way before an instinct about multiple conditions kicked in. Given that, it's not that people are incorrectly finding the chance that A-F are true given the description, but that they are correctly finding the chance that the description is true, given one of A-F.

I think the other circumstances might display tweaked versions of the same forces, also. For example, answering the suspension-of-relations question not as P(X∧Y) vs. P(Y), but perceiving it as P(Y given X).
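The likelihood-versus-posterior distinction above can be sketched numerically. In this hypothetical example (the base rates and fit probabilities are invented for illustration, not taken from the original experiment), judging by P(description | profession) ranks "architect" first, while Bayes' rule, which also weighs the base rate, ranks it last:

```python
# Invented numbers for illustration: fraction of the population in each
# profession, i.e. P(profession).
base_rate = {"accountant": 0.10, "doctor": 0.03, "architect": 0.005}

# How well the "unimaginative, lifeless" description fits each profession,
# i.e. P(description | profession) -- the quantity people seem to judge.
likelihood = {"accountant": 0.5, "doctor": 0.2, "architect": 0.6}

# Bayes: P(profession | description) is proportional to
# P(description | profession) * P(profession).
unnorm = {p: likelihood[p] * base_rate[p] for p in base_rate}
total = sum(unnorm.values())
posterior = {p: unnorm[p] / total for p in unnorm}

# Judged by fit alone, "architect" wins; judged by the posterior, the tiny
# base rate pushes it to the bottom.
print(max(likelihood, key=likelihood.get))  # architect
print(max(posterior, key=posterior.get))    # accountant
```

The point of the sketch is that both rankings are internally consistent answers - they just answer different questions, which is the reversal the comment above describes.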

Comment author: neuromancer92 17 April 2012 06:46:57PM 1 point

I'm significantly torn on whether to enable this. I understand the downsides of seeing authors (and am confident that I'm engaging in at least some of them), but I have one issue with it. Knowing authors can improve my ability to rapidly and effectively process posts. There's at least one author who makes very good points, but sometimes glosses over issues that turn out to be either quite complicated or openings to criticism of the post. I've found these omissions both important and quite hard to find - at the moment, it's worth it to me to leave author names active just to be aware that I need to read these posts with a different style of criticism than I normally engage in.

In short, there are sometimes positive outcomes of knowing authors, if only for general efficiency increases.

In response to That Magical Click
Comment author: Zubon 26 January 2010 12:48:44AM 5 points

Was there some particular bright line at which cryonics flipped from "impossible given current technology" to "failure to have universal cryonics is a sign of an insane society"? That is a sign change, not just a change in magnitude.

If we go back 50 or 100 years, we should be at a point where then-present preservation techniques were clearly inadequate. Maybe vitrification was the bright line, I do not pretend that preserving brains is a specialty of mine. I just empathize with those who still doubt that the technology is good enough to fulfill its claims prior to seeing a brain revived. We have a bold history of technological claims that turned out to be not all that, but we promise that it will work fine in twenty years.

That seems like a perfectly sane outside view: every (?) previous human preservation technique was found inadequate over a span of a few years or decades, so we assume against the latest one until proven otherwise.

We must still have large areas of the planet where it is still sane not to sign up your kids, notably where the per capita income is below $300/year.

In response to comment by Zubon on That Magical Click
Comment author: neuromancer92 14 February 2012 06:24:14AM 3 points

Rather than being a sane view, this is a logical fallacy. I don't know of a specific name to give it, but survivorship bias and the anthropic principle are both relevant.

The fallacy is this: for anything a person tries to do, every relevant technology will be inadequate up to the one that succeeds. Inherently, the first success at something will end the need to make new steps towards it, so we will never see a new advance where past advances have been sufficient for an end.

The weak anthropic principle says that we only observe our universe when it is such that it will permit observers. Similarly, we can assume that if new developments are being made towards an aim, they are being made because past steps were inadequate. We cannot view new advances as having their chances of success biased by past failures since they come into existence only in the case that past attempts have indeed failed.

(I am aware that technologies are improved on even after they achieve their aim, but in these cases new objectives like "faster" or "cheaper" are still unsatisfied, and drive the progress.)

Comment author: Friendly-HI 16 June 2011 01:19:20PM 2 points

That explanation via analogy is actually quite good and may very well be true.

If for some reason memes fail to properly fortify themselves when they claim territory inside your brain, they may be very easy to replace by competing memes, which could explain the "clickiness" of some people.

If true, one thing we may expect from (as of yet) non-rationalist people whose minds have that clicking quality, is that they may be unusually susceptible to New Age crap or generally tend to alter their views quickly. It was certainly the case with me when I was young and still lacked the mental tools of rationality.

Also, a slight rebelliousness or disregard toward what other people think may be part of it. If you ever introduced someone to a position that is very unconventional, or even something entirely new that they have never heard of, more often than not they display a deep gut-reaction of dismissal and come up with ridiculous on-the-spot rationalizations for why that new position cannot possibly be the case... and I have the impression that one of the strongest determining factors in what their gut reaction roots for is what other people in their tribe think.

I know Eliezer's post is older, but I wonder if he probed the possibility that this clickiness may be predominantly a feature of people who simply have a general tendency or willingness to be contrarian.

Comment author: neuromancer92 14 February 2012 06:07:16AM 0 points

This suggestion is certainly an interesting one - that clicks happen in places where pre-existing ideas are weak, and "clicky" people have fewer strongly-entrenched concepts.

I think the explanation goes somewhat beyond this however, based on a personal observation that "clicks" seem to preferentially arise for ideas which are, to the best of our understanding, "right". I know people with very low thresholds of belief, and clicky people, and it seems to me that the correlation between the two is negative if it exists. Credulous people can't click onto an idea because it doesn't seem more right to them than any other - every point is neutral, so new ideas are simply accepted.

Clicky people, by contrast, can click in the positive or negative. Just as intelligence explosion can make "intrinsic" sense to someone, counterarguments to it are likely to throw a mental flag even before they find a clear source for the objection. The click seems to go beyond acceptance to rapid understanding and evaluation.

Comment author: neuromancer92 14 February 2012 05:52:06AM 0 points

For me, the discovery that science is too slow was bound up with the realization that science is not safe. My private discovery of the slowness of science didn't come from looking at the process of scientific discovery and reflecting on the time it took - rather, it arose from realizing that the things I learned or discovered via science came slower and more painfully than those I learned from other methods. "Other methods" encompasses everything from pure mathematics to That Magical Click, the first inescapable and the second, initially, unsupported. Realizing that science was a fairly low-quality set of tools carried with it the realization that its inefficiency was a function of its precautions. Not trusting science as the ideal method for discovery, I ceased to trust it as ideal for reliability.

New to this site, Bayescraft, and rationalism as a whole, I still have a mentor left to distrust. Consciously, I know that these techniques are imperfect, but I have yet to understand them well enough to be failed by them.