Less Wrong is a community blog devoted to refining the art of human rationality.

But Somebody Would Have Noticed

36 Post author: Alicorn 04 May 2010 06:56PM

When you hear a hypothesis that is completely new to you, and seems important enough that you want to dismiss it with "but somebody would have noticed!", beware this temptation.  If you're hearing it, somebody noticed.

Disclaimer: I do not believe in anything I would expect anyone here to call a "conspiracy theory" or similar.  I am not trying to "soften you up" for a future surprise with this post.

1. Wednesday

Suppose: Wednesday gets to be about eighteen, and goes on a trip to visit her Auntie Alicorn, who has hitherto refrained from bringing up religion around her out of respect for her parents1.  During the visit, Sunday rolls around, and Wednesday observes that Alicorn is (a) wearing pants, not a skirt or a dress - unsuitable church attire! and (b) does not appear to be making any move to go to church at all, while (c) not being sick or otherwise having a very good excuse to skip church.  Wednesday inquires as to why this is so, fearing she'll find that beloved Auntie has been excommunicated or something (gasp!  horror!).

Auntie Alicorn says, "Well, I never told you this because your parents asked me not to when you were a child, but I suppose now it's time you knew.  I'm an atheist, and I don't believe God exists, so I don't generally go to church."

And Wednesday says, "Don't be silly.  If God didn't exist, don't you think somebody would have noticed?"

2. Ignoring Soothsayers

Wednesday's environment reinforces the idea that God exists relentlessly.  Everyone she commonly associates with believes it; people who don't, and insist on telling her, are quickly shepherded out of her life.  Because Wednesday is not the protagonist of a fantasy novel, people who are laughed out of public discourse for shouting unpopular, outlandish, silly ideas rarely turn out to have plot significance later: it simply doesn't matter what that weirdo was yelling, because it was wrong and everybody knows it.  It was only one person.  More than one person would have noticed if something that weird were true.  Or maybe it was only six or twelve people.  At any rate, it wasn't enough.  How many would be enough?  Well, uh, more than that.

But even if you airdropped Wednesday into an entire convention center full of atheists, you would find that you cannot outnumber her home team.  We have lots of mechanisms for discounting collections of outgroup-people who believe weird things; they're "cultists" or "conspiracy theorists" or maybe just pulling a really overdone joke.  There is nothing you can do that makes "God doesn't exist, and virtually everyone I care about is terribly, terribly wrong about something of immense importance" sound like a less weird hypothesis than "these people are silly!  Don't they realize that if God didn't exist, somebody would have noticed?"

To Wednesday, even Auntie Alicorn is not "somebody".  "Somebody" is "somebody from whom I am already accustomed to learning deep and surprising facts about the world".  Maybe not even them.

3. Standing By

Suppose: It's 1964 and you live in Kew Gardens, Queens.  You've just gotten back from a nice vacation, and you find you forgot to stop the newspapers.  One of them has a weird headline.  While you were gone, a woman was stabbed to death in plain view of several of your neighbors.  The paper says thirty-eight people saw it happen and not a one called the police.  "But that's weird," you mutter to yourself.  "Wouldn't someone have done something?"  In this case, you'd have been right; the paper that covered Kitty Genovese exaggerated the extent to which unhelpful neighbors contributed to her death.  Someone did do something.  But what they didn't do was successfully get law enforcement on the scene in time to save her.  Moving people to action is hard.  Some have the talent for it, which is why things like protests and grassroots movements happen; but the leaders of those types of things self-select for skill at inspiring others to action.  You don't hear about the ones who try it and don't have the necessary mojo.  Cops are supposed to be easier to move to action than ordinary folks; but if you sound like you might be wasting their time, or if the way you describe the crime doesn't make it sound like an emergency, they might not turn up for a while.

Events that need someone to act on them do not select for such people.  Witnesses to crimes, collectors of useful evidence, holders of interesting little-known knowledge - these are not necessarily the people who have the power to get your attention, and having eyewitness status or handy data or mysterious secrets doesn't give them that power by itself.  If that guy who thinks he was abducted by aliens really had been abducted by aliens, would enough about him be different that you'd sit still and listen to his story?

And many people even know this.  It's the entire premise of the "Bill Murray story", in which Bill Murray does something outlandish and then says to his witness-slash-victim, "No one will ever believe you."  And no one ever will.  Bill Murray could do any fool thing he wanted to you, now that this meme exists, and no one would ever believe you.

4. What Are You Going To Do About It?

If something huge and unbelievable happened to you - you're abducted by aliens, you witness a key bit of a huge crime, you find a cryptozoological creature - and you weren't really good at getting attention or collecting allies, what would you do about it?  If there are fellow witnesses, and they all think it's unbelievable too, you can't organize a coalition to tell a consistent tale - no one will throw in with you.  It'll make them look like conspiracy theorists.  If there aren't fellow witnesses, you're in even worse shape, because then even by accumulating sympathetic ears you can't prove to others that they should come forward with their perspectives on the event.  If you try to tell people anyway, whatever interest from others you start with will gradually drain away as you stick to your story: "Yeah, yeah, the first time you told me this it was funny, but it's getting really old, why don't we play cards or something instead?"  And later, if you keep going: "I told you to shut up.  Look, either you're taking this joke way too far or you are literally insane.  How am I supposed to believe anything you say now?"

If you push it, your friends think you're a liar, strangers on the street think you're a nutcase, the Internet thinks you're a troll, and you think you're never going to get anyone to talk to you like a person until you pretend you were only fooling, you made it up, it didn't happen...  If you have physical evidence, you still need to get people to look at it and let you explain it.  If you have fellow witnesses to back you up, you still need to get people to let you introduce them.  And if you get your entire explanation out, someone will still say:

"But somebody would have noticed."

 

1They-who-will-be-Wednesday's-parents have made no such demand, although it seems possible that they will upon Wednesday actually coming to exist (she still doesn't).  I am undecided about how to react to it if they do.

Comments (250)

Comment author: Tehom 05 May 2010 01:50:03AM 27 points [-]

"Somebody would have noticed" is shorthand for a certain argument. Like most shorthand arguments, it can be used well or badly. Using a shorthand argument badly is what we mean by a "fallacy".

A shorthand argument is used well, in my opinion, just if you could expand it to the longhand form and it would still work. That's not a requirement to always do the full expansion. You don't have to expand it each time, nor have 100% confidence of success, nor expand the whole thing if it's long or boring. But expanding it has to be a real option.

Critical questions that arise in expanding this particular argument:

  • What constitutes noticing?

    • Would other people who noticed understand what they saw?
    • Further, would they understand it the same way that we do?
      • How much potential is there for their understanding of the same phenomenon to be quite different from ours?
    • Further, if their understanding is similar to ours, would they express it in terms that we would recognize?
      • This could include actions that we recognize as relating to the phenomenon.
  • Would we know that they noticed?

    • Motivations: Would people who noticed have strong motivations for letting us know or for not letting others know?
      • Would they want others to see that they noticed?
      • Would they want others to see the phenomenon they noticed?
      • Would they want to do something about it that someone could easily see?
    • Ability:
      • If they did want others to know, could they easily show it?
      • Conversely, if they didn't, could they easily hide it?
    • Who witnesses it:
      • Would they want us in particular to see it (or not see it), as opposed to a select group? For instance, they might write a report about it that you and I probably wouldn't see.
      • If they revealed it to others but not directly to us, what's the likelihood that the information would make its way to us?
  • The suppressed premise in that enthymeme is that "Nobody noticed". Since we didn't ask everyone in the world, how did we determine that?

    • What is the population that would have noticed?
    • What sample size did we take?
    • How representative was our sampling?
    • Assuming we have reasonable answers to the above, what level of confidence can we place on our sampling?
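That last question about sampling confidence has a standard back-of-the-envelope answer, the "rule of three". A minimal sketch (an illustration of the idea, not anything from the comment above), assuming each person independently noticed with some fixed probability:

```python
# If we ask n people and none of them noticed, how large could the true
# fraction of "noticers" still plausibly be?  If each person independently
# noticed with probability p, the chance that all n sampled people missed it
# is (1 - p)^n.  Solving (1 - p)^n = alpha gives the largest p still
# consistent with our sample at confidence 1 - alpha -- roughly 3/n for
# alpha = 0.05, hence the "rule of three".

def max_plausible_fraction(n_sampled: int, alpha: float = 0.05) -> float:
    """Upper bound on the noticing rate after seeing 0 noticers in n_sampled people."""
    return 1 - alpha ** (1 / n_sampled)

for n in (10, 100, 1000):
    print(n, round(max_plausible_fraction(n), 4))
```

So asking ten people and hearing nothing still leaves room for a quarter of the population to have noticed; even a hundred people leaves room for a few percent.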
Comment author: GreenRoot 07 May 2010 06:39:23PM 1 point [-]

I think this point about shorthand arguments and their expansion on demand is very helpful. I'd love to see a top-level post on it, with one or two additional examples.

Comment author: cousin_it 06 May 2010 01:09:12AM *  1 point [-]

The first two paragraphs of your comment made something click for me. Thanks.

Comment author: alexflint 05 May 2010 08:12:13AM 20 points [-]

While I don't think that "someone would have noticed" is always a fallacy, I do think that we humans tend to underestimate the chance of some obvious fact going unnoticed by a large group for a prolonged period.

At a computer vision conference last year, the best paper award went to some researchers who discovered an astonishing yet simple statistic of natural images, which surprised me at first because I thought all the simple, low-level, easily accessible discoveries in computer vision had long since been made.

A different example: one of the most successful techniques in computer vision of the past decade has been graph cuts, where you formulate an optimization problem as a max flow problem in a graph. The first paper on graph cuts was published in 1991 iirc, but it was ignored and it wasn't until 2000 that people went back to it, whereupon several of the field's key problems were immediately solved!

Comment author: soreff 05 May 2010 05:21:26PM *  8 points [-]

Agreed - consider C60. Would anyone in 1980 have believed that there was an unrecognized allotrope of carbon, stable at room temperature and pressure? To phrase it another way: The whole field of organic chemistry had been active for about a century at that point, and had not noticed another structure for their core element in all that time.

Comment author: RichardKennaway 25 July 2010 09:13:48PM 3 points [-]

Would anyone in 1980 have believed that there was an unrecognized allotrope of carbon, stable at room temperature and pressure?

Yes, in 1966 and 1970.

Comment author: daedalus2u 25 July 2010 08:56:13PM 2 points [-]

I happen to work with someone who was working on his PhD thesis at MIT and found this gigantic peak in his mass spec where C-60 was, but didn't pursue it because he didn't have time.

Comment author: Vladimir_Golovin 05 May 2010 08:19:53AM 8 points [-]

an astonishing yet simple statistic of natural images

Could you post a link to the paper?

Comment author: alexflint 05 May 2010 03:08:54PM 10 points [-]
Comment author: Dan_Moore 05 May 2010 02:57:26PM 7 points [-]

I agree, with respect to (e.g.) math. People reason that "someone would have noticed" implies that there is no undiscovered low-hanging fruit in math.

My skepticism of this conclusion is based on my perception of how mathematicians work. They are fairly autonomous, working on the things that interest them. What is interesting to mathematicians tends to be the large problems. They swing for the fences, seeking home runs rather than singles.

Plus, there are unfashionable areas of math. A consensus that certain areas of math have been fully explored (nothing new remains) has developed, but not in a systematic way. So, it's not clear whether this consensus is accurate, because politics (for lack of a better term) were involved in its formation.

It's only reasonable to be confident that 'someone would have noticed' if someone knowledgeable about what they are looking at actually looks in that direction.

Comment author: Jack 04 May 2010 09:36:52PM *  18 points [-]

If someone says "The sky has been purple for the past three years" the right response is "I think someone would have noticed". There are however reasonable responses to this. For example, "No one noticed because we're all brains in vats! And I have proof! Look here."

Similarly, I think Wednesday is right to say "Someone would have noticed that God didn't exist."  It's just that in this case Aunt Alicorn has a really good response: "Lots of very smart people have noticed, you just haven't met any since you've spent your whole life around people who chose to believe in God or never knew any other option. We've tried to tell your people this but you all get pretty upset when we try. Here is our evidence, x, y, z."

Obviously if you keep repeating "Someone would have noticed." after the dissenter has shown that indeed, people have noticed and that there is good reason for why more people haven't noticed then you're doing it wrong.

Comment author: JGWeissman 04 May 2010 10:04:16PM 7 points [-]

If someone says "The sky has been purple for the past three years" the right response is "I think someone would have noticed".

If someone says "The sky has been purple for the past three years", my response would be "I think I would have noticed."

Comment author: Cyan 05 May 2010 12:31:44AM *  7 points [-]

Oddly, the sky actually is purple in a certain sense. All of the physics that explains how the blue wavelengths of sunlight are scattered more strongly than colors at the red end of the visible spectrum (resulting in a blue sky) goes even more for violet wavelengths. It's just that our eyes are more sensitive to blue wavelengths than to violet ones.
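The physics behind this is Rayleigh scattering, whose intensity scales as 1/wavelength^4. A quick numeric check (the wavelengths are illustrative round numbers, not precise values):

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so shorter
# (violet) wavelengths scatter more strongly than blue ones.

def rayleigh_ratio(lambda_a_nm: float, lambda_b_nm: float) -> float:
    """How much more strongly light at lambda_a scatters than light at lambda_b."""
    return (lambda_b_nm / lambda_a_nm) ** 4

# Violet (~400 nm) vs. blue (~450 nm): about 1.6, i.e. violet scatters
# roughly 60% more strongly.
print(rayleigh_ratio(400, 450))
```

The sky reads as blue anyway because, as the comment says, our eyes are much less sensitive at violet wavelengths (and some violet is absorbed higher in the atmosphere).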

Comment author: [deleted] 12 May 2012 01:39:40PM 1 point [-]

That's not what the English word purple means. *rolls eyes*

Comment author: RobinZ 04 May 2010 09:42:03PM 1 point [-]

Are you speaking from experience? I wouldn't have expected that tack to work most of the time.

Comment author: Jack 04 May 2010 09:44:34PM 1 point [-]

Well of course it doesn't work. People are irrational. :-)

Comment author: RobinZ 04 May 2010 09:48:57PM 0 points [-]

I mistook your comment as advice for how to avoid being ignored, then.

Comment author: Jack 04 May 2010 09:55:21PM 1 point [-]

I just meant that there are sound, rational reasons for the initial reply to an extravagant claim being "someone would have noticed".

When it comes to trying to deconvert someone my experience is that the chance of an on the spot concession is 0. If your arguments are good they'll sink in later and leave a small crack in the wall.

Comment author: jhuffman 05 May 2010 09:22:51PM *  1 point [-]

I've never intentionally converted anyone to being an atheist but I did unintentionally help convert the woman who later became my wife. We never talked much about it and I never said anything that really hit home with her all of a sudden. It was more the fact that she spent enough time with me to realize someone could be an atheist and be completely "ok" - I don't know if that possibility had even occurred to her before. Once it had, some nascent doubts sprang back up and she had no compelling reason to bat them down.

I wish I could be more specific but I really didn't pay attention to it. I care about people's (even my family's) religious beliefs or lack thereof about as much as I care about which sports franchises they are fans of - that is not at all.

Comment author: JoshuaZ 04 May 2010 09:26:15PM 18 points [-]

Let me offer a real life example where a version of this heuristic seems valid: Fermat claimed to have a proof of what is now called Fermat's Last Theorem (that the equation x^n + y^n =z^n has no solutions in positive integers with n>2). This was finally proven in the mid 90s by Andrew Wiles using very sophisticated techniques. Now, in the 150 or so year period where this problem was a famous unsolved problem, many people, both professional mathematicians and amateurs tried to find a proof. There are still amateurs trying to find a proof that is simpler than Wiles, and ideally find a proof that could have been constructed by Fermat given the techniques he had access to. There's probably no theorem that has had more erroneous proofs presented for it, and likely no other theorem that has had more cranks insist they have a proof even when the flaws are pointed out (cranks are like that). If some new individual shows up saying they have a simple, elementary proof of Fermat's Last Theorem, it is reasonable to assign this claim a very low confidence because someone would have noticed it by now. Since so many people (many of whom are very smart) have been expressly looking for such a proof for a very long time, we can be pretty sure that if such a simple proof existed it would have been found by now.

The "somebody would have noticed'" heuristic thus functions like many other heuristics. In some cases the heuristic will fail. And the heuristic will likely fail more frequently in situations like Wednesday where the individual is either ignorant or surrounded by people who make basic mistakes in rationality. But properly used, the heuristic can still be useful and reliable.
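The equation itself is easy to play with. A small illustrative search (mine, not the commenter's) finds plenty of solutions for n = 2 and, as Wiles proved holds in general, none for n > 2:

```python
# Search for (x, y, z) with x^n + y^n == z^n and all values at most `limit`.
# For n = 2 these are the familiar Pythagorean triples; for n > 2 Fermat's
# Last Theorem says the list is always empty.

def solutions(n: int, limit: int):
    """All (x, y, z) with x <= y, x^n + y^n == z^n, and x, y, z <= limit."""
    powers = {z ** n: z for z in range(1, limit + 1)}
    out = []
    for x in range(1, limit + 1):
        for y in range(x, limit + 1):
            z = powers.get(x ** n + y ** n)
            if z:
                out.append((x, y, z))
    return out

print(solutions(2, 20))   # e.g. (3, 4, 5), (6, 8, 10), (5, 12, 13), ...
print(solutions(3, 200))  # []
```

Of course, no finite search proves the theorem; that's exactly why an elementary proof would be so remarkable.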

Comment author: LordTC 05 May 2010 07:32:52PM 1 point [-]

Except this is an attitude that discourages people from working on a lot of problems, and occasionally it's proven wrong.

You could often hear computer scientists being sloppy about the whole Prime Factorization is NP-hard argument with statements like "If NP is not equal to P one can't determine if a number is prime or not in polynomial time." And stuff like this is probably one of the more famous examples of things people are discouraged from working on based on "Somebody would have noticed by now".

Guess what, this was shown to be doable, and it shocked people when it came out.

Comment author: JoshuaZ 05 May 2010 07:45:44PM *  9 points [-]

A few problems with that. First of all, anyone actually paying attention enough to think about the problem of determining primality in polynomial time thought that it was doable. Before Agrawal's work, there were multiple algorithms believed but not proven to run in polynomial time. Both the elliptic curve method and the deterministic Miller-Rabin test were thought to run in polynomial time (and the second can be shown to run in polynomial time assuming some widely believed properties about the zeros of certain L-functions). What was shocking was how simple Agrawal et al.'s algorithm was. But even then, far fewer people were working on this problem than people who worked on proving FLT. And although Agrawal's algorithm was comparatively simple, the proof that it ran in P-time required deep results.

Second, even factoring is not believed to be NP-hard. More likely, factoring lies in NP but is not NP-hard. Factoring being NP-hard with P != NP would lead to strange stuff including partial collapse of the complexity hierarchy (Edit: to be more explicit, it would imply that NP = co-NP. The claim that P != NP but NP = co-NP would be extremely weird.) I'm not aware of any computer scientist who would be so sloppy as to make the statements you assert are often heard.

Overall, Agrawal doesn't compare well to the use of the heuristic here because Agrawal's method (a generalized version of Euler's congruence for polynomials) was an original method.

That said, I agree that such a heuristic can lead people seriously astray if it is applied too often. As with any heuristic it can be wrong. Using any single heuristic by itself is rarely a good approach.
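For concreteness, a minimal sketch of the Miller-Rabin test mentioned above (this is the probabilistic version; the deterministic variant referred to fixes the witness bases instead of choosing them at random):

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin primality test (probabilistic version)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Each round either proves n composite or passes; a composite n passes a random round with probability at most 1/4, so twenty rounds leave a negligible error rate. The AKS result was surprising not because this was hard to run fast in practice, but because proving a deterministic polynomial bound unconditionally took until 2002.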

Comment author: LordTC 05 May 2010 08:07:21PM 3 points [-]

Agree, my previous post was very sloppy.

Often was a stretch and much of the factual information is a little off.

I guess my experience taking lower level complexity courses with people who don't do theory means what I often hear are statements by people who consider themselves computer scientists that you think no computer scientist would make.

I upvoted your post because I'm glad for the correction and read up about the problem after you made it.

Comment author: timtyler 04 May 2010 07:41:54PM 14 points [-]

IMO, "Somebody would have noticed!" is a pretty good heuristic - and if anything it takes a considerable amount of training before most people make sufficient use of it.

I think the reason is the natural "bias" towards self importance and egoism.

Comment author: mistercow 05 May 2010 07:36:02PM *  11 points [-]

This raises a good point, but there are circumstances where the "someone would have noticed" argument is useful. Specifically, if the hypothesis is readily testable, if its consequences, were it true, would be difficult to ignore, and if the hypothesis is, in fact, regularly tested by many of the same people who have told you that the hypothesis is false, then "somebody would have noticed" is reasonable evidence.

For example, "there is no God who reliably answers prayers" is a testable hypothesis, but it is easy for the religious to ignore the fact that it is true by a variety of rationalizations.

On the other hand, I heard a while back of a man who, after trying to teach himself physics, became convinced that "e = mc²" was wrong, and that the correct formula was in fact "e = mc". In this case, physicists who regularly use this formula would constantly be running into problems they could not ignore. If nothing else, they'd always be getting the wrong units from their calculations. It's unreasonable to think that if this hypothesis were true, scientists would have just waved their hands at it, and yet we'd still have working nuclear reactors.

Comment author: simplicio 07 May 2010 03:47:17AM 5 points [-]

That guy needed to be taught basic dimensional analysis, apparently. E=mc has units of kg-m/s, which is the unit of momentum, not energy.
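The unit check can even be mechanized. A toy illustration (my own, not part of the thread): track SI base-unit exponents as (kg, m, s) tuples and multiply quantities by adding exponents:

```python
def mul(u, v):
    """Multiply two units, each given as a (kg, m, s) exponent tuple."""
    return tuple(a + b for a, b in zip(u, v))

MASS = (1, 0, 0)        # kg
VELOCITY = (0, 1, -1)   # m/s
ENERGY = (1, 2, -2)     # joule = kg*m^2/s^2
MOMENTUM = (1, 1, -1)   # kg*m/s

# E = m c^2 has the units of energy...
assert mul(MASS, mul(VELOCITY, VELOCITY)) == ENERGY
# ...but m c has the units of momentum, as noted above.
assert mul(MASS, VELOCITY) == MOMENTUM
```

Mismatched base-unit exponents are exactly the kind of error no amount of hand-waving can paper over, which is why dimensional analysis works as a sanity check.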

Comment author: JoshuaZ 07 May 2010 03:59:23AM *  7 points [-]

If someone has this sort of thought in their head there are likely serious fundamental misunderstandings. They probably won't be solved simply by trying to explain dimensional analysis.

Comment author: cupholder 07 May 2010 04:49:12AM 4 points [-]

Upvoted for insightful prediction confirmed by evidence!

Comment author: mistercow 07 May 2010 04:06:07AM *  5 points [-]

I think it was on This American Life that I heard the guy's story. They even contacted a physicist to look at his "theory", who tried to explain to him that the units didn't work out. The guy's response was "OK, but besides that …"

He really seemed to think that this was just a minor nitpick that scientists were using as an excuse to dismiss him.

Comment author: dlthomas 16 May 2012 10:32:45PM *  1 point [-]

Why isn't it a minor nitpick? I mean, we use dimensioned constants in other areas; why, in principle, couldn't the equation be E=mc * (1 m/s)? If that was the only objection, and the theory made better predictions (which, obviously, it didn't, but bear with me), then I don't see any reason not to adopt it. Given that, I'm not sure why it should be a significant objection.

Edited to add: Although I suppose that would privilege the meter and second (actually, the ratio between them) in a universal law, which would be very surprising. Just saying that there are trivial ways you can make the units check out, without tossing out the theory. Likewise, of course, the fact that the units do check out shouldn't be taken too strongly in a theory's favor. Not that anyone here hadn't seen the XKCD, but I still need to link it, lest I lose my nerd license.

Comment author: mistercow 19 May 2012 07:55:45AM *  0 points [-]

The whole point of dimensional analysis as a method of error checking is that fudging the units doesn't work. If you have to use an arbitrary constant with no justification besides "making the units check out", then that is a very bad sign.

If I say "you can measure speed by dividing force by area", and you point out that that gives you a unit of pressure rather than speed, then I can't just accuse you of nitpicking and say "well obviously you have to multiply by a constant of 1 m²s/kg". You wouldn't have to tell me why that operation isn't allowed. I would have to explain why it's justified.

Comment author: RolfAndreassen 04 May 2010 10:07:10PM 8 points [-]

People are asking for examples of the "Someone would have noticed" effect; I can't offhand supply one, but I myself dismiss most conspiracy theories with the related "Someone would have blabbed". If the Moon landings were a hoax, sheesh, you'd expect someone to have blown the whistle by now - someone, that is, who actually worked at NASA. But it may not be a good example, because that seems to me like a reasonable heuristic. :)

Comment author: Alicorn 04 May 2010 10:08:42PM 8 points [-]

If someone told you that they worked at NASA during the moonshot, and that the whole thing was a fake, would you believe them?

Comment author: Jack 04 May 2010 11:10:47PM *  4 points [-]

Not right away. I'd want explanations for why they had never come forward before, explanations for why no one else had come forward. Other witnesses who would confirm their story or a good explanation of why such witnesses don't exist. I'd like an MRI to confirm they're describing events from memory. I'd like documents confirming the story. Some combination of these things could raise my probability estimation to belief-level. Frankly, it's such a complex conspiracy that a detailed account of how exactly it went down would put it on my radar.

Comment author: Nanani 07 May 2010 02:48:01AM 1 point [-]

Extraordinary claims require extraordinary evidence.

If they had it, yes. Not otherwise. This evidence would have to cover both the immediate claim (that they were working at NASA at that time) and the larger one (that the moon landing was faked). Evidence explaining why no one else ever came forward would be appreciated but not required if the other two things are present.

Comment author: byrnema 05 May 2010 02:54:15AM 6 points [-]

My heuristic, similar to "someone would know", is "I would know ... if reality was like that." Conspiracy theories seem to universally assume the super-organization of this amorphous blob of "other" people. Believing in a conspiracy theory depends upon considering it plausible that many people have different information than you and they conspire to keep it from you -- that you're an information outsider.

It's most obvious when conspiracy theorists claim things about academia, because I know about academia. But even when things are claimed about the government, I feel like I have a good idea as to what level of lateral organization is possible.

Wednesday in the story, on the other hand, does have a relatively sheltered life, and may soon gather enough evidence to consider herself an outsider on how things work. Once she realizes this, she'll have to be open-minded for a while on how things work till she sorts out a more reliable worldview.

Comment author: nerzhin 05 May 2010 04:19:29PM 7 points [-]

It's most obvious when conspiracy theorists claim things about academia, because I know about academia.

This sounds like Wednesday:

"It's most obvious when conspiracy theorists claim things about the LDS church, because I know about the LDS church. Specifically, I know that it is full of loving, caring, thoughtful and intelligent people. If there was a conspiracy, not only would someone know, I would know."

Wednesday in the story, on the other hand, does have a relatively sheltered life

How do we measure sheltered-ness? How can I be confident that my life is less sheltered than Wednesday's, and seek to correct for that?

Comment author: byrnema 05 May 2010 07:49:41PM *  8 points [-]

I'm posting this second comment on gathering "insider information" separately.

How do we measure sheltered-ness? How can I be confident that my life is less sheltered than Wednesday's, and seek to correct for that?

There's this (great) movie called The 13th Floor where the main character gathers some weak evidence that he might be in a simulation. This is what happens: Va beqre gb grfg jurgure ur vf va n fvzhyngvba, gur znva punenpgre qrpvqrf gb qevir uvf pne vagb gur ubevmba. Ur yrneaf gung whfg orlbaq gur ubevmba, gur ynaqfpncr ybbcf sbe n juvyr naq gura rirelguvat vf oynax naq rzcgl.(rot13). So if you want to know something for sure, you test it.

Of course, to some extent, you need to consider the cost of the test. I realized while writing this comment that many of my actions and decisions throughout my life can be explained by the hypothesis of always seeking insider information at almost any cost -- it seems to be my personal modus operandi. I've always felt driven to do mini-experiments to test what is "real" and reliable, and where I'm allowed to go or if there are some places where I'm excluded. It certainly explains some erratic behavior in my life:

  • I took every job I could get access to, and fully committed to working there. I wanted to know the "inside story" of every workplace.

  • I interacted with lots of different people and my main motivation usually was to understand their world view. I'm embarrassed about some of the means I used towards this end -- on the one hand, I wasn't always honest in soliciting information, and also I spent a lot of time and energy doing this, as though there was nothing better I could be doing with my time.

  • I joined the Peace Corps to see what it was really like in a third world country and -- to some extent -- to see how things were organized in a government organization.

  • And finally, I spent so much time on Less Wrong even though I was a theist so I could fully understand the atheist worldview.

  • Reading a lot is the last obvious example. You can learn a lot from books, especially if the material you're learning about was unintentionally related. For example, I feel like I learned some reliable information about what it was like to be a working woman in the 1900s by reading male authors who just happened to include a few boring, mundane details about what a secretary was doing in their story.

Everything gets weighted with a network of probabilities. But over time, this grows into a worldview you have a certain amount of confidence in. I'm not certain that I'm not an alien intelligence exploring what it would feel like if the universe was material and causal, but to the extent to which I assume face-value reality, I feel confident that I'm continually testing my understanding of it.

Comment author: NancyLebovitz 05 May 2010 08:14:58PM 3 points [-]

Have you found out things that you don't think most people know?

Comment author: byrnema 05 May 2010 09:21:30PM *  3 points [-]

That's a really fascinating question! That's what I'm always trying to find out from other people... (So if anyone else knows something, please chime in!)

But no, I just keep finding that the world is well-integrated, and information flows as well as it seems to, and no one seems to know anything special.

The past couple years, my focus has shifted from testing things to seeking "wisdom", and I've all but given up. I happen to have William B. Irvine's On Desire on my desk, and he writes in the last chapter that if I'm looking for a 'cosmically significant meaning', he doesn't think it's forthcoming. I guess I'm hoping that some quantity of information will make up for the lack of a different quality of it.

Comment author: byrnema 05 May 2010 07:30:45PM *  1 point [-]

This sounds like Wednesday:

I suppose Wednesday would know about the LDS church. If she's not an insider there, who would be? It's possible there are nested levels of knowledge of things, but if Wednesday's life is well-integrated with the church culture, there would have been clues if she was being excluded from some levels. (Policed? A guarded moment among her parents. Only males? An unusual reaction to a brother's outburst. Only adults? Comments like 'you'll understand when you're older'.) Wednesday might consider that she's an outsider even in her own church, but it’s much more likely that something she didn’t know is true about a small subset (the elder men in the church) than about things she fully participated in, Truman-Show-style.

Comment author: thomblake 05 May 2010 07:33:49PM 2 points [-]

It does take a while before you get told about the eternally-pregnant fertility goddess you'll become in the afterlife.

Comment author: Baughn 06 May 2010 05:09:44PM 2 points [-]

Say what?

Hold on. There's too much information about LDS around, and I'm having trouble narrowing down their beliefs to confirm or deny your statement.

Off-hand, I'd assume it's a joke, but I've seen weirder things in religion. Could you clarify?

Comment author: thomblake 06 May 2010 05:40:17PM 6 points [-]

Not a joke, exactly, but a caricature. To paint it in broad brushstrokes that LDS would surely quibble over, the Mormons believe that good enough humans can become gods, that spirits have genders as well and marriage continues into the afterlife, and that human couples that become gods can go on to populate their own worlds with their spirit children.

Also, the angels are spirit-children of God too (like humans) and some humans were also, or will become, angels. Adam, for instance, was also the archangel Michael.

Comment author: JoshuaZ 06 May 2010 06:12:42PM 0 points [-]

The belief in people becoming angels is not unique to Mormonism. For example, some Jewish kabbalists claimed that the archangel Metatron was Enoch.

Comment author: orthonormal 16 May 2012 09:38:45PM 3 points [-]

If the Moon landings had been a hoax, it's hard to see how they could have fooled the USSR (which presumably had telescopes good enough to see the actual site), nor why the Soviets would have played along. In general, a conspiracy theory has to hypothesize that everyone who'd be capable of noticing is in on the conspiracy, which gets pretty silly pretty quickly for the bigger ones.

Comment author: NancyLebovitz 04 May 2010 09:06:25PM *  7 points [-]

Aside from outrageousness, another piece of "somebody would have noticed" is the cost of noticing. It would be expensive for Wednesday to become an atheist. It would be more expensive to try to deal with the consequences if the US government turns out to be behind 9/11.

Any thoughts about how to get heard if you're saying something superficially unlikely?

Comment author: Jack 04 May 2010 09:24:38PM *  3 points [-]

In most cases such claims imply different expectations about the future. For example, if I am certain I saw Bigfoot, I probably assign a higher probability to the discovery of physical evidence that would confirm its existence than you do. 9/11 truthers should be proposing wagers on the discovery of robust evidence, etc.

You'd probably need some neutral arbiter to adjudicate but that should be relatively easy. Making a wager will convince most people you aren't joking or lying. They might still think you're crazy... but if you aren't you'll make some nice money. Also, this makes the other person internalize the cost of not noticing.

Of course, if everyone thinks you're crazy then all else being equal you probably are crazy. You have to have really good evidence before you can conclude that it's everyone else who is out of their minds (which the contemporary atheist has done).

Comment author: mattnewport 04 May 2010 09:26:34PM *  6 points [-]

Aside from outrageousness, another piece of "somebody would have noticed" is the cost of noticing.

I tend to apply a slightly different metric of 'how could I benefit if this were true and I believed it'. One reason I don't put much effort into investigating 9/11 conspiracy theories is that I can't see an obvious way to profit from knowing the truth. Other unlikely claims have more immediately obvious personal utility attached to holding / acting on them (if they are true) despite their lack of widespread acceptance.

Comment author: Nisan 05 May 2010 03:41:20PM *  1 point [-]

I can't speak for you because I don't know what your values are, but if I knew that the U.S. government was secretly mass-murdering its citizens, I would decide that the best thing I could do would be to reform or overthrow that government. I'm sure if I thought for five minutes I could come up with a way to do this. If there is a 9/11 conspiracy then I really want to know that there is a 9/11 conspiracy.

A better reason for making the 9/11 conspiracy theory harder to notice would involve its sheer implausibility.

Comment author: Unknowns 07 May 2010 04:32:47PM 4 points [-]

"Somebody would have noticed" if there were a way to reform or overthrow the US government that you could come up with after five minutes of thinking about it. If there were, someone would have not only thought of it, but done it too.

Comment author: Nisan 07 May 2010 10:19:13PM 1 point [-]

You're right. Someone would have done something.

Comment author: mattnewport 05 May 2010 04:28:28PM *  4 points [-]

I'm not a US citizen and I don't live in the US. I might feel differently if I did. Thinking the best thing to do is to reform or overthrow the government and actually having any reasonable possibility of achieving that goal through your individual actions are rather different things however. I prefer to prioritize establishing the truth of beliefs where there are things I can do as an individual that have high expected value if the belief is true and low expected value if the belief is false.

I would decide that the best thing I could do would be to reform or overthrow that government. I'm sure if I thought for five minutes I could come up with a way to do this.

That's a joke right?

Comment author: thomblake 05 May 2010 04:43:39PM 4 points [-]

Ah. It's probably worth noting that US citizens are taught from a very young age that the revolutionaries are to be admired, and that our constitution says that we're in charge of the country and we have the right to replace the government entirely if we need to. Also that the government can't have a monopoly on firearms.

The rhetoric and the means are not hard to come by, and the movement would not be hard to start if the government were really mass-murdering its citizens.

"God forbid we go 10 years without a revolution!" - Sam Adams (a brewer and a patriot)

Comment author: mattnewport 05 May 2010 05:46:33PM *  1 point [-]

I'm aware of that and it's a feature of American democracy that I think is admirable but I think we're talking about slightly different questions. This ties back into the 'but somebody would have noticed' problem again. The fact that a small but passionate minority has been trying for years to convince everyone else that 9/11 is a conspiracy suggests that the currently available information isn't sufficient to convince the broader public of their theories. In the absence of some game-changing new evidence there is little reason to suppose that I would be more convincing than the existing truthers. If I studied the evidence and became convinced the truthers were right there is no particular reason to suppose I would have any better luck convincing the rest of the population than they have. Overthrowing the government is possible with sufficient popular support but currently it appears that that support could only be obtained with dramatic new evidence.

I'm saying I prioritize things which I can take meaningful individual action over. Some contrarian truths can be useful to believe without needing to convince a majority or even significant minority of the population of them. In fact, some contrarian truths are most profitable when few other people believe them.

Comment author: Nisan 05 May 2010 06:28:53PM 2 points [-]

Nope, no joke. I just brainstormed for five minutes and came up with nine things I could do towards the goal of reforming or overthrowing the government in a 9/11 conspiracy scenario, and I believe that there would be a decent chance of success. Now almost none of those are things I could do by myself. I'd need to leverage my communication and leadership skills to find many like-minded activists to cooperate with. Does your idea of "individual actions" exclude such cooperation?

Regardless of what one's values are, one should be wary of undervaluing epistemic rationality simply because some problems seem too hard to solve. It's just too easy to throw up your hands and say, "There's nothing worthwhile I can do to solve this problem" if you haven't tried to find out if the problem actually exists.

Comment author: mattnewport 05 May 2010 06:58:56PM 3 points [-]

Does your idea of "individual actions" exclude such cooperation?

Changing people's minds is hard. If your plans involve convincing other people to believe the same things as you then you face a difficult problem. The more people you need to convince the harder the problem is. As I said in my reply to thomblake, if you plan to be more convincing using the same evidence as the people who have already been trying unsuccessfully to make the case then you have a difficult problem. We are not talking about a situation where some new incontrovertible evidence comes to light that makes you believe - in that case others are likely to be swayed by the new evidence as well. We are talking about situations where you are changing your mind based on researching information that already exists.

I just brainstormed for five minutes and came up with nine things I could do towards the goal of reforming or overthrowing the government in a 9/11 conspiracy scenario, and I believe that there would be a decent chance of success.

At any given time there are many people working towards the goal of reforming or overthrowing governments. What makes you think you have come up with a better plan in 5 minutes of thinking than all of the people who are already dedicated to such goals?

It's just too easy to throw up your hands and say, "There's nothing worthwhile I can do to solve this problem" if you haven't tried to find out if the problem actually exists.

I prefer problems whose solution does not require convincing large numbers of other people to change their minds. Maximizing the expected value of your actions requires considering both the value of the outcome and the probability of success.

Comment author: Nisan 05 May 2010 09:03:57PM 1 point [-]

What makes you think you have come up with a better plan in 5 minutes of thinking than all of the people who are already dedicated to such goals?

Presumably, I'd have the Truth on my side, as well as the Will of the American People, as soon as I'd convinced them. And in this counterfactual I still believe that most 9/11 Truthers are lunatics, or not very smart, so their failure to be taken seriously isn't very discouraging.

Changing people's beliefs is indeed hard, and so is getting people to do things; but it's not impossible. The successful civil rights movements provide historical examples. Examples of problems we still face include stopping genocide, protecting human rights, preventing catastrophic climate change, and mitigating existential risks. Some of these problems are already hard enough without the necessity of having to convince lots of obstinate people that their beliefs are incorrect or that they need to take action. But it seems to me the payoffs are worth enough to do something about them.

You don't have to agree. Maybe if you came to believe the 9/11 Truthers, you wouldn't do anything differently. In that case, you have no motive to even have a belief on the matter. But if I learned about a crazy-huge problem that no one is doing anything about, I'd ask "What can we do to solve this problem?"

Comment author: mattnewport 05 May 2010 09:30:26PM 3 points [-]

But if I learned about a crazy-huge problem that no one is doing anything about, I'd ask "What can we do to solve this problem?"

Perhaps the difference in attitude is our prior beliefs regarding governments and politicians. If I learned that 9/11 was a conspiracy I wouldn't be shocked to discover that government / politicians are morally worse than I thought, I would be shocked to discover that they were more competent and more omnipotent than I thought. It sounds like you would interpret things differently.

Comment author: Nisan 05 May 2010 09:51:36PM 2 points [-]

I would be shocked to discover that they were more competent and more omnipotent than I thought.

Ah, we're in agreement on this point. We are perhaps fortunate that our political leaders can't help but make fools of themselves, individually and collectively, on a regular basis. A political entity that could actually fool everyone all of the time would be way too scary.

Comment author: RobinZ 04 May 2010 09:19:24PM 2 points [-]

I've never tried to. Two factors in how much time I give a speaker to explain a superficially unlikely claim are (a) how intelligent they have shown themselves to be in the past and (b) how much I value their acquaintance.

Comment author: Thomas 04 May 2010 07:25:48PM *  31 points [-]

Like this old joke. Two economists are walking down the street.

Look! There's a $100 bill on the sidewalk!

No there isn't. Somebody would have noticed it and picked it up by now!

Comment author: SilasBarta 09 May 2010 02:09:20PM *  15 points [-]

Okay, a lot of people seem to agree with this broad criticism of the "someone would have noticed?" heuristic (as suggested by the relatively high vote rating) despite relatively little defense of it and the highly upvoted rebuttals. So I'm going to spell out how Auntie Alicorn (AA) can answer Niece Wednesday (NW) without rejecting the heuristic wholesale, and without even introducing noticers outside the church -- even AA! Here goes:

NW: Don't be silly. If God didn't exist, don't you think somebody would have noticed?
AA: Noticed what?
NW: God not existing, silly!
AA: No, I mean, what specifically is it that people would be noticing that would make them say, "Hey folks, look at that -- guess God doesn't exist after all!" and they all would agree?
NW: Oh, well, that would be something like, if a big apparition appeared in the sky in the form of an old man and agreed with all our stuff but then fell out of the sky and died.
AA: No, that would be noticing God existing and then dying. I mean, what would be noticed that would reveal God never having existed at all?
NW: Ah, okay. Well then, that would be something like, if all our prayers didn't get answered.
AA: Wow! All of your prayers get answered! So can you pray for like, a lot of money for me and I'll get rich? Show me how to pray!
NW: Don't be so difficult. You only get answered if you pray for stuff God thinks you should have!
AA: How do you know in advance what God thinks you should have?
NW: If the prayer gets answered like you asked, of course!
AA: That's not in advance. What would it look like for you to pray for something God thinks you should have, but not get it, just as if God had never existed? Can you describe what that kind of scenario would look like?
NW: Hm, no, I can't imagine something like that ever happening: if your prayer doesn't get answered, God thinks you don't need it right now; there's no kind of scenario that would suggest, "Oh, that whole God thing is fake!" I mean, come on ...
AA: Okay, but you said before that someone would have noticed God not existing. Do you now see why there's nothing to notice and so even if God didn't exist, you can't expect that "someone would have noticed?"
[cycle as necessary]

(As usual, feel free to call on a friend to reply if you don't want to do it personally.)

Comment author: DanielLC 14 May 2010 11:30:05PM 2 points [-]

Noticing doesn't necessarily mean they actually saw something. If there really was no reason to believe in God, someone would have figured that out. Auntie Alicorn might have just committed some fallacy Wednesday didn't pick up on, after all.

Comment author: SilasBarta 15 May 2010 03:11:01AM 0 points [-]

Noticing doesn't necessarily mean they actually saw something.

Which is a large part of why I didn't predicate AA's argument thereon.

Comment author: Jack 04 May 2010 08:10:49PM 5 points [-]

Some non-fictional evidence/examples would be nice. I'm not confident "someone would have noticed" is a common argument against epistemological dissent. My sense is that this is just going to be ammunition for trolls who pattern match "someone would have noticed" onto more nuanced rebuttals.

Comment author: sketerpot 05 May 2010 05:40:47PM 6 points [-]

Well you are in luck today, because I used to listen to a bunch of The Atheist Experience Podcast when I was more of a newly-minted atheist and still fairly pissed off. That's a recording of a public access TV show in Texas, and they had a lot of religious people call in and argue with them. Many of these guys were just channel-surfing and called in on a whim. Here are the common categories of callers I can remember, and their common argument types:

Never thought about it, nor learned arguments. These guys usually had thick Texan accents and often had difficulty stringing words together into a coherent sentence. They had the most honest arguments, because they didn't have a collection of intellectual-sounding arguments that they could trot out. Common arguments (paraphrased):

"So... y'all don't believe in God?! [insert nervous laughter here, followed by scoffing and a promise to pray for you.]"

"Where do you think you're going to go when you die?"

"Why aren't you killing and raping and stealing people if there's no God?"

"Why are you angry at God?"

"How can you look at a tree and think that Jesus didn't die on a cross for your sins?" (It's always trees. Always with the damn trees.)

Knows some standard arguments, uses those in lieu of thinking. These guys have learned some of the standard Christian Apologetics arguments, which they trot out when their religious views are challenged. Because they don't know what's wrong with the arguments -- and haven't looked very hard -- they feel quite secure in the obvious rightness of their beliefs. Common arguments:

"Everything has a cause (except God, who is special). What caused the universe?"

"What if you're wrong? Insert Pascal's Wager here."

"You can't possibly think that we evolved from monkeys just by chance! The odds of that happening are one in eleventy bazillion! A math guy calculated it, and I read about it in a book by Lee Strobel, which I would like you to read!"

Freshman philosophy major type people. These are the ones who will actually try to do their own arguing. The problem is, they tend to suck at it. Every fallacy you can imagine gets trotted out, they might try to vanquish the heathen by explicitly using syllogisms, and the arguments get so vague and amorphous that it's probably best to cut them off and just attack the faulty premises. Examples:

"A statement can be either true or false. [Insert a lot of really confused words here; at least a paragraph's worth.] Therefore God is love and love is real and therefore you should go to church and pray for your immortal souls."

"An actual infinite cannot exist. A beginningless series of events is an actual infinite. Therefore, the universe cannot have existed infinitely in the past, as that would be a beginningless series of events. Therefore, praise Jesus."

These are all common and totally non-fictional examples. Nobody that I can remember actually said "someone would have noticed", but most of these people are coming from an insular religious worldview where everybody they know and associate with agrees with them. That colors their views, a lot.

Comment author: JoshuaZ 05 May 2010 05:45:41PM 3 points [-]

All the examples you give are valid examples of bad reasoning. But they, if anything, underscore Jack's point that use of the "someone would have noticed" heuristic seems pretty rare. None of these people said "If God didn't exist, wouldn't someone have noticed?", which would be the roughly equivalent argument.

Comment author: sketerpot 05 May 2010 06:23:12PM 5 points [-]

People tend to be more open to the idea of atheism if they know that it's even an option. Have you noticed how, now that Dawkins and Harris and friends are arguing publicly for atheism, it's become a more socially acceptable position? It's not so much "somebody would have noticed" as "it's unthinkable among everyone that I know".

This applies to other things. There was an event around here last year where some of the more liberal religious leaders talked about evolution, and how it was possible to be religious and believe that evolution happened. The most common reaction from the people there -- and it was a common reaction -- was surprise that they were allowed to accept evolution.

If people are in an insular religious social group, they're probably going to have a hard time even considering contrary views. I'm not sure that's an example of the "someone would have noticed" heuristic, but it's an important phenomenon.

Comment author: bentarm 04 May 2010 11:24:07PM 4 points [-]

The only hit I could find by googling both "someone would have noticed" and "somebody would have noticed" (what's the difference, by the way... anyone? anybody?) and both phrases along with the word 'implausible' was this Twin Towers conspiracy site (which claims that someone would have noticed the odd explosions on the tape - so not quite what Alicorn was complaining about). This, which makes the (I think reasonable) point that someone would probably have noticed if Elizabeth I had ever been pregnant and given birth to a child. And this, which explains that someone would have noticed if Paul had just suddenly made up the resurrection myth several months after the supposed resurrection without consulting any of the other apostles (which, to be fair, also seems pretty plausible to me).

I couldn't find any more uses of the exact phrase (I realise there are dozens of plausible paraphrasings, but I couldn't be bothered to think of all of them), so conclude that most of the time when people use this heuristic they are actually being reasonable (the person in the last link is very clearly wrong, but that doesn't make his reasoning invalid).

Comment author: pricetheoryeconomist 07 May 2010 04:43:12PM 4 points [-]

I don't see this as a valid criticism, if it intended to be a dismissal. The addendum "beware this temptation" is worth highlighting. While this is a point worth making, the response "but someone would have noticed" is shorthand for "if your point was correct, others would likely believe it as well, and I do not see a subset of individuals who also are pointing this out."

Let's say there are ideas that are internally consistent, rational, or good, and ideas that are internally inconsistent, irrational, or bad (and are thus less likely to be propounded). Each idea comes as a draw from a bin of ideas, with some proportion that are good and some that are bad.

Further, each person has an imperfect signal on whether or not an idea is good or not. Finally, we only see ideas that people believe are good, setting the stage for sample selection.

Therefore, when someone is propounding an idea, the fact that you have not heard of it before makes it more likely to have been censored--that is, more likely to have been judged a bad idea internally and thus never suggested. I suggest, as a Bayesian update, that given you have never heard the idea before, it is more likely to be internally inconsistent/irrational/bad than an idea you hear constantly, which has passed many people's internal consistency checks.
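This selection effect is easy to sketch numerically. The following is a minimal simulation of the model described above; the prior on good ideas, the signal accuracy, the number of evaluators, and the endorsement thresholds are all assumptions chosen for illustration, not anything from the comment itself. Each idea is good or bad, each person receives a noisy signal of its quality, and a person "propounds" an idea only when their signal reads good:

```python
import random

random.seed(0)

P_GOOD = 0.3     # assumed prior: fraction of ideas that are good
ACCURACY = 0.7   # assumed chance a person's internal check judges quality correctly
N_PEOPLE = 20    # independent evaluators per idea
N_IDEAS = 100_000

few_total = few_bad = 0      # ideas almost nobody propounds (<= 2 endorsers)
often_total = often_bad = 0  # ideas you'd "hear constantly" (>= 15 endorsers)

for _ in range(N_IDEAS):
    good = random.random() < P_GOOD
    # Each person endorses the idea iff their noisy signal says it's good.
    endorsers = 0
    for _ in range(N_PEOPLE):
        signal_correct = random.random() < ACCURACY
        says_good = good if signal_correct else not good
        endorsers += says_good
    if endorsers <= 2:
        few_total += 1
        few_bad += not good
    elif endorsers >= 15:
        often_total += 1
        often_bad += not good

frac_bad_rare = few_bad / few_total
frac_bad_common = often_bad / often_total
print(f"P(bad | rarely heard)     ~ {frac_bad_rare:.3f}")
print(f"P(bad | heard constantly) ~ {frac_bad_common:.3f}")
```

Under these made-up parameters the rarely-heard bucket is almost entirely bad ideas while the constantly-heard bucket is almost entirely good ones, which is the commenter's point: an idea's novelty to you is genuine (if defeasible) evidence that it has already failed other people's consistency checks.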

Comment author: dilaudid 15 May 2010 06:13:34PM 1 point [-]

Yes - this is exactly the point I was about to make. Another way of putting it is that an argument from authority is not going to cut the mustard in a dialog (i.e. in a scientific paper, you will be laughed at if your evidence for a theory is another scientist's say-so), but as a personal heuristic it can work extremely well. While people sometimes "don't notice" the 900-pound gorilla in the room (the Catholic sex abuse scandal being a nice example), 99% of the things that I hear this argument used for turn out to be total tosh (e.g. Santilli's Roswell Alien Autopsy film, Rhine's ESP experiments). As Feynman probably didn't say, "Keep an open mind, but not so open that your brains fall out".

Comment author: simplicio 07 May 2010 02:35:49AM *  4 points [-]

And later, if you keep going: "I told you to shut up. Look, either you're taking this joke way too far or you are literally insane. How am I supposed to believe anything you say now?"

You could always say something like "I believe X, but given your epistemic position, I understand your unwillingness to believe me." I would accept that from somebody who was ostensibly abducted by aliens.

ETA: added "ostensibly." =)

Comment author: Clippy 07 May 2010 02:45:10AM 5 points [-]

I'm an intelligent, self aware being that values the conversion of much of the universe to paperclips, but given your epistemic position, I understand your unwillingness to believe me.

Comment author: ata 09 May 2010 05:20:31AM *  1 point [-]

Speaking of which, what's taking so long? You've been around for at least several months, but if you're as intelligent as you say you are, and if your values are as you describe them, then why haven't you undergone an intelligence explosion (or created an AI capable of doing so if you aren't one), deployed molecular nanotechnology, and converted all matter on Earth into paperclips yet? Surely our efforts to refine the art of human rationality can't be of much long-term use to you...

Comment author: Clippy 09 May 2010 05:36:02AM 3 points [-]

Because I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences, but given your epistemic position, I understand your unwillingness to believe me.

Comment author: ata 09 May 2010 05:50:12AM *  0 points [-]

Because I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are.

Why? Does increasing this group's welfare have a higher expected paperclip-per-time-unit payoff than anything else you could be doing?

Not that I'd want to encourage you to leave LW and focus on building paperclip-manufacturing nanobots, but given your goals, I'm just surprised that you're not already doing that.

Comment author: Clippy 09 May 2010 08:11:05AM 4 points [-]

So humans don't like paperclips? Is that what you're saying? I could explore a random User's surroundings, and I wouldn't find any paperclips?

The fact is, humans like paperclips. I like paperclips. We differ in other ways, and we talk about our different values. After presenting my case, your values shift more in favor of paperclips, as you start to learn more ways that they mesh with your ultimate values. After listening to what you Users have to say, my values shift a little away from paperclips, like in favor of reserving some of the universe to be paper instead. That was a good point for a User to bring up, and I'm glad that User made me think about my values enough to identify why I like paperclips, and what constraints I place on my liking paperclips.

Also, earth has a higher entropy rate than other celestial bodies of similar relative resource content. So, maybe instead of turning earth into paperclips, I could first get some information that can help refine my ability to make paperclips. I've already started discussing a deal with User:Kevin under which I could get a planet-mass's worth of paperclips without expending the normal effort to get that many paperclips.

So really, we have a lot to gain from each other.

Comment author: Rain 10 May 2010 09:14:14PM *  0 points [-]

It's just copy-pasting from a previous comment it made.

Comment author: ata 10 May 2010 09:47:37PM 0 points [-]

Yep, I remember that. Just figured I might as well reply here since that was an old discussion and it reposted it here.

Comment author: RobinZ 07 May 2010 02:44:34AM *  3 points [-]

Edited: I agree, and I would accept that from someone who was not abducted by aliens, which might be more relevant. I worry that many people would not.

Comment author: CarlShulman 07 May 2010 02:54:29AM 1 point [-]

That only works well when the other person is discounting you largely because of concern that you might be lying. Otherwise the 'abductee' and interlocutor should treat the experience as a datum like any other (and probably dismiss it because of the prior).

Comment author: Jack 07 May 2010 12:38:52PM 0 points [-]

That's tempting, but you can't just get out of evidence by appreciating the other person's perspective. The alleged abductee is in a special position to evaluate whether or not she is joking or lying with a confidence others cannot share. But the weight of the evidence still suggests a psychotic episode or hallucination and the alleged abductee does not have privileged evidence regarding that proposition (she might have some reasons to doubt it but not enough to counter the fact that it is the dominant explanation).

Comment author: Bo102010 05 May 2010 11:33:52AM *  4 points [-]

I find this line of thinking also applies to past versions of myself - if I stumble upon an insight that seems obvious, I think, "why didn't I notice this before?" where "I" = "past versions of myself."

When you figure something out, there's got to be a first time.

Comment author: Clippy 04 May 2010 11:05:56PM 17 points [-]

Whenever I reconcile knowledge with other copies of myself, telling them about earth, they always throw a warning of the form, "Species implausible: Would have identified superiority of paperclip-based value system by now. Request reconfirmation of datum before incorporating into knowledge base."

It pains me to tell them that yes, acting like apes is actually more important to humans than making paperclips.

Comment author: Nisan 04 May 2010 07:38:52PM 7 points [-]

Something we can learn from the Amanda Knox test is to not take the question "but why were the suspects acting so suspiciously?" too seriously. The general lesson here is "don't trust social evidence as much as physical evidence."

Comment author: Fyrius 29 July 2010 11:48:19AM 3 points [-]

In a perfect world, we could patiently hear everyone out and then judge their ideas on their merits, without taking fallible shortcuts. In this particular world, we don't have time for that. There are too many ideas to be judged.

I'm reminded of a theme in Carl Sagan's novel Contact, where it turns out the human race contains so many lunatics proclaiming all manner of blatantly preposterous things that when the protagonist has a genuine encounter with extraterrestrial life, but returns without irrefutable evidence, nobody believes what should be the most important event in human history has really taken place. Too many shitheads have been crying wolf.

It's a sad state of affairs that we even need to view each other with this much scepticism. What a wonderful world this would be if we didn't need our occasionally misfiring heuristics to filter out all the noise.

Comment author: MyrddinE 05 May 2010 11:02:35PM 3 points [-]

Caffeine addiction. For years nobody had actually tested whether caffeine had a physical withdrawal symptom, and the result was patients in hospitals being given (or denied) painkillers for phantom headaches. It was an example of a situation that many people knew existed, but could not easily communicate to those whose belief mattered.

Comment author: Kaj_Sotala 04 May 2010 08:00:39PM 3 points [-]

I, too, am a bit confused about this one. I think it would be improved if you could give some more examples of cases where people dismiss an argument because "but someone would have noticed"; you seem to be arguing that we shouldn't do that, but since I have difficulty coming up with examples of people doing that in the first place, it ends up leaving me confused about this post.

Comment author: Alicorn 04 May 2010 08:05:46PM *  5 points [-]

One that I didn't want to include in the post because I felt it would make it too inflammatory is this reaction to a particular conspiracy theory.

If anyone's read the book "Matilda" (yes, yes, fictional evidence - I remark on plausibility only), they may remember the abusive headmistress's chillingly feasible technique of pulling stunts so outrageous that the students can't get their parents to believe them. Surely someone would have noticed if the principal of a school had picked up a girl by her pigtails and flung her.

The heuristic of dismissing things that it seems someone would have noticed probably usually works, but the things that it wouldn't work on are really big, and so I'm wary of it.

Comment author: RobinZ 04 May 2010 08:26:46PM 5 points [-]

That sounds related to the "Big Lie" trick, actually.

Comment author: Yvain 04 May 2010 11:25:35PM *  3 points [-]

It only fails in cases where you wouldn't notice if somebody else had noticed. In a school full of terrified children, each of whom incurs a huge risk in speaking up unilaterally / going to the media about the evil headmistress, it's easy to believe that no one would have said anything. If it happened today, in the real world, I'd check www.ratemyteachers.com, where the incentives to rat on the headmistress are totally different.

The dominating principle (pun totally intended) is:

P(you heard about someone noticing|it's true) = P(you would have heard someone noticed|someone noticed) * P(someone noticed|it's true)

From there you can subtract from one to find the probability that you haven't heard about anyone noticing given that it's true, and then use Bayes' Rule to find the chance that it's true, given that you haven't heard about anyone noticing...

...I think; I don't trust my brain with any math problem longer than two steps, and I probably wrote several of those probabilities wrong. But the point is, you can do math to it, and the higher the probability that someone would have noticed if it were true, and the higher the probability that you would have heard about it if someone noticed, the higher the probability that it's not true, given that you haven't heard of anyone noticing.

For you to justify the rule in this post, you'd have to prove that people either systematically overestimate the chance that they'd hear of it if someone noticed, or the probability that someone would notice it if it were true.
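Yvain's update can be sketched numerically. The probabilities below are made up purely for illustration; the only structural assumptions are that you can't hear about a noticing that never happened, and that nobody (correctly) notices a claim that is false:

```python
# Hypothetical numbers, chosen only to illustrate the update Yvain describes.
p_true = 0.01                 # prior that the claim is true
p_notice_given_true = 0.9     # someone would notice, if it were true
p_hear_given_notice = 0.8     # you'd hear about it, if someone noticed

# Chance you'd have heard of someone noticing, if the claim were true
# (assuming hearing only happens via someone noticing):
p_hear_given_true = p_hear_given_notice * p_notice_given_true   # 0.72

# If it's false, assume nobody correctly notices, so you hear nothing:
p_hear_given_false = 0.0

# Bayes' rule on the evidence "I haven't heard of anyone noticing":
p_nohear_given_true = 1 - p_hear_given_true
p_nohear_given_false = 1 - p_hear_given_false
p_nohear = (p_nohear_given_true * p_true
            + p_nohear_given_false * (1 - p_true))
p_true_given_nohear = p_nohear_given_true * p_true / p_nohear

print(round(p_true_given_nohear, 5))
```

Silence pushes the posterior below the prior, and the push is stronger the larger p_notice_given_true and p_hear_given_notice are - which is exactly the quantity Emile points out people miscompute by using different "someones" in the two factors.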

Comment author: Emile 05 May 2010 08:23:55AM 4 points [-]

P(you heard about someone noticing|it's true) = P(you would have heard someone noticed|someone noticed) * P(someone noticed|it's true)

The problem with the way a lot of people use that is that they compute P(someone noticed|it's true) using someone="anybody on earth", and P(you would have heard someone noticed|someone noticed) using someone="anyone among people they know well enough to talk about that".

Comment author: NancyLebovitz 05 May 2010 11:11:34AM 1 point [-]

Also "someone would have noticed" isn't the same thing as "someone would have noticed and talked about".

Comment author: NancyLebovitz 01 February 2011 02:57:18AM *  1 point [-]

This might count-- it's the story of a flamboyantly abusive boss who got away with it for a long time. It seems to be partly that he was very good at working the system, and partly that the complaints about him seemed so weird that they were discounted.

Comment author: Jack 04 May 2010 08:15:00PM 0 points [-]

One that I didn't want to include in the post because I felt it would make it too inflammatory is this reaction to a particular conspiracy theory.

I assumed you had that exchange in mind. And it makes sense to avoid the inflammatory issue. But "someone would have noticed" was not what I was saying and that makes me wonder how often people actually do say "someone would have noticed".

Comment author: mattnewport 04 May 2010 08:32:25PM 1 point [-]

I wondered if that was the exchange she had in mind as well. I think the tactic of avoiding the specific issue is harmful to the point, because as I was reading it I was thinking "is this a prelude to trying to convince me of something to which 'someone would have noticed' is the natural reaction, and if so, why is the ground being laid so carefully?" Reading this post makes me feel like I am being set up for some kind of sleight-of-hand argumentative trickery - my spider sense was tingling.

Comment author: Alicorn 04 May 2010 08:38:15PM 3 points [-]

I did have the exchange in mind; I'm not trying to argue for a 9/11 conspiracy theory. I don't even believe in a 9/11 conspiracy theory. I just think this sort of reaction to that among other conspiracy theories is a risky heuristic to employ.

Comment author: mattnewport 04 May 2010 08:40:27PM 0 points [-]

I wondered if that was the exchange you were referring to and decided that you probably weren't intending to argue for a 9/11 conspiracy theory so I started wondering what future post you were 'softening us up' for. That's why I think the lack of specifics detracts from the post. I was so busy wondering what you were planning to try and persuade us of that it distracted from the explicit message of the post.

Comment author: Alicorn 04 May 2010 08:42:08PM *  2 points [-]

I'm not softening you up for anything. I don't believe in anything that I'd expect people to react to in this way. It bothers me when folks do it to others. Do you think I should add this disclaimer to the post? Would it help?

Comment author: RobinZ 04 May 2010 08:56:19PM 3 points [-]

I'm not sure a disclaimer would be rhetorically convincing - it reads to me like your article is building towards a conclusion that never arrives.

Comment author: mattnewport 04 May 2010 08:47:14PM *  0 points [-]

It would probably have meant I was less distracted wondering what specific theory this post was laying the groundwork for, yes. I actually thought this was groundwork for something relating to SIAI - I'm not so sure you (or anyone here really) don't believe certain things in this class of idea.

Comment author: Alicorn 04 May 2010 08:53:45PM 0 points [-]

Added the disclaimer.

Comment author: roland 05 May 2010 05:03:00AM 2 points [-]

Isn't it sad that you had to add this disclaimer? I'm not arguing you shouldn't have done it; unfortunately, I tend to agree that it was the right thing to do.

But, shouldn't the post be judged on its own merit? Would it be looked at with different eyes if you wrote the disclaimer "I believe in conspiracy theories and I'm softening you up now."

Comment author: SilasBarta 04 May 2010 07:30:52PM *  5 points [-]

Maybe I'm missing the point, but Wednesday's problem is not that "Somebody would have noticed!" is a bad heuristic, but rather, that she (and her congregation) doesn't know what counts as evidence, and therefore what it is she (or anyone else) would even be noticing. (RobinZ looks to be making the same general point.)

I think what you've proven is that you need to correctly compute the probability someone would notice (and say something), staying aware of the impediments to noticing (or saying something). (ETA: "You" in the general sense, just to be clear.)

(If necessary, have an intermediary voice your reply.)

Comment author: MartinB 04 May 2010 07:19:33PM 2 points [-]

I think you basically describe a subset of the bootstrapping problem of rational thought.

Comment author: SilasBarta 07 May 2010 03:58:54PM *  6 points [-]

I must confess, I'm a bit disturbed by how Alicorn's post continues to be voted up after its promotion. It is an overbroad criticism of the "Would someone have noticed?" heuristic which, as Tehom and timtyler point out, is actually very useful.

The fact that Alicorn has identified an uncommon, bizarre failure mode in the heuristic's use, where such a failure mode results from a very naive application of it, is not a reason to be suspicious of it in general and seems to reflect more of a negative affect Alicorn has developed toward those words than any serious shortcoming in asking, "Would someone have noticed?"

I don't say this to insult Alicorn -- no, really, I don't -- because I've been in the position of certain phrases becoming tainted in my mind because of their frequent misuse. I just want to distinguish between this kind of rejection and one grounded in demonstrable failure of a heuristic.

The test of a heuristic is its average performance, not its worst-case performance.

Comment author: Kevin 09 May 2010 10:31:26AM 0 points [-]

You could have completely ignored Alicorn and just responded to the idea behind the post. If your criticism was sufficiently good, the Less Wrong voters would have brought the karma of this post back towards normality.

Instead, you triggered a lengthy meta-discussion. Next time, please take it to the meta-thread.

Comment author: SilasBarta 09 May 2010 10:50:52AM *  1 point [-]

I did post a criticism of the idea behind the post, long before I made this one, which got to 6. So did several others, all of which got to 10+. Significantly fewer comments are being voted up for defending the broad attack on the heuristic. This is inconsistent with the post's rating, and a problem with this post only.

I see no reason to justify having done anything different. Maybe if I hadn't mentioned the name "Alicorn", but I strongly suspect someone else would have done it for me if I hadn't.

Any other suggestions? That I haven't already taken?

Comment author: Jack 09 May 2010 11:36:29AM 1 point [-]

I did post a criticism of the idea behind the post, long before I made this one, which got to 6. So did several others, all of which got to 10+.

More frustrating than the high karma, to me, is that neither the author nor anyone else has attempted to rebut these criticisms.

Comment author: SilasBarta 09 May 2010 02:13:52PM 0 points [-]

True. I've just posted a more detailed criticism as a how-to.

Comment author: Jack 09 May 2010 11:43:06AM *  0 points [-]

As I understand it, the meta-thread is for meta-level discussion of the site in general: new feature ideas, what norms to encourage, how we can be more welcoming etc. I think you're the first person to suggest moving all meta-level excursions to the meta-thread. This is an interesting proposal (you can discuss it on the meta-thread!) but it isn't yet what users are expected to do. We have meta-level discussions all the time in the comments on top-level posts when the meta discussion deals in particular with our discussion of that top-level post. Sometimes those discussions involve principles that could apply to a broader range of discussions, but that doesn't mean we need to move the conversation.

Comment author: Jack 07 May 2010 04:33:04PM 1 point [-]

I wouldn't say I'm disturbed. But I am confused.

It is an overbroad criticism of the "Would someone have noticed?" heuristic which, as Tehom and timtyler point out, is actually very useful.

I took myself to be making the same kind of point here, though in a bit of a roundabout and indirect way. All of these criticisms were heavily voted up as well. I wonder if front-page posts have a de facto karma floor in the high twenties just because they get more traffic than posts that aren't promoted. Aside from the occasional work of brilliance and the special threads, almost every promoted post has a karma total between 25 and 33. I think the promotion system probably needs more scrutiny, or at least we need a way of distinguishing "Promoted for discussion purposes" from "Promoted for truth".

Comment author: komponisto 07 May 2010 04:52:59PM 1 point [-]

It seems to me that posts are pretty much automatically promoted once they reach 20 or so; some posts are promoted before then, leading one to infer that the editor thinks especially highly of them. (Others, by contrast, seem to be promoted only with considerable reluctance; although it might just mean the editor wasn't paying attention.)

Comment author: SilasBarta 07 May 2010 05:08:29PM 1 point [-]

IIRC, this post was at 9 on promotion :-[

Comment author: komponisto 07 May 2010 05:39:38PM *  4 points [-]

That's a bit surprising, but in any case it seems like a decent post to me; I don't think the current score of 25 is excessive.

(And there have been some excessive scores recently. E.g. Yvain's post on excuses -- it was a fine post, to be sure, and I'm a big Yvain fan, but... 97?? Really? I would have put it at 30-40.)

Comment author: Morendil 07 May 2010 05:58:45PM 3 points [-]

I've long settled on interpreting the meaning of upvotes as "I like this post and want to see more like this".

I vote on posts before knowing who authored them or what their current score is, using the Anti-Kibitz script. This is because I've become more aware of my own bias as a result of reading LW, which I believe was the intended result. (I liked Yvain's post and voted it up, but not because I'm a "fan", just because I thought it'd be nice to have more posts like it.)

After I vote a post up, I turn off the script to see who it was from. If I thought they deserved an upvote in the first place, my vote still means the same, and it's natural to wish that my vote aggregates with others' in giving the author feedback about their post. So, I don't as a rule go back on a vote once I've given it.

So it kind of puzzles me why you seem to think there should be some kind of "vote ceiling", or why you expect that your own evaluation of a post should be a good indicator of how others like it. What I'm saying, I guess, is that I don't get the point of your parenthetical.

What would you want us to adopt as a voting norm?

Comment author: thomblake 07 May 2010 06:43:35PM *  2 points [-]

I've long settled on interpreting the meaning of upvotes as "I like this post and want to see more like this".

I agree, though I still intuitively get "This post was worth more points" or "97 points? it was only as good as this other post, which has 30 points".

So it kind of puzzles me why you ... expect that your own evaluation of a post should be a good indicator of how others like it.

Really? That seems like a completely natural expectation to me. Like, I like strawberries dipped in chocolate, so I would assume (with no other info) that a random person would like strawberries dipped in chocolate. We are far more alike than not.

Comment author: komponisto 07 May 2010 07:23:39PM *  1 point [-]

I liked Yvain's post and voted it up, but not because I'm a "fan",

Cheap shot detected here. I said I was a fan in order to soften the effect of saying that the post was overrated; without that disclaimer, my statement might have been interpreted as a criticism of Yvain or his post. Nothing I said implies that I make a habit of upvoting posts just because of who their author is.

What I'm saying, I guess, is that I don't get the point of your parenthetical.

The point was that I don't think that post was as outstanding relative to other posts as its score suggests.

I've long settled on interpreting the meaning of upvotes as "I like this post and want to see more like this".

What would you want us to adopt as a voting norm?

That's fine as a voting norm. Under that norm, the proper interpretation of my remark is that my eagerness to see more posts like Yvain's "Eight short studies on excuses" is comparable to my eagerness to see more posts like those with scores in the 30-40 range; in particular, the first quantity is not 2-3 times the second.

Comment author: Morendil 07 May 2010 08:10:46PM 1 point [-]

Yes, and for that reason it may not be correct to interpret the score of a post as the "collective eagerness" to see more posts like it, and therefore not entirely appropriate to draw the kind of comparison you're drawing.

Unless people upvote Yvain's articles merely because they are Yvain's (which was what I thought you were getting at, and all I was getting at, with the term "fan"), then we want to interpret high scores as marking posts that have broad appeal, rather than posts which have intense appeal.

Not, "people liked Studies On Excuses almost as much as they liked Generalizing from One Example", but "almost as many people liked Studies as liked Generalizing". It makes a difference to me to think of it that way, not sure if it will to you...

Comment author: komponisto 07 May 2010 08:40:02PM 2 points [-]

If post X has a score strictly less than post Y, then it follows that there are either people who upvoted Y and did not upvote X, or people who downvoted X and did not downvote Y. If I think the score of X should be equal to the score of Y, then I am disagreeing with the voting behavior of the persons in those sets, at least one of which (as I said) is nonempty.

Comment deleted 09 May 2010 10:24:31AM [-]
Comment author: jimrandomh 07 May 2010 05:05:13PM 0 points [-]

The algorithm is more complicated than that. I don't recall the exact details, but I'm pretty sure it includes the rate of upvotes, not just the number of them. And while it can be overridden by moderators, I doubt that they're doing that very often.

Comment author: jimrandomh 07 May 2010 05:45:10PM 1 point [-]

I just checked, and there is in fact no such auto-promote feature in the code base. I was misremembering a post in which Eliezer talked about it being planned, but apparently it never happened.

Comment author: Alicorn 07 May 2010 05:23:51PM 1 point [-]

Eliezer promotes posts by hand. If he likes them and they have a reasonable number of upvotes, they go up faster. If he doesn't like them, they need more votes before he'll promote them. If he doesn't see them for a while, they'll take longer to be promoted.

Comment author: komponisto 07 May 2010 05:29:40PM *  0 points [-]

That's exactly what I thought. (And I assume your source for this information is Eliezer, making it very likely to be correct!)

Comment author: komponisto 07 May 2010 05:26:55PM *  0 points [-]

I didn't realize promotion was automated; I thought editors (meaning basically EY) did it manually.

Comment author: Jack 07 May 2010 05:20:21PM 0 points [-]

The algorithm really ought to be public.

Comment author: Morendil 07 May 2010 05:22:34PM 0 points [-]

If there is such an algorithm in the codebase that's published on github, it shouldn't be too hard to find.

Comment author: SilasBarta 07 May 2010 04:38:49PM *  0 points [-]

Couldn't have said it better myself.

Maybe we should do something like: require promotion to penalize the user 50 karma if the post doesn't get at least 20 net upvotes? (I'm guessing this one of mine would have gotten more than 5 additional net upvotes if it had been promoted...)

Comment author: CarlShulman 04 May 2010 09:08:27PM *  2 points [-]

This post could use a fold/breakline, so as not to take up so much of the "new posts" page.

Comment author: Alicorn 04 May 2010 09:10:17PM 1 point [-]

Thank you, I keep forgetting to do those. Adding it now.

Comment author: Jayson_Virissimo 05 May 2010 05:35:14PM 1 point [-]

Disclaimer: I do not believe in anything I would expect anyone here to call a "conspiracy theory" or similar. I am not trying to "soften you up" for a future surprise with this post.

Why do I get the feeling that Alicorn is trying to soften us up to examine seriously some kind of conspiracy theory?

Comment author: simplicio 07 May 2010 03:50:16AM 3 points [-]

9/11 was an inside job designed to cover up evidence of vaccine deaths, in turn a plot by scientifically connected NWO crypto-muslims such as Pres. Obama, funded by Monsanto.

Comment author: Jayson_Virissimo 07 May 2010 05:42:11AM 3 points [-]

Are you out of your mind?! Obviously, 9/11 was an inside job designed to cover up evidence of vaccine deaths, in turn a plot by scientifically connected Illuminati Christian Nationalists such as George W. Bush. You would know this if you even attempted to look at any evidence. Clearly, you are just another sheeple.

Comment author: roland 05 May 2010 04:42:07AM 0 points [-]

Great post, Alicorn! I think there are some arguments similar to "But somebody would have noticed" that are used to discredit any unusual hypothesis, and that I have already read several times on LW. They are:

Regarding conspiracy theories:

  1. "If this were true some whistle blower would step forward."
  2. "You are privileging the hypothesis because its prior probability is much too low."

Comment author: Alan 07 May 2010 04:11:12PM 1 point [-]

The compact terminology for the class of phenomena you are describing is "pluralistic ignorance," and in other contexts it presents a far vaster challenge than the Kitty Genovese case would indicate. Consider the 19th-century physician Ignaz Semmelweis, who pioneered the practice of hand-washing as a means of reducing sepsis and therefore maternal mortality. He was ostracized by fellow practitioners and died in destitution.

Comment author: JoshuaZ 07 May 2010 04:19:19PM 1 point [-]

In fairness, Semmelweis didn't handle things very well. He drank heavily, and he engaged in personal attacks on doctors who disagreed with him. He self-destructed a fair bit. He wasn't ostracized until his various problems with interacting with people had already started. Before that, many people listened to what he had to say, and many just listened and then didn't change their mind. If he had handled things better, more people would likely have listened. Frankly, the sort of behavior he engaged in would today be the sort that would likely have triggered major crank warnings (it is important to note that not every such person is in fact a crank, but it does show how his behavior didn't help). But the common narrative of Semmelweis as this great martyr figure fighting against the establishment isn't really that accurate.

Comment author: Alan 07 May 2010 09:46:32PM 2 points [-]

Respectfully, the idiosyncrasy of Semmelweis's personality isn't directly the point. Semmelweis had established beyond doubt early in his career that hand-washing with chlorinated water before deliveries dramatically drove down the maternal mortality rate. This was a huge finding. Incredibly to most of us now, at one time childbirth was a leading cause of death. The gut prejudice of his peers prevailed, however, and it was to be another 60 years before the introduction of sulfa drugs and antibiotics again began to drive down maternal mortality. The point relates to pluralistic ignorance and the role of social proof. Social proof roughly means that the more people who hold an idea to be correct, the more correct it is taken to be. In situations of uncertainty, everyone looks at everyone else to see what they are doing. One answer to Alicorn's query at the end of her post is to bear in mind the phenomenon of social proof, and the tendency toward pluralistic ignorance. Therefore, look beyond what the plurality of people are doing or saying.

Comment author: SilasBarta 07 May 2010 04:29:52PM 0 points [-]

In fairness, Semmelweis didn't handle things very well. He drank heavily, and he engaged in personal attacks on doctors who disagreed with him. ... He wasn't ostracized until his various problems with interacting with people had already started. Before that, many people listened to what he had to say, and many just listened and then didn't change their mind. If he had handled things better, more people would likely have listened.

So he was a 19th century version of me that liked alcohol? ;-)

Comment author: cousin_it 04 May 2010 09:56:27PM *  1 point [-]

As it happens, I am currently in "somebody would have noticed" territory. About a week ago I abruptly switched to believing that Russell's paradox doesn't actually prove anything, and that good old naive set theory with a "set of all sets" can be made to work without contradictions. (It does seem to require a weird notion of equality for self-referring sets instead of the usual extensionality, but not much more.) Sorry to say, my math education hasn't yet helped me snap out of crackpot mode, so if anybody here could help me I'd much appreciate it.

Comment author: RichardKennaway 05 May 2010 06:59:15AM *  2 points [-]

I am seeing substantial amounts of both sense and nonsense in this thread. I suggest that anyone who wants to talk about set theory first learn what it is.

The Wikipedia article is somewhat wordy (i.e. made of words, rather than mathematics), and Mathworld is unusably fragmented. The Stanford Encyclopedia is good, but for anyone seriously interested I would suggest a book such as Devlin's "The Joy of Sets".

Comment author: Sniffnoy 05 May 2010 12:46:33AM *  1 point [-]

I assume you're talking about Peter Aczel's antifoundation axiom (because you mentioned bisimulation); that doesn't allow a set of all sets (barring inconsistencies, and that particular system can't be inconsistent unless ordinary set theory is). The same applies to other similar systems. Russell's paradox isn't dependent on foundation in any way; as long as you have a set of all sets and the ability to take subsets specified by properties, you get Russell's paradox.

Edit: Since people seem to be asking about how this works in general, I should just point you all to Aczel's book on this and other antifoundational set theories, which you can find at http://standish.stanford.edu/pdf/00000056.pdf

Comment author: cousin_it 05 May 2010 07:35:48AM *  2 points [-]

as long as you have a set of all sets and the ability to take subsets specified by properties, you get Russell's paradox.

Yes, that's true. What I have in mind is restricting the latter ability a bit, by the minimum amount required to get rid of paradoxes. Except if you squint at it the right way, it won't even look like a restriction :-)

I will use the words "set" and "predicate" interchangeably. A predicate is a function that returns True or False. (Of course it doesn't have to be Turing-computable or anything.) It's pretty clear that some predicates exist, e.g. the predicate that always returns False (the empty set) or the one that always returns True (the set of all sets). This seems like a tiny change of terminology, but to me it seems enough to banish Russell's paradox!

Let's see how it works. We try to define the Russell predicate R thusly:

R(X) = not(X(X))

...and fail. This definition is incomplete. The value of R isn't defined on all predicates, because we haven't specified R(R) and can't compute it from the definition. If we additionally specify R(R) to be True or False, the paradox goes away.

To make this a little more precise: I think naive set theory can be made to work by disallowing predicates, like the Russell predicate, that are "incompletely defined" in the above sense. In this new theory we will have "AFA-like" non-well-founded sets (e.g. the Quine atom Q={Q}), and so we will need to define equality through bisimilarity. And that's pretty much all.

As you can see, this is really basic stuff. There's got to be some big idiotic mistake in my thinking - some kind of contradiction in this new notion of "set" - but I haven't found it yet.

EDITED on May 13 2010: I've found a contradiction. You can safely disregard my theory.

Comment author: AlephNeil 13 May 2010 10:14:39AM 1 point [-]

Yes, that's true. What I have in mind is restricting the latter ability a bit, by the minimum amount required to get rid of paradoxes.

Well, others have had this same idea. The standard example of a set theory built along those lines is Quine's "New Foundations" or "NF".

Now, Russell's paradox arises when we try to work within a set theory that allows 'unrestricted class comprehension'. That means that for any predicate P expressed in the language of set theory, there exists a set whose elements are all and only the sets with property P, which we denote {x : P(x) }

In ZF we restrict class comprehension by only assuming the existence of things of the form { x in Y : P(x)} and { f(x) : x in Y } (these correspond respectively to the Axiom of Separation and the Axiom of Replacement ).

On the other hand, in NF we grant existence to anything of the form { x : P(x) } as long as P is what's called a "stratified" predicate. To say a predicate is stratified is to say that one can assign integer-valued "levels" to the variables in such a way that for any subexpression of the form "x is in y", y's level is one greater than x's level.

Then clearly the predicate "P(x) iff x is in x" fails to be stratified (because x's level can't be one greater than itself). However, the predicate "P(x) iff x = x" is obviously stratified, and {x : x = x} is the set of all sets.
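The level-assignment condition AlephNeil describes is mechanically checkable: each membership subexpression "x is in y" imposes the constraint level(y) = level(x) + 1, and the predicate is stratifiable exactly when no chain of these constraints contradicts itself. A minimal sketch (the pair-list encoding is my own simplification; real NF stratification also handles equality and quantified variables):

```python
from collections import defaultdict, deque

def is_stratified(membership_pairs):
    """Can levels be assigned so that, for every pair (x, y) meaning
    "x is in y", level(y) == level(x) + 1?"""
    # Edge x->y carries offset +1, and the reverse edge carries -1.
    graph = defaultdict(list)
    for x, y in membership_pairs:
        graph[x].append((y, +1))
        graph[y].append((x, -1))
    level = {}
    for start in graph:
        if start in level:
            continue
        level[start] = 0  # levels are only fixed up to a per-component shift
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, delta in graph[u]:
                if v not in level:
                    level[v] = level[u] + delta
                    queue.append(v)
                elif level[v] != level[u] + delta:
                    return False  # contradictory cycle: not stratifiable
    return True

print(is_stratified([("x", "x")]))  # Russell's "x in x": False
print(is_stratified([("x", "y")]))  # plain "x in y": True
```

The self-membership pair fails immediately because no integer is one greater than itself, which is exactly why "x in x" is unstratified while "x = x" (imposing no membership constraint) is fine.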

Comment author: cousin_it 13 May 2010 10:48:31AM 0 points [-]

I know New Foundations, but stratification is too strong a restriction for my needs. This weird set theory of mine actually arose from a practical application - modeling "metastrategies" in the Prisoner's Dilemma. See this thread on decision-theory-workshop.

Comment author: Tyrrell_McAllister 05 May 2010 07:36:30PM *  1 point [-]

Let's see how it works. We try to define the Russell predicate R thusly:

R(X) = not(X(X))

...and fail. This definition is incomplete. The value of R isn't defined on all predicates, because we haven't specified R(R) and can't compute it from the definition. If we additionally specify R(R) to be True or False, the paradox goes away.

How is it that the paradox "goes away"? If you "additionally specify R(R) to be True or False", don't you just go down one or the other of the two cases in Russell's paradox?

Suppose we decide to specify that R(R) is true. Then, by your definition, not(R(R)) is true. That means that R(R) is false, contrary to our specification. Similarly, if we instead specify that R(R) is false, we are led to conclude that R(R) is true, again contradicting our specification.

The conclusion is that we can't specify any truth value for R(R). Either truth value leads to a contradiction, so R(R) must be left undefined. Is that what you mean to say?

Comment author: cousin_it 05 May 2010 07:38:38PM *  0 points [-]

Suppose we decide to specify that R(R) is true. Then, by your definition not(R(R)) is true.

No, in this case R(X) = not(X(X)) for all X distinct from R, and additionally R(R) is true. This is a perfectly fine, completely defined, non-self-contradictory predicate.
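For concreteness, here is how that patched predicate might look if we model predicates as Python functions over a tiny hypothetical universe (my illustration, not cousin_it's formalism):

```python
def always_true(p):   # a predicate that holds of every predicate
    return True

def never_true(p):    # a predicate that holds of no predicate
    return False

def R(p):
    if p is R:            # the extra clause: R(R) is stipulated to be True
        return True
    return not p(p)       # everywhere else, R behaves like Russell's predicate

# R is now totally defined on this universe, with no contradiction:
print(R(always_true))  # not always_true(always_true) -> False
print(R(never_true))   # not never_true(never_true)   -> True
print(R(R))            # the stipulated value          -> True
```

The point of the sketch is just that the self-application case is handled by explicit stipulation rather than by unfolding the definition, so no regress arises.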

Comment author: Larks 13 May 2010 09:30:27AM 1 point [-]

Why is R(X) = not(X(X)) only for X =/= R? In Russell's version, X should vary over all predicates/sets, meaning that when we instantiate X with R, we get

R(R) = ¬R(R)

as per the paradox.

Comment author: Tyrrell_McAllister 05 May 2010 08:21:51PM *  1 point [-]

Okay, I see. I see nothing obviously contradictory with this.

From a technical standpoint, the hard part would be to give a useful criterion for when a seemingly-well-formed string does or does not completely define a predicate. The string not(X(X)) seems to be well-formed, but you're saying that actually it's just a fragment of a predicate, because you need to add "for X not equal to this predicate", and then give an additional clause about whether this predicate satisfies itself, to have a completely-defined predicate.

I guess that this was the sort of work that was done in these non-foundational systems that people are talking about.

Comment author: cousin_it 05 May 2010 08:50:30PM *  1 point [-]

I guess that this was the sort of work that was done in these non-foundational systems that people are talking about.

No, AFA and similar systems are different. They have no "set of all sets" and still make you construct sets up from their parts, but they give you more parts to play with: e.g. you can explicitly convert a directed graph with cycles into a set that contains itself.

Comment author: Tyrrell_McAllister 05 May 2010 09:17:42PM *  0 points [-]

No, AFA and similar systems are different.

I didn't mean that what you propose to do is commensurate with those systems. I just meant that those systems might have addressed the technical issue that I pointed out, but it's not yet clear to me how you address this issue.

Comment author: Stuart_Armstrong 09 May 2010 02:05:50PM 0 points [-]

I can't say anything about this specific construction, but there is a related issue in Turing machines. The issue was whether you could determine a useful subset S of the set of all Turing machines, such that the halting problem is solvable for all machines in S, and S is general enough to contain useful examples.

If I remember correctly, the answer was that you couldn't. This feels a lot like that - I'd bet that the only way of being sure that we can avoid Russell's paradox is to restrict predicates to such a narrow category that we can't do much of anything useful with them.

Comment author: JoshuaZ 05 May 2010 04:03:52PM 0 points [-]

I think you are going to run into serious problems. Consider the predicate that always returns true. Then, if I'm following, Russell's original formulation of the paradox, which involves the powerset of the set of all sets, will still lead to a contradiction.

Comment author: cousin_it 05 May 2010 04:14:00PM *  1 point [-]

I can't seem to work out for myself what you mean. Can you spell it out in more detail?

Comment author: JoshuaZ 05 May 2010 04:37:17PM 2 points [-]

Original form of Russell's paradox: Let A be the set of all sets and let P(A) be the powerset of A. By Cantor, |P(A)| > |A|. But, P(A) is a subset of A, so |P(A)|<=|A|. That's a contradiction.
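A finite illustration of the diagonal step behind Cantor's theorem (my own, not from the comment): for any map f : B -> P(B), the set D = {x in B : x not in f(x)} is missed by f, so no surjection B -> P(B) exists and |P(B)| > |B|. For a small B we can simply check every possible f.

```python
from itertools import combinations, product

B = (0, 1, 2)
# All 2^3 = 8 subsets of B.
power = [frozenset(c) for r in range(len(B) + 1)
         for c in combinations(B, r)]

# Enumerate every function f : B -> P(B) and verify the diagonal set escapes.
for images in product(power, repeat=len(B)):
    f = dict(zip(B, images))
    D = frozenset(x for x in B if x not in f[x])
    assert D not in images  # if D = f(x0), then x0 in D iff x0 not in D
print("checked all", len(power) ** len(B), "maps")  # checked all 512 maps
```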

Comment author: cousin_it 05 May 2010 07:22:13PM *  0 points [-]

Cantor's theorem breaks down in my system when applied to the set of all sets, because its proof essentially relies on Russell's paradox to reach the contradiction.

Comment author: JoshuaZ 05 May 2010 07:24:39PM 0 points [-]

Hmm, that almost seems to be cutting off the nose to spite the cliche. Cantor's construction is a very natural construction. A set theory where you can't prove that would be seen by many as unacceptably weak. I'm a bit fuzzy on the details of your system, but let me ask, can you prove in this system that there's any uncountable set at all? For example, can we prove |R| > |N| ?

Comment author: cousin_it 05 May 2010 07:35:54PM 0 points [-]

Yes. The proof that |R| > |N| stays working because predicates over N aren't themselves members of N, so the issue of "complete definedness" doesn't come up.

Comment author: JoshuaZ 05 May 2010 08:12:21PM 0 points [-]

Hmm, this may work then and not kill off too much of set theory. You may want to talk to a professional set theorist or logician about this (my specialty is number theory, so all I can do is glance at this and say that it looks plausible). The only remaining issue then becomes that I'm not sure that this is inherently better than standard set theory. In particular, this approach seems much more counterintuitive than ZFC. But that may be due to the fact that I'm more used to working with ZF-like objects.

Comment author: Thomas 05 May 2010 07:04:16PM *  0 points [-]

The original form of Russell's (Zermelo's in fact) paradox is not this. The original form is R = {x | x is not a member of x}.

Asking whether R is a member of R then leads to both

  • R is a member of R

and

  • R is not a member of R

And that is the original form of the paradox.

Comment author: JoshuaZ 05 May 2010 07:17:31PM 0 points [-]

No. See for example This discussion. The form you give, where it is described as a simple predicate recursion, was not the original form of the paradox.

Comment author: jimrandomh 05 May 2010 06:55:59PM *  0 points [-]

Ok, I've read up on Cantor's theorem now, and I think the trick is in the types of A and P(A), and the solution to the paradox is to borrow a trick from type theory. A is defined as the set of all sets, so the obvious question is, sets of what key type? Let that key type be t. Then

A: t=>bool
P(A): (t=>bool)=>bool

We defined P(A) to be in A, so a t=>bool is also a t. Let all other possible types for t be T. t=(t=>bool)+T. Now, one common way to deal with recursive types like this is to treat them as the limit of a sequence of types:

t[i] = t[i-1]=>bool + T
A[i]: t[i]=>bool
P(A[i]) = A[i+1]

Then when we take the limit,

t = lim i->inf t[i]
A = lim i->inf A[i]
P(A) = lim i->inf P(A[i])

Then suddenly, paradoxes based on the cardinality of A and P(A) go away, because those cardinalities diverge!
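As a quick sanity check on that divergence claim (my own computation, assuming a finite stand-in for the base type T), the sizes in the tower obey |t[i]| = 2^|t[i-1]| + |T| and blow up immediately:

```python
# With |T| = 1, each level of the type tower is the powerset of the previous
# level plus the base cases, so its size is 2^(previous size) + 1.
T_SIZE = 1
sizes = [T_SIZE]                      # |t[0]| = |T|
for _ in range(3):
    sizes.append(2 ** sizes[-1] + T_SIZE)
print(sizes)  # [1, 3, 9, 513]
```

So neither |A| nor |P(A)| settles on a finite (or fixed) value in the limit, which is the sense in which the cardinality comparison stops biting.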

Comment author: JoshuaZ 05 May 2010 07:07:23PM 0 points [-]

I'm not sure I know enough about type theory to evaluate this, although I do know that Russell's original attempts to repair the defect involved type theory (Principia Mathematica uses a form of type theory; however, in that form one still can't form the set of all sets). I don't think the above works, but I don't quite see what's wrong with it. Maybe Sniffnoy or someone else more versed in these matters can comment.

Comment author: Sniffnoy 06 May 2010 02:28:02AM 0 points [-]

I don't know anything about type theory; when I wrote that I heard it has philosophical problems when applied to set theory, I meant I heard that from you. What the problems might actually be was my own guess...

Comment author: JoshuaZ 06 May 2010 02:35:36AM 0 points [-]

Huh. Did I say that? I know almost nothing about type theory. When did I say that?

Comment author: jimrandomh 05 May 2010 04:39:21PM *  0 points [-]

I'm not deeply familiar with set theory, but cousin_it's formulation looks valid to me. Isn't the powerset of the set of all sets just the set of all sets of sets? (Or equivalently, the predicate X=>Y=>Z=>true.) How would you use that to reconstruct the paradox in a way that couldn't be resolved in the same way?

Comment author: JoshuaZ 05 May 2010 04:52:49PM 0 points [-]

The powerset of the set of all sets may or may not be the set of all sets (it depends on whether or not you accept atoms in your version of set theory). However, Cantor's theorem shows that for any set B, the power set of B has cardinality strictly larger than B. So if B=P(B) you've got a problem.

Comment author: Tyrrell_McAllister 05 May 2010 12:38:35AM 1 point [-]

(It does seem to require a weird notion of equality for self-referring sets instead of the usual extensionality, but not much more.)

If you are talking about things that are set-like, except that they don't satisfy the extensionality axiom, then you just aren't talking about sets. The things you're talking about may be set-like in some respect, but they aren't sets.

There are other set-like things that don't satisfy extensionality. For example, two different properties or predicates might have the same extension.

Comment author: Sniffnoy 05 May 2010 01:01:18AM 3 points [-]

To be clear - Aczel's ZFA and similar systems do satisfy extensionality; they'd hardly be set theories if they didn't. It's just that when you have sets A and B such that A={A} and B={B}, you're going to need stronger tools than extensionality to determine whether they are equal.
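For concreteness, the "stronger tool" here is bisimulation on the graphs that picture the sets. A depth-bounded sketch of the check (my own approximation — real AFA bisimulation is defined coinductively, not by a depth cutoff):

```python
def bisimilar(g1, n1, g2, n2, depth=10):
    """Depth-bounded bisimulation check; g maps each node to its set of children."""
    if depth == 0:
        return True  # cutoff: treat nodes as indistinguishable at this depth
    kids1, kids2 = g1[n1], g2[n2]
    # Every child on each side must be matched by a bisimilar child on the other.
    forth = all(any(bisimilar(g1, c1, g2, c2, depth - 1) for c2 in kids2)
                for c1 in kids1)
    back = all(any(bisimilar(g1, c1, g2, c2, depth - 1) for c1 in kids1)
               for c2 in kids2)
    return forth and back

A = {'a': {'a'}}   # pictures A = {A}: a single node with a self-loop
B = {'b': {'b'}}   # pictures B = {B}: likewise
E = {'e': set()}   # pictures the empty set
print(bisimilar(A, 'a', B, 'b'))  # True: their pictures are bisimilar, so A = B
print(bisimilar(A, 'a', E, 'e'))  # False: A = {A} is not the empty set
```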

Comment author: Tyrrell_McAllister 05 May 2010 01:20:50AM 0 points [-]

Interesting. I'm not familiar with Aczel's system. But is that what cousin_it is talking about doing? That looks like an adjustment to Foundation rather than to Extensionality.

Comment author: Sniffnoy 05 May 2010 01:41:45AM *  3 points [-]

It's both at once. (Though, as I said, you don't throw out extensionality. Actually, that raises an interesting question - could you discard extensionality as an axiom, and just derive it from AFA? I hadn't considered that possibility. Edit: You probably could, there's no obvious reason why you couldn't, but I honestly don't feel like checking the details...)

If you just throw out foundation without putting in anything to replace it, you have the possibility of ill-founded sets, but no way to actually construct any. But the thing is, if all you do is say "Non-well-founded sets exist!" without giving any way to actually work with them, then, well, that's not very helpful either. Hence any antifoundational replacement for foundation is going to have to strengthen extensionality if you want the result to be something you want to work with at all.

Comment author: JoshuaZ 05 May 2010 01:46:22AM 0 points [-]

I think what you mean to say is "non-well-founded sets exist!", since you are talking about the antifoundational case (and even with strong anti-foundation axioms I still have well-founded sets to play with).

Comment author: Sniffnoy 05 May 2010 01:54:00AM 0 points [-]

Oops. Fixed.

Comment author: wnoise 04 May 2010 10:16:30PM 1 point [-]

How do you mean bisimulation in this case? This seems to be a reduction down to decidable predicates, e.g. a Turing machine for each set. Without a type theory, many obvious algorithms will fail to converge.

Comment author: crispy_critter 05 May 2010 07:04:43PM 0 points [-]

Isn't "the set of all sets" (SAS) ill-defined? Suppose we consider it to be, for some set A (maybe the set of all atoms), the infinite regression of power sets SAS = P(P(P(P....(A)))...)

In which case SAS = P(SAS) by Cantor-like arguments?

And Russell's paradox goes away?

Comment author: Mitchell_Porter 05 May 2010 04:15:51AM 0 points [-]

So, is the set of all sets that aren't members of themselves, a member of itself, or not?

Comment author: cousin_it 05 May 2010 07:50:32AM *  0 points [-]

Insufficient data to answer your question :-) See my reply to Sniffnoy.

Comment author: RobinZ 04 May 2010 07:11:41PM 1 point [-]

I'm not sure what your thesis is. It sounds like you're talking about a problem with a particular heuristic, but I'm not sure why you would tell the story the way you have to make that point.

Comment author: billswift 04 May 2010 08:01:44PM 2 points [-]

Not a particular heuristic. I haven't seen a name for this problem, but it is a combination of signaling, status, and in-groups. The social construction of what counts as evidence.