
[LINK] Common fallacies in probability (when numbers aren't used)

7 Stuart_Armstrong 15 January 2016 08:29AM

Too many people attempt to use logic when they should be using probabilities - in fact, when they are using probabilities, but don't mention it. Here are some of the major fallacies caused by misusing logic and probabilities this way:

  1. "It's not certain" does not mean "It's impossible" (and vice versa).
  2. "We don't know" absolutely does not imply "It's impossible".
  3. "There is evidence against it" doesn't mean much on its own.
  4. Being impossible *in a certain model*, does not mean being impossible: it changes the issue to the probability of the model.
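Fallacy 4 is the law of total probability in disguise: an event that is impossible *within* a model can still have nonzero probability overall, because the model itself is uncertain. A minimal sketch, with made-up numbers for the credences:

```python
# All numbers here are illustrative assumptions, not from the post.
p_model = 0.95          # credence that our model is correct
p_event_in_model = 0.0  # the event is "impossible" inside the model
p_event_outside = 0.20  # probability of the event if the model is wrong

# Law of total probability: marginalize over whether the model holds.
p_event = p_model * p_event_in_model + (1 - p_model) * p_event_outside
print(round(p_event, 4))  # 0.01 -- not certain, but far from impossible
```

"Impossible in the model" has quietly changed the subject to the probability of the model itself.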


[Link] arguman.org, an argument analysis platform

1 dyokomizo 19 October 2015 03:46PM

I recently found out about arguman.org. It's an online tool to dissect arguments and structure agreement and refutation.

It seems like something that has been discussed on LW a few times in the past.

The Pre-Historical Fallacy

13 Tem42 03 July 2015 08:21PM

One fallacy that I see frequently in works of popular science -- and also here on LessWrong -- is the belief that we have strong evidence of the way things were in pre-history, particularly when someone is arguing that we can explain various aspects of our culture, psychology, or personal experience because we evolved in a certain way. Moreover, it is implicitly held that because we have this 'strong evidence', it must be relevant to the topic at hand. While it is true that the environment did affect our evolution and thus the way we are today, the evolution and anthropology of pre-historic societies are emphasized to a much greater extent than rational thought would indicate is appropriate.

As a matter of course, you should remember these points whenever you hear a claim about prehistory:

  • Most of what we know (or guess) is based on less data than you would expect, and the publish or perish mentality is alive and well in the field of anthropology.
  • Most of the information is limited and technical, which means that anyone writing for a popular audience will have strong motivation to generalize and simplify.
  • It has been found time and time again that for almost any statement we can make about human culture and behavior, there is (or was) a society somewhere that serves as a counterexample. 
  • Anthropologists and members of related fields rarely have finely tuned critical thinking skills or a strong background in the philosophy of science, and they are highly motivated to come up with interpretations of results that match their previous theories and expectations. 

Results that you can reasonably have confidence in are framed in generalities, not absolutes. E.g., "The great majority of human cultures that we have observed have distinct and strong religious traditions", not "humans evolved to have religion". It may be true that we have areas in our brain that evolved not only 'consistent with holding religion' but 'specifically for the purpose of experiencing religion'... but it would be very hard to prove this second statement, and anyone who makes it should be regarded with suspicion. 

Perhaps more importantly, these statements are almost always a red herring. It may make you feel better that humans evolved to be violent, to fit in with the tribe, to eat meat, to be spiritual, to die at the age of thirty.... But rarely do we see these claims in a context where the stated purpose is to make you feel better. Instead they are couched in language indicating that they are making a normative statement -- that this is the way things in some way should be. (This is specifically the argumentum ad antiquitatem or appeal to tradition, and should not be confused with the historical fallacy, but it is certainly a fallacy). 

It is fine to identify, for example, that your fear of flying has an evolutionary basis. However, it is foolish to therefore refuse to fly because flying is unnatural, or to undertake gene therapy to correct the fear. Whether or not the explanation is valid, it carries no practical implication. 

Obviously, this doesn't mean that we shouldn't study evolution or the effects evolution has on behavior. However, any time you hear someone refer to this information in order to support any argument outside the fields of biology or anthropology, you should look carefully at why they are taking the time to distract you from the practical implications of the matter under discussion. 

 

Visions and Mirages: The Sunk Cost Dilemma

-8 OrphanWilde 20 May 2015 08:56PM

Summary

How should a rational agent handle the Sunk Cost Dilemma?

Introduction

You have a goal, and set out to achieve it.  Step by step, iteration by iteration, you make steady progress towards completion - but never actually get any closer.  You're deliberately not engaging in the sunk cost fallacy - at no point does the perceived cost of completion get higher.  But at each step, you discover another step you didn't originally anticipate, and had no priors for anticipating.

You're rational.  You know you shouldn't count sunk costs in the total cost of the project.  But you're now into twice as much effort as you would have originally invested, and have done everything you originally thought you'd need to do, but have just as much work ahead of you as when you started.

Worse, each additional step is novel; the five additional steps you discovered after completing step 6 did nothing to help you predict the twelve additional steps you added after completing step 19.  And after step 35, when you discovered another step, you updated your priors to account for your incorrect original estimate - and the project is still worth completing.  Over and over.  All you can conclude is that your original priors were unreliable.  Each update to your priors, however, doesn't change the fact that the remaining cost is always worth paying to complete the project.

You are starting to feel like you are caught in a penny auction for your time.

When do you give up your original goal as a mirage?  At what point do you give up entirely?
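A toy simulation of the dilemma (every number is an illustrative assumption, not from the post): the project's value always exceeds the estimated remaining cost, so an agent who strictly ignores sunk costs keeps paying, while the total spent grows without bound.

```python
# Toy model of the dilemma; every number here is an illustrative assumption.
project_value = 100       # payoff for finishing
remaining_estimate = 60   # current estimate of the cost to complete
step_cost = 10

sunk = 0
for _ in range(20):
    # Ignoring sunk costs, continuing always looks worthwhile...
    assert remaining_estimate < project_value
    sunk += step_cost
    # ...but each completed step reveals a new one, so the estimate
    # of the remaining work never shrinks.
    remaining_estimate = 60

print(f"spent {sunk}; remaining estimate still {remaining_estimate}")
# -> spent 200; remaining estimate still 60
```

A precommitment of the kind discussed under Solutions amounts to adding a stopping rule, e.g. `if sunk + remaining_estimate > budget: break`, where `budget` is a hypothetical cap set in advance.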

Solutions

The trivial option is to just keep going.  Sometimes this is the only viable strategy: your goal is mandatory, and there are no alternative solutions to consider.  There's no guarantee you'll finish in any finite amount of time, however.

One option is to precommit; set a specific level of effort you're willing to engage in before stopping progress, and possibly starting over from scratch if relevant.  When bugfixing someone else's code on a deadline, my personal policy is to set aside enough time at the end of the deadline to write the code from scratch and debug that (the code I write is not nearly as buggy as that which I'm usually working on).  Commitment of this sort can work in situations in which there are alternative solutions or when the goal is disposable.

Another option is to include sunk costs in your reasoning, but discounted; updating your priors is one way of doing this, but it isn't guaranteed to successfully navigate you through the dilemma.

Unfortunately, there isn't a general solution.  If there were, IT would be a very different industry.

Summary

The Sunk Cost Fallacy is best described as a frequently-faulty heuristic.  There are game-theoretic ways of extracting value from those who follow a strict policy of never counting sunk costs, and they happen all the time in IT -- frequent requirement changes to fixed-cost projects are a good example (which can cut both ways, depending on how the contract and requirements are structured).  It is best to always have an exit policy prepared.

Related Less Wrong Post Links

http://lesswrong.com/lw/at/sunk_cost_fallacy/ - A description of the Sunk Cost Fallacy

http://lesswrong.com/lw/9si/is_sunk_cost_fallacy_a_fallacy/ - Arguments that the Sunk Cost Fallacy may be misrepresented

http://lesswrong.com/lw/9jy/sunk_costs_fallacy_fallacy/ - The Sunk Cost Fallacy can be easily used to rationalize giving up

ETA: Post Mortem

Since somebody has figured out the game now, an explanation: Everybody who spent time writing a comment insisting you -could- get the calculations correct, and the imaginary calculations were simply incorrect?  I mugged you.  The problem is in doing the calculations -instead of- trying to figure out what was actually going on.  You forgot there was another agent in the system with different objectives from your own.  Here, I mugged you for a few seconds or maybe minutes of your time; in real life, that would be hours, weeks, months, or your money, as you keep assuming that it's your own mistake.

Maybe it is a buggy open-source library that has a bug-free proprietary version you pay for - get you in the door, then charge you money when it's more expensive to back out than to continue.  Maybe it's somebody who silently and continually moves work to your side of the fence on a collaborative project, when it's more expensive to back out than to continue.  Not counting all your costs opens you up to exploitative behaviors which add costs at the back-end.

In this case I was able to mug you in part because you didn't like the hypothetical, and fought it.  Fighting the hypothetical will always reveal something about yourself - in this case, fighting the hypothetical revealed that you were exploitable.

In real life I'd be able to mug you because you'd assume someone had fallen prone to the Planning Fallacy, as you assumed must have happened in the hypothetical.  In the case of the hypothetical, an evil god - me - was deliberately manipulating events so that the project would never be completed (Notice what role the -author- of that hypothetical played in that hypothetical, and what role -you- played?).  In real life, you don't need evil gods - just other people who see you as an exploitable resource, and will keep mugging you until you catch on to what they're doing.

Experience of typical mind fallacy.

2 Elo 27 April 2015 06:39PM

following on from:

http://lesswrong.com/lw/dr/generalizing_from_one_example/

I am quite sure that at some point between the ages of 10 and 15 I concluded: "No, the rest of the world does not think like me; I think in an unusual way."

This idea disagrees with the typical mind fallacy (where people outwardly generalise to think everyone else has similar minds to their own).

I suspect I started with a typical mind model of the world but at some point it broke badly enough that I re-modelled on "I just think differently to most others".

I wanted to start a new discussion rather than continuing one from 2009.

Where do your experiences lie in relation to typical minds?

The representational fallacy

1 DanielDeRossi 25 June 2014 11:28AM

Basically Heather Dyke argues that metaphysicians are too often arguing from representations of reality (eg in language) to reality itself.

It looks to me like a variant of the mind projection fallacy. This might be the first book-length treatment the fallacy has gotten, though.  What do people think?

 

See reviews here

https://www.sendspace.com/file/k5x8sy

https://ndpr.nd.edu/news/23820-metaphysics-and-the-representational-fallacy/

To give a bit of background, there's a debate between A-theorists and B-theorists in the philosophy of time.

A-theorists think time has ontological distinctions between past, present, and future.

B-theorists hold there is no ontological distinction between past, present, and future.

Dyke argues that a popular argument for A-theory (that tensed language represents ontological distinctions) commits the representational fallacy. Bourne agrees, but points out that an argument Dyke uses for B-theory commits the same fallacy.

How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence

9 mszegedy 24 April 2014 09:41PM

It took me until my third reading of The Things They Carried to realize that it contained something very valuable to rationalists. In "The Logical Fallacy of Generalization from Fictional Evidence," EY explains how using fiction as evidence is bad not only because fiction is deliberately wrong in particular ways to make it more interesting, but more importantly because it does not provide a probabilistic model of what happened, and gives at best a bit or two of evidence that looks like a hundred or more bits of evidence.

Some background: The Things They Carried is a book by Tim O'Brien that reads as an autobiography in which he recollects various stories from being a soldier in the Vietnam War. However, O'Brien often repeats himself, writing the same story over again, but with details or entire events changed. It is actually a fictional autobiography; O'Brien was in the Vietnam War, but all the stories are fictional.

In The Things They Carried, Tim O'Brien not only explains why generalization from fictional evidence is bad, but also has his own solution to the problem that actually works, i.e. gives the reader a useful probabilistic model of what happened in a way that actually interests the reader. He does this by telling his stories many times, changing significant things about them each time; he literally contradicts himself. The best illustration of the principle in the book is the chapter "How to Tell a True War Story," found here (PDF warning, and bad typesetting warning).

A reader is not inclined to read a list of probabilities, but they are inclined to read a bunch of short stories. He talks about this practice a lot in the book itself, writing, "All you can do is tell it one more time, patiently, adding and subtracting, making up a few things to get at the real truth. … You can tell a true war story if you just keep on telling it." He always says war story, but the principle generalizes. At one point, he has a character represent the forces that act on conventional writing, telling a storyteller that he cannot say that he doesn't know what happened, and that he cannot insert any analysis.

O'Brien also writes about a lot of other things I don't want to mention more than briefly here, such as the specific ways in which the model that conventional war stories give of war is wrong, and specific ways in which the audience misinterprets stories. I recommend the book very much, especially if you think writing "tell multiple short stories" fiction is a great idea and want to do it.

I apologize if this post has been made before.

EDIT: Tried to clarify the idea better. I added an example with an excerpt.

EDIT 2: Added a better excerpt.

EDIT 3: Added a paragraph about background.

The Relation Projection Fallacy and the purpose of life

67 Academian 28 December 2012 04:14AM

I bet most people here have realized this explicitly or implicitly, but this comment has inspired me to write a short, linkable summary of this error pattern, with a name:

The Relation Projection Fallacy: a denotational error whereby one mistakes an n-ary relation for an m-ary relation, where usually m<n.

Example instance: "Life has no purpose."

This is a troublesome phrase.  Why?  If you look at unobjectionable uses of the concept <purpose> --- also referenced by synonyms like "having a point" --- it is in fact a ternary relation.

Example non-instance: "The purpose of a doorstop is to stop doors."

Here, one can query "to whom?" and be returned the context "to the person who made it" or "to the person who's using it", etc.  That is, the full denotation of "purpose" is always of the form "The purpose of X to Y is Z," where Y is often implicit or can take a wide range of values.

This has nothing to do with connotation... it's just how the concept <purpose> typically works as people use it.  But to flog a dead horse, the purpose of a doorstop to a cat may be to make an amusing sound as it glides across the floor after the cat hits it.  The value of Y always matters.  There is no "true purpose" stored anywhere inside the doorstop, or even in the combination of the doorstop and the door it is stopping.  To think otherwise is literally projecting, in the mathematical sense, a ternary relation, i.e., a subset of a product of three sets (objects)x(agents)x(verbs), into a product of two sets, (objects)x(verbs).  But people often do this projection incorrectly, by either searching for a purpose that is intrinsic to the Doorstop or to Life, or by searching for a canonical value of "Y" like "The Great Arbiter of Purpose", both of which are not to be found, at least to their satisfaction when they utter the phrase "Life has no purpose."

Likewise, the relation "has a purpose" is typically a binary relation, because again, we can always ask "to whom?".  "<That doorstop> has a purpose to <me>."
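The projection can be made concrete with sets of tuples, following the post's (objects)x(agents)x(verbs) framing; the triples below are invented to match the doorstop example.

```python
# <purpose> as a ternary relation: a set of (object, agent, verb) triples.
# The triples are made up to match the doorstop example above.
PURPOSE = {
    ("doorstop", "its maker", "stop doors"),
    ("doorstop", "the cat", "make an amusing sound"),
}

def project_away_agent(relation):
    """The lossy move: project the ternary relation onto (object, verb)."""
    return {(obj, verb) for (obj, _agent, verb) in relation}

# After projection the doorstop has two bare "purposes" and no fact of the
# matter about which is "the true" one; the agent slot carried that information.
print(sorted(project_away_agent(PURPOSE)))
```

Asking for "the purpose" of the doorstop (or of life) is asking the projected binary relation a question only the ternary one can answer.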

In some form, this realization is of course the cause of many schools of thought taking the name "relativist" on many different issues.  But I find that people over-use the phrase "It's all relative" to connote "It's all meaningless" or "there is no answer".  Which is ironic, because meaning itself is a ternary relation!  Its typical denotation is of the form "The meaning of X to Y is Z", like in

  • "The meaning of <the sound 'owe'> to <French people> is <liquid water>" or
  • "The meaning of <that pendant> to <your mother> is <a certain undescribed experience of sentimentality>".

Realizing this should NOT result in a cascade of bottomless relativism where nothing means anything!  In fact, the first time I had this thought as a kid, I arrived at the connotationally pleasing conclusion "My life can have as many purposes as there are agents for it to have a purpose to."

Indeed, the meaning of <"purpose"> to <humans> is <a certain ternary functional relationship between objects, agents, and verbs>, and the meaning of <"meaning"> to <humans> is <a certain ternary relationship between syntactic elements, people generating or perceiving them, and referents>. 

When I found LessWrong, I was happy to find that Eliezer wrote on almost exactly this realization in 2-Place and 1-Place Words, but sad that the post had few upvotes -- only 14 right now.  So in case it was too long, or didn't have a snappy enough name, I thought I'd try giving the idea another shot.

 


 

ETA: In the special case of talking to someone wondering about the purpose of life, here is how I would use this observation in the form of an argument:

First of all, you may be lacking satisfaction in your life for some reason, and framing this to yourself in philosophical terms like "Life has no purpose, because <argument>."  If that's true, it's quite likely that you'd feel differently if your emotional needs as a social primate were being met, and in that sense the solution is not an "answer" but rather some actions that will result in these needs being met.  

Still, that does not address the <argument>.  So because "What is the purpose of life?" may be a hard question, let's look at easier examples of purpose and see how they work.  Notice how they all have someone the purpose is to?  And how that's missing in your "purpose of life" question?  Because of that, you could end up feeling one of two ways: 

 (1) Satisfied, because now you can just ask "What could be the purpose of my life to <my friends, my family, myself, the world at large, etc>", and come up with answers, or 

 (2) Unsatisfied, because there is no agent to ask about such that the answer would seem important enough to you.

And I claim that whether you end up at (1) or (2) is probably more a function of whether your social primate emotional needs are being met than any particular philosophical argument.

That being said, if you believe this argument, the best thing to do for someone lacking a sense of purpose is probably not to just say the argument, but to help them start satisfying their emotional needs, and have this argument mainly to satisfy their sense of curiosity or nagging intellectual doubts about the issue.

Is Omega Impossible? Can we even ask?

-8 mwengler 24 October 2012 02:47PM

EDIT: I see by the karma bombing we can't even ask.  Why even call this part of the site "discussion?"  

 

Some of the classic questions about an omnipotent god include

 

  1. Can god make a square circle?
  2. Can god create an immovable object?  And then move it?
Saints and philosophers wrestled with these issues back before there was television.  My recollection is that people who liked the idea of an omnipotent god would answer "omnipotence does not include the power to do nonsense" where they would generally include contradictions as nonsense.  So omnipotence can't square a circle, can't make 2=3, can't make an atom which is simultaneously lead and gold.  

But where do the contradictions end and the merely difficult to conceive begin?  Can omnipotence make the ratio of the circumference to the diameter of a circle equal 3, or 22/7?  Can omnipotence make sqrt(2)=1.4 or 2+2=5?  While these are not directly self-contradictory statements, they can be combined with a variety of simple truths to quickly derive self-contradictory statements.  Can we then conclude that "2+2=5" is essentially a contradiction because it is a few steps from a contradiction?  Where do we draw the line?  

What if we were set some problem where we are told to assume that 
  1. 2+2 = 5
  2. 1+1 = 2
  3. 1+1+1+1+1 = 5
In solving this set problem, we can quickly derive that 1=0, and use that to prove effectively anything we want to prove.  Perhaps not formally, but we have violated the law of non-contradiction: we now have statements that are both true and false.  Once you allow that, the principle of explosion lets you prove ANYTHING using simple laws of inference.  
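Spelled out, the derivation needs only a couple of substitutions:

```latex
\begin{align*}
1+1+1+1 &= 2+2 && \text{by assumption (2)}\\
        &= 5 && \text{by assumption (1)}\\
        &= 1+1+1+1+1 && \text{by assumption (3)}\\
\implies\quad 0 &= 1 && \text{cancelling } 1+1+1+1 \text{ from both sides.}
\end{align*}
```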

What if we set a problem where we are told to assume
  1. Omega is an infallible intelligence that does not lie
  2. Omega tells you 2+2=5
Well, we are going to have the same problem as above, we will be able to prove anything.

Newcomb's Problem

In Newcomb's box problem, we are told to assume that
  1. Omega is an infallible intelligence
  2. Omega has predicted correctly whether we will one box or two box.  
From these assumptions we wind up with all sorts of problems of causality and/or free will and/or determinism.  

What if these statements are not consistent?  What if these statements are tantamount to assuming 0=1, or are within a few steps of assuming 0=1?  Or something just as contradictory, but harder to identify?  

Personally, I can think of LOTS of reasons to doubt that Newcomb's problem is even theoretically possible to set.  Beyond that, the empirical barrier to believing Omega exists in reality would be gigantic: millions of humans have watched magic shows performed by non-superior intelligences where cards we have signed turn up in a previously sealed envelope or wallet or audience member's pocket.  We recognize that these are tricks, that they are not what they appear.  

To question Omega is not playing by the mathematician's or philosopher's rules.  But when we play by the rules, do we blithely assume 2+2=5 and then wrap ourselves around the logical axle trying to program a friendly AI to one-box?  Why is questioning Omega's possibility of existence, or possibility of proof of existence out-of-bounds?  

 

The Fallacy of Large Numbers

20 dspeyer 12 August 2012 06:39PM

I've been seeing this a lot lately, and I don't think it's been written about here before.

Let's start with a motivating example.  Suppose you have a fleet of 100 cars (or horses, or people, or whatever).  For any given car, on any given day, there's a 3% chance that it'll be out for repairs (or sick, or attending grandmothers' funerals, or whatever).  For simplicity's sake, assume all failures are uncorrelated.  How many cars can you afford to offer to customers each day?  Take a moment to think of a number.

Well, 3% failure means 97% success.  So we expect 97 to be available and can afford to offer 97.  Does that sound good?  Take a moment to answer.

Well, maybe not so good.  Sometimes we'll get unlucky.  And not being able to deliver on a contract is painful.  Maybe we should reserve 4 and only offer 96.  Or maybe we'll play it very safe and reserve twice the needed number.  6 in reserve, 94 for customers.  But is that overkill?  Take note of what you're thinking now.

The likelihood of having more than 4 unavailable is 18%.  The likelihood of having more than 6 unavailable is 3.1%.  About once a month.  Even reserving 8, requiring 9 failures to get you in trouble, gets you in trouble 0.3% of the time.  More than once a year.  Reserving 9 -- three times the expected -- gets the risk down to 0.087% or a little less than every three years.  A number we can finally feel safe with.
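The binomial probabilities quoted above are easy to spot-check exactly with Python's arbitrary-precision integers (no statistics package needed):

```python
from math import comb

def p_more_than(n: int, p: float, k: int) -> float:
    """P(X > k) for X ~ Binomial(n, p): one minus the CDF at k."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 100 cars, 3% independent daily failure chance -- the post's numbers.
for reserve in (4, 6, 8, 9):
    risk = p_more_than(100, 0.03, reserve)
    print(f"reserve {reserve}: shortfall on {risk:.3%} of days")
```

Running this reproduces the 18%, 3.1%, 0.3%, and 0.087% figures above.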

So much for expected values.  What happened to the Law of Large Numbers?  Short answer: 100 isn't large.

The Law of Large Numbers states that for sufficiently large samples, the results look like the expected value (for any reasonable definition of like).

The Fallacy of Large Numbers states that your numbers are sufficiently large.

This doesn't just apply to expected values.  It also applies to looking at a noisy signal and handwaving that the noise will average away with repeated measurements.  Before you can say something like that, you need to look at how many measurements, and how much noise, and crank out a lot of calculations.  This variant is particularly tricky because you often don't have numbers on how much noise there is, making it hard to do the calculation.  When the calculation is hard, the handwave is more tempting.  That doesn't make it more accurate.
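A sketch of why the handwave needs checking, with made-up signal and noise sizes: averaging n measurements shrinks the noise only by a factor of 1/sqrt(n), so a signal of 0.1 buried in noise of standard deviation 1.0 is still completely swamped at n = 100.

```python
import random
import statistics

random.seed(0)
signal, noise_sd = 0.1, 1.0  # made-up sizes, for illustration only

def mean_of_noisy(n: int) -> float:
    """Average n noisy measurements of the signal."""
    return statistics.fmean(signal + random.gauss(0, noise_sd) for _ in range(n))

# Standard error of the mean = noise_sd / sqrt(n); with n = 100 that is 0.1,
# exactly the size of the signal -- averaging has not made the noise go away.
for n in (100, 10_000):
    print(n, round(mean_of_noisy(n), 3))
```

The point is that "it averages out" is a claim about noise_sd / sqrt(n) versus the effect size, and you have to actually compute both.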

I don't know of any general tools for saying when statistical approximations become safe.  The best thing I know is to spot-check like I did above.  Brute-forcing combinatorics sounds scary, but Wolfram Alpha can be your friend (as above).  So can Python, which has native bignum support.  Python has a reputation for being slow at number crunching, but with n<1000 and a modern CPU it usually doesn't matter.

One warning sign is if your tools were developed in a very different context than where you're using them.  Some approximations were invented for dealing with radioactive decay, where n resembles Avogadro's Number.  Applying these tools to the American population is risky.  Some were developed for the American population.  Applying them to students in your classroom is risky.

Another danger is that your dataset can shrink.  If you've validated your tools for your entire dataset, and then thrown out some datapoints and divided the rest along several axes, don't be surprised if some of your data subsets are now too small for your tools.

This fallacy is related to "assuming events are uncorrelated" and "assuming distributions are normal".  It's a special case of "choosing statistical tools based on how easy they are to use whether they're applicable to your use-case or not".

Fine-Tuned Mind Projection

3 Alexandros 29 November 2010 12:08AM

The Fine-Tuning Argument (henceforth FTA) is the pet argument of many a religious apologist, allowing them as it does to build support for their theistic thesis on the findings of cosmology. The basic premise is this: the laws of nature appear to contain constants that, if changed slightly, would yield universes inhospitable to life. Even though a lot can be said about this premise, let's assume it is true for the purposes of this article.

Luke Muehlhauser over at Common Sense Atheism recently wrote an article pointing out what I think is a central flaw of the FTA. To summarise, he notes that there are multitudes of propositions that are true of this universe and would not be true in a different universe: for instance, galaxies, or Luke's tongue-in-cheek example, iPads. If you accept that the universe is fine-tuned for life, you also have to accept that it's fine-tuned for galaxies and iPads, given that some changes in the fine-tuned constants would not produce galaxies, and certainly not iPads. 

So the question posed to defenders of the FTA is "why life?" Why focus on this particular fact? What is it that sets life apart from all the other propositions true about our universe but not about other possible universes? The usual answer is that life stands out, being valuable in ways that galaxies, iPads, and all the other true propositions are not. It seems that this is an unstated premise of the FTA. But where does that premise come from? Physics gives us no instrument to measure value, so how did this concept get into what was supposed to be a cosmology-based argument?

I present the FTA here as an argument that, while seemingly complex, simply evaporates in light of the Mind Projection Fallacy. Knowing that humans tend to confuse 'I see X as valuable' with 'X is valuable', the provenance of the hidden premise 'life is valuable' is laid bare, as is the identity of the agent doing the valuing: it is us. With the mystery solved, explaining why humans find life valuable does not require us to go to the extreme lengths of introducing a non-naturalistic cause for the universe.

Without any support for life being special in some way, the FTA devolves into a straightforward case of Texas Sharpshooter Fallacy: There exists life, our god would have wanted to create life, therefore our god is real! Not quite as compelling.