
The noncentral fallacy - the worst argument in the world?

157 Yvain 27 August 2012 03:36AM

Related to: Leaky Generalizations, Replace the Symbol With The Substance, Sneaking In Connotations

David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.

If he can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this: "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member."

Call it the Noncentral Fallacy. It sounds dumb when you put it like that. Who even does that, anyway?

It sounds dumb only because we are talking soberly of categories and features. As soon as the argument gets framed in ordinary words, it becomes powerful enough that somewhere between many and most of the bad arguments in politics, philosophy, and culture take some form of the noncentral fallacy. Before we get to those, let's look at a simpler example.

Suppose someone wants to build a statue honoring Martin Luther King Jr. for his nonviolent resistance to racism. An opponent of the statue objects: "But Martin Luther King was a criminal!"

Any historian can confirm this is correct. A criminal is technically someone who breaks the law, and King knowingly broke laws forbidding his peaceful anti-segregation protests - hence his famous Letter from Birmingham Jail.

But in this case calling Martin Luther King a criminal is the noncentral fallacy. The archetypal criminal is a mugger or bank robber. He is driven only by greed, preys on the innocent, and weakens the fabric of society. Since we don't like these things, calling someone a "criminal" naturally lowers our opinion of them.

The opponent is saying "Because you don't like criminals, and Martin Luther King is a criminal, you should stop liking Martin Luther King." But King doesn't share the important criminal features of being driven by greed, preying on the innocent, or weakening the fabric of society that made us dislike criminals in the first place. Therefore, even though he is a criminal, there is no reason to dislike King.

This all seems so nice and logical when it's presented in this format. Unfortunately, it's also one hundred percent contrary to instinct: the urge is to respond "Martin Luther King? A criminal? No he wasn't! You take that back!" This is why the noncentral fallacy is so successful: as soon as you respond that way, you've fallen into the trap. Your argument is no longer about whether you should build a statue; it's about whether King was a criminal. Since he was, you have now lost the argument.

Ideally, you should just be able to say "Well, King was the good kind of criminal." But that seems pretty tough as a debating maneuver, and it may be even harder in some of the cases where the noncentral fallacy is commonly used.

continue reading »

To Learn Critical Thinking, Study Critical Thinking

26 gwern 07 July 2012 11:50PM

Critical thinking courses may increase students’ rationality, especially if they do argument mapping.

The following excerpts are from “Does philosophy improve critical thinking skills?”, Ortiz 2007.

1 Excerpts

This thesis makes a first attempt to subject the assumption that studying [Anglo-American analytic] philosophy improves critical thinking skills to rigorous investigation.

…Thus the second task, in Chapter 3, is to articulate and critically examine the standard arguments that are raised in support of the assumption (or rather, would be raised if philosophers were in the habit of providing support for the assumption). These arguments are found to be too weak to establish the truth of the assumption. The failure of the standard arguments leaves open the question of whether the assumption is in fact true. The thesis argues at this point that, since the assumption is making an empirical assertion, it should be investigated using standard empirical techniques as developed in the social sciences. In Chapter 4, I conduct an informal review of the empirical literature. The review finds that evidence from the existing empirical literature is inconclusive. Chapter 5 presents the empirical core of the thesis. I use the technique of meta-analysis to integrate data from a large number of empirical studies. This meta-analysis gives us the best yet fix on the extent to which critical thinking skills improve over a semester of studying philosophy, general university study, and studying critical thinking. The meta-analysis results indicate that students do improve while studying philosophy, and apparently more so than general university students, though we cannot be very confident that this difference is not just the result of random variation. More importantly, studying philosophy is less effective than studying critical thinking, regardless of whether one is being taught in a philosophy department or in some other department. Finally, studying philosophy is much less effective than studying critical thinking using techniques known to be particularly effective such as LAMP.
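For readers unfamiliar with the technique: a meta-analysis pools effect sizes across studies, typically weighting each study by the inverse of its variance so that more precise studies count for more. A minimal fixed-effect sketch (my own illustration with invented numbers, not Ortiz's data or code):

```python
# Minimal fixed-effect meta-analysis: pool per-study effect sizes d_i
# with inverse-variance weights. Numbers below are invented.
import math

# (effect size, variance) for hypothetical one-semester CT studies
studies = [(0.30, 0.02), (0.55, 0.05), (0.42, 0.01)]

weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect d = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI)")
```

Comparing such pooled effects across the philosophy, general-university, and critical-thinking groups is what licenses the thesis's relative claims above.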

continue reading »

A few analogies to illustrate key rationality points

50 kilobug 09 October 2011 01:00PM

Introduction

Due to long inferential distances, it's often very difficult to use knowledge or understanding given by rationality in a discussion with someone who isn't versed in the Art (like a poor soul who hasn't read the Sequences, or maybe not even Gödel, Escher, Bach!). So I often find myself forced to use analogies, which will necessarily be more-or-less surface analogies; they don't prove anything, nor give any technical understanding, but they allow someone to grasp a complicated issue in a few minutes.

A tale of chess and politics

Once upon a time, a boat sank and a group of people found themselves isolated on an island. None of them knew the rules of the game "chess", but there was a solar-powered portable chess computer on the boat. A very simple one, with no AI, but one which would enforce the rules. Quickly, the survivors discovered the joy of chess, deducing the rules by trying moves and seeing the computer say "illegal move" or "legal move", and seeing it proclaim victory, defeat, or a draw.

So they learned the rules of chess: the movement of the pieces, what "check" and "checkmate" are, how you can promote pawns, ... And they understood the planning and strategy skills required to win the game. Chess became linked to politics; it was the Game, with a capital letter, and every year they would organize a chess tournament whose winner, the smartest of the community, would become the leader for one year.

One sunny day, a young fellow named Hari, playing with his brother Salvor (yes, I'm an Asimov fan), discovered a new chess move: he discovered he could castle. In one move, he could free his rook and protect his king. They kept the discovery secret and used it in the tournament. Winning his games, Hari became the leader.

Soon after, people started to use the power of castling as much as they could. They even sacrificed pieces, even their queens, just to be able to castle fast. Everyone was trying to castle as quickly as possible, and they were losing sight of the final goal (winning) in favor of the intermediate goal (castling).

continue reading »

Pancritical Rationalism Can Apply to Preferences and Behavior

1 TimFreeman 25 May 2011 12:06PM

ETA: As stated below, criticizing beliefs is trivial in principle: either they were arrived at with an approximation to Bayes' rule, starting with a reasonable prior and updated with actual observations, or they weren't.  Subsequent conversation made it clear that criticizing behavior is also trivial in principle, since someone is either taking the action that they believe will best suit their preferences, or not.  Finally, criticizing preferences became trivial too: the relevant question is "Does/will agent X behave as though they have preferences Y", and that's a belief, so go back to Bayes' rule and a reasonable prior. So the entire issue that this post was meant to solve has evaporated, in my opinion. Here's the original article, in case anyone is still interested:
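For concreteness, here is the Bayes'-rule test from the ETA as a toy calculation (my own illustration, not from the post): a posterior is just the prior reweighted by the likelihood of the observation and renormalized.

```python
# Toy Bayes'-rule update: P(H|E) is proportional to P(E|H) * P(H).
# Hypotheses and numbers are invented for illustration.
prior = {"fair coin": 0.5, "two-headed coin": 0.5}       # a "reasonable prior"
likelihood = {"fair coin": 0.5, "two-headed coin": 1.0}  # P(heads | H)

# Observe one flip come up heads, then renormalize.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)   # {'fair coin': 0.333..., 'two-headed coin': 0.666...}
```

A belief counts as well-formed, on the ETA's criterion, exactly when it could have been produced by updates of this shape from a reasonable prior.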

Pancritical rationalism is a fundamental value in Extropianism that has only been mentioned in passing on LessWrong. I think it deserves more attention here. It's an approach to epistemology, that is, the question of "How do we know what we know?", that avoids the contradictions inherent in some of the alternative approaches.

The fundamental source document for it is William Bartley's Retreat to Commitment. He describes three approaches to epistemology, along with the dissatisfying aspects of the first two:

  • Nihilism. Nothing matters, so it doesn't matter what you believe. This path is self-consistent, but it gives no guidance.
  • Justificationalism. Your belief is justified because it is a consequence of other beliefs. This path is self-contradictory: eventually you'll go in circles trying to justify the other beliefs, or you'll find beliefs you can't justify. Justificationalism itself cannot be justified.
  • Pancritical rationalism. You have taken the available criticisms of the belief into account and still feel comfortable with the belief. This path gives guidance about what to believe, although it does not uniquely determine one's beliefs. Pancritical rationalism can be criticized, so it is self-consistent in that sense.

Read on for a discussion about emotional consequences and extending this to include preferences and behaviors as well as beliefs.

continue reading »

How are critical thinking skills acquired? Five perspectives

9 matt 22 October 2010 02:29AM

Link to source: http://timvangelder.com/2010/10/20/how-are-critical-thinking-skills-acquired-five-perspectives/
Previous LW discussion of argument mapping: Argument Maps Improve Critical Thinking; Debate tools: an experience report

In "How are critical thinking skills acquired? Five perspectives", Tim van Gelder discusses the acquisition of critical thinking skills, suggesting several theories of skill acquisition that don't work, and one with which he and hundreds of his students have had significant success.

In our work in the Reason Project at the University of Melbourne we refined the Practice perspective into what we called the Quality (or Deliberate) Practice Hypothesis. This was based on the foundational work of Ericsson and others who have shown that skill acquisition in general depends on extensive quality practice. We conjectured that this would also be true of critical thinking; i.e. critical thinking skills would be (best) acquired by doing lots and lots of good-quality practice on a wide range of real (or realistic) critical thinking problems. To improve the quality of practice we developed a training program based around the use of argument mapping, resulting in what has been called the LAMP (Lots of Argument Mapping) approach. In a series of rigorous (or rather, as-rigorous-as-possible-under-the-circumstances) studies involving pre-, post- and follow-up testing using a variety of tests, and setting our results in the context of a meta-analysis of hundreds of other studies of critical thinking gains, we were able to establish that critical thinking skills gains could be dramatically accelerated, with students reliably improving 7-8 times faster, over one semester, than they would otherwise have done just as university students. (For some of the detail on the Quality Practice hypothesis and our studies, see this paper, and this chapter.)

LW has been introduced to argument mapping before.

Link: "You're Not Being Reasonable"

12 CronoDAS 15 September 2010 07:19AM

Thanks to David Brin, I've discovered a blogger, Michael Dobson, who has written, among other things, a fourteen-part series on cognitive biases. But that's not what I'm linking to today.

This is what I'm linking to:

You're Not Being Reasonable

I’m embarrassed to admit that I’ve been getting myself into more online arguments about politics and religion lately, and I’m not happy with either my own behavior or others’. All the cognitive biases are on display, and hardly anyone actually speaks to the other side. Unreasonableness is rampant.

The problem is that what’s reasonable tends to be subjective. Obviously, I’m going to be biased toward thinking people who agree with me are more reasonable than those lunkheads who don’t. But that doesn’t mean there aren’t objective standards for being reasonable.

...

I learned some of the following through observation, and most of it through the contrary experience of doing it wrong. You’ve heard some of the advice elsewhere, but a reminder every once in a while comes in handy.

Yes, much of it is pretty basic stuff, but as he says, a reminder every once in a while comes in handy, and this is as good a summary of the rules for having a reasonable discussion as I've seen anywhere.

And the rest of the blog seems pretty good, too. (Did I mention the fourteen-part series on cognitive biases?)

Anthropomorphic AI and Sandboxed Virtual Universes

-3 jacob_cannell 03 September 2010 07:02PM

Intro

The problem of Friendly AI is usually approached from a decision-theoretic background that starts with the assumptions that the AI is an agent aware of itself as an AI and of its goals, aware of humans as potential collaborators and/or obstacles, and generally aware of the greater outside world.  The task is then to create an AI that implements a human-friendly decision theory that remains human-friendly even after extensive self-modification.

That is a noble goal, but there is a whole different set of orthogonal, compatible strategies for creating human-friendly AI that take a completely different route: remove the starting assumptions and create AIs that believe they are humans, and that are rational in so believing.

continue reading »

Dreams of AIXI

-1 jacob_cannell 30 August 2010 10:15PM

Implications of the Theory of Universal Intelligence

If you hold the AIXI theory of universal intelligence to be correct, that is, a useful model of general intelligence at the quantitative limits, then you should take the Simulation Argument seriously.

AIXI shows us the structure of universal intelligence as computation approaches infinity.  Imagine that we had an infinite or near-infinite Turing Machine.  There then exists a relatively simple 'brute force' optimal algorithm for universal intelligence. 

Armed with such massive computation, we could take all of our current observational data and then run a weighted search through the space of all possible programs that correctly predict this sequence (in this case, all the data we have accumulated to date about our small observable slice of the universe).  AIXI in raw form is not computable (because of the halting problem), but the slightly modified time-limited version is, and this is still universal and optimal.

The philosophical implication is that running such an algorithm on an infinite Turing Machine would have the interesting side effect of actually creating all such universes.

AIXI’s mechanics, based on Solomonoff Induction, bias against complex programs with an exponential falloff (2^-l(p)), a mechanism similar to the principle of Occam’s Razor.  The bias against longer (and thus more complex) programs lends strong support to the goal of string theorists, who are attempting to find a simple, shorter program that can unify all current physical theories into a single compact description of our universe.  We must note that, to date, efforts towards this admirable (and well-justified) goal have not borne fruit.  We may actually find that the simplest algorithm that explains our universe is more ad hoc and complex than we would desire it to be.  But leaving that aside, imagine that there is some relatively simple program that concisely explains our universe.
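To make the 2^-l(p) weighting concrete, here is a toy sketch (my own construction, not from the post). It uses a deliberately trivial "language" in which a program is a bit string repeated cyclically; real Solomonoff induction sums over all programs of a universal machine and is incomputable.

```python
# Toy Solomonoff-style induction: enumerate short "programs", keep those
# whose output matches the observed data, and weight each survivor by
# 2^(-length).
from itertools import product

def run_program(bits, n_out):
    """A program is a bit string repeated cyclically for n_out outputs."""
    return [bits[i % len(bits)] for i in range(n_out)]

def solomonoff_weights(observed, max_len=12):
    """Weight every program up to max_len bits by 2^-length if its
    output reproduces the observed sequence."""
    weights = {}
    for length in range(1, max_len + 1):
        for prog in product([0, 1], repeat=length):
            if run_program(prog, len(observed)) == list(observed):
                weights[prog] = 2.0 ** (-length)
    return weights

observed = [1, 0, 1, 0, 1, 0]                      # data seen so far
w = solomonoff_weights(observed)
total = sum(w.values())
for prog, weight in sorted(w.items(), key=lambda kv: -kv[1])[:3]:
    print(prog, round(weight / total, 3))          # shortest programs dominate
```

As in AIXI proper, the posterior concentrates on the shortest programs consistent with the data, which is the formal version of Occam's Razor the paragraph above appeals to.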

If we look at the history of the universe to date, from the Big Bang to our current moment in time, there appears to be a clear local telic evolutionary arrow towards greater X, where X is sometimes described as or associated with extropy, complexity, life, intelligence, computation, and so on.  It's also fairly clear that X (however quantified) is an exponential function of time.  Moore's Law is a specific example of this greater pattern.

This leads to a reasonable inductive assumption; let us call it the reasonable assumption of progress: local extropy will continue to increase exponentially for the foreseeable future, and thus so will intelligence and computation (both physical computational resources and algorithmic efficiency). The reasonable assumption of progress appears to be a universal trend, a fundamental emergent property of our physics.

Simulations

If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.

As our future descendants expand in computational resources and intelligence, they will approach the limits of universal intelligence.  AIXI says that any such powerful universal intelligence, no matter what its goals or motivations, will create many simulations which effectively are pocket universes.  

The AIXI model proposes that simulation is the core of intelligence (with human-like thoughts being simply one approximate algorithm), and as you approach the universal limits, the simulations which universal intelligences necessarily employ will approach the fidelity of real universes - complete with all the entailed trappings such as conscious simulated entities.

The reasonable assumption of progress modifies our big-picture view of cosmology and the predicted history and future of the universe.  A compact physical theory of our universe (or multiverse), when run forward on a sufficient Universal Turing Machine, will lead not to one single universe/multiverse, but to an entire ensemble of such multiverses embedded within each other in something like a hierarchy of Matryoshka dolls.

The number of possible levels of embedding and the branching factor at each step can be derived from physics itself, and although such derivations are preliminary and necessarily involve some significant unknowns (mainly related to the final physical limits of computation), suffice it to say that we have good evidence that the branching factor is absolutely massive, and that many levels of simulation embedding are possible.
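To see why a massive branching factor does most of the work here, consider a toy calculation with invented numbers (mine, not derived from physics):

```python
# Toy self-location arithmetic (illustrative numbers only): if each
# universe runs b child simulations and nesting goes d levels deep,
# simulated universes vastly outnumber the single base universe.
b, d = 1000, 3                       # hypothetical branching factor, depth

simulated = sum(b**k for k in range(1, d + 1))
print(simulated)                     # 1001001000 simulated universes
print(1 / (1 + simulated))           # chance of being the base level,
                                     # under a uniform self-location prior
```

Under a uniform prior over observers, the probability of being at the base level is about one in a billion even for these modest parameters.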

Some seem to have an intrinsic bias against the idea based solely on its strangeness.

Another common mistake stems from anthropomorphic bias: people tend to imagine the simulators as future versions of themselves.

The space of potential future minds is vast, and it is a failure of imagination on our part to assume that our descendants will be similar to us in details, especially when we have specific reasons to conclude that they will be vastly more complex.

Asking whether future intelligences will run simulations for entertainment or other purposes is not the right question, nor even the right mode of thought.  They may, they may not; it is difficult to predict future goal systems.  But those aren't the important questions anyway, as all universal intelligences will 'run' simulations, simply because that is precisely the core nature of intelligence itself.  As intelligence expands exponentially into the future, the simulations expand in quantity and fidelity.

The Ensemble of Multiverses

Some critics of the Simulation Argument (SA) rationalize their way out by advancing a position of ignorance concerning the set of possible external universes our simulation may be embedded within.  The reasoning then concludes that, since this set is essentially unknown, infinite, and uniformly distributed, the SA as such tells us nothing. These assumptions do not hold water.

Imagine our physical universe, and its minimal program encoding, as a point in a higher-dimensional space.  The entire aim of physics, in a sense, is related to AIXI itself: through physics we are searching for the simplest program that can consistently explain our observable universe.  As noted earlier, the SA then falls out naturally, because it appears that any universe of our type, when run forward, necessarily leads to a vast fractal hierarchy of embedded simulated universes.

At the apex is the base level of reality, and all the simulated universes below it correspond to slightly different points in the space of all potential universes, as they are all slight approximations of the original.  But would other points in the space of universe-generating programs also generate observed universes like our own?

We know that the fundamental constants in current physics are apparently well-tuned for life; thus our physics is a lone point in the topological space supporting complex life: even tiny displacements in any direction result in lifeless universes.  The topological space around our physics is thus sparse for life/complexity/extropy.  There may be other topological hotspots, and if you go far enough in some direction you will necessarily find other universes in Tegmark’s Ultimate Ensemble that support life.  However, AIXI tells us that intelligences in those universes will simulate universes similar to their own, and thus nothing like our universe.

On the other hand, we can expect our universe to be slightly different from its parent due to the constraints of simulation, and we may even eventually be able to discover evidence of the approximation itself.  There are some tentative hints in the long-standing failure to find a GUT (grand unified theory) of physics, and perhaps in the future we may find that our universe is an ad hoc approximation of a simpler (but more computationally expensive) GUT in the parent universe.

Alien Dreams

Our Milky Way galaxy is vast and old, consisting of hundreds of billions of stars, some of which are more than 13 billion years old, nearly three times the age of our sun.  We have direct evidence of one technological civilization developing, within 4 billion years, from simple protozoans, but it is difficult to generalize from this single example.  However, we now have mounting evidence that planets are common, that the biological precursors to life are probably common, and that simple life may even have had a historical presence on Mars; all signs point towards the principle of mediocrity: that our solar system is not a precious gem, but is in fact a typical random sample.

If the evidence for the mediocrity principle continues to mount, it provides further strong support for the Simulation Argument.  If we are not the first technological civilization to have arisen, then technological civilization arose and achieved Singularity long ago, and we are thus astronomically more likely to be in an alien rather than a posthuman simulation.

What does this change?

The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (posthuman future simulation being inconsistent).  If we discover that we are unlikely to be the first technological Singularity, we should assume AHS and AFS dominate.  For reasons beyond the scope of this post, I expect the AFS set to outnumber the AHS set.

Historical simulations would aim for historical fidelity, but future simulations would aim for fidelity to a 'what-if' scenario, considering some hypothetical action the alien simulating civilization could take.  In this scenario, the first civilization to reach technological Singularity in the galaxy would spread out, gather knowledge about the entire galaxy, and create a massive number of simulations.  It would use these in the same way that all universal intelligences do: to consider the future implications of potential actions.

What kinds of actions?  

The first-born civilization would presumably encounter many planets that already harbor life in various stages, along with planets that could potentially harbor life.  It would use forward simulations to predict the final outcome of future civilizations developing on these worlds.  It would then rate them according to some ethical/utilitarian theory (we don't even need to speculate on the criteria), and it would consider and evaluate potential interventions to change the future historical trajectory of that world: removing undesirable future civilizations, pushing other worlds towards desirable future outcomes, and so on.

At the moment it's hard to assign a priori weightings to the future vs. historical simulation possibilities, but the apparent age of the galaxy compared to the relative youth of our sun is a tentative hint that we live in a future simulation, and thus that our history has potentially been altered.

Sequential Organization of Thinking: "Six Thinking Hats"

25 JustinShovelain 18 March 2010 05:22AM

Many people move chaotically from thought to thought without explicit structure. Inappropriate structuring may leave blind spots or cause the gears of thought to grind to a halt, but the advantages of appropriate structuring are immense:

  • It ensures that you examine all relevant facets of an issue, idea, or fact.
  • It ensures you know what to do next at every stage and are not frustrated or crippled by akrasia between moments of choice; the next action is always obvious.
  • It minimizes the overhead of task switching: you are in control and do not dither between possibilities.
  • It may be used in a social context so that potentially challenging issues and thoughts may be brought up in a non-threatening manner (let's look at the positive aspects, now let's focus purely on the negative...).


To illustrate thought structuring, I'll use Edward de Bono's "six thinking hats" mnemonic: you metaphorically put on various colored "hats" (perspectives) and switch hats depending on the task. I will use the somewhat controversial issue of cryonics as my running example.1
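As a toy illustration of what sequential structuring buys you (my own sketch, not from de Bono or the post), the hats can be treated as an explicit ordered checklist, so no facet is skipped and the next step is always obvious:

```python
# Hypothetical sketch: de Bono's six hats as a fixed, ordered checklist.
# Hat names and roles are standard; the prompts and order are my own.
SIX_HATS = [
    ("White", "facts and data: what do we actually know?"),
    ("Red", "feelings: what does my gut say?"),
    ("Black", "caution: what could go wrong?"),
    ("Yellow", "optimism: what are the benefits?"),
    ("Green", "creativity: what alternatives exist?"),
    ("Blue", "process: what should we think about next?"),
]

def structured_review(topic):
    """Emit one prompt per hat, in a fixed order."""
    for hat, prompt in SIX_HATS:
        print(f"[{hat} hat] {topic}: {prompt}")

structured_review("signing up for cryonics")
```

The fixed order is the point: it replaces "what should I think about now?" with a lookup.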

continue reading »

Deception and Self-Doubt

8 Psychohistorian 11 March 2010 02:39AM

A little while ago, I argued with a friend of mine over the efficiency of the Chinese government. I admitted he was clearly better informed on the subject than I was. At one point, however, he claimed that the Chinese government executed fewer people than the US government. This statement is flat-out wrong; China executes ten times as many people as the US, if not far more. It's a blatant lie. I called him on it, and he copped to it. The outcome is beside the point, though. Why does it matter that he lied? In this case, it provides weak evidence that the basics of his claim were wrong: that he knew the point he was arguing was, at least on some level, incorrect.

The fact that a person is willing to lie indefensibly in order to support their side of an argument shows that they have put "winning" the argument at the top of their priorities. Furthermore, they've decided, based on the evidence they have available, that lying was a more effective way to advance their argument than telling the truth. While exceptions obviously exist, if you believe that lying to a reasonably intelligent audience is the best way of advancing your claim, this suggests that you know your claim is ill-founded, even if you don't admit this fact to yourself.

continue reading »
