
Epistemological Implications of a Reduction of Theoretical Implausibility to Cognitive Dissonance

0 common_law 25 July 2017 01:29AM
The Aronsonian re-interpretation of cognitive dissonance as caused by ideas in conflict with self-image forestalled some obvious applications to philosophical issues lying at the border with psychology. As the action-oriented approach suggests, when Festinger's theory is deepened to pertain to the relations between far-mode and near-mode representations, the similarity between cognitive dissonance and theoretical implausibility becomes almost obvious. Implausibility has the same properties and role as cognitive dissonance. It is an aversive state that can motivate a change in far-mode beliefs, and the change is toward a form of coherence among beliefs. Rival theories can be rated on a single dimension of plausibility in the same way that they evoke different degrees of dissonance.

The reduction of implausibility to cognitive dissonance bears significant philosophical weight. It denies both Bayesian and coherentist theories of knowledge. The fashionable Bayesian interpretation of implausibility is in terms of degrees of rational belief: a theory is implausible, according to the Bayesian school, when it possesses a low a priori probability. But if we scale our beliefs by how much dissonance they cause, we don't thereby scale them for rationality. Moreover, to scale them by rationality would require some independent reason for thinking, apart from the comparative cognitive dissonance they arouse, that one theory is more rational than the other. Scaling our beliefs by the cognitive dissonance they arouse cannot itself be justified on a priori grounds, since dissonance reduction often takes us systematically away from the truth, as is in fact the case in most experimental studies of dissonance. (This helps explain why the identity of dissonance and implausibility hasn't previously been noted.)
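For concreteness, here is the standard formulation the Bayesian interpretation rests on; the notation is generic rather than drawn from any particular Bayesian author:

```latex
% Bayes' theorem for a theory T and evidence E. The prior P(T) is the
% "a priori probability" the Bayesian identifies with plausibility;
% the argument above denies that dissonance tracks any such quantity.
P(T \mid E) = \frac{P(E \mid T)\, P(T)}{P(E)}
```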

Regarding the other negative implication of the dissonance account of implausibility: coherentist theories of knowledge haven't arrived at a clear meaning of "coherence," but they emphasize logical and explanatory relations among far-mode ideas (although recent versions have included the role of ideas derived directly from perception). The dissonance theory of implausibility holds that dissonance is aroused by pragmatic incompatibility between near-mode and far-mode cognition. We don't seem to be wholly insensitive to conflicts between our far-mode ideas, yet the extent of our immunity to far-mode hypocrisy tends to surprise many of us. The resolution of this problem is that the logical analysis of the relations between far-mode ideas is itself a near-mode activity. (Consider that the practice of mathematics is near-mode, however abstract its content.) We are sensitive to inconsistencies in far-mode ideas only to the extent that we draw upon them in our analytical practices, and to the extent that our own activity involves such practices. Those involved in expounding a doctrine or acting in its terms will be subject to dissonance to the extent that far-mode ideas pragmatically conflict with those analytical practices.

The dissonance theory of plausibility also bears on the mystery of the conjuring up of theoretical terms. We know that scientific theories go beyond the empirical evidence, as in principle there are infinitely many theories consistent with any set of empirical facts. On the dissonance account of plausibility, theory creation and acceptance are driven by dissonance reduction. Far-mode theories promote scientific practices by energizing them. They do this by providing the framework in which scientists work. If work is to be systematic, a framework is necessary; but are the declarative propositions the framework expresses true? Do they have a probability of truth?

Scientific-realist philosophers of science have argued convincingly that theoretical propositions in science often purport to be true, but nobody has come close to providing an account of what it means for an abstract theory to be probable, such that we could inquire into the epistemic probability that Newtonian physics is true. The notion that we have rational degrees of belief in theories does accord with some intuitions. Plausibility must allow at least ordinal ranking, since dissonance involves choice between different cognitive states according to their plausibility. This in turn means that the laws of probability apply to these ordinal relations. For example, the plausibility of the conjunction of Theory A and Theory B will never be greater than the plausibility of Theory A alone. But let me suggest that even this is a product of dissonance as shaped by theoretical development, as shown by studies demonstrating that in many situations we empirically find the conjunction fallacy compelling - that is, plausible.
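The ordinal constraint just cited is the conjunction rule of the probability calculus, stated here for concreteness:

```latex
% Conjunction rule: a conjunction is never more probable than either conjunct.
P(A \wedge B) \le P(A) \quad \text{and} \quad P(A \wedge B) \le P(B)
```

The conjunction-fallacy studies (most famously Tversky and Kahneman's "Linda" experiments) show that judged plausibility routinely violates exactly this rule.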

Sometimes the search for dissonance reduction leads to truth, and sometimes it leads away from truth. Rationality is a limiting case of dissonance reduction, but it's one impossible to specify except from within a psyche subject to the laws of cognitive dissonance. This raises a problem: how do we even express the expectation that scientific theories get closer to the truth and religious theories do not? We can say that scientific theories depend on experimental and observational practices and therefore have at least the possibility of resting on actual evidence. We can say scientific theories have greater plausibility than religious theories, both judgments being products of the laws of dissonance. But, counter-intuitively (at least for me), we can't say that scientific theories are more probable than religious theories. It isn't, it's important to notice, that we don't know which is more probable. Rather, the whole notion of probability as applied to theories is misbegotten. That a theory is implausible or plausible is a far-mode conclusion, and far-mode doesn't deal in the relative frequencies modeled by the probability calculus.

A simple example may clarify the limits of the concept of probability and its closeness to near-mode experience. During the last presidential election, pollsters arrived at estimates of the probability of winning for each candidate. Pollsters use mainly near-mode reasoning to engineer the best predictive formulas. The pollsters substantially agreed, as you would expect when they each applied similar methodologies, all based on simple extrapolation of the near-mode process of sampling and generalizing to a defined population.

The accuracy of these conclusions, however, depended on certain far-mode assumptions, such as that people taking polls respond honestly. What if this assumption didn't hold? Well, it didn't; Trump won, and the main reason the polls got it wrong was (or might have been, if you prefer) that voters polled weren't honest about their preferences. We might ask: what should the true probability estimate have been, given that the pollsters didn't take into account the probability that their model was based on false assumptions? How should they have taken this into account? Probability estimates result from the near-mode operation of fitting observation to a relative-frequency model. We can complicate the model to take account of more information, but what we can't do is adjust the probability estimate to take account of the model's own fallibility. (At a deeper level, Bayesian estimates can't adjust for the probability that the Bayesian methodology itself is invalid, as I contend it in fact is.) If it makes sense to assign a probability to a theory's being true, that is, to how much belief it should be accorded in some idealized rational world, then it should be possible to approximate that probability. Someone can adjust it "intuitively," but my point is that there is nothing appropriate for an intuition to be about. Theoretical plausibility is not probability.
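A toy simulation makes the structure of the problem vivid. This is a minimal sketch with invented numbers (a 51% true-support rate and a hypothetical "shy voter" rate), not any pollster's actual methodology:

```python
import random

def estimated_win_probability(true_support=0.51, shy_rate=0.0,
                              sample_size=1000, trials=10_000):
    """Fraction of simulated polls showing a majority for the candidate,
    which the near-mode model reads as the candidate's win probability.

    shy_rate is the fraction of the candidate's supporters who report
    the opposite preference; the far-mode assumption of honest
    responding fails whenever shy_rate > 0.
    """
    majorities = 0
    for _ in range(trials):
        # Each respondent is a supporter with probability true_support,
        # and a supporter reports support only if not "shy".
        reported = sum(
            1
            for _ in range(sample_size)
            if random.random() < true_support and random.random() >= shy_rate
        )
        if reported > sample_size / 2:
            majorities += 1
    return majorities / trials

print(estimated_win_probability(shy_rate=0.0))   # roughly 0.7 in this toy model
print(estimated_win_probability(shy_rate=0.05))  # collapses to roughly 0.15
```

Nothing inside the sampling model tells us which value of shy_rate is the right one; choosing it is exactly the far-mode judgment the near-mode calculation cannot price in.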

At this point arises a skeptical temptation, for not only is our knowledge not absolute, it isn't even probable. Plausibility can systematically take us away from knowledge. We seem to long for a rationale for doing the rational thing, and such a rationale is supplied when knowledge and rational conduct are represented by Bayesian and decision-theoretic formulas. We see ourselves as free and availed of (mentally) unlimited choice. We are rational because we choose to be, and that entails that the choice itself be rational. But our possibilities aren't unlimited or freely chosen. Our ideas will evolve in accordance with the demands of cognitive dissonance or else be moved by receptivity to suggestion. There's no "free will" to seize the initiative, and no direct access to rationality to guide us.
Comment author: common_law 12 December 2015 04:57:58AM 3 points [-]

Looking for mental information in individual neuronal firing patterns is looking at the wrong level of scale and at the wrong kind of physical manifestation. As in other statistical dynamical regularities, there are a vast number of microstates (i.e., network activity patterns) that can constitute the same global attractor, and a vast number of trajectories of microstate-to-microstate changes that will tend to converge to a common attractor. But it is the final quasi-regular network-level dynamic, like a melody played by a million-instrument orchestra, that is the medium of mental information. - Terrence W. Deacon, Incomplete Nature: How Mind Emerged from Matter, pp. 516-517.

Comment author: gjm 05 September 2015 06:56:16PM 0 points [-]

I think you may be confusing "rigorous" with "elaborate" or "detailed". (Or maybe not, in which case you might like to say a few words about why the former, and not only the latter, applies to Marx and Freud.)

Comment author: common_law 07 September 2015 08:48:12PM *  0 points [-]

Elaborate and detailed are characteristics neither necessary nor sufficient for rigor. Those two describe characteristics of the theory; rigor describes the argument for the theory. To say a theory is rigorous is to say neither more nor less than that it is well argued (with particular emphasis on the argument's tightness).

Whether Freud and Marx argued well may be hard to agree on when we examine their arguments. [Agreement or disagreement on conclusions has a way of grossly interfering with evaluation of argument, with the added complication that evaluation must be relative to a historical state of play.] And we ignore what could be called holes in Einstein and Darwin because their theories are the consensus - holes like the absence of the Mendelian mechanism in Darwin or the (still unresolved, at least philosophically) problem of infinities in general relativity. [I'm sure that's controversial, however.]

But I would suggest that theories that have sustained the agreement of even a large minority of serious intellectuals and academics for more than a century should be presumed rigorous. Rigor is what establishes lasting intellectual success. It is what primarily defines whether a work is "impressive" (to use Robin Hanson's as-always useful term).

On the other hand, I agree that third-rate minds use formulaic methods to generate a huge number of publications, and by their nature, such works will never be rigorous (or lastingly impressive).

Comment author: common_law 04 September 2015 03:54:12AM 1 point [-]

You've drawn a significant distinction, but I don't think degree of rigor defines it. I'm not sufficiently familiar with many of these thinkers to assess their rigor, but I am familiar with several, the ones who would often be deemed most important: Einstein and Darwin on the side you describe as rigorous; Freud and Marx on the side you describe as less rigorous. I can't agree that Freud and Marx are less rigorous. Marx makes an argument for his theory of capitalism in three tightly reasoned volumes of Capital, none of the arguments formulaic. Freud develops the basics of his psychology in "The Interpretation of Dreams," a rigorous study of numerous dreams, his own and his patients', extracting principles of dream interpretation.

Let me offer an alternative hypothesis. The distinction isn't about rigor but rather elegance. Einstein and Darwin developed elegant explanations; Freud and Marx developed systems of insights, supported by argument and evidence, but less reducible to a central, crisp insight. I haven't settled on a term for the latter, but for the moment, I'll call them systematic theories.

An elegant theory must be accepted as a whole or not at all. A systematic theory contains numerous insights that despite their integration can often be separated from one another, one idea accepted and another rejected.

With that distinction, it can readily be explained why systematic theorists produce a greater total bulk of work. It takes more words, and more working through, to explain a system than an elegant principle.

Comment author: common_law 16 May 2015 09:01:16PM *  0 points [-]

Aesthetic ability as such hasn't been extracted as a cognitive ability factor. My guess would be that it's mainly explained by g and the temperamental factor of openness to experience. (I don't know what the empirical data is on this subject, but I think some immersion in the factor-analytic data would prove rewarding.)

[Added.] On aesthetic sense: the late R.B. Cattell (psychologist) devised an IQ test based on which jokes were preferred.

[Added.2] I'm wondering if you're not misinterpreting your personal experience. You say your IQ is only LW-average. You also say you have a nonverbal learning disability; but that would render any score you obtained on an IQ test a substantial underestimate. I'm inclined to think what you're calling aesthetic ability (in your case, at least) is just intelligence beyond what the uninterpreted scores say.

Comment author: common_law 01 March 2015 05:03:46AM 2 points [-]

What's your basis for concluding that verbal-reasoning ability is an important component of mathematical ability—particularly important in more theoretical areas of math?

The research that I recall showed little influence of verbal reasoning on high-level math ability, verbal ability certainly being correlated with math ability but the correlation almost entirely accounted for by g (or R). There's some evidence that spatio-visual ability, rather unimportant for mathematical literacy (as measured by SAT-M, GRE-Q), becomes significant at higher levels of achievement. But from what I've seen, the factor that emerged most distinctive for excellent mathematicians (distinguishing them from other fields also demanding high g) isn't g itself, but rather cognitive speed. Talented mathematicians are mentally quick.
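To illustrate what "correlated, but the correlation almost entirely accounted for by g" means statistically, here is a toy sketch with synthetic data; all coefficients are invented for illustration, not estimates from the literature:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic model: g drives both abilities, so the verbal-math
# correlation is induced entirely through g (coefficients invented).
g = rng.standard_normal(n)
verbal = 0.8 * g + 0.6 * rng.standard_normal(n)
math = 0.8 * g + 0.6 * rng.standard_normal(n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def partial_corr(x, y, z):
    # Correlate the residuals of x and y after regressing each on z.
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return corr(rx, ry)

print(corr(verbal, math))             # sizable raw correlation, ~0.64
print(partial_corr(verbal, math, g))  # near zero once g is partialled out
```

The raw verbal-math correlation is sizable, but it vanishes once the shared g component is partialled out, which is the pattern the research I recall reported.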

In response to The Hostile Arguer
Comment author: common_law 27 November 2014 11:07:57PM 4 points [-]

You should question your unstated but fundamental premise: one should avoid arguments with "hostile arguers."

A person who argues to convince rather than to understand harms himself, but from his interlocutor's standpoint, dealing with his arguments can be just as challenging and enlightening as arguing with someone more "intellectually honest."

Whether an argument is worthwhile depends primarily on the competence of the arguments presented, which isn't strongly related to the sincerity of the arguer.

Comment author: common_law 01 November 2014 10:07:10PM *  1 point [-]

Actually, I think you're wrong in thinking that LW doctrine doesn't dictate heightened scrutiny of the deployment of self-deception. At the same time, I think you're wrong to think false beliefs can seldom be quarantined, compartmentalization being a widely employed defense mechanism. (Cf., any liberal theist.)

Everyone feels a tug toward the pure truth, away from pure instrumental rationalism. Your mistake (and LW's), I think, is to incorporate truth into instrumental rationality (without really having a cogent rationale, given the reality of compartmentalization). The real defect in instrumental rationalism is that no person of integrity can take it to heart. "Values" are of two kinds: biological givens and acquired tendencies that restrict the operation of those givens (instinct and restraint). The drive for instrumental rationality is a biological given; epistemic rationality is a restraint intellectuals apply to their instrumental rationality. It is ethical in character, whereas instrumental rationality is not; and it is a seductive confusion to moralize it.

For intellectuals, the businessman's "winner" ethos--the evaluative subordination of epistemic rationality to instrumentality--is an invitation to functional psychopathy.

Comment author: undermind 22 October 2014 08:53:39PM 2 points [-]

I guess I was trying to say that the hard work montage is one common narrative, but it is far from the only one.

And yes, there are inevitably constraints that get in the way of investing effort in any particular place, and correspondingly to gaining power by one particular means. But even when the path with the highest payoff is blocked, some of the remaining options will be more beneficial than others. For example, if someone has a low IQ but is strong, they could become a lumberjack, or they could become a henchman to their local supervillain.

Comment author: common_law 26 October 2014 02:59:53AM 0 points [-]

I don't see how your argument gains from attributing the hard-work bias to stories. (For one thing, you still have to explain why stories express this bias—unless you think it's culturally adventitious.)

The bias seems to me to be a particular case of the fair-world bias and perhaps also the "more is better" heuristic. It seems like you are positing a new bias unnecessarily. (That doesn't detract from the value of describing this particular variant.)

Comment author: alex_zag_al 19 October 2014 01:30:43PM 4 points [-]

Yes. Because we're trying to express uncertainty about the consequences of axioms, not about the axioms themselves.

common_law's thinking does seem to be something people actually do. Like, we're uncertain about the consequences of the laws of physics, while simultaneously being uncertain of the laws of physics, while simultaneously being uncertain whether we're thinking about it in a logical way. But it's not the kind of uncertainty that we're trying to model in the applications I'm talking about. The missing piece in these applications is probabilities conditional on axioms.

Comment author: common_law 19 October 2014 10:21:00PM *  1 point [-]

Philosophically, I want to know how you calculate the rational degree of belief in every proposition.

If you automatically assign the axioms an actually unobtainable certainty, you don't get the rational degree of belief in every proposition, as the set of "propositions" includes those not conditioned on the axioms.
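To put the point in symbols (a generic illustration, with A standing for the conjunction of the axioms and φ for an arbitrary proposition):

```latex
% Total probability with uncertain axioms:
P(\varphi) = P(\varphi \mid A)\,P(A) + P(\varphi \mid \neg A)\,P(\neg A)
% Stipulating P(A) = 1 collapses this to P(\varphi) = P(\varphi \mid A),
% which answers a different question than the rational degree of belief
% in \varphi itself.
```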
