You're mistaken in applying the same standards to personal and deliberative decisions. The decision to enroll in cryonics is different in kind from the decision to promote safe AI for the public good. The first should be based on the belief that cryonics claims are true; the second should be based (ultimately) on the marginal value of advocacy in advancing the discussion. The failure to understand this distinction is a major failing in public rationality. For elaboration, see The distinct functions of belief and opinion.
The concept of belief and the nature of abstraction
Belief, puzzling to philosophy, is part of psychology’s conceptual framework. The present essay provides a straightforward yet novel theory of the explanatory and predictive value of describing agents as having beliefs. The theory attributes full-fledged beliefs exclusively to agents with linguistic capacities, but it does so as an empirical matter rather than a priori. By treating abstraction as an inherently social practice, the dependence of full-fledged belief on language resolves a philosophical problem regarding its possibility in a world where only concrete particulars exist.
The propositional character of belief
It can appear mysterious that the content of epistemic attitudes (belief and opinion) is conveyed by clauses introduced by that: “I believe that the dog is in his house.” If beliefs are causes of behavior, our success in denoting them with such clauses gives rise to an apparently insurmountable problem: how do propositions—if they exist at all—exist independently of human conduct, so as to be fit for causally explaining it?
While belief ascriptions figure prominently in many behavioral explanations, their propositional form indicates that they pertain to states of information. My belief that my dog is in his house consists of the reliable use of the information that he’s there. Not only will I reply accordingly if asked about his location; in directing my other conduct, I may use that information. If I want the dog to come, I will yell in the direction of his house rather than toward his sofa. Yet, I won’t always use this information: I might absent-mindedly call to my dog on the sofa despite knowing (hence believing) that he is in his house. Believed information can be mistakenly disregarded.
Belief “that p” is a propensity to take p into rational account when p is relevant to the agent’s goals. But taking information into account also involves various skills and must be facilitated by appropriate habits. Besides skills and habits, the purposeful availability of believed information is affected by inhibitions and desires.
Once beliefs are recognized as propensities to use particular information, it becomes striking that behavior can be explained so successfully, when we know something of an agent’s purposes, by reference to the information we can predict the agent will rely on.
Is this successful reliance a unique feature of human cognition? We can use belief ascriptions to describe nonhuman behavior, but we can do the same for machines. The concept of belief, however, isn’t essential to describing nonintelligent machine behavior. When my printer’s light indicates that it is out of paper, I might say it believes it is, particularly if, in fact, the tray is full. But compare it to what is true of me when I run out of paper, where my belief that I have exhausted my supply can explain an indefinitely large set of potential behaviors, from purchasing supplies to postponing work to expressing frustrated rage—in any of an indefinitely large variety of manners. The printer’s “belief” that it is out of paper is expressed in two ways: it refuses to print and a light turns on, and I can refer to these directly, without invoking the concept of belief.
Applying the concept of belief to nonhuman animals is intermediate between applying it to machines and applying it to humans; it can be applied to animals more robustly than to machines. It isn’t preposterous to say that a dog believes his bone is buried at a certain location, particularly if it’s been removed and he still tries to retrieve it from the old location. What can give us pause about saying the dog believes arises from the severely limited conduct that’s influenced by the dog’s information about the bone’s location, as is apparent when the dog fails, except when hungry, to behave territorially toward the bone’s burial place.
Humans differ from canines in our capacity to carry the information constituting a belief’s propositional content to indefinitely many contexts. This makes belief indispensable in forecasting human behavior: without it, we could not exploit the predictive power of knowing what information a human agent is likely to rely on in new contexts.
This cross-contextual consistency in the use of information seems to rest on our having language, which permits (but does not compel!) the insertion of old information into new contexts.
The social representation of abstractions
To explain our cross-contextual capacities is to solve the problem (in the theory of knowledge) of how we manage to represent abstractions mentally. In Kripke’s version of Wittgenstein’s private-language argument, the problem is expressed in the dependence of concepts on extensions that are not rule governed. The social consensus engendered by how others apply words provides a standard against which to measure one’s own word usage.
Abstraction relies, ultimately, on the “wisdom of crowds” in achieving the most instrumentally effective segmentations. The source of abstraction—a form of social coordination—lies in our capacity to intuit (but only approximately) how others apply words.
The capacity to grasp the meanings of others’ words underlies the fruitfulness of using believed propositions to forecast human behavior. With language we can represent the information that another human agent is also able to represent and can transfer to all manner of contexts. But this linguistic requirement for full-fledged belief does not mean that people’s beliefs are always the beliefs they claim (or believe) they have. Language allows us our propositional knowledge about abstract informational states, but that doesn’t imply that we have infallible access to those states—obviously not pertaining to others but not even about ourselves. Nor does it follow that nonlinguistic animals can have full-fledged beliefs limited only by concreteness. Nonlinguistic animals lack full-fledged beliefs about even concrete matters because linguistic representation is the only available means for representing information in a way allowing its introduction to indefinitely varied contexts.
This account relies on a weakened private-language argument to explain abstraction as social consensus. But I reject Wittgenstein’s argument that private language is impossible: we do have propositional states accessible only privately. Wittgenstein’s argument proves too much, as it would impugn also the possibility of linguistic meaning, for which there is no fact of the matter as to how society must extend the meaning to new information. The answer to the strong private-language argument is the propositional structure of perception itself. (See T. Burge, Origins of Objectivity (2010).) What language provides is a consensual standard against which one’s (ultimately idiosyncratic) personal standard can be compared and modified. (Notice that this invokes a dialectic between what I’ve termed “opinion” and “belief.”)
This account of the role of language in abstraction justifies the early 20th-century Russian psychologist Vygotsky’s view that abstract thought is fundamentally linguistic.
I don't remember a period of my life where I didn't feel like I had a deep understanding of math, and so it's hard for me to separate out mathematical ability and cognitive ability.
I'd be interested in hearing more about your experience. A lot of smart people don't develop a deep understanding of math because that's not how the subject is taught and because they don't have the initiative to try to work things out themselves. With this in mind, to what do you attribute your success?
that's not how the subject is taught
Hope this isn't too off-topic, but I wonder if you have any ideas about why that is.
The main impediment to many far-mode thinkers learning hard (post-calculus) math is the drill and drudgery involved. If you're going to learn hard math, it seems you should, by all means, learn it deeply. That's not the obstacle. The obstacle is that to learn math deeply, you must first learn a lot of it by rote—at least the way it's taught.
In the far-distant past, when I was in school, learning elementary calculus meant rote drilling on techniques of solving integrals. Is this still the case? Is it inevitable, or is it the result of methods of education?
The main reason "smart people" avoid math isn't that they want to avoid depth; rather, they want to avoid what is, at least for some of them, drudgery. Math, more than any subject I know of, seems to require a very high level of sheer diligence to get to the point where you can start thinking about it deeply. Is this inevitable?
I wouldn't have predicted that intuitive thinkers would favor situational over personality information. This may be more a cultural difference.
Right, it's probably cultural - I wouldn't assume it to be as prominent in Western holistic thinkers, either. Mostly I just brought it up to highlight the fact that the intuitive/holistic distinction may not map perfectly to the System 1/System 2 distinction.
The reason for apparent anomalies is that "holistic" thinking can involve two different styles: pre-attentive thinking and far-mode thinking. That is, you can have cognition that could be described as holistic either by being unreflective (System 1) or by engaging in far-mode forms of reflection (System 2 offloads to System 1.) In Ulric Neisser's terms, what is being called "intuitive" might reflect distinctly deeper or distinctly shallower processing than what is called analytic. I sort this out in The deeper solution to the mystery of moralism.
You needn't buy my conclusions about morality to accept the analysis of modes as related to systems 1 and 2.
Spite exists, and people do things out of spite. That doesn't mean punishment shouldn't exist. If you never stop being friends with anyone, you will be abused and used and forced to spend time with awful people.
Total karma isn't for you, it's for everyone else.
Total karma isn't for you, it's for everyone else.
It produces correlated rather than independent judgments of post quality, with the well-known cascading effects. The "system" deliberately introduces what I call belief-opinion confusion.
Oh wait, you're that other person with a bunch of different monikers: metaphysicist, srdiamond, etc. Sorry.
There is another (known) sockpuppet abuser that I need to downvote? Bother. I thought we just had the one.
To Vladimir Nesov:
"A particularly unpopular posting" is not normally the issue, it's usually the systematic failure to respond to negative feedback, including by stopping to post in particular modes or at all.
I'm sorry: what's a "particular mode"? And what does stopping to post altogether have to do with multiple identities?
More to the point, what is "responding to feedback"? Posting responses to disagreement? Surely you know that depresses "karma" further. Or is "responding to feedback" a euphemism for conforming one's opinion and conduct to the community?
Mr. Nesov, you want to be a scientist; why do you post in bureaucratese? Obfuscatory writing is both cause and symptom of wretched thinking.
Edit. Changed Nessov to Nesov.
Oh wait, you're that other person with a bunch of different monikers: metaphysicist, srdiamond, etc. Sorry.
Apology accepted, but I think it's Dmytry to whom you actually owe it: he's the one you recklessly accused of deceitful self-promotion.
I found an historical and logical critique of the OP. More on point than the existing Comments.
Buridan's ass and the psychological origins of objective probability
The medieval philosopher Buridan reportedly constructed a thought experiment to support his view that human behavior was determined rather than “free”—hence rational agents couldn’t choose between two equally good alternatives. In the Buridan’s Ass Paradox, an ass finds itself between two equal, equidistant bales of hay, noticed simultaneously; the bales’ distance and size are the only variables influencing the ass’s behavior. Under these idealized conditions, the ass must starve, its predicament indistinguishable from that of a physical object suspended between opposite forces, such as a planet that neither falls into the sun nor escapes into outer space. (Since the ass served Buridan as metaphor for the human agent, in what follows, I speak of “ass” and “agent” interchangeably.)
Computer scientist Leslie Lamport formalized the paradox as “Buridan’s Principle,” which states that the ass will starve if it is situated in a range of possibilities that includes midpoints where two opposing forces are equal and it must choose in a sufficiently short time span. We assume, based on a principle of physical continuity, that the larger one bale of hay compared to the other, the faster the ass will be able to decide. Since this holds on the left and on the right, at the midpoint, where the bales are equal, symmetry requires an infinite decision time. Conclusion: within some range of bale comparisons, the ass will require decision time greater than any given bounded time interval. (For rigorous treatment, see Buridan’s Principle (1984).)
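The continuity argument can be illustrated with a toy drift model (my own sketch, not Lamport's formalism): suppose the decider commits once accumulated evidence reaches a fixed threshold, and the rate of accumulation is proportional to the difference in bale sizes. Decision time then grows without bound as the bales approach equality.

```python
def decision_time(left, right, threshold=1.0):
    """Time for a simple linear-drift decider to commit, given bale
    sizes `left` and `right` (a toy model, not Lamport's argument).
    The drift rate is the size difference; as the bales approach
    equality the drift vanishes and decision time grows without bound.
    """
    drift = abs(left - right)
    if drift == 0:
        return float("inf")  # perfectly balanced: the ass starves
    return threshold / drift

# Decision time diverges as the inputs approach symmetry:
for delta in (1.0, 0.1, 0.01, 0.001):
    print(delta, decision_time(1.0 + delta, 1.0))
```

However small the bounded time allowed, some range of near-equal bales forces the decision time past it, which is the content of the principle.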
Buridan’s Principle is counterintuitive, as Lamport discovered when he first tried to publish. Among the objections Lamport summarizes, the most common provides an insight about the source of the mind-projection fallacy, which treats probability as a feature of the world: the objection that when the agent can’t decide, it may use a default metarule. Lamport points out that this substitutes another decision subject to the same limits: the agent must decide that it can’t decide. My point differs from that of Lamport, who proves that binary decisions in the face of continuous inputs are unavoidable and that with minimal assumptions they preclude deciding in bounded time; I draw a stronger conclusion: no decision is substitutable when you adhere strictly to the problem’s conditions specifying that the agent be equally balanced between the options. Any inclination to substitute a different decision is a bias toward the outcome that the substitute decision entails. In the simplest variant, the ass may use the rule: turn left when you can’t decide, potentially entrapping it in the limbo of deciding whether it can’t decide. If the ass has a metarule resolving conflicts in favor of the left, it has an extraneous bias.
Lamport’s analysis discerns a kind of physical law; mine elucidates the origins of the mind-projection fallacy. What’s psychologically telling is that the most common metarule is to decide at random. But if by random we mean only apparently random, the strategy still doesn’t free the ass from its straitjacket. An agent that flips a coin is, in fact, biased toward whatever the coin will dictate. Bias, here, means an inclination to use means causally connected with a certain outcome; the coin flip’s apparent randomness is due only to our ignorance of microconditions. Only truly random responding would allow the agent to circumvent the paradox’s conditions. The theory that the agent might use a random strategy expresses the intuition that the agent could turn either way. It seems a route by which the opposites of functioning according to physical law and acting “freely” in perceived self-interest are reconciled.
This false reconciliation comes through confusing two kinds of symmetry: the epistemic symmetry of “chance” events and the dynamic symmetry in the Buridan’s ass paradox. If you flip a coin, the symmetry of the coin (along with your lack of control over the flip) is what makes your reasons for preferring heads and tails equivalent, justifying assigning each the same probability. We encounter another symmetry with Buridan’s ass, where we also have the same reason to think the ass will turn in either direction. Since the intuition of “free will” precludes impossible decisions, we construe our epistemic uncertainty as describing a decision that’s possible but inherently uncertain.
When we conceive of the ass as a purely physical process subject to two opposite forces (which, of course, it is), it’s obvious that the ass can be “stuck.” What miscues intuition is that the ass need not be confined to one decision rule. But if by hypothesis it is confined to one rule, the rule may preclude decision. This hypothetical is made relevant by the necessity of there being some ultimate decision rule.
The intuitive physics of an agent that can’t get stuck entails: a) two equal forces act on an object, producing an equilibrium; b) without breaking the equilibrium, an additional natural law is added specifying that the ass will turn. Rather than conclude this is impossible, intuition “resolves” the contradiction by conceiving that the ass will go in each direction half the time: the probability of either course is deemed .5. Confusion of kinds of symmetry, fueled by the intuition of free will, makes Buridan’s Principle counterintuitive and objective probabilities intuitive.
How do we know that reality can’t be like this intuitive physics? We know because realizing a and b would mean that the physical forces involved don’t vary continuously. It would make an exception, a kind of singularity, of the midpoint.
This essay makes a correct appraisal of Less Wrong thinking, but it denominates the position confusingly as "natural rights." The conventional designation is "moral realism," with "natural rights" denoting a specific deontological view.
A more charitable reading than that provided by commenters would have understood that all the arguments invoked against natural rights (as well as the arguments attributing natural-rights thinking to Less Wrong) hold for other forms of moral realism, in particular utilitarianism/consequentialism. For an argument that utilitarianism is necessarily a form of moral realism (and other problems with utilitarianism) see "Utilitarianism twice fails".
In short, substitute "moral realism" for "natural rights."