Taking Ideas Seriously

51 Will_Newsome 13 August 2010 04:50PM

I, the author, no longer endorse this post.


Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.
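The 'web of belief nodes' metaphor can be made concrete with a toy sketch: treat beliefs as nodes in a directed graph, where an edge from A to B means B depends on A, and propagate an update breadth-first to everything downstream. This is only an illustration of the metaphor, not code from the post; the `BeliefWeb` class and the belief labels are invented for the example.

```python
from collections import deque

class BeliefWeb:
    """Toy model of a web of beliefs: a directed graph where an edge
    parent -> belief means the belief depends on the parent."""

    def __init__(self):
        self.children = {}  # belief -> list of beliefs that depend on it

    def depends_on(self, belief, parent):
        self.children.setdefault(parent, []).append(belief)
        self.children.setdefault(belief, [])

    def propagate(self, updated_belief):
        """Return every downstream belief that must be re-examined after
        the given belief is updated, in breadth-first order."""
        seen, queue, order = {updated_belief}, deque([updated_belief]), []
        while queue:
            node = queue.popleft()
            for child in self.children.get(node, []):
                if child not in seen:
                    seen.add(child)
                    queue.append(child)
                    order.append(child)
        return order

web = BeliefWeb()
web.depends_on("cryonics is worth buying", "cryonics might work")
web.depends_on("cryonics might work", "minds are physical processes")
web.depends_on("uploading is possible", "minds are physical processes")

# Updating one upstream belief flags every belief that hangs off it:
print(web.propagate("minds are physical processes"))
# → ['cryonics might work', 'uploading is possible', 'cryonics is worth buying']
```

Failing to take an idea seriously, on this picture, is updating the root node and leaving the downstream nodes untouched.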


'Is' and 'Ought' and Rationality

2 BobTheBob 05 July 2011 03:53AM

On the face of it, there is a tension in adhering both to the idea that there are facts about what it's rational for people to do and to the idea that natural or scientific facts are all the facts there are. The aim of this post is just to try to make clear why this should be so, and hopefully to get feedback on what people think of the tension.

In short

To a first approximation, a belief is rational just in case you ought to hold it; an action rational just in case you ought to take it. A person is rational to the extent that she believes and does what she ought to. Being rational, it is fair to say, is a normative or prescriptive property, as opposed to a merely descriptive one. Natural science, on the other hand, is concerned merely with descriptive properties of things - what they weigh, how they are composed, how they move, and so on. On the face of it, being rational is not the sort of property about which we can theorize scientifically (that is, in the vocabulary of the natural sciences). To put the point another way, rationality concerns what a thing (agent) ought to do, natural science concerns only what it is and will do, and one cannot deduce 'ought' from 'is'.

At greater length

There are at least two is/ought problems, or maybe two ways of thinking about the is/ought problem. The first problem (or way of thinking about the one problem) is posed from a subjective point of view. I am aware that things are a certain way, and that I am disposed to take some course of action, but neither of these things implies that I ought to take any course of action - neither, that is, implies that taking a given course of action would in any sense be right. How do I justify the thought that any given action is the one I ought to take? Or, taking the thought one step further, how, attending only to my own thoughts, do I differentiate merely being inclined to do something from being bound by some kind of rule or principle or norm to do something?

This is an interesting question - one which gets to the very core of the concept of being justified, and hence of being rational (rational beliefs being justified beliefs). But it isn't the problem of interest here.

The second problem, the problem of interest, is evident from a purely objective, scientific point of view. Consider a lowly rock. By empirical investigation, we can learn its mass, its density, its mineralogical composition, and any number of other properties. Now, left to their own devices, rocks don't do much of anything, comparatively speaking, so it isn't surprising that we don't expect there to be anything it ought to do. In any case, natural science does not imply there is anything it ought to do, I think most will agree.

Consider then a virus particle - a complex of RNA and ancillary molecules. Natural science can tell us how it will behave in various circumstances - whether and how it will replicate itself, and so on - but once again surely there is nothing in biochemistry, genetics or other science which implies there is anything our very particle ought to do. It's true that we may think of it as having the goal to replicate itself, and consider it to have made a mistake if it replicates itself inaccurately, but these conceptions do not issue from science. Any sense in which it ought to do something, or is wrong or mistaken in acting in a given way, is surely purely metaphorical (no?).

How about a bacterium? It's orders of magnitude more complicated, but I don't see that matters are any different as regards what it ought to do. Science has nothing to tell us about what if anything is important to a bacterium, as distinct from what it will tend to do.

Moving up the evolutionary ladder, does the introduction of nervous systems make any difference? What do we think about, say, nematodes or even horseshoe crabs? The feedback mechanisms underlying the self-regulatory processes in such animals may be leaps and bounds more sophisticated than in their non-neural forebears, but it's far from clear how such increasing complexity could introduce goals.

To cut to the chase, how can matters be any different with the members of Homo sapiens? Looked at from a properly scientific point of view, is there any scope for the attribution of purposes or goals or the appraisal of our behaviour in any sense as right or wrong? I submit that a mere increase in complexity - even if by many orders of magnitude - does not turn the trick. To be clear, I'm not claiming there are no such facts - far from it - just that these facts cannot be articulated in the language of purely natural science.


Less Wrong Rationality and Mainstream Philosophy

106 lukeprog 20 March 2011 08:28PM

Part of the sequence: Rationality and Philosophy

Despite Yudkowsky's distaste for mainstream philosophy, Less Wrong is largely a philosophy blog. Major topics include epistemology, philosophy of language, free will, metaphysics, metaethics, normative ethics, machine ethics, axiology, philosophy of mind, and more.

Moreover, standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century. That movement is sometimes called "Quinean naturalism" after Harvard's W.V. Quine, who articulated the Less Wrong approach to philosophy in the 1960s. Quine was one of the most influential philosophers of the last 200 years, so I'm not talking about an obscure movement in philosophy.

Let us survey the connections. Quine thought that philosophy was continuous with science - and where it wasn't, it was bad philosophy. He embraced empiricism and reductionism. He rejected the notion of libertarian free will. He regarded postmodernism as sophistry. Like Wittgenstein and Yudkowsky, Quine didn't try to straightforwardly solve traditional Big Questions as much as he either dissolved those questions or reframed them such that they could be solved. He dismissed endless semantic arguments about the meaning of vague terms like knowledge. He rejected a priori knowledge. He rejected the notion of privileged philosophical insight: knowledge comes from ordinary knowledge, as best refined by science. Eliezer once said that philosophy should be about cognitive science, and Quine would agree. Quine famously wrote:

The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology?

But isn't this using science to justify science? Isn't that circular? Not quite, say Quine and Yudkowsky. It is merely "reflecting on your mind's degree of trustworthiness, using your current mind as opposed to something else." Luckily, the brain is the lens that sees its flaws. And thus, says Quine:

Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.

Yudkowsky once wrote, "If there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it."

When I read that I thought: What? That's Quinean naturalism! That's Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!


Generalizing From One Example

259 Yvain 28 April 2009 10:00PM

Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective

"Everyone generalizes from one example. At least, I do."

   -- Vlad Taltos (Issola, Steven Brust)

My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example:

There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?

Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.

The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why the others were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery to three percent of people completely unable to form mental images.

Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.


Reason as memetic immune disorder

215 PhilGoetz 19 September 2009 09:05PM

A prophet is without dishonor in his hometown

I'm reading the book "The Year of Living Biblically," by A.J. Jacobs.  He tried to follow all of the commandments in the Bible (Old and New Testaments) for one year.  He quickly found that

  • a lot of the rules in the Bible are impossible, illegal, or embarrassing to follow nowadays, like wearing tassels, tying your money to yourself, stoning adulterers, not eating fruit from a tree less than 5 years old, and not touching anything that a menstruating woman has touched; and
  • this didn't seem to bother more than a handful of the one-third to one-half of Americans who claim the Bible is the word of God.

You may have noticed that people who convert to religion after the age of 20 or so are generally more zealous than people who grew up with the same religion.  People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them.  Religious communities actually protect their members from religion in one sense - they develop an unspoken consensus on which parts of their religion members can legitimately ignore.  New converts sometimes try to actually do what their religion tells them to do.

I remember many times growing up when missionaries described the crazy things their new converts in remote areas did on reading the Bible for the first time - they refused to be taught by female missionaries; they insisted on following Old Testament commandments; they decided that everyone in the village had to confess all of their sins against everyone else in the village; they prayed to God and assumed He would do what they asked; they believed the Christian God would cure their diseases.  We would always laugh a little at the naivete of these new converts; I could barely hear the tiny voice in my head saying but they're just believing that the Bible means what it says...

How do we explain the blindness of people to a religion they grew up with?


Eight questions for computationalists

16 dfranke 13 April 2011 12:46PM

 

This post is a followup to "We are not living in a simulation" and intended to help me (and you) better understand the claims of those who took a computationalist position in that thread.  The questions below are aimed at you if you think the following statement both a) makes sense, and b) is true:

"Consciousness is really just computation"

I've made it no secret that I think this statement is hogwash, but I've done my best to make these questions as non-leading as possible: you should be able to answer them without having to dismantle them first. Of course, I could be wrong, and "the question is confused" is always a valid answer. So is "I don't know".

  1. As it is used in the sentence "consciousness is really just computation", is computation:
    a) Something that an abstract machine does, as in "No oracle Turing machine can decide its own halting problem"?
    b) Something that a concrete machine does, as in "My calculator computed 2+2"?
    c) Or, is this distinction nonsensical or irrelevant?
  2. If you answered "a" or "c" to question 1: is there any particular model, or particular class of models, of computation, such as Turing machines, register machines, lambda calculus, etc., that needs to be used in order to explain what makes us conscious? Or, is any Turing-equivalent model equally valid?
  3. If you answered "b" or "c" to question 1: unpack what "the machine computed 2+2" means. What is that saying about the physical state of the machine before, during, and after the computation?
  4. Are you able to make any sense of the concept of "computing red"? If so, what does this mean?
  5. As far as consciousness goes, what matters in a computation: functions, or algorithms? That is, does any computation that gives the same outputs for the same inputs feel the same from the inside (this is the "functions" answer), or do the intermediate steps matter (this is the "algorithms" answer)?
  6. Would an axiomatization (as opposed to a complete exposition of the implications of these axioms) of a Theory of Everything that can explain consciousness include definitions of any computational devices, such as "and gate"?
  7. Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?
  8. Are all computations in some sense conscious, or only certain kinds?
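The functions-versus-algorithms distinction in question 5 can be illustrated with a small sketch: two routines that compute the same function (identical outputs for all inputs) by very different intermediate steps. The "functions" answer says these are interchangeable as far as consciousness goes; the "algorithms" answer says the differing intermediate states could matter. The example and its names are mine, not the author's.

```python
def sum_by_counting(n, trace):
    """Compute 0 + 1 + ... + n one addition at a time."""
    total = 0
    for i in range(n + 1):
        total += i
        trace.append(total)  # many intermediate states
    return total

def sum_by_formula(n, trace):
    """Compute the same function with Gauss's closed form."""
    total = n * (n + 1) // 2
    trace.append(total)      # a single intermediate state
    return total

trace_a, trace_b = [], []
assert sum_by_counting(100, trace_a) == sum_by_formula(100, trace_b) == 5050

# Same function (same output for every input), different algorithms:
print(len(trace_a), len(trace_b))
# → 101 1
```

Extensionally the two are one computation; intensionally they pass through very different state sequences, which is exactly the gap question 5 asks about.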

ETA: By the way, I probably won't engage right away with individual commenters on this thread except to answer requests for clarification.  In a few days I'll write another post analyzing the points that are brought up.
