
Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace; and so you believe that your shoelaces are untied.

Here is the secret of deliberate rationality—this whole entanglement process is not magic, and you can understand it.  You can understand how you see your shoelaces.  You can think about which sort of thinking processes will create beliefs which mirror reality, and which thinking processes will not.

Mice can see, but they can't understand seeing.  You can understand seeing, and because of that, you can do things which mice cannot do.  Take a moment to marvel at this, for it is indeed marvelous.

Mice see, but they don't know they have visual cortexes, so they can't correct for optical illusions.  A mouse lives in a mental world that includes cats, holes, cheese and mousetraps—but not mouse brains.  Their camera does not take pictures of its own lens.  But we, as humans, can look at a seemingly bizarre image, and realize that part of what we're seeing is the lens itself.  You don't always have to believe your own eyes, but you have to realize that you have eyes—you must have distinct mental buckets for the map and the territory, for the senses and reality.  Lest you think this a trivial ability, remember how rare it is in the animal kingdom.

The whole idea of Science is, simply, reflective reasoning about a more reliable process for making the contents of your mind mirror the contents of the world.  It is the sort of thing mice would never invent.  Pondering this business of "performing replicable experiments to falsify theories", we can see why it works.  Science is not a separate magisterium, far away from real life and the understanding of ordinary mortals.  Science is not something that only applies to the inside of laboratories.  Science, itself, is an understandable process-in-the-world that correlates brains with reality.

Science makes sense, when you think about it.  But mice can't think about thinking, which is why they don't have Science.  One should not overlook the wonder of this—or the potential power it bestows on us as individuals, not just scientific societies.

Admittedly, understanding the engine of thought may be a little more complicated than understanding a steam engine—but it is not a fundamentally different task.

Once upon a time, I went to EFNet's #philosophy to ask "Do you believe a nuclear war will occur in the next 20 years?  If no, why not?"  One person who answered the question said he didn't expect a nuclear war for 100 years, because "All of the players involved in decisions regarding nuclear war are not interested right now."  "But why extend that out for 100 years?", I asked. "Pure hope," was his reply.

Reflecting on this whole thought process, we can see why the thought of nuclear war makes the person unhappy, and we can see how his brain therefore rejects the belief.  But, if you imagine a billion worlds—Everett branches, or Tegmark duplicates—this thought process will not systematically correlate optimists to branches in which no nuclear war occurs.  (Some clever fellow is bound to say, "Ah, but since I have hope, I'll work a little harder at my job, pump up the global economy, and thus help to prevent countries from sliding into the angry and hopeless state where nuclear war is a possibility.  So the two events are related after all."  At this point, we have to drag in Bayes's Theorem and measure the charge of entanglement quantitatively.  Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs.  Shifting your beliefs by a large amount, due to an event that only carries a very tiny charge of entanglement, will still mess up your mapping.)
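To put toy numbers on the quantitative point (an illustrative sketch only; the prior and the likelihood ratios below are made up, not measurements of anything): in odds form, Bayes's Theorem says the posterior odds equal the prior odds times the likelihood ratio, and an observation that is only weakly entangled with the outcome has a likelihood ratio near 1, so it cannot honestly move an estimate very far.

```python
# Illustrative sketch with made-up numbers: Bayes' rule in odds form shows
# why weakly entangled evidence (likelihood ratio near 1) cannot justify a
# large shift in belief.

def update(prior, likelihood_ratio):
    """Apply Bayes' rule in odds form, then convert back to a probability."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 0.30  # hypothetical prior P(nuclear war within 20 years)

# Feeling hopeful is only very weakly entangled with whether a war occurs,
# so its likelihood ratio is close to 1 and the update is tiny:
print(round(update(prior, 0.95), 2))  # 0.29 -- essentially no change

# Shifting the estimate all the way down to ~0.10 would require evidence
# with a likelihood ratio around 0.26, far more entanglement than an
# optimistic temperament can supply:
print(round(update(prior, 0.26), 2))  # 0.10
```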

To ask which beliefs make you happy, is to turn inward, not outward—it tells you something about yourself, but it is not evidence entangled with the environment. I have nothing against happiness, but it should follow from your picture of the world, rather than tampering with the mental paintbrushes.

If you can see this—if you can see that hope is shifting your first-order thoughts by too large a degree—if you can understand your mind as a mapping-engine with flaws in it—then you can apply a reflective correction.  The brain is a flawed lens through which to see reality.  This is true of both mouse brains and human brains.  But a human brain is a flawed lens that can understand its own flaws—its systematic errors, its biases—and apply second-order corrections to them.  This, in practice, makes the flawed lens far more powerful.  Not perfect, but far more powerful.

 

 

45 comments

Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace; and so you believe that your shoelaces are untied.

Here is the secret of deliberate rationality - this whole entanglement process is not magic, and you can understand it.

But if we were minds in a vat, or cogs in the Matrix, we would still be able to reason rationally and make intelligent predictions about the world we see. And test them, and improve our predictions and discard the ones that are wrong. We can be rational about the real world, even if the real world is an illusion.

So I don't see how we can found rationality on our understanding of the world (a world we only understand through reason). In this argument, where is the egg that was not born of a chicken?

HA, your objection is too vague for me to apply. Specify.

But if we were minds in a vat, or cogs in the Matrix, we would still be able to reason rationally and make intelligent predictions about the world we see. And test them, and improve our predictions and discard the ones that are wrong. We can be rational about the real world, even if the real world is an illusion.

I do not understand your bizarre concept, illusion. Whatever is, is real. Sometimes the underlying levels of organization are different from what you expected.

So I don't see how we can found rationality on our understanding of the world (a world we only understand through reason). In this argument, where is the egg that was not born of a chicken?

That's why I distinguished deliberate rationality. Seeing your shoelaces is also rational, for it produces beliefs that are themselves evidence; but it is not a process that requires deliberate control. The lens sees, even in mice; but only in humans does the lens see itself and see its flaws.


Re: wishful thinking, I've personally seen this before, where people explicitly reject reason on an important topic; I knew a rabbi in Minnesota who insisted the Israeli-Palestinian peace process would succeed, simply because "it must succeed." Usually people only explicitly reject reason on "one thought too many" topics like "I would never even think about betraying my friends", but wishful-thinking topics such as your nuclear-war example don't seem to fit this mold.

Anyone know what the research says on this? I know people faced with death will shift their values, but to what degree and in what directions do they shift their estimated probability of deaths and disasters when the disaster involves them or people they care about? And is this just part of a more general wishful-thinking bias? (Not that I know what the research says about wishful thinking, either.)

Conjecture: a New Yorker is more likely to see D.C. as the likely first target for a terrorist nuclear bomb, compared with a D.C. resident.

Eliezer,

Here's a good example in your reply to Stuart: "but only in humans does the lens see itself and see its flaws". Here I think, as in my previous critical post, that you're "stat[ing] overcertainty about the existence and mechanics of various phenomena".

One might say that writing these statements in a more provisional and tentative fashion, such as "As far as we can tell, some humans are the only things capable of analyzing flaws in their ability to observe the universe, and pointing out this exceptionalist element about some humans is of use because of X", makes communication too cumbersome, and that there's no need to say so because such nuances are implied.

But I disagree. I think the overcertain style of writing you and some other commenters fall into is less helpful for discussing this stuff than a greater level of nuance, and framing ideas and knowledge more provisionally.

In short, I'm requesting greater transparency about our bounded rationality in your posts.

HA, it is indeed too cumbersome. See also Orwell's "Politics and the English Language."

Ad hominem tu quoque: You didn't rewrite your own comment in the cumbersome style you wanted me to use. In fact, your initial comment was so extremely minimal that I couldn't apply it, and it was not qualified at all.

The "lens" sees perhaps only parts of itself, and then perhaps only some of its flaws.


If it is true that "if God exists, then the rational thing is to hope in not just the improbable but the impossible", then that fact is itself strong evidence against the existence of God.

But who said anything about sacrificing hope? Eliezer argues against wishful thinking, which is not at all the same thing as hope. Oh, and the idea that "faith, hope and love" are the same kind of thing -- so much the same kind of thing that abandoning two of them would be likely to lead to abandoning the third -- seems to me to have no support at all outside the First Letter to the Corinthians; why should Eliezer fear that abandoning faith and (what you rather bizarrely call) hope should lead to abandoning love?

"I would argue that they are at the core of what it is to live a fully human life."

A fully human life, in the natural sense of the term, has an average span of sixteen years. That's the environment we were designed to live in: nasty, brutal, and full of misery. By the standards of a typical human tribe, the Holocaust would have been notable for killing such a remarkably small percentage of the population. Why on Earth would we want to follow that example?

"It looks like this website has rejected the theistic understanding of faith and hope."

Yes, for a very good reason- it does not work. If you stand in front of a truck, and you have faith that the truck will not run you over, and you hope that the truck will not run you over, your bones and vital organs will be sliced and diced and chopped and fried. The key factor in survival is not lack of hope, or lack of faith, but lack of doing stupid things such as standing in front of trucks.

"I don’t know how you can love something without it making you biased towards it."

This is not what we mean by "biased". By "bias", we mean bugs in the human brain which lead us to give wrong answers to simple questions of fact, such as "What is the probability of X?". See http://www.acceleratingfuture.com/tom/?p=30.

What the heck? Humans who lived past infancy lived far longer than 16 years in the ancestral environment; it was very poor infant mortality that brought down the average life expectancy.

"The typical human tribe" would not have gone around murdering whole other tribes... there is no evidence for that and that is not what modern isolated hunter gatherers do either.

Agreed on infant mortality: 'life expectancy' is an incredibly misleading term, and leads to any number of people thinking that anyone over 40 was an old man in previous centuries, when a lot of the difference can be explained by infant mortality.

On human tribes, I don't think slaughtering an entire other tribe is a particularly shocking thing for a tribe to do. I've read things suggesting that 20th century rates of personal homicide and deaths in war per person are both actually low by previous centuries' standards, so the popular idea of the Holocaust and Communist purges as making the 20th century the century of war or atrocity is flawed. But agreed this doesn't make Holocausts 'typical'.

Isn't the 20th century's apparent low death toll from homicide and war just a matter of percentages? The absolute number of deaths from these things is much greater in the 20th century. I think the absolute number matters too.

...there is no evidence for that, and that is not what modern isolated hunter-gatherers do either.

I came across plenty of examples in my studies of anthropology. Of course it depends what you mean by "tribe"; really large-scale violence requires a certain amount of technology. As an example, "Yanomamo: The Fierce People" by Chagnon details some such incidents and suggests they were not unusual. Well, actually, the men and children were killed; the nubile women were kept alive for…

See also the Torah / Old Testament for numerous genocides, though these were Bronze/Iron Age peoples, and the historicity of the incidents is disputed.

This was not universal - the Kalahari Bushmen (now called the San people) did not do this, perhaps in part because their main weapon was a slow acting poison dart. An all-out war would kill everyone involved.

But rates of violent death among males in the hunter-gatherer societies documented by early anthropologists (from reconstructed family trees) were extremely high, often in the 30-50% range.

Faith, hope and love are the Christian theological virtues.

What about other religions? Islam and Judaism come to mind, but there are also non-Abrahamic religions that advocate faith, hope and love. Why are you exclusively a Christian and not a Muslim, a Jew, a Buddhist or a Pagan? Why are you a Catholic instead of a Protestant? If you were born in China in the early 20th century, would you be a Catholic? If so, why? If not, why are you a Catholic here and now?

Cure, you're making too many comments. A good rule of thumb is that you should never have made more than 3 and preferably 2 of the 10 most recent comments. You've made it clear what you believe; everyone knows you're a Catholic now; you do not need to repeat it further.

"I see man as made in the image of God."

This does make some sense. If man is made in the image of God, and we know God is a mass murderer, then we can predict that some men will also be mass murderers. And lo, we have plenty of examples- Hitler, Stalin, Mao, Pol Pot, etc.

"Sure God is not going to change natural law just because we are putting him to the test."

If God does exist, as soon as we finish saving the world and whatnot, he should be immediately arrested and put on trial for crimes against humanity, due to his failure to intervene in the Holocaust, the smallpox epidemics, WWI, etc.

"Twelve poor followers of Christ were able to convert the Roman empire."

Aye. And Karl Marx must have had divine powers too- how else could a single person, with no political authority, cause a succession of revolutions in some of the largest countries on Earth?

"I could go into the lives of the saints for other examples but I wont."

How do you know that large parts of their lives weren't simply made up?

"You call the getting to the probability of nuclear war a simple question?"

Read the literature on heuristics and biases: researchers deliberately use simple questions with factual answers, so that the data unambiguously show the flaws in human reasoning.


CoA, if you "would argue that [faith, hope and love] are at the core of what it is to live a fully human life" then why don't you, rather than just asserting it? (Or, if the argument you'd make is much too long and convoluted, point us to somewhere where it's made in a non-question-begging way.)

"This website" doesn't reject anything. It can't. It's only a website. A lot of the posters and commenters here disagree with "the theistic understanding of faith and hope", but people who think otherwise aren't forbidden to contribute or anything.

Tom, CoA isn't saying "the apostles converted everyone to Christianity, so it must have been a miracle" (though he may well believe it); he's saying "Christianity took over much of the world from tiny beginnings; it seems likely that the people involved were more optimistic than the evidence would have seemed to warrant". He's probably right about that (see "Small Business Overconfidence"). The same is surely true of at least some of the people involved in the rise of communism. Optimism beyond the evidence probably is an advantage, if your goal is to have a belief that isn't well supported by the evidence become hugely popular and influential. Demagogues and revolutionaries and medical quacks all tend to be optimistic beyond the evidence.

Well, yes. But scientists need to have optimism that their experiments will lead somewhere, and entrepreneurs have to be optimistic about their projects (and I'm optimistic that this remark will not get me kicked off this site). Without optimism, great projects would not be undertaken.


If you can see this - if you can see that hope is shifting your first-order thoughts by too large a degree - if you can understand your mind as a mapping-engine with flaws in it - then you can apply a reflective correction.

And what is more, you'll be a man, my son.


Rationalist Snow White: "Mirror, mirror on the wall, do I have any imperfections I have not yet realized on my own?"

Mirror, mirror, what am I missing that's perfectly obvious to some people?

Mirror, mirror, where do I need to look that's completely non-obvious?

"Mirror, mirror on the wall, how long is this stick?"

Rotates the stick 90 degrees

"Mirror, mirror on the wall, how long is this stick?"


To ask which beliefs make you happy, is to turn inward, not outward—it tells you something about yourself, but it is not evidence entangled with the environment. I have nothing against happiness, but it should follow from your picture of the world, rather than tampering with the mental paintbrushes.

Quite. I encounter a lot of people with this mindset; they hold to a belief because it makes them happier to, and they would rather be happy and overly optimistic than realistic and disappointed. Having the self-awareness to realize that's exactly what they're doing is somewhat rarer, perhaps because the awareness makes the illusory belief harder to hold to (it starts to take on characteristics of belief-in-belief?).

The maximization of happiness is, of course, a legitimate value to pursue, but not at the expense of the accuracy of the map. That causes more problems than it solves. And for the notion that our optimist is better off with his or her particular rose-tinted glasses on, there's always the Litany of Gendlin.

I don't think it's necessary for each individual to be aware of their own irrationality or try to become more rational or what have you. You don't have to have any formal study in physics to be great at pool, and you don't need formal study in rationality to do well in life or even science specifically. Any flaws in the ability of some individuals to act "rationally" won't matter in the aggregate, because just a small number of people can profit heavily from the economic rent this will leave (in proportion to how much it actually matters) and in the process fix the inefficiency.

"I don't think it's necessary for each individual to be aware of their own irrationality or try to become more rational or what have you." Necessary? True. Human civilisation has progressed quite far without rationality taking an obvious, prominent stand at it's forefront. I wouldn't even say that making rationality worldwide would make life for the average human easier enough to use such a stance as marketing for rationality. But, you are forgetting a rather, in my opinion, obvious benefit of rational thinking: the efficiency of rationality. Suppose I am confronted with a man who was raised believing bananas induce insanity. How can I convince him otherwise? If neither of us are advocates of rational thinking, it could devolve into a shouting match with both of us believing the other completely insane. This is speculation, here. If I'm an advocate of rational thinking, I might suggest experimenting with feeding bananas to previously confirmed sane people as a way to prove him wrong, if I don't think there's a chance of him being right. This taking more time than a shouting match. If I decide to approach the issue with the caution of a scientist, I'd need to approach the issue slowly and cautiously, because I'd need him to monitor my experiments, taking even more time than a shouting match. If we are BOTH rational thinkers, a simple discussion about how many billions of people should be somewhere between frothing at the mouth to ticking homicidal time-bombs(depending upon his personal definition of insanity), taking about the same time as a shouting match. And(hopefully!)leaving him with the conclusion that bananas do NOT induce insanity. I dare you to argue with rationality's efficiency.

Lest you think this a trivial ability, remember how rare it is in the animal kingdom.

I disagree with the notion that the ability to distinguish the map and the territory separates humans from other animals. Consider this: I am nearsighted. When I look at a sign from far away, I can't make out the letters. However, when I look at a human from a similar distance, I can recognize the face. Clearly my facial recognition system has adaptations for working with nearsighted eyes. A lens that can see its own flaws. And this couldn't have evolved only in humans. Mice probably have similar adaptations.

And think about this optical illusion: Nearby objects look bigger than distant objects. Yet we don't think of this as an illusion at all, because we are so good at adjusting to it.

What about this: we have mechanisms to make proteins based on DNA sequences, but do we have any mechanisms for telling whether we have the right DNA sequence? Yes we do. Nearly every organism has error-correcting processes right after replication (where errors are most likely to be created), and many ways to avoid being fooled by viruses.

In none of these cases does the organism make a theory about how its lens is flawed and then correct itself based on the theory. But here the difference is not in seeing flaws; it is that humans make theories with a much higher degree of sophistication than other animals.

I think you have a point, but I'm not sure about your examples:

Clearly my facial recognition system has adaptations for working with nearsighted eyes. A lens that can see its own flaws.

The facial recognition system is working with poor information from the eye, but it is not a part of it; it cannot correct for flaws in itself.

And think about this optical illusion: Nearby objects look bigger than distant objects. Yet we don't think of this as an illusion at all, because we are so good at adjusting to it.

We evolved to do so. There is error correction, yes, but it is fixed; when this misleads us it does not fix itself. (Or does it? Our sensory systems are absurdly adaptable, so I wouldn't be surprised. If so, that would be a good example.)

When I look at a sign from far away, I can't make out the letters. However, when I look at a human from a similar distance, I can recognize the face. Clearly my facial recognition system has adaptations for working with nearsighted eyes. A lens that can see its own flaws. And this couldn't have evolved only in humans. Mice probably have similar adaptations.

[...]

What about this: we have mechanisms to make proteins based on DNA sequences, but do we have any mechanisms for telling whether we have the right DNA sequence? Yes we do. Nearly every organism has error-correcting processes right after replication (where errors are most likely to be created), and many ways to avoid being fooled by viruses.

The DNA replication mechanism relies on proofreading each segment right after it has been appended to the new copy. If the newly added segment differs from the base, it is corrected before the process moves on to the next segment. It's a hard-coded biological mechanism, occurring locally within a cell. [1]

What's uniquely human in this argument is the ability to apply a corrective mechanism on the logical - or epistemological - level. The mechanism itself must be grounded in physical processes happening within our bodies, but it extends to the realm of thoughts. Humans, through evolving culture, found out that there is an innate bias, and then realized that we can make better predictions about the world if we compensate for it. That's what (I think) Eliezer meant by applying second-order corrections to first-order thoughts. The models of our physiology and mental processes produce an estimate of that error: the more accurate the model, the better the estimate of the corrective error, and the more objective the resulting view of reality. Corrective mechanisms on the cellular, tissue, or organ level are present across the whole animal kingdom. In fact, they are the basis of life, but they are not what this article is about.
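As a toy illustration of the distinction drawn above (entirely my own sketch, with hypothetical numbers, not anything from the comment or the article): a hard-coded correction applies the same fixed adjustment forever, while a second-order correction first builds a model of the instrument's own bias from evidence and then corrects with that model.

```python
# Toy sketch, hypothetical numbers: a fixed "hard-coded" correction versus a
# second-order correction derived from a model of the instrument's own bias.

def fixed_correction(reading):
    """First-order: always subtract a built-in offset, like a reflex."""
    return reading - 2.0

def second_order_correction(readings, calibration):
    """Second-order: estimate the instrument's bias from evidence about its
    past errors, then correct new readings using that model of the flaw."""
    estimated_bias = sum(r - truth for r, truth in calibration) / len(calibration)
    return [r - estimated_bias for r in readings]

# The fixed rule never changes, even if the true bias drifts; the second-order
# rule updates because it models its own flaw.
calibration = [(12.5, 10.0), (7.5, 5.0), (22.5, 20.0)]    # (reading, ground truth)
print(fixed_correction(15.0))                             # 13.0
print(second_order_correction([15.0, 30.0], calibration)) # [12.5, 27.5]
```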

Setting this distinction aside, do we actually have any evidence of thinking about thinking being a uniquely human ability? Without doing the heavy lifting of investigating the corpus of data, I'd imagine this ability lives on a spectrum with some of the other species showing at least a minimal degree of self-reflection. My intuition is that a second-order correction wouldn't be possible without linguistic and symbolic capabilities, and traces of these are also present in other animals - like dolphins. 

[1] - https://bio.libretexts.org/Bookshelves/Introductory_and_General_Biology/Map%3A_Raven_Biology_12th_Edition/14%3A_DNA-_The_Genetic_Material/14.06%3A_DNA_Repair 

Eliezer (if you see this): is there a reason you feel the need to talk about Everett branches or Tegmark duplicates every time you speak about the interpretation of probability, or is it just a physically realisable way to talk about an ensemble? (Though I'm not sure if you can call them physically realisable if you can't observe them.)

I only recently got involved with LessWrong, and I'd like to explicitly point out that this is a tangent. I made this account to make an observation about the following passage:

Some clever fellow is bound to say, "Ah, but since I have hope, I'll work a little harder at my job, pump up the global economy, and thus help to prevent countries from sliding into the angry and hopeless state where nuclear war is a possibility. So the two events are related after all." At this point, we have to drag in Bayes's Theorem and measure the charge of entanglement quantitatively. Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only carries a very tiny charge of entanglement, will still mess up your mapping.

First, let me say that I agree with your dismissal of the instance, but I think the idea suggests another argument that is interesting and somewhat related. While the accuracy of an estimate relies very little upon an individual's beliefs or actions, similar to explanations of the Prisoner's Dilemma or why an individual should vote, I can see a reasonable argument that a person's beliefs can represent a class of individuals that can actually affect probabilities.

Arguing that hope makes the world better and so staves off war still seems silly, as the effect would still likely be very small, and instead I argue from the perspective of "reasonableness" of actions. I read "pure hope" as revealing a kind of desperation, representing an unwillingness to consider nuclear war a reasonable action in nearly any circumstance. A widespread belief that nuclear war is an unreasonable action would certainly affect the probability of a nuclear war occurring, both for political reasons (fallout over such a war) and statistical ones (government officials are drawn from the population), and so such a belief could actually have a noticeable effect on the possibility of a nuclear war. Furthermore, it can be argued that, for a flesh-and-blood emotional being with a flawed lens, viewing a result as likely could make it seem less unreasonable (more reasonable). As such, one possible argument for why nuclear war may happen later than earlier would look like: Nuclear war is widely regarded as an unreasonable action to take, and the clear potential danger of nuclear war makes this view unlikely to change in the foreseeable future.

Following this, an argument that it is beneficial to believe that nuclear war will happen later: Believing that nuclear war is likely could erode the seeming "unreasonableness" of the action, which would increase the likelihood of such a result. As a representative of a class of individuals who are thus affected, I should therefore believe nuclear war is unlikely, so as to make it less likely.

I am not claiming I believe the conclusions of this argument, only that I found the argument interesting and wanted to share it. The second argument is also not an argument for why it is unlikely, and is rather an argument for why to believe it is unlikely, independent of actual likelihood, which is obviously something a perfect rationalist should never endorse (and why the argument relies on not being a perfect rationalist). If anyone is interested in making them, I'd like to hear any rebuttals. Personally, I find the "belief erodes unreasonableness" part the most suspect, but I can't quite figure out how to argue against it without essentially saying "you should be a better rationalist, then".

What evidence is there for mice being unable to think about thinking? Due to communication issues, mice can't say whether they can think about thinking or not.

"drag in Bayes's Theorem": the link in that phrase was moved to http://yudkowsky.net/rational/bayes/, but Eliezer seems to suggest https://arbital.com/p/bayes_rule/?l=1zq over it (and it's really, really good).

Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only slightly increases your chance of being right, will still mess up your mapping.

I only need to assume that everybody else, or at least many other people, are as irrationally optimistic as I am, and then the effect of optimism on the world could well be significant and make a 20% change? The assumption is not at all far-fetched.

I've probably committed a felony by doing this, but I'm going to post a rebuttal written by GPT-4, and my commentary on it. I'm a former debate competitor and judge, and have found GPT-4 to be uncannily good at debate rebuttals. So here is what it came up with, and my comments. I think this is a relevant comment, because I think what GPT-4 has to say is very human-relevant.

Radiations from the Sun bounce off a piece of cheese and enter into the beady eyes of a mouse; its retinal cells detect the light; the energy of the photons triggers neural impulses; these impulses are transmitted to the visual-processing areas of the mouse's brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as a piece of cheese; and so the mouse knows there is cheese nearby.

This is the essence of the intuitive wisdom of mice—this whole process is not magic, and they don't have to understand it. They understand the presence of cheese. They instinctively know which sensory processes will lead to survival, and which will not.

I find this mirroring from "deliberate rationality" to "intuitive wisdom", and the equating of cognition to instinct, very interesting and not at all obvious, even from the perspective of a human former debate judge. It's a great rebuttal IMO. It points out a human deficiency, our inability to detect cheese, which is arguably more important to mice than their ability to philosophise.

Humans can see, but they overcomplicate seeing. Because of that, they often miss what is right in front of them. Take a moment to appreciate this, for it is indeed fascinating.

IMO another interesting insight -- what does "can't understand seeing" from the human even mean? I'd count this as another decent debate rebuttal probably +1 for team mouse.

Humans see, but they get lost in their understanding of visual cortexes, often making them susceptible to optical illusions. A human lives in a mental world that includes jobs, responsibilities, societal pressures—but not human instincts. Their mind does not often focus on their immediate environment. But we, as mice, can detect a morsel of food from a distance, or sense a lurking predator, without getting distracted by the nature of our senses. Humans often fall victim to their beliefs, but mice react to what is—there must be a clear distinction between survival and theory, between senses and reality. And this is not a trivial skill, remember how rare it is among humans.

If I were judging a debate between mice and humans, I would score this for the mice. The human is arguing "But we, as humans, can look at a seemingly bizarre image, and realize that part of what we're seeing is the lens itself", whereas the mouse is arguing that its abilities are in tune with survival over theory, and how deficient that balance sometimes is among humans. I like this counter-argument for the mice: practicality over self-perceived superiority. Empathising with other species' values is something that even my more philosophical human friends struggle with.

The whole idea of Instinct is, simply, intuitive reasoning about a more reliable process for making the contents of our mind mirror the contents of the world. It is the sort of thing humans often forget. Contemplating this business of “responding to immediate needs and threats,” we can see why it works. Instinct is not separate from reality, far away from the understanding of humans. Instinct is not something that only applies to the wilderness. Instinct, itself, is an understandable process-in-the-world that correlates brains with reality.

Lots of parroting here, but the switch from "inside laboratories" to "the wilderness", and the argument that instinct is a better alignment strategy than science, are both very interesting to me. I wouldn't award any points here, pending more arguments.

Instinct makes sense, when you think about it. But humans often overthink, which is why they lose touch with their instincts. One should not overlook the wonder of this—or the potential power it bestows on us as mice, not just animal societies.

I found this quote inspiring, if I was a mouse or other animal. I may have to print a "mouse power" t-shirt.

Indeed, understanding the engine of thought may be more complex than understanding a mouse's instinct—but it is a fundamentally different task.

A mouse's instinct is being equated to a steam engine; an interesting pivot, but the contrasting statements still hold water compared to the original, IMO.

Consider a time when a human may be anxious about the possibility of a future war. "Do you believe a nuclear war will occur in the next 20 years? If no, why not?" The human may reply with a timeline of a hundred years because of "hope." But why cling to hope? Because it makes them feel better.

Reflecting on this whole thought process, we can see why the thought of war makes the human anxious, and we can see how their brain therefore clings to hope. But in a world of realities, hope will not systematically correlate optimists to realities in which no war occurs.

To ask which beliefs make you happy is to turn inward, not outward—it tells you something about yourself, but it is not evidence entangled with the environment. There is nothing wrong with happiness, but it should follow from your perception of the world, rather than manipulating your mental map.

Very much parroting here, but I would note "manipulating your mental map" as a counterpoint to "tampering with mental paintbrushes" is an interesting equivalency. I also respect the re-framing of hope as a human flaw, in contrast with the reality-based instincts of a mouse.

If you can see this—if you can see that hope is shifting your first-order thoughts by too large a degree—if you can understand your mind as a mapping engine that has flaws—then you can apply an intuitive correction. The brain is a flawed lens through which to see reality. This is true of both human brains and mouse brains. But a mouse brain is a flawless lens that instinctively understands its environment and reacts accordingly—without second-guessing its instincts, without biases, and without the need for second-order corrections. This, in practice, makes the lens far more efficient. Not perfect, but far more efficient.

Arguing for efficiency over power, and reality over perception, would be an interesting avenue to pursue as a debate judge, as would the concept of a mouse brain being flawless, as an argument presented by an AI.

In the above paragraph, it ran out of tokens after "--without", so I prompted it "That's great, please finish the essay," and everything after that (above and below) is what followed.

Unlike humans, mice do not overthink or overcomplicate. They live in the present, focused on their immediate needs and potential threats. They don't have to understand their senses; they simply use them. They don't ponder about the existence of mouse brains; they utilize their brains efficiently.

As a debate judge, I'd call this a pretty decent summary of the key rebuttals.

Human self-consciousness, while enabling them to create and innovate, also leads them to constant self-doubt and uncertainty, often resulting in inaction or wrong action. Mice, on the other hand, trust their instincts and react swiftly and decisively, making them highly effective in their world.

While humans are busy understanding and correcting their flawed lens, mice are using their lens to navigate the world, survive, and thrive. While humans are stuck in a loop of reflective reasoning and corrections, mice are out there, living their life, applying their senses and instincts to the best of their abilities. They are not bothered about the nature of their lens, they are just using it.

A solid foundational rebuttal of the type I would have used back in my days of competitive debate. Probably better than anything I would have written on the fly.

In the end, it's not about which brain is superior, but about which brain is more efficient for the life it leads. And from a mouse's perspective, the ability to respond instinctively to the environment without the need for constant reflection and correction seems like a pretty great deal. Perhaps it's time humans took a page out of the mouse's book and learned to trust their instincts more, and their overthinking less.

Great re-framing of a debate (efficiency vs power or creativity).

For a formal debate, I would rate GPT-4's rebuttal very high in a real-world "humans vs mice" debate scenario. The outcome of Eliezer vs Team Mouse would almost certainly come down to delivery, given the well-reasoned arguments on both sides given above. Overall, well above the quality of argument I would expect from top-tier debate teams at the high school level, and above average for the college level.

I've experimented with doing Lincoln-Douglas style debates with multiple GPT-powered "speakers" with different "personalities", and it's super interesting and a great brainstorming tool. Overall I consider GPT-4 to be vastly superior to the average twelfth-grader in general purpose argumentative debating, when prompted correctly.

Hopefully this is constructive and helps people get back to the basics -- questioning human-centric thinking, trying to understand what alien intelligence may look like, and how it may challenge entrenched human biases!

Let's see - in this post the author thinks about thinking about thinking; so: third order, right? And this comment: fourth?