This is the second of four short essays that say explicitly some things that I would tell an intrigued proto-rationalist before pointing them towards Rationality: AI to Zombies (and, by extension, most of LessWrong). For most people here, these essays will be very old news, as they talk about the insights that come even before the sequences. However, I've noticed recently that a number of fledgling rationalists haven't actually been exposed to all of these ideas, and there is power in saying the obvious.

This essay is cross-posted on MindingOurWay.


A note on what sort of artifact a brain is:

A brain is a specialty device that, when slammed against its surroundings in a particular way, changes so that its insides reflect its outsides. A brain is a precise, complex machine that continually hits nearby things just so, so that some of its inner bits start to correlate with the outside world.

Consider the photons bouncing off the chair in the room where I write this. In coarse summary, those photons slam into specialized proteins in the membrane of my photoreceptor cells, changing their shape and setting off a chain reaction that activates an enzyme that breaks down certain nucleotides, thereby changing the electrochemical gradient between the inside and the outside of the cell, preventing the release of certain neurotransmitters through its membrane. This lack of neurotransmitters causes nearby cells to undergo similar ionization events, and those cells transmit the signal from a number of nearby photoreceptor cells into the next layer of my retinal cells (again, by the mechanism of proteins changing shape and altering the electrochemical gradient). And that's just the very beginning of a looooong Rube Goldberg machine: the signal then makes its way down the retina (interacting, at each level, with signals from higher levels) until it's passed to the optic nerve, which carries it to the visual cortex, where the specific pattern of nerve cell ionization events causes a specific pattern of neurons to fire, setting off a cascade of neurons-affecting-other-neurons in a domino effect that results in the inside of my brain containing a tiny summarized representation of a chair.

A brain is a complex piece of machinery that, when immersed in a big soup of photons while connected to light-sensors, undergoes a massive chain reaction that causes the inner parts of the brain to correlate with the things the photons bounced off of.

A brain is a machine that builds up mutual information between its internals and its externals.
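"Mutual information" here is the standard information-theoretic quantity: how many bits knowing one variable tells you about another. As an illustration (not anything from the essay itself), here is a small sketch that simulates a hypothetical noisy "sensor" copying a coin-flip "world" 90% of the time, and estimates the mutual information between world and sensor from samples:

```python
import random
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(X; Y) in bits from a list of (x, y) samples,
    using empirical probabilities."""
    n = len(pairs)
    joint = Counter(pairs)                 # counts of each (x, y) pair
    px = Counter(x for x, _ in pairs)      # marginal counts of x
    py = Counter(y for _, y in pairs)      # marginal counts of y
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p_xy * log2( p_xy / (p_x * p_y) ), with counts folded in
        mi += p_xy * log2(p_xy * n * n / (px[x] * py[y]))
    return mi

random.seed(0)
# "World": a fair coin flip. "Brain": a sensor that copies the world
# 90% of the time and flips it the other 10% of the time.
samples = []
for _ in range(100_000):
    world = random.randint(0, 1)
    sensed = world if random.random() < 0.9 else 1 - world
    samples.append((world, sensed))

print(round(mutual_information(samples), 3))
```

Even a 90%-reliable channel falls well short of a full bit: the analytic value is 1 − H(0.1) ≈ 0.531 bits, and the empirical estimate lands close to that. A brain, on this view, is a machine whose bumping-into-the-world drives quantities like this upward.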


Of course, a brain is not only a mutual information machine. A brain also does many other things. Parts of the Rube Goldberg machine predict the future. Other parts create plans, and somehow the artifact implements a consciousness, which is pretty dang impressive.

Furthermore, the brain was definitely not designed to be a mutual information machine. There's not a crisp "information machine" part of the artifact that can be separated out from the predictor and the planner and the feeler.

And, of course, the brain is not a perfect information machine. Far from it.

But though brains are not only information machines, and though they are not intentionally designed information machines, and though they are not perfect information machines, they definitely are information machines: one of the things your brain is doing, right now, is hitting the environment in just such a way so as to hone an internal model of reality.


Most people already know that what they perceive is not reality itself, but rather a representation of the outer world rebuilt inside their heads. And yet, that knowledge often leads people to visualize a homunculus sitting inside a brain-shaped room watching a video feed.

It helps, instead, to visualize the brain as a blind Rube Goldberg machine cleverly constructed so that when it slams against the rest of reality, the shrewdly placed wheels and gears line up just right, such that a tiny, summarized map of the world arises inside the machine.

I often find that this visualization un-sticks something for many people. It reminds people that brains are an artifact, a real thing that has to hit its surroundings in order to summarize them. It reminds people that every artifact is blind, that the only way to get a model of the world inside is to bump into the outside enough that it's possible for the innards to correlate with the outards.

From this vantage point, it is easier to see the need for the art of human rationality — for we are artifacts, and we are blind.

2 comments

[anonymous] · 9y

'What is a RG machine? Seems a bit abruptly introduced.' (I'll Google it, of course.)

A Rube Goldberg machine is a device which accomplishes a (relatively) simple task in a ludicrously complicated and suboptimal way, such as a device which squeezes one's toothpaste through an intricate system of pulleys, weights, and trained hamsters. While it gets the job done (eventually), it's incredibly inefficient, and most of the effects it produces are intermediate products.