Buried somewhere in Eliezer's writings is a sentence essentially like the following:
"Intentional causes are made of neurons. Evolutionary causes are made of ancestors."
I remember this quite well because of my strange reaction to it. I understood what it meant pretty well, but upon seeing it, some demented part of my brain immediately constructed a mental image of what it thought an "evolutionary cause" looked like. The result was something like a mountain of fused-together bodies (the ancestors) with gears and levers and things (the causation) scattered throughout. "This," said that part of my brain, "is what an evolutionary cause looks like, and like a good reductionist I know it is physically implicit in the structure of my brain." Luckily it didn't take me long to realize what I was doing and reject that model, though I am just now realizing that the one I replaced it with still had some physical substance called "causality" flowing from ancient humans to my brain.
This is actually a common error for me. I remember I used to think of computer programs as these glorious steampunk assemblies of wheels and gears and things (apparently gears are a common visual metaphor in my brain for things it labels as complex) floating just outside the universe with all the other platonic concepts, somehow exerting their patterns upon the computers that ran them. It took me forever to figure out that these strange thingies were physical systems in the computers themselves, and a bit longer to realize that they didn't look anything like what I thought they did. (I still haven't bothered to find out what they really are, despite having a non-negligible desire to know.)

And even before that -- long before I started reading Less Wrong, or even adopted empiricism (which may or may not have come earlier) -- I decided that because the human brain performs computation, and (it seemed to me) all computations were embodiments of some platonic ideal, souls must exist. Which could have been semi-okay, if I had realized that calling it a "soul" shouldn't allow you to assume it has properties that you ascribe to "souls" but not to "platonic ideals of computation".
Are errors like this common? I talked to a friend about it and she doesn't make this mistake, but one person is hardly a good sample. If anyone else is like this, I'd like to know how often it causes really big misconceptions and whether you have a way to control it.
Platonism can be a bit of a double-edged sword. On the one hand, it can make certain concepts a bit easier to visualize, like imagining that probabilities are over a space of "possible worlds" — you certainly don't want to develop your understanding of probability in those terms, but once you know what probabilities are about, that can still be a helpful way to visualize Bayes's theorem and related operations. On the other hand, this seems to be one of the easiest ways to get caught in the mind projection fallacy and some of the standard non-reductionist confusions.
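To make the "possible worlds" picture concrete, here is a minimal sketch (my own toy example, with made-up numbers, not anything from the post): treat a probability distribution as weights over a small set of worlds, and conditioning on evidence is just restricting to the worlds where the evidence holds and renormalizing. Bayes's theorem falls out of the bookkeeping.

```python
# Toy "possible worlds" model: each world is a (health, test_result) pair.
# The weights are hypothetical, chosen only for illustration:
# P(sick) = 0.01, P(positive | sick) = 0.9, P(positive | healthy) = 0.05.
worlds = {
    ("sick", "positive"):    0.01 * 0.9,
    ("sick", "negative"):    0.01 * 0.1,
    ("healthy", "positive"): 0.99 * 0.05,
    ("healthy", "negative"): 0.99 * 0.95,
}

def prob(predicate):
    """P(A) = total weight of the worlds where the predicate holds."""
    return sum(p for w, p in worlds.items() if predicate(w))

# Condition on a positive test: restrict to those worlds, renormalize.
p_positive = prob(lambda w: w[1] == "positive")
p_sick_and_positive = prob(lambda w: w[0] == "sick" and w[1] == "positive")
p_sick_given_positive = p_sick_and_positive / p_positive  # Bayes's theorem
```

Nothing here requires the worlds to "exist" anywhere; they're just an accounting device, which is the sense in which the visualization is useful without being load-bearing.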
Generally, I allow myself to use Platonist and otherwise imaginary visualizations, as long as I can keep the imaginariness in mind. This has worked well enough so far, particularly because I'm rather confused about what "existence" means, and am wary of letting it make me think I understand strange concepts like "numbers", "universes", etc. better than I really do. Though sometimes I do wonder if any of my visualizations are leading me astray. My visualization of timeless physics, for instance: I'm a bit suspicious of it, since I don't really know how to do the math involved, and so I try not to take the visualization too seriously in case I'm imagining the wrong sort of structure altogether.
Look what up, exactly?
Well said.
Oh, sorry, I thought that was clear. I want to find out what the physical systems in a computer actually look like. Right now all I (think I) know is that RAM is electricity.
Edited to make this clearer.