I see a self-reference problem with reductionism. I wonder if this has already been solved. So I'm asking.
Best as I can tell, there aren't actually things in reality. That's a human interpretation. It collapses upon almost any inspection, like the Ship of Theseus or the paradox of the heap. We also see the theory of thing-ness collapsing with physical inspection, which is why QM is "weird".
Best as I can tell, all thing-ness arises from reification. Like how we talk about "government" like it's a thing, but really we've just thingified a process. "The weather" is another clear-to-me example.
It seems to me that physical objects are exactly the same in this respect: a child interacts with swirling sense perceptions and reifies (i.e. thingifies) those experiences into "a ball" or whatever.
So how does reification happen?
- Well, it's not like there's a thing that reification is; it's just a process that a human mind does.
- Okay, so what's a human mind? Well, it's a process that the human brain engages in.
- So what's a brain? A configuration of chemicals.
- The chemicals are atoms, which are patterns of wiggling magical reality fluid from QM, which is maybe just made of mathematical structures.
So… when do we get to the place where we aren't using objects to explain how the impression of objects arises?
This puzzle shows up in the Many Worlds view of QM. It's roughly equivalent to "How do worlds arise?"
Two things (!) get entangled via an interaction. When one of those things is a human brain, we see the various possibilities, but as various versions of that brain which, from their own perspectives, are no longer interacting with one another. So instead of seeing all the quantum superposed configurations at once, each version of us observes just one configuration.
Okay, great.
So where are these brains that are getting entangled with other things? Aren't these brains made of the same quantum soup as everything else?
This Many Worlds thing makes a lot of sense if you're seeing the situation from the outside, where you can safely reify everything without self-reference. You can be a brain looking at a situation you're not in.
But we're in the reality where this happens. We're embedded agents. The brains doing this reification are somehow arising from the very process they're attempting to explain, which they meta-explain by… reifying themselves?
Which is to say, brains exist as reifications of brains.
So WTF is a brain??
What is reality actually doing here?
What is going on before whatever it is reflects on and reifies itself as "a brain" or "a human" or whatever?
What is that which comes before thing-ness?
I find it hard to talk clearly about this puzzle.
Best as I can tell, language assumes the objectivity of things as its foundation. I have not found a way to write a clear grammatical sentence without at least implicitly using nouns or gerunds.
E.g., "What is that which comes before thing-ness?" assumes the answer will be a thing, which explicitly it cannot be.
Poetry sometimes sidesteps this limitation but at the cost of precision.
Please be forgiving of my attempt to be clear using a medium that I find doesn't allow me to speak coherently about this.
If you care to articulate the puzzle better than I have, I'm all ears. I'd love to see how to use language more skillfully here.
I also would very much like to know if there's already a known answer that doesn't defeat itself by ignoring the question.
("Oh, brains are just processes that arise from the laws of physics." Okay. So, like, what are these "laws of physics" and these "processes" prior to there being a brain to interpret them as those things as opposed to there just being more swirling magical reality fluid?)
Regarding the first part, here's what comes to mind: Long before brains evolved any higher capacities (for "conscious", "self-reflective", etc. thought), they evolved to make their hosts respond to situations in "evolutionarily useful" ways. If you see food, some set of neurons fire and there's one group of responses; if you see a predator, a different set of neurons fire.
Then you might define "food (as perceived by this organism)" to be "what tends to make this set of neurons fire (when light reflects off it (for certain ranges of light) and reaches the eyes of this organism)". Boundary conditions (like something having a color that's on the edge of what is recognized as food) are probably resolved "stochastically": whether something that's near the border of "food" actually fires the "food" neurons probably depends significantly on silly little environmental factors that normally don't make a difference; we tend to call this "random" and say that this almost-food thing has a 30% chance of making the "food" neurons fire.
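To make the "stochastic boundary" idea concrete, here's a minimal Python sketch. Everything in it is invented for illustration (the food_likeness score, the 0.5 threshold, the Gaussian jitter), and it's not a claim about how real neurons work; the point is just that far from the boundary the noise is practically irrelevant, while near the boundary the same stimulus fires the "food" group on some occasions and not on others.

```python
import random

def food_neurons_fire(food_likeness: float, noise_scale: float = 0.05) -> bool:
    """Toy "food detector": the group fires when perceived food-likeness,
    plus a tiny amount of environmental noise, clears a fixed threshold.

    food_likeness: 0.0 = clearly not food, 1.0 = clearly food.
    Far from the 0.5 threshold the noise almost never matters; near it, the
    same stimulus fires the group on some occasions and not on others.
    """
    jitter = random.gauss(0.0, noise_scale)  # "silly little environmental factors"
    return food_likeness + jitter > 0.5

def firing_rate(food_likeness: float, trials: int = 10_000) -> float:
    """Estimate how often the group fires for a given stimulus."""
    return sum(food_neurons_fire(food_likeness) for _ in range(trials)) / trials

if __name__ == "__main__":
    for x in (0.2, 0.45, 0.48, 0.50, 0.55, 0.8):
        print(f"food-likeness {x:.2f} -> fires ~{firing_rate(x):.0%} of the time")
```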
There probably are some self-reinforcing things that happen, to try[1] to make the neurons resolve one way or the other quickly, and to some extent quick resolution is more important than accuracy. (See Buridan's principle: "A discrete decision based upon an input having a continuous range of values cannot [always] be made within a bounded length of time.") Also, extremely rare situations are unimportant, evolutionarily speaking, so "the API does not specify the consequences" for exactly how the brain will respond to strange and contrived inputs.
("This set of neurons fires" is not a perfectly well-defined and uniform phenomenon either. But that doesn't prevent evolution from successfully making organisms that make it happen.)
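Here's a similarly made-up sketch of the "self-reinforcing, resolve quickly" idea, and of why Buridan's principle bites: a single decision variable gets pushed by the evidence, amplified by weak positive feedback, and jostled by noise until it commits one way or the other. The feedback, noise, and threshold numbers are arbitrary; the point is that near-balanced inputs take much longer to commit, and sufficiently contrived ones might not commit within the time limit at all.

```python
import random

def resolve(evidence: float, feedback: float = 0.005, noise: float = 0.01,
            threshold: float = 5.0, max_steps: int = 10_000):
    """Toy self-reinforcing decision: an activation variable is pushed by the
    evidence (positive = "food", negative = "not food"), amplified by weak
    positive feedback, and jostled by noise until it crosses a commitment
    threshold. Returns (decision, steps_taken).
    """
    x = 0.0
    for step in range(1, max_steps + 1):
        x += evidence + feedback * x + random.gauss(0.0, noise)
        if abs(x) >= threshold:
            return ("food" if x > 0 else "not food"), step
    return "unresolved", max_steps  # rare, contrived inputs may never commit

if __name__ == "__main__":
    # Clear evidence commits almost immediately; near-balanced evidence
    # takes far longer -- Buridan's principle in miniature.
    for e in (0.5, 0.05, 0.005, 0.0):
        decision, steps = resolve(e)
        print(f"evidence {e:+.3f} -> {decision!r} after {steps} steps")
```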
Before brains (and alongside brains), organisms could adapt in other ways. I think the advantage of brains is that they increase your options, specifically by letting you choose and execute complex sequences of muscular responses to situations in a relatively cheap and sensitive way, compared to rigging up Rube Goldberg macroscopic-physical-event machines that could execute the same responses.
Having a brain with different groups of neurons that execute different responses, and having certain groups fire in response to certain kinds of situations, seems like a plausibly useful way to organize the brain. It would mean that, when fine-tuning how group X of neurons responds to situation Y, you don't have to worry about what impacts your changes might have in completely different situations ABC that don't cause group X to fire.
I suspect language was ultimately built on top of the above. First you have groups of organisms that recognize certain things (i.e. they have certain groups of neurons that fire in response to perceiving something in the range of that thing) and respond in predictable ways; then you have organisms that notice the predictable behavior of other organisms, and develop responses to that; then you have organisms noticing that others are responding to their behavior, and doing certain things for the sole purpose[1] of signaling others to respond.
Learning plus parent-child stuff might be important here. If your helpless baby responds (by crying) in different ways to different problems, and you notice this and learn the association, then you can do better at helping your baby.
Anyway, I think that at least the original notion of "a thing that I recognize to be an X" is ultimately derived from "a group of neurons that fire (reasonably reliably) when sensory input from something sufficiently like an X enters the brain".

Originally, the neuronal connections (and the concepts we might say they represented) were probably mostly hardcoded by DNA; later they probably developed a lot of "run-time configuration" (i.e. the DNA lays out processes for having the organism learn things, ranging from "what food looks like" [and having those neurons link into the hardcoded food circuit], through learning to associate mostly-arbitrary "language" tokens to concepts that existing neuron-groups recognize, to having general-purpose hardware for describing and pondering arbitrary new concepts). But I suspect that the underlying "concept X <--> a group of neurons that fires in response to perceiving something like X, which gates the organism's responses to X" organization principle remains mostly intact.
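If it helps, here's a toy restatement of that organizing principle in code. This is my framing rather than anything drawn from neuroscience, and all the names and thresholds are made up: each "concept" is just a detector that fires on stimuli sufficiently like its prototype and gates one response, learning can add new detectors at "run time", and re-tuning one detector's trigger doesn't touch the responses gated by the others.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Stimulus = Dict[str, float]  # toy "sensory input": named features in [0, 1]

@dataclass
class NeuronGroup:
    """A group that fires when the stimulus looks enough like its concept,
    and that gates one specific response when it does."""
    concept: str
    fires: Callable[[Stimulus], bool]
    response: str

def hardcoded_groups() -> List[NeuronGroup]:
    # Stand-ins for DNA-hardcoded circuits; features and thresholds are arbitrary.
    return [
        NeuronGroup("food",     lambda s: s.get("food_likeness", 0.0) > 0.5, "approach and eat"),
        NeuronGroup("predator", lambda s: s.get("predator_likeness", 0.0) > 0.5, "flee"),
    ]

def respond(groups: List[NeuronGroup], stimulus: Stimulus) -> List[str]:
    """Only the groups that fire get to influence behaviour; re-tuning one
    group's trigger doesn't touch the responses gated by the others."""
    return [g.response for g in groups if g.fires(stimulus)]

if __name__ == "__main__":
    groups = hardcoded_groups()
    # "Run-time configuration": learning adds a new detector without rewiring the old ones.
    groups.append(NeuronGroup("ripe berry", lambda s: s.get("redness", 0.0) > 0.7, "approach and eat"))
    print(respond(groups, {"food_likeness": 0.8, "redness": 0.9}))  # ['approach and eat', 'approach and eat']
    print(respond(groups, {"predator_likeness": 0.9}))              # ['flee']
```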
[1] Anthropomorphic language shorthand for the outputs of evolutionary selection.