I see a self-reference problem with reductionism. I wonder if this has already been solved. So I'm asking.

Best as I can tell, there aren't actually things in reality. That's a human interpretation. It collapses upon almost any inspection, like the Ship of Theseus or the paradox of the heap. We also see the theory of thing-ness collapsing with physical inspection, which is why QM is "weird".

Best as I can tell, all thing-ness arises from reification. Like how we talk about "government" like it's a thing, but really we've just thingified a process. "The weather" is another clear-to-me example.

It seems to me that physical objects are exactly the same in this respect: a child interacts with swirling sense perceptions and reifies (i.e. thingifies) those experiences into "a ball" or whatever.

So how does reification happen?

  • Well, it's not like there's a thing that reification is; it's just a process that a human mind does.
  • Okay, so what's a human mind? Well, it's a process that the human brain engages in.
  • So what's a brain? A configuration of chemicals.
  • The chemicals are made of atoms, which are patterns of wiggling magical reality fluid from QM, which is maybe just made of mathematical structures.

So… when do we get to the place where we aren't using objects to explain how the impression of objects arises?


This puzzle shows up in the Many Worlds view of QM. It's roughly equivalent to "How do worlds arise?"

Two things (!) get entangled via an interaction. When one of those things is a human brain, the various possibilities all still occur, but in various brains that, from their own perspectives, are no longer interacting with one another. So instead of seeing all quantum superposed configurations at once, each version of us observes just one configuration.
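Here's a minimal numerical sketch of that entangling step, just to make it concrete. The two-state "observer" and the CNOT standing in for "an interaction" are toy assumptions, not a model of real decoherence dynamics:

```python
import numpy as np

# Toy sketch: a qubit in superposition entangles with a two-state
# "observer". Joint basis order: |00>, |01>, |10>, |11>, with the
# first slot the qubit and the second the observer.
qubit = np.array([1, 1]) / np.sqrt(2)   # (|0> + |1>) / sqrt(2)
observer = np.array([1, 0])             # observer starts "ready"
joint = np.kron(qubit, observer)

# "An interaction": the observer's state flips iff the qubit is |1>.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
entangled = cnot @ joint                # (|00> + |11>) / sqrt(2)

# The qubit's state alone, with the observer traced out:
rho = np.outer(entangled, entangled.conj()).reshape(2, 2, 2, 2)
rho_qubit = np.einsum('iaja->ij', rho)  # partial trace over observer
print(np.round(rho_qubit, 3))
# [[0.5 0. ]
#  [0.  0.5]]
# The off-diagonal (interference) terms vanish: relative to either
# branch of the observer, there is just one definite outcome.
```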

Okay, great.

So where are these brains that are getting entangled with other things? Aren't these brains made of the same quantum soup as everything else?

This Many Worlds thing makes a lot of sense if you're seeing the situation from the outside, where you can safely reify everything without self-reference. You can be a brain looking at a situation you're not in.

But we're in the reality where this happens. We're embedded agents. The brains doing this reification are somehow arising from the very process they're attempting to explain, which they meta-explain by… reifying themselves?

Which is to say, brains exist as reifications of brains.

So WTF is a brain??

What is reality actually doing here?

What is going on before whatever it is reflects on and reifies itself as "a brain" or "a human" or whatever?

What is that which comes before thing-ness?


I find it hard to talk clearly about this puzzle.

Best as I can tell, language assumes the objectivity of things as its foundation. I have not found a way to write a clear grammatical sentence without at least implicitly using nouns or gerunds.

E.g., "What is that which comes before thing-ness?" assumes the answer will be a thing, which explicitly it cannot be.

Poetry sometimes sidesteps this limitation but at the cost of precision.

Please be forgiving of my attempt to be clear using a medium that I find doesn't allow me to speak coherently about this.

If you care to articulate the puzzle better than I have, I'm all ears. I'd love to see how to use language more skillfully here.

I also would very much like to know if there's already a known answer that doesn't defeat itself by ignoring the question.

("Oh, brains are just processes that arise from the laws of physics." Okay. So, like, what are these "laws of physics" and these "processes" prior to there being a brain to interpret them as those things as opposed to there just being more swirling magical reality fluid?)

7 Answers

Ape in the coat (Jun 10, 2023)

It's clear to me that this is a simple case of map-territory confusion, though I feel a bit weird explaining it to a co-founder of CFAR.

"Thingness" comes from the map. Referents of "things" can be in the territory. "Realityfluid" - terrible name by the way, lets call it "fundamentals" - is the territory. Territory exists regardless whether it's interpreted by a subject of cognition or not, but subjects of cognition understand the territory only through a map. "Laws of physics" can be about properties of the fundamentals or about their reificated representation on the map and we need to be careful not to confuse these two

Also, my being a cofounder of CFAR doesn't mean I'm immune to sufficiently complex basic confusions! This might be simple to clear up. But my mind is organized right now such that saying "map vs. territory" just moves the articulation around. It doesn't address the core issue whatsoever, from what I can tell.

Ape in the coat:
Sure. The part about me being weirded out was me noticing my own confusion: someone who I expect to know and understand what I know and understand is confused about a thing that I'm not. Which can very well mean that it's me who is missing some crucial detail and the clarity that I experience is false. And I'm mentally preparing myself for it. On a reread I noticed that my phrasing can be interpreted as a status-related reproach. I don't remember having intended it, and I'm sorry it turned out that way.
the gears to ascension:
That was where status came into it: evaluating the author to predict word quality.
TAG:

The map/territory distinction allows you to state a range of positions, and maybe make a few claims like "just because it's in the map, it doesn't mean it's in the territory". It's not a solution to everything.

Ape in the coat:
Not to everything, no. This framework helps to clear up the standard confusion in some philosophical questions, typically the ones phrased with such words as 'real', 'non-real', 'objective', and 'subjective'.
TAG:
If you are talking about mainstream, professional philosophy, then no, because it's very basic in that context. If you are talking about the average person, then yes, it's a very useful first step. (Mainstream philosophers may not use the exact words, but that is of little significance.)
Ape in the coat:
Do questions regarding the reality of some phenomena, for instance morality and mathematical objects, continue to be open problems for mainstream, professional philosophy? If so, it seems this very basic first step could be very helpful for at least some mainstream philosophers. Of course there are philosophers who understand it well; after all, this idea itself was developed by philosophers.

"Realityfluid" - terrible name by the way, lets call it "fundamentals"

I don't think that quite captures what I was pointing at. I'll buy there are better words for it, but I don't just mean "fundamentals". Or at least that phrasing feels meaningfully inaccurate to me.

I picked up the phrase "magical reality fluid" from a friend who was deep into mathematical physics. He used it the same way rationalists use (or at least used to use?) "magic": "By some magic cognitive process…." The idea being to name it in a silly & mysterious-sounding way to emphasize …

Ape in the coat:
I think fundamentals = realityfluid in this definition, in the case that realityfluid doesn't consist of even simpler elements, which is possible but which we do not need to commit to for the sake of this discussion. I don't like the term "realityfluid" being used for the most fundamental elements of the universe because 1) it's made from two words, which is a terrible fit for something that by definition isn't made from anything else; 2) it has "real" in it, and the "real/unreal" distinction is confusing and strictly inferior to "map/territory". I don't mind preserving the reminder that we do not know much about the actual fundamental stuff. Let's call it "mages" instead of "fundamentals", then. A short word, and the idea that wizards are the fundamental elements of reality sounds even more ridiculous than some kind of magical fluid.

Yep. The trouble is that all maps are in the territory. Even "territory" in "map vs. territory" is actually a map embedded in… something. ("The referent of 'territory'," though saying it this way just recurses the problem, as if reference itself were a more fundamental reality than either maps or the referent of "territory".)

So solving this by clearing up the map/territory distinction is about creating a map within which you can have "map" separate from a "territory". The true territory (whatever that is) doesn't seem to me to make such a distinction.

The is…

Ape in the coat:
This recursion itself is an artifact of the fact that we can comprehend the territory only through maps. And it exists only in our map, not in the territory. Try reasoning on a fixed level, carefully noticing which elements are part of a map and which are part of a territory for this level. Then you can generalise this reasoning for every level of recursion. I think you took a wrong turn here. By "reference" do you mean the ability of a map to correspond to a territory? The territory is just a lot of fundamentals. The properties of these fundamentals turned out to allow specific configurations of fundamentals that we call "brains" to arrange themselves in patterns that we call "having a map of a territory". Exactly which properties of the fundamentals allow it is an interesting question to which we do not yet know the answer. We can speculate in terms of the laws of physics that are part of our map; it probably has something to do with "locality". Likewise, we can't exactly specify the principle of what it means to "be a brain" or "have a map representing a territory" in terms of configurations of fundamentals. But we can understand the principle that every referent of our map is some configuration of fundamentals.

Dagon (Jun 10, 2023)

So… when do we get to the place where we aren't using objects to explain how the impression of objects arises?

You're very clever, young man, very clever. But it's objects all the way down.

What you call "reification", I call "modeling". This is what it feels like to be an algorithm (at least, THIS algorithm - who knows about others?) which performs compression-of-experience and predictive-description-based decision-making. On many scales, it does seem to work to do the rough math of movement, behavior, and future local configurations based on aggregates and somewhat arbitrary clustering of perceptions.

Brains, to the extent that they are useful to think of as a class (another non-real concept) of things (unreal, as you say), are local configurations of universe-stuff that do this compression and prediction.  When executing, they are processing information at a level different from the underlying quantum reality.  

The universe IS (as far as any subset of the universe, like a brain, can tell) swirling magical reality fluid. A thing is its own best model, and this includes both "the universe" and "the portion of the universe in any given entity's lightcone". Brains are kind of amazing (aka magical, aka sufficiently advanced technology) in that they make up models and stories which seem to make sense of at least part of what they experience, at some levels of abstraction. Note that they ALSO hallucinate abstractions of configurations that don't occur (counterfactuals) or are outside their lightcones (for MWI, FAR outside).

I think reductionism is a very useful model for many (MANY!) levels of abstraction, but I have to admit to believing (also a synonym of "modeling") that when taken far enough, there will be a mysterianism underlayer, where it's un-measurably and un-modelably complex. It's unknown how far down that is: our current abilities probably show us mysteries that can at some point become models, even if there are true un-modelable layers much deeper. Scientific progress consists of peeling the onion one more level, but we will ALWAYS find mystery at the next level, which we will scratch at until we either dissolve the mystery into our models, or ... something. It's unknown even whether we'll be able to know when we reach "the bottom".

[ Epistemic status: I think there's some valid modeling in what I say, but I don't claim it's complete nor perfectly coherent.  ]

shminux (Jun 10, 2023)

What you are gesturing at is how to identify an embedded agent in "the real world", as far as I can tell. Then you keep asking deeper questions, like "what are these laws of physics". 

So, let's start from the basic assumptions, let me know if they make sense to you.

Assumption: there is a universe of which we are a small part (hence "embedded agency"). Basically "something exists". 

Assumption: from the point of view of Laplace's demon, we are identifiable and persistent features of the world, not Boltzmann brains.

Note that at this point we have not assumed the existence of "time" or any other familiar abstraction in our mental map. Just "externally identifiable structures".

Also note that the world might be a completely random instance of whatever. A bunch of rocks. You can even find loose patterns in white noise if you look hard enough. I wrote a couple of posts about it some years ago:

https://www.lesswrong.com/posts/aCuahwMSvbAsToK22/physics-has-laws-the-universe-might-not

https://www.lesswrong.com/posts/2FZxTKTAtDs2bnfCh/order-from-randomness-ordering-the-universe-of-random

Now, if we assume that this world contains something externally identifiable as "agents", it implies another very strong assumption: internal predictability. This is usually glossed over, but this point is crucial, and restricts possible worlds quite a bit. Again, it is a very very very strong constraint. The world must be such that some relevant large scale features of it can be found inside an incredibly tiny part of it. I do not necessarily mean "spatially tiny". We have not assumed the existence of space yet. Just that there are small subsets of the whole thing that have identifiable (at least to a Laplace demon) features of the whole thing.

Now, given this assumption, you can talk about the world being usefully lossily compressible, to an extremely large degree. "Usefully" here means that the compressed image of the world can fit into an agent and can be traced to be "used" by the agent. Actually the meaning of "usefully" and "used" is a separate can of worms deserving much more than a couple of sentences.

Now, at this point we get "physical laws": the distillation of the compression algorithm that fits into the agent. For some agents (bacteria) it is "identify sugar gradients and eat your way up the gradient". For others it is "quantum field theory that predicts the mass of the Higgs boson, given what we can measure".

This is a crucial point. The world does not come with "matter" and "laws" separately. Physical laws are agent-size distillations of the world, and they are compatible but not unique, and depend on the agent. 

To recap, the chain of reasoning about the world goes like this: Something exists -> we exist in this "something" -> for us to persist the "something" must be compressible -> these compression algorithms are physical laws (and sometimes moral laws, or legal laws).
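Here is a toy sketch of that compressibility claim. Compressed size stands in crudely for "the size of the distillation an agent would carry", and both 10,000-byte "world histories" below are made up:

```python
import os
import zlib

# A "lawful" world history can be regenerated from a description far
# smaller than the history itself; zlib's output length is a crude
# proxy for that description's length.
lawful = bytes((i * i) % 256 for i in range(10_000))  # follows a rule
random = os.urandom(10_000)                           # no rule at all

print(len(zlib.compress(lawful)))  # small: the "law" fits in an agent
print(len(zlib.compress(random)))  # ~10,000: nothing to distill
```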

So when you say "brains exist as reifications of brains", what you probably mean is "the world is predictable from the inside".

TAG:

This is a crucial point. The world does not come with "matter" and "laws" separately.

In your vocabulary, it comes with matter and compressibility.

shminux:
Sort of. It depends on whether every random instance has useful approximate patterns. I do not know the answer. One of my linked posts points at how order can be found in complete randomness.

sjeffh (Jun 10, 2023)

I believe the Tractatus Logico-Philosophicus addresses this question. In a very important sense, philosophy and brain analysis pull the rug out from under their own feet. Wittgenstein provides lots of interesting ideas but famously concludes with: "What we cannot speak about we must pass over in silence."

My informal chain of reasoning is as follows:

  1. How could this paradoxical "thing before things" be described in human language at all?
  2. Rather, it must be pointed to.
  3. Presumably some pointers would be more effective than others, but then what is the metric that determines which ones are more effective?
  4. That must also be a "thing before things."
  5. Even if we cannot reify the "thing before things" or the metric for it directly, we may still have hope of pointing to it, since we can reify enough things around it until we have an extremely reliable pointer.

In my opinion, what this means in practice is that the best pointer to the question is basically every philosophical and religious response of humanity to Truth combined into a single whole. This is essentially a holistic combination of all pointers along with a metric for going towards better pointers.

In any case, I am pretty sure the Tractatus Logico-Philosophicus answers this question well, so maybe look there?

Ah, cool, this sounds like maybe the right kind of thing. Your step 4 particularly jumps out at me: it highlights the self-reference in the answer, which makes it sound plausible as a path to an answer.

Thank you!

Gordon Seidoh Worley (Jun 10, 2023)

I like some of the other answers, but they aren't phrased how I would explain it, so I'll add my own. This is something like the cybernetics answer to your question.

The world is made of "stuff". This "stuff" is a mixed soup that has no inherent divisions. But then some of this stuff gets organized into negative feedback processes, and the telos of these feedback processes creates meaning when they extract information from sensors. This information extraction tells this from that in order for the stuff thus organized to do things. From this we get the basis of what we call minds: stuff arranged to perform negative feedback that generates information about the world via observation that models the world to change behavior. Stack enough of these minds up and you get interesting stuff like plants and animals and machines.

So the brain, though kind of weird to think about, is just this kind of control system, or rather an agglomeration of control systems, that is able to do things like map the territory it finds itself in.
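A minimal sketch of one such control system, just to pin down the pattern; the set point, gain, and heat-leak constant are arbitrary toy values:

```python
# A negative-feedback unit: a sensor reading, a comparison against a
# set point, and a correction that shrinks the error.

def thermostat(temperature, set_point=20.0, gain=0.5):
    """Return a heating (+) or cooling (-) correction."""
    error = set_point - temperature   # the sensor "tells this from that"
    return gain * error               # act so as to shrink the error

temp = 12.0
for step in range(10):
    temp += thermostat(temp) - 0.3    # 0.3 degrees leak out each step
    print(f"step {step}: temp = {temp:.2f}")
# The loop settles near 19.4: the organized stuff now tracks one
# variable it has carved out of the otherwise undifferentiated soup.
```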

I try to cover this topic in some depth in a chapter of my in-progress book.

localdeity (Jun 10, 2023)

Regarding the first part, here's what comes to mind: Long before brains evolved any higher capacities (for "conscious", "self-reflective", etc. thought), they evolved to make their hosts respond to situations in "evolutionarily useful" ways.  If you see food, some set of neurons fire and there's one group of responses; if you see a predator, a different set of neurons fire.

Then you might define "food (as perceived by this organism)" to be "what tends to make this set of neurons fire (when light reflects off it (for certain ranges of light) and reaches the eyes of this organism)".  Boundary conditions (like something having a color that's on the edge of what is recognized as food) are probably resolved "stochastically": whether something that's near the border of "food" actually fires the "food" neurons probably depends significantly on silly little environmental factors that normally don't make a difference; we tend to call this "random" and say that this almost-food thing has a 30% chance of making the "food" neurons fire.
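A toy sketch of that stochastic boundary; the prototype vector, threshold, and noise scale are all made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Food neurons" fire when a stimulus is similar enough to a learned
# prototype; near the boundary, small environmental noise decides.
PROTOTYPE = np.array([1.0, 0.8, 0.2])  # feature vector for "food"
THRESHOLD = 0.88                       # how food-like counts as food

def food_neurons_fire(stimulus):
    similarity = stimulus @ PROTOTYPE / (
        np.linalg.norm(stimulus) * np.linalg.norm(PROTOTYPE))
    noise = rng.normal(scale=0.02)     # silly little environmental factors
    return similarity + noise > THRESHOLD

borderline = np.array([1.0, 0.3, 0.6])  # almost-food
fired = sum(food_neurons_fire(borderline) for _ in range(1000))
print(f"fired on about {fired / 10:.0f}% of encounters")  # roughly 30%
```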

There probably are some self-reinforcing things that happen, to try[1] to make the neurons resolve one way or the other quickly, and to some extent quick resolution is more important than accuracy.  (See Buridan's principle: "A discrete decision based upon an input having a continuous range of values cannot [always] be made within a bounded length of time.")  Also, extremely rare situations are unimportant, evolutionarily speaking, so "the API does not specify the consequences" for exactly how the brain will respond to strange and contrived inputs.

("This set of neurons fires" is not a perfectly well-defined and uniform phenomenon either.  But that doesn't prevent evolution from successfully making organisms that make it happen.)

Before brains (and alongside brains), organisms could adapt in other ways.  I think the advantage of brains is that they increase your options, specifically by letting you choose and execute complex sequences of muscular responses to situations in a relatively cheap and sensitive way, compared to rigging up Rube Goldberg macroscopic-physical-event machines that could execute the same responses.

Having a brain with different groups of neurons that execute different responses, and having certain groups fire in response to certain kinds of situations, seems like a plausibly useful way to organize the brain.  It would mean that, when fine-tuning how group X of neurons responds to situation Y, you don't have to worry about what impacts your changes might have in completely different situations ABC that don't cause group X to fire.

I suspect language was ultimately built on top of the above.  First you have groups of organisms that recognize certain things (i.e. they have certain groups of neurons that fire in response to perceiving something in the range of that thing) and respond in predictable ways; then you have organisms that notice the predictable behavior of other organisms, and develop responses to that; then you have organisms noticing that others are responding to their behavior, and doing certain things for the sole purpose[1] of signaling others to respond.

Learning plus parent-child stuff might be important here.  If your helpless baby responds (by crying) in different ways to different problems, and you notice this and learn the association, then you can do better at helping your baby.

Anyway, I think that at least the original notion of "a thing that I recognize to be an X" is ultimately derived from "a group of neurons that fire (reasonably reliably) when sensory input from something sufficiently like an X enters the brain".  Originally, the neuronal connections (and the concepts we might say they represented) were probably mostly hardcoded by DNA; later they probably developed a lot of "run-time configuration" (i.e. the DNA lays out processes for having the organism learn things, ranging from "what food looks like" [and having those neurons link into the hardcoded food circuit], through learning to associate mostly-arbitrary "language" tokens to concepts that existing neuron-groups recognize, to having general-purpose hardware for describing and pondering arbitrary new concepts).  But I suspect that the underlying "concept X <--> a group of neurons that fires in response to perceiving something like X, which gates the organism's responses to X" organization principle remains mostly intact.

[1] Anthropomorphic-language shorthand for the outputs of evolutionary selection.

romeostevensit (Jun 10, 2023)

There seem to be sharp negentropy gradients like cell walls, so I think having things as a structure in our map makes sense as a correspondence to structures out there. Nervous systems (and brains) seem to be loci of intense information processing.

1 comment

So… when do we get to the place where we aren't using objects to explain how the impression of objects arises?

I'm not sure about this, but David Chapman's discussion of Boundaries, objects, and connections seems tangentially relevant; I'm curious to know your reactions to it. Quoting the part that seems relevant:

The world is not objectively divisible into separate objects. Boundaries are, roughly, perceptual illusions, created by our brains. Moreover, which boundaries we see depends on what we are doing—on our purposes.

However, boundaries are not just arbitrary human creations. The world is immensely diverse. Some bits of it stick together much more than other bits. Some bits connect with each other in many ways besides just stickiness. The world is, in other words, patterned as well as nebulous.

Therefore, objects, boundaries, and connections are co-created by ourselves and the world in dynamic interaction.
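A small sketch of that last point, with clustering standing in for boundary-drawing; the points and the two distance scales are arbitrary choices:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# The same "world" of points yields different objects at different
# (purpose-dependent) scales of stickiness.
world = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1],  # a sticky clump
                  [5.0, 5.0], [5.1, 5.2],              # another clump
                  [9.0, 0.0]])                         # a loner

tree = linkage(world, method='single')
for scale in (0.5, 8.0):
    labels = fcluster(tree, t=scale, criterion='distance')
    print(f"scale {scale}: object labels = {labels}")
# At the fine scale there are three "things"; at the coarse scale the
# whole world merges into one. Neither carving is the true one.
```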