A few weeks ago, Vaniver gave a talk discussing meaning and meaningfulness. Vaniver had some particular thing he was trying to impart. I am not sure I got the thing he intended, but what I got was interesting. Here is my bad summary of some things I got from the talk, and some of the discussion after the talk (in particular from Alex Ray). No promises that either of them endorse this.
Epistemic status: I am not very confident this is the right frame, but it seemed at least like an interesting pointer to the right frame.
WTF is Meaning™?
Humans seem to go around asking questions like "What makes life meaningful?", "What is 'The Meaning of Life'?", "What is my purpose?", "What is the point of it all?"
What is the type-signature of a "Meaning", such that we'd recognize one if we saw it?
When asking a question like this, it's easy to get lost in a floating series of thought-nodes that don't actually connect to reality. A good rationalist habit around questions like this is to ask: "Do we understand this 'meaning' concept well enough to implement it in a robot? Could a robot find things meaningful? Is there a reason we'd want robots to find things meaningful? What sort of algorithms end up asking 'what is the meaning of life?'"
Here is a partial, possible answer to that question.
Imagine a StarCraft-playing robot.
Compared to humans, StarCraftBot has a fairly straightforward job: win games of StarCraft. It plays a game, and then it either wins or loses, receiving a boolean signal that it might propagate back through a complex neural net. Humans don't have this luxury – we get a confused jumble of signals that were proxies for what evolution actually cared about when it programmed us. We get hungry, or horny, or feelings of satisfaction that vaguely correlate with reproducing our genes.
StarCraftBot has a clearer sense of "what is my purpose."
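The contrast above can be sketched as a toy. This is a minimal illustration, not real ML or biology – every signal name and number here is invented:

```python
# Toy sketch: one clean terminal reward vs. a jumble of evolved proxies.
# All names and magnitudes are invented for illustration.

def starcraft_bot_reward(game_won: bool) -> float:
    """One boolean signal to propagate back through the network."""
    return 1.0 if game_won else 0.0

def human_proxy_signals(state: dict) -> dict:
    """Evolution never hands us 'fitness' directly -- only proxies
    that vaguely correlate with it."""
    return {
        "hunger": -1.0 if state.get("calories", 0) < 1500 else 0.0,
        "satisfaction": 0.5 if state.get("tribe_flourishing") else 0.0,
        "comfort": 1.0 if state.get("sheltered") else 0.0,
    }

print(starcraft_bot_reward(True))  # 1.0
# The human case yields several numbers with no given way to combine them:
print(human_proxy_signals({"calories": 500, "sheltered": True}))
```

The point of the sketch: the bot gets a single number to optimize, while the human gets a dict of signals and has to invent the framework that trades them off against each other.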
Nonetheless, as StarCraftBot goes about "trying to get good at StarCraft", it has to make sense of a fairly complex world. Reality is high dimensional, even the simplified reality of the StarCraft universe. It has to make lots of choices, and there's a huge number of variables that might possibly be relevant.
It might need to invent concepts like "an economy", "the early game", "micro", "units", "enemy", and "advantage/disadvantage" (disclosure: I am neither an ML researcher nor a StarCraft pro). Not only that, but it needs some way to navigate when to apply one of those concepts vs. another. Sometimes, it might need to move up or down a ladder of abstraction.
StarCraftBot has had the Meaning of Life spelled out for it, but it still needs a complex ontology for navigating how to apply that meaningfulness. And as it constructs that ontological framework for itself, it may sometimes find itself confused about "What is a unit? Are units and buildings meaningfully different? What principles underlie a thriving economy?"
Now, compare this to humans. We have a cluster of signals that relate to surviving, and reproducing, and ensuring our tribe survives and flourishes. We end up having to do some kind of two-way process, where we figure out...
- Specific things like: "Okay, what is a tiger? What is food? What is my family? What is 'being a craftsman' or 'being a hunter'?"
- Higher-order things like: "What is the point of all of this? How do all of these things tie together? If I had to trade off my survival, or my children's, or my tribe's, which would I do? What is my ultimate goal?"
A thing that some religions and cultures do is tie all these things together into a single narrative, with multiple overlapping tiers. You have goals relating to your own personal development, and to raising a family, and to having a role in your tribe that helps it flourish as a group, and (in some cases) to some higher purpose of 'serve god' or 'serve the ancestors' or 'protect the culture.'
The idea here is something like: "Have a high-level framework for navigating various tactical and strategic goals, coherent enough that when you move from one domain to another, you don't have to spend too much time re-orienting or resolving contradictions between them. Each strategic frame allows you to filter out tons of extraneous detail and focus on the decision-at-hand."
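The tiered tie-together can be sketched as a toy lookup, where every goal name is invented for illustration. A coherent hierarchy lets any tactical goal answer "what is the point?" by pointing one level up:

```python
# Toy sketch (all goal names invented): a coherent goal hierarchy,
# where each goal's "point" is the goal one level above it.

PURPOSE_OF = {
    "practice at the forge": "master the craft",
    "master the craft": "provide for the family",
    "provide for the family": "help the tribe flourish",
}

def meaning_chain(goal: str) -> list:
    # Follow "what is this for?" until we reach a terminal purpose.
    chain = [goal]
    while goal in PURPOSE_OF:
        goal = PURPOSE_OF[goal]
        chain.append(goal)
    return chain

print(meaning_chain("practice at the forge"))
# ['practice at the forge', 'master the craft',
#  'provide for the family', 'help the tribe flourish']
```

When the chain terminates cleanly at a single top node, switching between tactical goals doesn't require re-deriving the whole framework; a dangling or contradictory chain is roughly what "what is the point of all this?" feels like.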
Hammers, Relationships and Fittingness
Meanwhile, another concept that might bear on "Why do humans sit around saying 'what does it all mean!?'" is fittingness.
Say you have a hammer.
The hammer has a shape – a long handle, a flat hammer-part, and a curved hook thingy. There are many different ways you could interact with the hammer. You could kick it with your feet. You could grab it by the curved hook thingy. You could grab it by the handle. You could try to eat it.
How do you relate to the hammer? It's not enough to know it exists. If a chimpanzee were to find a hammer, they might need some sense of "what is the hammer for?". Once they realize they can bash walnuts open with it, or maybe bash in the skull of a rival chimpanzee, they might get the sense of "oh, the thing I'm supposed to do here is grab the handle, and swing."
Later, if their concept-schemas come to include nails and timber and houses, they might think "ohhhhh, this has a more specific, interesting purpose: hammering nails into wood to build things."
Later still, they might realize "ohhhhhhhhhh, this weird hook thing on the end is for pulling nails out." This involves using the hammer a different way than they might have previously.
Hammers vs Fathers
Okay. So, you might come upon a hammer and say: "I have this weird-shaped-object, I could fit myself around it in various ways. I could try to eat it. It's unclear how to fit it into my hand, and it's unclear how to fit it against the other parts of my environment. But after fiddling around a bunch, it seems like this thing has a purpose. It can bash walnuts or skulls or nails."
The process of figuring that out is a mental motion some people need to make sometimes.
Another mental motion people make sometimes is to look around at their tribe, their parents, their children, their day-to-day activities, and to ask questions like "how do I fit in here?".
Say you have a father. There are a bunch of ways you can interact with your father. You can poke them on the nose. You can cry at them. You can ask them philosophical questions. You can silently follow their instructions. You can grab them and shake them and yell "Why don't you understand me!!?".
Which of those is helpful depends on your goals, and what stage of life you're at, and what sort of tribe you live in (if any).
If you are a baby, "poke your father on the nose" is in some sense what you're supposed to be doing. You're a baby. Your job is to learn basic motor skills and crudely mimic social things going on around you and slowly bootstrap yourself into personhood.
If you're in some medieval cultures, and you are male and your father is a blacksmith, then your culture (and correspondingly, your father's personality), might give you a particular set of affordances: follow their instructions about blacksmithing and learn to be a blacksmith. [citation needed]. Learn some vaguely defined "how to be a man" things.
You can say to your dad "I wanna be a poet" and ask him questions about poetry, but in this case that probably won't go very well because you are a medieval peasant and society around you does not provide much opportunity to learn poetry, nor do anything with it. [citation needed again]
You can grab your father and shake him and say "why don't you understand me!!!?". Like the chimpanzee holding a hammer by the wrong end, mashing walnuts with the wooden handle, that sorta kinda works, but it is probably not the best way to accomplish your goals.
As you grow up, the culture around you might also offer you particular affordances and not others. You have a strong affordance for becoming a blacksmith. I don't really know how most medieval societies worked, but maybe you have other affordances like "become a tailor if for some reason you are drawn to that" or "join the priesthood" or "become a brigand" or "open an inn." Meanwhile you can "participate in tribal rituals" and "help raise barns when that needs doing", or you can ignore people and stick to your blacksmith shop, being kinda antisocial.
Those might lead you to have different relationships with your father.
Analogy or Literal?
It's currently unclear to me if the questions "how do I relate to my hammer?" and "how do I relate to my father?" are cute analogies for each other, or if they are just literally the same mental motion applied to very different phenomena.
I'm currently leaning into "they are basically the same thing, on some level." People and hammers and tribes are pretty different, and they have very different knobs you can fiddle with. But, maybe, the fundamental operation is the same: you have an interface with reality. You have goals. You have a huge amount of potential details to think about. You can carve the interface into natural joints that make it easier to reason about and achieve your goals. You fiddle around with things, either physically in reality or in your purely mental world. You figure out what ways of interacting with stuff actually accomplish goals.
A schema for how to relate to your father might seem limiting. But, it is helpful because reality is absurdly complex, and you have limited compute for reasoning about what to do. It is helpful to have some kind of schema for relating to your father, whether it's a schema society provides you, or one you construct for yourself.
Having a mutually understood relationship prunes out the vast amount of options and extraneous details, down to something manageable. This is helpful for your father, and helpful for you.
Relating and Meaning
So, in summary, here is a stab at what meaning and relating might be, in terms that might actually be (ahem) meaningful if you were building a robot from scratch.
A relationship might be thought of as "a set of schemas for interacting with something, that let you achieve your goals." Your relationship with a hammer might be simple and unidirectional. Your relationship with a human might be much more complex, because both of you have potential actions that include modeling each other, thinking strategically, cooperating or defecting in different ways over time, etc. This creates a social fabric, with a weirder set of rules for how to interact with it.
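The "set of schemas" idea can be sketched as a toy lookup table; every schema and goal name here is invented for illustration, and the real thing would obviously be far richer:

```python
# Toy sketch: a relationship as a set of goal-indexed schemas.
# The schema prunes a huge action space down to one manageable move.
# All schema contents are invented.

SCHEMAS = {
    "hammer": {
        "drive nail": "grip the handle and swing the flat face",
        "pull nail": "hook the claw under the head and lever",
    },
    "father": {
        "learn the trade": "follow instructions and imitate",
        "be understood": "ask questions and share your plans",
    },
}

def interact(thing: str, goal: str) -> str:
    # No schema yet? Then you're back to fiddling around.
    return SCHEMAS.get(thing, {}).get(goal, "fiddle around and see what works")

print(interact("hammer", "pull nail"))     # hook the claw under the head and lever
print(interact("father", "win at chess"))  # fiddle around and see what works
```

Note that hammers and fathers get the same data structure here, which is the "same mental motion" claim from the previous section in miniature; the difference is only in how complicated (and bidirectional) the father's entry would have to be.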
Meaning is... okay geez I got to the end of this essay and I'm still not sure I can concisely describe "Meaning" rather than vaguely gesturing at it.
The dictionary definition of "meaning" that comes up when I google it is about words, and what words mean. I think this is relevant to questions like "what does it all mean?" or "what is the meaning of life?", but a few steps removed. When I say "what do the letters H-O-T mean?" I'm asking about the correspondence between an abstract symbol, and a particular Thing In Reality (in this case, the concept of being high-temperature).
When I ask "What does my job mean?", or "what does my relationship with my father mean?" or "what is the meaning of life?", I'm asking "how do my high level strategic goals correspond to each other, in a way that is consistent, minimizes overhead when I shift tasks, and allows me to confidently filter out irrelevant details?"
While typing this closing summary, I think "Meaningmaking" might be a subtype of "Relating". If Relating is fiddling-around-with or reflecting-on a thing, until you understand how to interact with it, then I think maybe "Meaningmaking" is fiddling around with your goals and high level strategies until you feel like you have a firm grasp on how to interact with them.
...
Anyway, I am still a bit confused about all this but those were some thoughts on Meaning and Relating. I am interested in other people's thoughts.
Thanks for the link to the video. It's short and pretty concise, with decent production quality, and frankly I don't disagree with most of it. It seems like in many ways this idea of an extended mindspace I'm getting from the video relates quite directly to fields like Social Factors Engineering and Industrial Psychology, both of which I thought of pursuing in school, as I have interests in Architecture, Design, Industrial Design and Psychology. Environmental Press is the psychological effect your environment has on you, and I'm absolutely convinced of the importance of designing with this idea in mind.
Where I start to have questions is at the point where the narrator posits the idea that, fundamentally, having a computer in your mind is no different from sitting at one. To show you why, I'll give a couple of examples. First, imagine a person, Steve, who has a computer in his head that lets him just think about surfing the internet, and the computer in his head just makes it happen. Cool, if not a little scary to contemplate.
Next, imagine Steve is sitting at a computer. He can't just think about surfing the internet, at least at this point. He has to use a mouse, a keyboard, a screen, a computer with internet capabilities and a subscription to some sort of Internet Provider, as well as his hands and his eyes, all of which require the use of motor neurons. Given all these things, he can surf the internet.
However, consider that he gets into an accident and loses both of his arms. Now, he may have all that other stuff he had before, but he has no way to turn the computer on, or use the mouse or keyboard. He can use voice activation, but this is a fundamentally different way of utilizing our concepts of language to carry out a task. With hands, he uses his fingers to type on a keyboard, and must process the thoughts in his head in a different way than if he can just use his vocal cords and voice to command the computer to do what he wants to do. The parts of the brain he's utilizing for each task are different.
Now, consider that Mary might keep notes in a notebook which she relies on heavily to do her job. Suppose she wants to keep it secret, so she places it on top of a tall shelf that can only be reached with a small ladder. What if someone steals the ladder? The notebook is still there in the room, it's just unreachable on the top shelf, but she can't access the information in it because she requires the ability to look at the pages.
It's not just the creation of signs and symbols in the outside world which create meaning, it's also the ability to interpret them, which requires navigating an outside world physically in order to put our bodies in a position to decode them. For Steve it means knowing the alphabet so he can use a keyboard productively, and having the physical hardware (his arms and hands) in order to manipulate the proper tools to access the information online he wants. For Mary, it means being able to have her notebook in front of her so that she can encode meaning into it by writing into it, and decoding it by reading it with her eyes. The internal experience of humans requires our physical bodies to navigate an outside world in order to meet our needs for survival. Having a computer in our heads doesn't.
"I think this is what EY was getting at when he wrote about us being 'supported by the time in which we live'."
Who is EY? I don't know what this is from.
"The reason we find it harder by default to see how the 'external' and 'internal' are really related is, I think, a matter of habit."
I tend to think of this as a Western thing. I've been studying and practicing Buddhism for a couple of decades, and have found it difficult to relate to a lot of Western culture because of it. As a Westerner, I struggle with my beliefs about individuality, responsibility, and identity because of my Buddhist practice and training. Westerners tend to be more ego-centric, with strong senses of individual identity and personality, and I think this isn't as much of an issue for Easterners, who tend to be more family- and community-focused, less individually concerned with personal issues, for lack of a better phrase. The fact that their countries and cultures tend to be much more ethnically homogeneous just allows them to create a much stronger sense of cultural identity than most Americans, and Westerners in general, I think.
What I mean by that is that I think, because of their beliefs and practices, it's much easier for Easterners to see their place in the broader world, and to think as a more well-directed social group as a whole. Plus, for the Japanese, the practice and belief of Shinto creates a worldview that imbues everything with a spirit of sorts, and Buddhism and Chinese Taoism really promote the idea that humans are intimately tied to the natural world. In mostly Western countries with a strong Judeo-Christian culture, there is a long history of struggle between the evils of the natural world and the virtues of God's world. The civilizing of humanity involves the rejection of our animal natures, and our exit from the mountains, valleys and woods into the cities, far removed from nature.
In fact, Taoism is, from what I understand, practically the opposite of this idea, in that it is the civilizing process of society which ruins human nature. So having a foot in both hemispheres often presents me with just as many challenges on a day-to-day basis as advantages, as I admire people with strong senses of self and direction, people who are often outspoken and who become successful because they only do what they want to do. But it's my belief in the interconnectedness of everything which makes that difficult for me to do.
Maybe even more to the point, theoretical physics and Buddhism tend to work well together as they are both reliant on a strong rational viewpoint of reality which believes that reality is an illusion, and it is only through study, practice and contemplation of it with the right tools that allows people to catch glimpses of the world without the illusion.
So long story short, at least half the time it's harder to see myself as an individual with the right to pursue my best interests - even, and especially, at the expense of others at times - than it is for me to see myself as part of an interconnected and interdependent whole. That's often just as true for the natural world as it is the human one. Plus, I'm not averse to the belief in some sort of global planetary (if not universal) consciousness, like Gaia or something along those lines, of which we are all a part, even if we don't recognize ourselves as such. In many ways I think that is a very real possibility, and that it's this concept which is hidden behind the veil of illusion that Buddhism and science seek to pull back. In reality, it is the concept of 'I' or 'me' that we have which is the illusion.
No desire to become a monk though, so I try to enjoy being a lay Buddhist, while holding onto my Buddhist beliefs. Many of these views are backed up by my Buddhist study, and the theory of the illusion of self tends to play well with many of the ideas of physics and how they add up to 'consciousness' although the idea of 'self' is somewhat less supported. There is no 'self' in reality, only the illusion of self, but in Western culture this idea is very difficult to stomach. "Of course there is a self!" "I'm me!"
But the fact we can't find it is sort of a good indication that it's not what we think it is. The illusion of self is what causes suffering, life is suffering, there is cessation from suffering, it is the study and practice of the Dharma, so on and so on. The reduction of suffering is a pretty consistent theme in most worthwhile human endeavors, but it seems to me that the race to create artificial life is bound to produce suffering. I'm of the opinion that heading that off at the pass is a pretty noble cause, which is why I've been thinking and trying to write about ethics in technology for a while.
I think this Western bias of creating AI which is self aware, and the search for 'consciousness' and 'self' in order to replicate it artificially is causing a lot of suffering for ML and AI researchers already anyway! lol
But I could be wrong about a lot of this. I can certainly see where this idea of Expanded Mind lends itself to valuing the creation of self-driving cars and smart cities. But I have my reasons for why I think that at this point we are still making the same cognitive mistakes which have led to the creation of so many of the world's problems already. And without clearing those issues up first, we really are harnessing the raw, awesome power of distributed computing and neural networks to miss the target by an astronomical distance instead of just by a mile this time.
Not to be fatalistic about it, of course, but I really hope I can put my ideas into words and pictures well enough to bring my concerns to the right people. Like I said in an earlier post, I really hoped to start a non-profit so I could address these concerns in a more rigorous and targeted way, but I've got no experience as a leader or in running a business. If I can at least make a start on some of it by starting a conversation which can create some influence, that's cool. However, I am tired of being a starving artist. In the best of all possible worlds, I'd be rich and wouldn't really care about these things, but I'd settle for being able to make a living working on trying to solve some of the world's problems.
Heading off the coming robot revolution is a little ways off I think, though. Hopefully! :)
I'm a little curious about your background, and whether you were the one who produced that video? If so, kudos; video production isn't easy.