I recently attended a discussion group whose topic, at that session, was Death. It brought out deep emotions. I think that of all the Silicon Valley lunches I've ever attended, this one was the most honest; people talked about the death of family, the death of friends, what they thought about their own deaths. People really listened to each other. I wish I knew how to reproduce those conditions reliably.
I was the only transhumanist present, and I was extremely careful not to be obnoxious about it. ("A fanatic is someone who can't change his mind and won't change the subject." I endeavor to at least be capable of changing the subject.) Unsurprisingly, people talked about the meaning that death gives to life, or how death is truly a blessing in disguise. But I did, very cautiously, explain that transhumanists are generally positive on life but thumbs down on death.
Afterward, several people came up to me and told me I was very "deep". Well, yes, I am, but this got me thinking about what makes people seem deep.
At one point in the discussion, a woman said that thinking about death led her to be nice to people because, who knows, she might not see them again. "When I have a nice thing to say about someone," she said, "now I say it to them right away, instead of waiting."
"That is a beautiful thought," I said, "and even if someday the threat of death is lifted from you, I hope you will keep on doing it—"
Afterward, this woman was one of the people who told me I was deep.
At another point in the discussion, a man spoke of some benefit X of death, I don't recall exactly what. And I said: "You know, given human nature, if people got hit on the head with a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing. But if you took someone who wasn't being hit on the head with a baseball bat, and you asked them if they wanted it, they would say no. I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no."
Afterward, this man told me I was deep.
Correlation is not causality. Maybe I was just speaking in a deep voice that day, and so sounded wise.
But my suspicion is that I came across as "deep" because I coherently violated the cached pattern for "deep wisdom" in a way that made immediate sense.
There's a stereotype of Deep Wisdom. Death: complete the pattern: "Death gives meaning to life." Everyone knows this standard Deeply Wise response. And so it takes on some of the characteristics of an applause light. If you say it, people may nod along, because the brain completes the pattern and they know they're supposed to nod. They may even say "What deep wisdom!", perhaps in the hope of being thought deep themselves. But they will not be surprised; they will not have heard anything outside the box; they will not have heard anything they could not have thought of for themselves. One might call it belief in wisdom—the thought is labeled "deeply wise", and it's the completed standard pattern for "deep wisdom", but it carries no experience of insight.
People who try to seem Deeply Wise often end up seeming hollow, echoing as it were, because they're trying to seem Deeply Wise instead of optimizing.
How much thinking did I need to do, in the course of seeming deep? Human brains only run at 100Hz and I responded in realtime, so most of the work must have been precomputed. The part I experienced as effortful was picking a response understandable in one inferential step and then phrasing it for maximum impact.
Philosophically, nearly all of my work was already done. Complete the pattern: Existing condition X is really justified because it has benefit Y: "Naturalistic fallacy?" / "Status quo bias?" / "Could we get Y without X?" / "If we had never even heard of X before, would we voluntarily take it on to get Y?" I think it's fair to say that I execute these thought-patterns at around the same level of automaticity as I breathe. After all, most of human thought has to be cache lookups if the brain is to work at all.
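The "cache lookup" metaphor is the programmer's, and it is worth making literal. Here is a minimal sketch in Python (the function names are mine, purely illustrative): a memoized function does real work only on a cache miss, and every repeat of the question gets the stored answer back instantly, with no thinking attached.

```python
from functools import lru_cache

def think_hard_about(prompt: str) -> str:
    # Stand-in for genuine, effortful reasoning.
    return f"a freshly considered reply to {prompt!r}"

@lru_cache(maxsize=None)
def respond(prompt: str) -> str:
    """Return the cached response if one exists; otherwise actually think."""
    return think_hard_about(prompt)

respond("Death")  # cache miss: real work happens here
respond("Death")  # cache hit: the stored pattern comes straight back
```

The second call is the conversational equivalent of "Death gives meaning to life": retrieved, not derived.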
And I already held to the developed philosophy of transhumanism. Transhumanism also has cached thoughts about death. Death: complete the pattern: "Death is a pointless tragedy which people rationalize." This was a nonstandard cache, one with which my listeners were unfamiliar. I had several opportunities to use nonstandard cache, and because they were all part of the developed philosophy of transhumanism, they all visibly belonged to the same theme. This made me seem coherent, as well as original.
I suspect this is one reason Eastern philosophy seems deep to Westerners—it has nonstandard but coherent cache for Deep Wisdom. Symmetrically, in works of Japanese fiction, one sometimes finds Christians depicted as repositories of deep wisdom and/or mystical secrets. (And sometimes not.)
If I recall correctly, an economist once remarked that popular audiences are so unfamiliar with standard economics that, when he was called upon to make a television appearance, he just needed to repeat back Econ 101 in order to sound like a brilliantly original thinker.
Also crucial was that my listeners could see immediately that my reply made sense. They might or might not have agreed with the thought, but it was not a complete non-sequitur unto them. I know transhumanists who are unable to seem deep because they are unable to appreciate what their listener does not already know. If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener's current mental state. That's just the way it is.
To seem deep, study nonstandard philosophies. Seek out discussions on topics that will give you a chance to appear deep. Do your philosophical thinking in advance, so you can concentrate on explaining well. Above all, practice staying within the one-inferential-step bound.
To be deep, think for yourself about "wise" or important or emotionally fraught topics. Thinking for yourself isn't the same as coming up with an unusual answer. It does mean seeing for yourself, rather than letting your brain complete the pattern. If you don't stop at the first answer, and cast out replies that seem vaguely unsatisfactory, in time your thoughts will form a coherent whole, flowing from the single source of yourself, rather than being fragmentary repetitions of other people's conclusions.
I can only speak for myself, but I think most of us are defining "immortality" as "living for at least a million years" rather than Greg Egan's "Not dying after a very long time; just not dying, ever."
Now I certainly have no moral objection to the latter state of affairs. As I sometimes like to tell people, "I want to live one more day. Tomorrow I will still want to live one more day. Therefore I want to live forever, proof by induction on the positive integers."
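For anyone who wants the joke unpacked: the "proof" is just the induction schema with a tendentiously chosen predicate (the symbol W is mine):

\[
W(1) \,\wedge\, \forall n\,\big(W(n) \rightarrow W(n+1)\big) \;\Rightarrow\; \forall n\, W(n)
\]

where W(n) reads "on day n, I want to live one more day." The catch is that induction only delivers the conclusion for every finite n, which is the million-years sense of immortality again, not Egan's never-dying-at-all.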
But flippant remarks aside, I'm not sure how I feel about real immortality, if such a thing should be physically permissible. Do I want to live longer than a billion years, live longer than a trillion years, live longer than a googolplex years, live longer than Graham's Number, live so long it has to be expressed in Conway chained arrow notation, live longer than Busy_Beaver(100)?
Note that I say "live longer than Graham's Number", not "live longer than Graham's Number years/seconds/millennia", because these are all essentially the same number. Living for this amount of time does not just require the ability to circumvent thermodynamics; it requires the ability to build custom universes with custom laws of physics. And the vast majority of integers are very much larger than that, or even Busy_Beaver(100). Perhaps this is possible. Perhaps not.
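To see why the units wash out, compare the conversion factor with the number itself; a rough sketch:

\[
G \text{ years} \;\approx\; (3.15 \times 10^{7}) \cdot G \text{ seconds}
\]

A factor of 10^7 is invisible at this scale: already 3↑↑3 = 3^27 ≈ 7.6 × 10^12 dwarfs it, and Graham's Number sits at the top of 64 iterations of g_{n+1} = 3↑^{g_n}3, starting from g_1 = 3↑↑↑↑3. In Conway chained arrow notation, 3→3→64→2 < G < 3→3→65→2. Multiplying a number like that by any factor you could write in ordinary scientific notation does not change how compactly you would have to express it.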
The emotional connection that I feel to my future self who's lived for Graham's Number is pretty much nil, on its own. But my self of tomorrow, with whom I identify very strongly, will be just a tiny bit closer to that distant self, and will identify with his own tomorrow in turn. As I fulfill or abandon old goals, I will adopt new ones. The connection may be vicarious, passed along the chain one day at a time, but it is there.
And I certainly see a very great difference between humanity continuing forever and humanity continuing to Graham's Number and then halting; a difference very much worth dying for. (It follows that my discount factor is 1; that is, I don't discount the far future at all.)
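The parenthetical can be made precise with the standard geometric-sum formula; nothing here is special to the argument, and δ is just the per-period discount factor:

\[
\sum_{t=0}^{T} \delta^{t} \;=\;
\begin{cases}
\dfrac{1-\delta^{T+1}}{1-\delta}, & \delta < 1 \\[4pt]
T+1, & \delta = 1
\end{cases}
\]

For any δ < 1 the sum converges, and the gap between a future that lasts forever and one that halts after Graham's Number of periods is δ^{G+1}/(1−δ), indistinguishable from zero. Only at δ = 1 does "forever" exceed every finite horizon by an unbounded amount, which is what it takes for the difference to be worth dying for.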
So, as I usually tell people:
"Do I want to live forever? I don't know. Ask me again in a million years. Maybe then I'll have decided how I feel about immortality. I am a short-term thinker; I take my life one eon at a time."