Associate yourself with people with whom you can confidently and cheerfully outperform the Nash equilibrium.
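To make the game-theory framing concrete, here's a minimal sketch of what "outperforming the Nash equilibrium" can mean, using a standard one-shot Prisoner's Dilemma. The payoff numbers are hypothetical, chosen only to satisfy the usual ordering (T > R > P > S); none of this is from the original comment.

```python
# A minimal sketch of "outperforming the Nash equilibrium" in a
# one-shot Prisoner's Dilemma. Payoff numbers are hypothetical,
# chosen only to satisfy the usual ordering T > R > P > S.

# payoffs[(my_move, their_move)] -> my payoff
payoffs = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment; the Nash equilibrium outcome
}

def best_response(their_move):
    # The move that maximizes my payoff, given the other player's move.
    return max(("C", "D"), key=lambda my_move: payoffs[(my_move, their_move)])

# Defection is the best response to either move, so (D, D) is the Nash
# equilibrium: neither player can gain by unilaterally switching...
assert best_response("C") == "D" and best_response("D") == "D"

# ...yet two people who trust each other to cooperate each earn 3 instead of 1.
print("Nash equilibrium payoff:", payoffs[("D", "D")])    # 1
print("Mutual cooperation payoff:", payoffs[("C", "C")])  # 3
```

Read through this toy model, the advice is: find the people with whom you can reliably land in the (C, C) cell rather than the equilibrium cell.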
On the contrary - this is a strict materialist perspective which looks to disambiguate the word 'trauma' into more accurate nouns, and replace the vague word 'heal' with more actionable and concrete verbs.
I think there's often a language/terminology challenge around these areas. For instance, at different times I had a grade 3 ankle sprain after endurance training, and a grade 2 wrist sprain after a car crash - those are clearly acute trauma (in the medical meaning of the word) and they do require some mix of healing to the extent possible for recovery of physical function.
But I've always found it tricky that the same word 'trauma' is used for physical injuries, past bad experiences, and as a broad description of maladaptive patterns of thought and behavior.
It's a broad word that people use in different ways.
Two things I've found useful.
(1) Highest recommendation for Lakoff's Metaphors We Live By (1980) which looks at conceptual metaphors:
https://en.wikipedia.org/wiki/Metaphors_We_Live_By
From Chapter 7, "Personification":
Perhaps the most obvious ontological metaphors are those where the physical object is further specified as being a person. This allows us to comprehend a wide variety of experiences with nonhuman entities in terms of human motivations, characteristics, and activities. Here are some examples:
- His theory explained to me the behavior of chickens raised in factories.
- This fact argues against the standard theories.
- Life has cheated me.
- Inflation is eating up our profits.
- His religion tells him that he cannot drink fine French wines.
- The Michelson-Morley experiment gave birth to a new physical theory.
- Cancer finally caught up with him.
In each of these cases we are seeing something nonhuman as human. But personification is not a single unified general process. Each personification differs in terms of the aspects of people that are picked out.
Consider these examples.
- Inflation has attacked the foundation of our economy.
- Inflation has pinned us to the wall.
- Our biggest enemy right now is inflation.
- The dollar has been destroyed by inflation.
- Inflation has robbed me of my savings.
- Inflation has outwitted the best economic minds in the country.
I think a lot of discussion around the word "trauma" follows this pattern — the challenge is that people often move between a literal, well-scoped definition of trauma, say the medical one, and a more metaphorical/ontological description, frequently without noticing it.
For instance, I can talk about the acute trauma of the wrist injury from a car crash, and everyone will largely understand what I'm talking about. But the same word 'trauma' will often be used if I describe some fear or aversion to getting into cars going forward. I don't have one, but if I did, people would refer to both the wrist injury and the thing which caused the aversion to cars as 'trauma' — which seems somewhat confused to me. Clearly a wrist injury needs healing, in the biological and medical sense of the word.
Does an aversion to getting into cars need "healing" in the same way? I mean, maybe, if you've got a definition of "healing" from neuroscience around how incoming information is processed, in which rewiring the chain reactions of synapses firing in response to a stimulus that produce a maladaptive behavioral pattern counts as "healing." But - like, probably not. "Healing" in that context is a metaphor.
(2) For my part, and just speaking for myself, I think the term "extinction" — though less in line with the current cultural milieu — is a much better word than "healing" for removing maladaptive emotional and behavioral patterns.
https://en.wikipedia.org/wiki/Extinction_(psychology)
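As a toy illustration of the distinction (this is a sketch using the standard Rescorla-Wagner associative-learning model, with hypothetical parameter values; it's not from the original comment), extinction looks like actively unlearning a prediction through repeated non-reinforced exposure, not like tissue repair:

```python
# Toy Rescorla-Wagner simulation: acquisition, then extinction.
# The learning rate (alpha * beta = 0.3) and trial counts are hypothetical.

learning_rate = 0.3
V = 0.0  # associative strength of the conditioned stimulus
history = []

# Acquisition: the stimulus is paired with the aversive outcome (lambda = 1),
# so the association (e.g., "cars -> danger") strengthens toward 1.
for _ in range(10):
    V += learning_rate * (1.0 - V)
    history.append(round(V, 2))

# Extinction: the stimulus is repeatedly presented with no outcome (lambda = 0),
# so the association decays toward 0. Nothing is "repaired"; a prediction
# is unlearned.
for _ in range(10):
    V += learning_rate * (0.0 - V)
    history.append(round(V, 2))

print(history)
```

The mechanism is different from wound healing, which is part of why a different verb seems warranted.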
In my way of thinking about it, a physical injury needs healing, while a maladaptive emotional or behavioral pattern needs extinction.
How to do the latter - talk-oriented therapies, exposure therapy (typically recommended for phobias), practice and training at implementing good patterns in situations similar to ones where you've displayed undesirable behavior, cognitive behavioral therapy if you're ruminating too much, etc. - well, unfortunately there's currently no consensus on what works best for any given case.
But I think starting with a model of "I need to heal" is questionable. Relatedly, I'm also skeptical of using the word "heal" for biochemical imbalances — for biochemically based depression, for instance, I think "I need to get my hormones and biochemistry better-regulated to remove depressive symptoms" is a mix of more actionable, more accurate, and more subjectively empowering than "I need to heal from depression."
Anyway, this goes strongly against the current cultural milieu - and I haven't been maximally precise in the comment. A lot could be nitpicked. But I think extinction of maladaptive thought patterns and maladaptive behavior patterns is more easily accomplished (and a more accurate description of reality) than healing; likewise, "regulating" seems more accurate to me than "healing" for biochemically based phenomena.
It's been useful for me to think about it this way, and sometimes useful for other people. Though, different things work for different people - so add salt liberally. Regardless, Lakoff's Metaphors is extremely relevant to the topic and highly recommended.
Partially agreed again.
I'd be hesitant to label pointing out that someone has an invalid argument as "Critical" and to have it implicitly contrasted against "Positive" — it implies they're opposites or antithetical in some way, y'know?
Also, respectfully disagree with this -
"The specific issue with ‘Not what I meant’ is that the icon reads as ‘you missed’ and not ‘we missed’. Communication is a two-way street and the default react should be at least neutral and non-accusatory."
Sometimes a commenter, especially someone new, is just badly off the mark. That's not a two-way-street problem, it's a Well-Kept Garden problem...
I agree that drive-by unpleasant criticisms without substance ("Obtuse") don't seem productive, but I actually think some of the mild "tonally unpleasant" ones could be very valuable. It's a way for an author to inexpensively let a commenter know that they didn't appreciate the comment.
"Not what I meant" seems particularly valuable for when someone mis-summarizes or inferences wrongly what was written, and "Not worth getting into" seems useful when someone who unproductively deep on a fine-grained detail of something more macro oriented.
One challenge, though, is when you have mixed agreement with someone. I disagree on tonal unpleasantness and the grouping style - "Taboo your words" might be friendly, for instance, a way to keep sharpening discussion, and isn't necessarily critical. But I agree on the meta/bikeshed point and with clearing up some of the ambiguous ones.
I clicked both "Disagree" and "Agree" on yours for partial agreement / mixed agreement, but that seems kind of unintuitive.
Not sure how many posts you've made here or elsewhere, but as someone who has done a lot of public writing, this seems like a godsend. It will reflect poorly on someone who deploys those a lot in a passive-aggressive way, but we've all seen threads that are exhausting to the original poster.
This seems particularly useful for when someone makes a thoughtful but controversial point that spurs a lot of discussion. The ability to acknowledge you read someone's comment without deeply engaging with it is particularly useful in those cases.
I turned this on for a recent post and I'm incredibly impressed.
This is the coolest feature I've seen for discussion software in many years.
Highly recommended to try it out if you make a post.
I'm a Westerner, but did business in China, have quite a few Chinese friends and acquaintances, and have studied a fair amount of classical and modern Chinese culture, governance, law, etc.
Most of what you're saying squares with my experience. A lot of Western ideas are generally regarded either as "sounds nice but is hypocritical and not what Westerners actually do" (a common viewpoint until ~10 years ago) or, per a somewhat newer idea, as "actually no, many young Westerners are sincere about their ideas - they're just crazy in an ideological way about things that can't and won't work" (白左, etc.).
The one place I might disagree with you is that I think mainland Chinese leadership tends to have two qualities that might be favorable towards understanding and mitigating AI risk:
(1) The majority of senior Chinese political leadership are engineers and seem intrinsically more open to having conversations along science and engineering lines than the majority of Western leadership. Pathos-based arguments, especially those coming from Western intellectuals, don't get much uptake in China and aren't persuasive. But concerns around safety, second-order effects, third-order effects, complex system dynamics, causality, etc., grounded in scientific, mathematical, and engineering principles, seem to be engaged with easily and at face value in private conversations - and with a level of technical sophistication such that there's less need to rely on industry leaders and specialists to explain and contextualize diagrams, concepts, technologies, etc. Senior Chinese leadership also seem to be better - this is just my opinion - at identifying credible and non-credible sources of technical information, and at identifying experts who make sound arguments grounded in causality. This is a very large advantage.
(2) In recent decades, mainland Chinese leadership have seemed able both to operate on longer timescales - credibly making, implementing, and running multi-decade plans - and to change technology adoption, regulation, and economic markets rapidly once a decision has been made in an area. The most common examples we see in the West are videos of skyscrapers being constructed very rapidly, but my personal example is that I remember needing to pay my rent with shoeboxes full of 100-renminbi notes during Hu Jintao's chairmanship, and being quite shocked when China went nearly cashless almost overnight.
I think those two factors - a genuine understanding of engineering and technical causality, combined with greater viability for engaging in both long-timescale and short-timescale action - seem like important points worth mentioning.
Hmm. Looks like I was (inadvertently) one of the actors in this whole thing. Unintended and unforeseen. Three thoughts.
(1) At the risk of sounding like a broken record, I just wanna say thanks again to the moderation team and everyone who participates here. I think oftentimes the "behind the scenes coordination work" doesn't get noticed during all the good times and doesn't get the credit it deserves. I just like to notice it and say it outright. For instance, I went to the Seattle ACX meetup yesterday, which I saw on here (LW), since I check ACX less frequently than LW. I had a great time and had some really wonderful conversations. I'm appreciative of all the people facilitating that, including Spencer (the Seattle meetup host) and the whole team that built the infrastructure here to facilitate sharing information, getting to know each other, etc.
(2) Just to clarify - not that it matters - my endorsement of Duncan's post was about the specific content in it, not about the author of the post. I do think Duncan did a really nice job taking very complex concepts and boiling them down to guidelines like "Track (for yourself) and distinguish (for others) your inferences from your observations" and "Estimate (for yourself) and make clear (for others) your rough level of confidence in your assertions" — he really summed up some complex points very straightforwardly and in a way that makes the principles much easier to implement / operationalize in one's writing style. That said, I didn't realize when I endorsed the Rationalist Discourse post that there were some interpersonal tensions independent of the content itself. Both of those posters seem like decent people to me, but I haven't dug deep on it and am not particularly informed on the details.
(3) I won't make a top-level post about this, because second-degree meta-engagement with community mechanics risks setting off more second-degree and third-degree meta-engagement, and then things spiral. But as a quick recommendation to people interested in how people relate to each other, my favorite movie is Unforgiven, a very non-traditional Clint Eastwood movie. It's framed like a traditional Western (cowboys, horses, etc.) but is really very different from the normal genre. Basically, there's only one genuinely unprovoked "bad guy" in the movie, who has causal agency for only about 30-60 seconds of doing something bad. After that, it's all just a chain reaction of people doing the best they can by their values and friends, and yet the results are very bad for everyone. Incidentally, it's also a really cinematically beautiful movie, which contrasts with the unfolding tragedy. It's a great movie. Highly recommended.
A bit late in commenting and I understand the "mongo" example was pointing at a more general concept, but I decided to check in on the current state of prediction. Not perfect, n=1, could certainly be set out better, but thought I'd give this a whirl:
Me:
Hello, I'd like to test your predictive ability on something interesting and novel. May we?
ChatGPT:
Hello! Of course, I'd be happy to help you with your prediction. What would you like me to predict?
Me:
First, some context — I'm an American tech CEO. I like and have read a lot of classical philosophy, ranging from Aristotle to Wittgenstein and much in between. I'm an amateur historian who has written a few million words of nonfiction with some historical fiction mixed in. I'm logical and technical. I'm in my late 30s. I'm from the East Coast but in Seattle. That's the context.
ChatGPT:
Thank you for the context. Is there a specific question or topic you would like me to predict about based on your background and interests?
Me:
Yes. I'm going to think of a fictional book. It's a book I like - one of my favorites. I'd like you to make a list of 5 guesses of what book I might be thinking of, along with what the most likely of the 5 is.
ChatGPT:
Certainly, I'll do my best to generate a list of 5 guesses for your favorite book based on your background and interests. Here are my predictions:
Based on your interests and background, my most likely guess for your favorite book would be "The Brothers Karamazov" by Fyodor Dostoevsky.
Me:
Ah, not so. Let's narrow down a bit. It's science fiction. Would you try again?
ChatGPT:
Thank you for the clarification. Here are 5 new predictions based on your background and interest in science fiction:
Based on your interest in science fiction, my most likely guess for your favorite book would be "Dune" by Frank Herbert.
Me:
Not a bad guess - I like Dune a lot. I was thinking of Asimov's "Foundation" though.
ChatGPT:
Ah, I see! "Foundation" is a classic work of science fiction and definitely a great choice. I apologize for not guessing it correctly on the first try, but I hope my predictions were still interesting and useful. Thank you for the opportunity to test my predictive abilities!
Me:
Can we hypothesize a little on why you didn't get Foundation? It combines technology, history, and philosophy...
ChatGPT:
Certainly! While I cannot say for sure why I did not predict "Foundation" as your favorite book, one possible reason could be that your background and interests were broad enough to support several different science fiction titles. Additionally, while "Foundation" does incorporate elements of technology, history, and philosophy, these themes are also present in many other science fiction works, making it challenging to narrow down to one specific book.
Another possibility is that the themes and style of "Foundation" may not have stood out as strongly to me based on the limited information I had about your interests and background. While I did take into account your interest in history and philosophy, other factors such as your preference for a specific writing style or subgenre of science fiction may have been important in predicting your favorite book.
(1) Physics generally seems like a trustworthy discipline - the level of rigor, replicability, lack of incentive for making false claims, etc. So base rate of trust is high in that domain.
(2) There doesn't seem to be anyone claiming otherwise or any major anomalies around it, with the possible exception of how microscopic/quantum levels of things interact/aggregate/whatever with larger scale things.
(3) It would seem to need to be at least correct-ish for a lot of modern systems, like power plants, to work correctly.
(4) I've seen wood burn, put fuel into a car and then seen the car operate, etc.
(5) On top of all of that, if the equation turned out to be slightly wrong, it's unlikely I'd do anything differently as a result so it's not consequential to look very deeply into it (beyond general curiosity, learning, whatever).
As a personal convention, I don't assign a probability above 99% to anything being true, other than the very most trivial claims (2+2=4). So I'm at 99% that E=mc² is correct enough to treat as true - though I'd look into it more closely if I were ever operating in an environment where it had meaningful practical implications.
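For scale, here's a quick back-of-the-envelope check of what the equation implies (standard textbook constants; the kiloton figure uses the conventional 1 kt TNT = 4.184×10¹² J, and none of this is from the original comment). One gram of mass corresponds to:

$$E = mc^2 = (10^{-3}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^2 = 9 \times 10^{13}\,\mathrm{J} \approx 21\ \mathrm{kilotons\ of\ TNT}$$

That magnitude lines up with the observed yields of nuclear devices, which is one of the stronger everyday corroborations that the equation is at least correct-ish.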