Why aren't people clamoring in the streets for the end of sickness and death?
What? Why? How would clamoring in the streets causally contribute to the end of sickness and death? Even if we interpret "clamoring in the streets" as a metonym for other forms of mass political action—presumably with the aim of increasing government funding for medical research?—it still just doesn't seem like a very effective strategy compared to more narrowly-targeted interventions that can make direct incremental progress on the problem.
Concrete example: I have a friend who just founded a company to use video of D. magna to more efficiently screen for potential anti-aging drugs. The causal pathway between my friend's work and defeating aging is clear: if the company succeeds at building their water-flea camera rig drug-discovery process, then they might discover promising chemical compounds, some of which (after further research and development) will successfully treat some of the diseases of aging.
Of course, not everyone has the skillset to do biotechnology work! For example, I don't. That means my causal contributions to ending sickness and death will be much more indirect. For example, my work o
...Meta-note: while your comment adds very reasonable questions and objections which you went to the trouble of writing up at length (thanks!), its tone is slightly more combative than I'd like discussion of my posts to be. I don't think the conditions that would make that the ideal style pertain here. I should perhaps put something like this in my moderation guidelines (update: now added).
I'd be grateful if you'd write future comments with a little more . . . not sure how to articulate . . . something like charity and less expression of incomprehension, more collaborative truth-seeking. Comment as though someone might have a reasonable point even if you can't see it yet.
If you don't understand the other person's point (even after thinking a bit), what's the collaborative move, other than expressing incomprehension? It seems that anything else would be pretending you understand when you actually don't, which is adversarial to the collaborative truth-seeking process.
Connotation, denotation, implication, and subtext all come into play here, as does the underlying intent one can infer from them. If you don't understand someone's point, it's entirely right to state that, but there are diverse ways of expressing incomprehension. Contrast:
Though inferences about underlying intent and mindstates are still only inferences, I'd say the first version is a lot more expected from a stance of "I assign some credence to your having a point that I missed (or at least act as though I do for the sake of productive discussion) and I'm willing to listen so that we can talk and figure out which of us is really correct here." When I imagine the second one, it feels like it comes from a place of "You are obviously wrong. Your reasoning is obviously wrong. I want you and everyone else to know that you're wrong an...
If someone is wrong, this should definitely be made legible, so that no one leaves believing the wrong thing. The problem is with the "obviously" part. Once the truth of the object-level question is settled, there is the secondary question of how much we should update our estimate of the competence of whoever made a mistake. I think we should by default try to be clear about the object-level question and object-level mistake, and by default glomarize about the secondary question.
I read Ruby as saying that we should by default glomarize about the secondary question, and also that we should be much more hesitant about assuming an object-level error we spot is real. I think this makes sense as a conversation norm, where clarification is fast, but is bad in a forum, where asking someone to clarify their bad argument frequently leads to a dropped thread and a confusing mess for anyone who comes across the conversation later.
I, at least, am a social monkey.
I basically don't find this compelling, for reasons analogous to No, It's not The Incentives, it's you. Yes, there are ways to establish emotional safety between people so that I can point out errors in your reasoning in a way that reduces the degree of threat you feel. But there are also ways for you to reduce the number of bucket errors in your mind, so that I can point out errors in your reasoning without it seeming like an attack on "am I ok?" or something similar.
Versions of this sort of thing that look more like "here is how I would gracefully make that same objection" (which has the side benefit of testing for illusion of transparency) seem to me more likely to be helpful, whereas versions that look closer to "we need to settle this meta issue before we can touch the object level" seem to me like they're less likely to be helpful, and more likely to be the sort of defensive dodge that should be taxed instead of subsidized.
I, at least, am a social monkey.
I basically don’t find this compelling, for reasons analogous to No, It’s not The Incentives, it’s you.
Strongly agreed. To expand on this—when I see a comment like this:
If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive.
The question I have for anyone who says this sort of thing is… do you endorse this reaction? If you do, then don’t hide behind the “social monkey” excuse; honestly declare your endorsement of this reaction, and defend it, on its own merits. Don’t say “I got defensive, as is only natural, what with your tone and all”; say “you attacked me”, and stand behind your words.
But if you don’t endorse this reaction—then deal with it yourself. Clearly, you are aware that you have it; you are aware of the source and nature of your defensiveness. Well, all the better; you should be able, then, to attend to your own involuntary responses. And if you fail to do so—as, being only human, you sometimes (though rarely, one hopes!) will—then the right thing to do is to apologize to your interlocutor: “I know that
...But if you don’t endorse this reaction—then deal with it yourself.
I agree with the above two comments (Vaniver's and yours) except for a certain connotation of this point. Rejection of own defensiveness does not imply endorsement of insensitivity to tone. I've been making this error in modeling others until recently, and I currently cringe at many of my "combative" comments and forum policy suggestions from before 2014 or so. In most cases defensiveness is flat wrong, but so is not optimizing towards keeping the conversation comfortable. It's tempting to shirk that responsibility in the name of avoiding the danger of compromising the signal with polite distortions. But there is a lot of room for safe optimization in that direction, and making sure people are aware of this is important. "Deal with it yourself" suggests excluding this pressure. Ten years ago, I would have benefitted from it.
That is pretty much my picture. I agree completely about the trickiness of it all.
and this might not be the best path to go down.
At some point I'd be curious to know your thoughts on the other potential paths.
I think some means of communicating are going to be more effective than others
Yes, marketing is important.
I think there is still some prior that you are correct and I'm curious to hear your thoughts", or failing that "You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with." [...] I feel like for there to be productive and healthy discussion you have to act as though at least one of the above statements is true, even if it isn't.
You can just directly respond to your interlocutor's arguments. Whether or not you respect them as a thinker is off-topic. "You said X, but this is wrong because of Y" isn't a personal attack!
this can go a lot better if you're open to the fact that you could be the wrong one
Your degree of openness to the hypothesis that you could be the wrong one should be proportional to the actual probability that you are, in fact, the wrong one. Rules that require people to pretend to be more uncertain than they actually are (because disagreement is disrespect) run a serious risk of degenerating into "I accept a belief from you if you accept a belief from me" social exchange.
...can you link to any examples of th
Whether or not you respect them as a thinker is off-topic.
Unless I evaluate someone else to be far above my level or I have a strong credence that there's definitely something I have to learn from them, then my interest in conversing heavily depends on whether I think they will act as though they respect me. It's not just on-topic, it's the very default fundamental premise on which I decide to converse with people or not - and a very good predictor of whether the conversation will be at all productive. I have greatly reduced motivation to talk to people who have decided that they have no respect for my reasoning, are only there to "enlighten" me, and are going to transparently act that way.
"You said X, but this is wrong because of Y" isn't a personal attack!
Not inherently. But "tone" is a big deal and yours is consistently one of attack around statements which needn't be so.
For example, I'm not sure how I'm supposed to rewrite my initial comment on this post to be more collaborative without making it worse writing.
Some examples of unnecessary aspects of your writing which make it hostile and worse:
What? Why?
As you s...
I disagree. I haven't seen that happen in any rationalist conversation I've been a part of.
Just noting that I have seen this a large number of times.
A norm (aka cultural wisdom) that says maybe you're not so obviously right as you think helps correct for this, in addition to the fact that conversations go better when people don't feel they're being judged and talked down to.
I also disagree with some aspects of this, though in a more complicated way. Probably won't participate in this whole discussion but wanted to highlight my disagreement (which feels particularly relevant given that the above might be taken as consensus of the LW team).
Thanks for the informative writing feedback!
As you said yourself, this was rhetorical
I think the occasional rhetorical question is a pretty ordinary part of the way people naturally talk and discuss ideas? I can avoid it if the discourse norms in a particular space demand it, but I tend to feel like this is excessive optimization for politeness at the cost of expressivity. Perhaps different writers place different weights on the relative value of politeness, but I should hope to at least be consistent in what behavior I display and what behavior I expect from others: if you see me tone-policing others over statements whose tone is as harsh as statements I've made in comparable situations, then I would be being hypocritical and you should criticize me for it!
The tone of these sentences, appending an exclamation mark to trivial statements [...] adding energy and surprise to your lessons
I often use a "high-energy" writing style with lots of italics and exclamation points! I think it textually mimics the way I talk when I'm excited! (I think if you scan over my Less Wrong contributions, my personal blog, or my secret ("secret") blog, you'll see this a lot.) I can see how some
...Separately from my other comment…
This is not because I think politeness is more important than truth. Emphatically not.
You say this, but… everything else I see in this thread (and some others like it) signals otherwise.
Just a note to make salient the opposite perspective—as far as I am concerned, a Less Wrong that banned Zack (and/or others like him) would be much, much less fun to participate in.
In contrast, this sort of … hectoring about punctuation, and other such minutiae of alleged ‘tone’ … I find extremely tedious, and having to attend to such things makes Less Wrong quite a bit less fun.
Like, feel free to call the site a lost cause, but I am highly surprised that you expect us to ban all the interesting people. We have basically never banned anyone from LW2 except weird crackpots and some people who violated norms really hard, but no one who I expect you would ever classify as being part of the "interesting people".
So, on the one hand, that is entirely true.
On the other hand, suppose you said to me: “Said, you can of course continue posting here, we’re not banning you, but you must not ever mention World of Warcraft again; and if you do, then we will ban you.”
Or: “Said, post as much as you like, but none of your posts must contain em-dashes—on pain of banning.”
… or something else along these lines. Well, that’s not a ban. It’s not even a temporary ban! It’s nothing at all. Right?
Would you be surprised if I stopped participating, after an injunction like that? Surely, you would not be.
Would you call what had happened a ‘ban’, then?
Now, to be clear, I do not consider Less Wrong a lost cause; as you see, I continue to participate, both on the object and the meta levels. (I understand namespace’s sentiment, of course, even if I disagree.)
That said, while the distinction between literal administrative actions, and the threat thereof, is not entirely unimportant… it is not, perhaps, the most important question, when it comes to discussions of the site’s health, and what participants we may expect to retain or lose, etc.
I think that in this context it might be helpful for me to mention that I've recently seriously considered giving up on LessWrong, not because of overt bans or censorship, but because of my impression that the nudges I do see reflect some badly misplaced priorities.
These kinds of nudges both reflect the sort of judgment that might be tested later in higher-stakes situations (say, something actually controversial enough for the right call to require a lot of social courage on the mods' part), and serve as a coordination mechanism by which people illegibly negotiate norms for later use.
I ended up deciding to contact the mods privately to see if we could double-crux on this, since "try at all" is an important thing to do before "give up" for a forum with as much talent and potential as this one. I'm only mentioning this here because I think these kinds of things tend to be handled illegibly in ways that make them easy to miss when modeling things like chilling effects.
This is of course admirable, but also not quite the point; the question isn’t whether the policies are clear (although that’s a question, and certainly an important one also); the question is, whether the policies—whatever they are—are good.
Or, to put it another way… you said:
… I would also be surprised if the people that namespace finds most interesting are worried about being banned based on that threat. If they do, then I think I would really like to change that (obviously depending on what the exact behavior is that they feel worried about being punished for, but my model is that we mostly agree on what would be ban-worthy).
[emphasis mine]
The problem with this is, essentially, the same as the problem with CEV: it’s all very well and good if everyone does, indeed, agree on what is ban-worthy (and in this case clarity of policy just is the solution to all problems)… but what if, actually, people—including “interesting” people!—disagree on this?
Consider this scenario:
Alice, a commenter: Gosh, I’m really hesitant to post on Less Wrong. I’m worried that they might ban me!
Bob, a moderator: Oh? Why do you think that, Alice? What would we ban you for, do you think? I’d like you to be to
...I do not expect people namespace considers interesting to be afraid of making their interesting contributions due to fear of being banned
It's important to think on the margin—not only do actions short of banning (e.g., "mere" threats of banning) have an impact on users' behavior (as Said pointed out), they can also have different effects on users with different opportunity costs. I expect the people Namespace is thinking of face different opportunity costs than me: their voice/exit trade-off between writing for Less Wrong and their second-best choice of forum looks different from mine.
In the past month-and-a-half, we've had:
A 135-comment meta trainwreck that started because a MIRI Research Associate found a discussion-relevant reference to my work on the philosophy of language "unpleasant" (because my interest in that area of philosophy was motivated by my need to think about something else); and,
A 34-comments-and-counting meta trainwreck that started because a Less Wrong moderator found my use of a rhetorical question, exclamation marks, and reference hyperlinks to be insufficiently "collaborative."
Neither of these discussions left me with a fear of being banned—insofa
...A 135-comment meta trainwreck... suck up an enormous amount of my time and emotional energy that I could have spent doing other things.
Ugh. I'm sorry about that. It was exactly the same for me (re time and emotional energy).
To be fair, in this context, I did say upthread that I wanted to ban Zack from my posts and possibly the entire site. As someone with moderator status (though I haven't been moderating very much to date) I should have been much more cautious about mentioning banning people, even if that's just me, no matter my level of aggravation and frustration.
I'm not sure what the criteria for "interesting" are, but my current personal leaning would be to exert more pressure than banning just crackpots and people who "violated norms really hard", but I haven't thought about this or discussed it all that much. I would do so before advocating hard for a particular standard to be adopted widely.
But these are my personal feelings, not ones I've really discussed with the team and definitely not any team consensus about norms or policies.
(Possibly relevant, possibly irrelevant: I wrote this before habryka's most recent comment below.)
This comment contains no italics and no exclamation points. (I didn't realize that was the implied request—as Wei intuited, I was trying to show that that's just how I talk sometimes for complicated psychological reasons, and that I didn't think it should be taken personally. Now that you've explicitly told me to not do that, I will. As you've noticed, I'm not always very good at subtext, but I should hope to be capable of complying with explicit requests.)
That is persuasive that you respect my ability to think and even flattering. I would have also taken it as strong evidence if you'd simply said "I respect your thinking" at some earlier point.
I don't think that would be strong evidence. Anyone could have said "I respect your thinking" in order to be nice (or to deescalate the conflict), even if they didn't, in fact, respect you. The Mnemosyne cards are stronger evidence because they already existed.
you'd come in order to do me the favor of informing me I was flat-out, no questions about it, wrong
I came to offer relevant arguments and commentary in response to the OP. Whether or not my arguments and commentary were persuasive (or show that you were "wrong") is up for each i
...Some Updates and an Apology:
I've been thinking about this thread as well as discourse norms generally. After additional thought, I've updated that I responded poorly throughout this thread and misjudged quite a few things. I think I felt disproportionately attacked by Zack's initial comment (perhaps because I haven't been active enough online to ever receive a direct combative comment like that one), and after that I was biased to view subsequent comments as more antagonistic than they probably were.
Zack's comments contain some reasonable and valuable points. I think they could be written better to let the good points be readily seen (content, structure, and tone), but notwithstanding that, it's probably on the whole good that Zack contributed them, including the first one as written.
The above update makes me also update towards more caution around norms which dictate how one communicates. I think it probably would be bad if there'd been norms I could have invoked to punish or silence when I felt upset with Zack and Zack's comments. (This isn't a final statement of my thoughts, just an interim update, as I continue to think more carefully about this topic.)
So lastly, I'm sorry @Zack. I shouldn't have responded quite as I did, and I regret that I did. I apologize for the stress and aggravation that I am responsible for causing you. Thank you for your contributions and persistence. Maybe we'll have some better exchanges in the future!?
I feel sympathy for both sides here. I think I personally am fine with both kinds of cultures, but sometimes kind of miss the more combative style of LW1, which I think can be fun and productive for a certain type of people (as evidenced by the fact that many people did enjoy participating on LW1 and it produced a lot of progress during its peak). I think in an ideal world there would be two vibrant LW2s, one for each conversational culture, because right now it's not clear where people who strongly prefer combat culture are supposed to go.
A nice signal that you cared about how I felt would have been that if after I’d said your bangs (!) felt condescending to me, you’d made an effort to reduce your usage rather than ramping them up to 11.
I think he might have been trying to signal that using lots of bangs is just his natural writing style, and therefore you needn't feel condescension as a result of them.
(Meta: is this still too combative, or am I OK? Unfortunately, I fear there is only so much I know how to hold back on my natural writing style without at least one of either compromising the information content of what I'm trying to say, or destroying my motivation to write anything at all.)
Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I'm instead interpreting as a contrast between transhumanist social reality vs. "normie" social reality.
(This is probably also why I thought it would be helpful to mention pro-Vibrams social pressure: not to exhaustively enumerate all possible social pressures, but to credibly signal that you're trying to make an intellectually substantive point, rather than just cheering for the smart/nonconformist/anti-death ingroup at the expense of the dumb/conformist/death-accommodationist outgroup.)
a belief that aging and death are solvable
But whether aging and death are solvable is an empirical question, right? What if they're not solvable? Then the belief that aging and death are solvable would be incorrect.
I can pre
...The cases Scott talks about are individuals clamoring for symbolic action in social reality in the aid of individuals that they want to signal they care about. It's quite Hansonian, because the whole point is that these people are already dead and none of these interventions do anything but take away resources from other patients. They don't ask 'what would cause people I love to die less often' at all, which my model says is because that question doesn't even parse to them.
They see grandma getting sicker and sicker, suffering more and more, and they feel outrage: why have we not solved this yet?
You expect them to get angry - at whom in particular? - because grandma keeps getting older? For tens of thousands of years of human history, the only alternative to this has been substantially worse for grandma. Unless she wants to die and you're talking about euthanasia, but no additional medical research is needed for that. There is no precedent or direct empirical evidence that anything else is possible.
Maybe people are wrong for ignoring speculative arguments that anti-aging research is possible, but that's a terrible example of people being bound by social reality.
I think a lot of people in the world in general actually live much more in a mindset where concrete physical thinking is real than it might seem! The problem as I see it is, people's causal calibration level varies, and people's impression of their own ability to have their own beliefs about a topic without it embarrassing them varies. The "social reality" case is what you get when someone focuses most or all of their attention on interacting with people and doesn't have anything hard in their life, so they simply don't need to be calibrated about physics and must rely on others' skill in such topics.
But I don't think nearly any neuroplastic human is going to be so unfamiliar with [edit: hit submit while trying to put my cursor back! continuing writing...]
... unfamiliar with causal reality that they can't comprehend the necessity of basic tasks. They might feel comfortable and safe and therefore simply not think about the details of the physics that implements their lives, but it's not a case of there being a social reality that's a separate layer of existence. It's more like the social behavior is what you get when people don't have the emotional safety and spare time and thinking to explore learning about the physics of their lives.
does that seem accurate to y'all? what do you think?
I like this perspective.
I don't think society is blind to this distinction, but it is rarely drawn so cleanly.
In the world of social realities, there is well-known memetic protection advising away from being overdependent on the social reality alone. The children's tale "The Emperor's New Clothes" can be taken as an actor with social power asserting something bizarre, with many people entertaining/allowing this social reality, but this being obviously insufficient to change reality.
There are important inherently intersubjective concepts (like money, fun, and human value?) that seem more grounded in the social reality. That doesn't mean all the power of the causal stance cannot be used in the study of these things, but their origin in the intersubjective social perspective should not be neglected.
To my mind, this is too vague an explanation. Why is it that far more people believe in fighting global warming than in fighting the ageing process? They both rest upon scientific premises. You may say that the causal thinkers interested in fighting global warming, managed to bring lots of social thinkers along with them, by using social mechanisms; but why did the anti-warmers manage that, when the anti-agers did not? Also, even if we just focus on causal thinkers, it's far more common to deplore global warming than it is to deplore the ageing process.
Most people who campaign on global warming don't do it because of the science. If you look at planetary boundaries, for example, it seems like the way we mess up the nitrogen cycle is a bigger environmental problem than global warming. Any explanation that tries to explain people fighting global warming with them believing in science, or believing that protecting the environment is important, has to explain why those people aren't also trying to fix the nitrogen cycle.
You have serious people who try to speak about climate change instead of global warming to communicate that global warming isn't the only environmental issue that's important, but in the public eye climate change is still mainly global warming. A bunch of things to which the IPCC assigns less than 90% probability are also considered certain by most people who believe in fighting global warming.
There are many economic actors who have an interest in getting people worried about CO2 to market solar cells to them but there are no economic actors who have an interest in getting people interested in the nitrogen cycle.
I'm not sure that in reality the differences here are that great. We all have a tremendously human lens through which we view and experience the world, and some of the distinctions here strike me as pretty arbitrary.
Why should 'loved ones' enter into the causal reality (in order to motivate a desire to end death and suffering)? Why not view each person as equal moral agents with equal moral worth? Are flowers a gift of value that bring colour and scent, or are they decaying plant matter? It seems to me there's an implicit value set tha...
I don't agree with this stark duality. Reality is reality, and it's all causal. Some of it is simple enough to model explicitly and to make fairly good plans and predictions based on calculations. Some is so complicated that we don't know what the useful models are, and while our brains have built some usable models, those models are even too complicated for us to introspect on and understand mathematically.
This is a continuum rather than a classification, and behavior of other humans is toward the complex and illegible side.
For your examples, the heuristi
...They certainly don’t look like most shoes, but apparently, they’re very comfortable and good for you.
It's more complicated than their just being very good for you. If you switch your shoes to Vibrams and don't change the way you walk, there's a good chance that you will hurt yourself (hence the class action lawsuit). When walking in Vibrams it's important to hit the floor with your toes first, not your heels.
I think I got my first Vibrams in 2011, and especially at the time where they also lo...
I think Hanson deserves the credit here for this sharp post: https://www.overcomingbias.com/2017/03/better-babblers.html
Would you expect an evolved species to care about death in the abstract? By what mechanism?
Also,
If you primarily inhabit causal reality (like most people on LessWrong)
You're in a group of people where non-conformity happens to be a social good, no need to posit specialness here. We're all running on the same architecture, some people just didn't have their personality traits and happenstance combine in a way that landed them on LW.
This seems mostly a special, human focused perspective on the fundamental distinction I'd make between the ontological and the ontic, phenomenon and noumenon, pointing at a thing and the thing itself, and indeed the map and the territory. That's not to lessen the distinction you make, because it's a different one that is more intuitive to humans because our brains seem to consider the social a separate magisterium from the "causal", but I think it draws much of its power from this underlaying distinction between what is inside and ...
The thing we actually care about... Is that social reality? People being happy and content and getting along, love and meaning - it seems to be based in large part on the fundamental question of how people feel about other people, how we get along, etc.
It might be understandable that if you're a person who cares about those things, you might think that the near-term effects of how people think, feel, and relate to what happens affect the long term of how people think and feel and relate. If you don't have a lot of power, you might even subconsciously think this is
...
Epistemic status: this is a new model for me, certainly rough around the joints, but I think there’s something real here.
This post begins with a confusion. For years, I have been baffled that people, watching their loved ones wither and decay and die, do not clamor in the streets for more and better science. Surely they are aware of the advances in our power over reality in only the last few centuries. They hear of the steady march of technology, Crispr and gene editing and what not. Enough of them must know basic physics and what it allows. How are people so content to suffer and die when the unnecessity of it is so apparent?
It was a failure of mine that I didn’t take my incomprehension and realize I needed a better model. Luckily, RomeoStevens recently offered me an explanation. He said that most people live in social reality and it is only a minority who live in causal reality. I don’t recall Romeo elaborating much, but I think I saw what he was pointing at. The rest of this post is my attempt to elucidate this distinction.
Causal Reality
Causal reality is the reality of physics. The world is made of particles and fields with lawful relationships governing their interactions. You drop a thing, it falls down. You lose too much blood, you die. You build a solar panel, you can charge your phone. In causal reality, it is the external world which dictates what happens and what is possible.
Causal reality is the reality of mathematics and logic, reason and argument. These too, it would seem, exist independently of the human minds who grasp them. Believing in the truth preservation of modus ponens is not so different from believing in Newton’s laws.
Necessarily, you must be inhabiting causal reality to do science and engineering.
In causal reality, what makes things good or bad are their effects and how much you like those effects. My coat keeps me warm in the cold winter, so it is a good coat.
All humans inhabit causal reality to some extent or another. We avoid putting our hands in fire not because it is not the done thing, but because we predict that it will hurt.
Social Reality
Social reality is the reality of people, i.e. people are the primitive elements rather than particles and fields. The fundamentals of the ontology are beliefs, judgments, roles, relationships, and culture. The most important properties of any object, thing, or idea are how humans relate to it. Do humans think it is good or bad, welcome or weird?
Social reality is the reality of appearances and reputation, acceptance and rejection. The picture is other people and what they think the picture is. It is a collective dream. Everything else is backdrop. What makes things good or bad, normal or strange is only what others think. Your friends, your neighbors, your country, and your culture define your world, what is good, and what is possible.
Your reality shapes how you make your choices
In causal reality, you have an idea of the things that you like and dislike. You have an idea of what the external world allows and disallows. In each situation, you can ask what the facts on the ground are and which option you most prefer. Is it better to build my house from bricks or straw? Well, what are the properties of each, their costs and benefits, etc.? Maybe stone, you think. No one has built a stone house in your town, but you wonder if such a house might be worth the trouble.
In social reality, in any situation, you are evaluating and estimating what others will think of each option. What does it say about me if I have a brick house or a straw house? What will people think? Which is good? And goodness here simply stands in for the collective judgment of others. If something is not done, e.g. stone houses, then you will probably not even think of the option. If you do, you will treat it with the utmost caution: there is no precedent here, so who can say how others will respond?
An Example: Vibrams
Vibrams are a kind of shoe with individual “sections” for each of your toes, kind of like a glove for your feet. They certainly don’t look like most shoes, but apparently, they’re very comfortable and good for you. They’ve been around for a while now, so enough people must be buying them.
How you evaluate Vibrams will depend on whether you approach them more from a causal reality angle or a social reality angle. Many of the thoughts in each case will overlap, but I contend that their order and intensity will still vary.
In causal reality, properties are evaluated and predictions are made. How comfortable are they? Are they actually good for you? How expensive are they? These are obvious “causal”/”physical” properties. You might, still within causal reality, evaluate how Vibrams will affect how others see you. You care about comfort, but you also care about what your friends think. You might decide that Vibrams are just so damn comfortable they’re worth a bit of teasing.
In social reality, the first and foremost questions about Vibrams are going to be what do others think? What kinds of people wear Vibrams? What kind of person will wearing Vibrams make me? Do Vibrams fit with my identity and social strategy? All else equal, you’d prefer comfort, but that really is far from the key thing here. It’s the human judgments which are real.
An Example: Arguments, Evidence, and Truth
Causal reality is typically accompanied by a notion of external truth. There is a way reality is, and that’s what determines what happens. What’s more, there are ways of accessing this external truth, verified by the good predictions those methods yield. Evidence, arguments, and reasoning can often work quite well.
If you approach reality foremost with a conception of external truth, and of reasoning broadly as a way to reach that truth, you can be open to raw arguments and evidence changing your mind. These are information about the external world.
In social reality, truth is what other people think and how they behave. There are games to be played with “beliefs” and “arguments”, but the real truth (the only truth?) that matters is how these arguments go down with others. The validity of an argument comes from its acceptance by the crowd, because the crowd is truth. I might accept that within the causal reality game you are playing you have a valid argument, but that’s just a game. The arguments from those games cannot move me and my actions independently of how they are evaluated in social reality.
“Yes, I can’t fault your argument. It’s a very fine argument. But tell me, who takes this seriously? Are there any experts who will support your view?” Subtext: your argument within causal reality isn’t enough for me, I need social reality to pass judgment on this before I will accept it.
Why people aren’t clamoring in the streets for the end of sickness and death?
Because no one else is. Because the done thing is to be born, go to school, work, retire, get old, get sick, and die. That’s what everyone does. That’s how it is. It’s how my parents did, and their parents, and so on. That is reality. That’s what people do.
Yes, there are some people who talk about life extension, but they’re just playing at some group game the way goths are. It’s just a club, a rallying point. It’s not actually about anything. It’s just part of the social reality like everything else, and I see no reason to participate in that. I’ve got my own game which doesn’t involve being so weird, a much better strategy.
In his book The AI Does Not Hate You, Tom Chivers recounts himself performing an Internal Double Crux with guidance from Anna Salamon. By my take, he is valiantly trying to reconcile his social and causal reality frames. [emphasis added, very slightly reformatted]
Most people primarily inhabit a social reality frame, and in social reality, options and actions which aren’t being taken by other people who are like you and whose judgments you’re interested in don’t exist. There’s no extrapolating from physics and technology trends; those things are just background stories in the social game. They’re not real. Probably less real than Jon Snow. I have beliefs and opinions and judgments of Jon Snow and his actions. What is real are the people around me.
Obviously, you need a bit of both
If you read this post as being a little negative toward social reality, you’re not mistaken. But to be very clear, I think that modeling and understanding people is critically important. Heck, that’s exactly what this post is. For our own wellbeing and to do anything real in the world, we need to understand and predict others, their actions, their judgments, etc. You probably want to know what the social reality is (though I wonder if avoiding the distraction of it might facilitate especially great works, but alas, it’s too late for me). Yet if there is a moral to this post, it’s two things: