Sphere packing and logical uncertainty
Trying to post here, since I don't see how to post to https://agentfoundations.org/.
Recently sphere packing was solved in dimension 24, and I read about it in Quanta Magazine. I found the following part of the article (paraphrased) fascinating.
Cohn and Kumar found that the best possible sphere packing in dimension 24 could be at most 0.0000000000000000000000000001 percent denser than the Leech lattice. Given this ridiculously close estimate, it seemed clear that the Leech lattice must be the best sphere packing in dimension 24.
This is clearly a kind of reasoning under logical uncertainty, and it seems very reasonable. Most humans would probably reason similarly, even if they have no idea what the Leech lattice is.
Is this kind of reasoning covered by already known desiderata for logical uncertainty?
What makes buying insurance rational?
Hey, everyone! So I've been reading an article about expected utility; apparently, to figure out whether a risk is worth taking, you multiply the value of the outcome by its probability.
And apparently insurance companies can make money because the expected payout of an insurance policy is lower than its price.
So why would buying insurance be the rational action? I mean, intuitively it makes sense (you want to avoid the risk), but it doesn't seem to fit well with this idea. If insurance is almost by definition worth slightly less than its price, how is it worth buying?
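One way to make the tension concrete is to separate expected *money* from expected *utility*. Here is a sketch with invented numbers (the wealth, loss, probability, and premium below are all made up), assuming logarithmic utility, i.e. a risk-averse agent for whom each extra dollar matters less the more they have:

```python
import math

wealth = 100_000   # starting wealth
loss = 50_000      # size of the possible loss
p = 0.01           # probability the loss happens
premium = 600      # insurer charges more than the expected loss (0.01 * 50_000 = 500)

def u(w):
    # Concave (logarithmic) utility: diminishing marginal value of money.
    return math.log(w)

# Expected money: insuring loses 600 for sure vs. 500 on average uninsured.
ev_insured = -premium
ev_uninsured = -p * loss

# Expected utility: the rare catastrophic branch is weighted by how much
# a risk-averse agent dislikes being poor, not just by dollars.
eu_insured = u(wealth - premium)
eu_uninsured = p * u(wealth - loss) + (1 - p) * u(wealth)

print(ev_insured < ev_uninsured)   # insurance loses on expected money...
print(eu_insured > eu_uninsured)   # ...but can win on expected utility
```

So with concave utility, a policy can be worth less than its price in dollars yet still be the utility-maximizing purchase; for losses that are small relative to your wealth, the same arithmetic says to skip the insurance.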
(sorry if it's a dumb question)
The Talos Principle
Dear members of Less Wrong, this is my very first contribution to your society, and I hope that you might help me out of my confusion.
A few months ago, I tried for the first time a video game created by Croteam called 'The Talos Principle'.
At the time, I was astonished by all the philosophical questions the game was raising. It has somewhat changed the way I see the world now, and also the way I see myself.
I wanted to share my thoughts with you on the subject of 'What does being a Human mean?'
First, I'd like to introduce you to this principle.
In Greek mythology, Talos was a giant automaton made of bronze which protected Europa in Crete from pirates and invaders.
He was known to be a gift given to Europa by Zeus himself.
He was so strong that he could crush a man's skull using only one hand, and so tall that he could circle the island's shores three times daily.
He was able to talk, think, and act as he wished (except that he had to obey Europa's will).
Even though his body was not organic, a liquid metal flowed through his veins, behaving like blood.
And here is how the principle begins: what is the fundamental difference between Talos and us Humans?
Consider the fact that, like us, he is able to think for himself, move by his own will, and communicate like everybody does. Is he really different from us? Doesn't sharing our culture, history, and language make him Human as well?
I'm pretty sure your first thought might be 'No way! We are part of a biological species. We have nothing in common with a synthetic being.'
But does our body really define us as Human beings?
From a strict biological point of view, Darwin would say yes, of course. And we can hardly argue with that.
But take a Human being, for instance Plato, and cut his leg off, replacing it with a synthetic prosthesis.
Would this person still be Plato?
It appears that the answer is yes, according to all the people who have suffered accidents that forced them to give up a part of their body.
They were still the same. Of course they suffered phantom pains and other psychological damage, but in the end they remained the same as before.
Let's get back to our example. Now imagine that this synthetic-leg-equipped Plato has an accident that makes him lose his right arm. Filled with empathy, you agree to give him a prosthetic one.
Now, would this person still be Plato?
Again, the answer is yes. Such accidents never leave a man without some kind of trauma, but he is still able to think and act like a normal Human. Thus we assume that he's still one of us, and that he's still himself.
So, how many times can we repeat the process before we touch something that cannot be exchanged for anything synthetic without destroying Plato's Humanity (and sanity)?
The answer appears to be the brain.
Deleting the brain amounts to deleting our being. We can live with an artificial heart, lungs, stomach, etc., but we can't live without our natural brain.
The brain is one of the biggest unknowns in the Human body. Doctors claim that we understand less than half of how the brain works, which mystifies it all the more.
But still, we can reduce the brain to its physical material: an estimated 15-33 billion neurons, each connected by synapses to several thousand other neurons, communicating with one another by means of long protoplasmic fibers called axons, which carry trains of signal pulses called action potentials to specific recipient cells in distant parts of the brain or body.
Indeed, even if we do not know for sure how every cell interacts with the others, we know that everything is bound by chemistry. Every kind of information transfer can be reduced to a chemical reaction, something physical.
Every thought of our being starts and ends with a chemical reaction. And we know how to replace one chemical reaction with another. We know how to simulate a potential transfer, and thus we are today able to simulate a very simple brain on a computer.
(You may want to check out the Blue Brain Project, which illustrates everything I'm writing about. The simulation does not consist simply of an artificial neural network but involves a biologically realistic model of neurons.)
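To make "simulating a potential transfer" slightly more concrete, here is a toy leaky integrate-and-fire neuron in Python. This is a crude caricature, nowhere near the biologically realistic models the Blue Brain Project works with, and every constant below is invented for illustration:

```python
# Toy leaky integrate-and-fire neuron: a crude sketch of a "potential
# transfer", far simpler than a biologically realistic neuron model.
# All constants here are invented for illustration.
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # membrane potentials (mV)
leak = 0.1      # fraction of the gap to resting potential leaked per step
current = 2.0   # constant synaptic input per step (arbitrary units)

v = v_rest
spikes = 0
for step in range(100):
    v += current - leak * (v - v_rest)  # integrate input, leak toward rest
    if v >= v_thresh:                   # threshold crossed: action potential
        spikes += 1
        v = v_reset                     # membrane resets after firing
print(spikes)  # number of action potentials fired in 100 steps
```

Even this caricature reproduces the qualitative behavior the paragraph describes: charge accumulates, a threshold is crossed, a pulse fires, and the cycle repeats, all of it just arithmetic on a physical quantity.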
So if, in the near future, we are able to correctly simulate a Human brain, and therefore a whole Human body as well, can we consider it a Human being?
Being aware of the material reality of the brain might make you think twice about yourself and your species in general.
How do you describe a human being now? Would you describe Talos as a human being as well? Or would you just call it a being, refusing to give it the title of 'Human' because of the biological difference between you and it? And can a man entirely simulated in a computer still be called human?
Also, do not forget how the body influences the brain. Just look back on what happened to you during puberty, when sexual desire overwhelmed you and made it impossible to remain calm. This happened because of chemicals, and it's very interesting to see how a single chemical can have such a huge influence on your consciousness.
I'm in a haze for now, so instead of lying on my bed thinking, I'd rather ask for your point of view. I'm very curious; would you kindly give it to me?
Thanks for reading it all, I'll see your reactions in the comment section below.
[By the way, I'm a 19-year-old French engineering student; I beg your pardon for my English.]
Making My Peace with Belief
I grew up in an atheistic household.
Almost needless to say, I was relatively hostile towards religion for most of my early life. A few things changed that.
First, the apology of a pastor. A friend of mine was proselytizing at me, and apparently discussed it with his pastor; the pastor apologized to my parents, and explained to my friend he shouldn't be trying to convert people. My friend apologized to me after considering the matter. We stayed friends for a little while afterwards, although I left that school, and we lost contact.
I think that was around the time that I realized that religion is, in addition to being a belief system, a way of life, and not necessarily a bad one.
The next was actually South Park's Mormonism episode, which pointed out that a belief system could be desirable on the merits of the way of life it represented, even if the beliefs themselves are stupid. This tied into Douglas Adams's comment on Feng Shui, that "...if you disregard for a moment the explanation that's actually offered for it, it may be there is something interesting going on" - which is to say, the explanation for the belief is not necessarily the -reason- for the belief, and that stupid beliefs may actually have something useful to offer - which then requires us to ask whether the beliefs are, in fact, stupid.
Which is to say, beliefs may be epistemically irrational while being instrumentally rational.
The next peace I made with belief actually came from quantum physics, and reading about how there were several disparate and apparently contradictory mathematical systems, which all predicted the same thing. It later transpired that they could all be generalized into the same mathematical system, but I hadn't read that far before the isomorphic nature of truth occurred to me; you can have multiple contradictory interpretations of the same evidence that all predict the same thing.
Up to this point, however, I still regarded beliefs as irrational, at least on an epistemological basis.
The next peace came from experiences living in a house that would have convinced most people that ghosts are real, which I have previously written about here. I think there are probably good explanations for every individual experience even if I don't know them, but am still somewhat flummoxed by the fact that almost all the bizarre experiences of my life all revolve around the same physical location. I don't know if I would accept money to live in that house again, which I guess means that I wouldn't put money on the bet that there wasn't something fundamentally odd about the house itself - a quality of the house which I think the term "haunted" accurately conveys, even if its implications are incorrect.
If an AI in a first person shooter dies every time it walks into a green room, and experiences great disutility for death, how many times must it walk into a green room before it decides not to do that anymore? I'm reasonably confident on a rational level that there was nothing inherently unnatural about that house, nothing beyond explanation, but I still won't "walk into the green room."
That was the point at which I concluded that beliefs can be -rational-. Disregard for a moment the explanation that's actually offered for them, and just accept the notion that there may be something interesting going on underneath the surface.
If we were to hold scientific beliefs to the same standard we hold religious beliefs - holding the explanation responsible rather than the predictions - scientific beliefs really don't come off looking that good. The sun isn't the center of the universe; some have called this theory "less wrong" than an earth-centric model of the universe, but that's because the -predictions- are better; the explanation itself is still completely, 100% wrong.
Likewise, if we hold religious beliefs to the same standard we hold scientific beliefs - holding the predictions responsible rather than the explanations - religious beliefs might just come off better than we'd expect.
[Link] Stephen Hawking AMA answers
https://www.reddit.com/r/science/comments/3nyn5i/science_ama_series_stephen_hawking_ama_answers/
The vast majority of the discussion is about AI risk.
The Temptation to Bubble
"Never discuss religion or politics."
I was raised in a large family of fundamentalist Christians. Growing up in my house, where discussing politics and religion was the main course of life, the above proverb was said often -- as an expression of regret, shock, or self-flagellation. Later, the experience impressed on me a deep lesson about the bubbling-up that even intelligent and rational people fall into. And I ... I am often tempted, so tempted, to give in.
Religion and political identity were the languages of love in my house. Affirming the finer points of a friend's identical values was a natural ritual, like sharing coffee or a meal together, and so soothing we attributed the afterglow to God himself. We can use some religious nonsense to illustrate, but please keep in mind, there's a much more interesting point here than "certain religious views are wrong".
A point of controversy was an especially excellent topic of mutual comfort. How could anyone else be *so* stupid as to believe we came from monkeys and monkeys came from *nothing*! that exploded a gazillion years ago, especially given all the young-earth creation evidence that they stubbornly ignored. They obviously just wanted to sin and needed an excuse. Agreeing about something like this, you both felt smarter than the hostile world, and you had someone to help defend you against that hostility. We invented byzantine scaffolding for our shared delusions to keep the conversation interesting and to agree with each other in ever more creative ways. We esteemed each other, and ourselves, much more.
This safety bubble from the real world would allow denial of anything too painful. Losing a loved one to cancer? God will heal them. God mysteriously decided not this time? They're in Heaven. Did your incredible stupidity lose you your job, your wife, your reputation? God would forgive you and rescue you from the consequences. You could probably find a Bible verse to justify anything you're doing. Ironically, this artificial shell of safety, which kept us from ever facing the pain and finality reality often has, made us all the more fragile inside. The bubble became necessary to psychologically survive.
In this flow of happy mirror neuron dances, minor disagreements felt like a slap on the face. The shock afterward burned harder than a hand-print across the face.
25 years and what seems like 86 billion light-years of questioning, testing, and learning later, I can see that even beyond religion, people fall into bubbles easily. The political conservatives only post articles from conservative blogs. The liberals post from liberal news sources. Neither has ever gone hunting on the opposing side for ways to test their own beliefs, even once. Ever debate someone over a bill that they haven't even read? All their info comes from the pravda wing of their preferred political party / street gang; none of it is first-hand knowledge. They're in a bubble.
Three of the most popular religions that worship the same God will each tell you the others are counterfeits, despite their shared moral codes, values, rituals, and traditions. Apple fanboys wholesale swallowed the lies about their OS and machines being immune to viruses, without ever reading one article of an IT security blog. It's not just confirmation bias at work; people live in an artificial bubble of information sources that affirm their identity, soothe their egos, and never test any idea they have. Scientific controversies create bubbles no less. But it doesn't even take a controversy, just a preferred source of information -- news, blogs, books, authors. Even when such sources attempt to present an idea or argument from those who disagree, they do not present it with sufficient force.
Even Google will gladly do this for you, customizing your search results by location, demographics, past searches, etc., to filter out things you may not want to see -- a convenient invisible bubble, even if you don't want it!
If you're rational, there's daily work to break the bubbles by actually looking for ways to test the beliefs you care about. The more you care about them, the more they should be tested.
Problem is, the bigger our information sharing capabilities are, the harder it is to find quality information. Facebook propaganda posts get repeated over and over. Re-tweets. Blog reposts. Academic "science" papers that have never been replicated, but are in the news headlines everywhere. The more you actually dig into the agitprop looking for a few gems, or at least sources of interesting information, the more you realize even the questions have been framed wrongly, especially over controversial things. Without searching for high quality evidence about a thing, I resign myself to "no opinion" until I care enough to do the work.
And now you don't fit in anyone's bubble. Not in politics, not in religion, not even in technical arenas where people bubble up also. Take politics ... it's not that I'm a liberal and I miss the company of my conservative friends, or the other way around. Like the "underground man" I feel I actually understand the values and arguments from both sides, leading to wanting to tear the whole system apart and invent new ways or angles of addressing the problems.
But try to have a conversation, for example, about the trade-offs of the huge military superiority the US has created: costs and murder versus eventually conceding dominance to who knows whom -- as they say, you either wear the merciless boot or live with it on your neck. Approach the topic this way, and you may be seen as a weak peacenik who dishonors our hero troops, or as a monster who gladly trades blood for oil; you're not even understood as having no firm conclusion.
Okay, so don't throw your pearls before swine you say. But you know, you're going to have to do it quite a few times just to find out where the pig-pen ends and information close to the raw sources and unbiased data begin. If you want to hear interesting new ideas from other minds, you're going to have to accept that they are biased and often come from inside their bubble. If you want to test your own beliefs, actively seek to disprove what you think, you will have to wade through oceans of bullshit and agitprop to find the one pearl that shifts your awareness. There is no getting around the work.
Then there are these kinds of situations: my father has also left the fundamentalist fold, but he has gone deeply into New Age mysticism instead of the more skeptical path I've taken. I really want to preserve our closeness and friendship. I know I can't change his mind, but he really likes to talk about this stuff, so to stay close I should try hard to understand his perspective and ideas. But when I ask him to define terms like "higher consciousness", or to explain experiences of "higher awareness", or to understand the predictions about coming human "evolutionary" steps, he falls back on "it can't be described" or "it's beyond our present intelligence to grasp" or even "beyond rational thought to understand". So I can artificially nod along, not understanding a damn word of it, or I can try to get some kind of hook into his ideas and totally burst his bubble, without even trying. Bursting someone's bubble is not cool. If you burst their bubble, they will cry, if only inwardly. Burst their bubble, and they will try to burst yours -- not to help you, but from pain.
Problem is, in trying to burst your own bubble, you end up breaking everyone else's bubbles left and right.
There is the temptation to seek out your own bubble just for temporary comfort ... just how many skeptical videos about SpiritScience or creationism or religion am I going to watch? The scale of evidence is already tipped so far, investing more time to learn more details that nudge it 0.0001% toward 100% isn't about anything other than emotional soothing. Emotional soothing is dangerous; it's reinforcing my bubbles that I will now have to work all the harder to burst, to test, and to train myself to have no emotional investment in any provisional belief.
But it is so, so tempting, when you see yet another propaganda post for the republicrips or bloodocrat gang, vast scientific conspiracy posts, watch your friends and family shut down mid-conversation, so tempting to go read another Sagan book that teaches me nothing new but makes me feel good about my current provisional beliefs. It's tempting to think about blocking friends who run a pravda outlet over facebook, or even shut down your facebook account. It's tempting to give up on family in their own bubble and artificially nod along to concepts that have no meaning.
To some extent, I am even giving in by writing this ... I would like to see many other rationalists feel the same way, affirm my perspective, and struggle with this, and that reinforces my bubble, doesn't it? There are probably psychological limits and needs that make some minimal degree of it necessary. We're compelled to eat, but if we give ourselves over to that instinct without regard or care, it will eventually kill us.
Don't bubble, don't give in to the temptation; keep working to burst the bubbles that accrete around you. It's exhausting, it's painful, and it's the only thing keeping your eyes open to reality.
And friend, as you need it here and there, come here and I'll agree with you about something we both already have mountains of evidence for and almost none against. ;)
Kant's Multiplication
In this community there is a respected technique called "shutting up and multiplying". However, using it in many realistic ethical dilemmas can be difficult. Imagine a situation: there is a company, and each of its employees gains utility by pressing buttons. Each employee has a one-use-only button that, when pressed, gives that employee one hundred units of utility while all the others lose one unit each. They can't communicate about the buttons, and there are no other effects. Is it ethical to press the button?
This is an extremely simple situation. Utilitarianism, no matter which variant, would easily say that pressing the button is ethical if there are fewer than one hundred and one employees and unethical if there are more than one hundred and one. I believe (proponents of other ethical theories may correct me if I'm wrong) that both virtue ethics (a person demonstrates a vice by pressing the button) and deontology (it's a kind of stealing, and stealing is wrong), as they're usually used (and not as a utilitarianism substitute), would say it's wrong to be the first one to press the button -- and so, in a company of, say, eleven employees, each of the eleven would forgo ninety utils.
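The arithmetic of the button example can be spelled out in a few lines (the function name and the eleven-employee figure are just the setup above made explicit):

```python
def pressing_is_net_positive(n_employees, gain=100, loss_per_other=1):
    # Pressing gives the presser +gain and costs each of the
    # (n_employees - 1) other employees loss_per_other utils.
    return gain > (n_employees - 1) * loss_per_other

print(pressing_is_net_positive(100))   # True: 100 > 99
print(pressing_is_net_positive(102))   # False: 100 < 101

# With eleven employees, if everyone presses, each person gains 100 and
# loses 10, netting +90 -- the ninety utils forgone if no one presses.
n = 11
print(100 - (n - 1) * 1)               # 90
```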
But the only reason this situation is so simple under utilitarianism is that we have direct access to the employees' utility functions. Usually, though, that's not the case. If we want to make a decision on a common question such as "is it ethical to throw a piece of trash on the road, or is it better to carry it to the trash bin" or "is it okay to smoke in a room with other people inside", we have to weigh the utility we gain from throwing the trash right here against the utility lost by all the other people. We can also use quick rules, which would say "no" in both situations. But if there is no rule, or there are two conflicting rules, or we don't trust one, then it would be useful to have a method more reliable than our Fermi estimates of utility or even money.
I believe there is such a method, and as you have probably already figured out, it's the question "what would happen if everyone did something like this?". It's most often used in the context of deontology, but it allows a utilitarian to feel the shared costs.
What am I talking about? Imagine we have to decide whether to throw a piece of trash on the road. To calculate, we take the number of people N who will travel this road, estimate their average loss R from the irritation of seeing a piece of trash, and multiply them. We then compare the resulting NR to the loss X of taking the trash to the bin. Is it difficult to get the sign of this comparison right? I guess it is. Now let's imagine every traveller has thrown down a piece of trash, and let's suppose your utility loss is the same for each piece of trash you see, with your irritation about average for the travellers here. How much utility are you going to lose? The same NR. But now imagining this loss and comparing it to the loss of hauling the trash to the bin is much easier, and I believe even more accurate.
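A toy version of this comparison, with all the numbers invented for illustration; the point is just that the "if everyone littered" reframing computes exactly the same NR, only in a form that's easier to feel:

```python
n_travellers = 10_000   # N: people who will pass this spot
irritation = 0.001      # R: average utility loss per person per piece of trash seen
haul_cost = 5.0         # X: your utility loss from carrying the trash to a bin

# Direct utilitarian sum: total harm your one piece does to everyone else.
total_harm = n_travellers * irritation               # N * R

# Reframing: if every traveller littered once, *you* would see
# n_travellers pieces of trash, losing `irritation` utils for each.
harm_to_me_if_universal = n_travellers * irritation  # also N * R

assert total_harm == harm_to_me_if_universal
print("litter" if haul_cost > total_harm else "haul it to the bin")
```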
To use this method right a utilitarian should be careful not to make a few errors. I'm going to demonstrate a few points using a "smoking in a crowded room" example.
First of all, we shouldn't use worldbuilding too much. "If everyone here always smoked, they'd install a powerful ventilation system, so I'd be okay." That doesn't sum the utilities in the right way, because the ventilation system doesn't exist. So we should change only a single aspect of behavior, and not any reactions to it.
Second, we have to remember that a sum of effects is not always a good substitute for a sum of utilities. That's why we cannot say something like "If everyone here smoked, we'd die of suffocation, so smoking here is as bad as killing a person." This is in addition to the principle "don't judge people on the utility of what they do; judge them when judging has high utility".
I believe the second point may work in the opposite direction in the trash example. That is, the more trash there is, the less irritation a single piece causes. To counter this effect, we have to imagine more trash than there would be if everyone littered just once.
And the third point is that the person doing the calculation is not always similar to the average one. "If everyone smoked, I'd be okay; I've got no problem with rooms full of smoke" fails to capture the total utility of the people there, unless they're all smokers -- and maybe even then.
This method, used correctly, may be a good addition to the well-known "shut up and multiply", and it is also an example of the good tradition of stealing ideas from competing theories.
(I'm not a native speaker and I haven't got much experience of writing in English, so I'd be especially grateful for any grammar corrections. I don't know if the tradition here is to send them via PM or to use a special thread)
MIT Technology Review - Michael Hendricks opinion on Cryonics
http://www.technologyreview.com/view/541311/the-false-science-of-cryonics/
Michael Hendricks is a neuroscientist and assistant professor of biology at McGill University.
vaccination research/reading
Vaccination is probably one of the hardest topics to have a rational discussion about. I have some reason to believe that the author of http://whyarethingsthisway.com/2014/10/23/the-cdc-and-cargo-cult-science/ is someone interested in finding the truth, not in winning for a side - at the very least, I'd like to help him when he says this:
I genuinely don’t want to do Cargo Cult Science so if anybody reading this knows of any citations to studies looking at the long term effects of vaccines and finding them benign or beneficial, please, be sure to post them in the comments.
I'm getting started on reading the actual papers, but I'm hoping this finds someone who's already done the work and wants to go post it on his site, or if not, someone else who's interested in looking through papers with me - I do better at this kind of work with social support.
Typical Sneer Fallacy
I like going to see movies with my friends. This doesn't require much elaboration. What might require elaboration is that I continue to go see movies with my friends despite the radically different ways in which my friends happen to enjoy watching movies. I'll separate these movie-watching philosophies into a few broad and not necessarily all-encompassing categories (you probably fall into more than one of them, as you'll see!):
(a): Movie watching for what was done right. The mantra here is "There are no bad movies." or "That was so bad it was good." Every movie has something redeeming about it, or it's at least interesting to try and figure out what that interesting and/or good thing might be. This is the way that I watch movies, most of the time (say 70%).
(b): Movie watching for entertainment. Mantra: "That was fun!". Critical analysis of the movie does not provide any enjoyment. The movie either succeeds in 'entertaining' or it fails. This is the way that I watch movies probably 15% of the time.
(c): Movie watching for what was done wrong. Mantra: "That movie was terrible." The only enjoyment derived from the movie-watching comes from tearing the film apart at its roots--common conversation pieces include discussion of plot inconsistencies, identification of poor directing/cinematography/etc., and even alternative options for what could have 'fixed' the film, to the extent that the film could even be said to be 'fixable'. I do this about 12% of the time.
(d): Sneer. Mantra: "Have you played the drinking game?". Vocal, public, moderately-drunken dog-piling of a film's flaws are the only way a movie can be enjoyed. There's not really any thought put into the critical analysis. The movie-watching is more an excuse to be rambunctious with a group of friends than it is to actually watch a movie. I do this, conservatively, 3% of the time.
What's worth stressing here is that these are avenues of enjoyment. Even when a (c) person watches a 'bad' movie, they enjoy it to the extent that they can talk at length about what was wrong with the movie. With the exception of the Sneer category, none of these sorts of critical analysis are done out of any sort of vindictiveness, particularly and especially (c).
So, like I said, I'm mostly an (a) person. I have friends that are (a) people, (b) people, (c) people, and even (d) people (where being a (_) person means watching movies with that philosophy more than 70% of the time).
This can generate a certain amount of friction. Especially when you really enjoy a movie, and your friend starts shitting all over it.
Or at least, that's what it feels like from the inside! Because you might have really enjoyed a movie because you thought it was particularly well-shot, or it evoked a certain tone really well, but here comes your friend who thought the dialogue was dumb, boring, and poorly written. Fundamentally, you and your friend are watching the movie for different reasons. So when you go to a movie with 6 people who are exclusively (c), category (c) can start looking a whole lot like category (d) when you're an (a) or (b) person.
And that's no fun, because (d) people aren't really charitable at all. It can be easy to translate in one's mind the criticism "That movie was dumb" into "You are dumb for thinking that movie wasn't dumb". Sometimes the translation is even true! Sneer Culture is a thing that exists, and while its connection to my 'Sneer' category above is tenuous, my word choice is intentional. There isn't anything wrong with enjoying movies via (d), but because humans are, well, human, a sneer culture can bloom around this sort of philosophy.
Being able to identify sneer cultures for what they are is valuable. Let's make up a fancy name for misidentifying sneer culture, because the rationalist community seems to really like snazzy names for things:
Typical Sneer Fallacy: When you ignore or are offended by criticism because you've mistakenly identified it as coming purely from sneer. In reality, the criticism was genuine and actually true, to the extent that it represents someone's sincere beliefs, and is not simply from a place of malice.
This is the point in the article where I make a really strained analogy between the different ways in which people enjoy movies, and how Eliezer has pretty extravagantly committed the Typical Sneer Fallacy in this reddit thread.
Some background for everyone that doesn't follow the rationalist and rationalist-adjacent tumblr-sphere: su3su2u1, a former physicist, now data scientist, has a pretty infamous series of reviews of HPMOR. These reviews are not exactly kind. Charitably, I suspect this is because su3su2u1 is a (c) kind of person, or at least, that's the level at which he chose to interact with HPMOR. For disclosure, I definitely (a)-ed my way through HPMOR.
su3su2u1 makes quite a few science criticisms of Eliezer. Eliezer doesn't really take these criticisms seriously, and explicitly calls them "fake". Then, multiple physicists come out of the woodwork to tell Eliezer he is wrong concerning a particular one involving energy conservation and quantum mechanics (I am also a physicist, and su3su2u1's criticism is, in fact, correct. If you actually care about the content of the physics issue, I'd be glad to get into it in the comments. It doesn't really matter, except insofar as this is not the first time Eliezer's discussions of quantum mechanics have gotten him into trouble) (Note to Eliezer: you probably shouldn't pick physics fights with the guy whose name is the symmetry of the standard model Lagrangian unless you really know what you're talking about (yeah yeah, appeal to authority, I know)).
I don't really want to make this post about stupid reddit and tumblr drama. I promise. But I think the issue was rather succinctly summarized, if uncharitably, in a tumblr post by nostalgebraist.
The Typical Sneer Fallacy is scary because it means your own ideological immune system isn't functioning correctly. It means that, at least a little bit, you've lost the ability to determine what sincere criticism actually looks like. Worse, not only will you not recognize it, you'll also misinterpret the criticism as a personal attack. And this isn't singular to dumb internet fights.
Further, dealing with criticism is hard. It's so easy to write off criticism as insincere if it means getting to avoid actually grappling with the content of that criticism: You're red tribe, and the blue tribe doesn't know what it's talking about. Why would you listen to anything they have to say? All the blues ever do is sneer at you. They're a sneer culture. They just want to put you down. They want to put all the reds down.
But the world isn't always that simple. We can do better than that.