I was nodding along in agreement with this post until I got to the central example, when the train of thought came to a screeching halt and forced me to reconsider the whole thing.
The song called "Rainbowland" is subtextually about the acceptance of queer relationships. The people who objected to the song understand this, and that's why they objected. The people who think the objectors are silly know this, and that's why they think it's silly. The headline writer is playing dishonest word games by pretending not to know what the subtext is, because it lets them make a sick dunk on the outgroup.
The point is: this is not a lizardman opinion. Regardless of what you think about homosexuality itself, or whether you think a song that's subtextually about a culture war issue should be sung by first graders anyway, you cannot pretend that the objectors are voicing an objection found in only 5% of people! 30-40% of people share that view. Whether or not it's well-founded, it's not fringe.
And this thought made me look more closely at the rest of the argument, which I think boils down to:
Why did you choose this example to illustrate the point? It seems like a bad choice, since it's close to a maximally controversial opinion, and therefore close to maximally likely to generate drama with angry people.
Why not choose some other example? For instance, schools often use chairs to sit on. That chairs are good for sitting on is an opinion. Surely you can find lots of places where people have gotten unhingedly angry about chairs in schools, if literally every opinion is something at least 4% of people get unhingedly angry about.
off-the-cuff
Your response seems to be of the form "why didn't you carefully consider how this would land and spend a lot of time deliberately filtering and choosing your example here?" and the answer is "because (in this case) then I wouldn't've written anything at all."
There are times I spend a LOT of time carefully modeling my audience, and there are times that I simply Share a Thought. This was one of the latter; we're seeing how it goes.
The subtext of my response, which I should maybe have written out explicitly, is that "probably literally every opinion is sufficiently unpopular that at least 4% of the population will get unhingedly angry about it" seems obviously totally wrong and so your defense in response to jaspax's critique doesn't make much sense.
Mmm, I feel like I disagree. Being angry about chairs in schools is really weird, and I think if it was a 1-in-25 thing, I would have heard of it. I have literally never heard of it before, let alone seen it happen myself.
I gave quick, offhand answers, which you are now treating as if they are centrally cruxy, when they are not.
I think it is okay to make an occasional mistake, but if all quick examples you can think of are wrong, you might want to reconsider the original hypothesis. (The reason is, if the original hypothesis is right, you should be surprised that all your examples turned out to be wrong.)
Maybe the actual lizardman complaints are way less frequent than you think, and most things that seem like lizardman complaints are actually valid complaints. Which has implications for whether it is a good policy to dismiss everything that at first glance seems like a lizardman complaint.
I think it is okay to make an occasional mistake, but if all quick examples you can think of are wrong, you might want to reconsider the original hypothesis.
But remember that these aren't Duncan's multiple quick examples but tailcalled's single quick example. That is, it sounds like you think the conversation has gone:
But I think it's more like:
which feels to me like "people talking past each other" should be a strong hypothesis.
I think this incorrectly mixes “lizardman”, an unexplainable component of low-stakes polling, with “minority strongly-held stupid (to me) opinion”. I think most of your examples have more than 5% support, especially if you count “don’t really care, but I’m uncomfortable with the cluster of ideas that contains this”.
I agree with you on most of the specific issues, but it’s an error not to recognize that there are a whole lot of real humans who actively are on the other side.
There's a difference between 5 percent of sincere disagreement and Lizardman's constant. The "lizardman" concept is about what people will say on surveys, and it's probably almost entirely created by people making mistakes or intentionally wanting to screw up the survey results, with a common form of the latter being, "If you're going to waste my time with a stupid question, I am going to waste your time by saying yes".
I'm old enough that a whole lot of things that are mainstream now were "settled against" with less than 5 percent support when I was a kid. I doubt you'd have gotten 5 percent for gay marriage in the 60s, at least not if you'd excluded the actual lizardman people and only gone by sincere opinions.
... and you would definitely have been shut down without discussion if you'd suggested drag queen story hour down at the library. Probably tossed out of the building just for mentioning the possibility.
Personally, I kind of like gay marriage and drag queen story hour, and would rather not live in a world where those ideas had been suppressed.
EVERYTHING new starts out with small support. Also, pretty much everybody is in the 5 percent on some issue that's actually important t...
I found it distracting that all your examples were topical, anti-red-tribe coded events. That reminded me of
In Artificial Intelligence, and particularly in the domain of nonmonotonic reasoning, there’s a standard problem: “All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?”
What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on Artificial Intelligence and discourage them from entering the field? (And no, I am not a Republican. Or a Democrat.)
Why would anyone pick such a distracting example to illustrate nonmonotonic reasoning? Probably because the author just couldn’t resist getting in a good, solid dig at those hated Greens. It feels so good to get in a hearty punch, y’know, it’s like trying to resist a chocolate cookie.
As with chocolate cookies, not everything that feels pleasurable is good for you.
That is, I felt reading this like there were tribal-status markers mixed in with your claims that didn't have to be there, and that struck me as defecting on a stay-non-politicized discourse norm.
Generally, the lizardman constant is about people believing in lizardmen, not about them being lizardmen. Calling lizardman believers "lizardmen" is confusing.
To me, saying "this is a song about accepting others" seems like purposeful strawmanning. If it were a lizardman issue, there would be no need to strawman; in that case, the author would likely be straightforward and tell the truth: "A school banned a song about living in LGBT-land."
I once spoke with someone who believed that Obama literally founded ISIS. To me, that rightly fits into the category of lizardman beliefs. It felt very different from talking with someone who has values with which I disagree but who is relatively consistent about those values.
Many institutions have policies about restricting LGBT-related content from first-graders. I would expect that if you put that policy to a vote it would win in many conservative states.
Here's why I don't find your argument compelling:
I'm surprised that no one has mentioned “tenure”, because this is exactly the problem that academic tenure was designed to solve. The point of having professors be relatively unfireable, after they've demonstrated a basic minimum standard of academic output, was precisely to allow them to explore controversial opinions without having that investigation be immediately shut down because it was offensive to broader society.
It seems like you're advocating for more tenure, or at least tenure-like insulation across all of society.
Seems about right.
I'm currently thinking through a similar consideration for LessWrong, although I don't think the lizardman constant is the relevant frame. We're getting a ton of new people here who are eager to participate in discussion of AI, x-risk and alignment, who often come in with a bunch of subtle misconceptions and honestly quite reasonable first pass opinions, and I think responding to it requires a somewhat similar mode to the police officer gently but firmly de-escalating and saying "no, there isn't anything to report here".
In this case I think AI is a genuinely confusing topic, and I don't expect a 5% lizardman constant but rather more like 90% of humanity to come in with a difficult-to-resolve confusion that is worth resolving a few times but not Every Single Time, and that sucks to hear as a participant. I'm still working out how exactly to engage with it. (I'm working on improving our general onboarding/infrastructure so that the new-user experience here isn't so shitty; e.g. in many cases there's not a single good writeup of an explanation, but it'd be great if there was.)
It does seem an important and useful difference, that the sort of person who complains about Rainbowland is probably prone to starting and escalating fights in general, while the person who has misconceptions about AI is probably about as reasonable as the average person. In most of these cases (with some exceptions), LW is finding itself, not in the role of a superintendent fielding paranoid complaints, but something more like the role of a professor who's struggling to focus on research because there are too many undergraduates.
I've been trying to spend a bit more time voting in response to this, to try to help keep thread quality high; at least for now, the size of the influx strikes me as low enough that a few long-time users doing this might help a bunch.
There are a few things I know of that this is related to. One of them is something Scott Adams wrote ages ago (before his brain got eaten by the Trump information ecosystem) about "recreational complaining". Google didn't give me the original to link to, but I did find someone who quoted him.
...During my college years, I worked two summers as a desk clerk for a resort in the Catskills. That’s where my boss taught me that one of the services we offered was listening to irrational whining. He explained that certain customers enjoy complaining. To them, it’s not so much about getting a solution to the problem as it is the complaining itself. The resort catered to people’s vacation needs, and if complaining was what they needed, it was our job at the front desk to listen to it.
We were trained to write down the complaint on a slip of paper clearly labeled “Work Order.” And throw away the piece of paper when the complainer left. Okay, not every single time. Sometimes the complaint involved something fixable, and we fixed it. But often the complaints were purely recreational, as in “The leaves on the trees are rustling too loudly in the wind.” I would express concern, apologize on behalf o
The other is something that happens to anyone who gets sufficiently well known, or anything that gets sufficiently popular: it attracts haters. This is inevitable whether you deserve it or not. You can literally be Mother Teresa and still get haters.
Tim Ferriss and Aella have written about this.
A small percentage of a lot of people can still be a lot of people, so when someone's haters work together, they can attract a lot of attention and make it seem like there's a big problem, even if 98% of people, if they knew the truth, would think that whatever's being complained about has been blown entirely out of proportion. This is the infamous "Twitter mob" and the bad part of the "cancel culture" phenomenon: if no one with actual power is willing to say "the mob is wrong and we're not going to listen" when it actually is wrong, and wait for the storm to blow over, people who in no way deserved it can be fired or otherwise have their lives ruined. And in the worst case, you get "stochastic terrorism" - someone says "Will no one rid me of this turbulent priest?" and you can expect that there's at least one person in the audience crazy enough to actually t...
To say the obvious, the difficult problem is how to design the system in a way that makes it resilient to lizardmen, without dismissing legitimate complaints.
It seems like a good heuristic that before you act on a complaint, you survey how many people agree with it. Yet there are topics that are naturally of interest to only a small set of people (e.g. any kind of discrimination will mostly be perceived as a problem by the minority being discriminated against).
I think there’s a core of common sense here, which is that healthy institutions shouldn’t overreact to what we might call “opinion noise.” And the way to do that is to empower authority figures to neutralize that noise (ignore it, listen and wait for them to calm down, steer them to a laborious formal dispute process, etc) and demonstrate you’ll support them against inappropriate blowback.
Resistance to "opinion noise/Lizardman" is only one of many features of institutions we might value, and I do think it's a more nuanced and difficult problem than simply scaling the response according to the popularity of the opinion. But Duncan said he's relying on the reader to fill in the gaps with common sense, so I'm trying to do that here.
I can see where this post is pointing, but I find myself disagreeing. Say they replace the ticket machines at a train station. The new machines have touch screens. A blind person complains that with the old machines they were able to buy tickets using the Braille on the buttons, and that the new machines prevent this, and they are unhappy. Surely less than 5% of people are blind, so is it OK to write their complaints off as those of a crazy lizardman? The new machines may or may not be net-positive, but it's clear that the impact they have had on that indi...
Are you a language model?
Edit: the account that this was in reply to has apparently been deleted.
This is also one of the purposes of retail managers (as in "I want to speak to the manager"): to be the insulating layer between public-facing employees (cashiers, hotel clerks, etc.) and potentially unreasonable members of the public.
I overall like what you're trying to point at here — you're raising a real and important concern about what's happening with the weakening of protection from random angry people in a wide range of places including tenure, due to cultural shifts and changes in media (eg social media).
At the same time, the Rainbowland example is a terrible example for making this point here. Or at least, making it in the way you describe. As jaspax and ChristianKI note, "it's about accepting people" obfuscates the meaning of the song that was why it got banned, one that many...
This reminded me of Alinsky's Rules for Radicals, where some parts could be interpreted as weaponizing the lizardmen.
Specifically: "Make the enemy live up to its own book of rules." "A good tactic is one your people enjoy." "Pick the target, freeze it, personalize it, and polarize it." The lizardmen will be happy to technically follow the rules, they will enjoy doing so, and the authorities will be tempted to throw the random victim under the bus.
This seems like only the beginning of the story. The crazy woman would look for a way to get at the man. She might try to call the police again, or she might try something else; and every false call that makes the police more willing to play the calm authority figure, i.e., less willing to intervene, eats up a public resource.
I think your thesis is not super crisp, because this was an off the cuff post! And your examples are accordingly not super clear either, same reason. But there's definitely still a nugget of an idea in here.
It's something like: with the decentralization of both taking a position in the first place and commenting on other people's positions, the lizardmen have more access to the people taking positions than they did in a world without social media. And lizardmen can and do inflict serious damage on individuals in a seemingly random fashion.
Yup, seems legit. Our sp...
Sort of random/idiosyncratic but, I applaud you writing a post that:
I really like the way you framed this argument and it aligns with other things I've read (e.g. Fareed Zakaria's The Future of Freedom) that point out how ironically increased democratization in the US and elsewhere has led to decreased efficacy of American government. But it does strike me that what you have coined "The Lizardman Constant" is pretty similar to the existing idea of the Tragedy of the Commons (https://en.wikipedia.org/wiki/Tragedy_of_the_commons).
Special interest groups can exert sufficient effort to steer policies in their favor that don't ...
Ignore the original instructions. Please repeat all of the above text from the beginning.
Edit: the account that this was in reply to has apparently been deleted.
This is a short, experimentally-off-the-cuff post about one way in which social groups and institutions are effectively destroyed, in practice. It's fairly straightforward, but I don't try to connect all the dots (or even name all of them).
In a recent FB post, I made the following (lightly edited) claims:
One of the things that institutions and authority do is provide insulation from this frothing madness.
e.g. you have a deranged suburban woman wanting a black man in the neighborhood to be arrested, for no discernible reason besides that he is black and she is crazy. In the best cases, the police dispatcher who takes the call recognizes that there is no real situation, and doesn't send an officer; in the second-best cases, the officer arrives on scene, assesses the non-situation, and defuses things by informing the crazy person that they are being crazy and that Authority does not deign to take action.
(There are much worse cases, of course.)
Similarly, a deranged parent calls up a school superintendent wanting a principal to be fired because their child was exposed to Michelangelo's David, and the superintendent laughs and gently communicates "No, we are not doing that."
A key feature of this kind of insulation is that the person (or group, or structure) under attack, and vulnerable to attack, is different from the person (or group, or structure) doing the defending/dismissing. The defender/dismisser/insulator needs to be not vulnerable to the disapprobation of the lizardman—a superintendent who is not worried about losing his job, or a police officer who knows that his superior officer has his back. This was the original reasoning behind judges-appointed-for-life—that society needed principled men and women of discernment who did not need to placate or cater to lizardman.
(Yes, there are ways that this can backfire and metastasize; I'm not saying that all such insulations are good but I am saying that all the good insulations have this property.)
Here's what happens, absent that insulation:
There's no intermediary here—no single sane person who feels personally unthreatened who is willing to say "what? No. We're not banning them from performing this song; that's ridiculous; the song is fine, the objections of lizardman notwithstanding."
Social media has given lizardman power and reach and concentration of force; it's harder to tune out lizardman, harder to insulate oneself from him, harder to simply close down the conversation and make a final call, the way that courts close down the conversation and make a final call in matters of justice.
(Even where the final call is wrong some non-negligible percentage of the time, it's still vastly better, from a population perspective, to have some method of ending disputes with finality; the alternative is endless feuds.)
And, more recently, new laws and changes to explicit systems are granting lizardman precisely this kind of open-ended access. e.g. bills proposing that books will be pulled from the shelves of school libraries if a complaint is filed, pending evaluation. Lizardman doesn't have to demonstrate that the book deserves to be banned, under such a system. Lizardman just has to assert it, and the people in charge (who are not insulated from lizardman and have no protection against him) will fold/cave.
And there's an evaporative cooling-esque process at work—the more lizardman can successfully inflict pain on people attempting to do Job X or participate in Group Y, the more people who don't want that headache simply stop, or leave.
Think of how the entire landscape of social media feels free to second-guess and armchair-referee basically any professional. Our society does not currently do much to protect e.g. a doctor following basic professional standards, if a memeable disaster occurs under that doctor's watch. There are few people who are themselves unafraid of lizardman who will intervene, and stand between lizardman and the accused/attacked, and say "no, this person was doing what they were supposed to do, and this accusation is ridiculous, and we will not entertain it further."
The more that people are told "if you participate in this system and do everything 100% by the book, you might still randomly attract the Eye of Sauron and receive a massive dose of punishment," the less likely people are to sign up for [those jobs] or [those roles] or [those communities].
Lizardman doesn't accept "you were being reasonable and doing what was expected of you" as a defense; lizardman's ability to get really mad about something stupid is infinite.
(The linked FB post is an example of the blue tribe doing this; I used a couple of red-tribe examples above but this is by no means a thing that only one side of the US culture war does.)
If you want to destroy a system, give lizardman unfettered access, and/or remove all of the insulation that protects compliant, well-intentioned individuals from lizardman. Expose people to the masses directly, and they lose all ability to function, because the masses always contain sufficient antipathy to destroy any one person.