I am afraid of the anglerfish. Maybe this is why the comments on my blog tend to be so consistently good.

Recently, a friend was telling me about the marketing strategy for a project of theirs. They favored growth, in a way that I was worried would destroy value. I struggled to articulate my threat model, until I hit upon the metaphor of that old haunter of my dreamscape, the anglerfish.

I am sure that something like the general process I'm describing is common and important. I am not sure at all of the details, but I am going to try to state this strongly and vividly, with little hedging, for the sake of clarity.

The Anglerfish

The anglerfish lives in waters too far beneath the surface of the sea for sunlight to reach. It dangles a luminescent lure in front of itself. This resembles a fishing angle, whence comes its name. This lure attracts animals of the deep sea, which approach the anglerfish, and are consequently eaten by it.

Why - in the deep sea where no sunlight can reach - would evolution favor animals that are attracted to light? I do not know exactly what strategy the anglerfish's prey are following, but we do know some general things about why an animal might be attracted to light.

The secondary uses of such a strategy are clear enough. Once some deep-sea-dwellers emit light, larger animals that prey on them might do better if attracted to light sources. But that presupposes the existence of other animals that already emit light, for other reasons.

What are the primary uses of light? In a region where no other creatures emit light, here are some reasons why one might begin to do so:

  • To illuminate potential prey.
  • As a ward, to warn potential competitors that one is prepared to defend territory.
  • To attract complementary animals, either as symbiotes, or as mates.

In all these cases, the purpose of the light is to reveal information. In all but the first case, it is to share information with others, in order to enable cooperation. Perhaps the purest version of this is the mating display. We can see this in the firefly, which uses its distinctive patterns of luminescent flashes to find mates.

The firefly has some information. It activates a beacon, in order to find someone with complementary information, in order to engage in productive exchange. Likewise for deep-sea fish who mate or find symbiotes by means of a light display.

Some fireflies mimic the mating flashes of other fireflies, in order to attract them, not for exchange of genetic information, but in order to extract calories.

The predation strategy of the anglerfish, properly generalized, is a strategy that preys on all information-seeking behavior, whether competitive or cooperative. The anglerfish does not need to know that the animal that just swam in front of it is evaluating its mating display and finds it wanting, or is looking for a very different creature as a symbiote. So long as there are animals seeking illumination, the anglerfish only cares that some calories and raw materials have been brought within reach of a single burst of swimming and the clamping shut of its great maw.

Typically, a predator has to be more sophisticated than the creatures on which it preys. But the anglerfish follows a simple, information-poor strategy that preys on sophisticated, information-rich ones. It doesn't have to be a particularly skilled mimic - it simply preys on the fact that creatures seeking information will move towards beacons.

Subcultures as ecosystems

In David Chapman’s geeks, MOPs, and sociopaths, “geeks” are the originators of subcultures. They are persons of refined taste and discernment. They found subcultures by discovering or creating something they believe to be of intrinsic value. The originators of this information share it with others, and the first to respond enthusiastically will be other geeks, who can tell that the content of the message is valuable.

Eventually, enough geeks congregate together, and the thing they are creating together becomes valuable enough, that people without the power to independently discern the source of value can tell that value is being created. These Chapman calls “Members Of the Public”, or “MOPs”. Geeks map roughly onto Aellagirl's possums, MOPs onto otters.

In the right ratios, MOPs and Geeks are symbiotes. The MOPs enjoy the benefits of the thing the geeks created, and are generally happy to share their social capital, including money, with the geeks.

But from another perspective, MOPs are an exploitable resource, which the geeks have gathered in one place but are neither efficiently exploiting, nor effectively defending. This attracts people following a strategy of preying on such clusters of MOPs. These predators, whom Chapman names “sociopaths,” do not care about the idiosyncratic value the geeks are busy creating. What they do care about is the generic resources - attention, money, volunteer hours, social proof - that the MOPs provide.

Iterated improvement

To summarize the above: Geeks build beacons. Initially these beacons are not very bright, but they are sending out high-information signals which attract other geeks looking for that information. Eventually, enough geeks are contributing to the beacons that they become bright enough to attract MOPs.

Chapman’s sociopaths can’t just waltz in and propose that everyone give them things for nothing. After all, everyone in their feeding ground was attracted to it by something about it, something that distinguishes it from other places in the culture. They need to look like a part of the scene. So they start by imitating, or proposing refinements to, the beacons the geeks have erected.

The geeks are only putting up a very particular kind of beacon. There are a lot of constraints on exactly what sort of signal they are willing to send. This is the same as saying that their beacons have a lot of information content. From the geeks’ perspective, the exchange of this information is the whole point of setting up beacons, and the presence of friendly MOPs is merely a happy side effect.

But from the sociopaths’ perspective, these information-bearing constraints are mere shibboleths. Chapman’s sociopaths will follow whatever rules they have to in order to pass as contributors to the subculture, but they won’t put independent effort into understanding why these rules are the ones they have to follow. Instead, their contribution is to iteratively improve the beacons’ ability to attract prey.

As sociopaths test out variations in their beacons, they will learn which variants are best at attracting people, by means of trial and error. Three things about this will reduce the relative proportion of geeks in the subculture, and therefore the geeks’ influence. First, since MOPs are less sensitive to fine variations in signal than geeks are, random mutations in beacon design are more likely to attract more MOPs than more geeks. Second, as the overall process becomes better at attracting MOPs, more sociopaths will notice that it is a promising feeding ground.

Finally, many changes that are neutral or beneficial for attracting MOPs, will, from the geeks’ perspective, seem like the introduction of errors. This will make the signal less attractive to geeks who have not already invested in the subculture.
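The selection pressure described above can be sketched as a toy hill-climbing simulation. Everything in it is an illustrative assumption (the audience sizes, each group's sensitivity, the mutation step are all made up for the sketch), but it shows the mechanism: if beacon variants are kept purely for crowd size, brightness climbs while fidelity merely drifts, and the geeks' share of the audience collapses.

```python
import random

def simulate(steps=2000, seed=0):
    """Toy model: a beacon has two traits, fidelity (what geeks care
    about) and brightness (what MOPs respond to)."""
    rng = random.Random(seed)
    fidelity, brightness = 1.0, 1.0

    def audience(f, b):
        # Assumed audience model: a small pool of geeks responds to
        # fidelity; a much larger pool of MOPs responds to brightness.
        geeks = 10 * f
        mops = 1000 * b
        return geeks, mops

    for _ in range(steps):
        # Propose a small random tweak to the beacon.
        f2 = max(0.0, fidelity + rng.gauss(0, 0.05))
        b2 = max(0.0, brightness + rng.gauss(0, 0.05))
        g1, m1 = audience(fidelity, brightness)
        g2, m2 = audience(f2, b2)
        # Keep whichever variant draws the bigger total crowd --
        # the sociopath's selection criterion.
        if g2 + m2 > g1 + m1:
            fidelity, brightness = f2, b2

    geeks, mops = audience(fidelity, brightness)
    return fidelity, brightness, geeks / (geeks + mops)
```

Because the MOP pool dwarfs the geek pool, the acceptance test is effectively "did brightness go up?", and fidelity is just along for the ride: it random-walks wherever the brightness mutations happen to drag it, which is exactly the drift that geeks experience as the introduction of errors.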

What does this process look like from the geeks’ perspective?

At first, people are coming into the geeks' subculture and trying to contribute to it. These newcomers are putting a lot of energy into creating new content, but from time to time they introduce perplexing errors. Still, they are getting a lot of people interested in this wonderful information the geeks have created, so the geeks are not inclined to complain. The MOPs basically trust the geeks' implied endorsement, and accept the new contributors on the same footing as the old ones.

But now there are two forces at play affecting the content of the signals being sent. One is a force correcting errors - the geeks’ desire to preserve, transmit, and develop the original information-content of the signal. The other force introduces errors: the sociopaths’ desire to attract more MOPs. When the second force becomes stronger than the first, the sociopaths are now the dominant faction, and able to coordinate to suppress geek attempts to correct errors that make the message more popular.

At first, the MOPs’ acceptance of the sociopaths depended in part on the geeks’ tacit endorsement. But once a sufficiently powerful faction of sociopaths has been given social proof, they can wield the force of disendorsement against the geeks. The only meaningful constraint is that MOPs don’t like conflict, so the sociopaths will want to avoid escalating to a point where the conflict becomes overt.

From the sociopaths’ perspective, the geeks were inexplicably donating their time and energy to discovering a new signal to broadcast, that would attract a pool of MOPs to feed on. But the geeks were - again incomprehensibly - neither exploiting nor defending that resource. The sociopath strategy invests in general understanding of social dynamics, but does not need to understand the specific content of what the geeks are trying to do. The sociopath need only know that some attention, money, volunteer hours, and social proof have been brought within reach of a competent marketing and sales effort.

From the sociopaths' perspective, they are not introducing errors - they are correcting them.

The paradigmatic predator is sufficiently smarter than its targets to anticipate and manipulate their behavior. But Chapman’s sociopaths follow a simple, information-poor strategy that preys on sophisticated, information-rich ones. This strategy doesn’t have to understand the signal as well as the geeks do - the geeks will help it pass their tests. It simply iterates empirically towards shining the most attractive beacon it can, of a kind that has already been selected to attract its prey.

The predation strategy of Chapman’s sociopath is a strategy that preys on all information-seeking behavior, whether competitive or cooperative.

What is to be done?

I've used Chapman's terms because they're reasonably widely used jargon - Chapman borrowed the term "sociopath" from Venkatesh Rao's quite dysphemistic Gervais Principle series - but it's important to remember that on this model, sociopaths are not necessarily universally bad or mean people. They just don't care about your project. This is fine. You don't care about most people's projects. Likewise, most people don't care about yours. The problem is when you let those people run your project.

Humanity has a long tradition of exploiting the information-exchange strategies of other creatures for our own use. I'm writing this from the house of a friend who has chickens in her backyard. The chickens want to lay eggs in order to make more chickens - but to my friend, the eggs are just food.

Nor is it always bad to be food to this sort of strategy. To a publicly traded corporation like Starbucks, I am little but a source of revenue and reputation (which ultimately matters because it attracts more revenue sources). This is fine, because I just want my coffee. I am not extending trust for Starbucks to have my best interests at heart.

As far as Chapman's sociopaths know, they are just doing what one does to beacons - trying to make them more pleasing to more people. They are cooperating with the geeks as sincerely as they know how - as sincerely as they believe to be possible. In many cases they simply don’t understand that the original signal had value. There's little point in being indignant about this. Just don't put them in charge.

Likewise, the term MOP comes with a little more sneering than I think is appropriate. In general, if you are contemptuous of people for trusting you, something is going wrong.

Nor is indignation the right response to people who showed up and tried to participate in good faith. MOPs are more or less defined by not knowing what is going on with respect to your subculture, and while in some sense they might be culpable for that, they are the vast majority of human beings - the members of the public - and that is simply not a reasonable intervention point to target. It's hard to know what's going on. If it weren't, the world would look very different.

No, the people who need to do something about the corruption of a message are the people who care the most about that message: the geeks. In subcultures following this lifecycle, the geeks have committed a key sin: trying to get something for nothing, by pretending to be more popular than they are.

People playing sociopath strategies gain a foothold in subcultures because they bring in more resources, get more people involved, attract attention from respectable people, and raise money. They can do all this because they are paying attention to how attractive their beacons are, not to whether the beacons are correct (from a geek perspective).

The obvious strategy to counter this is to speak up early and often when errors are being introduced. It is not a sin to be error-tolerant, in the sense of not immediately expelling people for making errors. But it is always a sin, in an otherwise-cooperative community, to suppress the calling-out of errors, in order to avoid making a scene, scaring off the MOPs, harming morale and momentum. If you are a geek in that sort of subculture, the MOPs are relying on your implied endorsement of the other content-creators. If you remain silent in the face of error, then you are betraying this trust. There is no additional error-correction system that will save you - you were supposed to be the error-correction system.

If you and your collaborators diligently follow this practice, then this will enable the creation of common knowledge when someone is reliably introducing errors, and either failing to correct them or making the minimum possible correction. You will have shared knowledge of track records - who is introducing information, and who is destroying it with noise. It is only with this knowledge that you can begin to have actual community standards.

This is why I’ve been so outspoken about problems I see in Effective Altruism - and plan to write on problems I see in the Rationality community. A few years ago, my relation to these things was something more like that of a MOP. I got excited about their ideas, trusted the people in charge to be doing what they said they were doing, and tried to reciprocate by bringing more resources like attention and money their way.

To their great credit, these overlapping communities were helpful in waking me up to my own sense of judgment and aesthetics. This helped me see what was going on a little more clearly.

I don’t have a working alternative up and running, but I feel a responsibility to speak up loudly and clearly enough that the me of three years ago would have noticed that something smelled off.

I have to do this - I owe it to anyone who trusts my tacit endorsement by association - or anyone who trusted my more overt endorsements in the past. And to myself; I care about the content, not the attractiveness of the beacon.

Finally, some advice for geeks, founders of subcultures, constructors of beacons. Make your beacon as dim as you can get away with while still transmitting the signal to those who need to see it. Attracting attention is a cost. It is not just a cost to others; it increases the overhead cost you pay, of defending this resource against predatory strategies. If you have more followers, attention, money, than you know how to use right now - then either your beacon budget is unnecessarily high, or you are already being eaten.

Don't take more than you can use. Who hoards food, finds flies.

On comment sections

It's puzzled me for a while, why my personal blog - which barely gets comments at all - gets comments of such a high typical quality. I'd imagined that to get really good comments, I'd have to put up with a lot of mediocre ones and some quite bad ones. But I don't.

A lot of the writing advice I've received has basically been telling me to manage the reader's expectations. To deliver an entertainment experience. To tell a story, a narrative. I've found this prospect vaguely offensive, but haven't had words for what about it seemed so bad.

But, when I look at the comment sections on more popular blogs, they are not consistently good.

I have cross-posted much of my writing to LessWrong. There, I get some readers for free, initially attracted by the lure of Eliezer Yudkowsky's engagingly written sequences of blog posts on rationality, or the even more engagingly written Harry Potter and the Methods of Rationality. This is valuable, but I don't have the same experience I get on my own blog, where almost every comment that is not actually spam* is one that I am very glad to have read.

This is true even when I've written on highly politicized topics, such as the sexual politics of the Trump election.

Part of why I don't feel like making my writing more like Eliezer Yudkowsky's Sequences or Scott Alexander's Slate Star Codex might be that I am reluctant to invite the kinds of low-quality engagement those writings get, mixed in with the high-quality stuff. Scott and Eliezer have had to ban people. I haven't. I'd actually be happy if my readers lowered the quality and relevance threshold for commenting somewhat.

Of course, sometimes it's worth trading off average quality for quantity. I might do so in the future - the badness of my writing is not entirely intentional. I'm not saying that Scott and Eliezer are wrong - just that my intuitions were correctly noticing a cost to doing things their way.

If I do make that trade, I'll have to do more work such as moderating comments, to maintain the quality of what is right now a beautiful unwalled garden. But for now, no one here is just along for the entertaining ride - I don't think anyone could get excited by my blog for the "quality of the writing." If someone's excited by one of my posts, it's not because I leaned hard on their generic "excitement" buttons. It pretty much has to be because I explained well a thing they were puzzled by, or made an argument that they, in their own autonomous judgment, find relevant and interesting.

I'm not sending out the brightest beacon - just a beacon strong enough to send a high-fidelity signal.


* In the sense of very-low-quality automated advertising pretending to be personal communication, not in the sense of the foodstuff

Comments

I appreciate your outspokenness on these things. Writing like yours on EA has made me pause after having been resigned for a long time that these communities weren't (and maybe never were) growing towards my idealizations of them. I don't know how much we want the same things, and anyway I'm perhaps too much of an outsider with other commitments these days to make too much noise, but I'll continue to look forward to your posting.

Taking up your framework, I'm not sure how much of what I see is predatory behavior by sociopaths (though there is that, malicious or otherwise) versus ordinary selection pressure in a loose coalition of different sorts of geeks, some of whom may think they're the same sort. Either way, it seems like I've connected with more like-minded people by dimming my beacon even into obscurantism than otherwise.

The geeks, ideally, prefer to keep their beacon emitting on a very narrow, specific frequency band only – they’d prefer to keep all others out besides genuine geeks, and therefore their signal will need to be practically invisible to everyone else. This is kind of how I imagine the early proto-rationalist subcultures, you basically have to know exactly where to look to be able to find them, and already possess a large fraction of the required background knowledge.

It would have continued like that, if it wasn’t for the fact that they eventually needed people with the skills to attract attention, resources, and who had the charisma to motivate a lot of work being done. In other words, they needed sociopaths – they didn’t need MOPs so much at first, but they probably thought it would be easier to attract the sociopaths by attracting lots of MOPs first. The sociopaths feed off of the attention and approval of MOPs, therefore you need the MOPs to get the sociopaths. It is unlikely the sociopaths will be attracted to the original, pure-geek subculture at first.

But I think the key here is that there are no good reasons for the geeks to start emitting on a wider frequency band unless they desperately need help. But generally, when they first start to do this, it is sort of done haphazardly and without real knowledge of the full consequences. That’s not to say it’s done with negligence, it’s just usually extremely difficult to make good predictions about the full effects, since we’re operating at much higher complexity here.

I don’t see a lot of great solutions around this because, as you mention, the sociopaths will invariably seek control. And if you don’t give them control, they will leave, or worse, demolish your reputation, so that you aren’t able to seek support from anyone ever again.

It seems only reasonable then to introduce unbreakable rules. The downside of course is that lots of rationalists are against this sort of thing by their natures - even I feel weird suggesting it - and they prefer the freedom from social norms and expectations, and are more partial to consequentialist ways of behavior. MOPs, however, probably operate more comfortably within rule-systems, which forces the sociopaths to abide by them as well, since it’s much easier for the MOPs to notice and react when a rule has been broken.

You can’t include a rule like “The most trustworthy person is so-and-so” because then the sociopaths will simply try to tarnish that person’s reputation. And you can’t include a rule like “All of our content-creation and decision-making must be accomplished anonymously”, because even though it prevents sociopaths from taking control, they gain nothing from membership in the group. The sociopaths must be able to gain what they want from membership in the group, but they also have to be beholden to the operating protocols of said group without being able to manipulate those protocols. The only way to do that, I think, is to make the protocols very explicit and clear, and have mechanisms that make it very obvious when they are being broken.

I think you're underrating MOPs here and overrating sociopaths, for the same reason people overrate Ra.

Would you mind fleshing this out a bit more? I feel like when you say "overrate Ra" this could be meant in more than one sense - i.e., to overvalue social norms or institutions in general, or, in the specific sense of this discussion, to regard sociopaths as having more inherent worth to an institution or group than they truly have.

Happy to try!

I mean specifically those things, not social institutions generally. Ra and Sociopaths are both optimized for getting people to wave a flag that says X, but not necessarily for getting people to do X.

Less confident about this, but I think a lot of the perceived value of Sociopaths is just that they're willing to give MOPs instructions, when Geeks are confused and trying to treat the MOPs like defective Geeks instead of their own thing. (I am totally guilty of this.)

The interesting question to me is, are sociopaths ever useful to have or are they inherently destructive and malign? You used the analogy of an anglerfish, and I don't think there are many better comparisons you would want to make if your goal is to show something at the furthest (negative) edge of what we consider aesthetically pleasing - one of the most obviously hostile-looking creatures on the planet. To me that certainly seems intentional.

There are sort of three definitions of "sociopath" that get used here, and they often overlap, or perhaps are sort of assumed to be the same. One is the traditional definition of sociopath - someone who basically lacks morals and empathy - and the others are some combination of the Gervais Principle Sociopath and the Chapman Sociopath. The Gervais sociopath seems to be someone who actually is capable of doing good things sometimes, because they are the only ones with both the creative vision and the charisma to organize and convince a bunch of people to do stuff, but they often lie and cheat and create delusions in order to get there. The Chapman kind is similar, but is also someone who comes into social groups just to prey on people and feed their own ego or whatever it is they really want. And they usually cause destruction, which is different from the kind of sociopaths that create value to society or an economy or something like that.

But with respect to community building, or Effective Altruism, the question is, given that your group will invariably eventually become composed of various personality types, some geeks, MOPs, and sociopaths, is your primary goal to filter this pool down - let's say by making it hard for sociopaths to find their way in - or just by making it so that everyone's objectives are in some way aligned with the overarching goals, without removing anyone?

By the way, this question does not at all apply to your choice of how to write your posts. I think if you want to write in a way that acts as a high-fidelity signal but not be the brightest beacon possible, that makes perfect sense for personal writings.

The distinction between these types of "sociopath" is quite important, thanks for making it explicit.

I think the Wikipedia article on Entryism might be somewhat relevant. This can be defended against by having a weaker signal as you propose.

On the other hand, there is also the issue of people who popularise and simplify concepts. I suspect that this is the reason why so many people were unhappy about Intentional Insights. I suppose the debate comes down to whether it is better for people to have a partial understanding of an idea or whether this is counter-productive because then people are starting from a misconception, rather than nothing.

My default behaviour is to work by myself. I figure if I can find something really interesting then I can get other geeks interested. I think this causes my writing skills to atrophy somewhat. I occasionally try to shine a beacon, so they don't get too bad and to see if anyone else is going the same direction.