
Dragon Army: Theory & Charter (30min read)

Post author: Duncan_Sabien 25 May 2017 09:07PM

Author's note: This IS a rationality post (specifically, theorizing on group rationality and autocracy/authoritarianism), but the content is quite cunningly disguised beneath a lot of meandering about the surface details of a group house charter.  If you're not at least hypothetically interested in reading about the workings of an unusual group house full of rationalists in Berkeley, you can stop here.  


Section 0 of 3: Preamble

Purpose of post:  Threefold.  First, a lot of rationalists live in group houses, and I believe I have some interesting models and perspectives, and I want to make my thinking available to anyone else who's interested in skimming through it for Things To Steal.  Second, since my initial proposal to found a house, I've noticed a significant amount of well-meaning pushback and concern à la have you noticed the skulls? and it's entirely unfair for me to expect that to stop unless I make my skull-noticing evident.  Third, some nonzero number of humans are gonna need to sign the final version of this charter if the house is to come into existence, and it has to be viewable somewhere.  I figured the best place was somewhere that impartial clear thinkers could weigh in (flattery).

What is Dragon Army [Barracks]?  It's a high-commitment, high-standards, high-investment group house model with centralized leadership and an up-or-out participation norm, designed to a) improve its members and b) actually accomplish medium-to-large scale tasks requiring long-term coordination.  Tongue-in-cheek referred to as the "fascist/authoritarian take on rationalist housing," which has no doubt contributed to my being vulnerable to strawmanning but was nevertheless the correct joke to be making, lest people misunderstand what they were signing up for.  Aesthetically modeled after Dragon Army from Ender's Game (not HPMOR), with a touch of Paper Street Soap Company thrown in, with Duncan Sabien in the role of Ender/Tyler and Eli Tyre in the role of Bean/The Narrator.

Why?  Current group housing/attempts at group rationality and community-supported leveling up seem to me to be falling short in a number of ways.  First, there's not enough stuff actually happening in them (i.e. to the extent people are growing and improving and accomplishing ambitious projects, it's largely within their professional orgs or fueled by unusually agenty individuals, and not by leveraging the low-hanging fruit available in our house environments).  Second, even the group houses seem to be plagued by the same sense of unanchored abandoned loneliness that's hitting the rationalist community specifically and the millennial generation more generally.  There are a bunch of competitors for "third," but for now we can leave it at that.

"You are who you practice being."


Section 1 of 3: Underlying models

The following will be meandering and long-winded; apologies in advance.  In short, both the house's proposed aesthetic and the impulse to found it in the first place were not well-reasoned from first principles—rather, they emerged from a set of System 1 intuitions which have proven sound/trustworthy in multiple arenas and which are based on experience in a variety of domains.  This section is an attempt to unpack and explain those intuitions post-hoc, by holding plausible explanations up against felt senses and checking to see what resonates.

Problem 1: Pendulums

This one's first because it informs and underlies a lot of my other assumptions.  Essentially, the claim here is that most social progress can be modeled as a pendulum oscillating decreasingly far from an ideal.  The society is "stuck" at one point, realizes that there's something wrong about that point (e.g. that maybe we shouldn't be forcing people to live out their entire lives in marriages that they entered into with imperfect information when they were like sixteen), and then moves to correct that specific problem, often breaking some other Chesterton's fence in the process.


For example, my experience leads me to put a lot of confidence behind the claim that we've traded "a lot of people trapped in marriages that are net bad for them" for "a lot of people who never reap the benefits of what would've been a strongly net-positive marriage, because it ended too easily too early on."  The latter problem is clearly smaller, and is probably a better problem to have as an individual, but it's nevertheless clear (to me, anyway) that the loosening of the absoluteness of marriage had negative effects in addition to its positive ones.

Proposed solution: Rather than choosing between absolutes, integrate.  For example, I have two close colleagues/allies who share millennials' default skepticism of lifelong marriage, but they also are skeptical that a commitment-free lifestyle is costlessly good.  So they've decided to do handfasting, in which they're fully committed for a year and a day at a time, and there's a known period of time for asking the question "should we stick together for another round?"

In this way, I posit, you can get the strengths of the old socially evolved norm which stood the test of time, while also avoiding the majority of its known failure modes.  Sort of like building a gate into the Chesterton's fence, instead of knocking it down—do the old thing in time-boxed iterations with regular strategic check-ins, rather than assuming you can invent a new thing from whole cloth.

Caveat/skull: Of course, the assumption here is that the Old Way Of Doing Things is not a slippery slope trap, and that you can in fact avoid the failure modes simply by trying.  And there are plenty of examples of that not working, which is why Taking Time-Boxed Experiments And Strategic Check-Ins Seriously is a must.  In particular, when attempting to strike such a balance, all parties must have common knowledge agreement about which side of the ideal to err toward (e.g. innocents in prison, or guilty parties walking free?).

 

Problem 2: The Unpleasant Valley

As far as I can tell, it's pretty uncontroversial to claim that humans are systems with a lot of inertia.  Status quo bias is well researched, past behavior is the best predictor of future behavior, most people fail at resolutions, etc.

I have some unqualified speculation regarding what's going on under the hood.  For one, I suspect that you'll often find humans behaving pretty much as an effort- and energy-conserving algorithm would behave.  People have optimized their most known and familiar processes at least somewhat, which means that it requires less oomph to just keep doing what you're doing than to cobble together a new system.  For another, I think hyperbolic discounting gets way too little credit/attention, and is a major factor in knocking people off the wagon when they're trying to forego local behaviors that are known to be intrinsically rewarding, in favor of behaviors that add up to long-term cumulative gain.

But in short, I think the picture of "I'm going to try something new, eh?" often looks like this:

[graph omitted: expected results over time, dipping below the starting point into an "unpleasant valley" before rising to long-term gains]
... with an "unpleasant valley" some time after the start point.  Think about the cold feet you get after the "honeymoon period" has worn off, or the desires and opinions of a military recruit in the second week of a six-week boot camp, or the frustration that emerges two months into a new diet/exercise regime, or your second year of being forced to take piano lessons.

The problem is, people never make it to the third year, where they're actually good at piano, and start reaping the benefits, and their System 1 updates to yeah, okay, this is in fact worth it.  Or rather, they sometimes make it, if there are strong supportive structures to get them across the unpleasant valley (e.g. in a military bootcamp, they just ... make you keep going).  But left to our own devices, we'll often get halfway through an experiment and just ... stop, without ever finding out what the far side is actually like.

Proposed solution: Make experiments "unquittable."  The idea here is that (ideally) one would not enter into a new experiment unless a) one were highly confident that one could absorb the costs, if things go badly, and b) one were reasonably confident that there was an Actually Good Thing waiting at the finish line.  If (big if) we take those as a given, then it should be safe to, in essence, "lock oneself in," via any number of commitment mechanisms.  Or, to put it in other words: "Medium-Term Future Me is going to lose perspective and want to give up because of being unable to see past short-term unpleasantness to the juicy, long-term goal?  Fine, then—Medium-Term Future Me doesn't get a vote."  Instead, Post-Experiment Future Me gets the vote, including getting to update heuristics on which-kinds-of-experiments-are-worth-entering.

Caveat/skull: People who are bad at self-modeling end up foolishly locking themselves into things that are higher-cost or lower-EV than they thought, and getting burned; black swans and tail risk end up making even good bets turn out very, very badly; we really should've built in an ejector seat.  This risk can be mostly ameliorated by starting small and giving people a chance to calibrate—you don't make white belts try to punch through concrete blocks, you make them punch soft, pillowy targets first.

And, of course, you do build in an ejector seat.  See next.

 

Problem 3: Saving Face

If any of you have been to a martial arts academy in the United States, you're probably familiar with the norm whereby a tardy student purchases entry into the class by first doing some pushups.  The standard explanation here is that the student is doing the pushups not as a punishment, but rather as a sign of respect for the instructor, the other students, and the academy as a whole.

I posit that what's actually going on includes that, but is somewhat more subtle/complex.  I think the real benefit of the pushup system is that it closes the loop.  

Imagine you're a ten-year-old kid, and your parent picked you up late from school, and you're stuck in traffic on your way to the dojo.  You're sitting there, jittering, wondering whether you're going to get yelled at, wondering whether the master or the other students will think you're lazy, imagining stuttering as you try to explain that it wasn't your fault—

Nope, none of that.  Because it's already clearly established that if you fail to show up on time, you do some pushups, and then it's over.  Done.  Finished.  Like somebody sneezed and somebody else said "bless you," and now we can all move on with our lives.  Doing the pushups creates common knowledge around the questions "does this person know what they did wrong?" and "do we still have faith in their core character?"  You take your lumps, everyone sees you taking your lumps, and there's no dangling suspicion that you were just being lazy, or that other people are secretly judging you.  You've paid the price in public, and everyone knows it, and this is a good thing.

Proposed solution: This is a solution without a concrete problem, since I haven't yet actually outlined the specific commitments a Dragon has to make (regarding things like showing up on time, participating in group activities, and making personal progress).  But in essence, the solution is this: you have to build into your system from the beginning a set of ways-to-regain-face.  Ways to hit the ejector seat on an experiment that's going screwy without losing all social standing; ways to absorb the occasional misstep or failure-to-adequately-plan; ways to be less-than-perfect and still maintain the integrity of a system that's geared toward focusing everyone on perfection.  In short, people have to know (and others have to know that they know, and they have to know that others know that they know) exactly how to make amends to the social fabric, in cases where things go awry, so that there's no question about whether they're trying to make amends, or whether that attempt is sufficient.  


Caveat/skull: The obvious problem is people attempting to game the system—they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100.  The next obvious problem is that the price is set too low for the group, leaving them to still feel jilted or wronged, and the next obvious problem is that the price is set too high for the individual, leaving them to feel unfairly judged or punished (the fun part is when both of those are true at the same time).  Lastly, there's something in the mix about arbitrariness—what do pushups have to do with lateness, really?  I mean, I get that it's paying some kind of unpleasant cost, but ...


Problem 4: Defections & Compounded Interest

I'm pretty sure everyone's tired of hearing about one-boxing and iterated prisoners' dilemmas, so I'm going to move through this one fairly quickly even though it could be its own whole multipage post.  In essence, the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system.  Another way to put this is that people underestimate by a couple of orders of magnitude the corrosive impact of their defections—we often convince ourselves that 90% or 99% is good enough, when in fact what's needed is something like 99.99%.

There's something good that happens if you put a little bit of money away with every paycheck, and it vanishes or is severely curtailed once you stop, or start skipping a month here and there.  Similarly, there's something good that happens when a group of people agree to meet in the same place at the same time without fail, and it vanishes or is severely curtailed once one person skips twice.

In my work at the Center for Applied Rationality, I frequently tell my colleagues and volunteers "if you're 95% reliable, that means I can't rely on you."  That's because I'm in a context where "rely" means really trust that it'll get done.  No, really.  No, I don't care what comes up, DID YOU DO THE THING?  And if the answer is "Yeah, 19 times out of 20," then I can't give that person tasks ever again, because we run more than 20 workshops and I can't have one of them catastrophically fail.

(I mean, I could.  It probably wouldn't be the end of the world.  But that's exactly the point—I'm trying to create a pocket universe in which certain things, like "the CFAR workshop will go well," are absolutely reliable, and the "absolute" part is important.)
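
To make the arithmetic behind "95% reliable means I can't rely on you" concrete, here's a minimal sketch in Python (the 95% figure and the 20-workshop scale come from the text above; treating each workshop as an independent trial is a simplifying assumption):

```python
# Chance that a "95% reliable" person drops the ball at least once
# across 20 workshops, treating each workshop as an independent trial
# (a simplifying assumption).
p_success = 0.95
n_workshops = 20

p_any_failure = 1 - p_success ** n_workshops
print(f"P(at least one dropped ball) ~ {p_any_failure:.0%}")  # ~64%
```

At the scale of "more than 20 workshops," a 1-in-20 failure rate makes at least one catastrophic miss more likely than not.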

As far as I can tell, it's hyperbolic discounting all over again—the person who wants to skip out on the meetup sees all of these immediate, local costs to attending, and all of these visceral, large gains to defection, and their S1 doesn't properly weight those distant, cumulative effects (just like the person who's going to end up with no retirement savings because they wanted those new shoes this month instead of next month).  1.01^n takes a long time to look like it's going anywhere, and in the meantime the quick one-time payoff of 1.1 that you get by knocking everything else down to .99^n looks juicy and delicious and seems justified.
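
As a toy illustration of that 1.01^n-versus-0.99^n picture, using the post's own numbers (the 200-round horizon is an arbitrary choice for illustration):

```python
# Toy comparison using the figures above: steady 1% compounding per
# round, versus a one-time 1.1x payoff that knocks the trajectory
# down to 0.99x per round. The 200-round horizon is arbitrary.
n_rounds = 200
cooperate = 1.01 ** n_rounds            # ~7.3x: slow, then suddenly large
defect_once = 1.1 * 0.99 ** n_rounds    # ~0.15x: the windfall erodes away

print(f"cooperate:   {cooperate:.2f}")
print(f"defect once: {defect_once:.2f}")
```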

But something magical does accrue when you make the jump from 99% to 100%.  That's when you see teams that truly trust and rely on one another, or marriages built on unshakeable faith (and you see what those teams and partnerships can build, when they can adopt time horizons of years or decades rather than desperately hoping nobody will bail after the third meeting).  It starts with a common knowledge understanding that yes, this is the priority, even—no, wait, especially—when it seems like there are seductively convincing arguments for it to not be.  When you know—not hope, but know—that you will make a local sacrifice for the long-term good, and you know that they will, too, and you all know that you all know this, both about yourselves and about each other.

Proposed solution: Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings (and, correspondingly, be more careful and more conservative about which undertakings you officially take on, versus which things you're just casually trying out as an informal experiment), with said norm to be modified/iterated only during predecided strategic check-in points and not on the fly, in the middle of things.  Build a habit of clearly distinguishing targets you're going to hit from targets you'd be happy to hit.  Agree upon and uphold surprisingly high costs for defection, Hofstadter style, recognizing that a cost that feels high enough probably isn't.  Leave people wiggle room as in Problem 3, but define that wiggle room extremely concretely and objectively, so that it's clear in advance when a line is about to be crossed.  Be ridiculously nitpicky and anal about supporting standards that don't seem worth supporting, in the moment, if they're in arenas that you've previously assessed as susceptible to compounding.  Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly.

Caveat/skull: Obviously, because we're humans, even people who reflectively endorse such an overall solution will chafe when it comes time for them to pay the price (I certainly know I've chafed under standards I fought to install).  At that point, things will seem arbitrary and overly constraining, priorities will seem misaligned (and might actually be), and then feelings will be hurt and accusations will be leveled and things will be rough.  The solution there is to have, already in place, strong and open channels of communication, strong norms and scaffolds for emotional support, strong default assumption of trust and good intent on all sides, etc. etc.  This goes wrongest when things fester and people feel they can't speak up; it goes much better if people have channels to lodge their complaints and reservations and are actively incentivized to do so (and can do so without being accused of defecting on the norm-in-question; criticism =/= attack).

 

Problem 5: Everything else

There are other models and problems in the mix.  For instance, I have:

  • a model surrounding buy-in and commitment that deals with an escalating cycle of asks-and-rewards
  • a model of how to effectively leverage a group around you to accomplish ambitious tasks, which requires you to first lay down some "topsoil" of simple/trivial/arbitrary activities that starts the growth of an ecology of affordances
  • a theory that the strategy of trying things and doing things outstrips the strategy of think-until-you-identify-worthwhile-action, and that rationalists in particular are crippling themselves through decision paralysis/letting the perfect be the enemy of the good, when just doing vaguely interesting projects would ultimately gain them more skill and get them further ahead
  • a strong sense, based off both research and personal experience, that physical proximity matters, and that you can't build the correct kind of strength and flexibility and trust into your relationships without actually spending significant amounts of time with one another in meatspace on a regular basis, regardless of whether that makes tactical sense given your object-level projects and goals

But I'm going to hold off on going into those in detail until people insist on hearing about them or ask questions/pose hesitations that could be answered by them.


Section 2 of 3: Power dynamics

All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members.  It was also meant to justify, at least indirectly, why a strong guiding hand might be necessary given that our community's evolved norms haven't really produced results (in the group houses) commensurate with the promises of EA and rationality.

Ultimately, though, what matters is not the problems and solutions themselves so much as the light they shine on my aesthetics (since, in the actual house, it's those aesthetics that will be used to resolve epistemic gridlock).  In other words, it's not so much those arguments as it is the fact that Duncan finds those arguments compelling.  It's worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like "does this guy actually think things through," "is this guy likely to be stupid or meta-stupid," "will this guy listen/react/update/pivot in response to evidence or consensus opposition," and "when this guy has intuitions that he can't explain, do they tend to be validated in the end?"

In other words, it's fair to view this whole post as an attempt to prove general trustworthiness (in both domain expertise and overall sanity), because—well—that's what it is.  In milieus like the military, authority figures expect (and get) obedience irrespective of whether or not they've earned their underlings' trust; rationalists tend to have a much higher bar before they're willing to subordinate their decision-making processes, yet that's still something this sort of model requires of its members (at least from time to time, in some domains, in a preliminary "try things with benefit of the doubt" sort of way).  I posit that Dragon Army Barracks works (where "works" means "is good and produces both individual and collective results that outstrip other group houses by at least a factor of three") if and only if its members are willing to hold doubt in reserve and act with full force in spite of reservations—if they're willing to trust me more than they trust their own sense of things (at least in the moment, pending later explanation and recalibration on my part or theirs or both).

And since that's a) the central difference between DA and all the other group houses, which are collections of non-subordinate equals, and b) quite the ask, especially in a rationalist community, it's entirely appropriate that it be given the greatest scrutiny.  Likely participants in the final house spent ~64 consecutive hours in my company a couple of weekends ago, specifically to play around with living under my thumb and see whether it's actually a good place to be; they had all of the concerns one would expect and (I hope) had most of those concerns answered to their satisfaction.  The rest of you will have to make do with grilling me in the comments here.

 

"Why was Tyler Durden building an army?  To what purpose?  For what greater good? ...in Tyler we trusted."

 

Power and authority are generally anti-epistemic—for every instance of those-in-power defending themselves against the barbarians at the gates or anti-vaxxers or the rise of Donald Trump, there are a dozen instances of them squashing truth, undermining progress that would make them irrelevant, and aggressively promoting the status quo.

Thus, every attempt by an individual to gather power about themselves is at least suspect, given regular ol' incentive structures and regular ol' fallible humans.  I can (and do) claim to be after a saved world and a bunch of people becoming more the-best-versions-of-themselves-according-to-themselves, but I acknowledge that's exactly the same claim an egomaniac would make, and I acknowledge that the link between "Duncan makes all his housemates wake up together and do pushups" and "the world is incrementally less likely to end in gray goo and agony" is not obvious.

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked.  In short, if someone's building a coercive trap, it's everyone's problem.

 

"Over and over he thought of the things he did and said in his first practice with his new army. Why couldn't he talk like he always did in his evening practice group? No authority except excellence. Never had to give orders, just made suggestions. But that wouldn't work, not with an army. His informal practice group didn't have to learn to do things together. They didn't have to develop a group feeling; they never had to learn how to hold together and trust each other in battle. They didn't have to respond instantly to command.

And he could go to the other extreme, too. He could be as lax and incompetent as Rose the Nose, if he wanted. He could make stupid mistakes no matter what he did. He had to have discipline, and that meant demanding—and getting—quick, decisive obedience. He had to have a well-trained army, and that meant drilling the soldiers over and over again, long after they thought they had mastered a technique, until it was so natural to them that they didn't have to think about it anymore."

 

But on the flip side, we don't have time to waste.  There's existential risk, for one, and even if you don't buy x-risk à la AI or bioterrorism or global warming, people's available hours are trickling away at the alarming rate of one hour per hour, and none of us are moving fast enough to get All The Things done before we die.  I personally feel that I am operating far below my healthy sustainable maximum capacity, and I'm not alone in that, and something like Dragon Army could help.

So.  Claims, as clearly as I can state them, in answer to the question "why should a bunch of people sacrifice non-trivial amounts of their autonomy to Duncan?"

1. Somebody ought to run this, and no one else will.  On the meta level, this experiment needs to be run—we have like twenty or thirty instances of the laissez-faire model, and none of the high-standards/hardcore one, and also not very many impressive results coming out of our houses.  Due diligence demands investigation of the opposite hypothesis.  On the object level, it seems uncontroversial to me that there are goods waiting on the other side of the unpleasant valley—goods that a team of leveled-up, coordinated individuals with bonds of mutual trust can seize that the rest of us can't even conceive of, at this point, because we don't have a deep grasp of what new affordances appear once you get there.

2. I'm the least unqualified person around.  Those words are chosen deliberately, for this post on "less wrong."  I have a unique combination of expertise that includes being a rationalist, sixth grade teacher, coach, RA/head of a dormitory, ringleader of a pack of hooligans, member of two honor code committees, curriculum director, obsessive sci-fi/fantasy nerd, writer, builder, martial artist, parkour guru, maker, and generalist.  If anybody's intuitions and S1 models are likely to be capable of distinguishing the uncanny valley from the real deal, I posit mine are.

3. There's never been a safer context for this sort of experiment.  It's 2017, we live in the United States, and all of the people involved are rationalists.  We all know about NVC and double crux, we're all going to do Circling, we all know about Gendlin's Focusing, and we've all read the Sequences (or will soon).  If ever there was a time to say "let's all step out onto the slippery slope, I think we can keep our balance," it's now—there's no group of people better equipped to stop this from going sideways.

4. It does actually require a tyrant. As part of a debrief during the weekend experiment/dry run, we went around the circle and people talked about concerns/dealbreakers/things they don't want to give up.  One interesting thing that popped up is that, according to consensus, it's literally impossible to find a time of day when the whole group could get together to exercise.  This held even with each individual willing to make personal sacrifices and do somewhat costly things.

If, of course, the expectation is that everybody shows up on Tuesday and Thursday evenings, and the cost of not doing so is not being present in the house, suddenly the situation becomes simple and workable.  And yes, this means some kids left behind (ctrl+f), but the whole point of this is to be instrumentally exclusive and consensually high-commitment.  You just need someone to make the actual final call—there are too many threads for the coordination problem of a house of this kind to be solved by committee, and too many circumstances in which it's impossible to make a principled, justifiable decision between 492 almost-indistinguishably-good options.  On top of that, there's a need for there to be some kind of consistent, neutral force that sets course, imposes consistency, resolves disputes/breaks deadlock, and absorbs all of the blame for the fact that it's unpleasant to be forced to do things you know you ought to but don't want to do.

And lastly, we (by which I indicate the people most likely to end up participating) want the house to do stuff—to actually take on projects of ambitious scope, things that require ten or more talented people reliably coordinating for months at a time.  That sort of coordination requires a quarterback on the field, even if the strategizing in the locker room is egalitarian.

5. There isn't really a status quo for power to abusively maintain.  Dragon Army Barracks is not an object-level experiment in making the best house; it's a meta-level experiment attempting (through iteration rather than armchair theorizing) to answer the question "how best does one structure a house environment for growth, self-actualization, productivity, and social synergy?"  It's taken as a given that we'll get things wrong on the first and second and third try; the whole point is to shift from one experiment to the next, gradually accumulating proven-useful norms via consensus mechanisms, and the centralized power is mostly there just to keep the transitions smooth and seamless.  More importantly, the fundamental conceit of the model is "Duncan sees a better way, which might take some time to settle into," but after e.g. six months, if the thing is not clearly positive and at least well on its way to being self-sustaining, everyone ought to abandon it anyway.  In short, my tyranny, if net bad, has a natural time limit, because people aren't going to wait around forever for their results.

6. The experiment has protections built in.  Transparency, operationalization, and informed consent are the name of the game; communication and flexibility are how the machine is maintained.  Like the Constitution, Dragon Army's charter and organization are meant to be "living documents" that constrain change only insofar as they impose reasonable limitations on how wantonly change can be enacted.


Section 3 of 3: Dragon Army Charter (DRAFT)

Statement of purpose:

Dragon Army Barracks is a group housing and intentional community project which exists to support its members socially, emotionally, intellectually, and materially as they endeavor to improve themselves, complete worthwhile projects, and develop new and useful culture, in that order.  In addition to the usual housing commitments (i.e. rent, utilities, shared expenses), its members will make limited and specific commitments of time, attention, and effort averaging roughly 90 hours a month (~1.5hr/day plus occasional weekend activities).

Dragon Army Barracks will have an egalitarian, flat power structure, with the exception of a commander (Duncan Sabien) and a first officer (Eli Tyre).  The commander's role is to create structure by which the agreed-upon norms and standards of the group shall be discussed, decided, and enforced, to manage entry to and exit from the group, and to break epistemic gridlock/make decisions when speed or simplification is required.  The first officer's role is to manage and moderate the process of building consensus around the standards of the Army—what they are, and in what priority they should be met, and with what consequences for failure.  Other "management" positions may come into existence in limited domains (e.g. if a project arises, it may have a leader, and that leader will often not be Duncan or Eli), and will have their scope and powers defined at the point of creation/ratification.

Initial areas of exploration:

The particular object-level foci of Dragon Army Barracks will change over time as its members experiment and iterate, but at first it will prioritize the following:

  • Physical proximity (exercising together, preparing and eating meals together, sharing a house and common space)
  • Regular activities for bonding and emotional support (Circling, pair debugging, weekly retrospective, tutoring/study hall)
  • Regular activities for growth and development (talk night, tutoring/study hall, bringing in experts, cross-pollination)
  • Intentional culture (experiments around lexicon, communication, conflict resolution, bets & calibration, personal motivation, distribution of resources & responsibilities, food acquisition & preparation, etc.)
  • Projects with "shippable" products (e.g. talks, blog posts, apps, events; some solo, some partner, some small group, some whole group; ranging from short-term to year-long)
  • Regular (every 6-10 weeks) retreats to learn a skill, partake in an adventure or challenge, or simply change perspective

Dragon Army Barracks will begin with a move-in weekend that will include ~10 hours of group bonding, discussion, and norm-setting.  After that, it will enter an eight-week bootcamp phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/wk, e.g. Tue/Fri/Sun)
  • Whole group dinner and retrospective (120min, 1x/wk, e.g. Tue evening)
  • Small group baseline skill acquisition/study hall/cross-pollination (90min, 1x/wk)
  • Small group circle-shaped discussion (120min, 1x/wk)
  • Pair debugging or rapport building (45min, 2x/wk)
  • One-on-one check-in with commander (20min, 2x/wk)
  • Chore/house responsibilities (90min distributed)
  • Publishable/shippable solo small-scale project work with weekly public update (100min distributed)

... for a total time commitment of 16h/week or 128 hours total, followed by a whole group retreat and reorientation.  The house will then enter an eight-week trial phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/wk)
  • Whole group dinner, retrospective, and plotting (150min, 1x/wk)
  • Small group circling and/or pair debugging (120min distributed)
  • Publishable/shippable small group medium-scale project work with weekly public update (180min distributed)
  • One-on-one check-in with commander (20min, 1x/wk)
  • Chore/house responsibilities (60min distributed)
... for a total time commitment of 13h/week or 104 hours total, again followed by a whole group retreat and reorientation.  The house will then enter a third phase where commitments will likely change, but will include at a minimum whole group exercise, whole group dinner, and some specific small-group responsibilities, either social/emotional or project/productive (once again ending with a whole group retreat).  At some point between the second and third phase, the house will also ramp up for its first large-scale project, which is yet to be determined but will be roughly on the scale of putting on a CFAR workshop in terms of time and complexity.

Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to:
  • Above-average physical capacity
  • Above-average introspection
  • Above-average planning & execution skill
  • Above-average communication/facilitation skill
  • Above-average calibration/debiasing/rationality knowledge
  • Above-average scientific lab skill/ability to theorize and rigorously investigate claims
  • Average problem-solving/debugging skill
  • Average public speaking skill
  • Average leadership/coordination skill
  • Average teaching and tutoring skill
  • Fundamentals of first aid & survival
  • Fundamentals of financial management
  • At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill)
  • At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)
Furthermore, every Dragon should have participated in:
  • At least six personal growth projects involving the development of new skill (or honing of prior skill)
  • At least three partner- or small-group projects that could not have been completed alone
  • At least one large-scale, whole-army project that either a) had a reasonable chance of impacting the world's most important problems, or b) caused significant personal growth and improvement
  • Daily contributions to evolved house culture
Speaking of evolved house culture...

Because of both a) the expected value of social exploration and b) the cumulative positive effects of being in a group that's trying things regularly and taking experiments seriously, Dragon Army will endeavor to adopt at least one new experimental norm per week.  Each new experimental norm should have an intended goal or result, an informal theoretical backing, and a set re-evaluation time (default three weeks).  There are two routes by which a new experimental norm is put into place:

  • The experiment is proposed by a member, discussed in a whole group setting, and meets the minimum bar for adoption (>60% of the Army supports, with <20% opposed and no hard vetoes; see the sketch after this list)
  • The Army has proposed no new experiments in the previous week, and the Commander proposes three options.  The group may then choose one by vote/consensus, or generate three new options, from which the Commander may choose.
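
For concreteness, a minimal sketch of the first route's adoption bar (the thresholds are the ones given in the bullet above; the function name and the vote encoding are hypothetical, since the charter doesn't specify how votes are collected):

```python
# Hypothetical encoding of the adoption bar: >60% of the Army in
# support, <20% opposed, and no hard vetoes.
def meets_adoption_bar(support: int, opposed: int, abstain: int, vetoes: int) -> bool:
    total = support + opposed + abstain
    if total == 0 or vetoes > 0:
        return False
    return support / total > 0.60 and opposed / total < 0.20

print(meets_adoption_bar(support=7, opposed=1, abstain=2, vetoes=0))  # True
print(meets_adoption_bar(support=7, opposed=3, abstain=0, vetoes=0))  # False: 30% opposed
```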
Examples of some of the early norms which the house is likely to try out from day one (hit the ground running):
  • The use of a specific gesture to greet fellow Dragons (house salute)
  • Various call-and-response patterns surrounding house norms (e.g. "What's rule number one?" "PROTECT YOURSELF!")
  • Practice using hook, line, and sinker in social situations (three items other than your name for introductions)
  • The anti-Singer rule for open calls-for-help (if Dragon A says "hey, can anyone help me with X?" the responsibility falls on the physically closest housemate to either help or say "Not me/can't do it!" at which point the buck passes to the next physically closest person)
  • An "interrupt" call that any Dragon may use to pause an ongoing interaction for fifteen seconds
  • A "culture of abundance" in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible
  • A "graffiti board" upon which the Army keeps a running informal record of its mood and thoughts

Dragon Army Code of Conduct
While the norms and standards of Dragon Army will be mutable by design, the following (once revised and ratified) will be the immutable code of conduct for the first eight weeks, and is unlikely to change much after that.

  1. A Dragon will protect itself, i.e. will not submit to pressure causing it to do things that are dangerous or unhealthy, nor wait around passively when in need of help or support (note that this may cause a Dragon to leave the experiment!).
  2. A Dragon will take responsibility for its actions, emotional responses, and the consequences thereof, e.g. if late will not blame bad luck/circumstance, if angry or triggered will not blame the other party.
  3. A Dragon will assume good faith in all interactions with other Dragons and with house norms and activities, i.e. will not engage in strawmanning or the horns effect.
  4. A Dragon will be candid and proactive, e.g. will give other Dragons a chance to hear about and interact with negative models once they notice them forming, or will not sit on an emotional or interpersonal problem until it festers into something worse.
  5. A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts, i.e. will not engage in silent defection, undermining, halfheartedness, aloofness, subtle sabotage, or other actions which follow the letter of the law while violating the spirit.  Another way to state this is that a Dragon will practice compartmentalization—will be able to simultaneously hold "I'm deeply skeptical about this" alongside "but I'm actually giving it an honest try," and postpone critique/complaint/suggestion until predetermined checkpoints.  Yet another way to state this is that a Dragon will take experiments seriously, including epistemic humility and actually seeing things through to their ends rather than fiddling midway.
  6. A Dragon will take the outside view seriously, maintain epistemic humility, and make subject-object shifts, i.e. will act as a behaviorist and agree to judge and be judged on the basis of actions and revealed preferences rather than intentions, hypotheses, and assumptions (this one's similar to #2 and hard to put into words, but for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior).  Another way to state this is that a Dragon will embrace the maxim "don't believe everything that you think."
  7. A Dragon will strive for excellence in all things, modified only by a) prioritization and b) doing what is necessary to protect itself/maximize total growth and output on long time scales.
  8. A Dragon will not defect on other Dragons.
There will be various operationalizations of the above commitments into specific norms (e.g. a Dragon will read all messages and emails within 24 hours, and if a full response is not possible within that window, will send a short response indicating when the longer response may be expected) that will occur once the specific members of the Army have been selected and have individually signed on.  Disputes over violations of the code of conduct, or confusions about its operationalization, will first be addressed one-on-one or in informal small group, and will then move to general discussion, and then to the first officer, and then to the commander.

Note that all of the above is deliberately kept somewhat flexible/vague/open-ended/unsettled, because we are trying not to fall prey to GOODHART'S DEMON.


Random Logistics
  1. The initial filter for attendance will include a one-on-one interview with the commander (Duncan), who will be looking for a) credible intention to put forth effort toward the goal of having a positive impact on the world, b) likeliness of a strong fit with the structure of the house and the other participants, and c) reliability à la financial stability and ability to commit fully to long-term endeavors.  Final decisions will be made by the commander and may be informally questioned/appealed but not overruled by another power.
  2. Once a final list of participants is created, all participants will sign a "free state" contract of the form "I agree to move into a house within five miles of downtown Berkeley (for length of time X with financial obligation Y) sometime in the window of July 1st through September 30th, conditional on at least seven other people signing this same agreement."  At that point, the search for a suitable house will begin, possibly with delegation to participants.
  3. Rents in that area tend to run ~$1100 per room, on average, plus utilities, plus a 10% contribution to the general house fund.  Thus, someone hoping for a single should, in the 85th percentile worst case, be prepared to make a ~$1400/month commitment.  Similarly, someone hoping for a double should be prepared for ~$700/month, and someone hoping for a triple should be prepared for ~$500/month, and someone hoping for a quad should be prepared for ~$350/month.
  4. The initial phase of the experiment is a six month commitment, but leases are generally one year.  Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected.  After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of "keep paying until you've found your replacement."  (This will likely be easiest to enforce by simply having as many names as possible on the actual lease.)
  5. Of the ~90hr/month, it is assumed that ~30 are whole-group, ~30 are small group or pair work, and ~30 are independent or voluntarily-paired work.  Furthermore, it is assumed that the commander maintains sole authority over ~15 of those hours (i.e. can require that they be spent in a specific way consistent with the aesthetic above, even in the face of skepticism or opposition).
  6. We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.

Conclusion: Obviously this is neither complete nor perfect.  What's wrong, what's missing, what do you think?  I'm going to much more strongly weight the opinions of Berkeleyans who are likely to participate, but I'm genuinely interested in hearing from everyone, particularly those who notice red flags (the goal is not to do anything stupid or meta-stupid).  Have fun tearing it up.

(sorry for the abrupt cutoff, but this was meant to be published Monday and I've just ... not ... been ... sleeping ... to get it done)

Comments (575)

Comment author: JenniferRM 26 May 2017 08:37:52AM 44 points

The most common cause of the collapse of high investment intentional communities is romantic drama.

(Maybe the Dragon Barracks are so obviously a boy thing that you're taking for granted that there will be no girls in the house, but all the weird non-gendered pronouns like "a Dragon will brush its teeth" imply either an attempt to have a team composed of both men and women, or else a hilarious level of contempt for the agency of your space monkeys. I'm going to assume that you're imagining mixed gender living arrangements rather than already starting with verbal de-personalization of presumed uniformly male space monkeys...)

So anyway, assuming men and women in the house at the same time, that's what usually causes things to collapse in the long run.

The two standard failure modes are Bonobo egalitarianism that collapses due to the accumulation of residual jealousies over time or else a harem forms around the charismatic cult leader (which isn't necessarily a failure mode... it is just a sign of a cult leader whose stated community goals are a load of hypocritical baloney compared to the real goal of getting more than his "fair share" of tail -- cue the Limp Bizkit song).

There are lots of patches for this sort of thing that have historically worked for various kinds of communities. Requiring celibacy is an obvious one that monasteries often use. Disallowing any romantic statuses except "single" and "closed dyadic marriage" (with a managed "courting" status to mediate the one way transition) is another standard trick.

Whatever the rule is, the standard enforcement mechanism is "ostracism" because the real problem from a social engineering perspective is the accumulation of complicated feelings that slow and redirect the workings of the social machine away from its stated purposes and towards managing the wreckage of new and old love triangles. If you throw away the cogs that are liable to have "complicated feelings" and replace them with non-complicated cogs... then the machine should continue to run as designed?

(I think maybe the romantic mores that were junked in the US in the 1960's arose in the first place because villages are kinda like autopoietic intentional communities. The pragmatically useful norms of village romance, that kept the village from exploding, could be semi-safely junked because (well, obviously "the pill" but also because) cities are anonymous and moderately well mixed... essentially everyone in a city is already pre-ostracized by everyone else, and we each are desperately struggling to create a synthetic village-like community despite the isolating forces of urban mixing. In an already chaotic urban romantic economy a divorce causing additional minor lesioning of the local social graph is like a dust devil in a hurricane. There might actually be a lot of dust devils caused by hurricane turbulence for all I know, but I'm pretty sure no one cares much because the actual hurricane makes them irrelevant.)

Anyway, for the above reasons, you might want to just say "this is a fraternity and if women want to start a rationalist sorority that can be a separate thing". Alternatively, think about romantic norms up front.

Comment author: Larks 28 May 2017 09:46:11PM 12 points

One idea that is probably necessary but not sufficient is for the Commander (and anyone else with any authority in the house) to have an absolute commitment not to sleep with anyone else in the house.

Edit: with this rule, a different/earlier version of me might have been interested. Without it I would never be.

Comment author: John_Maxwell_IV 27 May 2017 05:40:09AM 8 points

Anyway, for the above reasons, you might want to just say "this is a fraternity and if women want to start a rationalist sorority that can be a separate thing".

Possible advantage of this solution: I've noticed that male bonding gets a lot easier when a group goes from being "almost all guys" to "all guys". (I imagine it would get easier still if you are regularly doing testosterone-elevating things that require coordination with your group of guys, the way sports teams, armies, fraternities, and heavy metal bands do. I suspect men have a pack hunting instinct that gets activated in circumstances like these.)

Comment author: Jacobian 31 May 2017 04:15:52AM 9 points

Data point to the contrary: I spent two years in a closed military unit with 44 guys and 5 girls (in Israel). Each of the girls went through at least a couple of in-unit boyfriends at the time, but that wasn't a major source of drama. It took quite a bit of suffering to forge the unit bonds (a 4-month combat boot camp to start our service), but by the end of it, people cared about "the unit" as a whole more than about personal drama. I certainly can't imagine that the "bonding" could have been any stronger without the girls there.

Comment author: Jacobian 31 May 2017 03:24:42PM 10 points

And one final point of support for DA: while I was living in a closed barracks, with five girls, a huge workload, strict rules and significant barriers to exit, I read Ender's Game and thought "this is exactly like my life, and it's awesome".

I agree with some of the critics here that Duncan is overconfident in his ability to make this work. I also agree that there's a limit to how much you can learn from a work of fiction about space monkey superchildren. But a lot of the criticism here is even more overconfident, and it comes from people who have never lived in a DA-like situation in their lives, so all the evidence they're basing their criticism on is fictional.

Comment author: username2 27 May 2017 10:54:10AM 6 points

Romantic entanglements and their fallout are not ruled out by all-male environments, even if the members do not identify as homosexual. So it's still important to consider these issues even if there are no women at all.

Comment author: Qiaochu_Yuan 27 May 2017 06:56:07PM 6 points

Can confirm. I was in a fraternity in college with many gay members, some of whom occasionally hooked up and caused manageable levels of drama. This was a relatively recent phenomenon in the history of the fraternity; I think as recently as 10 years before my time nobody was out, and then some people came out after joining.

Comment author: Duncan_Sabien 26 May 2017 09:25:15AM 6 points

Currently there are both men and women interested (though many more men than women).

All of your points above seem sound at first glance, and yes, it's on the docket to be sorted out. I don't think I want to go full monastery, but there's a decent chance the house itself will end up being activity-restricted in some way.

Thanks for the detailed model-sharing.

Comment author: Raemon 27 May 2017 05:39:59AM 15 points

I want to add a strong "romantic entanglements are a big risk" voice.

My worst experiences with rationalists (and possibly some of their worst experiences with me) were when romance/sex conflict came up.  It turns out people are really bad at being rational when that happens.  (This was exacerbated by a lot of people being inexperienced, which may or may not be the case in Dragon Army, but it makes sense: romance and sex drive are the sort of thing that just overwhelms the prefrontal cortex.)

Comment author: JenniferRM 30 May 2017 06:50:00AM 2 points

I'm glad the model was deemed useful :-) Good luck.

Comment author: Mass_Driver 26 May 2017 07:51:01AM 27 points

1) I agree with the very high-level point that there are lots of rationalist group houses with flat / egalitarian structures, and so it might make sense to try one that's more authoritarian to see how that works. Sincere kudos to you for forming a concrete experimental plan and discussing it in public.

2) I don't think I've met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc. The main reason this makes me uncomfortable is that I don't see you owning this desire anywhere in your long post. Like, if you had said, just once, "I think I would enjoy being a leader, and I think you might enjoy being led by me," I would feel calmer. Instead I'm worried that you have convinced yourself that you are grudgingly stepping up as a leader because it's necessary and no one else will. If you're not being fully honest about your motivations for nominating yourself to be an authoritarian leader, what else are you hiding?

3) Your post has a very high ratio of detailed proposals to literature review. I would have liked to see you discuss other group houses in more detail, make reference to articles or books or blog posts about the theory of cohousing and of utopian communities more generally, or otherwise demonstrate that you have done your homework to find out what has worked, what has not worked, and why. None of your proposals sound obviously bad to me, and you've clearly put some thought and care into articulating them, but it's not clear whether your proposals are backed up by research, or whether you're just reasoning from your armchair.

4) Why should anyone follow you on an epic journey to improve their time management skills if you're sleep-deprived and behind schedule on writing a blog post? Don't you need to be more or less in control of your own lifestyle before you can lead others to improve theirs?

Comment author: Qiaochu_Yuan 26 May 2017 06:25:18PM 12 points

I don't think I've met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc.

As someone who knows Duncan moderately well in person and has been under his leadership in a few contexts (CFAR instructor training and the recent Dragon Army experiment), I can confirm that this is nowhere close to true. What Duncan is hungry for is for the world to be better, and he thinks as a contingent fact that being the chief of this particular tribe is the best way for him to do that. I agree with Duncan's assessment of himself that if someone else stepped up to do the thing he would breathe an enormous sigh of relief, rather than be in any way jealous.

Why should anyone follow you on an epic journey to improve their time management skills if you're sleep-deprived and behind schedule on writing a blog post?

It depends on how urgent you think Duncan believes it is to have this blog post out sooner rather than later. If Duncan were optimizing for looking like he has his shit together, he could have either just not mentioned that he was sleep-deprived and behind schedule, or gotten more sleep and fallen further behind schedule. Instead he posted the blog post, and went out of his way to mention that he was sleep-deprived and behind schedule, because he is optimizing for something else.

Comment author: Duncan_Sabien 26 May 2017 09:35:25AM *  9 points [-]

1) Thanks.

2) Nope, you're just way off (though I appreciate the candor). I thought about coming up with some sort of epistemically humble "maybe" or "I can see where you got that impression," but it seems more advisable to simply be direct, and to sound as confident as I am. I've been a leader, and I've been a follower, and I've transitioned in both directions within the same contexts, and there's no special draw there along any of the lines you laid out. In particular, I think the statement "this needs to happen, and no one else is going to do it" is actually true; if some contender wants to stand up and credibly claim they can pull this off better than me, I will IMMEDIATELY hand them the baton and breathe a sigh of relief—my actual favorite place to be is second or third in command.

Feel free to PM me if you're actually curious about my history, or to poke around my reputation within the community, or to ask any of the dozen or so people who've worked with me for a couple of years, or the twenty people who attended the dry run experiment last week (I can point you in their direction more specifically, also through PM).

(I also considered whether to update/change my tone given your first impression, but it seems to be enough of an outlier that I probably won't make any deliberate effort.)

3) I think you and I might disagree fairly strongly on the importance/value/worth of "the literature" in this arena. Part of the whole point here is that I have a solid inside view, developed from a unique set of experiences, which says that a lot of other people are doing it wrong. I think there's some value in literature review (e.g. the sources that Benquo listed up above seem worth at least an afternoon's perusing), but in three separate fields I've found that my idiosyncratic ideas, which everyone said contradicted the literature and wouldn't work, did in fact work, and produced excellent results; I'm not actually convinced that there's enough EV to justify more than a quick, 80/20 skim of the available info. I'm currently reasoning from my armchair, and that's a fair point. But the whole screed is "let's get down to the business of running experiments and gathering data," and I note again that we did already do a test weekend that gave promising preliminary support to a lot of my models and claims.

4) Another quite sound/reasonable criticism, if you're taking the outside view with no priors to add detail to your model. In point of fact, though, it's been a 90th-percentile unusual month (I'm the curriculum director in an org that just ran its most ambitious sprint of events to date, including bringing in a round of new employees whose training I was almost entirely responsible for, and since that ended I've been churning hard on this project), and it's not particularly strong evidence about other months. Also, I think it's reasonable to posit that one needs to be more or less in control before leading others, but I note it's not obvious; I can clearly envision (for instance) models in which one person sacrifices themselves to push everyone else forward. That's not what I plan to do, but the picture isn't as straightforward as a clever-sounding false equivalence.

Also, lastly, remember the house is supposed to help me, too:

I personally feel that I am operating far below my healthy sustainable maximum capacity, and I'm not alone in that, and something like Dragon Army could help.

I'm not the only one with skills, and a big part of it is creating a construct that I can use to level up and improve. The part where I impose structure is separate from the part where maybe I could leverage social pressure to improve my own workflow.

Comment author: Lumifer 26 May 2017 04:11:36PM *  4 points [-]

I think the statement "this needs to happen, and no one else is going to do it" is actually true

Can you point to some reasons why you believe that an authoritarian commune is a good idea (besides "let's try and see what this button does")?

in three separate fields I've found that my idiosyncratic ideas that everyone said contradicted the literature and wouldn't work did, in fact, work, and produced excellent results

"Who needs literature, I'm smarter than all of them" is a worrisome attitude. By the way, did you check what the literature actually said? In my experience what "everyone says" literature claims is usually NOT what the literature really claims.

the whole screed is "let's get down to the business of running experiments and gathering data,"

What is the price for the experiment and who will pay it?

Comment author: Duncan_Sabien 26 May 2017 04:17:04PM *  7 points [-]

Er ... I think the whole post above is all about answering your first question? I'm confused, and feel somewhat strawmanned by the summary "let's try it and see what this button does." High-commitment, high-structure environments have a long, long history of being actually productive and useful and net-good for a lot of the people who go through them, and they ought to be in the toolkit despite their known failure modes. And given the rationalist community's strong predilections toward individualism, prioritizing flexibility, following short-term motivation, and not committing to things, it seemed naive to expect that a high-commitment, high-structure environment would come into existence via committee. Note that, while not super emphasized in the post above, a major assumption is "if I'm right, I should be able to largely put down the baton six months in, when the thing is clearly working," i.e. it's more about the structure than the authoritarianism specifically (the authoritarianism being simply a necessary catalyst, imo).

The price for the experiment is largely distributed across its members; it's the money involved in housing and whatever difficulty people suffer from giving up a not-insignificant-but-overall-fairly-small fraction of their agency and self-determination. It's roughly analogous, I think, to the price one pays to become a black belt, only condensed down into six months rather than spread across several years.

As far as "who needs literature, I'm smarter than all of them" being worrisome—I'm okay with people being worried. Those people are being actively encouraged to influence things here, and also the whole system is based on iteration, and also I object to the strawmanning again (I've said more than once that there's some value to be had there, but am being summed up as rejecting it entirely), and also I am, in fact, smarter than a lot of them. Not all, but a lot, and it's been proven before in multiple domains, and I'd be an idiot to ignore that.

Comment author: robot-dreams 26 May 2017 03:52:10PM 7 points [-]

I agree that 4 is a concern.

I disagree about 2. After having (a) participated in the weekend experiment and (b) done some "back-channel" references on Duncan, my impression is that he hates the fact that leadership will isolate him from the group he really wants to be a part of. I expect that if the experiment is successful, Duncan will eagerly set aside leadership and integrate himself with the group.

Comment author: 18239018038528017428 26 May 2017 08:43:41PM *  26 points [-]

This post is so thoroughly repulsive and disgusting that I made an account for the sole purpose of pointing out how transparently and obviously perverse this fucked-up proposal is. Naturally I don't have any actual desire to be critical or rude; it's just that nobody else is doing it, so because of my infinite kindness and charity (if you have any doubts, rest assured that my closest friends and colleagues will all attest to my beneficent nature), I find myself obligated to step up to the plate, so to speak. Ah, if only someone could release me from this great burden. If only.

The author seems to have missed the part of Ender's Game about the protagonists being children. It's generally not a good thing for adults to role-play as children (the reasons for which are, I hope, sufficiently obvious not to require elaboration). The dominant impression I get is that this resembles the antifa movement and the anti-antifa movement: it's a bunch of immature adults LARPing while pretending that they aren't doing so.

Note that despite the author's insistence on the validity of his experience as a CFAR instructor, he fails to actually point to any concrete benefits that people have derived from that instruction -- plausibly because those benefits, when concretely stated without embellishment, are at best underwhelming. Note also that (1) the post makes no mention of dealing with problems arising from interpersonal romance, and (2) the comment that does point out the probable future existence of such problems receives what can at best be termed a cursory and dismissive reply.

This suggests that, contrary to the author's assertion of having amassed a diverse and broad range of skills, and contrary to whatever accolades his colleagues may see fit to place upon him, he hasn't yet attained the level of social awareness of a typical American high school student. It also suggests that the author's ability to model himself and to model others has more-or-less not yet attained the level of sophistication required to view people as more than one-dimensional. I.e., the post seems to suggest an attitude of "I, a good person, will find a bunch of good people, and we'll make these good things happen". I'm pretty sure I've met high school students with a more nuanced (and less optimistic) understanding of human nature.

Naturally, this would be excused if the Berkeley rationalist community were full of people who are actually good people and who tend to get things done. Let's check: Qiaochu Yuan, one of the most mathematically sophisticated members, has to the best of my knowledge hit a dead end in his PhD, and is becoming a CFAR instructor in Seattle, which makes it seem as though he's actually concretely worse off compared to the counterfactual in which the rationalist community didn't exist; Eliezer Yudkowsky has shifted in the direction of posting practically-untrue, self-aggrandizing bullshit on Twitter and Facebook instead of doing anything productive; Arbital is best described as a failure; word is going around that Anna Salamon and Nate Soares are engaging in bizarre conspiratorial planning around some unsubstantiated belief that the world will end in ten years, leading to severe dissatisfaction among the staff of MIRI; despite the efforts of a very valiant man, people have still not realized that autogynephilic men with repressed femininity and a crossdressing fetish pretending to be women aren't actually women; CFAR itself is trending in the direction of adding bureaucracy for bureaucracy's sake; my own personal experience with people branded as "CFAR instructors" has been extremely negative, with them effectively acting arrogant out of proportion to their competence, not to mention their below-average levels of empathy; there was that bizarre scandal last year in which someone was accidentally impregnated and then decided not to abort the child, going against what had previously been agreed upon, and proceeded to shamelessly solicit donations from the rationalist community to support her child; etc., etc., etc.

In effect, there seems to be some sort of self-deception around the fact that the Berkeley rationalist community is by almost all reasonable standards severely dysfunctional, with the best people actually being on the periphery of the community. It's almost as if the author is coming up with the "Dragon Army" in an attempt to help everyone collectively delude themselves into believing they're much better than they are, because he can't bear to actually look at the Berkeley rationalist community and see it for what it is: a pile of garbage. Just like how a child from a broken family might imagine that everyone's getting along. Unfortunately(?), flinching away from the truth doesn't actually make reality go away.

Amusingly, it actually does seem as though the author partially realizes this. Let's review the criteria which the author hopes the members of "Dragon Army" will fulfill after a year's worth of cult membership:

  1. Above-average physical capacity
  2. Above-average introspection
  3. Above-average planning & execution skill
  4. Above-average communication/facilitation skill
  5. Above-average calibration/debiasing/rationality knowledge
  6. Above-average scientific lab skill/ability to theorize and rigorously investigate claims
  7. Average problem-solving/debugging skill
  8. Average public speaking skill
  9. Average leadership/coordination skill
  10. Average teaching and tutoring skill
  11. Fundamentals of first aid & survival
  12. Fundamentals of financial management
  13. At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill)
  14. At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)

"Above-average"? "Average"? Not exactly a high bar. "At least one employable mental skill, and at least one employable trade skill"? Is the correct inference here that the typical participant is actually expected to be not employable at all (i.e., deficient in both categories)? "First aid & survival" -- if there was ever any doubt that this is actually just sophisticated childish role-playing... The fact that I (in contrast with the Berkeley rationalist community) have put very little directed effort into the meta-goal of self-improvement and nevertheless plausibly already satisfy 11 of these 14 criteria, with the other 3 not seeming particularly difficult to attain, is not a good sign!

Despite the fixation on "evolving norms" or whatever, the author seems to be particularly blind to what social reality is actually like and what actually makes communities get along. Consider, e.g., the following quote:

for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior

Let me pose a question to the reader of my comment: would you rather live in a house where you have to constantly verbally ask the other residents to stop doing things that they could have reasonably foreseen would bother you, or would you rather live in a house where people actually used reasonable expectations of what other people want to guide their behavior and therefore acted in a way that preempted causing other people irritation?

There are two inferences to be made here:

  1. Members of the Berkeley rationalist community are particularly prone to using bureaucratic rule-setting as a way to compensate for their severely below-average social skills, and
  2. Members of the Berkeley rationalist community are particularly low-empathy and embody the worst of individualism, such that they don't actually care whether or not what they're doing might bother others until they're told to stop.

In my personal experience, both inferences are correct. Ultimately, what this comes down to is a bunch of socially-inept losers with near-autistic social skills trying to attain the sort of basic social harmony that comes naturally to more competent people via a combination of bizarre mimicry and a mountain of bureaucracy. Naturally, and contrary to the author's bizarre childish idealism, one can expect a hell of a lot of repressed irritation, interpersonal drama, and general unpleasantness from this experiment.

To top off the turd cake with a cherry, the author's science fiction writing is trash:

I felt my stomach twist, felt that same odd certainty, this time wrapped in a layer of the coldest, blackest ice. “You came to kill us,” I said. There was a soft rustle as the others straightened, pressure on my shoulders as the space between us closed. “You came to kill us all.”

Anyone who can vomit that out on a page and feel proud of it isn't fit to lead or teach anything. Period. The world would be concretely better off if the author, and anyone like him, killed themselves.

Comment author: Valentine 27 May 2017 09:12:21PM *  35 points [-]

PSA:

Do not feed trolls.

In ages past, vitriol like this would be downvoted into oblivion. This was out of recognition that norms of good discourse are more important than the content of arguments. Failure to abide by this spreads rot and makes good communal epistemic hygiene even more difficult.

I notice downvoting is disabled now. Which, sadly, means that people will be tempted to engage with this. Which reinforces a norm of having one's dissent noticed by acting like an unapologetic asshole. Which burns the future of this garden.

So as a close second, I advise just thoroughly ignoring 18239018038528017428 unless and until they step up to meet more noble conversational norms. If there are good points to be made here, they should be converted into the truth-seeking style Less Wrong aspires to so that we can all engage with them in a more hygienic way.

I appreciate Duncan's attempts to do that conversion and speak to the converted form of the argument.

But unless and until I see enough evidence to convince me otherwise, I assume 18239018038528017428's intentions are not truth-seeking. I assume they are inflammatory and will not change via civil discourse.

Ergo, request to all:

Do not feed trolls.

PS: I will follow my own advice here and have no intention of replying to 18239018038528017428 unless and until they transpose their discourse into the key of decency. I expect them to reply to me here, probably with more vitriol and some kind of personal attack and/or attempt to discredit me personally. My ignoring them should be taken as my following my own policy. Note that if 18239018038528017428 does reply with vitriol, it will probably be in some way fashioned as an attempt to make my very refusal to engage look like confirmation of their narrative. Please filter your reading of any replies to my message here accordingly.

Comment author: John_Maxwell_IV 27 May 2017 10:20:48PM *  18 points [-]

I'm the person who advocated most strongly for getting the downvote disabled, and I share some of 18239018038528017428's skepticism about the community in the Bay Area, but I strongly agree with Val's comment. There are already a ton of case studies on the internet of how fragile good conversational norms are. I'm going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.

(Also ditto everything Val said about not replying to 18239018038528017428)

Comment author: Vaniver 28 May 2017 12:19:57AM 7 points [-]

I'm going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.

Thanks for that; I had already noticed this thread but a policy of reporting things is often helpful. It seemed like Duncan was handling himself well, and that leaving this up was better than censoring it. It seems easier for people to judge the screed fairly with the author's original tone, and so just editing out the vitriol seems problematic.

With the new site, we expect to have mod tools that will be helpful here, ranging from downvoting that makes comments like this invisible by default, to IP bans and other measures that make creating a different throwaway account difficult.

Comment author: komponisto 28 May 2017 07:11:52AM 21 points [-]

For the record: at the risk of being a lonely dissenter, I strongly disagree with any notion that any of this discussion should have been censored in any way. (I was even grateful for the current impossibility of downvoting.)

Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like. These norms of sensitivity are used to subtly restrict information flow. Ultimately Duncan and everyone else are better off knowing about the numerically-pseudonymous commenter's opinion in all of its gory detail. In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider -- a behavior pattern that doesn't need more practice, IMHO.

(At any rate, the individual seems contemptuous enough of their targets that I would expect them to disengage on their own before the full value of discussion with them has been extracted.)

Comment author: John_Maxwell_IV 29 May 2017 03:30:34AM *  6 points [-]

I'm also curious to hear what made you update.

It's true that sensitivity norms can have subtle effects on a conversation, but nastiness norms can too. If you look at the study cited in the "hold off on proposing solutions" essay, you can see a case where politicizing a topic restricts the space of ideas that are explored. (I think this is actually a more natural takeaway from the study than "hold off on proposing solutions".) Nasty conversations also often see evaporative cooling effects where you are eventually just left with hardliners on each side. In general, I think nasty conversations tend to leave any line of reasoning that doesn't clearly support the position of one side or the other under-explored. (This is a pretty big flaw in my opinion, because I think divided opinions are usually an indicator of genuinely mixed evidence. If the evidence is mixed, the correct hypothesis is probably one that finds a way to reconcile almost all of it.) Furthermore I would predict that arguments in nasty conversations are less creative and generally just less well thought through.

Here's another argument. Imagine 18239018038528017428 showed you their draft comment minus the very last sentence. Then they showed you the last sentence "The world would be concretely better off if the author, and anyone like him, killed themselves." Would you tell them to add it in or not? If not, I suspect there's status quo bias, or something like it, in operation here.

Anyway, I think there are better ways to address the issue you describe than going full vitriol. For example, I once worked at a company that had a culture of employees ribbing each other, and sometimes we would rib each other about things other employees were doing wrong that would have been awkward to bring up in a serious manner. I think that worked pretty well.

In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider -- a behavior pattern that doesn't need more practice, IMHO.

I just want to point out that Duncan did in fact put a tremendous amount of time in to engaging with this critic (more time than he put in to engaging with any other commenter in this thread, by my estimate).

Comment author: komponisto 29 May 2017 05:53:06AM 7 points [-]

My other comment should hopefully clarify things, as least with regard to politicization in particular.

To spell out the implications a bit more: the problem with political discourse, the reason it kills minds, is not that it gets heated; rather, it freezes people's mental categories in ways that prevent them from making ontological updates or paradigm shifts of any kind. In effect, people switch from using physical cognition to think about arguments (modus ponens, etc.), to using social cognition instead (who wins, who loses, etc.). (Most people, of course, never use anything but social cognition in arguments; politics makes even "nerds" or "intellectuals" behave like typical humans.)

It is in fact possible for "heated" or even "nasty" discourse to be very information-rich; this makes sense if you realize that what counts as "nasty" depends on social norms. If you encounter discourse from a different social context (even, for example, simply because the speaker has misunderstood the social context and its norms!) you may read it as "nasty", despite the fact that the author was specifically intending to communicate content.

Now, of course I don't consider 18239018038528017428's comment to be optimally worded -- but then, I wouldn't, because I didn't write it. This is the important thing to understand: there is value to be had in getting detailed input on the mental states of people unlike oneself.

I agree that Duncan deserves positive reinforcement for engaging with this critic to the extent he did. But I think it was actually good for him epistemically to do so, not just as a demonstration of his willingness-to-bend-over-backwards, and thus, good social nature.

Comment author: Evan_Gaensbauer 02 June 2017 06:32:47AM 5 points [-]

I don't live in the Bay Area, have no intention of moving there in the near future, and resent the idea that anyone who wants to be part of what ought to be a worldwide rationality community must eventually move to the Bay Area. I'm part of the rationality and effective altruism communities, and I too have taken community members in the Bay Area to task for acting as though they can solve community coordination problems with new projects when the underwhelming success or failure of prior projects never seems to be acknowledged. I do that on Facebook, though, where not only my civilian identity but also a track record of my behaviour is on display. There are closed groups or chats where things are less open, so it's not as damaging, and even if I make a post on my own Facebook feed for over one thousand people to see, if I say something wrong, at least it's out in the open, so I may face the full consequences of my mistakes.

I know lots of the people mentioned in '18239018038528017428''s comment. I either didn't know those things about them, or I wouldn't characterize what I did know in such terms. Based on their claims, '18239018038528017428' seems to have more intimate knowledge than I do, and I'd guess is also in or around the Bay Area rationality community. Yet they're on this forum anonymously, framing themselves as some underdog taking down high-status community members, when no criterion for "high status" has been established other than "works at MIRI/CFAR", and what they're doing is just insulting and accusing regular people like the rest of us on the internet. They're not facing the consequences of their actions.

The information provided isn't primarily intended to resolve disputes, which I would think ought to be the best application of truth-seeking behaviour in this regard, and which is a primary, if not the only, purpose of discourse here. The primary purposes of '18239018038528017428''s comment were to express frustration, slander certain individuals, and undermine and discredit Duncan's project without evidence to back up their claims. These are at cross-purposes with truth-seeking behaviour.

Nothing I do that gets policed for tone on the basis of sensitivity is something '18239018038528017428' isn't also doing. While we're talking about norms of sensitivity, let's talk about norms for resolving interpersonal disputes. All the differences between how I and lots of others in the community do it, even if the tone we use isn't always splendid or sensitive, and how '18239018038528017428' does it, are what separate people who have non-zero respect for norms from those who don't. This is coming from me, a guy who lots of people think probably flouts social norms too much already.

I am unsympathetic to '18239018038528017428' and to the question of whether they're censored. Another reason not to resolve interpersonal disputes like this in public on a website like LessWrong is that most people in online communities don't like seeing this sort of drama dominate discourse, and in particular there are lots of us who don't care for ever more drama from one zip code being all anyone pays attention to. That defies the purpose of this site, and saps the will of people outside the Bay Area to continue engaging with the rationality community. That's not what anyone needs. Since we've established that '18239018038528017428' is probably close enough to be part of the Berkeley rationality community already, there are plenty of channels (private group chats, mailing lists, or other apps) through which everyone involved could be reached without '18239018038528017428' needing to out themselves in front of everyone. They could've had a friend do it.

There are plenty of ways they could've accomplished everything they wanted without being censored, and without doing it on LessWrong. When they have access to plenty of online spaces which serve the same purpose, there's no reason LW must allow that speech to the chagrin of all other users. While I get that you think a Chesterton's fence for discourse is being torn down here, I don't believe that's what's going on, and I think everyone else on LessWrong who isn't personally involved deserves a say in what they are and aren't okay with being censored on this site.

Comment author: komponisto 06 June 2017 05:45:55AM 4 points [-]

You don't seem to be addressing what I said very much if at all, but rather to mostly be giving your reaction to 18239018038528017428's comments. This is demonstrated by the fact that you take for granted various assumptions that it was the purpose of my comment to call into question.

In particular, the speech is not being allowed "to the chagrin of all other users". I am notably non-chagrinned by the speech being allowed, and I advocate that people be less chagrinned by such speech being allowed.

Needless to say, to be allowed is not to be approved.

Comment author: entirelyuseless 28 May 2017 01:58:08PM 2 points [-]

By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like.

What convinced you of this?

Comment author: komponisto 29 May 2017 04:06:09AM 17 points [-]

What convinced you of this?

A constellation of related realizations.

  • A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the "norms of discourse" of what I took to be my "ingroup" or "social context"; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.

  • A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.

  • A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible -- the epistemic usefulness of which should go without saying.

  • A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.

  • A sense that discourse norms, and norms of "civility" generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law -- the domains in which discourse is most tightly "regulated" and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!

Comment author: Valentine 31 May 2017 01:04:15AM 10 points [-]

Cool. Let's play.

I notice you make a number of claims, but of the ones I disagree with, none have "crux nature" for me. Which is to say, even if we were to hash out our disagreement such that I came to agree with you on the points, I wouldn't change my stance.

(I might find it worthwhile to do that hashing out anyway if the points turn out to have crux nature for you. But in the spirit of good faith, I'll focus on offering you a pathway by which you could convince me.)

But if I dig a bit, I think I see a hint of a possible double crux. You say:

A sense that discourse norms, and norms of "civility" generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information.

I agree with a steelman version of this. (I don't think it is literally entirely distinct — but I also doubt you do, and I don't want to pressure you to defend wording that I read as being intended for emphasis rather than precise description.) However, I imagine we disagree about how to value that. I think you mean to imply "…and that's bad." Whereas I would add instead "…and that's good."

In a little more detail, I think that civility helps to prevent many more distortions in communication than it causes, in most situations. This is less needed the more technical a field is (whatever that means): in math departments you can just optimize for saying the thing, and if seeming insults come out in the process then that's mostly okay. But when working out social dynamics (like, say, whether a person who's proposing to lead a new kind of rationalist house is trustworthy and doing a good thing), I think distorted thinking is nearly guaranteed without civility.

At which point I cease caring about "efficient transmission of information", basically because I think (a) the information being sent is secretly laced with social subtext that'll affect future transmissions as well as its own perceived truthiness, and (b) the "efficient" transmission is emotionally harder to receive.

So to be succinct, I claim that:

  • (1) Civility prevents more distortion in communication than it creates for a wide range of discussions, including this one about Dragon Army.
  • (2) I am persuadable as per (1). It's a crux for me. Which is to say, if I come to believe (1) is false, then that will significantly move me toward thinking that we shouldn't preserve civility on Less Wrong.
  • (3) If you disagree with me on (1) and (1) is also a crux for you, then we have a double crux, and that should be where we zoom in. And if not, then you should offer a point where you think I disagree with you and where you are persuadable, to see whether that's a point where I am persuadable.
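
(A compact restatement of the double-crux structure being invoked in (1)-(3) above; the symbols are assumed for illustration rather than taken from the comment. For two parties disagreeing about a proposition $X$:

$$A \text{ believes } X, \qquad B \text{ believes } \neg X$$
$$C \text{ is a double crux} \iff \big(A \text{ would abandon } X \text{ upon accepting } \neg C\big) \wedge \big(B \text{ would abandon } \neg X \text{ upon accepting } C\big)$$

In this exchange, $X$ is roughly "civility should be preserved on Less Wrong" and the candidate $C$ is claim (1).)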

Your turn!

Comment author: John_Maxwell_IV 29 May 2017 06:52:58AM *  7 points [-]

I'm gonna address these thoughts as they apply to this situation. Because you've publicly expressed assent with extreme bluntness, I might conceal my irritation a little less than I normally do (but I won't tell you you should kill yourself).

A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the "norms of discourse" of what I took to be my "ingroup" or "social context"; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.

Did he tell people they should kill themselves?

This strikes me as an example of the worst argument in the world. Yes, telling people to kill themselves is an alternative discourse norm, alternative discourse norms can be valuable, but therefore telling people to kill themselves is valuable? Come on. You can easily draw a Venn diagram that refutes this argument. Alternative discourse norms can be achieved while still censoring nastiness.

A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.

Telling forum users they should kill themselves is not gonna increase the willingness of people to post to an online forum. In addition to the intimidation factor, it makes Less Wrong look like more of a standard issue internet shithole.

A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible -- the epistemic usefulness of which should go without saying.

This can be a valuable skill and it can still be valuable to censor content-free vitriol.

A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddednees: the attempt to imitate Level 4 results in Level 1.

Yes, it takes a lot of effort to avoid telling people that they should kill themselves... Sorry, but I don't really mind using the ability to keep that sort of thought to yourself as a filter.

A sense that discourse norms, and norms of "civility" generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law -- the domains in which discourse is most tightly "regulated" and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!

If we remove Chesterton's Fences related to violence prevention, I predict the results will not be good for truthseeking. Truthseeking tends to arise in violence-free environments.

Maybe it'd be useful for me to clarify my position: I would be in favor of censoring out the nasty parts while maintaining the comment's information content and probably banning the user who made the comment. This is mainly because I think comments like this create bad second-order effects and people should be punished for making them, not because I want to preserve Duncan's feelings. I care more about trolls being humiliated than censoring their ideas. If a troll delights in taking people down a notch for its own sake, we look like simps if we don't defect in return. Ask any schoolteacher: letting bullies run wild sets a bad precedent. Let me put it this way: bullies in the classroom are bad for truthseeking.

See also http://lesswrong.com/lw/5f/bayesians_vs_barbarians/ Your comment makes you come across as someone who has led a very sheltered upper-class existence. Like, I thought I was sheltered but it clearly gets a lot more extreme. This stuff is not a one-sided tradeoff like you seem to think!

For obvious reasons, it's much easier to convert a nice website to a nasty one than the other way around. And if you want a rationalist 4chan, we already have that. The potential gains from turning the lesswrong.com domain in to another rationalist 4chan seem small, but the potential losses are large.

Comment author: komponisto 29 May 2017 08:09:48AM 10 points [-]

Because you've publicly expressed assent with extreme bluntness

Who said anything about "extreme"?

You are unreasonably fixated on the details of this particular situation (my comment clearly was intended to invoke a much broader context), and on particular verbal features of the anonymous critic's comment. Ironically, however, you have not picked up on the extent to which my disapproval of censorship of that comment was contingent upon its particular nature. It consisted, in the main, of angrily-expressed substantive criticism of the "Berkeley rationalist community". (The parts about people killing themselves were part of the expression of anger, and need not be read literally.) The substance of that criticism may be false, but it is useful to know that someone in the author's position (they seemed to have had contact with members of the community) believes it, or is at least sufficiently angry that they would speak as if they believed it.

I will give you a concession: I possibly went too far in saying I was grateful that downvoting was disabled; maybe that comment's proper place was in "comment score below threshold" minimization-land. But that's about as far as I think the censorship needs to go.

Not, by the way, that I think it would be catastrophic if the comment were edited -- in retrospect, I probably overstated the strength of my preference above -- but my preference is, indeed, that it be left for readers to judge the author.

Now, speaking of tone: the tone of the parent comment is inappropriately hostile to me, especially in light of my other comment in which I addressed you in a distinctly non-hostile tone. You said you were curious about what caused me to update -- this suggested you were interested in a good-faith intellectual discussion about discourse norms in general, such as would have been an appropriate reply to my comment. Instead, it seems, you were simply preparing an ambush, ready to attack me for (I assume) showing too much sympathy for the enemy, with whatever "ammunition" my comment gave you.

I don't wish to continue this argument, both because I have other priorities, and also because I don't wish to be perceived as allying myself in a commenting-faction with the anonymous troublemaker. This is by no means a hill that I am interested in dying on.

However, there is one further remark I must make:

Your comment makes you come across as someone who has led a very sheltered upper-class existence

You are incredibly wrong here, and frankly you ought to know better. (You have data to the contrary.)

Comment author: Elo 27 May 2017 09:50:36PM 7 points [-]

But unless and until I see evidence otherwise, I assume 18239018038528017428's intentions are not truth-seeking.

Evidence: time and energy put into the comment. Evidence: not staying silent when they could have.

I am not saying the offending comments are valid; instead, I am curious as to why you discounted what I identify as evidence.

Comment author: Valentine 27 May 2017 10:07:04PM 8 points [-]

Ah, I was using a more colloquial definition of evidence, not a technical one. I misspoke.

What goes through my mind here is, "Trolls spend a lot of time and energy making comments like this one too, and don't stay silent when they could, so I'm not at all convinced that those points are more consistent with a world where they're truth-seeking than they are with a world in which they're just trolling."

I still think that's basically true. So to me those points seem irrelevant.
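
(The implicit Bayesian point here can be made exact. Writing $E$ for "the comment took time and energy", and assuming, as this reasoning does, that trolls and truth-seekers produce effortful comments about equally often, the odds form of Bayes' theorem gives:

$$\frac{P(\text{troll}\mid E)}{P(\text{truth-seeking}\mid E)} \;=\; \frac{P(E\mid \text{troll})}{P(E\mid \text{truth-seeking})} \cdot \frac{P(\text{troll})}{P(\text{truth-seeking})}$$

If the likelihood ratio $P(E\mid \text{troll})/P(E\mid \text{truth-seeking})$ is close to 1, the posterior odds stay close to the prior odds, and $E$ is evidence for neither hypothesis.)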

I think what I mean is something more like, "Unless and until I see enough evidence to convince me otherwise…." I'll go back and edit for that correction.

Comment author: komponisto 28 May 2017 07:36:41AM 4 points [-]

norms of good discourse are more important than the content of arguments

In what represents a considerable change of belief on my part, this now strikes me as very probably false.

Comment author: Duncan_Sabien 26 May 2017 08:52:50PM *  18 points [-]

Strong support for this person's willingness to contribute the opposite opinion.

Strong support for this person's willingness to take the time to write things up in detail.

Strong appreciation for the trust implicit in this being posted here (i.e. it's a compliment along the lines of "I expect not to be punished for speaking the truth as I see it.")

Some regret/sadness that they're this triggered and vitriolic, and for the tendency toward choosing the worst or straw-est interpretation at every point rather than taking the time to question their own responses and include nuance, but on the other hand, still appreciation for how this contributes to the overall health of the discussion by opening up new threads for debate and ensuring that there isn't an echo chamber (i.e. maybe it takes that level of aggression to accomplish the thing, and a gentler critique wouldn't be taken seriously enough?).

Significant disagreement with the choice to hijack the topic at hand to vent about things that are either mostly or completely unrelated, and make claims that are unsubstantiated or wildly inaccurate, and engage in some specious logic toward the end (e.g. ad hominem fallacy).

Hope to have some time later today to respond to the better points this raises.

Thanks for your contribution.

Comment author: 18239018038528017428 26 May 2017 09:02:47PM *  5 points [-]

The fact that you think it's "ad hominem" is itself a betrayal of your own inexperience and lack of perception. It's perhaps one of the most relevant and least fallacious arguments to make: your fiction is a direct expression of your aesthetics, and the inference I draw from your fiction is that you do not have good aesthetics, and therefore should not be trying, or even pretending, to do something that by nature requires very good aesthetic sense.

It also indicates a tremendous amount of immaturity and childishness. I could have written something better in high school. That's not a good sign. Your ability to write characters and dialogue is directly tied to your ability to model the world accurately and understand the nuances of human behavior. Ergo, clichéd and trite writing is very damning.

Comment author: Elo 26 May 2017 11:43:29PM 8 points [-]

Many words. Probably took a while to write. Some unnecessary things, like telling the writer to kill themselves, and levelling criticism at attributes of the author's other writing. Other writing is pretty irrelevant to the qualities of this piece. You may have some points in this dung heap, but you make it hard to find them. Is it even worth engaging you in conversation?

Comment author: 18239018038528017428 27 May 2017 12:48:06AM 3 points [-]

Oh, I see. You're what the Eternal September phenomenon is all about. You shouldn't feel ashamed that you aren't cognitively gifted enough to quickly and rapidly comprehend the salient points I made without substantial expenditure of mental effort, because you were born this way, which also accounts for your overestimation of the amount of time it took for me to write my comments. But please don't pollute the comment space under my comments with your puerile excretions.

Comment author: cata 28 May 2017 07:09:21AM 7 points [-]

Perhaps your excessive cognition is ironically blinding you to the grandiose mediocrity of your overwrought replies, such as this one here, which sounds like something I would have written in third grade if I wasn't already too smart to have written it then, which, as a truly capable mind might have already conceived, I was.

Comment author: math_viking 04 June 2017 04:01:32AM 6 points [-]

Your original comment, though harsh, at least contained some useful insights. Don't ruin that by posting comments that are nothing more than 6 lines of insults that no one wants to read.

Comment author: Decius 27 May 2017 12:57:04AM 2 points [-]

Part right.

Most of the arguments you set forth are more fallacious and less relevant than not liking all the author's fiction.

But that's because most of the arguments you set forth were of the type "Bay Area rationalists have had a lot of problems and therefore this specific plan will have similar problems."

Comment author: 18239018038528017428 27 May 2017 01:14:41AM 3 points [-]

Oh, I see. This is the part where you're too attached to your ingroup to realize what a total failure the Berkeley rationalist community is. I bet you also think the Sequences and HPMOR are well-written.

Comment author: Duncan_Sabien 27 May 2017 01:31:47AM *  9 points [-]

[Note: I've typed this comment without refreshing the page, and thus have not seen any of the other responses that may have cropped up in the past few hours, nor taken those responses into account in any way yet. I'm seeing only the original reply, here.]

Part 1 of ?

Repeating my thanks before heading into what will be a mix of concession and disagreement—I have qualms about the way you engaged with this post, but am grateful for the fact that you did engage, at all, rather than just staying quiet, and I want to support the core of that even as I complain about certain aspects of your chosen method.

I think your first paragraph had one clear point: "I, as a smart, perceptive person who sees things others often fail to see, found a lot of this viscerally upsetting, which is probably a sign that there are actual problems." I liked that you added this point, and I think it would've been stronger if you hadn't been so deliberately assholish with the rest of it. I'm going to take the core point seriously as I read further, and see if I can get a clear sense of what it is you see that I don't.

The comment about Ender's Game (paragraph 2) is a misunderstanding on your part, either deliberate or easy to clear up—there's no wargaming in the plan, there's no battle room, there are no other groups of people playacting as other armies. The aesthetic of Dragon Army was, in short: everyone is expected to keep their eyes open and act independently to do what seems right and sane in the moment. Groups should practice coordinating together to build trust and be capable of action-requiring-more-than-one-individual, but the assumption is that an army run by forty minds will trump an army run by one.

In paragraph 3, you make a valid point about the efficacy and usefulness of CFAR, which is indeed worth questioning, and the side you're holding down is not obviously wrong. It's a bit overwrought, given that the phrase "insistence on the validity of his experience as a CFAR instructor" is a clear strawman; I was almost as emphatic about the fact that I've written nerdy fanfic, so I think you were just looking for an opportunity to climb up on a soapbox? That being said, your point about interpersonal romance being a relevant and important factor matches my own intuition, and I wish you had appreciated the fact that I wanted to continue thinking carefully about correct solutions rather than just spam the first ideas that popped into my head.

In paragraph four, you make an entirely unfounded leap that is beneath the quality of what's expected from a poster on this forum. All of your "this suggests" are false handwaving, and I find the rest of your assertions generally laughable, given that there's only one person in this thread so far who's demonstrated deep antisocial behavior, and that you're hurling these insults from a position of anonymity. However, I'm going to continue to take things one paragraph at a time rather than assuming that I've seen your entire position as soon as I've got a mockable straw model, so we'll start fresh with your next point.

Hmmm. In the first sentence of paragraph 5, you and I seem to converge somewhat—we both agree that the Bay Area rationalist community is not living up to its promise, and has too few people doing good and impactful work. I'm glad to share this bit of world-model with you. I note that my idea for what to do about it—try a different sort of house/community—is just one possible strategy among many, and I'm curious if you have other concrete suggestions that you'd be willing to offer. I'm especially curious what you're actually doing, as you seem to have a sort of ... scathing dismissal? ... of everyone else, and I'd expect from your tone that you must be engaged in at least one concretely high-promise project (else it all smacks of rank hypocrisy). Would you be willing to detail a) what you're up to, or b) a few concrete proposals that you suspect are higher promise? At this point, it'd be hard to simply abandon the Dragon Army idea, but if a good enough alternative came along, I would take it. The point is not to be seen to be right, it's to actually make an impact.

I notice that the rest of that paragraph is basically off-topic. Without contributing to the off-topicness, I want to say that I do, indeed, find at least a couple of worthwhile points of agreement within it, but I think most of it is wrong, in addition to being somewhat morally reprehensible re: vicious attacks, and that you're overconfident in your assertions. If you'd like to shoot me a private message, I'd be happy to say where I agree and where I disagree.

Oh, interesting—paragraph six also begins with a claim I have a lot of sympathy for/agreement with. I don't hold it as strongly as you do, but I do think there's a lot of clear dysfunction and self-deception in the community, and I'd like to take steps to correct it. I don't know how to evaluate your claim that the best people are on the periphery (as I'm a weird mix of professionally central and socially somewhat distant), but again—if you'd like to make concrete recommendations about who I should talk to, or direct some of the people you hold in high esteem to comment on this thread, I suspect you're right about there being a lot of untapped value. I do note that Dragon Army is not actually pulling from the central or highest status people, but thus far looks to be made up of a lot of solid, normal, representative rationalists, so I think your claim about trying to delude people is straightforwardly false, as is your assumption that I don't see or don't want to see any warts and flaws. (I believe there are lots of people who will back me up on this, including some who will claim that I've been too hostile or critical. That's partially why I sympathize with the strength of your negativity.)

Comment author: Duncan_Sabien 27 May 2017 01:32:11AM 11 points [-]

Part 2 of 2

Ah, paragraph seven contains the unword "cult," which I think you're using to say something, but I'd rather you just actually said the thing, instead of applying the empty, stretched, multi-interpretation label. Like, I think if you laid out specific, concrete objections, I and others could benefit from them, but just saying cult is lazy name-calling.

I do somewhat agree with your objections to the list of specific skills attained after a year. I had hoped that the large word DRAFT at the top, plus the repeated statements that the whole plan was to iterate, and that I didn't expect to be able to figure out the right stuff on the first try, would've clued you in to the fact that I, too, am aware that the list is inadequate. Do you have specific suggestions for replacements? Keep in mind, the hard problem is to balance things-that-will-be-generally-useful-for-a-medium-sized-group-of-people against the fact that everyone involved has their own specific career and expertise already. Part of the impetus here is social, part of it is becoming well-rounded, part of it is practicing the skill of gaining/improving skills, and all of that is trying to avoid skating into trivial irrelevancy. Got any ideas?

As a meta note, I think that people who cower behind anonymity don't deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I'm treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall). You're currently nothing and nobody and have no skills; that will change as soon as you a) reveal yourself or b) demonstrate credibility under this pseudonym.

Your next attempt to strawman things takes a sub-point out of context and deliberately ignores the actual requirement being made, which was that people hold their beliefs and models with skepticism/realize that their internal experience does not represent absolute truth, and that they treat one another with a behaviorist's lens, using revealed preferences and past behavior as predictors, rather than relying on mental summations that may be false or straw. I'm curious whether, setting aside your mockery of a subpoint, you agree with that point.

Interestingly enough, I have reasonable credence in your two inferences. In my experience, members of this community do attempt to install norms to compensate for social failings (and do have a somewhat higher-than-average level of social ineptitude). And also, I think many people in this community are low-empathy and embody the bad side of individualism. However, unlike you, I see that a lot of people are trying damn hard to correct this, and I'm curious whether you think they should be written off for not being good enough already, or whether you have specific suggestions that differ from the ones already being tried. I note that a big part of what Dragon Army intends to do is just try a whole bunch of stuff (including stuff already known to work; there's no premium on novelty), and that I think data will be better than armchair ranting.

I suspect you haven't done much in the way of looking in the mirror when you type the words "repressed irritation, interpersonal drama, and general unpleasantness." Certainly you don't meet any of my standards for "how a decent person behaves." I'm going to try to avoid the fundamental attribution error here, though, and assume that we've hit some combination of a) a bad day, b) the problems of online communication, and c) you being unusually triggered or having run out of some important resources.

I'm not going to engage with the ad hominem attack at the end, which, in addition to being wrong as a tactic, also fails in its specifics. I think that if you compare yourself, who is suggesting suicide as a solution, with OSC, who is definitely wrong about a lot of things but has never gone so far as to claim a fellow human would be better off killing themselves, you'll note that you might be on the wrong side. I'd check my cap for a skull, at least in the context of today's mood.

For anyone else—I welcome calm, reasoned elaboration on any of the on-topic points this person made. When I went through blow-by-blow, there were fewer than I'd hoped, but there are true and valuable and important criticisms here, and I'm glad they've been added to the mix, and I wouldn't mind further discussion of them.

Comment author: 18239018038528017428 27 May 2017 02:03:00AM *  6 points [-]

I liked that you added this point, and I think it would've been stronger if you hadn't been so deliberately assholish with the rest of it.

Sure, but it's fun to be an asshole. I love knocking people down a peg. Especially in public.

The comment about Ender's Game (paragraph 2) is a misunderstanding on your part, either deliberate or easy to clear up

Asserting that this isn't elaborate playacting is not very convincing in light of the fact that your first two proposed group norms are (1) a greeting salute and (2) a call-and-response mechanism. I played the beginning of Final Fantasy XIII two nights ago and thought that was the most cringeworthy stuff I've seen in months, but you managed to top even that.

I wish you had appreciated the fact that I wanted to continue thinking carefully about correct solutions rather than just spam the first ideas that popped into my head.

The more important thing here is that you imagine this as a problem that can be solved when in fact if the problem did arise, that would itself preclude it from being easily solved. The "solution" is to not select immature people who you can reasonably expect to get into interpersonal drama, which precludes the vast majority of the rationalist community, which is part of the point of my comment.

if you'd like to make concrete recommendations about who I should talk to

I can suggest that you talk to Satvik Beri, and maybe direct him to my comment as well, although I feel slightly bad for potentially causing him to spend time on this.

Ah, paragraph seven contains the unword "cult," which I think you're using to say something, but I'd rather you just actually said the thing, instead of applying the empty, stretched, multi-interpretation label.

I mean that the Berkeley rationalist community is a cult in the full and unqualified sense of the word "cult". You, as a high priest, naturally disagree.

Your next attempt to strawman things takes a sub-point out of context and deliberately ignores the actual requirement being made, which was that people hold their beliefs and models with skepticism/realize that their internal experience does not represent absolute truth, and that they treat one another with a behaviorist's lens, using revealed preferences and past behavior as predictors, rather than relying on mental summations that may be false or straw.

This is a good thing practically by construction.

My point is that this is almost completely unnecessary in a world where people begin by defaulting to behavior that is very unlikely to bother others. I am also gesturing at the following:

  1. The rationalist community does not default to such behavior, which is an indication of the conjunction of near-autistic social skills and remarkably low empathy, and
  2. The rationalist community does not default to such behavior, but instead of anyone pointing out that this is a reasonable thing to default to (cf. Japanese society), people try to patch it up with legalism, bureaucracy, and a laundry list of rules, which in my experience makes it feel like I'm talking to the low-IQ HR department of a large multinational conglomerate.

The fact that the Berkeley rationalist community seems particularly bad at this is a major red flag in almost every conceivable fashion.

However, unlike you, I see that a lot of people are trying damn hard to correct this, and I'm curious whether you think they should be written off for not being good enough already

I think they should be thrown off a bridge, either metaphorically or literally. I find it detestable to have them near me at all.

I suspect you haven't done much in the way of looking in the mirror when you type the words "repressed irritation, interpersonal drama, and general unpleasantness." Certainly you don't meet any of my standards for "how a decent person behaves." I'm going to try to avoid the fundamental attribution error here, though, and assume that we've hit some combination of a) a bad day, b) the problems of online communication, and c) you being unusually triggered or having run out of some important resources.

Two questions:

  1. Does it look to you like my irritation is "repressed"?
  2. I'm completely anonymous. Exactly what interpersonal drama am I causing here?

I agree that I can be, when I want to be, a very unpleasant person.

Comment author: Duncan_Sabien 27 May 2017 02:23:25AM 10 points [-]

I don't think you actually succeeded in knocking anyone down a peg, though. I'd bet ~$50 that a neutral, outside observer (say, from a different English-speaking country) would say that a) you come off far worse than anyone else in the thread and b) they didn't find your post convincing.

I think our disagreement over the distinction between playacting and not boils down to something like, I believe that the very small nuts-and-bolts of social interaction (jargon, in-jokes, simple trigger-action responses like saying "bless you" after a sneeze) are more important than most people give them credit for. In other words, I think the silly theater ends up actually mattering? Or, to be more specific—I think most of it doesn't matter, but some small bits of it end up being really important, and so it's an arena I want to do explicit experimentation with. I want to see whether the small salute actually ends up being relevant to bonding and sense-of-purpose, and no, I don't have a double blind or anything like that, but I will be asking a bunch of fairly introspective people for their thoughts afterward.

I suspect, from your reaction, that you'd basically assert that this premise is false, and that the ... skin? ... of social interaction is meaningless, at least compared to the actual connections and information conveyed. This seems like a sensible, plausible position to take, but I think your mockery of the alternative hypothesis is unfounded.

I agree that if romance/sex/etc pop up, that would preclude the problem from being easily solved, but where did you get the impression that I was afraid of attempting to solve hard problems? There's definitely a filter to screen out immature or uncontrolled people; while you yourself might make it through, the persona you're currently expressing would've been rejected by the second paragraph of your original response. We've already turned away people for a variety of reasons, and at least one because of exactly this axis.

I appreciate the recommendation that I run things by Satvik. He's a perceptive thinker and I haven't run this by him yet. I wish that you'd responded in specific to more of my requests to draw out your suggestions—you're continuing to clarify your models of the problems, but not offering much in the way of replacements for the things I'm planning to try.

You're still not saying what you actually mean by the word "cult." There's a decent chance I'd agree with you—I've described the Bay Area rationalist community as a cult myself, even recently, when talking to friends and family members. But I was careful to disambiguate exactly what I meant by that, and I can't help but note that your continued refusal to spell it out makes me suspect that you don't actually have a coherent thing to say, and are just trying to score easy points.

I agree again with 1 (low empathy, etc.) though I think the strength of the effect is smaller than you seem to think it is. I think that you're still not believing me when I say I agree with 2? Note that I'm calling you out for unacceptable rudeness in this thread, for instance. I also suspect you have a huge typical mind thing going on, and vastly underestimate how easy it is for people to rub each other wrong while acting in complete good faith in a normal society—the bed example was maybe poorly chosen, but I disagree with you that it's easy to "default to behavior that is very unlikely to bother others." I've been in a wide range of social milieus, and it's much less about the actual behavior and much more about people's cough willingness to pick nits and start fights.

I think that you've lost all moral authority by doubling down on your "people should die for this" claim, and because of that, I think this'll be my last attempt to engage with you as an equal (you're not my equal; at least this facet of your personality is my clear inferior). I will, however, continue to read if you make those concrete suggestions I'm hoping you have somewhere.

In answer to your last two questions: yes, it looks like your irritation is repressed. Not here, because my main hypothesis is that here is where you finally felt safe to vent a ton of irritation that you've been repressing in other arenas, for long amounts of time. Just look back at your first post—maybe a quarter of it was in response to me, and the rest is long-simmering, long-festering frustration about a bunch of other things (some of them valid and some of them not). Textbook repress-then-explode. And 2, your claim that posting anonymously equates to not causing interpersonal drama is again so laughable that unless it's a deliberate joke, you're revealing this persona to be less socially aware than literally the most awkward and inept rationalist I've ever met.

You're not unpleasant so much as just ... not showing yourself to be worth the time. I really hoped I could get more out of you, because I actually know, on a deep level, that I don't have all the answers and the opposition is the first best place to look. But in terms of useful-criticism-per-word, you've been outdone by every other person who's registered reservation or disagreement here.

Comment author: Pimgd 29 May 2017 10:45:23AM *  5 points [-]

I don't know if I'm neutral (no, because I've had an account here for a while now), but I wouldn't have the confidence to throw out that bet the way you do. The post in and of itself is not convincing enough for me to say that your idea won't work, but it certainly makes me go "hmm, well, he might have a point there".

Specifically:

  • "Normal" people don't need to explicitly write out all the rules for their housing with regards to social rules.
  • But here there's a large list of rules and activities and all that with the goal of getting group housing to work properly.
  • Also, here are some examples of the group of people that you want to source your participants from having low social skills.
  • By the way, if you set up a ton of rules then it usually won't work.
  • Thus, there's a pretty big chance that the rules will not work out and that the social skills of the participants will be too low to have the group housing work.

I am not convinced that this is the truth.

However, if I read in a year from now that this is what happened, I would not be surprised.

Basically what I'm saying is I can see 1 or 2 people leaving due to drama despite the rules if you try this, with a chance greater than, I dunno, 10%?

Comment author: JacekLach 30 May 2017 06:20:50PM 4 points [-]

You're looking at content, not status (as implied by 'knocking someone down a peg'). My immediate reaction to the top-level comment was: "well, they have some good points, but damn are they embarrassing themselves with this language". Possibly shaped by me being generally sceptical about the ideas in the OP.

Insofar as the bet is about the form of the post, rather than the content, I think Duncan's pretty safe.

Comment author: Viliam 01 June 2017 01:31:35PM *  3 points [-]

"Normal" people don't need to explicitly write out all the rules for their housing with regards to social rules.

I have seen normies having endless fights about trivial things, such as "who should buy toilet paper", that a simple explicit norm could solve. (For example "people keep buying the paper in turns, when you buy one check this box to keep everyone informed" or "Joe buys the paper, everyone else gives Joe $2 each month" or whatever.)

The best case, of course, would be trying to be nice by default, and solve explicitly the situations where the default behavior fails. But that seems like what would quite likely happen in the Dragon Army anyway... or maybe I am just applying the typical mind fallacy here.

Comment author: Lumifer 01 June 2017 03:26:26PM 2 points [-]

I have seen normies having endless fights about trivial things

You should take the Hansonian approach. Fights over toilet paper are not about toilet paper.

Comment author: math_viking 04 June 2017 03:48:22AM *  3 points [-]

I do somewhat agree with your objections to the list of specific skills attained after a year. I had hoped that the large word DRAFT at the top, plus the repeated statements that the whole plan was to iterate, and that I didn't expect to be able to figure out the right stuff on the first try, would've clued you in to the fact that I, too, am aware that the list is inadequate. Do you have specific suggestions for replacements? Keep in mind, the hard problem is to balance things-that-will-be-generally-useful-for-a-medium-sized-group-of-people against the fact that everyone involved has their own specific career and expertise already. Part of the impetus here is social, part of it is becoming well-rounded, part of it is practicing the skill of gaining/improving skills, and all of that is trying to avoid skating into trivial irrelevancy. Got any ideas?

I'm not the originator of this thread, but that part did resonate with me. I don't think there's anything wrong with those skills, but the combination of choice of skills and the desired level of competency does seem to be decidedly mediocre given the effort and people involved.

1) Above-average physical capacity

What is average? In the US, you could probably be somewhat overweight with no strength, speed, endurance, or agility to speak of and still be "above average."

(2) Above-average introspection

I would expect almost all of the people who volunteer to be part of a rationalist group house to be there or pretty close to there already.

(3) Above-average planning & execution skill (4) Above-average communication/facilitation skill (5) Above-average calibration/debiasing/rationality knowledge

I think my previous comment applies here as well. Perhaps you have a different conception of "average" than I do, but I think if you're going to establish a long-term mini-dictatorship of a group house, you should be aiming for quite a bit higher than "above average."

(6) Above-average scientific lab skill/ability to theorize and rigorously investigate claims

I don't really understand this one. Is your group house actually going to have the ability to practice conducting laboratory experiments? That's a very high overhead endeavor.

(7) Average problem-solving/debugging skill (8) Average public speaking skill (9) Average leadership/coordination skill (10) Average teaching and tutoring skill

Average? Your goals are to reach average, after a year of dedicated effort? Getting into the 80th percentile of anything numbered 1-10 on this list should require a minimum of effort on the part of dedicated individuals following strict rules, unless you have some specific medical condition interfering.

(11) Fundamentals of first aid & survival

How fundamental is fundamental? This also shouldn't take very long if you are willing to put in the effort and practice a bit (2 weeks, at the outside, though you could cover the true basics in a long weekend). I don't know how it's related to the rest of the goals, though, or why it's important enough to be on the list. Also, you should practice many of these skills in the actual wilderness, which means time away from everything else.

(12) Fundamentals of financial management

Again, I'm not sure what's "fundamental." You could spend 2 days on this, or the entire year.

(13) At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill) (14) At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)

Do you have the ability to teach/practice trade skills at the house? I would expect learning any of these things, to an employable level, within a year, would require spending time similar to a full-time job somewhere that has infrastructure, in addition to a significant investment of money (at least a few thousand dollars). (I checked some local welding and plumbing classes at community colleges, which is where I'm getting those numbers).

Someone who already has one of these skills (I'm guessing you'll have a few coders at least) is going to be at a tremendous advantage in terms of time and possibly money compared to someone who is not. 13 and 14 are each going to represent a greater time investment than the others combined, unless you already have them.

As a meta note, I think that people who cower behind anonymity don't deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I'm treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall). You're currently nothing and nobody and have no skills; that will change as soon as you a) reveal yourself or b) demonstrate credibility under this pseudonym.

I don't know if you care, but I would say I already meet a similar number of these criteria. The only one I definitely don't meet is 14. I'm willing to tie this account to my real name and explain/prove why I meet them (though some of them would be quite difficult to really prove; I could only argue for them).

Comment author: Duncan_Sabien 04 June 2017 01:22:56PM 4 points [-]

The problem seems to me to be the tradeoff between going deep and going wide, with the added complexity that going deep on the wrong thing seems strictly worse than going wide, and so we're defaulting to going wide where there's uncertainty.

Put another way, it's unlikely that any of those specific skills are going to be particularly important to any of our longest-term goals, but it also seems counterproductive to just sit there thinking about which direction to go in. I'm usually not the biggest expert in the room, but I usually am the most generally competent in terms of being able to fill holes or solve whatever problem crops up, and it's because I have a habit of just constantly churning and picking up new skills and methods and heuristics wherever I go. I suspect that others would benefit from a similar habit, in particular because once "the right skill" does come along, you have both the affordance to start learning it and a variety of experiences allowing you to learn quickly and efficiently.

That's a claim. Not necessarily supported, but reasonable, I think, and worth trying out.

I note that I disagree that it's easy to beat the average in all of these things at once. People who don't actually check their abilities against a standard tend to be wildly overconfident, and people tend to underestimate how long it will take them to learn X or accomplish Y; these things are solidly documented. And while competence does tend to cluster (e.g. "g"), so the picture's not quite as bleak as the second half of this sentence suggests, once you've got a dozen different domains and are shooting to be above the 50% mark in all of them, you're looking at a person who's approximately one in four thousand, and when you try to get a whole group to hit that mark, the challenge is pretty real. I wouldn't be surprised if most people have most of this easy, but I think you're not fully grokking the difficulty of making everybody baseline competent in all of these domains. For instance, you note that many of these skills require only a few weeks, but I don't know if you added up all of those weeks, compared them to the time commitment, and noted that they're all being practiced off-hours and people have their own jobs and lives as well.
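
(Spelling out that arithmetic, under the simplifying assumption that the twelve domains are independent and the bar in each is the 50th percentile:

\[
\left(\frac{1}{2}\right)^{12} = \frac{1}{4096} \approx \frac{1}{4000}
\]

Since competence clusters, the true fraction is somewhat larger than this, but the order of magnitude conveys the difficulty.)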

It's a floor, though, not a ceiling—we're aiming at "world class skill," we're just not naively expecting that getting there is going to be easy, and initial expectations are meant to be exceeded.

Various additional points ...

  • The trade skill goal got scaled back in response to another comment; it was the hardest/sketchiest one to begin with.
  • We will have some ability to practice trade skills at the house, and are adopting a norm of going and seeking professional instruction outside from time to time.
  • I buy that you meet a large number of these criteria; I meet most of them myself. But the ones I don't have are sticky/tricky.

Comment author: ChristianKl 27 May 2017 11:52:21AM 2 points [-]

As a meta note, I think that people who cower behind anonymity don't deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I'm treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall).

The number of criteria he hits likely depends on the definition of average. The reference class matters a great deal.

Comment author: drethelin 29 May 2017 08:52:53PM 8 points [-]

This is why we need downvotes.

Comment author: [deleted] 28 May 2017 05:35:03AM 5 points [-]

Members of the Berkeley rationalist community are particularly low-empathy and embody the worst of individualism, such that they don't actually care whether or not what they're doing might bother others until they're told to stop.

lol

Comment author: 18239018038528017428 26 May 2017 08:57:39PM 5 points [-]

(Comment too long to add more directly.)

Somewhere else in the comments, Qiaochu says:

I am super, super in favor of this experiment, and would have enthusiastically participated fully in it something like 2 years ago, before moving to Terabithia. I think it's tackling the biggest things missing from the community and am very excited to see what happens.

Well, given the trajectory of your own life, Qiaochu, I think that actually counts as an argument against "Dragon Army", and really the rationalist community as a whole, being good for the participants. I notice that you've shifted from posting insightful, detailed blog posts to impersonally spamming links to rationalist ingroup bullshit on Facebook all the time -- in some sense it's like you've been trending in the direction of being less and less of a real person as time goes on. (Which, as a friend of mine pointed out, is actually generically very common, like how a smart and quirky high school student goes to Harvard, starts adopting more and more of a "professional" demeanor, becomes progressively less interesting, and eventually dies a mental death far in advance of their physical expiration...)

Comment author: Duncan_Sabien 27 May 2017 01:46:26AM 7 points [-]

Oh, dear. This is terrible, and I wish you hadn't posted it, because there's literally no value to be had in delivering this sort of message in this sort of way. Disendorse; I claim this is evidence that most of your arguments about social capability should be somewhat discounted, since they're coming from someone unskilled.

Comment author: Raemon 27 May 2017 03:10:04AM 16 points [-]

I honestly think this person has been engaged with enough, at least until they make the kind of concrete claims you've been asking for. I think it's commendable to have responded with the good mix of "look at their plausibly good points while calling them out on their bad points", but at some point it becomes uncommendable to engage with people who are clearly not arguing in good faith.

Comment author: Duncan_Sabien 27 May 2017 03:24:59AM 5 points [-]

Yeah, I'm done replying at this point. +1 for the outside view check, though—if I weren't already done, I would've appreciated your intervention.

Comment author: drethelin 29 May 2017 09:17:34PM 2 points [-]

I disagree.

Comment author: Duncan_Sabien 29 May 2017 09:19:24PM 3 points [-]

Fair. Care to put forth a model? You don't have to; simply weighing in is also a contribution (just a less useful one).

Comment author: drethelin 29 May 2017 09:34:05PM 8 points [-]

Our ability to concretely describe the effects of social groups on people in general is kind of limited, but things like "person X joined social group Y and now they concretely do behavior Z" are available. If you see people join a group and then become concretely worse (in your own assessment), I think it can be valuable to refer to specifics. I think it can be important and virtuous to convey what you think is a pernicious process, and unfortunately naming someone you personally know is a very effective, if cruel, way to do it. Anecdata, and especially anecdata based on the content of someone's facebook feed, is not a great snapshot of a person at different times, but it's still a source of information.

I'm not sure what you think a better sort of way to deliver this sort of message is, but to some extent any nicer way to do it would be less effective in conveying how bad you think the situation is.

Comment author: Duncan_Sabien 29 May 2017 09:55:41PM 2 points [-]

That seems true and correct to me. I note that my response to this specific comment was ... motivationally entangled? ... with my responses to this person's other comments, and that I was adopting a cross-comment strategy of "try to publicly defend certain norms while engaging with everything else that doesn't violate those norms."

I think it's defensible to say that, in so doing, I lost ... fine-grained resolution? ... on the specific thing being said above, and could've teased out the value that you were able to identify above separate from my defense of a) norms and b) Qiaochu.

Thanks!

Comment author: grendelkhan 09 June 2017 09:57:06PM 3 points [-]

I strongly support this post.

It would be much better if it were less inflammatory. The last sentence, in particular, is reprehensible. But you respond to the substance of the criticism you get, not the criticism you might want or wish to have at a later time. Otherwise you might as well be slashing your own tires. The vast majority of the discussion below is simple tone policing. Someone's telling you that your house is on fire, and you're complaining that they're shouting.

It's correct that it's incredibly troubling that the author didn't even consider romantic drama in designing his bootcamp. It's correct that these are really not impressive outcomes. They're moderately-functional outcomes. Shouldn't there be some sort of control group where people attempt a similar level of life-changing upward momentum on their own and see if it was actually effective to cede their autonomy? It is correct that trying to LARP a bizarre combination of Ender's Game and Fight Club is perhaps not a sign that this person has any idea how grown-ups work.

And most troubling of all, why weren't these issues noted by anyone who Duncan ran this idea by first? Why does it take this level of willingness to break with social norms to notice the skulls? And no, intoning "I Have Noticed The Skulls" doesn't mean you've actually addressed the problem unless you actually address it. Twelfth virtue!

In a broader sense, what the hell happened? I read the Sequences roughly when they came out, commented here occasionally, moved over to SSC and, more often, the associated subreddit. I donate effectively and regularly, I do my best to tax people's bullshit with bets, and I do feats with spaced repetition. Apparently while I was doing that and not being directly involved in the community, it turned into... this. Scott Alexander is getting published in moderately prestigious outlets. AI risk is mainstream. Effective Altruism is considerably more mainstream than it was. But the community at the center of it has, if anything, regressed, from what I've seen here.

Comment author: zjacobi 28 May 2017 10:31:45PM *  19 points [-]

I think the troll obliquely raised on good point with their criticism of the example for Rule 6:

For example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior

Let me pose a question to the reader of my comment: would you rather live in a house where you have to constantly verbally ask the other residents to stop doing things that they could have reasonably foreseen would bother you, or would you rather live in a house where people actually used reasonable expectations of what other people want to guide their behavior and therefore acted in a way that preempted causing other people irritation?

Treating something like your sleep disturbances as your responsibility is fine if e.g. you (like me) have lots of trouble falling asleep and something like people whispering 15 metres from your room is keeping you from falling asleep. In that case, those people are doing everything right and really don't know that they're hurting you. It is unreasonable to get angry at them if you haven't explained to them why their behaviour is bad for you.

Sometimes it's less clear though. I sometimes use the microwave after midnight. I know that the microwave can be heard in my room and in my room mate's room. When I use the microwave and think he might be asleep, I stop it before the timer finishes and it beeps loudly. There's not much excuse to wait for my room mate to specifically request that I do this; I'm more than capable of figuring out a) the microwave beeping at the end is loud and the sort of thing that can disrupt sleep and b) there's a way I can stop that from happening. It would show some failure of consideration if I were to shrug off the potential inconvenience that the microwave could present for my room mate for the slight benefit of not having to watch the microwave.

This points to one of the failure modes of Tell Culture, where people use it as an excuse to stop doing any thinking about how their actions can affect other people. This actually suggests that one potential house experimental norm could be something like "before taking an action that might affect another Dragon, pause and consider how it might affect them and whether the effect will be a net positive."

What this all comes down to for me is that it seems unfair to ask people to assume goodwill without also asking them to always attempt to act with goodwill.

Comment author: drethelin 29 May 2017 09:52:04PM 12 points [-]

I like this comment but I think what this and the original trollpost miss out on is that the LW community in general, due to having a lot of people with autism and sensory issues, has a ton of people who actually do NOT have "reasonable expectations of what other people want to guide their behavior". The OP quoted here is making a common typical-mind type error. Of COURSE it's better to live with people who intuit your preferences and act in accordance with them without being told what they are. But it's obnoxious to shit on attempted solutions to a problem by insisting that morally good people could never have the problem in the first place.

Comment author: zjacobi 30 May 2017 02:37:30AM 4 points [-]

Agreed. I have a bunch of social anxiety and dislike it when a certain degree of social smoothness is treated as necessary to be sorted into the category of "good person".

My specific criticism is of people (and I don't just mean other people; I've failed here before) who could (with ease, not with Herculean effort) intuit preferences but use Tell Culture or direct communication norms to completely avoid doing so. This is especially maddening if you have social anxiety, because you're left anxious about bringing the thing up, especially to someone who seems so otherwise socially competent.

Thanks for the chance to clarify my views here!

Comment author: Duncan_Sabien 30 May 2017 03:33:23AM *  3 points [-]

Yeah, +1 for not "hiding" behind Tell Culture to save effort.

One of the fixes for the anxiety thing is Circling/Focusing/pair debugging culture, which goes a loooooong way toward both a) building the trust and safety required to bring up such issues with less anxiety and b) actually providing Schelling points for when to say it. We're also doing a weekly retrospective where it'll be low-cost and high-support to gently point at such things.

Comment author: Duncan_Sabien 28 May 2017 11:14:58PM 2 points [-]

+1 to all of that, especially the last line.

Comment author: orthonormal 25 May 2017 10:13:07PM 19 points [-]

In the spirit of Murphyjitsu, the most obvious failure mode that you didn't mention is that I expect you to burn out dramatically after a few weeks, from exhaustion or the psychological strain of trying to optimize the experiences of N people. The bootcamp phase is not analogous to anything I've heard of you doing sustainably for an extended period of time.

So, do you expect Dragon Army Barracks to work if Eli has to take over for you in Week Four?

Comment author: Duncan_Sabien 25 May 2017 10:24:41PM *  7 points [-]

Hmm, interesting. My self-model is somewhat incapable of burning out during this, due to an ability to run forever on spite (that's only somewhat tongue-in-cheek).

It's a solid point, though. If I condition on burnout, I think that Eli manages or not based on the level of specificity and concreteness that we managed to get in place in the first few weeks. Like, I don't think Eli is competent (yet) to create the thing, but I do think he's competent to oversee its maintenance and preservation. So that seems to put a somewhat higher priority on early systemization and scaffold-building than might have otherwise been in my plan.

Good question.

Edit: also, probably the closest analogue to this in my past is being the sole functioning RA on a dorm hall of ~30 high schoolers in a high-stress school environment. That was probably within the same order of magnitude of juggling, once you account for the fact that my increase in skill since then is balanced by the increase in complexity/responsibility. I did a lot to try to manage the experience of those thirty people.

Comment author: Valentine 25 May 2017 11:19:30PM 7 points [-]

FWIW, my model of Duncan agrees with his model of himself here. I don't expect him to burn out doing this.

…and even if he does, I expect that the combo of Eli plus the sort of people I imagine being part of Dragon Army would pull it through. Not guaranteed, but with a strong enough chance that I'm basically not worried about a failure mode along the lines of "Flops due to Duncan burnout and subsequent systems failures."

Comment author: deluks917 26 May 2017 04:56:09PM 18 points [-]

-- A note: I originally sent Duncan criticism privately. I didn't want to add too much negativity to the discussion. But Duncan asked me to post publicly and I will defer to his judgement. It's his project and he is a very capable guy. I really hope DA succeeds; the rationalist community could be doing much better on many metrics. In general I find the model of DA very promising. But I have some serious concerns.

-- The ethics code seems extremely strict.

For example this rule strikes me as extraordinarily hard to follow: "A Dragon will assume good faith in all interactions with other Dragons". As does "A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts".

Earlier in the document Duncan said "Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings". This implies to me that Duncan intends to enforce the CoC pretty strictly. Should Duncan be confident it's reasonable to expect such large deviations from how humans normally operate? I should note that normal bootcamps do not require as much psychologically from their recruits. Even though bootcamps require obedience, they don't normally require recruits to think a certain way.

Duncan explicitly said he was willing to modify norms that members felt were too hard to follow ("Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly."). But he also said that the CoC was unlikely to change. If I thought the CoC was meant more as a set of guidelines than strict rules I would be less worried. But that is not how I interpreted the post.

-- How many people do we expect to leave or get kicked out?

I have moderated some internet communities (And admin an active one now). Temp bans and warnings can only go so far. At some points you have to be willing to pull the trigger and ban people.

The section on reparations reassured me that Duncan was thinking hard about how to keep people from falling off the path. In addition, unlike most internet communities, the DA recruits will be heavily vetted. But in order to enforce the reparations you either have to appeal to social pressure or the threat of kicking people out. I think the standards are very strict, so serious discipline might be needed.

-- Are there practical or ethical problems with this plan?

People who get kicked out of DA are still required to pay rent until they can find a replacement. Assuming they are on the lease, it seems highly unlikely you can kick them out of the house. However, if someone gets kicked out of the house they might be pretty negative towards the rest of the group. It's probably a bad situation to keep them around, but maybe they can't easily find a replacement or a new place to live.

Secondly, people who get kicked out might be psychologically unable to remain at the DA barracks. But until they can find someone to replace them they are on the hook for rent. In my personal opinion, joining Dragon Army should be a "good deal" for anyone involved. It's important that the downside of "get kicked out" -> "lose friends, need to find a replacement despite the fact that you got kicked out and maybe can't give DA a good review, on the hook for lots of rent" is manageable. I would really hate to see anyone get hurt. I assume Duncan shares my concerns but he didn't address them in the post.

In addition, has Duncan looked into the legalities surrounding renter's rights in California (and Berkeley in particular)? This isn't in the post even if he has done the research.

-- Duncan said the following: "I also considered whether to update/change my tone given your first impression, but it seems to be enough of an outlier that I probably won't make any deliberate effort."

It's plausible to me they aren't much of an outlier. I had the same reaction, as did several people I showed Duncan's post to (though other people thought Duncan's post sounded fine). If I didn't know Duncan was the curriculum director at CFAR I would have thought he was crazy and probably dangerous. Stuff about "living under my thumb", self-comparisons to Tyler Durden, and the Ender's Game quote about "quick, decisive obedience" really worried me. Some of the most shocking stuff, from my perspective, was in the pop culture references. But a number of things in the main text gave off an extremely strong cult vibe. Some examples include the "house salute" and the "various call-and-response patterns surrounding house norms". I should note I am not accusing Duncan of anything; based on his reputation he seems trustworthy. But his tone definitely set off loud alarm bells for me.

--

Again I am really happy people are considering new rationalist norms. Duncan seems like a very good choice to lead an experimental project. The general strategy of DA seems like a good one. But I wanted to share my concerns.

Comment author: Duncan_Sabien 26 May 2017 05:59:36PM *  3 points [-]

+1; general appreciation for your willingness to make the commentary public, so that I and others can interact with it openly.

EDIT: I got distracted dealing with the troll. I still hope to return to this comment, but if I fail to, please know that I am definitely mulling it over and taking its content seriously, and that I again thank you for posting.

Comment author: ChristianKl 27 May 2017 11:25:23AM 2 points [-]

I have moderated some internet communities (And admin an active one now). Temp bans and warnings can only go so far. At some points you have to be willing to pull the trigger and ban people.

In an internet community, you have fewer tools to change behavior than in personal conversations (and I say that having moderated a big personal development internet forum for years).

As far as personal development frameworks go, ideas like a "code of perfection" can be found in Landmark (/The Four Agreements). On the other hand, the actual verbal techniques advocated are NVC/Circling/Focusing/Internal Double Crux, which have values of authenticity and accepting the emotions that arise in the moment.

Humans sometimes do have instincts to see other people in bad faith. There are two ways to deal with it. ① Suppress it because you have a codex that doesn't allow the instinct to be carried out. ② Bring it authentically to the front and be open about it.

Landmarkish thought would advocate ① while Circling leads to ②. Both can work as cultural norms but they are different and if there's a desire to be in Circling mode, don't have rules that require the other.

Comment author: Decius 28 May 2017 07:43:43AM 3 points [-]

I'm managing/leading an internet gaming community, and the only tools I've ever had to use are selection and conversation.

I've had one person leave because their goal in joining was to acquire enough information and power to cause harm and they were so unsubtle about it that I was able to identify that and stop them. One additional person left because our norms of 'don't cheat' and 'be nice to our friends' were given to him gently by everyone in voice chat every time they were violated.

Oddly enough, both of those people ended up joining a specific competing group that held neither of the norms 'don't cheat' nor 'don't make public rape threats towards people who call out your cheating'.

And my selection method? Be public and pushy about what kind of norms you have, and push away people who don't already have and want to follow those norms.

Comment author: Benquo 26 May 2017 02:21:14AM *  17 points [-]

This seems similar to Leverage in a lot of ways. It seems like it would be really instructive to contrast your plan with Leverage's plan - as initially intended, and as executed - to see what you plan to invest in that they aren't, what you're not doing that they are, and costs and benefits of those differences.

Other contrasting case studies might also add clarity:

  • Esalen
  • kibbutzim
  • the old Singularity Institute house
  • residential colleges
  • fraternities
  • Buddhist monasteries
  • Christian monasteries
  • actual armies
  • actual paramilitary organizations / militias
  • Sea Org

It probably makes sense to 64/4 these (the 80/20 rule, applied twice over) with rough sketches from memory/stereotypes/Wikipedia-ing before bothering to do any time-intensive research.

Comment author: Duncan_Sabien 26 May 2017 02:23:59AM 6 points [-]

Yep. I don't have strong ties to Leverage, but I'm talking with a couple of the people and have friends involved who have better models than me. +1 to this point.

Comment author: ChristianKl 26 May 2017 11:39:50AM *  3 points [-]

Esalen is worth noting because it's a place that's extremely intellectually productive. There are many different paradigms of bodywork that come out of Esalen.

Esalen is central for the history of Feldenkrais, Rolfing and a bunch of other paradigms.

If you could build a community that succeeds to do for rationality what Esalen did for bodywork that would be a huge success.

Comment author: toonalfrink 25 May 2017 11:07:27PM 17 points [-]

For the next three months, I will embark on my own experiment of living in a high-standards high-group-activity environment. Specifically, a Buddhist temple.

The temple has an even tighter schedule. All residents wake up together at 5 am and go to sleep together at 10 pm. The rest is meditation, study and work, with 4 hours of free time. The weekends are free, so it adds up to being told what to do for 85 hours per week.
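
(The arithmetic, in case the 85 looks odd: 5 am to 10 pm is a 17-hour scheduled day, and 17 hours × 5 weekdays = 85 hours, so the 4 free hours apparently still fall inside the dictated schedule.)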

Over the years, I have stayed there six times for a week. The first days are usually a fight to adjust to the lower standards of living (the unpleasant valley). As the days go by, I become increasingly energized and sharp. When I leave, I'm in the best state I can be. Not even a CFAR workshop measures up to how much I upgrade in such a short time. And it's not the meditation. I've gone for days without really meditating and I would still upgrade.

This has led me to believe that something about our individualist style of living is profoundly wrong, at least for some people. Seems like a solution to many of our problems lies in collectivism. Think mental health, akrasia, Hufflepuff virtue, etc.

I am really interested in how this is going to fly. Please do post updates. I would also love to share my perspective. I think I'll have some interesting data.

Comment author: ialdabaoth 26 May 2017 12:15:22AM *  16 points [-]

I have tried similar things.

My strongest recommendation is to beware of internal power struggles. Even if you are fully understood to be in charge, if everyone under you is in a state of emotional mutiny, you WILL become compromised, and you WILL make mistakes, and those mistakes WILL be used to justify further emotional mutiny. This will spiral until you lose everything.

Moreover, some percentage of your trusted minions WILL undergo emotional mutiny. They will discover that they'd rather be somewhere else, doing something else. They'll discover that there are people other than you they'd like in charge of their lives. They will discover that they don't trust you as much as they thought they did. Even if you pick the best people -- hell, ESPECIALLY if you pick the best people, because the best people will have other people vying for their attention, seeking to undermine you from without.

Comment author: verbiage_ecstatic 01 June 2017 06:57:50AM *  15 points [-]

Chiming in because the problem of helping people level up is close to my heart.

Putting the social dynamics of the experiment aside (since there are plenty of people discussing that aspect), I'd like to offer some good-natured skepticism about the overall approach. (Good-natured meaning, I hope you actually do pursue this because I'm genuinely curious about how this will play out -- assuming the safety concerns others have raised are handled well, of course).

My skepticism is: this is too meta and too complicated to lead to actual progress.

I spent a few years at a company that tried to inculcate a deliberate process for getting to the right answer, including a culture of radical honesty and formal procedures for making decisions and learning from mistakes. This was a major priority at the company for a long period of time (last I checked, it's still going on), with backing from the entire senior management team, and was enforced by firing people who couldn't or wouldn't skillfully participate. I.e., they took it really seriously and put a lot of effort into it. The people who conceived and implemented it were in my opinion extremely smart and competent.

That said, in my opinion the effort spent on this program did more harm than good to the functioning of the company. The values and culture became an end in itself, as opposed to a means for helping achieve goals, and endless amounts of time and energy were spent debating, elucidating, learning, and critiquing the system. Competent professionals ended up becoming ineffectual because they gave up (or were forced out of) their unreflective expertise and got stuck in endless cycles of second-guessing. Some of that self-reflection may have given rise to new levels of skill (in my case, I did in fact feel like I benefited from my time there, although I think that was largely because it was my first job out of college so I didn't have that much to un-learn), but generally people felt disempowered by the initiative rather than improved.

In contrast, for the last few years, I've been running a tiny company where we have very little meta discussion and mostly just do object-level work. I feel 1000x more productive now than I did at my prior job.

My takeaway from this is that the optimal ratio of meta-level tuning to object-level practice is [small number] : [large number]. Meta-level thinking is extremely valuable and important, but I view it as the rudder on a boat: you need to be constantly making adjustments to keep pointing in the right direction, but 99% of the power generation goes into the main engine pointing forward.

If I had to generate a hypothesis as to why the concrete achievements of the rationalist community are less than might be desired, it would be that the community spends way too much of its energy on meta topics instead of on object-level progress. This is understandable, since a) meta-level discussion of rationality is what created the community in the first place, and b) object-level discussion can often be very boring compared to meta-level discussion. (I miss the intellectual stimulation of my previous job, even as I see it as basically a waste of time in terms of actually building a successful company). While understandable, I think it leads to predictable outcomes: a lot of talk happens but not much gets accomplished.

Looking at the proposed charter, I suspect there will be a very high amount of meta-level discussion, probably significantly more so than at my prior job that I thought was way too meta. That's because a) it's built in to the daily schedule, b) it's built into the mission, which is expected to evolve over time with the participants, and c) it's built into the community that the participants will be drawn from.

In addition to being too meta, I also suspect this experiment is too complex. Experimenting with a bunch of different norms, on top of the code of conduct and daily schedule, seems wildly ambitious to me. In the company I worked for, the set of norms and practices was set in stone by executive fiat, recruits to the company were presented with them prior to accepting jobs, and adherence to them was a major part of performance evaluation, and there was still a very high employee churn rate and a general agreement that the norms/practices as specified weren't consistently well-practiced throughout the company. The Dragon charter is for a smaller group of people, which makes things easier, but the norms/practices are expected to be a moving target, which makes things harder.

In my personal experiments with self-improvement, I've had the most success with extremely simple plans. My most successful self-intervention to date has been to download a simple habit tracker on my phone, and add a new daily habit, moving on to the next only after successful completion of the prior one for 30 days. When I first started trying to learn new habits, I would add a bunch of new habits at once, and I would always fail. It took me a very long time to get patient enough to only try to change one thing at a time (which requires accepting that I'm going to have habits I don't like in the interim that I don't try to do anything about).

Similarly, I've been successful growing my current company by having an extremely boring strategy of: ship code, talk to customers, ship code, talk to customers.

Simplicity does not come naturally to me; I like my ideas and strategies to be convoluted, complicated, ambitious, and interesting -- I get very bored with simple, straightforward approaches. So I'm a big believer in simplicity because I've learned the hard way against all my natural inclinations that -- unlike my natural inclinations -- it actually works.

So if I were trying to design a charter, I would pick one or two things that I think would be most likely to have a game-changing impact, and just focus on those things until they worked (or didn't). In contrast, the charter as it exists now feels to me like it has way too many moving pieces. That's just my intuition, of course, but I hope I've given a feel for where that intuition comes from.

Anyway, I admire the ambition in doing a project like this, so I hope my criticism is constructive and useful.

Comment author: Duncan_Sabien 01 June 2017 07:41:36AM *  3 points [-]

Thanks for the long and detailed response. I enjoyed reading it.

It's interesting that you highlight meta as being a dangerous failure mode—I actually strongly agree, which is why the aesthetic is tuned toward stuff like "just exercise" and "housemates should produce visible work." My sense is that a strategy of just doing stuff outstrips, in practice, a strategy of thinking really hard until you find the ideal move, especially when you take into account how many iterations you can get in if you're churning hard.

Hilariously, though, I'm further inside the rationalist bubble than I thought, because I accept your overall summation even though the intent was to be THE OBJECT LEVEL HOUSE (or at least, the house that does stuff even if it goes meta on norms). I still think we're set up to be relatively ahead, but as you point out, that's not necessarily a sufficient bar.

However, I'm much more concerned with:

In addition to being too meta, I also suspect this experiment is too complex. Experimenting with a bunch of different norms, on top of the code of conduct and daily schedule, seems wildly ambitious to me.

That rings very true to me, and has been an active concern of mine for the past couple of weeks. It seems like there are something like a hundred activities/experiments/norms/projects that are worthy of including in this, and something like 1.3 slots per week (and thus not even room for half), and I'm not at all certain how to best pick and choose and prioritize and optimize for success. In part, I'm hoping that if we just throw ourselves in and iterate (see above) we'll do better than if we agonize, but yeah, there are a lot of moving parts, and I wouldn't be surprised if we ended up trying to drastically simplify in like our fifth week house meeting.

If I had to really zero in on basics, I think they are:

  • Never give up on an experiment until its predetermined end date

  • Spend ~20 hours a week actually interacting in the same physical space as housemates (at least a subset)

... those, I think, are the iron core of the project.

Comment author: entirelyuseless 01 June 2017 02:04:21PM 2 points [-]

Spend ~20 hours a week actually interacting in the same physical space as housemates (at least a subset)

I'm curious why this is so important to you, unless it's just something to try out. I currently live alone and I like it that way, and I see no reason why spending more time with other people would be such a great thing.

You seem really rigid about excuses, though. I think the tendency will be that people will come up with an excuse that others find unpleasant or difficult to dispute. For example, when I was in the data science bootcamp in Berkeley, people would very frequently say, "I'm sick and I will be working from home today." Now, a lot of people were in fact sick, precisely because of so much physical proximity. But it was very obvious in many cases that the basic reason they were staying home was that they were tired of all the company and felt the need to get away. They did not, however, feel comfortable saying, "I just feel the need to get away."

The same thing was true when I lived in a monastery. You could not say "I just feel like sleeping in this morning," so people said "I didn't come this morning because I didn't feel well." We all knew that this simply meant they were tired and felt like sleeping in. But no one is comfortable confronting someone with the fact that they're not really sick if they say they are.

Comment author: Duncan_Sabien 01 June 2017 04:32:00PM *  4 points [-]

The focus on physical presence is a combination of research showing that it matters (there's some stuff I've collected from Dunbar, for example) and strong personal intuition from past experience. In many ways, it's the core of the thing being tested out, but I have a lot of weight on "it turns out to matter more than just about anything else."

re: excuses, the intention of the house is Not To Do The Stupid Thing.

Clearly, "mental health" days are a real phenomenon—I've taken some myself. And on a larger scale, psych blockers/motivational issues are also real. So it'd be stupid to a) pretend they don't happen, and b) push directly against them all the time, and never look at undercutting them or working around them. This plan pushes directly against them some, with commitments to just show up anyway, but that's not the only tool—one of the things I hope to do is increase the candor of all housemates, at least within the context of the house. This will take some practice and reinforcement, but I much prefer a norm of "Huh. I notice I just really didn't want to show up today" --> figure out what's going on and address it systematically, to a norm of "little white lie that nobody calls out."

It's also worth noting that the house has a pretty high introvert quotient, so there will be a lot of us (myself included) who are motivated to safeguard systems giving one the ability to get away from people for a while.

Comment author: Kaj_Sotala 26 May 2017 03:14:05PM *  15 points [-]

Caveat/skull: The obvious problem is people attempting to game the system—they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100.

Not a full solution, but gesturing in a direction that you might find useful: build the system in such a way that gaming it is encouraged and useful, and that the punishments are somehow self-balancing.

E.g. if the punishment is "do some chores", somebody who figures out that doing the chores is easier than their other obligations is at least clearing the list of all the chores that need to be done. If they run out of chores to do, new tasks can be added to the list, and they can choose whether doing them is still worth it.

I'm here kinda reminded of the evolution of pen'n'paper RPGs, which originally had disadvantages you could take during character creation that made you more powerful in exchange; of course people would munchkin by "forgetting" the disadvantages during play. Newer games got past that by making disadvantages give you zero points during character creation (or even cost points!), and instead had them award benefits if you roleplayed them during actual play. In general, games have gotten better the more thoroughly they've built in "trying to munchkin the rules automatically leads you to play the game more like it was designed to be played" as a fundamental game design principle.

Not sure how to do the "self-balancing costs" thing, but I am reminded of the bidding systems some houses have for chores: you offer money for doing some task, and if someone else finds the offered amount of money more valuable than the pain of doing the chore, they do it; otherwise you do it yourself.
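A toy version of that bidding system, under assumptions the comment leaves open (anyone whose "pain of doing it" price is at or below the offer can take it; the cheapest taker wins and is paid the full offer):

```python
def run_chore_auction(poster, offer, reservation_prices):
    """Decide who does a chore the poster has offered money to offload.

    reservation_prices maps each housemate to the minimum payment they'd
    accept for doing the chore (all names and numbers hypothetical).
    Returns (who_does_it, payment).
    """
    takers = {name: price for name, price in reservation_prices.items()
              if name != poster and price <= offer}
    if not takers:
        return poster, 0                  # nobody bites; the poster does it themselves
    winner = min(takers, key=takers.get)  # assumption: cheapest taker wins
    return winner, offer                  # assumption: winner gets the full offer

who, paid = run_chore_auction(poster="alice", offer=8,
                              reservation_prices={"bob": 5, "carol": 12})
# -> ("bob", 8): Bob values the $8 more than the pain of doing the chore
```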

Comment author: Duncan_Sabien 26 May 2017 03:45:20PM 5 points [-]

+1 to the general idea; not sure how to implement it myself but it's worth some five-minute timers.

Comment author: Benquo 26 May 2017 02:18:58AM *  13 points [-]

Praise: The focus on actually doing a thing is great.

Criticism: Most of this post was about methods the house will have, why these are OK, etc. Comparatively little was about what the house is going to be used to accomplish outside itself. This seems worth putting much more up-front thought into, given how much of the point is to make a house that can actually do a thing. Probably your methods and selection criteria are not very well-calibrated for whatever project will turn out to be best - human coordination is much easier when you're coordinating about something in particular.

Obviously you will not know everything perfectly in advance no matter how much planning you do - but planning to accomplish a particular thing is very qualitatively different from planning to accomplish things in general.

Praise: A lot of the details on how to live together well (group exercise, food, time explicitly set aside for checking in) seem really good. If step 1 is just "learn to live well together," that is itself a respectable project, and one most of the Rationalists have failed at. Probably most attempts at this fail; we only observe the old communes that didn't fall apart.

Comment author: IlyaShpitser 30 May 2017 10:18:19PM *  12 points [-]

[I don't want to be here, but this is important].

To Duncan: I am not going to say you are trying to start a cult group, like some other folks did in this thread. However, I am going to suggest some background readings on cults if you are interested. Cults are a hobby of mine. My favorite cults are Scientology, unofficial Scientology derivatives who kept most parts of the belief system (yes they exist), and the Fellowship of Friends and other Gurdjieff-offshoot cults. Also Carlos Castaneda's group is a fun one. Those are the fun ones to read about.

To people Duncan is talking to: you are a human being, not a space monkey. The space monkey road is not a good road, I speak from personal painful experience. The space monkey road is going to abstract personal growth issues in a way that will be counterproductive for you in the long run, imo.

Comment author: Duncan_Sabien 31 May 2017 06:24:16PM 6 points [-]

Ilya: if you recommend your top 2-5 sources, I'll commit to reading at least 30,000 words in the next two weeks. (I ask for more than one source in case you propose things I've already read.)

Comment author: IlyaShpitser 31 May 2017 06:39:06PM *  3 points [-]

Scientology: http://www.xenu.net/ (clambake.org). Lots of interesting links there, including about offshoots.

Castaneda: https://www.amazon.com/Sorcerers-Apprentice-Life-Carlos-Castaneda/dp/1583942068. Also some other stuff online, easy to google.

Live stuff on Robert Burton's Fellowship of Friends: http://robertearlburton.blogspot.com/. Also some exposes are googleable. Also some stuff on wikileaks. I have personal second hand info on this cult (was never in it, but know people who were). The Fellowship of Friends has their main base (Apollo, in Yuba County) in California and preys on educated, high salary types.

There are a ton of Gurdjieff offshoots in various states of virulence/danger. One thing I learned about the concept "cult" is it's a fairly fuzzy concept and sort of dissipates around the edges into fairly benign reading groups/clubs and so on. Probably has to do with how charismatic the main person (almost always male) is. So discussions of whether something is "culty" or not are, to me, kind of silly. If the question is raised at all, probably yes a bit culty.


I like reading lots of heterogeneous sources and personal accounts to try to piece together what's happening in places like that, rather than books.

Comment author: Duncan_Sabien 31 May 2017 06:45:20PM *  2 points [-]

Thanks! Half of these are brand-new to me; commitment made.

Comment author: cousin_it 31 May 2017 10:08:04AM *  3 points [-]

My favorite cult to read about is Rajneeshism. It's very recent, the head guy was almost supernaturally charismatic by all accounts, and the story is hilarious! From the collection of 93 Rolls-Royces to a bioterror attack by poisoning salad bars in an Oregon town with salmonella (yes).

BTW, Scott of slatestarcodex has also chimed in against the OP's proposal:

On third thought, everyone else is right and I am wrong. The Dragon Army group house is a very bad idea, enough so that it’s okay to be forceful in encouraging Duncan to modify it or other people not to join it. This is true even if the required modifications are so hard that they end up sinking the project.

Comment author: IlyaShpitser 31 May 2017 02:38:34PM 3 points [-]

Slatestar: "Also, Duncan’s taking the wrong strategy by denying it’s a cult. His pitch should be “Hey, cults seem pretty good at controlling their members, let’s get together a bunch of people who are interested in using cult techniques to become the best people they can be by their own values, and see if we can make it work.”"

And the circle is complete.

Comment author: tristanm 31 May 2017 07:15:33PM 2 points [-]

I agree with Scott on this. When proposing that we should return to well-explored territory found to be dangerous (which is what I claim cults are), we should at least be honest about the fact that we're returning to old territory, and perhaps argue that it was in fact not as well-explored as we thought and there might be good things to be found there.

But instead, Duncan appears to be arguing that, according to the Pendulum model, we have moved so far past the "old way of doing things" that we skipped over the optimum and are now in another poor solution. He suggests his proposal is a gentle nudge towards the optimum, but this doesn't seem to square with the fact that the "cult" model is the "old way of doing things" that we were previously stuck in. So to me it seems more like "swing even harder in the opposite direction!" when the pendulum should actually be slowing down, moving towards the optimum with less momentum than it had previously.

Comment author: Duncan_Sabien 01 June 2017 07:27:29AM *  2 points [-]

BTW, Scott of slatestarcodex has updated his post with an "on fourth thought" (in addition to his excellent theory on the dynamic motivating the disagreement) stating that he's moving away from concern (though not necessarily all the way to "unconcerned"). I'm hoping you would've posted this yourself—having sort of implicitly committed to using Scott's opinion as an advisory authority—if I hadn't done so first. Not just trusting him when he's on your side, and so forth.

I’m encouraged by this both because they seem like good ideas and because they sound like he’s thought this through more fully than I originally thought.

Also, if we are going to keep bringing in questionable outside blogging as source material, there's this, which I feel fairly treated by, and which comes from an author with actual relevant life experience.

Comment author: a_different_face 27 May 2017 12:45:01AM *  12 points [-]

This is a neat idea!

I expect it to fail. And I kind of wish you wouldn't try: I give maybe a 1/4 chance this fails sufficiently dramatically and publicly that I become less willing to be associated with the community because people start associating it with that failure.

In particular, here is what I expect to happen (~60% confidence it goes down something like this):

  • Someone will start regularly defecting within the first three months. Maybe they don't keep up with their chores, maybe they skip meetings, maybe they fail to get along with someone and they fight, maybe they persist in doing something they've been asked repeatedly not to do, maybe they chafe under your leadership and start practicing malicious compliance. I don't expect intentional defection so much as executive dysfunction, to be clear, but it has the same effect either way.

  • You, personally, will lack the force of character or charisma to fix it. (I haven't met you in person, so this might be way off; I'm just going off your writing and those of your pictures on Facebook I can see. But it takes an extraordinarily good manager to deal with this problem, and there's nothing in your bio which implies you are one.) You also, not being legally their military superior, won't have any actually worthwhile carrots or sticks to offer - this is the core problem as I see it: you lack the legal authority to properly enforce anything. Also, rationalists are weird, and often don't respond that well to the usual incentives.

  • The rest of the house will lose confidence in your leadership as a consequence.

  • Bad things. I don't actually know what happens at this step - people move out, or just stop playing by your rules and it reverts to a standard if unusually dysfunctional group house, or what.

Unfortunately I don't have fixes to offer you here, other than "try to figure out an enforcement mechanism which will work even on rationalists and which you can legally carry out". I can't think of such an enforcement mechanism, but haven't even put a full five minutes into it. Maybe you already have one in mind and I've missed it. To be clear, I don't think "ostracism" will be remotely sufficient, because of the aforementioned weirdness and the fact that people will have other friends to fall back on. (I guess you could only invite people without other friends, or require them to cut off contact with said friends, but that is a terrible idea.) I also want to say that I've seen a number of other communities either fail or struggle due to lack of an explicitly specified and actually effective enforcement mechanism for their rules.


Tiny side note: I think it's very important that members have regular one-on-one meetings with someone other than you, in case their problems are problems with you which they aren't willing to bring up to your face.

Comment author: Duncan_Sabien 27 May 2017 01:51:22AM 2 points [-]

Thanks for this detailed model. I had a sense of this as a failure mode, but I like the specific way you've expressed it.

I do actually have a fair bit of managerial skill. I dunno if it's better than 1/100, but it's at least in that range. I also completely agree about regular one-on-one meetings with other people; in part, that's what the "pair debugging/rapport building" time commitment is. I wonder if you think it's important that they be with a specific other person, or if you think just fostering lots of one-on-one communication hits the thing you're gesturing toward?

Comment author: jbeshir 30 May 2017 03:48:26PM *  11 points [-]

On the positive side, I think an experiment in a more centrally managed model makes sense, and group activity that has become integrated into routine is an incredibly good commitment device for getting the activity done - the kind of social technology used in workplaces everywhere that people struggle to apply to their other projects and self-improvement efforts. Collaborative self-improvement is good; it was a big part of what I was interested in for the Accelerator Project before that became defunct.

On the skulls side, though, I think the big risk factor that comes to mind for me for any authoritarian project wasn't addressed directly. You've done a lot of review of failed projects, and successful projects, but I don't get the impression you've done much of a review of abusive projects. The big common element I've seen in abusive projects is that unreasonable demands were made that any sensible person should have 'defected' on - people were asked for things, or placed under demands, that (viewed from the outside, and in retrospect) were in no way worth staying in the group to meet - and they didn't defect. They stayed in the abusive situation.

A lot of abusive relationships involve people trading off their work performance and prospects, and their outside relationship prospects, in order to live up to commitments made within those relationships, when they should have walked. They concede arguments when they can't find a reason that will be accepted, because the other person rejects everything they say, rather than deciding to defect on the personhood norm of using reasons. I see people who have been in abusive relationships in the past anxiously worrying about how they will justify themselves in circumstances where I would have been willing to bite the bullet and say "No, I'm afraid not; I have reasons but I can't really talk about them," because the option of simply putting their foot down without reasons - a costly last resort, but an option - is mentally unavailable to them.

What I draw from the case studies of abusive situations I've encountered is that humans have false negatives as well as false positives about 'defection'; that is, people maintain commitments when they should have defected, as well as defecting when they should have maintained commitments. Some of us are more prone to the former, and others to the latter. The people prone to the former are often impressively bad at boundaries, at knowing when to say no, at making a continually updated cost/benefit analysis of their continued presence in an environment, at protecting themselves. Making self-protection a mantra indicates that you've seen part of this, but an overall model of "humans defect on commitments too much," rather than "humans are lousy at knowing when to commit and when not to," seems likely to neglect what various ideas will do to the people prone to false negatives.

The rationalist community as a whole probably is mostly people with relatively few false negatives and mostly false positives. Most of us know when to walk and are independent enough to be keeping an eye on the door when things get worrying, and have no trouble saying "you seem to be under the mistaken impression I need to give you a reason" if people try to reject our reasons. So I can understand failures the other way not being the most salient thing. But the rationalist community as a whole is mostly people who won't be part of this project.

When you select out the minority who are interested in this project, I think you will get a considerably higher rate of people who fail in the direction of backing down if they can't find a reason that (they think) others will accept, in the direction of not having good boundaries, and more generally in the direction of not 'defecting' enough to protect themselves. And I've met enough of them in rationalist-adjacent spaces that I know they're nearby, they're smart, they're helpful, some are reliable, and they're kind of vulnerable.

I think as leader you need to do more than say "protect yourself". I think you need to expect that some people you are leading will /not/ say no when they should, and that you won't successfully filter all of them out before starting, any more than you'll filter out all the people who will fail in any other way. And you need to take responsibility for protecting them, rather than delegating it exclusively to them. To be a bit rough, "protect yourself" seems like trying to avoid a part of the leadership role that isn't actually optional: if you fail in the wrong way you will hurt people, and you as leader are responsible for not failing in that way, and 95% isn't good enough. The drill instructor persona does not come off as the sort of person who would do that - with its unidirectional emphasis on committing more - and I think that is part of why people who don't know you personally find it kinda alarming in this context.

(The military, of course, from which the stereotype originates, deals with this by simply not giving two shits about causing psychological harm, and is fine either severely hurting people to turn them into what it needs or severely hurting them before spitting them out if they are people who are harmed by what it does.)

On the somewhat more object level, the exit plan discussed seems wildly inadequate, and very likely to be a strong barrier against anyone who isn't one of our exceptional libertines leaving when they should. This isn't a normal house share, and it is significantly more important than a regular house share that people are not prevented from leaving by financial constraints or inability to find a replacement who's interested. The harsh terms typical of an SF house share are not suitable, I think.

The finding-a-replacement requirement seems especially impractical: most people trend towards an average of their friends, so if someone's friends on one side are DA people and they themselves proved unsuited to DA, their other friends are probably even more unsuited to DA on average. I would strongly suggest taking only financial recompense from someone leaving, capped at a limited number of months of rent if a replacement is not secured, and either permitting that recompense to be paid back at a later date after immediate departure, or requiring it as an upfront deposit, to guarantee safety of exit.

If there are financial costs involved with ensuring exit is readily available, there are enough people who think that this is valuable that it should be possible to secure capital for use in that scenario.

Comment author: Duncan_Sabien 30 May 2017 04:42:50PM *  3 points [-]

Strong approval of all of this. The short answer is, I've spent tens of hours working more closely with the people who will actually be involved, looking at all of the issues you raise here. We're all aware of things like the potential for emotional abuse and financial entrapment, and are putting possible solutions into place; I simply didn't feel the need to lengthen the post by another third to include stuff that's only half-in-progress and also largely too detailed/irrelevant to outsiders.

(As a single bite-sized example: the "protect yourself" mantra is there to lay the baseline, but thus far we're also including a) explicit "non-conformity" training in bowing out of activities, coupled with strong norms of socially supporting people who "rule #1" themselves out, and clear ways to resolve anxiety or embarrassment and save face, b) weekly open-ended retrospectives that include room for anonymous feedback as well as public, c) two one-on-ones per week with me in which the number one focus is "how are you, can you be supported in any way," d) outside check-ins with someone completely unrelated to the house, to provide a fresh perspective and safe outlet, and e) regular Circling and pair debugging so that everyone knows "where everyone is" and has a cheap Schelling point for "I need help with X.")

Comment author: Screwtape 30 May 2017 08:23:11PM 2 points [-]

This is tangentially related at best, but if you have some high quality non-conformity training I would love to borrow it for my local purposes. I've got some, but still feel like it's the largest weakness in the rationality training I've been doing.

Comment author: cousin_it 29 May 2017 07:08:16PM *  11 points [-]

I think most people can do well by joining the kinds of relationships that are time-tested (marriage, friendship, work, school, gym, army, church...). Given how much trouble it took society to get these halfway working and to find decent boundaries, you should be skeptical that newly invented ones will work in your lifetime. Especially if they look suspiciously similar to cults, which we already know don't work.

And I'm not even sure why you need to invent new relationships! You might feel like you have huge problems that require one huge hammer to solve, but that feeling is deceptive. Mitigating the problems one by one, with boring well-known fixes, is easier and works better. If you want to get fit, join a gym. If you want to learn something, go to school. These will give you the right amount of structure and your daily dose of socialization, without regimenting your life like a boot camp, and you'll be guided by competent people instead of fumbling your way as a crowd of amateurs.

Comment author: Duncan_Sabien 29 May 2017 07:25:51PM 5 points [-]

I think there are a fair number of wrong (or at least underjustified/unfounded) claims in the above. e.g. "cults don't work."

This is largely not a new invention, and is instead largely a return to structures and values that have been known to work in the past, and have been loosened/undermined in the past few decades.

Comment author: cousin_it 29 May 2017 08:08:13PM *  8 points [-]

I think there are a fair number of wrong (or at least underjustified/unfounded) claims in the above. e.g. "cults don't work."

My opinion of CFAR just fell from "neutral" to "mildly harmful" because they hired someone who's willing to say the above. On old LW (where Eliezer wrote a sequence on avoiding cults and I was contributing decision theory math) this would've been unbelievable. Or maybe I've been missing the signs, not being in the Bay Area.

Comment author: Duncan_Sabien 29 May 2017 08:15:44PM *  8 points [-]

You're not thinking or arguing clearly, and are instead leaping to conclusions and pulling from stereotypes.

If you lose respect for CFAR over that, it's the result of your own confusion, and the loss of your endorsement is not one I'd lose sleep over.

One can say "guns are indeed effective" and not be advocating for wanton gun violence. It's a statement about objective reality—guns do things—not a statement about normative values. Similarly, I can argue with your claim "cults don't work" (which is clearly, demonstrably false on at least some axes; cults were in fact successful enough to cause large damage to a lot of people's lives at the very least) without saying "HECK YEAH, GO CULTS."

I'll continue to engage, or not, based on whether or not you respond reasonably to the above. Sorry for the impatience, but I've written thousands upon thousands of words in this thread by now, and I'm not at all in the mood to let people strawman me at this point (even if they want to try to pull a sneaky status move by claiming seniority-on-the-forum and trying to shame a certain kind of statement without any model behind the shaming).

(I also note that you didn't bother to respond AT ALL to my claim that you're making unfounded leaps, nor to my claim that this is in fact a return to previous proven systems rather than an attempt to invent a new one, which makes me think that in addition to smushing together unrelated things in your arguments, you're not actually here to discuss, i.e. swap statements back and forth on a topic and in fact interact with what the other person is saying, and are instead here to just score points or confirm (rather than falsify) your own models.)

Comment author: cousin_it 29 May 2017 08:58:03PM *  2 points [-]

If you took my original comment to mean that cults are harmless, that's a bit bizarre.

As for previous proven systems, I'm not sure which ones you mean. The closest analogue is religious or socialist communes, which turn bad too often for my taste. The happiest exception is kibbutzim, which weren't nearly as authoritarian as your idea. Then you have the army, which exists today just fine and we know what it's good for; I'm not sure why we need another one. Then there are boarding schools, sports camps, etc., but those are based on learning from professionals, which you don't have.

Comment author: Duncan_Sabien 29 May 2017 09:14:31PM *  5 points [-]

sigh.

I took your original comment to be saying "cults don't work."

Then, when I said "they do, though," I took your second comment to be pearl-clutching and saying "well, now I think CFAR must be (slightly) evil or stupid for hiring someone who is willing to say out loud that cults work (gasp)."

You cannot possibly have drawn out of my statements above "Duncan thinks cousin_it thinks cults are harmless."

I'm going to disengage because it's not easy to have discourse with you (say things clearly, stick to a topic, expose reasoning, actually make progress toward truth or convergence). I don't understand how your reasoning process works. I'm finding this subthread frustrating and low-value, and thus far the specific points I have been able to tease out of what you're saying, I generally disagree with (and trust my domain knowledge and expertise more than I trust your skepticism-without-any-concrete-evidence-backing-it-up-from-someone-who's-already-demonstrated-willingness-to-make-unfounded-leaps).

Comment author: drethelin 29 May 2017 08:43:20PM 4 points [-]

This is why we need downvotes.

Comment author: cousin_it 29 May 2017 09:57:38PM *  6 points [-]

Actually I agree. It feels weird to see that one person upvoted my comment without knowing how many would have downvoted it. The same might apply to Duncan's post, from the comments it seems like it was really polarizing, but the score only shows the 28 upvotes. If I may be allowed another reference to old LW, Eliezer used to advocate that people downvote more, ideally without replying. I think he saw it as a defense against noise and then left when the noise became too much.

Comment author: theotetia 26 May 2017 09:15:34PM 11 points [-]

Have you ever lived under obedience? This is often considered a prerequisite for holding command of e.g. a monastery.

Comment author: 18239018038528017428 27 May 2017 01:28:20AM 2 points [-]

Would anyone who has lived under obedience write such an astoundingly self-unaware post?

The answer to both questions is no.

Comment author: CronoDAS 26 May 2017 12:42:31AM 11 points [-]

I couldn't comment on the linked Medium article, so I'd like to say that, for many students, particularly middle and high school students, it is simply not true that they are in class voluntarily. I was routinely threatened with dire consequences if I didn't go to school, and attempts to remain at home and refuse to go were met with physical force - I was literally pulled out of my bed and taken to the car or bus. School is about as voluntary as the military draft.

Comment author: Duncan_Sabien 26 May 2017 01:08:27AM *  4 points [-]

You missed the entire point.

Edit: my original response was unnecessarily brusque and rude, and I apologize. I can elaborate further, but in the meantime, you might squint at the doc again, because it was a particular message about agency aimed at people in exactly your kind of situation.

Comment author: CronoDAS 26 May 2017 04:30:39PM 10 points [-]

The end result of my experiment in school refusal was being put on psychiatric medication. (Which actually did help, if you consider changing my preferences to something more socially acceptable to be helping.)

In hindsight, my best strategy might have been seeking a diagnosis of delayed sleep phase syndrome and requesting accommodations under the Americans with Disabilities Act. (The trigger for all this was that the school changed its starting time from 8:10 AM to 7:40 AM and I was not willing to deal with getting up any earlier.)

I was in a special education school from third to seventh grade, and I was absolutely forced to be physically present at that school as much as any prison inmate was forced to be physically present in prison. They couldn't force me to do schoolwork, and there were times I accepted a loss of privileges as the consequence for not participating, but any attempt to leave would be met by physical force. (The school even had a "time-out room" in which a student who became violent - a not uncommon occurrence - could be locked until he or she had calmed down.)

Participation was indeed a choice. Being physically present was not.

Comment author: robot-dreams 26 May 2017 03:20:45PM *  2 points [-]

Going to class was not voluntary for me either. The consequences of not going to class included: parents screaming at me, parents kicking my ass (tiger parent style; we didn't do "grounding" in my household), truancies going onto my "permanent record", a full day of detention on a Saturday, etc. Things that people call "voluntary" don't usually result in physical and emotional damage if you don't do them.

Nonetheless, I skipped class a few times in middle school, and I suffered the consequences as a result. Were the consequences worth the glorious days of freedom that I spent skateboarding near the beach, sitting in a local comic book store marathoning manga, etc.? Maybe; maybe not.

But whether I go to class is a choice that I alone have the freedom to make. My parents and the school can set the consequences, and they can apply a lot of pressure to make particular options more or less appealing, but they can never take away my ability to choose.

Comment author: Qiaochu_Yuan 26 May 2017 11:10:39PM 2 points [-]

but they can never take away my ability to choose.

So far! Security mindset.

Comment author: Nisan 27 May 2017 04:53:44AM 10 points [-]

Are there people external to the project who are going to keep an eye on this? I think it would be sensible for each participant to have a buddy outside the house who checks in with them regularly. And for each buddy to know who the other buddies are.

Comment author: taygetea 27 May 2017 02:39:10PM *  8 points [-]

This post puts me maybe 50% the way to thinking this is a good idea from my previous position.

My largest qualm about this is well-represented by a pattern you seem to show, which starts with saying "Taking care of yourself always comes first; respect yourself," then getting people to actually act on that in simple, low-risk, low-involvement contexts, and assuming that means they'll actually be able to do it when it matters. People can show all the signs of accepting a constructed social norm when that norm is introduced, without that meaningfully implying that they'll use it when push comes to shove. Think about how people act when actual conflicts with large fight/flight/freeze responses interact with self-care norms. I suspect some typical-minding here, as my model of you is better at this than most people are. I think it depends on what "running on spite" cashes out to. This is kind of a known skull, but I think the proposed solution of check-ins is probably insufficient.

My other big concern is what comments like your reply to Peter here imply about your models and implicit relationship to the project. In this comment, you say you'll revise something, but I pretty strongly anticipate you still wanting people to do the thing the original wording implied. This seems to defuse criticism in dangerous ways, by giving other people the impression that you're updating not just the charter, but your aesthetics. Frankly, you don't seem at all likely to revise your aesthetics. And those, ultimately, determine the true rules.

To summarize the nature of my issues here in a few words: aesthetic intuitions have huge amounts of inertia and can't be treated like normal policy positions, and people's self-care abilities (and stress-noticing abilities) cannot be trusted in high-stress environments, even under light to moderate testing.

-Olivia

Comment author: Duncan_Sabien 27 May 2017 04:45:19PM *  3 points [-]

I'm unlikely to revise the aesthetics, but a) the particular operationalization/expression of those aesthetics, and b) the boundary/balance between the aesthetics and other people's agency, are both fully open to debate, iteration, and consensus.

The whole point is to test out the aesthetic as it exists, to see whether it produces a better life for people, so it's important not to compromise it until some actual testing has taken place. But imagine e.g. a constructed social norm is approved of, proves to be problematic twice, and has one week left before its originally established "re-evaluate" point—I posit you get much better data out of seeing what happens if you keep the norm firmly in place, see the fallout for a week, watch people grumble and adjust, and then re-evaluate on schedule, than if you just constantly say "NOPE, DIDN'T WORK, SCREW THAT."

I think there's a strong instinct to buck norms and update in the moment, and that this is a pendulum swing thing—it's good that we do this a lot more than we did two decades ago, but it's bad that we do it as much as we do. There's value in learning to live with rules that don't change, or rules that are slightly stupid, and by setting rules firmly in place for e.g. three weeks at a time, I think you capture some of that value, at a low price in terms of loss of the flexibility thing.

Does that seem coherent/a valid response to your qualm?

Another way to say this is that I think the bar for "discard this norm" should be raised one notch higher from (straw description) "it bothered one of us once" to "it bothered several of us several times." If you keep it past the former, I think you see interesting effects in how people shape themselves around one another, and I think there's some valuable effect from transferring some sovereignty back from the individual to the social fabric (i.e. everybody's not just quittable at all times).

Comment author: Qiaochu_Yuan 31 May 2017 12:26:57AM 7 points [-]

I would like everyone posting criticism, especially heated criticism, to keep very firmly in mind that Duncan did not have to write this. Whatever your opinion of him, at least make sure you've factored in the evidence that he wrote this whole, weird thing, complete with references to Ender's Game, Fight Club, etc. instead of writing either 1) nothing or 2) something much more reassuring.

There are critics who think Duncan is incompetent and overconfident, and about this hypothesis I can say at least that it is consistent with Duncan having written this post. Then there are critics who think Duncan is, I dunno, evil or power-hungry or something, and I think those people are mostly failing to see what is in front of them.

Comment author: gwillen 26 May 2017 10:26:37AM 7 points [-]

I find this project very interesting! I can imagine an alternate-universe version of me being super excited to join it. I think it's even possible that the this-universe version of me could benefit a lot from joining it. (I would see most of the benefit from myself in solving Problem 2, I think.)

But... I think there is not more than an 80% chance I would make it 6 months in such an environment without hitting the eject button to preserve my own sense of (physical or psychological) safety. (That is, a chance of at least 20% that I would hit the eject button.) I do think it's great that Code of Conduct rule #1 encourages people to protect their own safety even at the cost of leaving the project. (Although for people of limited economic means this might be hard to execute, given the need to find a replacement, so probably "has the means to deal with needing to leave if the project doesn't work out" is a screening factor.)

It's possible this is just a fact about me, more than about the project. But I don't have the sense that a lot of other members of the rationalosphere would well tolerate, say, an actual military boot camp environment, which feels a lot like the direction this is aimed. It's possible I'm misunderstanding the degree of control you / the project expects to exert over the lives of the participants. But I know that I got happier when I adopted the rule that adulthood means never letting anybody force me to do anything that feels unsafe, even if refusing has significant costs. (For comparison, my largest concern about going to a CFAR workshop was that being subjected to a "comfort zone expansion" exercise, while in remote woods, with complete strangers, on a sunk cost of thousands of dollars, would be a high-stakes problem if I didn't like how it went. Pete Michaud correctly disabused me of this concern during the interview.) Again, perhaps this just means that Dragon Army is not for me. But I'm curious what you think about it. It seems hard to imagine I could go 6 months of committing to try to perfectly execute all the stated rules plus one experimental norm per week without ending up in at least one situation where following the rules felt unsafe.

Separately, I'm interested in whether you think Problem 4 could be tackled separately from an all-consuming project like Dragon Army. I feel like I have seen the "desperately hoping nobody will bail after the third meeting" thing a lot before, but usually the context is "a bunch of people vaguely want to get a thing done but nobody has really committed to it", in which context bailing after the third meeting is not violating any norms or agreements. Without making any new norms, one already has the option of actually asking for explicit commitments, rather than just seeing who shows up, and I think this option is not used often enough. I guess the failure mode of trying to solve Problem 4 alone is, if you ask for explicit commitments, you discover that people just won't give them in the first place. Dragon Army seems like a big hammer to solve this but maybe it's the only way?

Comment author: Duncan_Sabien 26 May 2017 10:59:54AM *  11 points [-]

I think the main issue here is culture. Like, I agree with you that I think most members of the rationalsphere wouldn't do well in a military bootcamp, and I think this suggests a failing of the rationalist community—a pendulum that swung too far, and has weakened people in a way that's probably better than the previous/alternative weakness, but still isn't great and shouldn't be lauded. I, at least, would do fine in a military bootcamp. So, I suspect, would the rationalists I actually admire (Nate S, Anna S, Eli T, Alex R, etc). I suspect Eliezer wouldn't join a military bootcamp, but conditional on him having chosen to do so, I suspect he'd do quite well, also. There's something in there about being able to draw on a bank of strength/go negative temporarily/have meta-level trust that you can pull through/not confuse pain with damage/not be cut off from the whole hemisphere of strategies that require some amount of battering.

It makes sense to me that our community's allergic to it—many people entered into such contexts before they were ready, or with too little information, or under circumstances where the damage was real and extreme. But I think "AVOID AT ALL COSTS! RED FLAG! DEONTOLOGICAL REJECTION!" is the wrong lesson to take from it, and I think our community is closer to that than it is to a healthy, carefully considered balance.

Similarly, I think the people-being-unreliable thing is a bullshit side effect/artifact of people correctly identifying flexibility and sensitivity-to-fluctuating-motivation as things worth prioritizing, but incorrectly weighting the actual costs of making them the TOP priorities. I think the current state of the rationalist community is one that fetishizes freedom of movement and sacrifices all sorts of long-term, increasing-marginal-returns sorts of gains, and that a few years from now, the pendulum will swing again and people will be doing it less wrong and will be slightly embarrassed about this phase.

(I'm quite emphatic about this one. Of all the things rationalists do, this one smacks the most of a sort of self-serving, short-sighted immaturity, the exact reason why we have the phrase "letting the perfect be the enemy of the good.")

I do think Problem 4 can probably be solved incrementally/with a smaller intervention, but when I was considering founding a house, one of my thoughts was "Okay, good—in addition to all the other reasons to do this, it'll give me a context to really turn a bazooka on that one pet peeve."

Comment author: Vaniver 26 May 2017 06:32:36PM 6 points [-]

I suspect Eliezer wouldn't join a military bootcamp, but conditional on him having chosen to do so, I suspect he'd do quite well, also.

Eliezer wasn't able to complete high school, for what I suspect are related reasons. (The sleep thing may have contributed, but I think it was overdetermined.)

I think I would have been extremely miserable if I had gone through boot camp at 18; I think I would have been able to bear going through it by ~25.

Comment author: catch223 28 May 2017 11:33:49PM *  4 points [-]

As someone who's done the whole military thing (am I alone?), I agree with your view that most members of the rationalsphere would struggle immensely in bootcamp, both in terms of physicality and culture (I'm referring mostly to the Army and Marines here, which focus on actual combat training, vs. the Air Force and Navy, which don't).

I totally agree that you would have 0 problems (other than patience with the stupid parts) as you have a high degree of physical ability, emotional resilience, and general cognitive ability. You would very likely excel. I could say the same of Val and Pete, and I'm sure Eli would do well (I don't know the others you listed well enough to venture a guess).

I have never met Eliezer. However, from what I've read and been told, I suspect he would struggle a great deal and be unlikely to succeed. I can't imagine Eliezer playing, say, football well either. My model of him says he's simply not optimized for that kind of environment, where his intellectual strengths would be limited and his weaknesses amplified. It's just not a remotely optimal environment for someone who is (according to my model of him) built like a race car: extreme performance within strict parameters (flat track, maintenance, etc.).

And that's okay. The military enlisted system at least typically focuses on taking both physical and intellectual generalists and training them to perform a specific job. It's all about the averages. The cockpit is decidedly not adjusted for individual needs or specialized performance for the vast majority of military personnel.

I do hope you're at least somewhat right about the long-term, increasing-marginal-returns sorts of gains, since that's my current strategy for achieving high impact on important matters.

Comment author: Qiaochu_Yuan 26 May 2017 06:41:59PM *  4 points [-]

I think a relatively tight analogy can be made between attitudes towards the authoritarianism of a military bootcamp and attitudes towards romantic relationships. Like, if you go through a string of really bad relationships with partners who consistently abused you, you might update that there's something inherently abusive about relationships and that you just shouldn't be in one again, ever, because your autonomy is too important. On the other hand there is such a thing as a healthy relationship, even a healthy relationship in which you have less than perfect autonomy because you've made some commitments that you're following through on, and you might be lucky enough to find yourself in one in the future if you're open to the possibility and search carefully for someone to commit to.

I think I disagree that the pendulum will swing back in the future though. The rationality community being the way it is now, prioritizing flexibility the way it does now, probably has the property that it attracts people who are prioritizing flexibility and turns off people who are looking for reliability. So if anything I expect the problem to get worse over time unless someone makes a deliberate effort to attract looking-for-reliability sorts of people - hopefully Dragon Army can do this.

Comment author: Decius 26 May 2017 12:00:34AM 7 points [-]

"roughly 90 hours a month (~1.5hr/day plus occasional weekend activities)" My math says that the daily 1.5 hours only accounts for ~45 of those 90 hours, so the "occasional" weekend activities must add roughly 10 hours every weekend on top of it.

"Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected. After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of "keep paying until you've found your replacement." "

It seems counterproductive to have people who have left the experiment living in the same house until they are replaced. Exit terms such as 'two months' notice, or less if a suitable replacement can be found or otherwise agreed' are less coercive.
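Spelling out the arithmetic in the first paragraph (a 30-day month and ~4.3 weekends per month are assumptions, not numbers from the charter):

```python
monthly_total = 90                      # hours/month, as stated in the charter
daily = 1.5 * 30                        # ~45 hours from the 1.5hr/day commitment
weekend_share = monthly_total - daily   # ~45 hours left for "occasional" weekends
per_weekend = weekend_share / 4.3       # ~10.5 additional hours every weekend
```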

Comment author: Duncan_Sabien 26 May 2017 01:12:24AM *  3 points [-]

Yeah, your exit norm is more what I was looking for. Thanks for the rework/reword ... I'll update it to something more like that soon.

The actual number we're shooting for is 30h/week, but not 30 hours every week. More like 20 hours most weeks and 40 or 50 every now and then.

Comment author: Viliam 01 June 2017 06:02:30PM 6 points [-]

Reading the comments... well, this escalated quickly.

I can imagine this going either horribly right or horribly wrong. So I would appreciate it if a group of volunteers actually ran the experiment, instead of everyone just offering their preferred analogy for what should happen. Preferably with good safety mechanisms, of which I can imagine two, both already mentioned in this debate:

(1) Give members mandatory time off to spend with their friends outside the "Army" - not just a weekend, but a full week, once in a while.

(2) It would be good to reduce the financial impact of leaving the group as much as possible; in a perfect world, there would be none. But of course, if you want to live in the same house, that costs money. It would be nice if the group could somehow collect extra money, as insurance, to allow people to leave without financial consequences. Perhaps make everyone pay 10% or 20% extra for the house? (Rough numbers sketched below.)
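A toy version of point (2), with entirely made-up numbers: every member pays a surcharge into a shared exit fund, and the fund covers a departing member's rent until a replacement arrives.

```python
def months_of_cover(members, rent, surcharge_rate, months_saved):
    """How many months of one departing member's rent the pooled fund covers.

    All numbers are hypothetical; surcharge_rate is the 10-20% extra
    suggested above, paid by every member into the shared fund.
    """
    fund = members * rent * surcharge_rate * months_saved
    return fund / rent

# e.g. 10 members, $800 rent, a 15% surcharge saved for 6 months:
print(months_of_cover(10, 800, 0.15, 6))  # -> 9.0 months of cover
```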

There is always a tension between freedom and commitment, and between individual freedom and group cooperation. It seems generally good to err on the side of freedom, because people in positions of power often have a bias in favor of less freedom (for others, of course), so this is how we balance it. On the other hand, akrasia -- almost a proverbial trait of wannabe rationalists -- is often an inability to follow one's own commitments. That is already damaging for individuals, and it makes group activity almost impossible. It would be nice to be able to overcome this and enter high-commitment situations (with limited scope, for limited time). Otherwise, we lose a lot of potential.

I can imagine myself benefitting from some kind of commitment enforcement, and rational life coaching in general. Of course, the devil is in the details. That's where things can go wrong easily. But if we can create enough safeguards, I support trying this, because there is so much to win.

A possible approach could be to select in advance two or three people trusted by the rationalist community as supervisors of the project. The supervisors would not participate in the project directly, but would have regularly scheduled meetings with members, individually, outside of the project, where the members could provide their opinions, and after hearing all of them, the supervisors would post an anonymized summary report on LW.

Comment author: Duncan_Sabien 01 June 2017 06:29:47PM *  3 points [-]

This is all generally sensible. +1

EDIT: Except for the part about posting an anonymized summary report on LW. It's entirely reasonable to have outside advisors and supervisors (in the sense of "well, if the thing's as good as I say it'll be, then I have no reason to want to hide"). However, it's silly to pretend that the house grants LW any kind of oversight, or specifically seeks LW's approval—I posted here because I thought LW would be a) mildly interested and b) would, in exchange for the mild interestingness be willing to provide some solid, concrete criticism, but that's pretty much as far as it goes.

Comment author: Duncan_Sabien 01 June 2017 03:03:02AM *  6 points [-]

An excellent post from Slatestarscratchpad that sums up (I think) something like 85% of the fundamental disagreement that's fueling the more heated clashes:

One thing that’s seemed striking to me in this Dragon Army discussion is the priors on different people’s threat assessments.

I remember when I was younger, I used to want to meet my friends from the Internet, and my parents were horrified, and had all of these objections like “What if they’re pedophiles who befriended you so they could molest you?” or “What if they’re kidnappers who befriended you so they could kidnap you?”, or less lurid possibilities like “What if they’re creepy drug people and they insist on bringing you along to their creepy drug abuse sessions and won’t let you say no?”

And I never developed a good plan that countered their concerns, like “I will bring pepper spray so I can defend myself”. It was more about rolling my eyes and telling them that never happened in real life. I’ve now met hundreds of Internet friends, and I was absolutely right - it’s never happened, and any effort I put into developing a plan would have been effort wasted.

I’m not claiming there are no Internet pedophiles or kidnappers. I’m saying that based on my own Internet communities, and my threat-detection abilities, and the base rate, I was pretty sure it was more in the realm of terrorism (the kind of stuff you hear about on the news) than the realm of car accidents (the stuff that happens to real people and that you must be guarding yourself against at every moment).

This is also how I think of people turning out to be abusers. It’s possible that anyone I date could turn out to be an abuser, just like it’s possible I could be killed by a terrorist, but it’s not something likely enough that I’m going to take strong precautions against it. This is obviously a function of my personal situations, but it’s a real function of my personal situation, which like my Internet-friend-meeting has consistently been confirmed over a bunch of different situations.

(Please don’t give me the “that’s just male privilege!” speech; men and women get abused at roughly similar rates. I do think that probably women are socialized to fear abuse much more, and that’s a big part of this, and probably other axes of marginalization contribute more)

One interesting thing about Tumblr and the SJ-sphere in particular is that because it comes disproportionately from marginalized communities, it has this sort of natural prior of “people often turn out to be abusers, every situation has to be made abuser-proof or else it will be a catastrophe”. I once dated someone I knew on Tumblr who did a weird test on me where (sorry, won’t give more details) they deliberately put me in a situation where I could have abused them to see what I would do. When they told me about this months later, I was pretty offended - did I really seem so potentially-abusive that I had to be specifically cleared by some procedure? And people explained to me that there’s this whole other culture where somebody being an abuser is, if not the norm, at least high enough to worry about with everyone.

I’m not sure what percent of the population is more like me vs. more like my date. But I think there’s a failure mode where someone from a high-trust culture starts what they think is a perfectly reasonable institution, and someone from a low-trust culture says “that’s awful, you didn’t make any effort to guard against abusers!”. And then the person from the high-trust culture gets angry, because they’re being accused of being a potential abuser, which to them sounds as silly as being accused of being a potential terrorist. If you told your Muslim friend you wouldn’t hang out with him without some safeguards in case he turned out to be a terrorist, my guess is he’d get pretty upset. At the very least it would engender the “stop wasting my time” reaction I had when my parents made me develop anti-pedophile plans before meeting my Internet friends.

And then the person from the low-trust culture gets angry, because the person has just dismissed out of hand (or even gotten angry about) a common-sense attempt to avoid abuse, and who but an abuser would do something like that?

I think it’s interesting that the Dragon Army idea received more positive feedback or constructive criticism on LW (which it was pitched to, and which is probably culturally more similar to me) and more strongly negative feedback on Tumblr (which is more full of marginalized people and SJ-aligned people, and also maybe more full of abusers, as judged by the number who get called out all the time).

Comment author: taygetea 03 June 2017 09:02:11AM 2 points [-]

I think people tend to need a decent amount of evidence before they start talking about someone looking potentially abusive. Then the crux is "does this behavior seem normal or like a predictive red flag?". In those cases, your lived experience directly influences your perception. Someone's actions can seem perfectly fine to most people. But if some others experience spooky hair-raising flashes of their questionably abusive father or a bad ex, that's evidence. The people who didn't think anything was weird brush off the others as oversensitive, risk averse, or paranoid. Then those raising alarms think of everyone else as callous, imperceptive, or malicious. It's not just people who don't alieve the correct base rates. Certainly those people exist, though they're much more plentiful on Tumblr than in person or on LW. It's very non-obvious whether a strong reaction is correct.

Neither side can truly accept the other's arguments. It's a bad situation when both sides consider the other's reasoning compromised beyond repair. That brings politics and accusations of bad faith on all sides. But there is a fact of the matter, and the truth is actually unclear. Anyone thinking at enough of a distance from the issue should have honest uncertainty. I suspect you're particularly prone to refusing to let the conflicting experience of others be seen by your deep internal world-models, to strongly underestimating the validity and reliability of that type of evidence. That would cause what you say to be parsed as bad faith, which other people then respond to in kind. That would cause a positive feedback loop where your prior shifts even further away from them having useful things to say. Then you'd end up a frog boiled in a pot of drama nobody else is experiencing. I'm not sure this is what's happening, but it looks plausible.

Comment author: iamaknave 30 May 2017 07:20:57PM 6 points [-]

One particularly dangerous failure mode is that people may lose the capacity to recognize when the situation is toxic, unhealthy or counter-productive. The sunk cost fallacy is a powerful thing, as are the effects of strong emotional attachment. You may want to consider having a mandatory short vacation period from the house. This will allow people to take some space to get perspective on the house.

You also may want to mandate external social supports such as therapy, external friend groups, etc.

Comment author: jimrandomh 30 May 2017 07:25:10PM 2 points [-]

Agree. I think there are ways to do this without making it seem scary or unnatural, like "everyone visits family for a week around Thanksgiving".

Comment author: handoflixue 30 May 2017 11:10:29AM 6 points [-]

First, you seem to think that "Getting Useful Things Done" and "Be 99.99% Reliable" heavily correlate. The military is infamous for bloated budgets, coordination issues, and high rates of sexual abuse and suicide. High-pressure startups largely fail, and are well known for burning people out. There is a very obvious failure state to this sort of rigid, high-pressure environment and... you seem unaware of it.

Second, you seem really unaware of alternate organizational systems that actually DO get things done. The open source community is largely a loose model of "80% reliable" components, and yet great things get built by these collaborations. Rome wasn't built in a day, and neither was Linux.

"we often convince ourselves that 90% or 99% is good enough, when in fact what's needed is something like 99.99%."

Third, and most bluntly: I don't think you have the slightest knowledge of Fault Tolerant Design, or how to handle Error Cases, if you would say something like this. I write software that can rely on its inputs working maybe 80% of the time. This is accounting software, so it is NOT allowed to fuck up on corner cases. And I do it just fine. 80% is perfectly sufficient, if you know how to build a system that fails safely.
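
(To make the pattern concrete, here is a minimal illustrative sketch, not handoflixue's actual code; the record format and names are invented. The point is that an 80%-reliable input stream can still feed a trustworthy system if every record is validated and failures are quarantined rather than silently processed:)

```python
from decimal import Decimal, InvalidOperation

def parse_ledger_line(line):
    """Parse 'account,amount' into (account, Decimal amount).

    Returns None on any malformed input instead of guessing:
    a bad record must never silently corrupt the ledger.
    """
    try:
        account, raw_amount = line.strip().split(",")
        return account, Decimal(raw_amount)
    except (ValueError, InvalidOperation):
        return None

def process(lines):
    accepted, quarantined = [], []
    for line in lines:
        record = parse_ledger_line(line)
        if record is None:
            quarantined.append(line)  # held for human review, not dropped
        else:
            accepted.append(record)
    return accepted, quarantined

# Even if ~20% of inputs are garbage, the accepted set stays trustworthy.
good, bad = process(["rent,1200.00", "???", "food,87.50"])
```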

I think this makes you a uniquely bad candidate for this sort of endeavor, because the first iteration of this experiment is going to be running at maybe 80% reliability. You're going to have a ton of bugs to iron out, and the first run needs to be someone who can work with 80%. And you seem pretty blunt that you're inept in that area.

Fourth, your thresholds for success are all nebulous. I'd really expect testable predictions, ideally ones that are easy for the community to evaluate independent of your own opinions. It seems like the goal of this exercise should be to produce data, more than results.


All that said, I do value the focus on iteration. I think you will be prone to making more mistakes, and inflicting more unnecessary suffering on participants, but I do not think you have any sort of malicious intent. And with no one else really stepping up to run this sort of experiment... well, if people are willing to make that sacrifice, I'm happy to learn from them?

But I think you dramatically over-estimate your ability, and you're selling short how badly the first version is going to go. There are going to be bugs. You are going to need to learn to deal with the 80% that you get.

And on top of that, well, the consequences for failure are actually worse than being homeless, since you're also responsible for finding a replacement. That's a really huge risk to ask people to take, when you yourself have absolutely nothing at stake.

I think your heart may well be in the right place, but the idea as currently conceived is actively harmful, and desperately needs to build in much better safety protocols. It also needs to be much clearer that this is an initial draft, that it will go badly as people try to figure this out, and that initial participants are going to be suffering through an unoptimized process.


Finally: You don't have a fail-safe for if the whole idea proves non-viable. As it stands right now, you kick everyone out but leave them on the hook for rent until they've run 3 replacement candidates by you. In the meantime, you enjoy a rent-free house.

It really feels like it needs an "ABORT" button where the participants can pull the plug if things get out of control; if you turn out power-mad; or if it just turns out a significant number of participants badly estimated how this would go.

The fact that you have nothing on the line, and no fail-safe / abort clause... really, really worries me?


TL;DR: Your plan is dangerous and you haven't given nearly enough thought to keeping people safe. Scrap what you have and rebuild it from the ground up with the notion of this being a safe experiment (and I want to emphasize both the word "safe" and the word "experiment" - you should be expecting the initial version of this to fail at producing results, and instead largely produce data on how to do this better in the future).

Comment author: Duncan_Sabien 30 May 2017 04:45:27PM *  2 points [-]

Nah.

(Having exchanged half a dozen comments with cousin_it, I now recognize the pattern of a) you're defaulting to the least charitable interpretation at every possible split point, b) many of your claims and conclusions are flat-out false, c) you're incredibly confident that you're correct about all of your assumptions and are including zero nuance or uncertainty, and therefore d) this thread will produce nothing of value. I feel no need to convince people who a, b, and c, especially those who are unable to distinguish object level standards from meta level ones. Contrast your post with jbeshir's, for instance, which is also highly critical but in an entirely constructive way that doesn't make the same mistakes.)

Yes, we have noticed the skulls. (http://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/)

Comment author: Kaj_Sotala 30 May 2017 07:10:58PM *  6 points [-]

Datapoint: I thought handoflixue's comment was much more reasonable and less uncharitable than cousin_it's opening comment was; in particular, the points about needing an explicit abort procedure sounded very reasonable, and it makes me slightly worried to see you making a comment that implies you're just disregarding them. (Only slightly, because of my personal trust in you and your abilities; I expect that people who don't know you will get much more worried.)

EDIT: I wrote this comment before reading your reply to jbeshir's comment; your response there further reduces my worry.

Comment author: katydee 26 May 2017 02:29:19AM 6 points [-]

With respect to power dynamics point one and two, there is another person known to the community who is perhaps more qualified and already running something which is similar in several respects - Geoff Anders of Leverage Research. So I don't think this is precisely the only group making an attempt to hit this sort of thing, though I still find it novel and interesting.

(disclaimer: I was at the test weekend for this house and am likely to participate)

Comment author: Lumifer 26 May 2017 02:20:29AM 6 points [-]

For the record: not for me. At all.

Comment author: Vaniver 27 May 2017 07:08:35AM 8 points [-]

I am Jack's complete lack of surprise.

Comment author: Duncan_Sabien 26 May 2017 07:53:33AM 5 points [-]

I'm curious whether the not for me is "there are different kinds of people and different kinds of brains and different kinds of personalities, and they actually sometimes need different nutrients, and this one is bad for Lumifers," or whether it's "there's something fundamentally broken here that I'm particularly sensitive to but others are more able to tolerate."

If the latter, I'd love it if you ever manage to put it into words. The goal is to avoid as many of the Stupid Things as possible.

Comment author: [deleted] 01 June 2017 12:37:48AM *  8 points [-]

So, I have actually lived in a semi-authoritarian culture, and have a sort of unique experience of seeing high rates of autism function under that culture (and let's not deny the high rates of autism in this subculture). While this doesn't sound like "cult" to me, I can think of a couple ways gratuitous harm could occur even if everyone is operating in good faith.

  1. Person A harms Person B. Person B realizes that their violation threshold is much lower than they thought when they signed on, and they want to bring it up for discussion, but you and Person A have a much better rapport than you and Person B. And Person B was uniquely attracted to this because they need their self-care to largely be outsourced to a group structure. So they don't actually have the skills they need to be agenty outside of group expectations, and simply continue to be harmed while being unable to bring it to anyone's attention until it's much too late to repair the relationships. I'd like to present myself as someone who has gotten feedback along the lines of "you're competent and mature" and who still does this sort of thing. It's not something that's easily predicted by the person or by people observing them.

  2. As mentioned in (1), simply outsourcing functionality to a group structure can leave people helpless when they have to act against the group or act without the group. I don't see much thought put towards transition plans for people when they leave DAB. Relating back to the childhood and adolescent experiences I claimed gave me insight into this, I have seen a lot of people flail once their version of the role you're taking here is gone. And they get hurt. This applies even more to people who've required extra structure to function, as in the case of autism (and I am one of those autistic kids that flailed). You might say that people are accepting that they will get no transition help once they leave the immersive, structured environment you're creating, but it seems naive to not at least prep them for the struggles they might have.

2a. Transition is even more important given that this is a necessarily isolating endeavor. The things you're proposing take a ton of time! People will be making a lot of interpersonal sacrifices to participate, and that will degrade whatever safety net they think they'll have if they leave.

Personally, I'm trying really really hard to separate criticisms from an aesthetic distaste and the fact that this looks like things I have been actively harmed by, when the people in charge were people who loved me and had my best interests at heart. So, apologies, because this comment is definitely biased by that.

As far as "there are different kinds of people and this is bad for helldalgos" goes, this is bad because I would do something like this if I tried to participate: outsource most of my functionality to group norms, overstate my ability to be transparent enough to function in a high trust environment like this, end up hiding rule violations, feel guilty, become dishonest, and have periodic emotional and mental breakdowns where I burn all of my relationships in the house to the ground. The fact that I behave like this under authoritarian structures might be a problem, but it's not one that's fixed all at once by starting an immersive group project where someone is in charge of me. I said a few hours ago to someone else that I would definitely participate if I didn't have so many roots where I live now and if I could actually stand living in the Bay, but upon reflection, I think not.

Comment author: Duncan_Sabien 01 June 2017 12:50:24AM 2 points [-]

This is outstanding, and I appreciate you taking the time to write it up.

I think 1) is an interesting and important dynamic that I had not previously thought about, and I'm curious if you have concrete thoughts as to how to repair it. I think that simply acknowledging it, and committing to accede to opinions-other-than-my-own in evaluating whether it's going on, is an important first step, but only gets like 15% of the way there. Similarly, I think norms of regular retrospectives and Circling-type environments will make it marginally more likely that people can bring this stuff forward and get it addressed, but not entirely, because anxiety, status, etc.

My first brainstorm there produces things like anonymous feedback structures, "interrupting" norms where people are free to call things to a halt, requests-to-speak-privately and requests-for-third-party-mediation as strong guaranteed "yesses," and maybe something like a norm that people can call for discussion or mediation to halt until their ideological Turing test has been passed? e.g. I can't just brush past your claim of harm; you have an absolute right to stop things from moving forward until you are satisfied that I at least understand the magnitude of your internal experience, even if I disagree with your description of what happened externally.

As for 2), it's an ongoing conversation; back-and-forth in these comments has already produced a lot of clarity on both non-defecty, structured ways of leaving, and also urgent, ejector-seat methods. (I've been a little slow to post concrete details because I claim the person clamoring for them loudest isn't engaging in good faith, but I'd be happy to PM). My current sense, though, is that these structures, while they should be put in place as soon as possible, should also be discussed with the group, rather than emerging entirely under my models.

Thanks again, particularly for your separating criticisms from aesthetic distaste—I feel you absolutely succeeded at that goal, and I felt that your comment was both a) actually valuable and b) entirely constructive.

Comment author: Lumifer 26 May 2017 03:49:07PM 7 points [-]

Mostly the former. I am an individualist and dislike collectivism. As befits a proper individualist :-) I also recognize that people are different and what's good for me is not necessarily good for thee. I can survive and function in collectivist environments like you propose, but I don't like them and don't see a good reason for me to be there.

As to the latter, it's hard to do a pre-mortem on something that's still in flux. Communes of different kinds -- from monasteries to kibbutzim and hippies -- have been around for many centuries and clearly some people like them and find them useful. There's enough history (which I'm not all that familiar with) to learn where the common pitfalls lie and what are the major trade-offs that you would be facing. I can't recommend a book, but I'm sure there's a few.

Generally speaking, I would expect the most likely mode of failure to be the way power dynamics develop. Authority and power are complicated and deadly -- tightly-knit communities can go very bad quickly this way (consult your favourite cult group horror story). Adding sex to the mix generally makes things... more volatile. The rationalist community doesn't strike me as being particularly capable of managing power issues.

Comment author: entirelyuseless 26 May 2017 03:17:22PM 4 points [-]

I agree with Lumifer.

Emotionally, the whole proposal strikes me as cultlike in a bad way. I can't defend that as a factual claim since I only skimmed the post (precisely because it is not relevant to me), but I am pretty sure that living in such a situation even for a short while would make me feel very, very bad.

Comment author: Duncan_Sabien 26 May 2017 03:34:42PM *  3 points [-]

Same question posed to you—to the best of your ability to tell, is this a bug in the system, a bug in you personally, or a simple instance of diff'rent strokes for diff'rent folks? And if a bug in the system, can you point straight at it?

Comment author: ksv 31 May 2017 01:14:28PM 2 points [-]

The biggest concern/red flag for me is one aspect of the authoritarian nature of the project. I would be perfectly fine with fully outsourcing decisions (giving higher intellectual status) but not with being a subordinate in full generality. What I'm trying to point at is the difference between "What should I do? He said to do "x" and I trust his expertise so this is my best option and I'm going to make myself do it if unpleasant" and someone forcing me to do the thing.

Which of the two would be my intuitive reaction depends mostly on your character/attitude, and this is something that is completely missing from the discussion so far. Hopefully that is because people know you and are sure it wouldn't be a problem, but your comments here only show competence; they don't exclude arrogance, or enjoying power too much and beginning to boss people around. I found the comparisons to military bootcamps and the talk of tyrants concerning, as this somewhat paints the image of "someone shouting at people to do stuff", which I expect to have severe negative effects and build up resentment quickly. In other words, it seems to me that constraining your image strictly to the one who decides what is to be done, as opposed to someone who also enforces the execution, would reduce the risk of failure of the experiment. Enforcing by regulating incentives should be fine, as it won't speak to System 1 and provoke the low-level "Who are you to tell me what to do" reaction.

Maybe this is an obvious point, that having a nice and respectful leader is better than a powerful tyrant, but I'm not sure how far I can generalize from my own preferences, so I decided to share anyway. Apologies if this doesn't make sense or wastes your time; I'm new to posting here.

Comment author: MaryCh 02 June 2017 03:58:52PM *  5 points [-]

First, thank you for writing the post so fully and readably - it is really impressive! And I do hope you go ahead with this, in whatever way you decide upon. But even granting that the setup is safe (which I do believe) and that the results would be exactly as intended, in the most useful and generally good way, I wouldn't join.

Because I think that when people become parents, they suddenly find themselves in a world that is much more uncertain. You can't reliably say that you will sleep through the night, for example, even when the kid mostly does. And this is already hard enough to get used to - I know from experience - and it is also hard to begin anew (though this might be less so for men). Imagine having actually trained yourself to be 100% in control of what you do, or even having let other people know that you are that kind of person. It's just not robust.

Comment author: Duncan_Sabien 02 June 2017 05:37:06PM 2 points [-]

Thanks for the comment. This is unique among perspectives given so far, and I liked seeing it a lot.

Comment author: handoflixue 31 May 2017 07:56:59PM 5 points [-]

Concerns about your philosophy

1) You focus heavily on 99.99% reliability. That's 1-in-10,000. If we only count weekdays, that's 1 absence every 40 years, or about one per working lifetime. If we count weekends, that's 1 absence every 27 years, or 3 per lifetime. (This arithmetic is spot-checked in the sketch at the end of this list.) Do you really feel like this is a reasonable standard, or are you being hyperbolic and over-correcting? If the latter, what would you consider an actual reasonable number?

2) Why does one person being 95% reliable cause CFAR workshops to fail catastrophically? Don't you have backups / contingencies? I'm not trying to be rude, I'm just used to working with vastly less fragile, more fault-tolerant systems, and I'm noticing I am very confused when you discuss workshops failing catastrophically.

the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system.

3) Numerous open source programs have been written via a web of one-shot and low-reliability contributors. In general, there's plenty of examples of successful systems that tolerate significantly more than 0.01% defection. Could you elaborate on why you think these systems "close the loop", or aren't destroyed? Could you elaborate on why you think your own endeavors can't work within those frameworks? The framing seems solidly a general purpose statement, not just a statement on your own personal preferences, but I acknowledge I could be misreading this.

4) You make a number of references to the military, and a general philosophy of "Obedience to Authority". Given the high rate of sexual assault and pointless bureaucracy in the actual military, that seems like a really bad choice of role model for this experiment. How do you plan to avoid the well known failure states of such a model?

5) You raise a lot of interesting points about Restitution, but never actually go into details. Is that coming in a future update?

every attempt by an individual to gather power about themselves is at least suspect, given regular ol' incentive structures and regular ol' fallible humans

6) You seem to acknowledge that you're making an extraordinary claim here when you say "I've noticed the skulls". Do you think your original post constitutes extraordinary proof? If not, why are you so upset that some people consider you suspect, and are, as you invited them to do, grilling you and trying to protect the community from someone who might be hoodwinking members?

7) Do you feel comfortable with the precedent of allowing this sort of recruiting post from other people (i.e. me)? I realize I'm making a bit of an ask here, but if I, handoflixue, had written basically this post and was insisting you should trust me that I'm totally not running a cult... would you actually trust me? Would you be okay with the community endorsing me? I am using myself specifically as an example here, because I think you really do not trust me - but I also have the karma / seniority to claim the right to post such a thing if you can :)
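
(A quick spot-check of the arithmetic in question 1, assuming roughly 260 weekdays and 365 total days per year:)

```python
failure_rate = 1 / 10_000   # what "99.99% reliable" means per event

weekdays_per_year = 5 * 52  # ~260
all_days_per_year = 365

years_per_absence_weekdays = 1 / (failure_rate * weekdays_per_year)
years_per_absence_all_days = 1 / (failure_rate * all_days_per_year)

print(years_per_absence_weekdays)        # ~38.5 years: about one per working lifetime
print(years_per_absence_all_days)        # ~27.4 years
print(80 / years_per_absence_all_days)   # ~2.9: about three per 80-year lifetime
```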

Comment author: Duncan_Sabien 01 June 2017 02:47:58AM *  2 points [-]

I note for others reading this comment and wondering why it hasn't been addressed that I've ceased replying to handoflixue and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It's possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I'll likely respond to them.

http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

Comment author: PeterBorah 25 May 2017 09:50:04PM 5 points [-]

Somewhat scattered reactions:

  • I am really interested to see the result of this experiment.

  • I think the underlying models are extremely plausible, with the next bullet point as a possible exception.

  • I am aesthetically very skeptical of phrases like "absolutely reliable" (in Problem 4). I don't think it's possible for something to be absolutely reliable, and it seems dangerous/brittle to commit to achieving something unachievable. However, this may be primarily an aesthetic issue, since I think the solution presented in Problem 3 is very sensible.

  • I don't buy claim 4, "It does actually require a tyrant". I agree that it isn't always possible to achieve consensus. I don't think that hierarchical authority is the only way to solve that problem. Democratic Centralism is a well-tested alternative, for instance.

  • I find the code of conduct worrisome, at least as presented. The rules seem likely to encourage hypocrisy and dishonesty, since they make psychologically implausible demands which in many cases are undetectable at time of infraction. This could potentially be mitigated by norms encouraging confession/absolution for sins, but otherwise I expect this to have corrosive effects.

  • I am totally uninterested in joining the experiment, despite my interest in its outcome. I would be likely to be interested in substantially more time-boxed activities with similar expectations.

Comment author: Screwtape 30 May 2017 05:24:40PM *  4 points [-]

0) This is not for me, not because of a bug in the proposed structure but because I don't know you and don't know any of the people recommending you. There are two people who immediately came to mind whom, if they proposed this with themselves in your place, I would join up with over most alternatives, and three more whom I would probably follow into something like this over my current situation.

1) You can't name something Dragon Army and not expect nerd pedantry, but this is pedantry with a point behind it. Dragon Army (in the book) distributed leadership down as much as possible. Each toon leader had more degrees of freedom from Ender's plans, each toon had a second who was expected to make decisions, and soldiers were more free to question their toon leaders. I know Dragon Army (the name) has a certain positive association in rationalist circles, but what you're describing sounds more like Salamander Army. This is meant as nerd pedantry more than disagreement with your proposed goals or metrics (Salamander was doing really well in the standings after all) but the difference between Salamander and Dragon hierarchy seems important in this context. Dragon Army won by having a dozen good commanders all thinking at once, Salamander won by having one or two good commanders and being able to expect sharp obedience from everyone under them.

2) The second-highest-value change (the highest is brought up in point 0) would be some form of "I Told You So" and accountability. I find I am much happier to submit to doing things I think are incorrect if my dissension has been recorded and I can point at it later. Something like an internal prediction market is probably overkill and would erode confidence in leadership in a bad way, but a norm where someone says "I'm 70% confident this treehouse won't support enough weight if we nail it like that" and someone else quickly sticks that in a Google form might be fast enough not to interrupt things (a minimal sketch of such a log appears after this comment). This may or may not help with general cohesion or be relevant to the people who are actually probably joining.

This is sort of related to how often "sure, I'll do it the way you said, as long as I have it in writing that I think it's dumb" has saved me by covering my rear. It also provides an important check on an incompetent leader, but mostly I'd want it because then the nagging thought "this is a bad idea" is out of my head and I can forget about it for a while. It's sort of like how singing a song out loud sometimes stops it being stuck in your head.

3) "Internal economy trading effort for money and so on" Can I pay someone to do my lateness-apology push ups for me? That's a joking example, but given the likelihood of having large income discrepancies something of that nature may come up, and it might be worth having a framework for it. In the same ballpark, intense cooperation seems like it might be odd in non-DA associated things. Examples; what happens if one member applies for a job at a company another member works for? What happens if one member commits a crime and asks other members to be their alibi? I don't really expect either of those examples to actually come up, but they are examples where organizations structurally similar to what you're proposing can do very well for its members in ways that maybe aren't good for the surrounding social structures.

4) If I knew that this general sort of setup was working well for all concerned, I wouldn't consider it lasting indefinitely with the same leader to be a bad thing. That said, since you stated an intention to only lead it for about a year, 'temporary' leaders leading indefinitely is pretty strongly associated with this general sort of setup no longer working well for all concerned. If this started today, and you were still leading it in two years, I'd take that as evidence something has gone wrong. This gets lessened greatly if individual people are regularly rotating out of the group and all have wonderful praises for it.

All of the above is even more true for romantic/sexual relations between the leadership and the rank-and-file.

5) I'm strongly in favour of this being tried, and I'll be reading any updates with great interest. Good luck!
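
(A sketch of roughly the lightest-weight version of the recording norm in Screwtape's point 2; all names here are invented, and a shared spreadsheet would serve just as well. The point is that registering a dissent costs one line, and resolved predictions can be scored later:)

```python
import csv
import datetime

LOG_PATH = "house_predictions.csv"

def record_dissent(who, claim, probability):
    """Append one row: cheap enough not to interrupt the meeting."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), who, claim, probability]
        )

def brier_score(probability, outcome):
    """Score a resolved prediction (outcome is 1 if it came true, else 0).
    0.0 is perfect, 1.0 is maximally wrong."""
    return (probability - outcome) ** 2

record_dissent("some_dragon", "treehouse won't hold weight nailed like that", 0.70)
# Later, if the treehouse did in fact fail:
print(brier_score(0.70, 1))  # 0.09 -- a well-calibrated "I told you so"
```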

Comment author: Duncan_Sabien 30 May 2017 05:42:46PM 2 points [-]

Thanks for the detailed comment!

1) Yeah, I'm emphasizing the more authoritarian parts, because those are the more dangerous/aversive ones, but in fact Dragon Army is the source of the aesthetic. I agree with almost everything you said in 1), and that's what the house is supposed to be like. Don't forget, though, that while Ender distributed authority as broadly as possible, he was firmly, absolutely in command, in the end. When he spoke, they moved. The key thing was that a) he used that as rarely as possible and b) he didn't undercut his toon leaders when he exercised central authority.

2) Yeah, absolutely. We've already installed a norm of making direct, one-to-one bets, and are almost certainly going to install prediction markets and "I told you so" structures. In particular, I think the people originally opposed to a given failed experiment should be given greater weight in the next decision, if their predictions about that experiment came true. It's tough to balance this against "creating perverse incentives," but I think we can manage it.

3) Yes. It's tricky, because we have to work out rates-of-exchange between e.g. rich and poor participants, but an internal economy is something I hope to create with second-priority urgency (i.e. in the first couple of months).

4) I'm not committed to ceasing after a year, if all is going swimmingly, but essentially I want to open that question up to the group itself after six months.

5) Thanks!

Comment author: malcolmocean 28 May 2017 08:30:52PM *  4 points [-]

I want to publicly express my strong support for this experiment/meta-experiment.

I think that my support is particularly noteworthy as I'm presently a core member of a different taking-each-other-seriously co-living experiment that is profoundly different in its philosophy. (Mine is not in Berkeley, nor rationalist.) Therefore some people might assume that I would be opposed to Dragon Army Barracks.

Things in common between the experiment I'm part of and Dragon Army Barracks:

  • is "high-commitment, high-standards, high-investment"
  • is trying to actually make & achieve something together
  • is addressing the unanchored-abandoned-loneliness thing
  • has consciously explicated commitments and assumptions
  • is intended to produce a high-level of consistent excellence and ability to effectively collaborate

Things that are different:

  • We're very far from authoritarian or hierarchical. Although we're also not egalitarian, consensus-based, or even democratic per se... but we have essentially zero of telling-other-people-what-to-do
  • Our basic collective navigating framework is Kegan-5 / fluid mode / post-rational, rather than Kegan-4 / systematic mode / rational (good summary of this distinction)
  • Our focus is almost entirely on the meta-level of building the new cultural platform we're building. We don't have any expectations of each other on the levels of specific object-level projects or explicit behavioral norms (aside from ones necessary for the house's function)

I think that these differences are core to why I am part of this project that I'm part of, and why I consider it to be the most valuable investment I could be making with my time and energy. I am, therefore, non-Berkeley-residence aside, not going to be applying to DA. As I said above though, I strongly support Dragon Army Barracks as an experiment and potentially as an ongoing resource to individual and collective growth.

Reasons why I think that DA is a good idea:

  • Expected value of high amounts of worthwhile object-level output. As Sebastian Marshall says, "the gains made from living more purposefully are forever - the time you've spent well remains well-spent even if you fall off for a while sometimes. Most people don't even try, which is why most people don't succeed."
  • I expect it will also produce a lot of developmental progress for people involved; that if you were to be able to sort rationalists by amount of growth in a year, the Dragons would all be in the top quartile, and would occupy many of the top 10 slots. This, even if the experiment were to end after 6 months.
  • The DA Barracks is an intervention that is attempting to produce change on a very fundamental level of the system that is a group house. This is a powerful leverage point (see Donella Meadows' article... I would say this is around a 2 or 3, and most group houses have only done mild experiments at the 4-6 level.)
  • I agree with and/or resonate with the six points that Duncan makes in Section 2 of this document.
  • The project-level value of learning here is also very high: this will greatly inform future experiments, whatever their leadership basis.
  • If I had kids, I would absolutely sign them up for any summer camps or classes Duncan was running. I think the amount of power he would have in relation to them would be similar to the amount of power he'll have in this situation.

A final reason is this: I think that we as humanity need to rapidly make progress on being able to effectively coordinate in non-hierarchical ways, which is what the project I'm part of is about. Relatedly, humanity is kind of mediocre at doing this in many contexts. Therefore if non-hierarchical projects aren't emphatically directed towards solving that challenge itself, I expect them to be outperformed by projects that are leveraging existing understanding about how to coordinate effectively in hierarchical ways - i.e., in this case, Dragon Army Barracks.

Comment author: Qiaochu_Yuan 29 May 2017 12:06:18AM 3 points [-]

I really, really wish Kegan levels didn't come in an order, so a claim to be at a higher Kegan level than someone else didn't look so starkly like a claim to superiority. It's turning me off even trying to take them seriously, because everyone who uses them looks like they're just self-aggrandizing to me.

Comment author: malcolmocean 29 May 2017 09:27:28PM 4 points [-]

I'm totally with you in wishing that Kegan levels weren't getting socially entangled with claims to superiority!

...but that can't be achieved in the way you describe: they would be a fundamentally different thing if they didn't come in the order they do. It's not a personality typing system, it's a model of human development over time. Probably some people who are talking about them are self-aggrandizing; people are known to do that with just about everything they can get their hands on.

I suspect that your heuristics about not trusting people who brag about their Kegan levels are probably decently good heuristics, since bragging about one's level could reasonably be expected to be ineffective in just the way you're describing here.

I first learned about the CDT model from a conversation I had with someone who used to work with Kegan, and who readily noted that he was not himself consistently operating out of stage 5. Robert Kegan has said that about himself too, which I found surprising and originally interpreted as being a failure mode in the opposite direction—false humility or something. But now it strikes me as not that unlikely. There's a big difference between being able to recognize abstractly (or in others) what it means to be subject to one's own interpretations & ideologies, and being able to actually not do it.

There's an unfortunate phenomenon here, where the value of the concept gets diluted because the people who are finding the Kegan models helpful but aren't claiming to be at higher Kegan levels than others... are harder to notice.

Anyway, I realize that I may sound like I'm making a superiority claim here myself. I will address that directly, kind of like Duncan is doing re: skulls above.

My understanding—based more on reading things like this than Kegan's own work—is that the "fluid mode" (~=K-5) does have capabilities that the "systematic mode" (~=K-4) does not; much like multivariate calculus can be used to re-derive the equation for the volume of a sphere, but not the reverse. Is multivariate calculus superior to sphere equations? In functional senses yes, but not in a social status way. And also not in all domains! It's certainly slower if you just need to calculate the volumes of a bunch of spheres.
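
(For concreteness, the re-derivation gestured at here, written out in spherical coordinates:)

```latex
V \;=\; \int_0^{2\pi}\!\!\int_0^{\pi}\!\!\int_0^{R} \rho^{2}\sin\varphi \; d\rho\, d\varphi\, d\theta
  \;=\; 2\pi \cdot 2 \cdot \frac{R^{3}}{3}
  \;=\; \frac{4}{3}\pi R^{3}
```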

I've spent a considerable amount of time over the past year working to develop the ability to operate in the fluid mode, and I think that that makes a lot of sense for me and many other people, but I don't think that that's highest priority for everyone right now. Hence my strong support for Dragon Army.

Comment author: Duncan_Sabien 29 May 2017 09:32:42PM *  2 points [-]

I like the paragraph "my understanding" a lot. In particular, while I think I have some limited, flickering access to K5, I notice that operations which come out of being solidly K4 often cause me to outstrip/outperform people who are entirely in K5, which seems to me to be something analogous to "I'm successfully calculating the volumes of a bunch of spheres and you're just stuck there mired in re-derivation."

i.e. relative strengths in different domains.

Comment author: Dapple 27 May 2017 06:44:33PM 4 points [-]

I think it's a solid proposal.

One major caveat I think is that it's a structure that wouldn't work for most people in the rationality community. Calling most of them libertines incompatible with such a strict framework wouldn't be too far from the truth. But those are the views of a very distant outsider who doesn't know the deeper views/feelings of the Berkeleyans you refer to, and is only familiar at a superficial glance.

But for a niche group of strongly driven baby rationalists lacking for direction/purpose who aren't opposed to operating within a strict structure, I don't know how this wouldn't be an ideal framework to use.

As a former military enlisted, I think all the military comparisons made are valid. Allow me to include one more. I believe that, also like the military, there will be a high turnover rate - once people get what they want out of the community, they leave. As I alluded to earlier, the appeal of joining is acquiring skills in discipline/organization/direction. Once those are acquired, there is very little left to motivate people to stay. But, in both cases, this isn't really a bad thing either. If everyone leaves after the one year commitment, but they reflect on the experience positively, then it would still be considered a success.

Comment author: passinglunatic 27 May 2017 12:40:09AM 4 points [-]

  1. Sounds awful to me. I would absolutely hate to live somewhere where I was regularly told what to do and/or expected to fit in with rituals. I tolerate this kind of thing at work because I have to.

  2. What will you say when people come to you saying "I'm not sure this is really worth it for me"? I personally don't think self-improvement is a very stable overall goal. In my cursory acquaintance, most cults/high-demand living situations tend to believe in "something greater" - often something quite ridiculous, but nonetheless something bigger than the individual. Perhaps it is important to have something which seems to trump feelings of personal discomfort.

Comment author: Duncan_Sabien 27 May 2017 01:52:27AM 4 points [-]

Basically what I tell people (in answer to 2) is "ABSOLUTELY trust that instinct. This requires pretty high confidence that this is the right move, and DEFINITELY high confidence that if it's the wrong move you won't take significant damage over the six month period. If you're unsure, the answer should be 'no.'"

Comment author: ChristianKl 26 May 2017 10:33:57AM 4 points [-]

A "culture of abundance" in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible

That reminds me of an event during a retreat where a cake couldn't get baked, because the chocolate that had been brought to bake the cake was consumed beforehand. It was even baking-chocolate.

It seems like good cooking or baking leads to people buying specific ingredients and it's bad if they can't count on those ingredients not being consumed before the planned meal.

Comment author: Duncan_Sabien 26 May 2017 10:46:49AM 3 points [-]

Yeah, I think notes saying "do not eat" will suffice; the key is just to get people to use that coin only when it's for a specific plan.

Comment author: evand 26 May 2017 01:11:22PM 5 points [-]

You might also want a mechanism to handle "staples" that individuals want. I have a few foods / ingredients I like to keep on hand at all times, and be able to rely on having. I'd have no objections to other people eating them, but if they did I'd want them to take responsibility for never leaving the house in a state of "no X on hand".

Comment author: DaystarEld 26 May 2017 04:30:33AM *  4 points [-]

I wanted to comment in order to, at the very least, publicly say that I love pretty much everything about this, and am crossing both fingers for its resounding success, both for the good it will do and the lessons we can all learn from it even if its success ends up looking different from how it's currently envisioned.

The main point of potential limitation that you acknowledge is that the time investment and rigid scheduling leaves a lot of people out of luck: either because their job is not a standard 9-5 that would allow for predictable morning or evening availability, or due to people having their own projects to work on. This can be seen as a plus, of course, since the house is going to be committing to long term and serious group projects, which is more beneficial in both directions for those who aren't currently committed to other endeavors.

So I will be interested in seeing how things adapt if someone in the house, for example, levels up a bit, finds a new job that changes their ability to commit to house requirements, or discovers or commits to a desire to create, say, a long-running series of blog posts/videos/web serial/etc., but at the potential sacrifice of group projects, if not house-wide projects.

The ideal outcome in such a case might be, if possible, "wait until the current year is up then find a replacement and go do my own thing." Since people are likely to grow attached to the people and living situation of such a tight-knit "army," however, there's going to be some internal friction and conflict for many.

This ties into the larger idea, which I love, of "individual leveling up and sending superhero graduates out into the world to do lots and lots of exploring and tackle a wide number of strategies." I don't know how a "graduate" would be determined, other than potentially just by the person themselves.

Do you foresee yourself basically having a sitdown with one of your Dragons in a year or so and saying, "Hey X, I think you've grown a lot in your time with us, I'm so happy you were part of this, and I think we're hitting some pretty strong diminishing returns on what we can teach you going forward. Meanwhile I've got about a hundred people banging on the door to join us, but no room for them. You have three months to think seriously about what you're going to do next, and then we'll help you find a new place to live, preferably nearby if you want to stay close?"

Comment author: JasonGross 26 May 2017 04:18:12AM *  4 points [-]

Positive:

After reading the "What is Dragon Army [Barracks]?", my emotional response was "oooh, maybe I want to join!", whereas before, my emotional tone was "looks interesting and I want to see what happens"+"long-term social commitment tied to housing, ahhhhhhhh"

Less positive:

"its members are willing to hold doubt in reserve and act with full force in spite of reservations—if they're willing to trust me more than they trust their own sense of things (at least in the moment, pending later explanation and recalibration on my part or theirs or both)." Owwwww. This is not a thing I think I'm capable of, and this is not a thing I think I want to twist myself into being capable of. That said, there's an approximation to this (which might or might not be what you are actually pointing at), which I could easily see myself doing: it could become the case that my sense of things would frequently be "Duncan has more domain knowledge and better intuitions and a better sense of things than I do here", and I could possibly act with full force in spite of reservations when that is my sense of things (and am in fact doing something like that right now by posting here rather than on your FB wall), but, at least for me, trust is not yet a decision or a choice, but a thing that is built.

Relatedly, I think point 5 in the code of conduct is where I have the most internal pushback; committing to being unconditionally fully present and supportive, even if, e.g., I'm emotionally blown out or ~triggered, seems ... violating.

Comment author: Duncan_Sabien 26 May 2017 05:10:35AM *  2 points [-]

It's more the latter approximation, but it is nonzero the first thing, which is a skill I think extremely worth building for transfer to other arenas (e.g. "I have no reason to be greater than 1% confident that this strategy to ameliorate AI x-risk will work, but also it will only work if I try full-force for six months, and I don't have any better options...").

Note that rule 1 (protect yourself) supersedes rule 5 (maybe that wasn't clear) and there will be ways to regain face/the house is committed to not doing the Stupid Thing.

Comment author: CronoDAS 26 May 2017 12:06:47AM 4 points [-]

I am frequently only 95% reliable or less. This is likely a bad thing and has led me to compensate in what are probably a lot of bad ways. Among them are a general reluctance to make commitments and a fear of responsibility. Is this something fixable or something I should deal with and work around?

Comment author: Duncan_Sabien 26 May 2017 01:10:56AM 3 points [-]

I don't think reluctance to make commitments here is actually a bad patch—there's something really mature and refreshing about someone who says, "Look, I find myself needing to cancel a lot, so I don't want to PROMISE I'll be there, K?"

I think it's fixable, but it's also possible that it's not a super urgent priority/the most important thing for you. I'd evaluate THAT question first, and then if you decide it is important, take things slowly as you try to improve it. Expect to make mistakes, look for what actually works rather than what should work, etc.

Comment author: Valentine 25 May 2017 11:54:25PM *  4 points [-]

I really like this. I enjoy your aesthetic and ambition.

[…]But something magical does accrue when you make the jump from 99% to 100%[…]

There's something about this whole section that nags me. I really, really like the aesthetic… and yet… there's something about how it's phrased here that inspires a wish in me to argue with you about what you said.

I think what you're trying to get at here is how, when you convert a "shades of grey" perspective into a "No, this either hits these standards or it doesn't" kind of discrete clarity, it's possible to switch from approximation to precision. And when you chain together steps that each have to work, you can tell what the output is much more clearly if you're getting each step to give a binary "Yes, I'm working properly" or "Nope, not quite meeting the defined standard."

And I think you're using this to suggest that Dragon Army should be a system with discretely clear standards and with each component of the system (i.e., each person) either (a) definitely meeting that standard or (b) recognizing where they don't and then building up to that standard. This makes the whole system dependable in a way you just cannot do if there are no clear discrete standards or if the system is lax about some component not meeting the standards (e.g., giving someone a pass for merely "trying").

I think this is what you mean when you say, "[…]the `absolute' part is important." The word "absolute" is standing in for having these standards and endeavoring for literally 100% of the (finitely many, discrete) components of the system 100% meeting those standards.

Confirm/deny/improve?

All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members.

Yep, I agree. Free markets are a terrible strategy for opposing Moloch.

It's worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like "does this guy actually think things through," "is this guy likely to be stupid or meta-stupid," "will this guy listen/react/update/pivot in response to evidence or consensus opposition," and "when this guy has intuitions that he can't explain, do they tend to be validated in the end?"

I just want to add public corroboration on this point. Yes, Duncan encouraged along these lines. My own answers round to "is good" in each case. I'm really just flat-out not worried about him leading Dragon Army.

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked. In short, if someone's building a coercive trap, it's everyone's problem.

I really like that you point out things like this.

Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to[…]

I like the list, overall. I can give you a more detailed commentary in person rather than digging in here. Let me know if you'd prefer it done in public here. (Just trying not to overly tax public attention with personal impressions.)

[…]we are trying not to fall prey to GOODHART'S DEMON.

Heh. That reference made me laugh. :-) I like that as a focus, as will surprise you not at all.

Comment author: Duncan_Sabien 26 May 2017 05:25:12AM *  2 points [-]

Confirm. More particularly, I'm pointing at something like "being able to rely on a plot that requires ten things to go right" (e.g. a CFAR workshop).
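
(A quick illustration of why the gap between 95% and 99.99% matters for a plot that "requires ten things to go right", assuming the steps fail independently:)

```python
# Chance that a plot with ten independent steps succeeds end-to-end,
# for several per-step reliabilities:
for p in (0.80, 0.95, 0.99, 0.9999):
    print(p, round(p ** 10, 3))

# 0.8    -> 0.107  (fails most of the time)
# 0.95   -> 0.599  (a coin flip)
# 0.99   -> 0.904
# 0.9999 -> 0.999  (only here is the chain itself reliable)
```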

Feel free to add any number of additional thoughts and personal impressions—I like the idea of being able to say "But I did due diligence! We argued everything right out in the open fora!"

Comment author: Raemon 25 May 2017 11:46:50PM 4 points [-]

I'm in the "would probably like to be in round 2 of the experiment" camp (I think there's probably a frustratingly large number of people in that camp and a frustratingly small number in the "let's do this!" camp). I hope there's enough of the latter for this to work.

My main question (which you may or may not yet know the answer to) is "if you don't have a Duncan, what version of this experiment would you recommend running?"

Comment author: Duncan_Sabien 28 May 2017 05:22:40AM 4 points [-]

Get your house together and start a norm of "ironclad try-things experiments" lasting no less than two weeks and no more than four (with overlap or not being up to you). So, you have a regular house meeting where you all say, "What thing do we want to try, and top-down make ourselves keep trying past the first possible warning signs, because we suspect there's value on the far side of the valley?"

And then you run something like eight full experiments before you abandon the meta-norm.

Comment author: Dagon 25 May 2017 09:29:47PM 4 points [-]

I applaud the experiment, and the writeup! Do you have a place where you'll publish metrics (people contacted, interest level, etc. before starting, and self-reported or objective measures of your stated objectives every week)?

Comment author: artemium 01 June 2017 06:16:55AM *  3 points [-]

This is a good idea that should definitely be tested. I completely agree with Duncan that modern society, and especially our community, is intrinsically allergic to authoritarian structure, despite strong historical proof that this kind of organisation can be quite effective.

I would consider joining in myself, but given my location that isn't an option.

I do think that in order to build a successful organisation based on authority, the key factors are the personal qualities and charisma of the leader; rules play a smaller part.

As long as the project is based on voluntary participation, I don't see why anyone should find it controversial. Wish you all the best.

Comment author: ChristianKl 26 May 2017 10:27:55AM 3 points [-]

If, of course, the expectation is that everybody shows up on Tuesday and Thursday evenings, and the cost of not doing so is not being present in the house, suddenly the situation becomes simple and workable.

Does this means that a person who's ill and needs to be a week in the hospital will get kicked out? What about a person who's absent for a funeral of a relative? Business trips?

Comment author: Duncan_Sabien 26 May 2017 10:49:48AM 2 points [-]

The number of excuses for not being present is basically the most restrictive list you'd expect—if you're literally not in town, if you're sick, if you're attending to a personal tragedy. The idea is not to make the house anyone's first priority, it's to make it something like everyone's third priority (but actually above all but a couple of things).

So, no missing exercise because of a party, no missing it because you kinda need to work late, etc. Maybe missing for a once-in-a-year opportunity like a talk or a concert that you've been looking forward to for ages, with specific recompense to your housemates for the cost imposed by your absence? But in short, it's the thing that other stuff has to work around, not vice-versa.

Comment author: Decius 27 May 2017 01:05:01AM 3 points [-]

Losing one's job to avoid missing a house meeting (needed to work late) is the kind of bad priority that should be addressed.

Perhaps some kind of explicit measure where housemates judge and excuse or not each case on a case-by-case basis, including a measure to request leave in advance as well as in arrears?

Comment author: Duncan_Sabien 27 May 2017 01:48:47AM 2 points [-]

Sorry, I should've been more clear. "Kinda" was the important operational word there, and you're correct to point out that the priorities could easily be construed as clearly bad.

I think your latter norm is basically what's going to happen. The key thing I want to avoid is the slippery slope whereby there's no clear line of "this counts as a defection." I think needing to work late is 100% acceptable. What I was pointing at was something like, "I could wrap this up by coming in early tomorrow, or I could defect on the standing group exercise appointment ..."

I want to thank you for the number of concrete, clear criticisms you're making, and the manner in which you're making them. I like your style.

Comment author: Decius 28 May 2017 07:58:18AM 2 points [-]

A defection would be any case in which a member did not arrive on time or participate fully. Period.

I'm suggesting that there be a formal process by which a member arrives late, performs ten pushups, and joins the event in progress. At the conclusion of the event, he says, "My Uber driver was involved in a minor collision on my way here, and that delayed me too long to arrive on time," and (by secret ballot?) the Army votes, with some adequate margin of them excusing the failure.

The other aspect I suggested is that a Dragon might say "[event] is next week and I would like to attend but it conflicts with exercise. May I be excused from exercise for [event]?". Again, the Army would vote and decide if the absence is excused.
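
To make the mechanics concrete, here's a minimal sketch of how such an excusal ballot might work, in Python. It's purely illustrative; the two-thirds margin and every name in it are my assumptions, not anything Decius or Duncan specified:

```python
def is_excused(ballots: list[bool], margin: float = 2 / 3) -> bool:
    """Secret-ballot excusal: each True is a vote to excuse the
    absence; it passes only if the yes-fraction meets the margin."""
    if not ballots:
        return False  # no votes cast, no excuse
    return sum(ballots) / len(ballots) >= margin

# A late Dragon gives the Uber-collision explanation; nine housemates vote.
votes = [True] * 7 + [False] * 2
print(is_excused(votes))  # True: 7/9 ≈ 0.78 clears the assumed 2/3 margin
```

The open design choice is the margin itself: a simple majority makes excusal cheap, while requiring near-unanimity pushes the norm back toward "any absence is a defection."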

I'm at a loss as to what to do to sanction a member who is not excused. The military has a long list of 'corrective actions' and 'punishments' that they can apply only because they don't constitute 'kidnapping' or other crimes. I guess you could possibly make those '[task] or removal from the Army', but that runs straight into the eviction problem. I think it's absolutely critical that there's a credible threat underlying the discipline, precisely so that it is less likely to be needed, and the only one I find plausible is ejection, which becomes complicated because of housing law and morality.

Comment author: gwillen 26 May 2017 07:11:16PM 2 points [-]

Ok, this sounds quite a bit less authoritarian than I was picturing, and I basically did expect that you were planning to require this to be essentially everyone's first priority, maybe tied with paid employment at best, but even then requiring that paid employment take specific forms not conflicting with the experiment. (I had definitely framed it this way in my head when I was asking my other question in this thread.) I don't know if I'm the only one.

Comment author: Duncan_Sabien 27 May 2017 02:01:48AM *  2 points [-]

This is part of why I'm glad the conversation is unfolding as it is—probably not a lot of people will read literally every comment, but for anyone who's confused, we have a clear record of where I was wrong and changed my mind, or where I was unclear and people raised confusions.

I think DA should be third or fourth, with obvious things that might come ahead of it being work, family, pre-existing strong friendships, romance, and lifelong core passions.

Comment author: sohois 01 June 2017 11:27:05AM 2 points [-]

I've got a seemingly obvious flaw to point out; in fact, it appears so obvious to me that I would be surprised if it hadn't been addressed in the original post or one of the subsequent comments and I simply skipped over it. Nonetheless, it may be of use.

I feel that the whole experiment is rather undermined by selection bias. I think it's a fair assumption that you would want this method tried elsewhere were the experiment successful: you would want "Dragon Houses" to pop up anywhere there is a sufficient rationalist community. However, it would be an error to think that the Dragon House model would work elsewhere once you aren't picking out the people most suitable for living there and have to expand to more general types. Again, I feel it is a fair assumption that some people will simply be a lot more suited to authoritarian communities, such as the Army, than others. If you can pre-approve for authoritarian types, and eject those who don't fit as you identify them, then it seems far more likely that the community will survive, but it could still be inferior to another model that does not select so heavily.

Is this merely a proof of concept? I.e., will you run the Dragon House for a short period of time under perfect conditions, to ensure that it is not a complete disaster and does not result in the 'toxic cult' dangers that others have outlined, before expanding the experiment? In that case the selection bias would be removed in the second run, and you could ascertain the general effectiveness of the model.

Apologies for the poor construction of the above; I struggled somewhat to put it into words, but I hope you can comprehend regardless.

Comment author: handoflixue 31 May 2017 08:06:40PM 2 points [-]

Concerns about you specifically as a leader

1) This seems like an endeavor that has a number of very obvious failure modes. Like, the intentional community community apparently bans this sort of thing, because it tends to end badly. I am at a complete loss to name anything that really comes close, and hasn't failed badly. Do you acknowledge that you are clearly treading in dangerous waters?

2) While you've said "we've noticed the skulls", there have been at least 3 failure modes raised in the comments which you had to append to the post to address (outsider safety check-ins, an abort/exit strategy, and the issue of romantic entanglement). Given that we've already found 3 skulls you didn't notice, don't you think you should take some time to reconsider the chances that you've missed further skulls?

Comment author: handoflixue 31 May 2017 07:39:13PM 2 points [-]

Genuine Safety Concerns

I'm going to use "you have failed" here as a stand-in for all of "you're power hungry / abusive", "you're incompetent / overconfident", and simply "this person feels deeply misled." If you object to that term, feel free to suggest a different one, and then read the post as though I had used that term instead.

1) What is your exit strategy if a single individual feels you have failed? (note that asking such a person to find a replacement roommate is clearly not viable - no decent, moral person should be pushing someone into that environment)

2) What is your exit strategy if a significant minority of participants feels you have failed? (i.e. enough to make the rent hit significant for you, but not enough to outvote you)

3) What is your exit strategy if a majority of participants feel you have failed? (I realize you addressed this one somewhere in the nest, but the original post doesn't mention it, and says that you're the top of the pack and the exception to an otherwise flat power structure, so it's unclear if a simple majority vote actually overrules you)

4) What legal commitments are participants making? How do those commitments change if they decide you have failed? (i.e. are you okay with 25% of participants all dropping out of the program, but still living in the house? Under what conditions can you evict participants from their housing?)

5) What if someone wants to drop out, but can't afford the cost of finding new housing?

6) It sounds like you're doing this with a fairly local group, most of whom know each other. Since a large chunk of the community will be tied up in this, are you worried about peer pressure? What are you doing to address this? (i.e. if someone leaves the experiment, they're also not going to see much of their friends, who are still tied up spending 20+ hours a week on this)

Questions I think you're more likely to object to

(Please disregard if you consider these disrespectful, but I think they are valid and legitimate questions to ask of someone who is planning to assume not just leadership, but a very Authoritarian leadership role)

7) You seem to encounter significant distress in the face of people who are harshly critical of you. How do you think you'll handle it if a participant freaks out and feels like they are trapped in an abusive situation?

8) In this thread, you've often placed your self-image and standards of respect/discourse as significantly more important than discussion of safety issues. Can you offer some reassurances that safety is, in fact, a higher priority than appearances?

Comment author: 18239018038528017428 31 May 2017 04:26:23AM 2 points [-]

I find it funny that Duncan is branding everyone who persistently disagrees with him as a "troll" or arguing in "bad faith" and so on and so forth.

It seems that he's unfamiliar with the reaction of disgust being directed toward him.

Before anything else, the original post is disgusting.

I suggest that Duncan should kill himself not because I believe that telling people to kill themselves is an "instrumentally rational" argumentative position, but rather because I'm disgusted by his continued existence. I'm asserting that if I could reshape the world at will, he would not be part of my world. The fact that he persists in existing is an affront to my sense of what's right.

This is likely the case for many of the other critics in this thread.

Nobody actually believes that Duncan is some sort of "evil" or "power-hungry" like a Disney cartoon antagonist.

Some people do believe that Duncan is fucked up in the head and is externalizing his personal issues, which involves some (sub)conscious drive toward power and the formation of a cult-like organization most attractive to people suffering from the same mental health problems.

Some people do believe that "Dragon Army" will be deeply harmful to the participants on a deep-seated, instinctual level. Duncan's attempts to persuade these people that "Dragon Army" is fine by writing even more bullshit in the style of his original post belies a deep misunderstanding of what exactly is wrong here.

Don't shit on my doorstep and try to convince me you've given me a wonderful present by, e.g., appealing to the complex aroma of your turd, or making passionate appeals to its shape, or pushing out another one for comparison. The correct action is to remove the turd and never interact with me again.

tl;dr: It doesn't matter how well Duncan "replies" to criticisms because there is no world in which there exists a person who would (1) write the original post and (2) is also a person who is capable of fixing the problems with his proposal. There is no real point in discussion because this is just a non-starter; by virtue of what it is, it cannot be fixed. You can't beautify a turd, nor are there ways in which someone can non-offensively shit on your doorstep.

Comment author: drethelin 31 May 2017 05:59:20AM 9 points [-]

THIS IS WHY WE NEED DOWNVOTES.

Comment author: metatroll 31 May 2017 06:52:14AM 2 points [-]

Comment author: username2 02 June 2017 03:50:28PM 2 points [-]

Thank you metatroll; you're our only hope.

Comment author: MaryCh 01 June 2017 02:36:46PM 6 points [-]

Why do people insist on "criticizing people" instead of "criticizing proposals"? This is like the fundamental attribution error, only intentional.

Comment author: cousin_it 31 May 2017 09:20:44AM *  6 points [-]

As someone who has also criticized Duncan strongly, I don't think telling him to kill himself ended up helping your goals, whatever they are. There's no point trying to influence people and doing it poorly.

Comment author: Jacobian 30 May 2017 07:14:53PM 2 points [-]

We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.

The last time I lived in an actual barracks, we did exactly that and it worked out great. In brief, all chores were assigned values in a currency and auctioned out to members. Members with less currency had priority in taking over tasks. If no one volunteered, the person with the least currency had to do it. Eventually, these points were used for some bartering of mutual services.
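
For what it's worth, the mechanism is simple enough to pin down in a few lines. Here's a minimal sketch in Python; the names, point values, and exact tie-breaking rule are my guesses at the details, not a transcription of the actual barracks system:

```python
from dataclasses import dataclass, field

@dataclass
class ChoreAuction:
    """Sketch of the barracks chore economy: completing a chore
    earns its point value, members with less currency get priority,
    and an unclaimed chore falls to whoever has the least currency."""
    balances: dict[str, int] = field(default_factory=dict)

    def assign(self, chore: str, value: int, volunteers: list[str]) -> str:
        # Priority goes to the volunteer with the lowest balance;
        # if nobody volunteers, the poorest member is conscripted.
        pool = volunteers if volunteers else list(self.balances)
        worker = min(pool, key=lambda member: self.balances[member])
        self.balances[worker] += value  # doing the chore earns its value
        return worker

# Usage: three housemates; one contested chore, one nobody wants.
house = ChoreAuction({"alice": 10, "bob": 4, "carol": 7})
print(house.assign("dishes", 3, ["alice", "carol"]))  # carol (lower balance)
print(house.assign("trash", 2, []))                   # bob (least currency)
```

The nice property is that one rule covers both cases: being "poor" in chore currency means you both get first pick of available work and are first in line for conscription, so balances tend to equalize over time.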

There's more detail and more bad jokes in the blog post I wrote about it.

Comment author: Sinal 27 May 2017 05:24:08AM 2 points [-]

This post makes me miss my days in marching band, or in the Boy Scouts. Honestly it doesn't sound all that authoritarian. Can you not accomplish the same thing using a traditional organization and a meeting place? Why does it have to be a house?

Comment author: John_Maxwell_IV 27 May 2017 07:27:00AM *  8 points [-]

This post makes me miss my days in marching band, or in the Boy Scouts. Honestly it doesn't sound all that authoritarian.

I agree with the sentiment. It seems that most things in modern culture, like marching band or Boy Scouts, which demand commitment and/or group cohesion are at least a few decades old. I suspect this is because we have developed cultural antibodies against the creation of new things like this (as evidenced by some of the comments in this thread).

When Tocqueville visited the United States in the 1830s, it was the Americans' propensity for civic association that most impressed him as the key to their unprecedented ability to make democracy work. "Americans of all ages, all stations in life, and all types of disposition," he observed, "are forever forming associations. There are not only commercial and industrial associations in which all take part, but others of a thousand different types--religious, moral, serious, futile, very general and very limited, immensely large and very minute... Nothing, in my view, deserves more attention than the intellectual and moral associations in America."

Source: http://xroads.virginia.edu/~HYPER/DETOC/assoc/bowling.html (This quote is part of a longer essay about declining social capital in the US)

If we've lost cultural memories about how to create new associations, early attempts to get "association culture" going again may fail. But they seem like very worthwhile experiments. (I suppose if people dislike learning things the hard way, they might be able to read about the early history of successful associations and glean some best practices?)

Comment author: John_Maxwell_IV 27 May 2017 11:09:43PM 9 points [-]

Exploring this intuition more deeply: Successful communities are known to do community stuff like sing together and have secret handshakes. If a person in a fledgling community proposes doing something like this for the first time, people in our culture are apt to shut them down by saying it's weird or (if done for the explicit purpose of community-building) inauthentic. The skeptics miss the fact that the weirdness is a feature, not a bug. Doing weird stuff with other people builds deep friendships, in the same way sharing private thoughts and fears builds deep friendships. Then somewhere along the line the weird stuff starts to become a tradition, and the fact that it's a tradition builds group cohesion in a different way.

Hypothesis for why the antibodies exist: People noticed that there were standard methods for creating in-group identification, and these methods were exploited by con artists, advertisers, managers trying to get their employees to work harder, teachers trying to get their students to behave, etc. Antibodies formed in response.

Comment author: Kaj_Sotala 28 May 2017 04:11:22PM 6 points [-]

Hypothesis for why the antibodies exist: People noticed that there were standard methods for creating in-group identification, and these methods were exploited by con artists, advertisers, managers trying to get their employees to work harder, teachers trying to get their students to behave, etc. Antibodies formed in response.

Given that the standard response to "weird" groups that demand cohesion/commitment seems to be "that sounds like a cult", it feels like these antibodies could have developed after the cult scares, which Wikipedia tells me showed up seriously in the 1970s.

Comment author: Duncan_Sabien 28 May 2017 05:25:43AM 2 points [-]

I like both of these posts a lot. Thanks for adding them—they helped me make explicit something implicit that I felt very strongly.

Comment author: Qiaochu_Yuan 27 May 2017 09:00:16AM 5 points [-]

Can you not accomplish the same thing using a traditional organization and a meeting place? Why does it have to be a house?

A couple of reasons occur to me. First, everyone's real goddamn busy. If you already live in a rationalist house and also have a job there's not gonna be a ton of time or attention left in your life for other stuff as big as what Duncan wants Dragon Army to be. Second, Duncan wants people to do things like exercise with each other first thing in the morning before heading off to work, and it seems really annoyingly difficult to coordinate something like this with anyone other than the people you live with.

In general it's just way, way easier to coordinate all sorts of activities with the people you live with than with anybody else. My most direct experience with this was living in a fraternity and seeing the difference between the brothers who did and didn't live in the house; there was a big difference in terms of social accessibility and bonding, and accordingly we strongly encouraged people to live in the house when at all possible.

Comment author: oge 26 May 2017 03:06:04PM 2 points [-]

Hey Duncan, where can I sign up for this?

Comment author: ChristianKl 26 May 2017 10:55:40AM 2 points [-]

At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)

How much time investment do you think it takes to pick up one of those skills?

Comment author: ChristianKl 26 May 2017 10:51:10AM 2 points [-]

A Dragon will take responsibility for its actions, emotional responses, and the consequences thereof, e.g. if late will not blame bad luck/circumstance, if angry or triggered will not blame the other party. [...] a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons

This sounds like you need an agreement about what it means to blame someone and what it means to vent at another person. Circling norms would suggest that it's good if a person expresses their emotions.

One solution would be that if a person goes into venting mode, the other person responds by going into Circling/NVC or Focusing queries, and the venting person is responsible for answering the queries.

Comment author: GuySrinivasan 26 May 2017 02:31:49AM 2 points [-]

Love it. Reminds me of my strong preference for making rules for myself and following them, even when it seems locally silly, over trying to continually make good decisions in the moment.