
Comment author: Dagon 25 May 2017 09:29:47PM 0 points [-]

I applaud the experiment, and the writeup! Do you have a place where you'll publish metrics (people contacted, interest level, etc. before starting, and self-reported or objective measures of your stated objectives every week)?

Comment author: Duncan_Sabien 25 May 2017 09:38:55PM 0 points [-]

That's not been formally set, but yes—that's the biggest ask we get from interested outsiders, and it's clearly one of the "obvious things" that we ought to do, so it's been part of the plan for a while now. We just have to hammer out the details once the group is set.

Depending on interest, we may publish those updates here on LW, or make them available through my blog or FB, or some other option we haven't thought of yet.

Dragon Army: Theory & Charter (30min read)

Duncan_Sabien 25 May 2017 09:07PM

Author's note: This IS a rationality post (specifically, theorizing on group rationality and autocracy/authoritarianism), but the content is quite cunningly disguised beneath a lot of meandering about the surface details of a group house charter.  If you're not at least hypothetically interested in reading about the workings of an unusual group house full of rationalists in Berkeley, you can stop here.  


Section 0 of 3: Preamble

Purpose of post:  Threefold.  First, a lot of rationalists live in group houses, and I believe I have some interesting models and perspectives, and I want to make my thinking available to anyone else who's interested in skimming through it for Things To Steal.  Second, since my initial proposal to found a house, I've noticed a significant amount of well-meaning pushback and concern à la have you noticed the skulls? and it's entirely unfair for me to expect that to stop unless I make my skull-noticing evident.  Third, some nonzero number of humans are gonna need to sign the final version of this charter if the house is to come into existence, and it has to be viewable somewhere.  I figured the best place was somewhere that impartial clear thinkers could weigh in (flattery).

What is Dragon Army [Barracks]?  It's a high-commitment, high-standards, high-investment group house model with centralized leadership and an up-or-out participation norm, designed to a) improve its members and b) actually accomplish medium-to-large scale tasks requiring long-term coordination.  Tongue-in-cheek referred to as the "fascist/authoritarian take on rationalist housing," which has no doubt contributed to my being vulnerable to strawmanning but was nevertheless the correct joke to be making, lest people misunderstand what they were signing up for.  Aesthetically modeled after Dragon Army from Ender's Game (not HPMOR), with a touch of Paper Street Soap Company thrown in, with Duncan Sabien in the role of Ender/Tyler and Eli Tyre in the role of Bean/The Narrator.

Why?  Current group housing/attempts at group rationality and community-supported leveling up seem to me to be falling short in a number of ways.  First, there's not enough stuff actually happening in them (i.e. to the extent people are growing and improving and accomplishing ambitious projects, it's largely within their professional orgs or fueled by unusually agenty individuals, and not by leveraging the low-hanging fruit available in our house environments).  Second, even the group houses seem to be plagued by the same sense of unanchored abandoned loneliness that's hitting the rationalist community specifically and the millennial generation more generally.  There are a bunch of competitors for "third," but for now we can leave it at that.

"You are who you practice being."


Section 1 of 3: Underlying models

The following will be meandering and long-winded; apologies in advance.  In short, both the house's proposed aesthetic and the impulse to found it in the first place were not well-reasoned from first principles—rather, they emerged from a set of System 1 intuitions which have proven sound/trustworthy in multiple arenas and which are based on experience in a variety of domains.  This section is an attempt to unpack and explain those intuitions post-hoc, by holding plausible explanations up against felt senses and checking to see what resonates.

Problem 1: Pendulums

This one's first because it informs and underlies a lot of my other assumptions.  Essentially, the claim here is that most social progress can be modeled as a pendulum oscillating decreasingly far from an ideal.  The society is "stuck" at one point, realizes that there's something wrong about that point (e.g. that maybe we shouldn't be forcing people to live out their entire lives in marriages that they entered into with imperfect information when they were like sixteen), and then moves to correct that specific problem, often breaking some other Chesterton's fence in the process.


For example, my experience leads me to put a lot of confidence behind the claim that we've traded "a lot of people trapped in marriages that are net bad for them" for "a lot of people who never reap the benefits of what would've been a strongly net-positive marriage, because it ended too easily too early on."  The latter problem is clearly smaller, and is probably a better problem to have as an individual, but it's nevertheless clear (to me, anyway) that the loosening of the absoluteness of marriage had negative effects in addition to its positive ones.

Proposed solution: Rather than choosing between absolutes, integrate.  For example, I have two close colleagues/allies who share millennials' default skepticism of lifelong marriage, but they also are skeptical that a commitment-free lifestyle is costlessly good.  So they've decided to do handfasting, in which they're fully committed for a year and a day at a time, and there's a known period of time for asking the question "should we stick together for another round?"

In this way, I posit, you can get the strengths of the old socially evolved norm which stood the test of time, while also avoiding the majority of its known failure modes.  Sort of like building a gate into the Chesterton's fence, instead of knocking it down—do the old thing in time-boxed iterations with regular strategic check-ins, rather than assuming you can invent a new thing from whole cloth.

Caveat/skull: Of course, the assumption here is that the Old Way Of Doing Things is not a slippery slope trap, and that you can in fact avoid the failure modes simply by trying.  And there are plenty of examples of that not working, which is why Taking Time-Boxed Experiments And Strategic Check-Ins Seriously is a must.  In particular, when attempting to strike such a balance, all parties must have common knowledge agreement about which side of the ideal to err toward (e.g. innocents in prison, or guilty parties walking free?).

 

Problem 2: The Unpleasant Valley

As far as I can tell, it's pretty uncontroversial to claim that humans are systems with a lot of inertia.  Status quo bias is well researched, past behavior is the best predictor of future behavior, most people fail at resolutions, etc.

I have some unqualified speculation regarding what's going on under the hood.  For one, I suspect that you'll often find humans behaving pretty much as an effort- and energy-conserving algorithm would behave.  People have optimized their most known and familiar processes at least somewhat, which means that it requires less oomph to just keep doing what you're doing than to cobble together a new system.  For another, I think hyperbolic discounting gets way too little credit/attention, and is a major factor in knocking people off the wagon when they're trying to forgo local behaviors that are known to be intrinsically rewarding in favor of behaviors that add up to long-term cumulative gain.
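To make the hyperbolic-discounting point concrete, here is a toy sketch using the standard hyperbolic form (value / (1 + k × delay)); the rewards, delays, and k below are made-up numbers chosen only for illustration:

```python
def hyperbolic_value(reward, delay, k=1.0):
    """Perceived value of a reward received after `delay` time units."""
    return reward / (1 + k * delay)

def compare(now):
    # A small, soon reward (e.g. skipping tonight's workout) vs. a larger
    # reward ten units further out (e.g. the payoff of sticking with it).
    small_soon = hyperbolic_value(10, now)
    large_later = hyperbolic_value(30, now + 10)
    return small_soon, large_later

print(compare(0))    # (10.0, ~2.7): up close, the local temptation dominates
print(compare(20))   # (~0.48, ~0.97): from a distance, the long-term gain wins
```

The reversal is the point: viewed from far away the long-term gain dominates, but once the local temptation is imminent it wins, which is exactly where people fall off the wagon.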

But in short, I think the picture of "I'm going to try something new, eh?" often looks like this:


... with an "unpleasant valley" some time after the start point.  Think about the cold feet you get after the "honeymoon period" has worn off, or the desires and opinions of a military recruit in the second week of a six-week boot camp, or the frustration that emerges two months into a new diet/exercise regime, or your second year of being forced to take piano lessons.

The problem is, people never make it to the third year, where they're actually good at piano, and start reaping the benefits, and their System 1 updates to yeah, okay, this is in fact worth it.  Or rather, they sometimes make it, if there are strong supportive structures to get them across the unpleasant valley (e.g. in a military bootcamp, they just ... make you keep going).  But left to our own devices, we'll often get halfway through an experiment and just ... stop, without ever finding out what the far side is actually like.

Proposed solution: Make experiments "unquittable."  The idea here is that (ideally) one would not enter into a new experiment unless a) one were highly confident that one could absorb the costs, if things go badly, and b) one were reasonably confident that there was an Actually Good Thing waiting at the finish line.  If (big if) we take those as a given, then it should be safe to, in essence, "lock oneself in," via any number of commitment mechanisms.  Or, to put it in other words: "Medium-Term Future Me is going to lose perspective and want to give up because of being unable to see past short-term unpleasantness to the juicy, long-term goal?  Fine, then—Medium-Term Future Me doesn't get a vote."  Instead, Post-Experiment Future Me gets the vote, including getting to update heuristics on which-kinds-of-experiments-are-worth-entering.

Caveat/skull: People who are bad at self-modeling end up foolishly locking themselves into things that are higher-cost or lower-EV than they thought, and getting burned; black swans and tail risk end up making even good bets turn out very very badly; we really should've built in an ejector seat.  This risk can be mostly ameliorated by starting small and giving people a chance to calibrate—you don't make white belts try to punch through concrete blocks, you make them punch soft, pillowy targets first.

And, of course, you do build in an ejector seat.  See next.

 

Problem 3: Saving Face

If any of you have been to a martial arts academy in the United States, you're probably familiar with the norm whereby a tardy student purchases entry into the class by first doing some pushups.  The standard explanation here is that the student is doing the pushups not as a punishment, but rather as a sign of respect for the instructor, the other students, and the academy as a whole.

I posit that what's actually going on includes that, but is somewhat more subtle/complex.  I think the real benefit of the pushup system is that it closes the loop.  

Imagine you're a ten-year-old kid, and your parent picked you up late from school, and you're stuck in traffic on your way to the dojo.  You're sitting there, jittering, wondering whether you're going to get yelled at, wondering whether the master or the other students will think you're lazy, imagining stuttering as you try to explain that it wasn't your fault—

Nope, none of that.  Because it's already clearly established that if you fail to show up on time, you do some pushups, and then it's over.  Done.  Finished.  Like somebody sneezed and somebody else said "bless you," and now we can all move on with our lives.  Doing the pushups creates common knowledge around the questions "does this person know what they did wrong?" and "do we still have faith in their core character?"  You take your lumps, everyone sees you taking your lumps, and there's no dangling suspicion that you were just being lazy, or that other people are secretly judging you.  You've paid the price in public, and everyone knows it, and this is a good thing.

Proposed solution: This is a solution without a concrete problem, since I haven't yet actually outlined the specific commitments a Dragon has to make (regarding things like showing up on time, participating in group activities, and making personal progress).  But in essence, the solution is this: you have to build into your system from the beginning a set of ways-to-regain-face.  Ways to hit the ejector seat on an experiment that's going screwy without losing all social standing; ways to absorb the occasional misstep or failure-to-adequately-plan; ways to be less-than-perfect and still maintain the integrity of a system that's geared toward focusing everyone on perfection.  In short, people have to know (and others have to know that they know, and they have to know that others know that they know) exactly how to make amends to the social fabric, in cases where things go awry, so that there's no question about whether they're trying to make amends, or whether that attempt is sufficient.  


Caveat/skull: The obvious problem is people attempting to game the system—they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100.  The next obvious problem is that the price is set too low for the group, leaving them to still feel jilted or wronged, and the next obvious problem is that the price is set too high for the individual, leaving them to feel unfairly judged or punished (the fun part is when both of those are true at the same time).  Lastly, there's something in the mix about arbitrariness—what do pushups have to do with lateness, really?  I mean, I get that it's paying some kind of unpleasant cost, but ...


Problem 4: Defections & Compounded Interest

I'm pretty sure everyone's tired of hearing about one-boxing and iterated prisoners' dilemmas, so I'm going to move through this one fairly quickly even though it could be its own whole multipage post.  In essence, the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system.  Another way to put this is that people underestimate by a couple of orders of magnitude the corrosive impact of their defections—we often convince ourselves that 90% or 99% is good enough, when in fact what's needed is something like 99.99%.

There's something good that happens if you put a little bit of money away with every paycheck, and it vanishes or is severely curtailed once you stop, or start skipping a month here and there.  Similarly, there's something good that happens when a group of people agree to meet in the same place at the same time without fail, and it vanishes or is severely curtailed once one person skips twice.

In my work at the Center for Applied Rationality, I frequently tell my colleagues and volunteers "if you're 95% reliable, that means I can't rely on you."  That's because I'm in a context where "rely" means really trust that it'll get done.  No, really.  No, I don't care what comes up, DID YOU DO THE THING?  And if the answer is "Yeah, 19 times out of 20," then I can't give that person tasks ever again, because we run more than 20 workshops and I can't have one of them catastrophically fail.

(I mean, I could.  It probably wouldn't be the end of the world.  But that's exactly the point—I'm trying to create a pocket universe in which certain things, like "the CFAR workshop will go well," are absolutely reliable, and the "absolute" part is important.)

As far as I can tell, it's hyperbolic discounting all over again—the person who wants to skip out on the meetup sees all of these immediate, local costs to attending, and all of these visceral, large gains to defection, and their S1 doesn't properly weight the impact to those distant, cumulative effects (just like the person who's going to end up with no retirement savings because they wanted those new shoes this month instead of next month).  1.01^n takes a long time to look like it's going anywhere, and in the meantime the quick one-time payoff of 1.1 that you get by knocking everything else down to .99^n looks juicy and delicious and seems justified.
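A quick back-of-the-envelope sketch of the two bits of arithmetic above (the numbers are mine, chosen only to show the shape of the effect, not drawn from CFAR data):

```python
# 95% per-task reliability, compounded across 20+ workshops:
print(0.95 ** 20)        # ~0.36: roughly one chance in three that nothing fails

# Small compounding gains vs. a one-time windfall bought with small decay:
n = 100
print(1.01 ** n)         # ~2.70: getting 1% better at every step
print(1.1 * 0.99 ** n)   # ~0.40: one 10% payoff, then 1% worse at every step
```

Note that the compounding line overtakes the windfall after only about five steps, even though it looks like it's going nowhere at first.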

But something magical does accrue when you make the jump from 99% to 100%.  That's when you see teams that truly trust and rely on one another, or marriages built on unshakeable faith (and you see what those teams and partnerships can build, when they can adopt time horizons of years or decades rather than desperately hoping nobody will bail after the third meeting).  It starts with a common knowledge understanding that yes, this is the priority, even—no, wait, especially—when it seems like there are seductively convincing arguments for it to not be.  When you know—not hope, but know—that you will make a local sacrifice for the long-term good, and you know that they will, too, and you all know that you all know this, both about yourselves and about each other.

Proposed solution: Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings (and, correspondingly, be more careful and more conservative about which undertakings you officially take on, versus which things you're just casually trying out as an informal experiment), with said norm to be modified/iterated only during predecided strategic check-in points and not on the fly, in the middle of things.  Build a habit of clearly distinguishing targets you're going to hit from targets you'd be happy to hit.  Agree upon and uphold surprisingly high costs for defection, Hofstadter style, recognizing that a cost that feels high enough probably isn't.  Leave people wiggle room as in Problem 3, but define that wiggle room extremely concretely and objectively, so that it's clear in advance when a line is about to be crossed.  Be ridiculously nitpicky and anal about supporting standards that don't seem worth supporting, in the moment, if they're in arenas that you've previously assessed as susceptible to compounding.  Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly.

Caveat/skull: Obviously, because we're humans, even people who reflectively endorse such an overall solution will chafe when it comes time for them to pay the price (I certainly know I've chafed under standards I fought to install).  At that point, things will seem arbitrary and overly constraining, priorities will seem misaligned (and might actually be), and then feelings will be hurt and accusations will be leveled and things will be rough.  The solution there is to have, already in place, strong and open channels of communication, strong norms and scaffolds for emotional support, strong default assumption of trust and good intent on all sides, etc. etc.  This goes wrongest when things fester and people feel they can't speak up; it goes much better if people have channels to lodge their complaints and reservations and are actively incentivized to do so (and can do so without being accused of defecting on the norm-in-question; criticism =/= attack).

 

Problem 5: Everything else

There are other models and problems in the mix—for instance, I have a model surrounding buy-in and commitment that deals with an escalating cycle of asks-and-rewards, or a model of how to effectively leverage a group around you to accomplish ambitious tasks that requires you to first lay down some "topsoil" of simple/trivial/arbitrary activities that starts the growth of an ecology of affordances, or a theory that the strategy of trying things and doing things outstrips the strategy of think-until-you-identify-worthwhile-action, and that rationalists in particular are crippling themselves through decision paralysis/letting the perfect be the enemy of the good when just doing vaguely interesting projects would ultimately gain them more skill and get them further ahead, or a strong sense based off both research and personal experience that physical proximity matters, and that you can't build the correct kind of strength and flexibility and trust into your relationships without actually spending significant amounts of time with one another in meatspace on a regular basis, regardless of whether that makes tactical sense given your object-level projects and goals.

But I'm going to hold off on going into those in detail until people insist on hearing about them or ask questions/pose hesitations that could be answered by them.


Section 2 of 3: Power dynamics

All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members.  It was also meant to justify, at least indirectly, why a strong guiding hand might be necessary given that our community's evolved norms haven't really produced results (in the group houses) commensurate with the promises of EA and rationality.

Ultimately, though, what matters is not the problems and solutions themselves so much as the light they shine on my aesthetics (since, in the actual house, it's those aesthetics that will be used to resolve epistemic gridlock).  In other words, it's not so much those arguments as it is the fact that Duncan finds those arguments compelling.  It's worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like "does this guy actually think things through," "is this guy likely to be stupid or meta-stupid," "will this guy listen/react/update/pivot in response to evidence or consensus opposition," and "when this guy has intuitions that he can't explain, do they tend to be validated in the end?"

In other words, it's fair to view this whole post as an attempt to prove general trustworthiness (in both domain expertise and overall sanity), because—well—that's what it is.  In milieus like the military, authority figures expect (and get) obedience irrespective of whether or not they've earned their underlings' trust; rationalists tend to have a much higher bar before they're willing to subordinate their decisionmaking processes, yet still that's something this sort of model requires of its members (at least from time to time, in some domains, in a preliminary "try things with benefit of the doubt" sort of way).  I posit that Dragon Army Barracks works (where "works" means "is good and produces both individual and collective results that outstrip other group houses by at least a factor of three") if and only if its members are willing to hold doubt in reserve and act with full force in spite of reservations—if they're willing to trust me more than they trust their own sense of things (at least in the moment, pending later explanation and recalibration on my part or theirs or both).

And since that's a) the central difference between DA and all the other group houses, which are collections of non-subordinate equals, and b) quite the ask, especially in a rationalist community, it's entirely appropriate that it be given the greatest scrutiny.  Likely participants in the final house spent ~64 consecutive hours in my company a couple of weekends ago, specifically to play around with living under my thumb and see whether it's actually a good place to be; they had all of the concerns one would expect and (I hope) had most of those concerns answered to their satisfaction.  The rest of you will have to make do with grilling me in the comments here.

 

"Why was Tyler Durden building an army?  To what purpose?  For what greater good? ...in Tyler we trusted."

 

Power and authority are generally anti-epistemic—for every instance of those-in-power defending themselves against the barbarians at the gates or anti-vaxxers or the rise of Donald Trump, there are a dozen instances of them squashing truth, undermining progress that would make them irrelevant, and aggressively promoting the status quo.

Thus, every attempt by an individual to gather power about themselves is at least suspect, given regular ol' incentive structures and regular ol' fallible humans.  I can (and do) claim to be after a saved world and a bunch of people becoming more the-best-versions-of-themselves-according-to-themselves, but I acknowledge that's exactly the same claim an egomaniac would make, and I acknowledge that the link between "Duncan makes all his housemates wake up together and do pushups" and "the world is incrementally less likely to end in gray goo and agony" is not obvious.

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked.  In short, if someone's building a coercive trap, it's everyone's problem.

 

"Over and over he thought of the things he did and said in his first practice with his new army. Why couldn't he talk like he always did in his evening practice group? No authority except excellence. Never had to give orders, just made suggestions. But that wouldn't work, not with an army. His informal practice group didn't have to learn to do things together. They didn't have to develop a group feeling; they never had to learn how to hold together and trust each other in battle. They didn't have to respond instantly to command.

And he could go to the other extreme, too. He could be as lax and incompetent as Rose the Nose, if he wanted. He could make stupid mistakes no matter what he did. He had to have discipline, and that meant demanding—and getting—quick, decisive obedience. He had to have a well-trained army, and that meant drilling the soldiers over and over again, long after they thought they had mastered a technique, until it was so natural to them that they didn't have to think about it anymore."

 

But on the flip side, we don't have time to waste.  There's existential risk, for one, and even if you don't buy x-risk à la AI or bioterrorism or global warming, people's available hours are trickling away at the alarming rate of one hour per hour, and none of us are moving fast enough to get All The Things done before we die.  I personally feel that I am operating far below my healthy sustainable maximum capacity, and I'm not alone in that, and something like Dragon Army could help.

So.  Claims, as clearly as I can state them, in answer to the question "why should a bunch of people sacrifice non-trivial amounts of their autonomy to Duncan?"

1. Somebody ought to run this, and no one else will.  On the meta level, this experiment needs to be run—we have like twenty or thirty instances of the laissez-faire model, and none of the high-standards/hardcore one, and also not very many impressive results coming out of our houses.  Due diligence demands investigation of the opposite hypothesis.  On the object level, it seems uncontroversial to me that there are goods waiting on the other side of the unpleasant valley—goods that a team of leveled-up, coordinated individuals with bonds of mutual trust can seize that the rest of us can't even conceive of, at this point, because we don't have a deep grasp of what new affordances appear once you get there.

2. I'm the least unqualified person around.  Those words are chosen deliberately, for this post on "less wrong."  I have a unique combination of expertise that includes being a rationalist, sixth grade teacher, coach, RA/head of a dormitory, ringleader of a pack of hooligans, member of two honor code committees, curriculum director, obsessive sci-fi/fantasy nerd, writer, builder, martial artist, parkour guru, maker, and generalist.  If anybody's intuitions and S1 models are likely to be capable of distinguishing the uncanny valley from the real deal, I posit mine are.

3. There's never been a safer context for this sort of experiment.  It's 2017, we live in the United States, and all of the people involved are rationalists.  We all know about NVC and double crux, we're all going to do Circling, we all know about Gendlin's Focusing, and we've all read the Sequences (or will soon).  If ever there was a time to say "let's all step out onto the slippery slope, I think we can keep our balance," it's now—there's no group of people better equipped to stop this from going sideways.

4. It does actually require a tyrant. As a part of a debrief during the weekend experiment/dry run, we went around the circle and people talked about concerns/dealbreakers/things they don't want to give up.  One interesting thing that popped up is that, according to consensus, it was literally impossible to find a time of day when the whole group could get together to exercise.  This happened even with each individual being willing to make personal sacrifices and do somewhat costly things.

If, of course, the expectation is that everybody shows up on Tuesday and Thursday evenings, and the cost of not doing so is not being present in the house, suddenly the situation becomes simple and workable.  And yes, this means some kids left behind (ctrl+f), but the whole point of this is to be instrumentally exclusive and consensually high-commitment.  You just need someone to make the actual final call—there are too many threads for the coordination problem of a house of this kind to be solved by committee, and too many circumstances in which it's impossible to make a principled, justifiable decision between 492 almost-indistinguishably-good options.  On top of that, there's a need for there to be some kind of consistent, neutral force that sets course, imposes consistency, resolves disputes/breaks deadlock, and absorbs all of the blame for the fact that it's unpleasant to be forced to do things you know you ought to but don't want to do.

And lastly, we (by which I indicate the people most likely to end up participating) want the house to do stuff—to actually take on projects of ambitious scope, things that require ten or more talented people reliably coordinating for months at a time.  That sort of coordination requires a quarterback on the field, even if the strategizing in the locker room is egalitarian.

5. There isn't really a status quo for power to abusively maintain.  Dragon Army Barracks is not an object-level experiment in making the best house; it's a meta-level experiment attempting (through iteration rather than armchair theorizing) to answer the question "how best does one structure a house environment for growth, self-actualization, productivity, and social synergy?"  It's taken as a given that we'll get things wrong on the first and second and third try; the whole point is to shift from one experiment to the next, gradually accumulating proven-useful norms via consensus mechanisms, and the centralized power is mostly there just to keep the transitions smooth and seamless.  More importantly, the fundamental conceit of the model is "Duncan sees a better way, which might take some time to settle into," but after e.g. six months, if the thing is not clearly positive and at least well on its way to being self-sustaining, everyone ought to abandon it anyway.  In short, my tyranny, if net bad, has a natural time limit, because people aren't going to wait around forever for their results.

6. The experiment has protections built in.  Transparency, operationalization, and informed consent are the name of the game; communication and flexibility are how the machine is maintained.  Like the Constitution, Dragon Army's charter and organization are meant to be "living documents" that constrain change only insofar as they impose reasonable limitations on how wantonly change can be enacted.


Section 3 of 3: Dragon Army Charter (DRAFT)

Statement of purpose:

Dragon Army Barracks is a group housing and intentional community project which exists to support its members socially, emotionally, intellectually, and materially as they endeavor to improve themselves, complete worthwhile projects, and develop new and useful culture, in that order.  In addition to the usual housing commitments (i.e. rent, utilities, shared expenses), its members will make limited and specific commitments of time, attention, and effort averaging roughly 90 hours a month (~3hr/day plus occasional weekend activities).

Dragon Army Barracks will have an egalitarian, flat power structure, with the exception of a commander (Duncan Sabien) and a first officer (Eli Tyre).  The commander's role is to create structure by which the agreed-upon norms and standards of the group shall be discussed, decided, and enforced, to manage entry to and exit from the group, and to break epistemic gridlock/make decisions when speed or simplification is required.  The first officer's role is to manage and moderate the process of building consensus around the standards of the Army—what they are, and in what priority they should be met, and with what consequences for failure.  Other "management" positions may come into existence in limited domains (e.g. if a project arises, it may have a leader, and that leader will often not be Duncan or Eli), and will have their scope and powers defined at the point of creation/ratification.

Initial areas of exploration:

The particular object level foci of Dragon Army Barracks will change over time as its members experiment and iterate, but at first it will prioritize the following:

  • Physical proximity (exercising together, preparing and eating meals together, sharing a house and common space)
  • Regular activities for bonding and emotional support (Circling, pair debugging, weekly retrospective, tutoring/study hall)
  • Regular activities for growth and development (talk night, tutoring/study hall, bringing in experts, cross-pollination)
  • Intentional culture (experiments around lexicon, communication, conflict resolution, bets & calibration, personal motivation, distribution of resources & responsibilities, food acquisition & preparation, etc.)
  • Projects with "shippable" products (e.g. talks, blog posts, apps, events; some solo, some partner, some small group, some whole group; ranging from short-term to year-long)
  • Regular (every 6-10 weeks) retreats to learn a skill, partake in an adventure or challenge, or simply change perspective

Dragon Army Barracks will begin with a move-in weekend that will include ~10 hours of group bonding, discussion, and norm-setting.  After that, it will enter an eight-week bootcamp phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/wk, e.g. Tue/Fri/Sun)
  • Whole group dinner and retrospective (120min, 1x/wk, e.g. Tue evening)
  • Small group baseline skill acquisition/study hall/cross-pollination (90min, 1x/wk)
  • Small group circle-shaped discussion (120min, 1x/wk)
  • Pair debugging or rapport building (45min, 2x/wk)
  • One-on-one check-in with commander (20min, 2x/wk)
  • Chore/house responsibilities (90min distributed)
  • Publishable/shippable solo small-scale project work with weekly public update (100min distributed)

... for a total time commitment of 16h/week or 128 hours total, followed by a whole group retreat and reorientation.  The house will then enter an eight-week trial phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/wk)
  • Whole group dinner, retrospective, and plotting (150min, 1x/wk)
  • Small group circling and/or pair debugging (120min distributed)
  • Publishable/shippable small group medium-scale project work with weekly public update (180min distributed)
  • One-on-one check-in with commander (20min, 1x/wk)
  • Chore/house responsibilities (60min distributed)
... for a total time commitment of 13h/week or 104 hours total, again followed by a whole group retreat and reorientation.  The house will then enter a third phase where commitments will likely change, but will include at a minimum whole group exercise, whole group dinner, and some specific small-group responsibilities, either social/emotional or project/productive (once again ending with a whole group retreat).  At some point between the second and third phase, the house will also ramp up for its first large-scale project, which is yet to be determined but will be roughly on the scale of putting on a CFAR workshop in terms of time and complexity.
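As a sanity check on the two schedules above, here is a quick tally of the listed activities (my own arithmetic; the stated 16h and 13h figures appear to be rounded):

```python
bootcamp = {
    "whole group exercise": 90 * 3,
    "dinner + retrospective": 120,
    "skill acquisition / study hall": 90,
    "circle-shaped discussion": 120,
    "pair debugging / rapport": 45 * 2,
    "commander check-ins": 20 * 2,
    "chores": 90,
    "solo project work": 100,
}
trial = {
    "whole group exercise": 90 * 3,
    "dinner + retro + plotting": 150,
    "circling / pair debugging": 120,
    "small group project work": 180,
    "commander check-in": 20,
    "chores": 60,
}

for name, phase in [("bootcamp", bootcamp), ("trial", trial)]:
    hours = sum(phase.values()) / 60
    print(f"{name}: {hours:.1f} h/week, {hours * 8:.0f} h over eight weeks")

# bootcamp: 15.3 h/week, 123 h over eight weeks (stated as 16h/week, 128 total)
# trial:    13.3 h/week, 107 h over eight weeks (stated as 13h/week, 104 total)
```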

Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to:
  • Above-average physical capacity
  • Above-average introspection
  • Above-average planning & execution skill
  • Above-average communication/facilitation skill
  • Above-average calibration/debiasing/rationality knowledge
  • Above-average scientific lab skill/ability to theorize and rigorously investigate claims
  • Average problem-solving/debugging skill
  • Average public speaking skill
  • Average leadership/coordination skill
  • Average teaching and tutoring skill
  • Fundamentals of first aid & survival
  • Fundamentals of financial management
  • At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill)
  • At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)
Furthermore, every Dragon should have participated in:
  • At least six personal growth projects involving the development of new skill (or honing of prior skill)
  • At least three partner- or small-group projects that could not have been completed alone
  • At least one large-scale, whole-army project that either a) had a reasonable chance of impacting the world's most important problems, or b) caused significant personal growth and improvement
  • Daily contributions to evolved house culture
Speaking of evolved house culture...

Because of both a) the expected value of social exploration and b) the cumulative positive effects of being in a group that's trying things regularly and taking experiments seriously, Dragon Army will endeavor to adopt no fewer than one new experimental norm per week.  Each new experimental norm should have an intended goal or result, an informal theoretical backing, and a set re-evaluation time (default three weeks).  There are two routes by which a new experimental norm is put into place:

  • The experiment is proposed by a member, discussed in a whole group setting, and meets the minimum bar for adoption (>60% of the Army supports, with <20% opposed and no hard vetoes; see the sketch just after this list)
  • The Army has proposed no new experiments in the previous week, and the Commander proposes three options.  The group may then choose one by vote/consensus, or generate three new options, from which the Commander may choose.
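For concreteness, here is a minimal sketch of the first route's adoption bar; expressing it as a function (and the function's name) is my own framing, not part of the charter:

```python
def norm_adopted(support_fraction, opposed_fraction, hard_vetoes):
    """Check a proposed experimental norm against the stated minimum bar:
    >60% of the Army in support, <20% opposed, and no hard vetoes."""
    return (
        support_fraction > 0.60
        and opposed_fraction < 0.20
        and hard_vetoes == 0
    )

# e.g. 7 of 10 Dragons in favor, 1 opposed, nobody vetoing:
print(norm_adopted(0.7, 0.1, hard_vetoes=0))  # True
```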
Examples of some of the early norms which the house is likely to try out from day one (hit the ground running):
  • The use of a specific gesture to greet fellow Dragons (house salute)
  • Various call-and-response patterns surrounding house norms (e.g. "What's rule number one?" "PROTECT YOURSELF!")
  • Practice using hook, line, and sinker in social situations (three items other than your name for introductions)
  • The anti-Singer rule for open calls-for-help (if Dragon A says "hey, can anyone help me with X?" the responsibility falls on the physically closest housemate to either help or say "Not me/can't do it!" at which point the buck passes to the next physically closest person)
  • An "interrupt" call that any Dragon may use to pause an ongoing interaction for fifteen seconds
  • A "culture of abundance" in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible
  • A "graffiti board" upon which the Army keeps a running informal record of its mood and thoughts

Dragon Army Code of Conduct
While the norms and standards of Dragon Army will be mutable by design, the following (once revised and ratified) will be the immutable code of conduct for the first eight weeks, and is unlikely to change much after that.

  1. A Dragon will protect itself, i.e. will not submit to pressure causing it to do things that are dangerous or unhealthy, nor wait around passively when in need of help or support (note that this may cause a Dragon to leave the experiment!).
  2. A Dragon will take responsibility for its actions, emotional responses, and the consequences thereof, e.g. if late will not blame bad luck/circumstance, if angry or triggered will not blame the other party.
  3. A Dragon will assume good faith in all interactions with other Dragons and with house norms and activities, i.e. will not engage in strawmanning or the horns effect.
  4. A Dragon will be candid and proactive, e.g. will not form negative models of other Dragons without giving those Dragons a chance to hear about and interact with them, nor sit on an emotional or interpersonal problem until it festers into something worse.
  5. A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts, i.e. will not engage in silent defection, undermining, halfheartedness, aloofness, subtle sabotage, or other actions which follow the letter of the law while violating the spirit.  Another way to state this is that a Dragon will practice compartmentalization—will be able to simultaneously hold "I'm deeply skeptical about this" alongside "but I'm actually giving it an honest try," and postpone critique/complaint/suggestion until predetermined checkpoints.  Yet another way to state this is that a Dragon will take experiments seriously, including epistemic humility and actually seeing things through to their ends rather than fiddling midway.
  6. A Dragon will take the outside view seriously, maintain epistemic humility, and make subject-object shifts, i.e. will act as a behaviorist and agree to judge and be judged on the basis of actions and revealed preferences rather than intentions, hypotheses, and assumptions (this one's similar to #2 and hard to put into words, but for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior).  Another way to state this is that a Dragon will embrace the maxim "don't believe everything that you think."
  7. A Dragon will strive for excellence in all things, modified only by a) prioritization and b) doing what is necessary to protect itself/maximize total growth and output on long time scales.
  8. A Dragon will not defect on other Dragons.
There will be various operationalizations of the above commitments into specific norms (e.g. a Dragon will read all messages and emails within 24 hours, and if a full response is not possible within that window, will send a short response indicating when the longer response may be expected) that will occur once the specific members of the Army have been selected and have individually signed on.  Disputes over violations of the code of conduct, or confusions about its operationalization, will first be addressed one-on-one or in informal small group, and will then move to general discussion, and then to the first officer, and then to the commander.


Random Logistics
  1. The initial filter for attendance will include a one-on-one interview with the commander (Duncan), who will be looking for a) credible intention to put forth effort toward the goal of having a positive impact on the world, b) likeliness of a strong fit with the structure of the house and the other participants, and c) reliability à la financial stability and ability to commit fully to long-term endeavors.  Final decisions will be made by the commander and may be informally questioned/appealed but not overruled by another power.
  2. Once a final list of participants is created, all participants will sign a "free state" contract of the form "I agree to move into a house within five miles of downtown Berkeley (for length of time X with financial obligation Y) sometime in the window of July 1st through September 30th, conditional on at least seven other people signing this same agreement."  At that point, the search for a suitable house will begin, possibly with delegation to participants.
  3. Rents in that area tend to run ~$1100 per room, on average, plus utilities, plus a 10% contribution to the general house fund.  Thus, someone hoping for a single should, in the 85th percentile worst case, be prepared to make a ~$1400/month commitment.  Similarly, someone hoping for a double should be prepared for ~$700/month, and someone hoping for a triple should be prepared for ~$500/month, and someone hoping for a quad should be prepared for ~$350/month (a rough per-person breakdown is sketched just after this list).
  4. The initial phase of the experiment is a six month commitment, but leases are generally one year.  Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected.  After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of "keep paying until you've found your replacement."  (This will likely be easiest to enforce by simply having as many names as possible on the actual lease.)
  5. Of the ~90hr/month, it is assumed that ~30 are whole-group, ~30 are small group or pair work, and ~30 are independent or voluntarily-paired work.  Furthermore, it is assumed that the commander maintains sole authority over ~15 of those hours (i.e. can require that they be spent in a specific way consistent with the aesthetic above, even in the face of skepticism or opposition).
  6. We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.
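Following up on item 3 above, here is a rough per-person sketch of the rent shares (my own arithmetic; utilities are deliberately left out, and item 3's stated figures additionally fold in utilities and, for the single, the 85th-percentile cushion):

```python
ROOM_RENT = 1100        # approximate per-room rent from item 3
HOUSE_FUND_RATE = 0.10  # 10% contribution to the general house fund

for occupants, label in [(1, "single"), (2, "double"), (3, "triple"), (4, "quad")]:
    share = ROOM_RENT / occupants * (1 + HOUSE_FUND_RATE)
    print(f"{label}: ~${share:,.0f}/month before utilities")

# single: ~$1,210   double: ~$605   triple: ~$403   quad: ~$302
```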

Conclusion: Obviously this is neither complete nor perfect.  What's wrong, what's missing, what do you think?  I'm going to much more strongly weight the opinions of Berkeleyans who are likely to participate, but I'm genuinely interested in hearing from everyone, particularly those who notice red flags (the goal is not to do anything stupid or meta-stupid).  Have fun tearing it up.

(sorry for the abrupt cutoff, but this was meant to be published Monday and I've just ... not ... been ... sleeping ... to get it done)
Comment author: mingyuan 02 February 2017 02:02:29AM *  5 points [-]

Anecdotal data time! We tried this at last week’s Chicago rationality meetup, with moderate success. Here’s a rundown of how we approached the activity, and some difficulties and confusion we encountered.

Approach:

Before the meeting, some of us came up with lists of possibly contentious topics and/or strongly held opinions, and we used those as starting points by just listing them off to the group and seeing if anyone held the opposite view. Some of the assertions on which we disagreed were:

  • Cryonic preservation should be standard medical procedure upon death, on an opt-out basis
  • For the average person, reading the news has no practical value beyond social signalling
  • Public schools should focus on providing some minimum quality of education to all students before allocating resources to programs for gifted students
  • The rationality movement focuses too much of its energy on AI safety
  • We should expend more effort to make rationality more accessible to ‘normal people’

We paired off, with each pair in front of a blackboard, and spent about 15 minutes on our first double crux, after the resolution of which the conversations mostly devolved. We then came together, gave feedback, switched partners, and tried again.

Difficulties/confusion:

  • For the purposes of practice, we had trouble finding points of genuine disagreement – in some cases we found that the argument dissolved after we clarified minor semantic points in the assertion, and in other cases a pair would just sit there and agree on assertion after assertion (though the latter is more a flaw in the way I designed the activity than in the actual technique). However, we all agree that this technique will be useful when we encounter disagreements in future meetings, and even in the absence of disagreement, the activity of finding cruxes was a useful way of examining the structure of our beliefs.

  • We were a little confused as to whether coming up with an empirical test to resolve the issue was a satisfactory endpoint, or if we actually needed to seek out the results in order to consider the disagreement resolved.

  • In one case, when we were debating the cryonics assertion, my interlocutor managed to convince me of all the factual questions on which I thought my disagreement rested, but I still had some lingering doubt – even though I was convinced of the conclusion on an intellectual level, I didn't grok it. When we learned goal factoring, we were taught not to dismiss fuzzy, difficult-to-define feelings; that they could be genuinely important reasons for our thoughts and behavior. Given its reliance on empiricism, how does Double Crux deal with these feelings, if at all? (Disclaimer: it's been two years since we learned goal factoring, so maybe we were taught how to deal with this and I just forgot.)

  • In another case, my interlocutor changed his mind on the question of public schools, but when asked to explain the line of argument that led him to change his mind, he wasn’t able to construct an argument that sounded convincing to him. I’m not sure what happened here, but in the future I would place more emphasis on writing down the key points of the discussion as it unfolds. We did make some use of the blackboards, but it wasn’t very systematic.

  • Overall it wasn’t as structured as I expected it to be. People didn’t reference the write-up when immersed in their discussions, and didn’t make use of any of the tips you gave. I know you said we shouldn’t be preoccupied with executing “the ideal double crux,” but I somehow still have the feeling that we didn’t quite do it right. For example, I don’t think we focused enough on falsifiability and we didn’t resonate after reaching our conclusions, which seem like key points. But ultimately the model was still useful, no matter how loosely we adhered to it.

I hope some of that was helpful to you! Also, tell Eli Tyre we miss him!

Comment author: Duncan_Sabien 02 February 2017 04:28:01AM 1 point [-]

Very useful. I don't have the time to give you the detailed response you deserve, but I deeply appreciate the data (and Eli says hi).

Comment author: Robin 10 December 2016 04:44:35AM 0 points [-]

I'm not sure what you mean and I'm not sure that I'd let a LWer falsify my hypothesis. There are clear systemic biases LWers have which are relatively apparent to outsiders. Ultimately I am not willing to pay CFAR to validate my claims and there are biases which emerge from people who are involved in CFAR whether as employees or people who take the courses (sunk cost as well as others).

Comment author: Duncan_Sabien 30 December 2016 05:12:50AM 1 point [-]

I can imagine that you might have hesitated to list specifics to avoid controversy or mud-slinging, but I personally would appreciate concrete examples, as it's basically my job to find the holes you're talking about and try to start patching them.

Comment author: CCC 07 December 2016 02:09:57PM *  2 points [-]

"Uniforms are good because they'll reduce bullying." (A because B, B --> A) "Uniforms are bad, because along with all their costs they fail to reduce bullying." (~A because ~B, ~B --> ~A)

A: "Uniforms are good"

B: "Uniforms reduce bullying"

B->A: "If uniforms reduce bullying, then uniforms are good."

~B->~A : "If uniforms do not reduce bullying, then uniforms are not good."

"A is equivalent to B": "The statement 'uniforms are good' is exactly as true as the statement 'uniforms reduce bullying'."

A->B: "If uniforms are good, then it is possible to deduce that uniforms reduce bullying."

...does that help?

Comment author: Duncan_Sabien 09 December 2016 11:17:20PM *  1 point [-]

Yep. Thanks. =)

I was misunderstanding "equivalency" as "identical in all respects to," rather than seeing equivalency as "exactly as true as."

Comment author: Robin 09 December 2016 07:30:27PM 0 points [-]

I'd take your bet if it were for the general population, not LWers...

My issue with CFAR is it seems to be more focused on teaching a subset of people (LWers or people nearby in mindspace) how to communicate with each other than in teaching them how to communicate with people they are different from.

Comment author: Duncan_Sabien 09 December 2016 11:11:01PM 0 points [-]

That's an entirely defensible impression, but it's also actually false in practice (demonstrably so when you see us at workshops or larger events). Correcting the impression (which again you're justified in having) is a separate issue, but I consider the core complaint to be long-since solved.

Comment author: rational_rob 07 December 2016 12:36:55PM 1 point [-]

I always thought of school uniforms as being a logical extension of the pseudo-fascist/nationalist model of running schools. (I mean this in the pre-World War descriptive sense rather than the rhetorical sense that arose after the wars.) Lots of schools, at least in America, try to encourage a policy of school unity with things like well-funded sports teams and school pep rallies. I don't know how well these policies work in practice, but if they're willing to go as far as they have now, school uniforms might contribute to whatever effects they hope to achieve. My personal opinion is in favor of school uniforms, but I'm mostly certain that's because I'm not too concerned with fashion or displays of wealth. I'd have to quiz some other people to find out for sure.

Comment author: Duncan_Sabien 07 December 2016 10:37:24PM 0 points [-]

I should note that my own personal opinions on school uniforms are NOT able-to-be-determined from this article.

Comment author: MrMind 06 December 2016 08:10:42AM *  1 point [-]

They are NOT logically equivalent.

Ah, I think I've understood where the problem lies.
See, we both agree that B --> A and ~B --> ~A. This second statement, as we know from logic, is equivalent to A --> B. So we both agree that B --> A and A --> B, which yields that A is equivalent to B, or in symbols A <--> B.
This is what I was referring to: the crux being equivalent to the original statement, not that B --> A is logically equivalent to ~B --> ~A.
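A quick brute-force check of that claim (my own illustration, not from either commenter):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Verify that (B --> A) together with (~B --> ~A) holds under exactly the
# same truth assignments as (A <--> B).
for a, b in product([False, True], repeat=2):
    double_crux = implies(b, a) and implies(not b, not a)
    equivalence = (a == b)
    assert double_crux == equivalence

print("(B --> A) and (~B --> ~A) is equivalent to A <--> B")
```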

Comment author: Duncan_Sabien 07 December 2016 12:56:16AM 0 points [-]

I'm probably rustier on my formal logic than you. But I think what's going on here is fuzz around the boundaries where reality gets boiled down and translated to symbolic logic.

"Uniforms are good because they'll reduce bullying." (A because B, B --> A) "Uniforms are bad, because along with all their costs they fail to reduce bullying." (~A because ~B, ~B --> ~A)

Whether this is a minor abuse of language or a minor abuse of logic, I think it's a mistake to go from that to "Uniforms are equivalent to bullying reduction" or "Bullying reductions result in uniforms."  I thought that was what you were claiming, and it seems nonsensical to me. I note that I'm confused, and therefore that this is probably not what you were implying, and I've made some silly mistake, but that leaves me little closer to understanding what you were saying.

Comment author: MrMind 05 December 2016 08:11:36AM 0 points [-]

I hear what you're saying, which is what I hinted at with point #2, but "if B then A" is explicitly written in the post: last paragraph in the "How to play" section.
It seems to me you're arguing against the original poster about what "being crucial" means logically, and although I do not agree with the conclusion you reach, I do agree that the formulation is wrong.

Comment author: Duncan_Sabien 05 December 2016 07:30:37PM 0 points [-]

I'm quite confident that my formulation isn't wrong, and that we're talking past each other (specifically, that you're missing something important that I'm apparently not saying well).

What was explicitly written in the post was "If B then A. Furthermore, if not B then not A." Those are two different statements, and you need both of them. The former is an expression of the belief structure of the person on the left. The latter is an expression of the belief structure of the person on the right. They are NOT logically equivalent. They are BOTH required for a "double crux," because the whole point is for the two people to converge—to zero in on the places where they are not in disagreement, or where one can persuade the other of a causal model.

It's a crux that cuts both ways—B's true state implies A's trueness, but B's false state is not irrelevant in the usual way. Speaking strictly logically, if all we know is that B implies A, not-B doesn't have any impact at all on A. But when we're searching for a double crux, we're searching for something where not-B does have causal impact on A—something where not-B implies not-A. That's a meaningfully different and valuable situation, and finding it (particularly, assuming that it exists and can be found, and then going looking for it) is the key ingredient in this particular method of transforming argument into collaborative truth-seeking.

Comment author: Rubix 02 December 2016 01:21:23AM 0 points [-]

For the author and the audience: what are your favourite patience- and sanity-inducing rituals?

Comment author: Duncan_Sabien 05 December 2016 07:22:47PM 0 points [-]

For me, sanity always starts with remembering times that I was wrong—confidently wrong, arrogantly wrong, embarrassingly wrong. I have a handful of dissimilar situations cached in my head as memories (there's one story about popsicles, one story about thinking a fellow classmate was unintelligent, one story about judging a student's performance during a tryout, one story about default trusting someone with some sensitive information), and I can lean on all of those to remind myself not to be overconfident, not to be dismissive, not to trust too hard in my feeling of rightness.

As for patience, I think the key thing is a focus on the value of the actual truth. If I really care about finding the right answer, it's easy to be patient, and if I don't, it's a good sign that I should disengage once I start getting bored or frustrated.
