Dragon Army: Theory & Charter (30min read)

25 Duncan_Sabien 25 May 2017 09:07PM

Author's note: This IS a rationality post (specifically, theorizing on group rationality and autocracy/authoritarianism), but the content is quite cunningly disguised beneath a lot of meandering about the surface details of a group house charter.  If you're not at least hypothetically interested in reading about the workings of an unusual group house full of rationalists in Berkeley, you can stop here.  


Section 0 of 3: Preamble

Purpose of post:  Threefold.  First, a lot of rationalists live in group houses, and I believe I have some interesting models and perspectives, and I want to make my thinking available to anyone else who's interested in skimming through it for Things To Steal.  Second, since my initial proposal to found a house, I've noticed a significant amount of well-meaning pushback and concern à la have you noticed the skulls? and it's entirely unfair for me to expect that to stop unless I make my skull-noticing evident.  Third, some nonzero number of humans are gonna need to sign the final version of this charter if the house is to come into existence, and it has to be viewable somewhere.  I figured the best place was somewhere that impartial clear thinkers could weigh in (flattery).

What is Dragon Army [Barracks]?  It's a high-commitment, high-standards, high-investment group house model with centralized leadership and an up-or-out participation norm, designed to a) improve its members and b) actually accomplish medium-to-large scale tasks requiring long-term coordination.  Tongue-in-cheek referred to as the "fascist/authoritarian take on rationalist housing," which has no doubt contributed to my being vulnerable to strawmanning but was nevertheless the correct joke to be making, lest people misunderstand what they were signing up for.  Aesthetically modeled after Dragon Army from Ender's Game (not HPMOR), with a touch of Paper Street Soap Company thrown in, with Duncan Sabien in the role of Ender/Tyler and Eli Tyre in the role of Bean/The Narrator.

Why?  Current group housing/attempts at group rationality and community-supported leveling up seem to me to be falling short in a number of ways.  First, there's not enough stuff actually happening in them (i.e. to the extent people are growing and improving and accomplishing ambitious projects, it's largely within their professional orgs or fueled by unusually agenty individuals, and not by leveraging the low-hanging fruit available in our house environments).  Second, even the group houses seem to be plagued by the same sense of unanchored abandoned loneliness that's hitting the rationalist community specifically and the millennial generation more generally.  There are a bunch of competitors for "third," but for now we can leave it at that.

"You are who you practice being."


Section 1 of 3: Underlying models

The following will be meandering and long-winded; apologies in advance.  In short, both the house's proposed aesthetic and the impulse to found it in the first place were not well-reasoned from first principles—rather, they emerged from a set of System 1 intuitions which have proven sound/trustworthy in multiple arenas and which are based on experience in a variety of domains.  This section is an attempt to unpack and explain those intuitions post-hoc, by holding plausible explanations up against felt senses and checking to see what resonates.

Problem 1: Pendulums

This one's first because it informs and underlies a lot of my other assumptions.  Essentially, the claim here is that most social progress can be modeled as a pendulum oscillating decreasingly far from an ideal.  The society is "stuck" at one point, realizes that there's something wrong about that point (e.g. that maybe we shouldn't be forcing people to live out their entire lives in marriages that they entered into with imperfect information when they were like sixteen), and then moves to correct that specific problem, often breaking some other Chesterton's fence in the process.


For example, my experience leads me to put a lot of confidence behind the claim that we've traded "a lot of people trapped in marriages that are net bad for them" for "a lot of people who never reap the benefits of what would've been a strongly net-positive marriage, because it ended too easily too early on."  The latter problem is clearly smaller, and is probably a better problem to have as an individual, but it's nevertheless clear (to me, anyway) that the loosening of the absoluteness of marriage had negative effects in addition to its positive ones.

Proposed solution: Rather than choosing between absolutes, integrate.  For example, I have two close colleagues/allies who share millennials' default skepticism of lifelong marriage, but they also are skeptical that a commitment-free lifestyle is costlessly good.  So they've decided to do handfasting, in which they're fully committed for a year and a day at a time, and there's a known period of time for asking the question "should we stick together for another round?"

In this way, I posit, you can get the strengths of the old socially evolved norm which stood the test of time, while also avoiding the majority of its known failure modes.  Sort of like building a gate into the Chesterton's fence, instead of knocking it down—do the old thing in time-boxed iterations with regular strategic check-ins, rather than assuming you can invent a new thing from whole cloth.

Caveat/skull: Of course, the assumption here is that the Old Way Of Doing Things is not a slippery slope trap, and that you can in fact avoid the failure modes simply by trying.  And there are plenty of examples of that not working, which is why Taking Time-Boxed Experiments And Strategic Check-Ins Seriously is a must.  In particular, when attempting to strike such a balance, all parties must have common knowledge agreement about which side of the ideal to err toward (e.g. innocents in prison, or guilty parties walking free?).

 

Problem 2: The Unpleasant Valley

As far as I can tell, it's pretty uncontroversial to claim that humans are systems with a lot of inertia.  Status quo bias is well researched, past behavior is the best predictor of future behavior, most people fail at resolutions, etc.

I have some unqualified speculation regarding what's going on under the hood.  For one, I suspect that you'll often find humans behaving pretty much as an effort- and energy-conserving algorithm would behave.  People have optimized their most known and familiar processes at least somewhat, which means that it requires less oomph to just keep doing what you're doing than to cobble together a new system.  For another, I think hyperbolic discounting gets way too little credit/attention, and is a major factor in knocking people off the wagon when they're trying to forego local behaviors that are known to be intrinsically rewarding for local behaviors that add up to long-term cumulative gain.

But in short, I think the picture of "I'm going to try something new, eh?" often looks like this:

[figure omitted: a curve tracking motivation/payoff over time, dipping below baseline shortly after the start and eventually climbing well above it]
... with an "unpleasant valley" some time after the start point.  Think about the cold feet you get after the "honeymoon period" has worn off, or the desires and opinions of a military recruit in the second week of a six-week boot camp, or the frustration that emerges two months into a new diet/exercise regime, or your second year of being forced to take piano lessons.

The problem is, people never make it to the third year, where they're actually good at piano, and start reaping the benefits, and their System 1 updates to yeah, okay, this is in fact worth it.  Or rather, they sometimes make it, if there are strong supportive structures to get them across the unpleasant valley (e.g. in a military bootcamp, they just ... make you keep going).  But left to our own devices, we'll often get halfway through an experiment and just ... stop, without ever finding out what the far side is actually like.

Proposed solution: Make experiments "unquittable."  The idea here is that (ideally) one would not enter into a new experiment unless a) one were highly confident that one could absorb the costs, if things go badly, and b) one were reasonably confident that there was an Actually Good Thing waiting at the finish line.  If (big if) we take those as a given, then it should be safe to, in essence, "lock oneself in," via any number of commitment mechanisms.  Or, to put it in other words: "Medium-Term Future Me is going to lose perspective and want to give up because of being unable to see past short-term unpleasantness to the juicy, long-term goal?  Fine, then—Medium-Term Future Me doesn't get a vote."  Instead, Post-Experiment Future Me gets the vote, including getting to update heuristics on which-kinds-of-experiments-are-worth-entering.

Caveat/skull: People who are bad at self-modeling end up foolishly locking themselves into things that are higher-cost or lower-EV than they thought, and getting burned; black swans and tail risk end up making even good bets turn out very very badly; we really should've built in an ejector seat.  This risk can be mostly ameliorated by starting small and giving people a chance to calibrate—you don't make white belts try to punch through concrete blocks, you make them punch soft, pillowy targets first.

And, of course, you do build in an ejector seat.  See next.

 

Problem 3: Saving Face

If any of you have been to a martial arts academy in the United States, you're probably familiar with the norm whereby a tardy student purchases entry into the class by first doing some pushups.  The standard explanation here is that the student is doing the pushups not as a punishment, but rather as a sign of respect for the instructor, the other students, and the academy as a whole.

I posit that what's actually going on includes that, but is somewhat more subtle/complex.  I think the real benefit of the pushup system is that it closes the loop.  

Imagine you're a ten-year-old kid, and your parent picked you up late from school, and you're stuck in traffic on your way to the dojo.  You're sitting there, jittering, wondering whether you're going to get yelled at, wondering whether the master or the other students will think you're lazy, imagining stuttering as you try to explain that it wasn't your fault—

Nope, none of that.  Because it's already clearly established that if you fail to show up on time, you do some pushups, and then it's over.  Done.  Finished.  Like somebody sneezed and somebody else said "bless you," and now we can all move on with our lives.  Doing the pushups creates common knowledge around the questions "does this person know what they did wrong?" and "do we still have faith in their core character?"  You take your lumps, everyone sees you taking your lumps, and there's no dangling suspicion that you were just being lazy, or that other people are secretly judging you.  You've paid the price in public, and everyone knows it, and this is a good thing.

Proposed solution: This is a solution without a concrete problem, since I haven't yet actually outlined the specific commitments a Dragon has to make (regarding things like showing up on time, participating in group activities, and making personal progress).  But in essence, the solution is this: you have to build into your system from the beginning a set of ways-to-regain-face.  Ways to hit the ejector seat on an experiment that's going screwy without losing all social standing; ways to absorb the occasional misstep or failure-to-adequately-plan; ways to be less-than-perfect and still maintain the integrity of a system that's geared toward focusing everyone on perfection.  In short, people have to know (and others have to know that they know, and they have to know that others know that they know) exactly how to make amends to the social fabric, in cases where things go awry, so that there's no question about whether they're trying to make amends, or whether that attempt is sufficient.  


Caveat/skull: The obvious problem is people attempting to game the system—they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100.  The next obvious problem is that the price is set too low for the group, leaving them to still feel jilted or wronged, and the next obvious problem is that the price is set too high for the individual, leaving them to feel unfairly judged or punished (the fun part is when both of those are true at the same time).  Lastly, there's something in the mix about arbitrariness—what do pushups have to do with lateness, really?  I mean, I get that it's paying some kind of unpleasant cost, but ...


Problem 4: Defections & Compounded Interest

I'm pretty sure everyone's tired of hearing about one-boxing and iterated prisoners' dilemmas, so I'm going to move through this one fairly quickly even though it could be its own whole multipage post.  In essence, the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system.  Another way to put this is that people underestimate by a couple of orders of magnitude the corrosive impact of their defections—we often convince ourselves that 90% or 99% is good enough, when in fact what's needed is something like 99.99%.

There's something good that happens if you put a little bit of money away with every paycheck, and it vanishes or is severely curtailed once you stop, or start skipping a month here and there.  Similarly, there's something good that happens when a group of people agree to meet in the same place at the same time without fail, and it vanishes or is severely curtailed once one person skips twice.

In my work at the Center for Applied Rationality, I frequently tell my colleagues and volunteers "if you're 95% reliable, that means I can't rely on you."  That's because I'm in a context where "rely" means really trust that it'll get done.  No, really.  No, I don't care what comes up, DID YOU DO THE THING?  And if the answer is "Yeah, 19 times out of 20," then I can't give that person tasks ever again, because we run more than 20 workshops and I can't have one of them catastrophically fail.

(I mean, I could.  It probably wouldn't be the end of the world.  But that's exactly the point—I'm trying to create a pocket universe in which certain things, like "the CFAR workshop will go well," are absolutely reliable, and the "absolute" part is important.)
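(For the quantitatively minded: here's a minimal sketch, mine rather than CFAR's, of why per-task reliability has to be so extreme. The whole point is the `0.95 ** 20` arithmetic.)

```python
# Probability that every one of n tasks actually gets done, as a function
# of per-task reliability -- assuming independent tasks (my simplification).

def p_all_succeed(reliability: float, n_tasks: int) -> float:
    return reliability ** n_tasks

for r in (0.95, 0.99, 0.9999):
    print(f"per-task reliability {r}: P(all 20 workshops fine) = {p_all_succeed(r, 20):.3f}")

# 0.95   -> 0.358  (a ~64% chance that at least one workshop fails)
# 0.99   -> 0.818
# 0.9999 -> 0.998
```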

As far as I can tell, it's hyperbolic discounting all over again—the person who wants to skip out on the meetup sees all of these immediate, local costs to attending, and all of these visceral, large gains to defection, and their S1 doesn't properly weight the impact of those distant, cumulative effects (just like the person who's going to end up with no retirement savings because they wanted those new shoes this month instead of next month).  1.01^n takes a long time to look like it's going anywhere, and in the meantime the quick one-time payoff of 1.1 that you get by knocking everything else down to .99^n looks juicy and delicious and seems justified.
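(To make that arithmetic concrete, here's a minimal sketch using the post's own numbers: the steady 1.01^n line eventually dwarfs the one-time 1.1 payoff bought by degrading the baseline to 0.99^n.)

```python
# Compare compounding a small consistent gain (1.01 per round) against a
# one-off 1.1x payoff that knocks the baseline down to 0.99 per round.

for n in (10, 50, 200):
    steady = 1.01 ** n
    defect = 1.1 * 0.99 ** n
    print(f"n={n:3d}: 1.01^n = {steady:6.2f}   1.1 * 0.99^n = {defect:5.2f}")

# n= 10: 1.10 vs 0.99  -- the defector is already behind
# n= 50: 1.64 vs 0.67
# n=200: 7.32 vs 0.15
```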

But something magical does accrue when you make the jump from 99% to 100%.  That's when you see teams that truly trust and rely on one another, or marriages built on unshakeable faith (and you see what those teams and partnerships can build, when they can adopt time horizons of years or decades rather than desperately hoping nobody will bail after the third meeting).  It starts with a common knowledge understanding that yes, this is the priority, even—no, wait, especially—when it seems like there are seductively convincing arguments for it to not be.  When you know—not hope, but know—that you will make a local sacrifice for the long-term good, and you know that they will, too, and you all know that you all know this, both about yourselves and about each other.

Proposed solution: Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings (and, correspondingly, be more careful and more conservative about which undertakings you officially take on, versus which things you're just casually trying out as an informal experiment), with said norm to be modified/iterated only during predecided strategic check-in points and not on the fly, in the middle of things.  Build a habit of clearly distinguishing targets you're going to hit from targets you'd be happy to hit.  Agree upon and uphold surprisingly high costs for defection, Hofstadter style, recognizing that a cost that feels high enough probably isn't.  Leave people wiggle room as in Problem 3, but define that wiggle room extremely concretely and objectively, so that it's clear in advance when a line is about to be crossed.  Be ridiculously nitpicky and anal about supporting standards that don't seem worth supporting, in the moment, if they're in arenas that you've previously assessed as susceptible to compounding.  Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly.

Caveat/skull: Obviously, because we're humans, even people who reflectively endorse such an overall solution will chafe when it comes time for them to pay the price (I certainly know I've chafed under standards I fought to install).  At that point, things will seem arbitrary and overly constraining, priorities will seem misaligned (and might actually be), and then feelings will be hurt and accusations will be leveled and things will be rough.  The solution there is to have, already in place, strong and open channels of communication, strong norms and scaffolds for emotional support, strong default assumption of trust and good intent on all sides, etc. etc.  This goes wrongest when things fester and people feel they can't speak up; it goes much better if people have channels to lodge their complaints and reservations and are actively incentivized to do so (and can do so without being accused of defecting on the norm-in-question; criticism =/= attack).

 

Problem 5: Everything else

There are other models and problems in the mix.  For instance: a model surrounding buy-in and commitment that deals with an escalating cycle of asks-and-rewards; a model of how to effectively leverage a group around you to accomplish ambitious tasks, which requires first laying down some "topsoil" of simple/trivial/arbitrary activities that starts the growth of an ecology of affordances; a theory that the strategy of trying things and doing things outstrips the strategy of think-until-you-identify-worthwhile-action, and that rationalists in particular are crippling themselves through decision paralysis/letting the perfect be the enemy of the good, when just doing vaguely interesting projects would ultimately gain them more skill and get them further ahead; and a strong sense, based on both research and personal experience, that physical proximity matters, and that you can't build the correct kind of strength and flexibility and trust into your relationships without actually spending significant amounts of time with one another in meatspace on a regular basis, regardless of whether that makes tactical sense given your object-level projects and goals.

But I'm going to hold off on going into those in detail until people insist on hearing about them or ask questions/pose hesitations that could be answered by them.


Section 2 of 3: Power dynamics

All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members.  It was also meant to justify, at least indirectly, why a strong guiding hand might be necessary given that our community's evolved norms haven't really produced results (in the group houses) commensurate with the promises of EA and rationality.

Ultimately, though, what matters is not the problems and solutions themselves so much as the light they shine on my aesthetics (since, in the actual house, it's those aesthetics that will be used to resolve epistemic gridlock).  In other words, it's not so much those arguments as it is the fact that Duncan finds those arguments compelling.  It's worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like "does this guy actually think things through," "is this guy likely to be stupid or meta-stupid," "will this guy listen/react/update/pivot in response to evidence or consensus opposition," and "when this guy has intuitions that he can't explain, do they tend to be validated in the end?"

In other words, it's fair to view this whole post as an attempt to prove general trustworthiness (in both domain expertise and overall sanity), because—well—that's what it is.  In milieus like the military, authority figures expect (and get) obedience irrespective of whether they've earned their underlings' trust; rationalists tend to have a much higher bar before they're willing to subordinate their decisionmaking processes, yet that's still something this sort of model requires of its members (at least from time to time, in some domains, in a preliminary "try things with benefit of the doubt" sort of way).  I posit that Dragon Army Barracks works (where "works" means "is good and produces both individual and collective results that outstrip other group houses by at least a factor of three") if and only if its members are willing to hold doubt in reserve and act with full force in spite of reservations—if they're willing to trust me more than they trust their own sense of things (at least in the moment, pending later explanation and recalibration on my part or theirs or both).

And since that's a) the central difference between DA and all the other group houses, which are collections of non-subordinate equals, and b) quite the ask, especially in a rationalist community, it's entirely appropriate that it be given the greatest scrutiny.  Likely participants in the final house spent ~64 consecutive hours in my company a couple of weekends ago, specifically to play around with living under my thumb and see whether it's actually a good place to be; they had all of the concerns one would expect and (I hope) had most of those concerns answered to their satisfaction.  The rest of you will have to make do with grilling me in the comments here.

 

"Why was Tyler Durden building an army?  To what purpose?  For what greater good? ...in Tyler we trusted."

 

Power and authority are generally anti-epistemic—for every instance of those-in-power defending themselves against the barbarians at the gates or anti-vaxxers or the rise of Donald Trump, there are a dozen instances of them squashing truth, undermining progress that would make them irrelevant, and aggressively promoting the status quo.

Thus, every attempt by an individual to gather power about themselves is at least suspect, given regular ol' incentive structures and regular ol' fallible humans.  I can (and do) claim to be after a saved world and a bunch of people becoming more the-best-versions-of-themselves-according-to-themselves, but I acknowledge that's exactly the same claim an egomaniac would make, and I acknowledge that the link between "Duncan makes all his housemates wake up together and do pushups" and "the world is incrementally less likely to end in gray goo and agony" is not obvious.

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked.  In short, if someone's building a coercive trap, it's everyone's problem.

 

"Over and over he thought of the things he did and said in his first practice with his new army. Why couldn't he talk like he always did in his evening practice group? No authority except excellence. Never had to give orders, just made suggestions. But that wouldn't work, not with an army. His informal practice group didn't have to learn to do things together. They didn't have to develop a group feeling; they never had to learn how to hold together and trust each other in battle. They didn't have to respond instantly to command.

And he could go to the other extreme, too. He could be as lax and incompetent as Rose the Nose, if he wanted. He could make stupid mistakes no matter what he did. He had to have discipline, and that meant demanding—and getting—quick, decisive obedience. He had to have a well-trained army, and that meant drilling the soldiers over and over again, long after they thought they had mastered a technique, until it was so natural to them that they didn't have to think about it anymore."

 

But on the flip side, we don't have time to waste.  There's existential risk, for one, and even if you don't buy x-risk à la AI or bioterrorism or global warming, people's available hours are trickling away at the alarming rate of one hour per hour, and none of us are moving fast enough to get All The Things done before we die.  I personally feel that I am operating far below my healthy sustainable maximum capacity, and I'm not alone in that, and something like Dragon Army could help.

So.  Claims, as clearly as I can state them, in answer to the question "why should a bunch of people sacrifice non-trivial amounts of their autonomy to Duncan?"

1. Somebody ought to run this, and no one else will.  On the meta level, this experiment needs to be run—we have like twenty or thirty instances of the laissez-faire model, and none of the high-standards/hardcore one, and also not very many impressive results coming out of our houses.  Due diligence demands investigation of the opposite hypothesis.  On the object level, it seems uncontroversial to me that there are goods waiting on the other side of the unpleasant valley—goods that a team of leveled-up, coordinated individuals with bonds of mutual trust can seize that the rest of us can't even conceive of, at this point, because we don't have a deep grasp of what new affordances appear once you get there.

2. I'm the least unqualified person around.  Those words are chosen deliberately, for this post on "less wrong."  I have a unique combination of expertise that includes being a rationalist, sixth grade teacher, coach, RA/head of a dormitory, ringleader of a pack of hooligans, member of two honor code committees, curriculum director, obsessive sci-fi/fantasy nerd, writer, builder, martial artist, parkour guru, maker, and generalist.  If anybody's intuitions and S1 models are likely to be capable of distinguishing the uncanny valley from the real deal, I posit mine are.

3. There's never been a safer context for this sort of experiment.  It's 2017, we live in the United States, and all of the people involved are rationalists.  We all know about NVC and double crux, we're all going to do Circling, we all know about Gendlin's Focusing, and we've all read the Sequences (or will soon).  If ever there was a time to say "let's all step out onto the slippery slope, I think we can keep our balance," it's now—there's no group of people better equipped to stop this from going sideways.

4. It does actually require a tyrant. As a part of a debrief during the weekend experiment/dry run, we went around the circle and people talked about concerns/dealbreakers/things they don't want to give up.  One interesting thing that popped up is that, according to consensus, it's literally impossible to find a time of day when the whole group could get together to exercise.  This held true even with each individual willing to make personal sacrifices and do things that are somewhat costly.

If, of course, the expectation is that everybody shows up on Tuesday and Thursday evenings, and the cost of not doing so is not being present in the house, suddenly the situation becomes simple and workable.  And yes, this means some kids left behind (ctrl+f), but the whole point of this is to be instrumentally exclusive and consensually high-commitment.  You just need someone to make the actual final call—there are too many threads for the coordination problem of a house of this kind to be solved by committee, and too many circumstances in which it's impossible to make a principled, justifiable decision between 492 almost-indistinguishably-good options.  On top of that, there's a need for there to be some kind of consistent, neutral force that sets course, imposes consistency, resolves disputes/breaks deadlock, and absorbs all of the blame for the fact that it's unpleasant to be forced to do things you know you ought to but don't want to do.

And lastly, we (by which I indicate the people most likely to end up participating) want the house to do stuff—to actually take on projects of ambitious scope, things that require ten or more talented people reliably coordinating for months at a time.  That sort of coordination requires a quarterback on the field, even if the strategizing in the locker room is egalitarian.

5. There isn't really a status quo for power to abusively maintain.  Dragon Army Barracks is not an object-level experiment in making the best house; it's a meta-level experiment attempting (through iteration rather than armchair theorizing) to answer the question "how best does one structure a house environment for growth, self-actualization, productivity, and social synergy?"  It's taken as a given that we'll get things wrong on the first and second and third try; the whole point is to shift from one experiment to the next, gradually accumulating proven-useful norms via consensus mechanisms, and the centralized power is mostly there just to keep the transitions smooth and seamless.  More importantly, the fundamental conceit of the model is "Duncan sees a better way, which might take some time to settle into," but after e.g. six months, if the thing is not clearly positive and at least well on its way to being self-sustaining, everyone ought to abandon it anyway.  In short, my tyranny, if net bad, has a natural time limit, because people aren't going to wait around forever for their results.

6. The experiment has protections built in.  Transparency, operationalization, and informed consent are the name of the game; communication and flexibility are how the machine is maintained.  Like the Constitution, Dragon Army's charter and organization are meant to be "living documents" that constrain change only insofar as they impose reasonable limitations on how wantonly change can be enacted.


Section 3 of 3: Dragon Army Charter (DRAFT)

Statement of purpose:

Dragon Army Barracks is a group housing and intentional community project which exists to support its members socially, emotionally, intellectually, and materially as they endeavor to improve themselves, complete worthwhile projects, and develop new and useful culture, in that order.  In addition to the usual housing commitments (i.e. rent, utilities, shared expenses), its members will make limited and specific commitments of time, attention, and effort averaging roughly 90 hours a month (~3hr/day plus occasional weekend activities).

Dragon Army Barracks will have an egalitarian, flat power structure, with the exception of a commander (Duncan Sabien) and a first officer (Eli Tyre).  The commander's role is to create structure by which the agreed-upon norms and standards of the group shall be discussed, decided, and enforced, to manage entry to and exit from the group, and to break epistemic gridlock/make decisions when speed or simplification is required.  The first officer's role is to manage and moderate the process of building consensus around the standards of the Army—what they are, and in what priority they should be met, and with what consequences for failure.  Other "management" positions may come into existence in limited domains (e.g. if a project arises, it may have a leader, and that leader will often not be Duncan or Eli), and will have their scope and powers defined at the point of creation/ratification.

Initial areas of exploration:

The particular object-level foci of Dragon Army Barracks will change over time as its members experiment and iterate, but at first it will prioritize the following:

  • Physical proximity (exercising together, preparing and eating meals together, sharing a house and common space)
  • Regular activities for bonding and emotional support (Circling, pair debugging, weekly retrospective, tutoring/study hall)
  • Regular activities for growth and development (talk night, tutoring/study hall, bringing in experts, cross-pollination)
  • Intentional culture (experiments around lexicon, communication, conflict resolution, bets & calibration, personal motivation, distribution of resources & responsibilities, food acquisition & preparation, etc.)
  • Projects with "shippable" products (e.g. talks, blog posts, apps, events; some solo, some partner, some small group, some whole group; ranging from short-term to year-long)
  • Regular (every 6-10 weeks) retreats to learn a skill, partake in an adventure or challenge, or simply change perspective

Dragon Army Barracks will begin with a move-in weekend that will include ~10 hours of group bonding, discussion, and norm-setting.  After that, it will enter an eight-week bootcamp phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/wk, e.g. Tue/Fri/Sun)
  • Whole group dinner and retrospective (120min, 1x/wk, e.g. Tue evening)
  • Small group baseline skill acquisition/study hall/cross-pollination (90min, 1x/wk)
  • Small group circle-shaped discussion (120min, 1x/wk)
  • Pair debugging or rapport building (45min, 2x/wk)
  • One-on-one check-in with commander (20min, 2x/wk)
  • Chore/house responsibilities (90min distributed)
  • Publishable/shippable solo small-scale project work with weekly public update (100min distributed)

... for a total time commitment of 16h/week or 128 hours total, followed by a whole group retreat and reorientation.  The house will then enter an eight-week trial phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/wk)
  • Whole group dinner, retrospective, and plotting (150min, 1x/wk)
  • Small group circling and/or pair debugging (120min distributed)
  • Publishable/shippable small group medium-scale project work with weekly public update (180min distributed)
  • One-on-one check-in with commander (20min, 1x/wk)
  • Chore/house responsibilities (60min distributed)
... for a total time commitment of 13h/week or 104 hours total, again followed by a whole group retreat and reorientation.  The house will then enter a third phase where commitments will likely change, but will include at a minimum whole group exercise, whole group dinner, and some specific small-group responsibilities, either social/emotional or project/productive (once again ending with a whole group retreat).  At some point between the second and third phase, the house will also ramp up for its first large-scale project, which is yet to be determined but will be roughly on the scale of putting on a CFAR workshop in terms of time and complexity.
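(As a sanity check on the schedule arithmetic above: a quick sketch, mine and not part of the charter, that sums the listed weekly minimums for the first two phases.  They land near the quoted totals; the bootcamp items come in slightly under 16h/week, consistent with their being explicit minimums.)

```python
# Each entry is (minutes per session, sessions per week).
bootcamp = [(90, 3), (120, 1), (90, 1), (120, 1), (45, 2), (20, 2), (90, 1), (100, 1)]
trial    = [(90, 3), (150, 1), (120, 1), (180, 1), (20, 1), (60, 1)]

for name, phase in (("bootcamp", bootcamp), ("trial", trial)):
    weekly_minutes = sum(minutes * times for minutes, times in phase)
    print(f"{name}: {weekly_minutes / 60:.1f} h/week, ~{8 * weekly_minutes / 60:.0f} h over 8 weeks")

# bootcamp: 15.3 h/week, ~123 h  (quoted: 16 h/week, 128 h total)
# trial:    13.3 h/week, ~107 h  (quoted: 13 h/week, 104 h total)
```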

Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to:
  • Above-average physical capacity
  • Above-average introspection
  • Above-average planning & execution skill
  • Above-average communication/facilitation skill
  • Above-average calibration/debiasing/rationality knowledge
  • Above-average scientific lab skill/ability to theorize and rigorously investigate claims
  • Average problem-solving/debugging skill
  • Average public speaking skill
  • Average leadership/coordination skill
  • Average teaching and tutoring skill
  • Fundamentals of first aid & survival
  • Fundamentals of financial management
  • At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill)
  • At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)
Furthermore, every Dragon should have participated in:
  • At least six personal growth projects involving the development of new skill (or honing of prior skill)
  • At least three partner- or small-group projects that could not have been completed alone
  • At least one large-scale, whole-army project that either a) had a reasonable chance of impacting the world's most important problems, or b) caused significant personal growth and improvement
  • Daily contributions to evolved house culture
Speaking of evolved house culture...

Because of both a) the expected value of social exploration and b) the cumulative positive effects of being in a group that's trying things regularly and taking experiments seriously, Dragon Army will endeavor to adopt at least one new experimental norm per week.  Each new experimental norm should have an intended goal or result, an informal theoretical backing, and a set re-evaluation time (default three weeks).  There are two routes by which a new experimental norm is put into place:

  • The experiment is proposed by a member, discussed in a whole group setting, and meets the minimum bar for adoption (>60% of the Army supports, with <20% opposed and no hard vetoes; see the sketch just below this list)
  • The Army has proposed no new experiments in the previous week, and the Commander proposes three options.  The group may then choose one by vote/consensus, or generate three new options, from which the Commander may choose.
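(Below is a toy encoding of that adoption bar; the function name and exact boundary handling are my assumptions, not charter text.)

```python
def norm_adopted(support: int, opposed: int, vetoes: int, army_size: int) -> bool:
    """True if a proposed experimental norm meets the stated minimum bar:
    >60% of the Army in support, <20% opposed, and no hard vetoes."""
    if vetoes > 0:
        return False
    return support / army_size > 0.60 and opposed / army_size < 0.20

# In a 10-person house: 7 in favor, 1 opposed, nobody vetoing -> adopted.
print(norm_adopted(support=7, opposed=1, vetoes=0, army_size=10))  # True
```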
Examples of some of the early norms which the house is likely to try out from day one (hit the ground running):
  • The use of a specific gesture to greet fellow Dragons (house salute)
  • Various call-and-response patterns surrounding house norms (e.g. "What's rule number one?" "PROTECT YOURSELF!")
  • Practice using hook, line, and sinker in social situations (three items other than your name for introductions)
  • The anti-Singer rule for open calls-for-help (if Dragon A says "hey, can anyone help me with X?" the responsibility falls on the physically closest housemate to either help or say "Not me/can't do it!" at which point the buck passes to the next physically closest person)
  • An "interrupt" call that any Dragon may use to pause an ongoing interaction for fifteen seconds
  • A "culture of abundance" in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible
  • A "graffiti board" upon which the Army keeps a running informal record of its mood and thoughts

Dragon Army Code of Conduct
While the norms and standards of Dragon Army will be mutable by design, the following (once revised and ratified) will be the immutable code of conduct for the first eight weeks, and is unlikely to change much after that.

  1. A Dragon will protect itself, i.e. will not submit to pressure causing it to do things that are dangerous or unhealthy, nor wait around passively when in need of help or support (note that this may cause a Dragon to leave the experiment!).
  2. A Dragon will take responsibility for its actions, emotional responses, and the consequences thereof, e.g. if late will not blame bad luck/circumstance, if angry or triggered will not blame the other party.
  3. A Dragon will assume good faith in all interactions with other Dragons and with house norms and activities, i.e. will not engage in strawmanning or the horns effect.
  4. A Dragon will be candid and proactive, e.g. will give other Dragons a chance to hear about and interact with negative models once they notice them forming, or will not sit on an emotional or interpersonal problem until it festers into something worse.
  5. A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts, i.e. will not engage in silent defection, undermining, halfheartedness, aloofness, subtle sabotage, or other actions which follow the letter of the law while violating the spirit.  Another way to state this is that a Dragon will practice compartmentalization—will be able to simultaneously hold "I'm deeply skeptical about this" alongside "but I'm actually giving it an honest try," and postpone critique/complaint/suggestion until predetermined checkpoints.  Yet another way to state this is that a Dragon will take experiments seriously, including epistemic humility and actually seeing things through to their ends rather than fiddling midway.
  6. A Dragon will take the outside view seriously, maintain epistemic humility, and make subject-object shifts, i.e. will act as a behaviorist and agree to judge and be judged on the basis of actions and revealed preferences rather than intentions, hypotheses, and assumptions (this one's similar to #2 and hard to put into words, but for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior).  Another way to state this is that a Dragon will embrace the maxim "don't believe everything that you think."
  7. A Dragon will strive for excellence in all things, modified only by a) prioritization and b) doing what is necessary to protect itself/maximize total growth and output on long time scales.
  8. A Dragon will not defect on other Dragons.
There will be various operationalizations of the above commitments into specific norms (e.g. a Dragon will read all messages and emails within 24 hours, and if a full response is not possible within that window, will send a short response indicating when the longer response may be expected) that will occur once the specific members of the Army have been selected and have individually signed on.  Disputes over violations of the code of conduct, or confusions about its operationalization, will first be addressed one-on-one or in informal small group, and will then move to general discussion, and then to the first officer, and then to the commander.

Note that all of the above is deliberately kept somewhat flexible/vague/open-ended/unsettled, because we are trying not to fall prey to GOODHART'S DEMON.


Random Logistics
  1. The initial filter for attendance will include a one-on-one interview with the commander (Duncan), who will be looking for a) credible intention to put forth effort toward the goal of having a positive impact on the world, b) likeliness of a strong fit with the structure of the house and the other participants, and c) reliability à la financial stability and ability to commit fully to long-term endeavors.  Final decisions will be made by the commander and may be informally questioned/appealed but not overruled by another power.
  2. Once a final list of participants is created, all participants will sign a "free state" contract of the form "I agree to move into a house within five miles of downtown Berkeley (for length of time X with financial obligation Y) sometime in the window of July 1st through September 30th, conditional on at least seven other people signing this same agreement."  At that point, the search for a suitable house will begin, possibly with delegation to participants.
  3. Rents in that area tend to run ~$1100 per room, on average, plus utilities, plus a 10% contribution to the general house fund.  Thus, someone hoping for a single should, in the 85th percentile worst case, be prepared to make a ~$1400/month commitment.  Similarly, someone hoping for a double should be prepared for ~$700/month, and someone hoping for a triple should be prepared for ~$500/month, and someone hoping for a quad should be prepared for ~$350/month.  (A quick sketch of this arithmetic follows the list.)
  4. The initial phase of the experiment is a six month commitment, but leases are generally one year.  Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected.  After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of "keep paying until you've found your replacement."  (This will likely be easiest to enforce by simply having as many names as possible on the actual lease.)
  5. Of the ~90hr/month, it is assumed that ~30 are whole-group, ~30 are small group or pair work, and ~30 are independent or voluntarily-paired work.  Furthermore, it is assumed that the commander maintains sole authority over ~15 of those hours (i.e. can require that they be spent in a specific way consistent with the aesthetic above, even in the face of skepticism or opposition).
  6. We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.
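(The quoted per-person rent figures are roughly consistent with a simple model, mine rather than the post's: take an all-in room cost of ~$1400, i.e. rent plus utilities plus the 10% house fund at the 85th-percentile worst case, and split it evenly among the room's occupants.)

```python
ROOM_COST = 1400  # assumed all-in monthly cost per room: ~$1100 rent + utilities + 10% house fund

for occupants, label in ((1, "single"), (2, "double"), (3, "triple"), (4, "quad")):
    print(f"{label}: ~${ROOM_COST / occupants:.0f}/month")

# single ~$1400, double ~$700, triple ~$467 (quoted ~$500), quad ~$350
```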

Conclusion: Obviously this is neither complete nor perfect.  What's wrong, what's missing, what do you think?  I'm going to much more strongly weight the opinions of Berkeleyans who are likely to participate, but I'm genuinely interested in hearing from everyone, particularly those who notice red flags (the goal is not to do anything stupid or meta-stupid).  Have fun tearing it up.

(sorry for the abrupt cutoff, but this was meant to be published Monday and I've just ... not ... been ... sleeping ... to get it done)

Gears in understanding

23 Valentine 12 May 2017 12:36AM

Some (literal, physical) roadmaps are more useful than others. Sometimes this is because of how well the map corresponds to the territory, but sometimes it's because of features of the map that are irrespective of the territory. E.g., maybe the lines are fat and smudged such that you can't tell how far a road is from a river, or maybe it's unclear which road a name is trying to indicate.

In the same way, I want to point at a property of models that isn't about what they're modeling. It interacts with the clarity of what they're modeling, but only in the same way that smudged lines in a roadmap interact with the clarity of the roadmap.

This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don't know if this list is exhaustive and would be a little surprised if it were:

  1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
  2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
  3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?

I think this is a really important idea that ties together a lot of different topics that appear here on Less Wrong. It also acts as a prerequisite frame for a bunch of ideas and tools that I'll want to talk about later.

I'll start by giving a bunch of examples. At the end I'll summarize and gesture toward where this is going as I see it.


A Month's Worth of Rational Posts - Feedback on my Rationality Feed.

17 deluks917 15 May 2017 02:21PM

For the last two months I have been publishing a feed of rationalist articles. Originally the feed was only published on the SSC Discord channel (be charitable and kind, and don't treat the place like 4chan). For the last few days I have also been publishing it on my blog, deluks917.wordpress.com. I categorize the links and include a brief excerpt, review, and/or teaser. If you would like to see an example in practice, just check today's post. The average number of links per day, in the last month, has been six, but this number has been higher recently. I have not missed a single day since I started, so I think it's likely I will continue doing this. The list of blogs I check is located here: List of Blogs

I am looking for some feedback. At the bottom of this post I am including a month's worth of posts, categorized using the current system. Posts are not necessarily in any particular order, since my categorization system has not been constant over time. Lots of posts were moved around by hand.

1 - Should I share the feed somewhere other than the SSC Discord and my blog? Mindlevelup suggested I write up a weekly roundup, which I could share on lesswrong and SSC. I would estimate the expected number of links in such a post to be around 35. Links would be posted in chronological order within categories. Alternatively, I could share such a post every two weeks. It's also possible to have a mailing list, but I currently find this less promising.

2 - Do the categories make a reasonable amount of sense? What tweaks would you make? I have considered merging some of the smaller categories (Math and CS, Amusement) into "misc".

3 - Are there any blogs I should include in or drop from the feed? For example, I have been considering dropping ribbonfarm. The highest priority is to get the content that's directly about instrumental/epistemic rationality. The bar is higher for politics and culture_war. I should note I am not going to personally include any blog without an RSS feed.

4 - Is anyone willing to write a "Best of rationalist tumblr" post? If I write a weekly/bi-weekly roundup I could combine it with an equivalent "best of tumblr" post. The tumblr post would not have to be daily, just weekly or every other week. We could take turns posting the resulting combination to lesswrong/SSC and collecting the juicy karma. However, it's worth noting that SSC-reddit has some controls on culture_war (outside of the CW thread). Since we want to post to r/SSC we need to keep the density of culture_war to reasonable levels. Lesswrong also has some anti-cw norms.

=== Last Month's Rationality Content === 

**Scott**

http://slatestarcodex.com/2017/05/11/silicon-valley-a-reality-check/ - What a person finds in Silicon Valley mirrors the seeker.

http://slatestarcodex.com/2017/05/09/links-517-rip-van-linkle/ - Links.

http://slatestarcodex.com/2017/04/11/sacred-principles-as-exhaustible-resources/ - Don't deplete the free speech commons.

http://slatestarcodex.com/2017/04/12/clarification-to-sacred-principles-as-exhaustible-resources/  - Clarifications and caveats on Scott's last article on free speech and sacred values.

http://slatestarcodex.com/2017/04/13/chametz/ - A Jewish Vampire Story

http://slatestarcodex.com/2017/04/17/learning-to-love-scientific-consensus/ - Scott critiques a list of 10 maverick inventors. He then reconsiders his previous science skepticism.

http://slatestarcodex.com/2017/04/21/ssc-journal-club-childhood-trauma-and-cognition/ - A new study challenges the idea that child abuse reduces brain function.

http://slatestarcodex.com/2017/04/25/book-review-the-hungry-brain/ - Scott gives a favorable view of the "establishment" view of nutrition.

http://slatestarcodex.com/2017/04/26/anorexia-and-metabolic-set-point/ - Short Post (for Scott)

https://slatestarscratchpad.tumblr.com/post/160028275801/slatestarscratchpad-wayward-sidekick-you - Scott discusses engaging with ideas you find harmful. He also discusses his attitude toward making his blog as friendly as possible. [culture_war]

http://slatestarcodex.com/2017/05/01/neutral-vs-conservative-the-eternal-struggle/ - Formally neutral institutions have a liberal bias. Conservatives react by seceding and forming their own institutions. The end result is bad for society. [Culture War]

http://slatestarcodex.com/2017/05/04/getting-high-on-your-own-supply/ - "If you optimize for the epistemic culture that’s best for getting elected, but that culture isn’t also the best for running a party or governing a nation, then the fact that your culture affects your elites as well becomes a really big problem." Short for Scott.

http://slatestarcodex.com/2017/05/07/ot75-the-comment-king/ - bi-weekly visible open thread.

http://unsongbook.com/postscript-1-wrap-parties-fan-music/ - Final chapter of Unsong goes up approximately 8pm on Sunday. Unsong will have an epilogue which will go up on Wednesday. Wrap party details. (I will be at the wrap party on Sunday.)

http://unsongbook.com/book-iv-kings/ - "Somebody had to, no one would / I tried to do the best I could / And now it’s done, and now they can’t ignore us / And even though it all went wrong / I’ll stand against the whole unsong / With nothing on my tongue but HaMephorash"

http://unsongbook.com/chapter-71-but-for-another-gives-its-ease/ - Penultimate chapter of Unsong.

http://unsongbook.com/chapter-70-nor-for-itself-hath-any-care/ - Newest Chapter.

http://unsongbook.com/authors-note-10-hamephorash-hamephorash-party/ - Final chapter goes up May 14. Bay Area reading party announced.

http://unsongbook.com/chapter-69-love-seeketh-not-itself-to-please/ - Newest Chapter.

http://unsongbook.com/chapter-68-puts-all-heaven-in-a-rage/ - Newest Chapter.

**Rationalism**

http://lesswrong.com/r/discussion/lw/ozz/gearsness_of_understanding/ - "I want to point at a property of models that isn't about what they're modeling. It interacts with the clarity of what they're modeling, but only in the same way that smudged lines in a roadmap interact with the clarity of the roadmap. This property is how deterministically interconnected the variables of the model are.". The theory is applied to multiple explicit examples.

https://thepdv.wordpress.com/2017/05/11/how-i-use-beeminder/ - Short but gives details. Beeminder is the only productivity system that worked for the author.

https://putanumonit.com/2017/05/09/time-well-spent/ - Akrasia and procrastination. A review of some of the rationalist thinking on the topic. Jacob's personal take and his system for tracking his productivity.

http://kajsotala.fi/2017/05/cognitive-core-systems-explaining-intuitions-behind-belief-in-souls-free-will-and-creation-myths/ - Description of four core systems humans, and other animals, are born with. An explanation of why these systems lead to belief in souls. Short.

https://mindlevelup.wordpress.com/2017/05/06/taking-criticism/ - Reframing criticism so that it makes sense to the author (who is bad at taking criticism). A Q&A between the author and himself.

http://lesswrong.com/r/discussion/lw/oz1/soft_skills_for_running_meetups_for_beginners/ - Concrete advice for running meetups. Not especially focused on beginning organizers. Written by the person who organized Solstice.

http://effective-altruism.com/ea/19t/mental_health_resource_for_ea_community/ - A breakdown of the most useful information about Mania and Psychosis. Extremely practical advice. Julia Wise.

http://bearlamp.com.au/working-with-multiple-problems-at-once - Problems add up and you run out of time. How do you get out? Very practical.

http://agentyduck.blogspot.com/2017/05/creativity-taps.html - Practical ideas for exercising creativity.

http://lesswrong.com/r/discussion/lw/oyk/acting_on_your_intended_preferences_what_does/ - What does it look like in practice to pursue your goals? A series of practical questions to ask yourself. Links to a previous series of blog posts are included.

https://thingofthings.wordpress.com/2017/05/03/why-do-all-the-rationalists-live-in-the-bay-area/ - Benefits of living in the Bay. The Bay is a top place for software engineers even accounting for cost of living, Rationalist institutions are in the Bay, there are social and economic benefits to being around other community members.

https://qualiacomputing.com/2017/05/04/the-most-important-philosophical-question/ - “Is happiness a spiritual trick, or is spirituality a happiness trick?”

http://particularvirtue.blogspot.com/2017/05/how-to-build-community-full-of-lonely.html - Why so many rationalists feel lonely and concrete suggestions for improving social groups. Advice is given to people who are popular, lonely or organizers. Very practical.

https://hivewired.wordpress.com/2017/05/07/announcing-entropycon-12017/ - We beat smallpox, we will beat death, we can try to beat entropy. A humorous mantra against nihilism.

https://mindlevelup.wordpress.com/2017/04/30/there-is-no-akrasia/ - The author argues that akrasia isn't a "thing", it's a "sorta-coherent concept". He also argues that "akrasia" is not a useful concept and can be harmful.

http://bearlamp.com.au/experiments-iterations-and-the-scientific-method/ - A graph of the scientific method in practice. The author works through his quantified-self practice and discusses his experiences.

https://everythingstudies.wordpress.com/2017/04/29/all-the-worlds-a-trading-zone/ - Cultures with different norms and languages can interact successfully.

http://kajsotala.fi/2017/04/relationship-compatibility-as-patterns-of-emotional-association/ - What is relationship "chemistry"?

http://lesswrong.com/lw/oyc/nate_soares_replacing_guilt_series_compiled_in/ - Ebook. 45 blog posts on replacing guilt and shame with a stronger motivation.

http://mindingourway.com/assuming-positive-intent/ - "If you're actively working hard to make the world a better place, then we're on the same team. If you're committed to letting evidence and reason guide your actions, then I consider you friends, comrades in arms, and kin."

http://bearlamp.com.au/quantified-self-tracking-with-a-form/ - Practical advice based on Elo's personal experience.

http://lesswrong.com/r/discussion/lw/ovc/background_reading_the_real_hufflepuff_sequence/ - Links and Descriptions of rationalist articles about group norms and dynamics.

https://everythingstudies.wordpress.com/2017/04/24/people-are-different/ - "We need to understand, accept and respect differences, that one size does not fit all, but to (and from) each their own."

http://bearlamp.com.au/yak-shaving-2/ - "A question worth asking is whether you are in your life at present causing a build up of problems, a decrease of problems, or roughly keeping them about the same level."

http://lesswrong.com/r/discussion/lw/oxk/i_updated_the_list_of_rationalist_blogs_on_the/ - Up to date list of rationalist blogs.

https://aellagirl.com/2017/05/02/internet-communities-otters-vs-possums/ - Possums are people who like a specific culture; otters are people who like most cultures. What happens when the percentage of otters in a community increases?

https://aellagirl.com/2017/04/24/how-i-lost-my-faith/ - "People sometimes ask the question of why it took so long. Really I’m amazed that it happened at all. Before we even approach the aspect of “good arguments against religion”, you have to understand exactly how much is sacrificed by the loss of religion."

http://particularvirtue.blogspot.com/2017/04/on-social-spaces.html - Twitter, Tumblr, Facebook etc. PV responds to Zvi's articles about Facebook. PV defends Tumblr and Facebook and has some criticisms of Twitter. Several examples are given where rationalist groups tried to change platforms.

http://www.overcomingbias.com/2017/04/superhumans-live-among-us.html - Some human polymaths really are superhuman. But they don't have the track record to prove it.

https://thezvi.wordpress.com/2017/04/22/against-facebook/ - Sections: 1. A model breaking down how Facebook actually works. 2. An experiment with my News Feed. 3. Living with the Algorithm. 4. See First, Facebook’s most friendly feature. 5. Facebook is an evil monopolistic pariah Moloch. 6. Facebook is bad for you and Facebook is ruining your life. 7. Facebook is destroying discourse and the public record. 8. Facebook is out to get you.

https://thezvi.wordpress.com/2017/04/22/against-facebook-comparison-to-alternatives-and-call-to-action/ - Zvi's advice for managing your information streams and discussion platforms. Facebook can mostly be replaced.

https://rationalconspiracy.com/2017/04/22/moving-to-the-bay-area/ - Downsides of the Bay. Extensively sourced. Cost of living, traffic, public transit, crime, cleanliness.

https://nintil.com/2017/04/18/still-not-a-zombie-replies-to-commenters/ - Thoughts on consciousness and identity.

http://bearlamp.com.au/an-inquiry-into-memory-of-humans/ - The reader is asked to try various interesting memory exercises.

https://www.jefftk.com/p/how-to-make-housing-cheaper - 9 ways to make housing cheaper.

http://lesswrong.com/r/discussion/lw/owb/straw_hufflepuffs_and_lone_heroes/ - Should Harry have joined Hufflepuff in HPMOR? Harry had reasons to be a lone hero, do you?

http://lesswrong.com/lw/owa/lesswrong_analytics_february_2009_to_january_2017/ - Activity graphs of lesswrong over time, which posts had the most views, links to source code and further reading.

https://thezvi.wordpress.com/2017/04/23/help-us-find-your-blog-and-others/ - Zvi will read a post from your blog and consider adding you to his RSS feed.

https://thingofthings.wordpress.com/2017/04/11/book-post-for-march/ - Books on parenting.

https://boardgamesandrationality.wordpress.com/2017/04/24/first-blog-post/ - Dealing With Secret Information in boardgames and real life.

http://www.overcomingbias.com/2017/04/mormon-transhumanists.html - The relationship between religious community and technological change. Long for Overcoming Bias.

https://putanumonit.com/2017/04/15/bad-religion/ - "Rationality is a really unsatisfactory religion. But it’s a great life hack."

https://thezvi.wordpress.com/2017/04/12/escalator-action/ - Should we walk on escalators?

https://putanumonit.com/2017/04/21/book-review-too-like-the-lightning/ - The world of Jacob's dreams, thoughts on AI, a book review.

**EA**

http://effective-altruism.com/ea/19y/understanding_charity_evaluation/ - A detailed breakdown of how charity evaluation works in practice. Openly somewhat speculative.

http://blog.givewell.org/2017/05/11/update-on-our-views-on-cataract-surgery/ - Previously, GiveWell had unsuccessfully tried to find recommendable cataract surgery charities. The biggest issues were "room for funding" and "lack of high quality monitoring data". However, they believe that cataract surgery is a promising intervention and are doing more analysis.

https://80000hours.org/2017/05/how-much-do-hedge-fund-traders-earn/ - Detailed report on career trajectories and earnings. "We found that junior traders typically earn $300k – $3m per year, and it’s possible to reach these roles in 4 – 8 years."

https://www.givedirectly.org/blog-post?id=7612753271623522521 - 8 news links about GiveDirectly, Basic Income and cash transfers.

https://80000hours.org/2017/05/most-people-report-believing-its-incredibly-cheap-to-save-lives-in-the-developing-world/ - Details of a study on how much Americans think it costs to save a life. Discussion of why people gave such optimistic answers. "It turns out that most Americans believe a child can be prevented from dying of preventable diseases for very little – less than $100."

https://www.thelifeyoucansave.org/Blog/ID/1355/Are-Giving-Games-a-Better-Way-to-Teach-Philanthropy - Literature review on "philanthropy games". Covers both traditional student philanthropy courses and the much shorter "giving game".

https://www.givedirectly.org/blog-post?id=8255610968755843534 - Links to news stories about Effective Altruism.

http://benjaminrosshoffman.com/an-openai-board-seat-is-surprisingly-expensive/ - "In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project."

https://www.givedirectly.org/blog-post?id=5010525406506746433 - Links to news articles about GiveDirectly, Basic Income and Cash Transfer.

https://www.givedirectly.org/blog-post?id=121797500310578692 - Report on a program to give cash to coffee farmers in eastern Uganda.

http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ - Details from the first round of funding, community feedback, Mistakes and Updates.

http://lesswrong.com/r/discussion/lw/ox4/effective_altruism_is_selfrecommending/ - The Open Philanthropy Project has a closed validation loop. A detailed timeline of GiveWell/Open Philanthropy is given and many untested assumptions are pointed out. A conceptual connection is made to confidence games.

http://lesswrong.com/r/discussion/lw/oxd/the_2017_effective_altruism_survey_please_take/ - Take the survey :)

https://www.givingwhatwecan.org/post/2017/04/career-of-professor-alan-fenwick/ - Retrospective on the career of the director of the Schistosomiasis Control Initiative.

http://www.openphilanthropy.org/blog/new-report-early-field-growth - The history of attempts to grow new fields of research or advocacy.

https://www.givedirectly.org/blog-post?id=4406309858976986548 - News links about GiveDirectly, Basic Income and Cash Transfers.

https://intelligence.org/2017/04/30/2017-updates-and-strategy/ - Outreach, expansion, detailed research plan, state of the AI-risk community.

http://blog.givewell.org/2017/05/04/why-givewell-is-partnering-with-idinsight/ - IDinsight is an international NGO that aims to help its clients develop and use rigorous evidence to improve social impact. Summary, Background, goals, initial plans.

https://www.thelifeyoucansave.org/Blog/ID/1354/A-Shift-in-Priorities-at-the-Giving-Game-Project - Finding sustainable funding, providing measurable outcomes, improving follow-ups with participants.

http://www.openphilanthropy.org/blog/why-are-us-corporate-cage-free-campaigns-succeeding - The article contains a timeline of cage-free reform. Some background reasons given are: undercover investigations, college engagement, corporate engagement, ballot measures, gestation crate pledges, European precedent.

https://www.givingwhatwecan.org/post/2017/04/a-successor-to-the-giving-what-we-can-trust/ - The Giving What We Can Trust has joined with the "Effective Altruism Funds" (run by the Centre for Effective Altruism).

http://lesswrong.com/r/discussion/lw/oyf/bad_intent_is_a_behavior_not_a_feeling/ - Response to Nate Soares, application to EA. "If you try to control others’ actions, and don’t limit yourself to doing that by honestly informing them, then you’ll end up with a strategy that distorts the truth, whether or not you meant to."

**Ai_risk**

http://effective-altruism.com/ea/19c/intro_to_caring_about_ai_alignment_as_an_ea_cause/ - By Nate Soares. A modified transcript of the talk he gave at Google on the problem of AI alignment.

http://lukemuehlhauser.com/monkey-classification-errors/ , http://lukemuehlhauser.com/adversarial-examples-for-pigeons/ - Adversarial examples for monkeys and pigeons respectively.

https://intelligence.org/2017/05/10/may-2017-newsletter/ - Research updates, MIRI hiring, General news links about AI

https://intelligence.org/2017/04/12/ensuring/ - Nate Soares gives a talk at Google about "Ensuring smarter-than-human intelligence has a positive outcome". An outline of the talk is included.

https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/ - An extended discussion of Soares's latest paper "Cheating Death in Damascus".

**Research**

https://everythingstudies.wordpress.com/2017/05/12/the-eurovision-song-contest-taste-landscape/ - Analysis of voting patterns in the Eurovision Song Contest. Alliances and voting blocs are analyzed in depth.

https://srconstantin.wordpress.com/2017/05/12/do-pineal-gland-extracts-promote-longevity-well-maybe/ - Analysis of hormonal systems and their effect on metabolism and longevity.

https://acesounderglass.com/2017/05/11/an-opportunity-to-throw-money-at-the-problem-of-medical-science/ - Help crowdfund a randomized controlled trial. A promising sepsis treatment needs an RCT, but the method is very cheap and unpatentable, so there is no financial incentive for a company to fund the study.

https://randomcriticalanalysis.wordpress.com/2017/05/09/towards-a-general-factor-of-consumption/ - Factor Analysis leads to a general factor of consumption. Discussion of the data and analysis of the model. Very thorough.

https://randomcriticalanalysis.wordpress.com/2017/04/13/disposable-income-also-explains-us-health-expenditures-quite-well/ - Long Article, lots of graphs. "I argued consumption, specifically Actual Individual Consumption, is an exceptionally strong predictor of national health expenditures (NHE) and largely explains high US health expenditures.  I found AIC to be a much more robust predictor of NHE than GDP... I think it useful to also demonstrate these patterns as it relates to household disposable income"

https://randomcriticalanalysis.wordpress.com/2017/04/15/some-useful-data-on-the-dispersion-characteristics-of-us-health-expenditures/ - US Health spending is highly concentrated in a small fraction of the population. Is this true for other countries?

https://randomcriticalanalysis.wordpress.com/2017/04/17/on-popular-health-utilization-metrics/ - An extremely graph dense article responding to a widely cited paper claiming that "high utilization cannot explain high US health expenditures."

https://randomcriticalanalysis.wordpress.com/2017/04/28/health-consumption-and-household-disposable-income-outside-of-the-oecd/ - Another part in the series on healthcare expenses. Extending the analysis to non-OECD countries. Lots of graphs.

https://srconstantin.wordpress.com/2017/04/12/parenting-and-heritability-overview/ - Detailed literature review on heritability and what parenting can affect. A significant number of references are included.

https://nintil.com/2017/04/23/links-7/ - Psychology, Economics, Philosophy, AI

http://lesswrong.com/r/discussion/lw/ox8/unstaging_developmental_psychology/ - A mathematical model of stages of psychological development. The linked technical paper is very impressive. Starting from an abstract theory the authors managed to create a psychological theory that was concrete enough to apply in practice.

**Math and CS**

http://andrewgelman.com/2017/05/10/everybody-lies-seth-stevens-davidowitz/ - A fairly positive review of Seth's book on learning from data.

http://eli.thegreenplace.net/2017/adventures-in-jit-compilation-part-4-in-python/ - Writing a JIT compiler in Python. Discusses both using native Python code and the PeachPy library. Performance considerations are explicitly not discussed.

http://eli.thegreenplace.net/2017/book-review-essentials-of-programming-languages-by-d-friedman-and-m-wand/ - Short review. "This book is a detailed overview of some fundamental ideas in the design of programming languages. It teaches by presenting toy languages that demonstrate these ideas, with a full interpreter for every language"

http://eli.thegreenplace.net/2017/adventures-in-jit-compilation-part-3-llvm/ - LLVM can dramatically speed up straightforward source code.

http://www.scottaaronson.com/blog/?p=3221 - Machine Learning, Quantum Mechanics, Google Calendar

**Politics and Economics**

http://noahpinionblog.blogspot.com/2017/04/ricardo-reis-defends-macro_13.html - Macro is defended from a number of common criticisms. A large number of modern papers are cited (including 8 job market papers). Some addressed criticisms include: Macro relies on representative agents, Macro ignores inequality, Macro ignores finance and Macro ignores data and focuses mainly on theory.

http://econlog.econlib.org/archives/2017/04/economic_system.html - What are the fundamental questions an economic system must answer?

http://andrewgelman.com/2017/04/18/reputational-incentives-post-publication-review-two-partial-solutions-misinformation-problem/ - Gelman gives a list of important erroneous analyses in the news and scientific journals. He then considers whether negative reputational incentives or post-publication peer review will solve the problem.

https://srconstantin.wordpress.com/2017/05/09/how-much-work-is-real/ - What fraction of jobs are genuinely productive?

https://hivewired.wordpress.com/2017/05/06/yes-this-is-a-hill-worth-dying-on/ - The Nazis were human too. Even if a hill is worth dying on, it's probably not worth killing for. Discussion of good universal norms. [Culture War]

https://srconstantin.wordpress.com/2017/05/09/chronic-fatigue-syndrome/ - Literature Analysis on Chronic Fatigue Syndrome. Extremely thorough.

https://www.gwern.net/newsletter/2017/04 - A month's worth of links: AI, recent evolution, heritability and other topics.

https://thingofthings.wordpress.com/2017/05/05/the-cluster-structure-of-genderspace/ - For many traits the bell curves for men and women are quite close. Visualizations of Cohen's D. Discussion of trans specific medical interventions.

https://www.jefftk.com/p/replace-infrastructure-wholesale - Can you just dig up a city and replace all the infrastructure in a week?

https://thingofthings.wordpress.com/2017/04/19/deradicalizing-the-romanceless/ - Ozy discusses the problem of (male) involuntary celibacy.

http://noahpinionblog.blogspot.com/2017/04/the-siren-song-of-homogeneity.html - The alt-right is about racial homogeneity. Smith reviews the data studying whether a homogeneous society increases trust and social capital. Smith discusses Japanese culture and his time in Japan. Smith considers the arbitrariness of racial categories despite admitting that race has a biological reality. Smith flips around some alt-right slogans. [Extremely high quality engagement with opposing ideas. Culture War]

https://thezvi.wordpress.com/2017/04/16/united-we-blame/ - A list of articles about United, Zvi's thoughts on United, general ideas about airlines.

http://noahpinionblog.blogspot.com/2017/04/why-101-model-doesnt-work-for-labor.html - Noah Smith gives many reasons why the simple supply/demand model can't work for labor economics.

https://thingofthings.wordpress.com/2017/04/14/concerning-archive-of-our-own/ - Ozy defends the moderation policy of the fanfiction archive AO3. [Culture War]

https://thingofthings.wordpress.com/2017/04/13/fantasies-are-okay/ - When are fantasies ok? What about sexual fantasies? [Culture War]

https://srconstantin.wordpress.com/2017/04/25/on-drama/ - Ritual, The Psychology of Adolf Hitler, the dangerous emotion of High Drama, The Rite of Spring.

https://qualiacomputing.com/2017/04/26/psychedelic-science-2017-take-aways-impressions-and-whats-next/ - Notes on the 2017 Psychedelic Science conference.

**Amusement**

http://kajsotala.fi/2017/04/fixing-the-4x-end-game-boringness-by-simulating-legibility/ - "4X games (e.g. Civilization, Master of Orion) have a well-known problem where, once you get sufficiently far ahead, you’ve basically already won and the game stops being very interesting."

https://putanumonit.com/2017/05/12/dark-fiction/ - Jacob does some Kabbalistic analysis on the story of Jacob, Unsong-style.

https://protokol2020.wordpress.com/2017/04/30/several-big-numbers-to-sort/ - 12 amusing definitions of big numbers.

http://existentialcomics.com/comic/183 - The Life of Francis

http://existentialcomics.com/comic/181 - A Presocratic Get Together.

https://protokol2020.wordpress.com/2017/05/07/problem-with-perspective/ - A 3D geometry problem.

http://existentialcomics.com/comic/184 - Wittgenstein in the Great War

http://existentialcomics.com/comic/182 - Captain Metaphysics and the Postmodern Peril

**Adjacent**

https://medium.com/@freddiedeboer/conservatives-are-wrong-about-everything-except-predicting-their-own-place-in-the-culture-e5c036fdcdc5 - Conservatives correctly predicted the effects of gay acceptance and no fault divorce. They have also been proven correct about liberal bias in academia and the media. [Culture War]

https://medium.com/@freddiedeboer/franchises-that-are-appropriate-for-children-are-inherently-limited-in-scope-8170e76a16e2 - Superhero movies have an intended audience that includes children. This drastically limits what techniques they can use and themes they can explore. Freddie goes into the details.

https://fredrikdeboer.com/2017/05/11/study-of-the-week-rebutting-academically-adrift-with-its-own-mechanism/ - Freddie wrote his dissertation on the College Learning Assessment, the primary source in "Academically Adrift".

https://medium.com/@freddiedeboer/politics-as-politics-12ab43429e64 - Politics as “group affiliation” vs politics as politics. Annoying atheists aren’t as bad as fundamentalist Christians even if more annoying atheists exist in educated leftwing spaces. Freddie’s clash with the identitarian left despite huge agreement on the object level. Freddie is a socialist not a liberal. [Culture War]

https://www.ribbonfarm.com/2017/05/09/priest-guru-nerd-king/ - Facebook, Governance, Doctrine, Strategy, Tactics and Operations. Fairly short post for Ribbonfarm.

https://fredrikdeboer.com/2017/05/09/lets-take-a-deep-dive-into-that-times-article-on-school-choice/ - A critique of the problems in the Time's well cited article on school choice. Points out issues with selection bias, lack of theory and the fact that "not everyone can be average".

http://marginalrevolution.com/marginalrevolution/2017/05/conversation-garry-kasparov.html - "We talked about AI, his new book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, why he has become more optimistic, how education will have to adjust to smart software, Russian history and Putin, his favorites in Russian and American literature, Tarkovsky..."

http://econlog.econlib.org/archives/2017/04/iq_with_conscie.html - "My fellow IQ realists are, on average, a scary bunch.  People who vocally defend the power of IQ are vastly more likely than normal people to advocate extreme human rights violations." There are interesting comments here: https://redd.it/6697sh.

http://econlog.econlib.org/archives/2017/04/iq_with_conscie_1.html - Short follow-up to the above article.

http://marginalrevolution.com/marginalrevolution/2017/04/what-would-people-do-if-they-had-superpowers.html - Link to a paper showing 94% of people said they would use superpowers selfishly.

http://waitbutwhy.com/2017/04/neuralink.html - Elon Musk wants to build a wizard hat for the brain. Lots of details on the science behind Neuralink.

http://marginalrevolution.com/marginalrevolution/2017/04/dont-people-care-economic-inequality.html - Most Americans don’t mind inequality nearly as much as pundits and academics suggest.

http://marginalrevolution.com/marginalrevolution/2017/04/two-rationality-tests.html - What would you ask to determine if someone is rational? What would Tyler ask?

http://tim.blog/2017/05/04/exploring-smart-drugs-fasting-and-fat-loss-dr-rhonda-patrick/ - “Avoiding all stress isn’t the answer to fighting aging; it’s about building resiliency to environmental stress.”

http://wakingup.libsyn.com/what-should-we-eat - "Sam Harris speaks with Gary Taubes about his career as a science journalist, the difficulty of studying nutrition and public health scientifically, the growing epidemics of obesity and diabetes, the role of hormones in weight gain, the controversies surrounding his work, and other topics."

http://www.econtalk.org/archives/2017/05/jennifer_pahlka.html - Code for America. Bringing technology into the government sector.

http://heterodoxacademy.org/resources/viewpoint-diversity-experience/ - A six-step process for appreciating viewpoint diversity. I am not sure this site will be the most useful to rationalists on the object level, but it's interesting to see what Haidt came up with.

http://www.econtalk.org/archives/2017/04/elizabeth_pape.html - Elizabeth Pape on Manufacturing and Selling Women's Clothing and Elizabeth Suzann.

http://www.mrmoneymustache.com/2017/04/25/there-are-no-guarantees/ - Avoid Contracts. Don't work another year "just in case".

http://marginalrevolution.com/marginalrevolution/2017/04/saturday-assorted-links-109.html - Assorted Links on politics, Derrida, Shaolin Monks.

http://econlog.econlib.org/archives/2017/04/earth_20.html - Bryan Caplan was a guest on Freakonomics Radio. The topic was "Earth 2.0: Is Income Inequality Inevitable?".

https://www.ribbonfarm.com/2017/04/18/entrepreneurship-is-metaphysical-labor/ - Metaphysics as Intellectual Ergonomics. Entrepreneurship is Applied Metaphysics.

https://www.ribbonfarm.com/2017/04/13/idiots-scaring-themselves-in-the-dark/ - Getting Lost. "The uncanny. This is the emotion of eeriness, spookiness, creepiness"

**Podcast**

http://rationallyspeakingpodcast.org/show/rs-182-spencer-greenberg-on-how-online-research-can-be-faste.html - Podcast. Spencer Greenberg on "How online research can be faster, better, and more useful".

https://medium.com/conversations-with-tyler/patrick-collison-stripe-podcast-tyler-cowen-books-3e43cfe42d10 - Patrick Collison, co-founder of Stripe, interviews Tyler.

http://tim.blog/2017/04/11/cory-booker/ - Podcast with US Senator Cory Booker. "Street Fights, 10-Day Hunger Strikes, and Creative Problem-Solving"

http://econlog.econlib.org/archives/2017/04/the_undermotiva_1.html - Two case studies on libertarians who changed their views for bad reasons.

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/death-sex-and-moneys-anna-sale-on-bringing-empathy-to-politics-50101701 - Interview with the host of the WNYC podcast Death, Sex, and Money.

http://marginalrevolution.com/marginalrevolution/2017/05/econtalk-podcast-russ-roberts-complacent-class.html - "Cowen argues that the United States has become complacent and the result is a loss of dynamism in the economy and in American life, generally. Cowen provides a rich mix of data, speculation, and creativity in support of his claims."

http://tim.blog/2017/04/16/marie-kondo/ - Podcast. "Marie Kondo is a Japanese organizing consultant, author, and entrepreneur."

http://www.econtalk.org/archives/2017/04/rana_foroohar_o.html - Podcast. Rana Foroohar on the Financial Sector and Makers and Takers

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/cal-newport-on-doing-deep-work-and-escaping-social-media-49878016 - Cal Newport on doing Deep Work and escaping social media.

https://www.samharris.org/podcast/item/forbidden-knowledge - Podcast with Charles Murray. Controversy over The Bell Curve, the validity and significance of IQ as a measure of intelligence, the problem of social stratification, the rise of Trump. [Culture War]

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/elizabeth-warren-on-what-barack-obama-got-wrong-49949167 - Ezra Klein Podcast with Elizabeth Warren.

http://marginalrevolution.com/marginalrevolution/2017/04/stubborn-attachments-podcast-ft-alphaville.html - Podcast with Tyler Cowen on Stubborn Attachments. "I outline a true and objectively valid case for a free and prosperous society, and consider the importance of economic growth for political philosophy, how and why the political spectrum should be reconfigured, how we should think about existential risk, what is right and wrong in Parfit and Nozick and Singer and effective altruism, how to get around the Arrow Impossibility Theorem, to what extent individual rights can be absolute, how much to discount the future, when redistribution is justified, whether we must be agnostic about the distant future, and most of all why we need to “think big.”"

http://www.themoneyillusion.com/?p=32435 - Notes on three podcasts. Faster RGDP growth, monetary policy, Tyler Cowen's philosophical views.

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/vc-bill-gurley-on-transforming-health-care-50030526 - A conversation about which healthcare systems are possible in the USA and the future of Obamacare.

https://www.currentaffairs.org/2017/05/campus-politics-and-the-administrative-mind - The nature of college bureaucracy. Focuses on protests and Title IX. [Culture War]

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/cory-booker-returns-live-to-talk-trust-trump-and-basic-incomes-50054271 - "Booker and I dig into America’s crisis of trust. Faith in both political figures and political institutions has plummeted in recent decades, and the product is, among other things, Trump’s presidency. So what does Booker think can be done about it?"

http://tim.blog/2017/04/22/dorian-yates/ - Bodybuilding Champion. High Intensity Training, Injury Prevention, and Building Maximum Muscle.

Soft Skills for Running Meetups for Beginners

17 Raemon 06 May 2017 04:04PM

Having a vibrant, local Less Wrong or EA community is really valuable, but at least in my experience, it tends to end up falling to the same couple of people, and if one or all of those people get burned out or get interested in another project, the meetup can fall apart. Meanwhile, there are often people who are interested in contributing but feel intimidated by the idea.

In the words of Zvi: You're good enough, you're smart enough and people would like you. (Or more explicitly, "assume you are one level higher than you think you are.")

This is an email I sent to the local NYC mailing list trying to break down some of the soft skills and rules of thumb that I'd acquired over the years, to make running a meetup less intimidating. I acquired these gradually over several years. You don't need all the skills/concepts at once to run a meetup, but having at least some of them will help a lot.

These are arranged roughly in order of "How much public speaking or 'interact with strangers' skill they require."

Look for opportunities to help in any fashion 

First, if public speaking stuff is intimidating, there're many things you can do that don't require much at all. Some examples:

  • Sending an email reminder to the group for people to pick a meetup topic each week (otherwise people may forget until the last minute)
  • Bringing food, or interesting toys to add some fun things to do before or after the official meetup starts.
  • Helping out with tech (i.e. setting up projectors, printing out things in advance that need printing out in advance)
  • Take notes during interesting discussions (i.e. start up a google doc, announce that you're taking notes so people can specify if particular things should be off the record, and then post that google doc to whatever mailing list or internet group your community uses to organize)

Running Game Nights

If you're not comfortable giving a presentation or facilitating a conversation (two of the most common types of meetups), a fairly simple option is to run a game night. Find an interesting board game or card game, pitch it to the group, see if people are interested. (I recommend asking for a firm RSVP for this one, to make sure you have enough people.)

Having an explicit activity can take some of the edge off of "talking in public."


Giving a short (3.5 minute) lightning talk

Sometimes, a simple meet-and-greet meetup with freeform socializing is fine. These can get a bit boring if they're the only thing you do - hanging out and talking is often the primary goal, but it's useful to have a reason to come out this particular week.

A short lightning talk at the beginning of a meetup can spark interesting conversation. A meetup description like "So and so will be giving a short talk on X, followed by freeform discussion" can go a long way. I've heard reports that even a thoroughly mediocre lightning talk can still add a lot of value over "generic meet and greet."

(note: experimentation has apparently revealed that 3.5 minute talks are far superior to 5 minute talks, which tend to end up floundering slightly)


Facilitate OTHER people giving lightning talks

Don't have things to say yourself at all? That's okay! One of the surprisingly simple and effective forms of meetups is a mini-unconference where people just share thoughts on a particular topic, followed by some questions and discussions.

In this case, the main skill you need to acquire is the skill of "being able to cut people off when they've talked too much." 


Build the skill of Presence/Charisma/Public Speaking

Plenty of people have written about this. I recommend The Charisma Myth. I also recommend, if you've never tried it, doing the sort of exercise where you go up to random people on the street and try to talk to them. (Don't try to talk to them too much if they're not interested, but don't stress out about accidentally weirding people out. As long as you don't follow people around or talk to people in a place where they're naturally trapped with you, like a subway car, you'll be fine)

The goal is just exposure therapy for "OMG I'm talking to a person and don't know what to say", until it no longer feels scary. If the very idea of thinking about that fills you with anxiety, you can start with extremely tame goals like "smile at one stranger on your way home".

I did this for several years, sometimes in a "learn to talk to girls" sense and sometimes in a general "talk to strangers" sense, and it was really valuable.

Once you're comfortable talking at all, start paying attention to higher level stuff like "don't talk too fast, make eye contact, etc."


Think about things until you have some interesting ideas worth chatting about 

Maybe a formal presentation is scary, but a low-pressure conversation feels doable. A perfectly good meetup is "let's talk about this interesting idea I've been thinking about." You can think through ideas on your commute to work, lunch break etc, so that when it comes time to talk about it, you can just treat it like a normal conversation. (This may be similar to giving a lightning talk followed by freeform discussion, but with a bit less of an explicit talk at the beginning and a bit more structure to the discussion afterwards)

The trick is to not just think about interesting concepts, but to:


Help other people talk 

Unless you are giving a formal presentation, your job as a meetup facilitator isn't actually to talk at people, it's to get them to talk to each other. It's useful for you to have ideas, but mostly insofar as those ideas prompt interesting questions that you can ask other people, giving them the opportunity to think or to share their experiences.

 - you will probably need to "prime the pump" of discussion, starting with an explanation for why the idea seems important to think about in the first place, building up some excitement for the idea and giving people a chance to mull it over.

 - if you see someone who looks like they've been thinking, and maybe want to talk but are shy, explicitly say "hey, looks like you maybe had an idea - did you have anything you wanted to share?" (don't put too much pressure on them if the answer is "no not really.")

 - if someone is going on too long and you notice your attention or anyone else's face start to wander...


Knowing when to interrupt 

In general, try *not* to interrupt other people, but it will sometimes be necessary if people are getting off track, or if one person's been going on too long. Doing this gracefully is a skill I can't claim to have mastered, but I think it's better to be able to do it at all than not. Some possibilities:

 - "Hey, sorry to interrupt but this sounds like a tangent, maybe we can come back to this later during the followup conversation?"

 - "Hey, just wanted to make sure some others got a chance to share their thoughts."


Have an Agenda

Sometimes you run out of things to say, and then aren't sure what to do next. No matter what form the meetup takes, have a series of items planned out so that if things start to flounder, you can say "alright, let's see what's next on the agenda", and then just move on to that.

If you're doing a presentation, this can be a series of things you want to remember to get to. If you're teaching a skill, it can be a few different exercises relating to the skill. If you're facilitating a discussion, a series of questions to ask.


Welcome Newcomers

They made nontrivial effort just to come out. They're hoping to find something interesting here. Talk to them during the "casual conversation pre-meetup" and try to get a sense of why they came, and if possible tailor the meetup to make sure those desires get met. If they aren't getting a chance to talk, make sure to direct the conversation to them at least once.


Not Giving a Fuck 

The first year that I ran meetups, I found it very stressful - I worried a lot about whether there was a meetup each week and whether it was good. Taking primary responsibility for that caused it to take up a semi-permanent slot in my working memory (or at least my subconscious mind), constantly running and worrying.

Then I had a year where I was just like "meh, screw it, I don't care", and didn't run meetups much at all.

Then I came back and approached it from a "I just want to have meetups as often as I can, do as good a job as I can, and if it ends up just being a somewhat awkward hangout, whatever it'll be fine." This helped tremendously.

I don't know if it's possible to skip to that part (probably not). But it's the end-goal.


More Specifically: Be Okay if People Don't Show Up

Sometimes you'll have a cool idea and you'll post it and... 1-2 people actually come. This can feel really bad. It is a thing that happens though, and it's okay, and learning how to cope with this is a key part of growing as an organizer. You should take note of when this happens and probably not do that exact sort of thing again, but it doesn't mean people don't like you; it means they either weren't interested in that particular topic or just happened to be busy that day.

(Some people may not trust me if I don't acknowledge it's at least possible that people actually just don't like you. It is. But I think it is way more likely that you didn't pitch the idea well, or build enough excitement beforehand, or that this particular idea just didn't work)

If you have an idea that is only worth doing if some critical mass of people attend, I recommend putting an explicit request "I will only do this meetup if X people enthusiastically say 'I really want to come to this and will make sure to attend.'"

It may be helpful to visualize in advance how you'll respond if 20+ people come and how you'll respond if 1-2 people come. (With the latter, aiming to have more personalized conversations rather than an "event" in the same fashion)

Building Excitement

Sometimes, people naturally think an idea is cool. A lot of the time, though, especially for weird/novel ideas, you will have to make them excited. Almost all of my offbeat ideas have required me to rally people, email them individually to check if they were coming, and talk about it in person a few times to get my own excitement to spread infectiously.

(For frame of reference, when I first pitched Solstice to the group, they were like "...really? Okay, I guess." And then I kept talking about it excitedly each week, sharing pieces of songs after the end of the formal meetup, demonstrating that I cared enough to put in a lot of work. I did similar things with the Hufflepuff Unconference)

This is especially important if you'll be putting a lot of effort into an experiment and you want to make sure it succeeds.

Step 1 - Be excited yourself. Find the kernel of an idea that seems high potential, even if it's hard to explain.

Step 2 - Put in a lot of work making sure you understand your idea and have things to say/do with it.

Step 3 - Share pieces of it in the aftermath of a previous meetup to see if people respond to it. If they don't respond at all, you may need to drop it. If at least 1 or 2 people respond with interest you can probably make it work but:

Step 4 - Email people individually. If you're comfortable enough with some people at the meetup, send them a private message saying "hey, would you be interested in this thing?" (People respond way more reliably to private messages than to a generic "hey guys, what do you think?" sent to the group.)

Step 5 - If people are waffling on whether the idea is exciting enough to come, say straightforwardly: I will do this if and only if X people respond enthusiastically about it. (And then if they don't, alas, let the event go)

Further Reading

I wrote this out, and then remembered that Kaj Sotala has written a really comprehensive guide to running meetups (37 pages long). If you want a lot more ideas and advice, I recommend checking it out.


There is No Akrasia

17 lifelonglearner 30 April 2017 03:33PM

I don’t think akrasia exists.


This is a fairly strong claim. I’m also not going to try and argue it.

 

What I’m really here to argue are the two weaker claims that:


a) Akrasia is often treated as a "thing" by people in the rationality community, and this can lead to problems, even though akrasia is a sorta-coherent concept.


b) If we want to move forward and solve the problems that fall under the akrasia-umbrella, it's better to taboo the term akrasia altogether and instead employ a more reductionist approach that favors specificity.


But that’s a lot less catchy, and I think we can 80/20 it with the statement that “akrasia doesn’t exist”, hence the title and the opening sentence.


First off, I do think that akrasia is a term that resonates with a lot of people. When I’ve described this concept to friends (n = 3), they’ve all had varying degrees of reactions along the lines of “Aha! This term perfectly encapsulates something I feel!” On LW, it seems to have garnered acceptance as a concept, evidenced by the posts / wiki on it.


It does seem, then, that this concept of “want-want vs want” or “being unable to do what you ‘want’ to do” seems to point at a phenomenologically real group of things in the world.


However, I think that this is actually bad.


Once people learn the term akrasia and what it represents, they can now pattern-match it to their own associated experiences. I think that, once you’ve reified akrasia, i.e. turned it into a “thing” inside your ontology, problems occur:


First off, treating akrasia as a real thing gives it additional weight and power over you:


Once you start to notice the patterns, it’s harder to see things again as mere apparent chaos. In the case of akrasia, I think this means that people may try less hard because they suddenly realize they’re in the grip of this terrible monster called akrasia.


I think this sort of worldview ends up reinforcing some unhelpful attitudes towards solving the problems akrasia represents. As an example, here are two paraphrased things I’ve overheard about akrasia which I think illustrate this. (Happy to remove these if you would prefer not to be mentioned.)


“Akrasia has mutant healing powers…Thus you can’t fight it, you can only keep switching tactics for a time until they stop working…”


“I have massive akrasia…so if you could just give me some more high-powered tools to defeat it, that’d be great…”  

 

Both of these quotes seem to have taken the akrasia hypothesis a little too far. As I’ll later argue, “akrasia” seems to be dealt with better when you see the problem as a collection of more isolated disparate failures of different parts of your ability to get things done, rather than as an umbrella term.


I think that the current akrasia framing actually makes the problem more intractable.


I see potential failure modes where people come into the community, hear about akrasia (and all the related scary stories of how hard it is to defeat), and end up using it as an excuse (perhaps not an explicit belief, but as an alief) that impacts their ability to do work.


This was certainly the case for me, where improved introspection and metacognition on certain patterns in my mental behaviors actually removed a lot of my willpower which had served me well in the past. I may be getting slightly tangential here, but my point is that giving people models, useful as they might be for things like classification, may not always be net-positive.


Having new things in your ontology can harm you.


So just giving people some of these patterns and saying, “Hey, all these pieces represent a Thing called akrasia that’s hard to defeat,” doesn’t seem like the best idea.


How can we make the akrasia problem more tractable, then?


I claimed earlier that akrasia does seem to be a real thing, as it seems to be relatable to many people. I think this may actually be because akrasia maps onto too many things. It's an umbrella term for lots of different problems in motivation and efficacy that could be quite disparate problems. The typical akrasia framing lumps problems like temporal discounting with motivation problems like internal disagreements or ugh fields, and more.

 

Those are all very different problems with very different-looking solutions!


In the above quotes about akrasia, I think that they're an example of having mixed up the class with its members. Instead of treating akrasia as an abstraction that unifies a class of self-imposed problems that share the property of acting as obstacles towards our goals, we treat it as a problem unto itself.


Saying you want to “solve akrasia” makes about as much sense as directly asking for ways to “solve cognitive bias”. Clearly, cognitive biases are merely a class for a wide range of errors our brains make in our thinking. The exercises you’d go through to solve overconfidence look very different than the ones you might use to solve scope neglect, for example.


Under this framing, I think we can be less surprised when there is no direct solution to fighting akrasia—because there isn’t one.


I think the solution here is to be specific about the problem you are currently facing. It’s easy to just say you “have akrasia” and feel the smooth comfort of a catch-all term that doesn’t provide much in the way of insight. It’s another thing to go deep into your ugly problem and actually, honestly say what the problem is.


The important thing here is to identify which subset of the huge akrasia-umbrella your individual problem falls under and try to solve that specific thing instead of throwing generalized “anti-akrasia” weapons at it.


Is your problem one of remembering to do tasks? Then set up a Getting Things Done system.


Is your problem one of hyperbolic discounting, of favoring short-term gains? Then figure out a way to recalibrate the way you weigh outcomes. Maybe look into precommitting to certain courses of action.


Is your problem one of insufficient motivation to pursue things in the first place? Then look into why you care in the first place. If it turns out you really don’t care, then don’t worry about it. Else, find ways to source more motivation.
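
(A background note I'm adding here, not something from the original post: the "hyperbolic discounting" mentioned above has a standard textbook formulation. If A is a reward's size and D is the delay before receiving it, an exponential discounter values it as

    V(A, D) = A \cdot \delta^D,      0 < \delta < 1

while a hyperbolic discounter values it as

    V(A, D) = \frac{A}{1 + kD},      k > 0.

The hyperbolic curve drops steeply near D = 0, which is what produces preference reversals: a plan that looks good from a distance flips once the tempting option is imminent, and that flip is exactly what precommitment is meant to block.)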


The basic (and obvious) technique I propose, then, looks like:


  1. Identify the akratic thing.

  2. Figure out what’s happening when this thing happens. Break it down into moving parts and how you’re reacting to the situation.

  3. Think of ways to solve those individual parts.

  4. Try solving them. See what happens.

  5. Iterate.


Potential questions to be asking yourself throughout this process:

  • What is causing your problem? (EX: Do you have the desire but just aren’t remembering? Are you lacking motivation?)

  • How does this akratic problem feel? (EX: What parts of yourself is your current approach doing a good job of satisfying? Which parts are not being satisfied?)

  • Is this really a problem? (EX: Do you actually want to do better? How realistic would it be to see the improvements you're expecting? How much better do you think you could be doing?)


Here’s an example of a reductionist approach I did:


“I suffer from akrasia.


More specifically, though, I suffer from a problem where I end up not actually having planned things out in advance. This leads me to do things like browse the internet without having a concrete plan of what I’d like to do next. In some ways, this feels good because I actually like having the novelty of a little unpredictability in life.


However, at the end of the day when I’m looking back at what I’ve done, I have a lot of regret over having not taken key opportunities to actually act on my goals. So it looks like I do care (or meta-care) about the things I do every day, but, in the moment, it can be hard to remember.”


Now that I’ve far more clearly laid out the problem above, it seems easier to see that the problem I need to deal with is a combination of:

  • Reminding myself of the stuff I would like to do (maybe via a schedule or to-do list).

  • Finding a way to shift my in-the-moment preferences a little more towards the things I’ve laid out (perhaps with a break that allows for some meditation).
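
(To make this concrete, here's a minimal sketch in Python of logging each pass through the identify / break down / try / iterate loop from earlier - purely illustrative, with made-up field names and numbers, not anything from the post itself:)

    # Illustrative sketch only: treat one specific "akratic" problem as a
    # series of small experiments, per the five-step loop above.
    experiments = []

    def log_iteration(problem, moving_parts, intervention, outcome):
        """Record one identify -> break down -> try -> review cycle."""
        experiments.append({
            "problem": problem,            # step 1: the specific thing, not "akrasia"
            "moving_parts": moving_parts,  # step 2: what's actually going on
            "intervention": intervention,  # steps 3-4: the fix being tried
            "outcome": outcome,            # step 5: what happened, feeding iteration
        })

    log_iteration(
        problem="browsing the internet with no concrete plan for what's next",
        moving_parts=["no plan made in advance", "novelty feels good in the moment"],
        intervention="write out the next day's to-do list each evening",
        outcome="followed it 3 of 5 days; less end-of-day regret",
    )

    for e in experiments:
        print(e["problem"], "->", e["outcome"])

(The point isn't the code, it's the shape: each concrete sub-problem gets its own hypothesis and its own fix.)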


I think that once you apply a reductionist viewpoint and specifically say exactly what it is that is causing your problems, the problem is already half-solved. (Having well-specified problems seems to be half the battle.)

 

Remember, there is no akrasia! There are only problems that have yet to be unpacked and solved!


Notes from the Hufflepuff Unconference (Part 1)

14 Raemon 23 May 2017 09:04PM

April 28th, we ran the Hufflepuff Unconference in Berkeley, at the MIRI/CFAR office common space.

There's room for improvement in how the Unconference could have been run, but it succeeded at the core things I wanted to accomplish: 

 - Established common knowledge of what problems people were actually interested in working on
 - We had several extensive discussions of some of those problems, with an eye towards building solutions
 - Several people agreed to work together towards concrete plans and experiments to make the community more friendly, as well as build skills relevant to community growth. (With deadlines and one person acting as project manager to make sure real progress was made)
 - We agreed to have a followup unconference in roughly three months, to discuss how those plans and experiments were going

Rough notes are available here. (Thanks to Miranda, Maia and Holden for taking really thorough notes)

This post will summarize some of the key takeaways, some speeches that were given, and my retrospective thoughts on how to approach things going forward.

But first, I'd like to cover a question that a lot of people have been asking about:

What does this all mean for people outside of the Bay?

The answer depends.

I'd personally like it if the overall rationality community got better at social skills, empathy, and working together, sticking with things that need sticking with (and in general, better at recognizing skills other than metacognition). In practice, individual communities can only change in the ways the people involved actually want to change, and there are other skills worth gaining that may be more important depending on your circumstances.

Does Project Hufflepuff make sense for your community?

If you're worried that your community doesn't have an interest in any of these things, my actual honest answer is that doing something "Project Hufflepuff-esque" probably does not make sense. I did not choose to do this because I thought it was the single-most-important thing in the abstract. I did it because it seemed important and I knew of a critical mass of people who I expected to want to work on it. 

If you're living in a sparsely populated area or haven't put a community together, the first steps do not look like this, they look more like putting yourself out there, posting a meetup on Less Wrong and just *trying things*, any things, to get something moving.

If you have enough of a community to step back and take stock of what kind of community you want and how to strategically get there, I think this sort of project can be worth learning from. Maybe you'll decide to tackle something Project-Hufflepuff-like, maybe you'll find something else to focus on. I think the most important thing is to have some kind of vision for something your community can do that is worth working together, and leveling up, to accomplish.

Community Unconferences as One Possible Tool

Community unconferences are a useful tool to get everyone on the same page and spur them on to start working on projects, and you might consider doing something similar. 

They may not be the right tool for you and your group - I think they're most useful in places where there's enough people in your community that they don't all know each other, but do have enough existing trust to get together and brainstorm ideas. 

If you have a sense that Project Hufflepuff is worthwhile for your community but the above disclaimers point towards my current approach not making sense for you, I'm interested in talking about it with you, but the conversation will look less like "Ray has ideas for you to try" and more like "Ray is interested in helping you figure out what ideas to try, and the solution will probably look very different."

Online Spaces

Since I'm actually very uncertain about a lot of this and see it as an experiment, I don't think it makes sense to push for any of the ideas here to directly change Less Wrong itself (at least, yet). But I do think a lot of these concepts translate to online spaces in some fashion, and I think it'd make sense to try out some concepts inspired by this in various smaller online subcommunities.

Table of Contents:

I. Introduction Speech

 - Why are we here?
 - The Mission: Something To Protect
 - The Invisible Badger, or "What The Hell Is a Hufflepuff?"
 - Meta Meetups Usually Suck. Let's Try Not To.

II. Common Knowledge

 - What Do People Actually Want?
 - Lightning Talks

III. Discussing the Problem (Four breakout sessions)

 - Welcoming Newcomers
 - How to handle people who impose costs on others?
 - Styles of Leadership and Running Events
 - Making Helping Fun (or at least lower barrier-to-entry)

IV. Planning Solutions and Next Actions

V. Final Words

I. Introduction: It Takes A Village to Save a World

(A more polished version of my opening speech from the unconference)

[Epistemic Status: This is largely based on intuition, looking at what our community has done and what other communities seem to be able to do. I'm maybe 85% confident in it, but it is my best guess]

In 2012, I got super into the rationality community in New York. I was surrounded by people passionate about thinking better and using that thinking to tackle ambitious projects. And in 2012 we all decided to take on really hard projects that were pretty likely to fail, because the expected value seemed high, and it seemed like even if we failed we'd learn a lot in the process and grow stronger.

That happened - we learned and grew. We became adults together, founding companies and nonprofits and creating holidays from scratch.

But two years later, our projects were either actively failing, or burning us out. Many of us became depressed and demoralized.

There was nobody who was okay enough to actually provide anyone emotional support. Our core community withered.

I ended up making that the dominant theme of the 2014 NYC Solstice, with a call-to-action to get back to basics and take care of each other.

I also went to the Berkeley Solstice that year. And... I dunno. In the back of my mind I was assuming "Berkeley won't have that problem - the Bay area has so many people, I can't even imagine how awesome and thriving a community they must have." (Especially since the Bay kept stealing all the Movers and Shakers of NYC).

The theme of the Bay Solstice turned out to be "Hey guys, so people keep coming to the Bay, running on a dream and a promise of community, but that community is not actually there, there's a tiny number of well-connected people who everyone is trying to get time with, and everyone seems lonely and sad. And we don't even know what to do about this."

In 2015, the Berkeley Solstice revisited that theme.

So I think that was the initial seed of what would become Project Hufflepuff - noticing that it's not enough to take on cool projects, that it's not enough to just get a bunch of people together and call it a community. Community is something you actively tend to. Insofar as Maslow's hierarchy is real, it's a foundation you need before ambitious projects can be sustainable.

There are other pieces of the puzzle - different lenses that, I believe, point towards a Central Thing. Some examples:

Group houses, individualism and coordination.

I've seen several group houses where, when people decide it no longer makes sense to live in the house, they... just kinda leave. Even if they've literally signed a lease. And everyone involved (the person leaving and those who remain) instinctively acts as if it's the remaining people's job to fill the leaver's spot, to make rent.

And the first time, this is kind of okay. But then each subsequent person leaving adds to a stressful undertone of "OMG are we even going to be able to afford to live here?". It eventually becomes depressing, and snowballs into a pit that makes newcomers feel like they don't WANT to move into the house.

Nowadays I've seen some people explicitly building into the roommate agreement a clear expectation of how long you stay and whose responsibility it is to find new roommates and pay rent in the meantime. But it's disappointing to me that this is something we needed - that we weren't instinctively paying attention to how we were imposing costs on each other in the first place. That when we *violated a written contract*, let alone a handshake agreement, we did not take it upon ourselves (or hold each other accountable) to ensure we filled our end of the bargain.

Friends, and Networking your way to the center

This community puts pressure on people to improve. It's easier to improve when you're surrounded by ambitious people who help or inspire each other level up. There's a sense that there's some cluster of cool-people-who-are-ambitious-and-smart who've been here for a while, and... it seems like everyone is trying to be friends with those people. 

It also seems like people just don't quite get that friendship is a skill, that adult friendships in City Culture can be hard, and it can require special effort to make them happen.

I'm not entirely sure what's going on here - it doesn't make sense to say anyone's obligated to hang out with any particular person (or obligated NOT to), but if 300 people aren't getting the connection they want it seems like *somewhere people are making a systematic mistake.* 

(Since the Unconference, Maia has tackled this particular issue in more detail)

 

The Mission - Something To Protect

 

As I see it, the Rationality Community has three things going on: Truth. Impact. And "Being People".

In some sense, our core focus is the practice of truthseeking. The thing that makes that truthseeking feel *important* is that it's connected to broader goals of impacting the world. And the thing that makes this actually fun and rewarding enough to stick with is a community that meets our needs, where we can both flourish as individuals and find the relationships we want.

I think we have made major strides in each of those areas over the past seven years. But we are nowhere near done.

Different people have different intuitions of which of the three are most important. Some see some of them as instrumental, others as terminal. There are people for whom Truthseeking is *the point*, and they'd have been doing that even if there wasn't a community to help them with it, and there are people for whom it's just one tool of many that helps them live their life better or plan important projects.

I've observed a tendency to argue about which of these things is most important, or what tradeoffs are worth making. Inclusiveness vs high standards. Truth vs action. Personal happiness vs high achievement.

I think that kind of argument is a mistake.

We are falling woefully short on all of these things. 

We need something like 10x our current capacity for seeing, and thinking. 10x our capacity for doing. 10x our capacity for *being healthy people together.*

I say "10x" not because all these things are intrinsically equal. The point is not to make a politically neutral push to make all the things sound nice. I have no idea exactly how far short we're falling on each of these because the targets are so far away I can't even see the end, and we are doing a complicated thing that doesn't have clear instructions and might not even be possible.

The point is that all of these are incredibly important, and if we cannot find a way to improve *all* of these, in a way that is *synergistic* with each other, then we will fail.

There is a thing at the center of our community. Not all of us share the exact same perspective on it. For some of us it's not the most important thing. But it's been at the heart of the community since the beginning and I feel comfortable asserting that it is the thing that shapes our culture the most:

The purpose of our community is to make sure this place is okay:

The world isn't okay right now, on a number of levels. And a lot of us believe there is a strong chance it could become dramatically less okay. I've seen people make credible progress on taking responsibility for pieces of our home. But when all is said and done, none of our current projects really give me the confidence that things are going to turn out all right. 

Our community was brought together on a promise, a dream, and we have not yet actually proven ourselves worthy of that dream. And to make that dream a reality we need a lot of things.

We need to be able to criticize, because without criticism, we cannot improve.

If we cannot, I believe we will fail.

We need to be able to talk about ideas that are controversial, or uncomfortable - otherwise our creativity and insight will be crippled.

If we cannot, I believe we will fail.

We need to be able to do those things without alienating people. We need to be able to criticize without making people feel untrusted and discouraged from even taking action. We need to be able to discuss challenging things while earnestly respecting the notion that *talking about ideas gives those ideas power and has concrete effects on social reality*, and sometimes that can hurt people.

If we cannot figure out how to do that, I believe we will fail.

We need more people who are able and willing to try things that have never been done before. To stick with those things long enough to *get good at them*, to see if they can actually work. We need to help each other do impossible things. And we need to remember to check for and do the *possible*, boring, everyday things that are in fact straightforward and simple and not very inspiring. 

If we cannot manage to do that, I believe we will fail.

We need to be able to talk concretely about what the *highest leverage actions in the world are*. We need to prioritize those things, because the world is huge and broken and we are small. I believe we need to help each other through a long journey, building bigger and bigger levers, building connections with people outside our community who are undertaking the same journey through different perspectives.

And in the process, we need to not make it feel like if *you cannot personally work on those highest leverage things, you are not important.*

There's the kind of importance where we recognize that some people have scarce skills and drive, and the kind of importance where we remember that *every* person has intrinsic worth, and you owe *nobody* any special skills or prestigious sounding projects for your life to be worthwhile.

This isn't just a philosophical matter - I think it's damaging to our mental health and our collective capacity. 

We need to recognize that the distribution of skills we tend to reward or punish is NOT just about which ones are actually most valuable - sometimes it is simply founder effects and blind spots.

We cannot be a community for everyone - I believe trying to include anyone with a passing interest in us is a fool's errand. But many people who had valuable skills to contribute have turned away, feeling frustrated and un-valued.

If we cannot find a way to accomplish all of these things at once, I believe we will fail.

The thesis of Project Hufflepuff is that it takes (at least) a village to save a world. 

It takes people doing experimental impossible things. It takes caretakers. It takes people helping out with unglorious tasks. It takes technical and emotional and physical skills. And while it does take some people who specialize in each of those things, I think it also needs many people who are at least a little bit good at each of them, to pitch in when needed.

Project Hufflepuff is not the only thing our community needs, nor the most important. But I believe it is one of the necessary things that our community needs, if we're to get to 10x our current Truthseeking, Impact and Human-ing.

If we're to make sure that our home is okay.

The Invisible Badger

"A lone hufflepuff surrounded by slytherins will surely wither as if being leeched dry by vampires."

- Duncan

[Epistemic Status: My evidence for this is largely based on discussions with a few people for whom the badger seems real and valuable, and who report things being different in other communities, as well as some of my general intuitions about society. I'm 75% sure the badger exists, 90% sure that it's worth leaning into the idea of the badger to see if it works for you, and maybe 55% sure that it's worth trying to see the badger if you can't already make out its edges.]


 

If I *had* to pick a clear thing that this conference is about without using Harry Potter jargon, I'd say "Interpersonal dynamics surrounding trust, and how those dynamics apply to each of the Impact/Truth/Human focuses of the rationality community."

I'm not super thrilled with that term because I think I'm grasping more for some kind of gestalt. An overall way of seeing and being that's hard to describe and that doesn't come naturally to the sort of person attracted to this community.

Much like the blind folk and the elephant, who each touched a different part of the animal and came away with a different impression (the trunk seems like a snake, the legs seem like a tree), I've been watching several people in the community try to describe things over the past few years. And maybe those things are separate but I feel like they're secretly a part of the same invisible badger.

Hufflepuff is about hard work, and loyalty, and camaraderie. It's about emotional intelligence. It's about seeing value in day to day things that don't directly tie into epic narratives. 

There's a bunch of skills that go into Hufflepuff. And part of what I want is for people to get better at those skills. But I think there's a mindset, an approach, fairly different from the typical rationalist mindset, that makes those skills easier. It's something that's harder when you're being rigorously utilitarian and building models of the world out of game theory and incentives.

Mindspace is deep and wide, and I don't expect that mindset to work for everyone. I don't think everyone should be a Hufflepuff. But I do think it'd be valuable to the community if more people at least had access to this mindset and more of these skills.

So what I'd like, for tonight, is for people to lean into this idea. Maybe in the end you'll find that this doesn't work for you. But I think many people's first instinct is going to be that this is alien and uncomfortable and I think it's worth trying to push past that.

The reason we're doing this conference together is that the Hufflepuff way doesn't really work if people are trying to do it alone - it requires trust and camaraderie and persistence. We can't have all of that required trust at once, but if there are multiple people trying to make this work, who can incrementally trust each other more, I think we can reach a place where things run more smoothly, where we have stronger emotional connections, and where we trust each other enough to take on more ambitious projects than we could if we were all optimizing as individuals.

Meta-Meetups Suck. Let's Not.

This unconference is pretty meta - we're talking about norms and vague community stuff we want to change.

Let me tell you, meta meetups are the worst. Typically you end up going around in circles complaining and wishing there were more things happening and that people were stepping up and maybe if you're lucky you get a wave of enthusiasm that lasts a month or so and a couple things happen but nothing really *changes*.

So. Let's not do that. Here's what I want to accomplish and which seems achievable:

1) Establish common knowledge of important ideas and behavior patterns. 

Sometimes you DON'T need to develop a whole new skill, you just need to notice that your actions are impacting people in a different way, and maybe that's enough for you to decide to change some things. Or maybe someone has a concept that makes it a lot easier for you to start gaining a new skill on your own.

2) Establish common knowledge of who's interested in trying which new norms, or which new skills. 

We don't actually *know* what the majority of people want here. I can sit here and tell you what *I* think you should want, but ultimately what matters is what things a critical mass of people want to talk about tonight.

Not everyone has to agree that an idea is good to try it out. But there's a lot of skills or norms that only really make sense when a critical mass of other people are trying them. So, maybe of the 40 people here, 25 people are interested in improving their empathy, and maybe another 20 are interested in actively working on friendship skills, or sticking to commitments. Maybe those people can help reinforce each other.

3) Explore ideas for social and skillbuilding experiments we can try, that might help. 

The failure mode of Ravenclaws is to think about things a lot and then not actually get around to doing them. A failure mode of ambitious Ravenclaws is to think about things a lot, then do them, and then assume that because they're smart they've thought of everything - and then not listen to feedback when they get things subtly or majorly wrong.

I'd like us to end by thinking of experiments with new norms, or habits we'd like to cultivate. I want us to frame these as experiments, that we try on a smaller scale and maybe promote more if they seem to be working, while keeping in mind that they may not work for everyone.

4) Commit to actions to take.

Since the default outcome is for plans like these to peter out and fail, I'd like us to spend time bulletproofing them, brainstorming and coming up with trigger-action plans so that they actually have a chance to succeed.

Tabooing "Hufflepuff"

Having said all that about The Hufflepuff Way...

...the fact is, much of the reason I've used those words is to paint a rough picture to attract the sort of person I wanted to attract to this unconference.

It's important that there's a fuzzy, hard-to-define-but-probably-real concept that we're grasping towards, but it's also important not to be talking past each other. Early on in this project I realized that a few people who I thought were on the same page actually meant fairly different things. Some cared more about empathy and friendship. Some cared more about doing things together, and expected deep friendships to arise naturally from that.

So I'd like us to establish a trigger-action-plan right now - for the rest of this unconference, if someone says "Hufflepuff", y'all should say "What do you mean by that?" and then figure out whatever concrete thing you're actually trying to talk about.

II. Common Knowledge

The first part of the unconference was about sharing our current goals, concerns and background knowledge that seemed useful. Most of the specifics are covered in the notes. But I'll talk here about why I included the things I did and what my takeaways were afterwards on how it worked.

Time to Think

The first thing I did was have people sit and think about what they actually wanted to get out of the conference, and what obstacles they could imagine getting in the way of that. I did this because often, I think our culture (ostensibly about helping us think better) doesn't give us time to think, and instead lets people who are quick-witted and conversationally dominant end up doing most of the talking. (I wrote a post about this a year ago, the 12 Second Rule). In this case I gave everyone 5 minutes, which is something I've found helpful at small meetups in NYC.

This had mixed results - some people reported that while they can think well by themselves, in a group setting they find it intimidating and their mind starts wandering instead of getting anything done. They found it much more helpful when I eventually let people-who-preferred-to-talk-to-each-other go into another room to talk through their ideas out loud.

I think there's some benefit to both halves of this, and I'm not sure how common each set of preferences is. It's certainly not common for conferences to give people a full 5 minutes to think, so I'd expect it to feel somewhat uncomfortable regardless of whether it was useful.

But an overall outcome of the unconference was that it was somewhat lower energy than I'd wanted, and opening with 5 minutes of silent thinking seemed to contribute to that, so for the next unconference I run, I'm leaning towards a shorter period of time for private thinking (Somewhere between 12 and 60 seconds), followed by "turn to your neighbors and talk through the ideas you have", followed by "each group shares their concepts with the room."

"What is do you want to improve on? What is something you could use help with?"

I wanted people to feel like active participants rather than passive observers, and I didn't want people to just think "it'd be great if other people did X", but to keep an internal locus of control - what can *I* do to steer this community better? I also didn't want people to be thinking entirely individualistically.

I didn't collect feedback on this specific part and am not sure how valuable others found it (if you were at the conference, I'd be interested if you left any thoughts in the comments). Some anonymized things people described:

  • When I make social mistakes, consider it failure; this is unhelpful

  • Help point out what they need help with

  • Have severe akrasia, would like more “get things done” magic tools

  • Getting to know the bay area rationalist community

  • General bitterness/burned out

  • Reduce insecurity/fear around sharing

  • Avoiding spending most words signaling to have read a particular thing; want to communicate more clearly

  • Creating systems that reinforce unnoticed good behaviour

  • Would like to learn how to try at things

  • Find place in rationalist community

  • Staying connected with the group

  • Paying attention to what they want in the moment, in particular when it’s right to not be persistent

  • Would like to know the “landing points” to the community to meet & greet new people

  • Become more approachable, & be more willing to approach others for help; community cohesiveness

  • Have been lonely most of life; want to find a place in a really good healthy community

  • Re: prosocialness, being too low on Maslow’s hierarchy to help others

  • Abundance mindset & not stressing about how to pay rent

  • Cultivate stance of being able to do helpful things (action stance) but also be able to notice difference between laziness and mental health

  • Don’t know how to respect legit safety needs w/o getting overwhelmed by arbitrary preferences; would like to model people better to give them basic respect w/o having to do arbitrary amount of work

  • Starting conversations with new people

  • More rationalist group homes / baugruppe

  • Being able to provide emotional support rather than just logistics help

  • Reaching out to people at all without putting too much pressure on them

  • Cultivate lifelong friendships that aren’t limited to particular time and place

  • Have a block around asking for help bc doesn’t expect to reciprocate; would like to actually just pay people for help w stuff

  • Want to become more involved in the community

  • Learn how to teach other people “ops skills”

  • Connections to people they can teach and who can teach them

Lightning Talks

Lightning talks are a great way to give people an opportunity to not just share ideas, but get some practice at public presentation (which I've found can be a great gateway tool for overall confidence and ability to get things done in the community). Traditionally they are 5 minutes long. CFAR has found that 3.5 minute lightning talks are better than 5 minute talks, because it cuts out some rambling and tangents.

It turned out we had more people than I'd originally planned time for, so we ended up switching to two-minute talks. I actually think this was even better, and my plan for next time is to do 1-minute timeslots but allow people to sign up for multiple if they think their talk requires it, so people default to giving something short and sweet.

Rough summaries of the lightning talks can be found in the notes.

III. Discussing the Problem

The next section involved two "breakout sessions" - two 20-minute periods for people to split into smaller groups and talk through problems in detail. This was done in a somewhat impromptu fashion, with people writing down the talks they wanted to do on the whiteboard and then arranging them so most people could go to a discussion that interested them.

The talks were:

 -  Welcoming Newcomers
 -  How to handle people who impose costs on others?
 -  Styles of Leadership and Running Events
 -  Making Helping Fun (or at least lower barrier-to-entry)
 -  Circling session 

There was a suggested discussion about outreach, which I asked to table for a future unconference. My reason was that outreach discussions tend to get extremely meta and seem to be an attractor (people end up focusing on how to bring more people into the community without actually making sure the community is good, and I wanted the unconference to focus on the latter.)

I spent some time drifting between sessions, and was generally impressed both with the practical focus each discussion had, as well as the way they were organically moderated.

Again, more details in the notes.

IV. Planning Solutions and Next Actions

After about an hour of discussion and mingling, we came back to the central common space to describe key highlights from each session, and begin making concrete plans. (Names are crediting people who suggested an idea and who volunteered to make it happen)

Creating Norms for Your Space (Jane Joyce, Tilia Bell)

The "How to handle people who impose costs on other" conversation ended up focusing on minor but repeated costs. One of the hardest things to moderate as an event host is not people who are actively disruptive, but people who just a little bit awkward or annoying - they'd often be happy to change their behavior if they got feedback, but giving feedback feels uncomfortable and it's hard to do in a tactful way. This presents two problems at once: parties/events/social-spaces end up a more awkward/annoying than they need to be, and often what happens is that rather than giving feedback, the hosts stop inviting people doing those minor things, which means a lot of people still working on their social skills end up living in fear of being excluded.

Solving this fully requires a few different things at once, and I'm not sure I have a clear picture of what it looks like, but one stepping stone people came up with was creating explicit norms for a given space, and a practice of reminding people of those norms in a low-key, nonjudgmental way.

I think this will require a lot of deliberate effort and practice on the part of hosts to avoid alternate bad outcomes like "the norms get disproportionately enforced on people the hosts like and applied unfairly to people they aren't close with". But I do think it's a step in the right direction to showcase what kind of space you're creating and what the expectations are.

Different spaces can be tailored for different types of people with different needs or goals. (I'll have more to say about this in an upcoming post - doing this right is really hard, I don't actually know of any groups that have done an especially good job of it.)

I *was* impressed with the degree to which everyone in the conversation seemed to be taking into account a lot of different perspectives at once, and looking for solutions that benefited as many people as possible.

Welcoming Committee (Mandy Souza, Tessa Alexanian)

Oftentimes at events you'll see people who are new, or who don't seem comfortable getting involved with the conversation. Many successful communities do a good job of explicitly welcoming those people. Some people at the unconference decided to put together a formal group for making sure this happens more.

The exact details are still under development, but the basic idea is to have a network of people who go to different events, playing the role of the welcomer - sort of an "Uber for welcomers" (i.e. it both provides a place for people running events to go to ask for help with welcoming, and a way for people who are interested in welcoming to find events that need welcomers).

It also included some ideas for better infrastructure, such as reviving "bayrationality.org" to make it easier for newcomers to figure out what events are going on (possibly including links to the codes of conduct for different spaces as well). In the meantime, one simple change was the introduction of a Facebook group for Bay Area Rationalist Social Events.

Softskill-sharing Groups (Mike Plotz and Jonathan Wallis)

The leadership styles discussion led to the concept that in order to have a flourishing community, and to be a successful leader, it's valuable to make yourself legible to others, and others more legible to yourself. Even small improvements in an activity as frequent as communication can have huge effects over time, as we make it easier to see each other as we actually are and to clearly exchange our ideas. 

A number of people wanted to improve in this area together, and so we’re working towards establishing a series of workshops with a focus on practice and individual feedback. A longer post on why this is important is coming up, and there will be information on the structure of the event after our first teacher’s meeting. If you would like to help out or participate, please fill out this poll:

https://goo.gl/forms/MzkcsMvD2bKzXCQN2

Circling Explorations (Qiaochu and others)

Much of the discussion at the Unconference, while focused on community, ultimately was explored through an intellectual lens. By contrast, "Circling" is a practice developed by the Authentic Relating community which is focused explicitly on feelings. The basic premise is (sort of) simple: you sit in a circle in a secluded space, and you talk about how you're feeling in the moment. Exactly how this plays out is a bit hard to explain, but the intended result is to become better both at noticing your own feelings and the people around you.

Opinions were divided as to whether this was something that made sense for "rationalists to do on their own", or whether it made more sense to visit more explicitly Circling-focused communities, but several people expressed interest in trying it again.

Making Helping Fun and More Accessible (Suggested by Oliver Habryka)

Ultimately we want a lot of people who are able and excited to help out with challenging projects - to improve our collective group ambition. But to get there, it'd be really helpful to have "gateway helping" - things people can easily pitch in to do that are fun, rewarding, clearly useful but on the "warm fuzzies" side of helping. Oliver suggested this as a way to get people to start identifying as people-who-help.

There were two main sets of habits that seemed worth cultivating:

1) Making it clear to newcomers that they're encouraged to help out with events, and that this is actually a good way to make friends and get more involved. 

2) For hosts and event planners, look for opportunities to offer people things that they can help with, and make sure to publicly praise those who do help out.

Some of this might dovetail nicely with the Welcoming Committee, both as something people can easily get involved with, and (if there ends up being a public-facing website to introduce people to the community) as a way to connect people with events that could use help.

Volunteering-as-Learning, and Big Event Specific Workshops

Sometimes volunteering just requires showing up. But sometimes it requires special skills, and some events might need people who are willing to practice beforehand or learn-by-doing with a commitment to help at multiple events.

A vague cluster of skills that's in high demand is "predict logistical snafus in advance to head them off, and notice logistical snafus happening in realtime so you can do something about them." Earlier this year there was an Ops Workshop that aimed to teach this sort of skill, which went reasonably well but didn't really lead into a concrete use for the skills that would help them solidify.

One idea was to do Ops workshops (or other specialized training) in the month before a major event like Solstice or EA Global, giving them an opportunity to practice skills and making that particular event run smoother.

(This specific idea is not currently planned for implementation as it was among the more ambitious ones, although Brent Dill's series of "practice setting up a giant dome" beach parties in preparation for Burning Man are pointing in a similar direction)

Making Sure All This Actually Happens (Sarah Spikes, and hopefully everyone!)

To avoid the trap of dreaming big and not actually getting anything done, Sarah Spikes volunteered as project manager, creating an Asana page. People who were interested in committing to a deadline could opt into getting pestered by her to make sure things got done.

V. Final Words

To wrap up the event, I focused on some final concepts that underlie this whole endeavor. 

The thing we're aiming for looks something like this:

In a couple months (hopefully in July), there'll be a followup unconference. The theme will be "Innovation and Excellence", addressing the twofold question "how do we encourage more people to start cool projects?" and "how do we get to a place where longterm projects ultimately reach a high quality state?"

Both elements feel important to me, and they require somewhat different mindsets (both on the part of the people running the projects, and the part of the community members who respond to them). Starting new things is scary and having too high standards can be really intimidating, yet for longterm projects we may want to hold ourselves to increasingly high standards over time.

My current plan (subject to lots of revision) is for this to become a series of community unconferences that happen roughly every 3 months. The Bay area is large enough with different overlapping social groups that it seems worthwhile to get together every few months and have an open-structured event to see people you don't normally see, share ideas, and get on the same page about important things.

Current thoughts for upcoming unconference topics are:

Innovation and Excellence
Personal Epistemic Hygiene
Group Epistemology

An important piece of each unconference will be revisiting things at the previous one, to see if projects, ideas or experiments we talked about were actually carried out and what we learned from them (most likely with anonymous feedback collected beforehand so people who are less comfortable speaking publicly have a chance to express any concerns). I'd also like to build on topics from previous unconferences so they have more chance to sink in and percolate (for example, have at least one talk or discussion about "empathy and trust as related to epistemic hygiene").

Starting and Finishing Unconferences Together

My hope is to get other people involved sooner rather than later so this becomes a "thing we are doing together" rather than a "thing I am doing." One of my goals with this is also to provide a platform where people who are interested in getting more involved with community leadership can take a step further towards that, no matter where they currently stand (ranging anywhere from "give a 30 second lightning talk" to "run a discussion, or give a keynote talk" to "be the primary organizer for the unconference.")

I also hope this is able to percolate into online culture, and to other in-person communities where a critical mass of people think this'd be useful. That said, I want to caution that I consider this all an experiment, motivated by an intuitive sense that we're missing certain things as a culture. That intuitive sense has yet to be validated in any concrete fashion. I think "willingness to try things" is more important than epistemic caution, but epistemic caution is still really important - I recommend collecting lots of feedback and being willing to shift direction if you're trying anything like the stuff suggested here.

(I'll have an upcoming post on "Ways Project Hufflepuff could go horribly wrong")

Most importantly, I hope this provides a mechanism for us to collectively take seriously the ideas we're ostensibly supposed to be taking seriously. I hope that this translates into the sort of culture that The Craft and The Community was trying to point us towards, and, ideally, eventually, a concrete sense that our community can play a more consistently useful role in making sure the world turns out okay.

If you have concerns, criticism, or feedback, I encourage you to comment here if you feel comfortable, or on the Unconference Feedback Form. So far I've been erring on the side of "move forward and set things in motion," but I'll be shifting for the time being towards getting feedback and making sure this thing is steering in the right direction.

-

In addition to the people listed throughout the post, I'd like to give particular thanks to Duncan Sabien for general inspiration and a lot of concrete help, Lahwran for giving the most consistent and useful feedback, and Robert Lecnik for hosting the space. 

[Link] Reality has a surprising amount of detail

14 jsalvatier 13 May 2017 08:02PM

Thoughts on civilization collapse

14 Stuart_Armstrong 04 May 2017 10:41AM

Epistemic status: an idea I believe moderately strongly, based on extensive reading but not rigorous analysis.

We may have a dramatically wrong idea of civilization collapse, mainly inspired by movies that obsess over dramatic tales of individual heroism.

 

Traditional view:

In a collapse, anarchy will break out, and it will be a war of all against all or small groups against small groups. Individual weaponry (including heavy weapons) and basic food production will become paramount; traditional political skills, not so much. Government collapse is long term. Towns and cities will suffer more than the countryside. The best course of action is to have a cache of weapons and food, and to run for the hills.

 

Alternative view:

In a collapse, people will cling to their identified tribe for protection. Large groups will have no difficulty suppressing or taking over individuals and small groups within their areas of influence. Individual weaponry may be important (given less of a police force), but heavy weaponry will be almost irrelevant as no small group will survive alone. Food production will be controlled by the large groups. Though the formal "government" may fall, and countries may splinter into more local groups, government will continue under the control of warlords, tribal elders, or local variants. Cities, with their large and varied-skill workforce, will suffer less than the countryside. The best course of action is to have a stash of minor luxury goods (solar-powered calculators, comic books, pornography, batteries, antiseptics) and to make contacts with those likely to become powerful after a collapse (army officers, police chiefs, religious leaders, influential families).

Possible sources to back up this alternative view:

  • The book Sapiens argues that governments and markets are the ultimate enablers of individualism, with extended-family-based tribalism as the "natural" state of humanity.
  • The history of Somalia demonstrates that laws and enforcement continue even after a government collapse, by going back to more traditional structures.
  • During China's period of anarchy, large groups remained powerful: the nationalists, the communists, the Japanese invaders. The other sections of the country were generally under the control of local warlords.
  • Rational Wiki argues that examples of collapse go against the individualism narrative.

 

Bad intent is a disposition, not a feeling

13 Benquo 01 May 2017 01:28AM

It’s common to think that someone else is arguing in bad faith. In a recent blog post, Nate Soares claims that this intuition is both wrong and harmful:

I believe that the ability to expect that conversation partners are well-intentioned by default is a public good. An extremely valuable public good. When criticism turns to attacking the intentions of others, I perceive that to be burning the commons. Communities often have to deal with actors that in fact have ill intentions, and in that case it's often worth the damage to prevent an even greater exploitation by malicious actors. But damage is damage in either case, and I suspect that young communities are prone to destroying this particular commons based on false premises.

To be clear, I am not claiming that well-intentioned actions tend to have good consequences. The road to hell is paved with good intentions. Whether or not someone's actions have good consequences is an entirely separate issue. I am only claiming that, in the particular case of small high-trust communities, I believe almost everyone is almost always attempting to do good by their own lights. I believe that propagating doubt about that fact is nearly always a bad idea.

It would be surprising, if bad intent were so rare in the relevant sense, that people would be so quick to jump to the conclusion that it is present. Why would that be adaptive?

What reason do we have to believe that we’re systematically overestimating this? If we’re systematically overestimating it, why should we believe that it’s adaptive to suppress this?

There are plenty of reasons why we might make systematic errors on things that are too infrequent or too inconsequential to yield a lot of relevant-feeling training data or matter much for reproductive fitness, but social intuitions are a central case of the sort of things I would expect humans to get right by default. I think the burden of evidence is on the side disagreeing with the intuitions behind this extremely common defensive response, to explain what bad actors are, why we are on such a hair-trigger against them, and why we should relax this.

Nate continues:

My models of human psychology allow for people to possess good intentions while executing adaptations that increase their status, influence, or popularity. My models also don’t deem people poor allies merely on account of their having instinctual motivations to achieve status, power, or prestige, any more than I deem people poor allies if they care about things like money, art, or good food. […]

One more clarification: some of my friends have insinuated (but not said outright as far as I know) that the execution of actions with bad consequences is just as bad as having ill intentions, and we should treat the two similarly. I think this is very wrong: eroding trust in the judgement or discernment of an individual is very different from eroding trust in whether or not they are pursuing the common good.

Nate's argument is almost entirely about mens rea - about subjective intent to make something bad happen. But mens rea is not really a thing. He contrasts this with actions that have bad consequences, which are common. But there’s something in the middle: following an incentive gradient that rewards distortions. For instance, if you rigorously A/B test your marketing until it generates the presentation that attracts the most customers, and don’t bother to inspect why they respond positively to the result, then you’re simply saying whatever words get you the most customers, regardless of whether they’re true. In such cases, whether or not you ever formed a conscious intent to mislead, your strategy is to tell whichever lie is most convenient; there was nothing in your optimization target that forced your words to be true ones, and most possible claims are false, so you ended up making false claims.
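To make that failure mode concrete, here's a minimal sketch in Python (all claims, numbers, and names here are hypothetical, invented purely for illustration): a selection loop that picks whichever marketing claim converts best. Nothing in the optimization target ever consults whether a claim is true, so when the exaggerated claim converts better, it wins.

```python
import random

# Hypothetical candidate claims. The 'true' flag exists only for the
# reader's benefit - the selection loop below never looks at it.
claims = [
    {"text": "Doubles your productivity, guaranteed", "appeal": 0.18, "true": False},
    {"text": "Has helped some users focus better",    "appeal": 0.09, "true": True},
]

def ab_test(claim, n_visitors=10_000):
    """Simulate one A/B test arm: fraction of visitors who convert."""
    conversions = sum(random.random() < claim["appeal"] for _ in range(n_visitors))
    return conversions / n_visitors

# The optimization target is conversion rate, and nothing else.
winner = max(claims, key=ab_test)
print(winner["text"])  # the exaggerated claim wins; truth never entered the loop
```

No one in this loop ever decided to lie; the distortion lives entirely in what the target omits.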

More generally, if you try to control others’ actions, and don’t limit yourself to doing that by honestly informing them, then you’ll end up with a strategy that distorts the truth, whether or not you meant to. The default state for any given constraint is that it has not been applied to someone's behavior. To say that someone has the honest intent to inform is a positive claim about their intent. It's clear to me that we should expect this to sometimes be the case - sometimes people perceive a convergent incentive to inform one another, rather than a divergent incentive to grab control. But, if you do not defend yourself and your community against divergent strategies unless there is unambiguous evidence, then you make yourself vulnerable to those strategies, and should expect to get more of them.

I’ve been criticizing EA organizations a lot for deceptive or otherwise distortionary practices (see here and here), and one response I often get is, in effect, “How can you say that? After all, I've personally assured you that my organization never had a secret meeting in which we overtly resolved to lie to people!”

Aside from the obvious problems with assuring someone that you're telling the truth, this is generally something of a nonsequitur. Your public communication strategy can be publicly observed. If it tends to create distortions, then I can reasonably infer that you’re following some sort of incentive gradient that rewards some kinds of distortions. I don’t need to know about your subjective experiences to draw this conclusion. I don’t need to know your inner narrative. I can just look, as a member of the public, and report what I see.

Acting in bad faith doesn’t make you intrinsically a bad person, because there’s no such thing. And besides, it wouldn't be so common if it required an exceptionally bad character. But it has to be OK to point out when people are not just mistaken, but following patterns of behavior that are systematically distorting the discourse - and to point this out publicly so that we can learn to do better, together.

(Cross-posted at my personal blog.)

[EDITED 1 May 2017 - changed wording of title from "behavior" to "disposition"]

SlateStarCodex Meetups Everywhere: Analysis

11 mingyuan 13 May 2017 12:29AM

The first round of SlateStarCodex meetups took place from April 4th through May 20th, 2017 in 65 cities, in 16 countries around the world. Of the 69 cities originally listed as having 10 or more people interested, 9 did not hold meetups, and 5 cities that were not on the original list did hold meetups.

We collected information from 43 of these events. Since we are missing data for 1/3 of the cities, there is probably some selection bias in the statistics; I would speculate that we are less likely to have data from less successful meetups.

Of the 43 cities, 25 have at least tentative plans for future meetups. Information about these events will be posted at the SSC Meetups GitHub.

 

Turnout

Attendance ranged from 3 to approximately 50 people, with a mean of 16.7. Turnout averaged about 50% of those who expressed interest on the survey (range: 12% to 100%), twice what Scott expected. This average does not appear to have been skewed by high turnout at a few events – mean: 48%, median: 45%, mode: 53%.
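As a quick illustration of that skew check, here's a minimal sketch using made-up turnout fractions (not the actual survey data): if a few high-turnout events were inflating the average, the mean would sit well above the median.

```python
from statistics import mean, median

# Hypothetical per-city turnout fractions (attendees / people who expressed
# interest on the survey) - illustrative only, not the real dataset.
turnout = [0.12, 0.35, 0.45, 0.48, 0.50, 0.53, 0.53, 0.60, 0.75, 1.00]

# When mean and median roughly agree, the average is representative
# rather than skewed by a few outlier events.
print(f"mean: {mean(turnout):.0%}, median: {median(turnout):.0%}")
```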

On average, gender ratio seemed to be roughly representative of SSC readership overall, ranging from 78% to 100% male (for the 5 meetups that provided gender data). The majority of attendees were approximately 20-35 years old, consistent with the survey mean age of 30.6.

 

Existing vs new meetups

Approximately one fifth of the SSC meetups were hosted by existing rationality or LessWrong groups. Some of these got up to 20 new attendees from the SSC announcement, while others saw no new faces at all. The two established meetups that included data about follow-up meetings reported that retention rates for new members were very low, at best 17% for the next meeting.

Here, it seems important to make a distinction between the needs of SSC meetups specifically and rationality meetups more generally. On the 2017 survey, 50% of readers explicitly did not identify with LW and 54% explicitly did not identify with EA. In addition, one organizer expressed the concern that, “Going forward, I think there is a concern of “rationalists” with a shared background outnumbering the non-lesswrong group, and dominating the SSC conversation, making new SSC fans less likely to engage.”

This raises the question of whether SSC groups should try to exist separately from local EA/LW/rationalist/skeptic groups – this is of particular concern in locations where the community is small and it’s difficult for any of these groups to function on their own due to low membership.

Along the same lines, one organizer wondered how often it made sense to hold events, since “If meetups happen very frequently, they will be attended mostly by hardcore fans (and a certain type of person), while if they are scheduled less frequently, they are likely to be attended by a larger, more diverse group. My fear is the hardcore fans who go bi-weekly will build a shared community that is less welcoming/appealing to outsiders/less involved people, and these people will be less willing to get involved going forward.”

Suggestions on how to address these concerns are welcome.

 

Advice for initial meetings

Bring name tags, and collect everyone’s email addresses. It’s best to do this on a computer or tablet, since some people have illegible handwriting, and you don’t want their orthographic deficiencies to mean you lose contact with them forever.

Don’t try to impose too much structure on the initial meeting, since people will mostly just want to get to know each other and talk about shared interests. If possible, it’s also good to not have a hard time limit - meetups in this round lasted between 1.5 and 6 hours, and you don’t want to have to make people leave before they’re ready. However, both structure and time limits are things you will most likely want if you have regularly recurring meetups.

 

Content

Most meetups consisted of unstructured discussion in smallish groups (~7 people). At least one organizer had people pair up and ask each other scripted questions, while another used lightning talks as an ice-breaker. Other activities included origami, Rationality Cardinality, and playing with magnadoodles and diffraction glasses, but mostly people just wanted to talk.

Topics, predictably, mostly centered around shared interests, and included: SSC and other rationalist blogs, rationalist fiction, the rationality community, AI, existential risk, politics and meta-politics, book recommendations, and programming (according to the survey, 30% of readers are programmers), as well as normal small talk and getting-to-know-each-other topics.

Common ice-breakers included first SSC post read, how people found SSC, favorite SSC post, and SSC vs LessWrong (aka, is Eliezer or Scott the rightful caliph).

Though a few meetups had a little difficulty getting conversation started and relied on ice-breakers and other predetermined topics, no organizer reported prolonged awkwardness; people had a lot to talk about and conversation flowed quite easily for the most part.

One area where several organizers encountered difficulties was large discrepancies in knowledge of rationalist-sphere topics among attendees, since some people had only recently discovered SSC or were even non-readers brought along by friends, while many others were long-time members of the community. Suggestions for quickly and painlessly bridging inferential gaps on central concepts in the community would be appreciated.

 

Locations 

Meetups occurred in diverse locations, including restaurants, cafés, pubs/bars, private residences, parks, and meeting rooms in coworking spaces or on university campuses.

Considerations for choosing a venue:

  • Capacity – Some meetups found that their original venues couldn’t accommodate the number of people who attended. This happened at a private residence and at a restaurant. Be flexible about moving locations if necessary.
  • Arrangement – For social meetups, you will probably want a more flexible format. For this purpose, it’s best to have the run of the space, which you have in private residences, parks, meeting rooms, and bars and restaurants if you reserve a whole room or floor.
  • Noise – Since the main activity is talking, this is an important consideration. An ideal venue is quiet enough that you can all hear each other, but (if public) not so quiet that you will be disrupting others with your conversation.
  • Visibility – If meeting in a public place, have a somewhat large sign that says ‘SSC’ on it, placed somewhere easily visible. If the location is large or hard to find, consider including your specific location (e.g. ‘we’re at the big table in the northwest corner’) or GPS coordinates in the meetup information.
  • Permission – Check with the manager first if you plan to hold a large meetup in a private building, such as a mall, market, or café. Also consider whether you’ll be disturbing other patrons.
  • Time restrictions – If you are reserving a space, or if you are meeting somewhere that has a closing time, be aware that people may want to continue their discussions for longer than the space is available. Have a contingency plan for this, a second location to move to in case you run overtime.
  • Availability of food – Some meetups lasted as long as six hours, so it’s good to either bring food, meet somewhere with easy access to food, or be prepared to go to a restaurant.
  • Privacy – People at some meetups were understandably hesitant to have controversial / culture war discussions in public. If you anticipate this being a problem, you should try to find a more private venue, or a more secluded area.

Conclusion

Overall most meetups went smoothly, and many had unexpectedly high turnout. Almost every single organizer, even for the tiny meetups, reported that attendees showed interest in future meetings, but few had concrete plans.

These events have been an important first step, but it remains to be seen whether they will lead to lasting local communities. The answer is largely up to you.

If you attended a meetup, seek out the people you had a good time talking to, and make sure you don’t lose contact with them. If you want there to be more events, just set a time and place and tell people. You can share details on local Facebook groups, Google groups, and email lists, and on LessWrong and the SSC meetups repository. If you feel nervous about organizing a meetup, don’t worry, there are plenty of resources just for that. And if you think you couldn’t possibly be an organizer because you’re somehow ‘not qualified’ or something, well, I once felt that way too. In Scott’s words, “it would be dumb if nobody got to go to meetups because everyone felt too awkward and low-status to volunteer.”

Finally, we’d like to thank Scott for making all of this possible. One of the most difficult things about organizing meetups is that it’s hard to know where to look for members, even if you know there must be dozens of interested people in your area. This was an invaluable opportunity to overcome that initial hurdle, and we hope that you all make the most of it.

 

Thanks to deluks917 for providing feedback on drafts of this report, and for having the idea to collect data in the first place :)

That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox

11 Stuart_Armstrong 11 May 2017 09:16AM

That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox

Anders Sandberg, Stuart Armstrong, Milan M. Cirkovic

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

As far as I can tell, the paper's physics is correct (most of the energy comes not from burning stars but from the universe's mass).
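For readers who want the back-of-the-envelope version: the 10^30 figure can be reconstructed from Landauer's principle (this is my sketch, not the paper's own derivation; the far-future temperature below is the approximate de Sitter horizon temperature):

```latex
% Landauer bound: minimum energy to erase one bit at temperature T
E_{\min} = k_B T \ln 2

% Irreversible operations per unit of energy therefore scale as 1/T,
% so the gain from waiting for the universe to cool is roughly the
% ratio of temperatures (~3 K today vs ~10^{-30} K in the far future):
\frac{N_{\text{future}}}{N_{\text{now}}}
  \approx \frac{T_{\text{now}}}{T_{\text{future}}}
  \approx \frac{3\,\mathrm{K}}{10^{-30}\,\mathrm{K}}
  \approx 10^{30}
```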

However, the conclusions are likely wrong, because it's rational for "sleeping" civilizations to still want to round up stars that might be ejected from galaxies, collect cosmic dust, and so on.

The paper is still worth publishing, though, because there may be other, more plausible ideas in the vicinity of this one. And it describes how future civilizations may choose to use their energy.

Existential risk from AI without an intelligence explosion

9 AlexMennen 25 May 2017 04:44PM

[xpost from my blog]

In discussions of existential risk from AI, it is often assumed that the existential catastrophe would follow an intelligence explosion, in which an AI creates a more capable AI, which in turn creates a yet more capable AI, and so on, a feedback loop that eventually produces an AI whose cognitive power vastly surpasses that of humans, which would be able to obtain a decisive strategic advantage over humanity, allowing it to pursue its own goals without effective human interference. Victoria Krakovna points out that many arguments that AI could present an existential risk do not rely on an intelligence explosion. I want to look in slightly more detail at how that could happen. Kaj Sotala also discusses this.

An AI starts an intelligence explosion when its ability to create better AIs surpasses that of human AI researchers by a sufficient margin (provided the AI is motivated to do so). An AI attains a decisive strategic advantage when its ability to optimize the universe surpasses that of humanity by a sufficient margin. Which of these happens first depends on what skills AIs have the advantage at relative to humans. If AIs are better at programming AIs than they are at taking over the world, then an intelligence explosion will happen first, and it will then be able to get a decisive strategic advantage soon after. But if AIs are better at taking over the world than they are at programming AIs, then an AI would get a decisive strategic advantage without an intelligence explosion occurring first.

Since an intelligence explosion happening first is usually considered the default assumption, I'll just sketch a plausibility argument for the reverse. There's a lot of variation in how easy cognitive tasks are for AIs compared to humans. Since programming AIs is not yet a task that AIs can do well, it doesn't seem like it should be a priori surprising if programming AIs turned out to be an extremely difficult task for AIs to accomplish, relative to humans. Taking over the world is also plausibly especially difficult for AIs, but I don't see strong reasons for confidence that it would be harder for AIs than starting an intelligence explosion would be. It's possible that an AI with significantly but not vastly superhuman abilities in some domains could identify some vulnerability that it could exploit to gain power, which humans would never think of. Or an AI could be enough better than humans at forms of engineering other than AI programming (perhaps molecular manufacturing) that it could build physical machines that could out-compete humans, though this would require it to obtain the resources necessary to produce them.

Furthermore, an AI that is capable of producing a more capable AI may refrain from doing so if it is unable to solve the AI alignment problem for itself; that is, if it can create a more intelligent AI, but not one that shares its preferences. This seems unlikely if the AI has an explicit description of its preferences. But if the AI, like humans and most contemporary AI, lacks an explicit description of its preferences, then the difficulty of the AI alignment problem could be an obstacle to an intelligence explosion occurring.

It also seems worth thinking about the policy implications of the differences between existential catastrophes from AI that follow an intelligence explosion versus those that don't. For instance, AIs that attempt to attain a decisive strategic advantage without undergoing an intelligence explosion will exceed human cognitive capabilities by a smaller margin, and thus would likely attain strategic advantages that are less decisive, and would be more likely to fail. Thus containment strategies are probably more useful for addressing risks that don't involve an intelligence explosion, while attempts to contain a post-intelligence explosion AI are probably pretty much hopeless (although it may be worthwhile to find ways to interrupt an intelligence explosion while it is beginning). Risks not involving an intelligence explosion may be more predictable in advance, since they don't involve a rapid increase in the AI's abilities, and would thus be easier to deal with at the last minute, so it might make sense far in advance to focus disproportionately on risks that do involve an intelligence explosion.

It seems likely that AI alignment would be easier for AIs that do not undergo an intelligence explosion, since it is more likely to be possible to monitor and do something about it if it goes wrong, and lower optimization power means lower ability to exploit the difference between the goals the AI was given and the goals that were intended, if we are only able to specify our goals approximately. The first of those reasons applies to any AI that attempts to attain a decisive strategic advantage without first undergoing an intelligence explosion, whereas the second only applies to AIs that do not undergo an intelligence explosion ever. Because of these, it might make sense to attempt to decrease the chance that the first AI to attain a decisive strategic advantage undergoes an intelligence explosion beforehand, as well as the chance that it undergoes an intelligence explosion ever, though preventing the latter may be much more difficult. However, some strategies to achieve this may have undesirable side-effects; for instance, as mentioned earlier, AIs whose preferences are not explicitly described seem more likely to attain a decisive strategic advantage without first undergoing an intelligence explosion, but such AIs are probably more difficult to align with human values.

If AIs get a decisive strategic advantage over humans without an intelligence explosion, then since this would likely involve the decisive strategic advantage being obtained much more slowly, it would be much more likely for multiple, and possibly many, AIs to gain decisive strategic advantages over humans, though not necessarily over each other, resulting in a multipolar outcome. Thus considerations about multipolar versus singleton scenarios also apply to decisive strategic advantage-first versus intelligence explosion-first scenarios.

WMDs in Iraq and Syria

9 ChristianKl 10 May 2017 09:03PM

Tetlock wrote in Superforecasters that the US intelligence establishment was likely justified in believing that Iraq was hiding WMDs. According to Tetlock, their sin was asserting that it was certain that Iraq had WMDs.

When first reading Superforecasters I didn't quite understand the situation. After reading https://theintercept.com/2015/04/10/twelve-years-later-u-s-media-still-cant-get-iraqi-wmd-story-right/ I did.

The core problem was that Saddam lost track of some of his chemical weapons. His military didn't do perfect accounting of them and they looked the same as conventional weapons. It takes an x-ray to tell his chemical weapons apart from the normal ones.

The US intercepted communications in which Saddam told his units to ensure that they had no chemical weapons that inspectors could find. Of course, that communication didn't happen in English. It seems to have been misinterpreted by the US intelligence community as evidence that Saddam was hiding WMDs.

Nearly nobody understood that Iraq having chemical weapons and Iraq hiding them are two different things, because you need to know where your chemical weapons are in order to hide them. By the same token, nobody publicly argues that pure incompetence might be the cause of chemical weapon usage in Syria. We want to see human agency: if a chemical weapon exploded, we want to know that someone is guilty of having made the decision to use it.
In a recent facebook discussion about Iraq and the value of IARPA, a person asserted that the US intelligence community only thought Iraq had WMDs because they were subject to political pressure. 
We have to get better at understanding that bad events can happen without people intending them to happen.

After understanding Iraq, it's interesting to look at Syria. Maybe the chemical weapons that exploded in Syria didn't explode because Assad's troops or the opposition wanted to use them. They might simply have exploded because some idiot did bad accounting and mislabeled a chemical weapon as a conventional one.

The idea that WMDs explode by accident might be too horrible to contemplate. We have to be better at seeing incompetence as a possible explanation when we want to pin the guilt for having made an evil decision on another person.

CFAR workshop with new instructors in Seattle, 6/7-6/11

8 Qiaochu_Yuan 20 May 2017 12:18AM

CFAR is running its first workshop in Seattle! 

Over the past several months, CFAR has been training a new batch of instructors, including me. We're now running a workshop, without the core instructors, in Seattle from June 7th to June 11th. You can apply here, and we have an FAQ here

[Link] Nate Soares' "Assuming Good Intent"

8 Raemon 30 April 2017 05:45PM

Nate Soares' Replacing Guilt Series compiled in epub Format

8 lifelonglearner 30 April 2017 06:36AM

Hey everyone,

I really liked Nate Soares' Replacing Guilt series, which has had a major positive impact on growing my intrinsic motivation.

Recently, I compiled all the posts into an ePUB for my own reading, and I thought it might be good to share it here if anyone would like to download it for their e-readers / on-the-go-reading. (I got Nate's permission first, so it's all good.)

Google Drive link here.

Experiments, iterations and the scientific method

7 Elo 02 May 2017 12:59AM

Original post: http://bearlamp.com.au/experiments-iterations-and-the-scientific-method


Today, an ordinary day.  I woke up at 6am.  It was still dark out.  I did a quick self-check.

"Did I wake up with energy?"  No not really...  note to self.  But it is quite early.

Rewind one day.


Yesterday I woke up to a phone call, and did a quick self test...

"Did I wake up with energy?"  Yes!  Like two cups of coffee and the light of a thousand suns.

"Did I have energy 5 minutes after waking up?"  no.

"Did I have vivid dreams?" Yes

"are my fingers and toes cold?" Yes, damn.

(waking up next to me can be jarring and uncomfortable for a sleepy person)

Steps in the right direction.

I got up and did what has become my usual stack: weighing out 5g creatine, 3g citrulline malate, 10-25g of soylent v1.5, and 60g protein.  From this I removed the vitamin C and added magnesium.


Even my notes have notes.  Confounding factors yesterday include:

  • High intensity exercise (run in the morning)
  • garlic (for dinner)
  • An Iron tablet taken to solve the cold hands/feet problem (status: null)
  • lots of carbs that I had as part of dinner with a friend.

Experimenting in the real world is hard, and experimenting on sleep is particularly slow.  At the rate of one trial a day, that's many trials before you can confirm or deny a result.
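
To put a rough number on "many trials" (a back-of-envelope sketch with invented effect sizes): the standard approximation for a two-sample comparison at 80% power and alpha = 0.05 needs about 16·(σ/δ)² observations per condition, where δ is the effect you want to detect and σ is the day-to-day noise.

    # Back-of-envelope sample size: days needed per condition to detect a
    # mean shift `effect` against daily noise `sd` (80% power, alpha = 0.05).
    def days_needed(effect, sd):
        return 16 * (sd / effect) ** 2  # n ≈ 16 * (sigma/delta)^2

    # e.g. a 30-minute change in sleep time against 45 minutes of daily noise:
    print(days_needed(effect=30, sd=45))  # 36.0 days per condition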


Experiments

I recently rediscovered the scientific method - by which I mean, I realised I wasn't applying it and I needed to figure out how to apply it to my life (haha... not that I "rediscovered the scientific method" and am awaiting my Nobel prize).

[Figure: the scientific method as a loop.  Diagram by ArchonMagnus - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=42164616]

A few main features here.  The ones that make it possible to apply these steps in real life.

  1. There are two loops: the "run experiments" loop and the "make theories in preparation for experiments" loop.
  2. The process goes on forever.
  3. You need to have some knowledge of the world around you before you can generate hypotheses worth testing.
  4. Hypotheses need to be run until you can confirm your models.  And being able to disprove them would help.
  5. you might need to rely on wrong-but-useful models on the way to finding the better models.

There are 7 parts to that circle.  They are hard to remember.  I looked for ways of combining those 7 to make the list smaller and easier to remember.  You can cluster and bunch them, but that doesn't do them justice.  There are just 7 parts to the method.
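
As a rough sketch of those two loops (my paraphrase of the diagram, not an exact transcription):

    # Sketch of the scientific method as two nested loops: an outer
    # "make theories" loop and an inner "run experiments" loop.
    def do_science(make_hypothesis, run_experiment, is_settled):
        knowledge = []                       # prior observations of the world
        while True:                          # the process goes on forever
            hypothesis = make_hypothesis(knowledge)        # theory loop
            while not is_settled(hypothesis, knowledge):   # experiment loop
                knowledge.append(run_experiment(hypothesis))
            # even a wrong-but-useful model feeds the next round of theories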


The first thing I took was vitamin C.  I reasoned that a bottle of vitamins was cheap and C is pretty harmless.  Then I bought a jar of fish oil capsules that is fatter than my leg.  Then some protein powder and thanks to Julian's guide Citrulline malate.  Also Creatine, Calcium, magnesium (the supermarket kind) and eventually the fancy heavy kind from a specialist store.


I just started taking things, because why not.  Some initial successes include:

  • Oh crap if you don't eat any protein (dieting for weight loss) and exercise too much - everything just hurts for days and days and days.  I feel great when my muscles aren't hurting all the time.
  • Fighting high cholesterol for more than the last 10 years - cholesterol now well within normal.
  • Fish oil?  What is this stuff for anyways, whatever.
  • Protein tastes better with some Vitamin C in it.  Saves the trouble of adding flavours.
  • I seem to be sleeping great lately.
  • I seem to be more assertive (likely high Testosterone, blood tests confirm)

My total sleep time went down, and I felt great.


December 24 2016

On this day I was with friends at a beach, when a particularly charismatic friend suggested, "let's go into that drain pipe halfway up a cliff wall", surrounded by about 9 people who all jokingly nodded and agreed it was a good idea.  Having just that week been ranting about walking the walk, not just talking the talk, I silently got up, climbed the cliff wall and wandered in.  It's funny, because climbing into the pipe was easy.  Walking to the other end: easy.  Coming back and getting down: easy.

Following my charismatic friend in for the second time, after climbing up the cliff wall I grabbed a branch that was no longer as strong as the first time I grabbed it.  I slipped and fell 3 metres, landing on my right heel.  There was a thud!  A crunch!  And the 8 or so people sitting around still proceeded to ask me if I was okay.  Which is an interesting question: I was okay in that I did not die; my foot was sore, but I was okay.

I couldn't run for weeks.  By my estimates this injury set me back at least 7 weeks - 6 weeks for a broken bone to recover - though I never got an x-ray to confirm the break (doctor stories for another post: I actually went to a doctor twice and asked for x-rays twice, and failed to convince the doctors to x-ray my foot.  I figured that even with x-ray confirmation of a break I wouldn't be able to do anything differently, so there wasn't much point pushing again and again).

At this point, because I stopped exercising, I stopped taking all the supplements.  I also got hit with a wave of, "are all these pills and supplements even doing anything?" (12 pills, 3 powders).


That's where my good moods, high energy, reduced sleep time, energy on waking up (30sec, 5min), assertiveness, and mental state (lack of critical judgement) all vanished.

I couldn't really justify taking protein because I wasn't exercising enough.  But something had caused my shifts in all the good things.  And I was stuck for knowing what.  I could go back to taking everything for the heck of it, but I don't know if they did anything, or if general fitness and exercise made a bigger difference than all the supplements together.


I tried the shotgun method: take a handful of this or a handful of that whenever I felt like it.  But what was causing the right shifts?  What could I trust?  Had anyone written this up before?  Even if they had, it wouldn't be very relevant, because the effects inside my own body would be slightly different.  I had some early luck with creatine; it seemed to reduce my sleep time and bring back my energy at wake-up.  But only when taken in the afternoon.  Or was it only if I took it with protein, and not without?  Or maybe it had something to do with the rest of my meals.

This guessing game was not effective.  I was going to have to test this the hard way.


Iterations

Iterations...  Establishing a baseline is hard.  What am I like when I don't take anything?  What was I like before I took anything?  I never even asked, I never even tested.  And what did I want to test?  If you look back at the scientific method, I guess what I did was MAD SCIENCE.  A misshapen process of guesswork and hoping things would work.  At least I took data before everything came crashing down.  I had general theories that one or a few of the things that I was doing had caused the positive change.  But this is the time for controlled experiments.

So I came up with what I wanted back the most:

  • Less hours spent asleep
  • Did I wake up with energy (30 seconds)?
  • Did I wake up with energy (5 minutes)?

And what seemed like a confounder:

  • Did I have weirdly vivid dreams?

Other things that seemed to affect my mood:

  • Did I shower today?
  • Did I exercise today?

And my conditions:

  • Eating protein
  • Sex
  • Creatine
  • Exercise
  • High intensity exercise
  • Citrulline malate
  • Calorie deficit
  • Calorie surplus
  • Magnesium
  • Vitamin C
  • Calcium
  • Garlic
  • Spices
  • Melatonin
  • Fish oil

One day at a time.  This isn't the first time I have been hit with this problem.  It's very hard to feel the very iterative core of the slow-progress problem until you are doing tests one day at a time, trying not to add confounding variables.

This process, particularly at the level of self-monitoring personal internal states, is hard.  It is slow.  It is elusive.  Do you remember how you felt last Tuesday?  Do you remember how you felt three Tuesdays ago?  Me neither.  When optimising for a good state of mind and a happy state of being, this means you can't keep track of it internally.  You need a journal, you need to run tests.
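
A minimal version of that journal (a sketch; the column names are just the questions from this post):

    import csv, datetime

    # Minimal daily experiment journal: one row per day, conditions + outcomes.
    FIELDS = ["date", "creatine", "citrulline", "magnesium", "exercise",
              "energy_30s", "energy_5min", "vivid_dreams", "hours_slept"]

    def log_day(**answers):
        answers["date"] = datetime.date.today().isoformat()
        with open("sleep_journal.csv", "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if f.tell() == 0:        # new file: write the header row first
                writer.writeheader()
            writer.writerow(answers)

    # e.g.: log_day(creatine=1, citrulline=0, magnesium=1, exercise=0,
    #               energy_30s="no", energy_5min="yes", vivid_dreams="no",
    #               hours_slept=7.5)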

You need to pay attention to state, you need to internally get used to where you are.  Hold in your mind an idea that "this is how I am".  Then you need to control the confounding variables until you are confident that "this is my baseline".  Then you need to change things...

Add exercise.

Then you need to look out for changes to heart rate, resting heart rate, sweat, showers, how energetic you feel, how tired you get at 10pm, how you feel when you wake up, and how thirsty you are generally.

Take away exercise.

How do you feel?  Does it change things?  Heart rate?  Is that actually something you can feel from the inside?  Are you sleeping better or worse or the same?

Add exercise at intensity.

How do you feel?  Is anything sore?  Can you repeat that?

Add protein.  But what dose?  30g, 60g, 120g.

Did the sore feeling go away?  Any other changes like energy level?  (at 120g my pee went green - don't worry that's just a side effect of messing with intakes).

Find the stable state, repeat until you are sure this is the stable state.  3 days, 4 days, 5 days.  Did I see a partner today?  Did I have sex?  Did I have time to go exercise?  Does this factor into things?

Add creatine.

Is there an energy level change?  Can I focus more on the same task?  Am I more thirsty? (Creatine causes more water retention.)  Any changes in sleep?  Has anything new come up?  Did I eat differently to usual?  Could that be a factor?  3 days, 4 days, 5 days...

Add Citrulline Malate.  What dose?  3g, 5g?

Days, days...

Did I have more energy?  Did I notice anything different?  Am I sleeping more or less?  Am I awake more?  Do the seasons have anything to do with it?  What if I exercise?  Does that help?

Get used to the stable state...  days, days, days.  Dinner at an Indian restaurant, weird dreams in the morning - it's probably the spices.  Days to get stable again.

Had pizza for dinner.  Was it the extra salt or the extra garlic?  Or one of the other herbs that made a difference?

Add Soylent.  But how much?  5g, 10g, 20g, 40g?

There's a burst of energy!  But why would Soylent do that?  What's in it?  I don't have time to work that out, too busy doing everything else.  Staying up late, getting up earlier, soylent keeps me from the 3pm dip.  Or was it the Citrulline?

Days, days, tests, tests.

Okay it was probably the soylent, but the citrulline helps with being awake right up until after 10pm.

My fingers and toes are cold...  I don't remember having this for more than six months.  Maybe it's winter, maybe I need to supplement something else.

Add Magnesium.

BAM!

It's still dark out. Why did I wake up at 6am?  I am awake naturally and not tired.

Did I wake up with energy? No, but I did have energy.

Did I have weird dreams? No.

I get out of bed.

Did I wake up with energy (5 mins after wake-up)? YES

I can't even describe what it feels like to be filled with energy.  Like being up on two cups of coffee and extra adrenaline.  What I wouldn't give to get back that nagging voice in the back of my head saying, "hey, you should go exercise", like I had 6 months ago.


This is what it feels like to run experiments and iterate each day.  It's been months.  It's been painful.  What happens when you find a condition that leaves you feeling like crap - but you need to repeat the experiment for validity?

You do science is what happens.


Meta: this took the better part of 2 hours over several sessions.

Original post: http://bearlamp.com.au/experiments-iterations-and-the-scientific-method

Looking for machine learning and computer science collaborators

6 Stuart_Armstrong 26 May 2017 11:53AM

I've been recently struggling to translate my various AI safety ideas (low impact, truth for AI, Oracles, counterfactuals for value learning, etc...) into formalised versions that can be presented to the machine learning/computer science world in terms they can understand and critique.

What would be useful for me is a collaborator who knows the machine learning world (and preferably has presented papers at conferences) with whom I could co-write papers. They don't need to know much of anything about AI safety - explaining the concepts to people unfamiliar with them is going to be part of the challenge.

The results of this collaboration should be papers like Safely Interruptible Agents with Laurent Orseau of DeepMind, and Interactive Inverse Reinforcement Learning with Jan Leike of the FHI/DeepMind.

It would be especially useful if the collaborators were located physically close to Oxford (UK).

Let me know in the comments if you know, or are, a potential candidate.

Cheers!

Physical actions that improve psychological health

6 arunbharatula 23 May 2017 04:33AM

Physical health impacts well-being. However, existing preventative health guidelines are inaccessible to the public because they are highly technical and require specific medical equipment. These notes are not medical advice, nor are they meant to treat any illness. This is a compilation of findings I have come across at one time or another relating physical things back to psychological health. I have not systematically reviewed the literature on any of these topics, nor am I an expert in, nor even familiar with, any of them. I am extremely uncertain about the whole thing. But I figure it's better to write this up and look stupid than keep it inside and act stupid. The hyperlinks point to the best evidence I could find on the matter. I write to solicit feedback, corrections and advice.

 

Microwaves are safe, but cockroaches and even ants are dangerous. And finally: happiness is dietary. If you want the well-being boosts associated with fruit (careful about fruit juice sugar, though!), coffee’s aroma [text] [science news], vanilla yoghurt [news], sufficient B vitamins and choline (alt), or binge drinking or drinking in general, however, I don’t have any easy answers for you. Don’t worry about the smart drugs; nootropics are probably a misnomer. On the other hand, probiotics can treat depression.

 

“There is growing evidence that a diet rich in fruits and vegetables is related to greater happiness, life satisfaction, and positive mood as well. This evidence cannot be entirely explained by demographic or health variables including socio-economic status, exercise, smoking, and body mass index, suggesting a causal link.[50] Further studies have found that fruit and vegetable consumption predicted improvements in positive mood the next day, not vice versa. On days when people ate more fruits and vegetables, they reported feeling calmer, happier, and more energetic than normal, and they also felt more positive the next day.”

- Wikipedia

 

If your diet is out of control: mental contrasting is useful for diabetes self-management, dieting, etc. Tangent: during a seminar I attended in Geneva, the World Health Organisation's chief dietary authority said that recommending dietary patterns (e.g. the Mediterranean diet) rather than individual nutrient intakes (protein, creatine, carbs) is preferable. But I have yet to identify substantiating evidence. The broad consensus among lay skeptical scrutineers of the field of nutrition is that most truths, even broadly accepted ones, are still unclear. However, I have yet to analyse the literature myself.

 

Exercise and sport are good for subjective well-being, quality of life, depression, anxiety, stress and more. Plus, they are fun. You may not enjoy pleasant, wellbeing-related activities; do those activities anyway. I seldom enjoy correcting my posture. I tend to slouch, and I have been specifically advised by a specialised physiotherapist to correct for that. But slouching typically doesn’t cause pain - posture correction is pseudoscience! So are many interventions related to posture correction, like standing desks. On the other hand, I love to get massages - but their benefits are short-lived, so get them regularly!

 

I particularly enjoy them after resistance training or 1-minute workouts (high-intensity interval training). Be careful about stretching: passive stretching can cause injury, unlike active stretching: 'Passive stretching is when you use an outside force other than your own muscle to move a joint or limb beyond its active range of motion, to put your body into a position that you couldn’t do by yourself (such as when you lean into a wall, or have a partner push you into a deeper stretch). Unfortunately, this is the most common form of stretching used.'

 

However, if you aim to bodybuild, protein supplementation is pseudoscientific broscience. And ‘form’ - well, there’s broscience, like "squat with your knees outwards", but probably also lots of credible safety-related information one ought to heed. For weight loss, if you want a real cheat sheet, aspirants can get one with a couple-of-hundred-dollar SNP sequencing kit. But I would be cautious about gene-sequence-driven health prescriptions; some services running that business rely on weak evidence. There are other ‘fad’ fitness ideas that are not grounded in science. For instance: 20 seconds of foam rolling (just as effective as 60 seconds) enhances flexibility (...for no longer than 10 minutes, unless it is done regularly - then it improves long-term flexibility), but it is unclear whether it improves athletic performance or post-performance recovery.

 

Stretching for runners, but not for other kinds of sports, prevents injuries and increases range of motion [wikipedia]. Shoe inserts don’t work reliably either [Wikipedia]. Martial arts therapy is a thing. Physical exercise is good for you. Tai chi, qigong, and meditation (other than mindfulness), such as transcendental meditation, are ineffective in treating depression and anxiety. If you are injured, try rehabilitation exercises. Exercise and performance-enhancing drugs are both cognitive enhancers. Exercise for chronic lower back pain is a good idea.

 

Environment: Avoid outdoor air pollution near residences due to dementia/other-health risks. And, avoid chimney smoke fireplaces.

 

Anecdotally, hygiene improves self-esteem and well-being. Wipe with wet wipes if you wipe hard enough to draw blood. Cover the toilet seat with toilet paper or don’t - it doesn’t matter safety-wise unless the contaminant is <~1hr old. Shower with soap, remove eye mucus, and remove earwax (but not the way you think, likely). Brush twice a day, softly and with the correct technique, replacing your toothbrush every few months. 'Don't rinse with water straight after toothbrushing'. Floss once a day (with a different piece of floss each flossing session), but do not brush immediately after drinking acidic substances. The effectiveness of Tooth Mousse is questionable. Visit the dentist for a check-up every now and then - I’d say about every year at least.

 

Consider sleeping with a face mask and earplugs for better sleep. Blow your nose, clean under your nails and trim them. Eye examinations should be conducted every 2-4 years for those under 40, and up to every 6 months for those 65+. There are health concerns around memory foam pillows/mattresses, so latex pillows may be preferable for those who prefer a sturdier option than traditional pillows/mattresses. Anecdotally, setting alarms to remind you to do things is a simple way to manage your time, not just for waking up. Light therapy is also helpful in treating delayed sleep phase disorder (being a night owl!). Oh, and don’t bother pre-washing dishes before loading the dishwasher (as long as you clean the filter regularly).

 

There are misconceptions around complementary therapies. The Australian Government reviewed the effectiveness of the Alexander technique, homeopathy, aromatherapy, Bowen therapy, Buteyko, Feldenkrais, herbalism, iridology, kinesiology, massage therapy, pilates, reflexology, rolfing, shiatsu, tai chi and yoga. Only for the Alexander technique, Buteyko, massage therapy (esp. remedial massage?), tai chi and yoga was there credible (albeit low-to-moderate quality) evidence that they are useful for certain health conditions.

 

Stressed out reading all this? Pressing on your eyelids gently to temporarily forgo a headache can work. Traumatically stressed out? Video games can treat PTSD. Animal assisted therapy, like service dogs and therapeutic animals are also wonderful.

Thank you!

AI safety: three human problems and one AI issue

6 Stuart_Armstrong 19 May 2017 10:48AM

Crossposted at the Intelligent Agent Foundations Forum.

There have been various attempts to classify the problems in AI safety research, from our old Oracle paper, which classified then-theoretical methods of control, to more recent classifications that grow out of modern, more concrete problems.

These all serve their purpose, but I think a more enlightening classification of the AI safety problems is to look at what issues we are actually trying to solve or avoid. And most of these issues are problems about humans.

Specifically, I feel AI safety issues can be classified as three human problems and one central AI issue. The human problems are:

  • Humans don't know their own values (sub-issue: humans know their values better in retrospect than in prediction).
  • Humans are not agents and don't have stable values (sub-issue: humanity itself is even less of an agent).
  • Humans have poor predictions of an AI's behaviour.

And the central AI issue is:

  • AIs could become extremely powerful.

Obviously, if humans were agents, knew their own values, and could predict whether a given AI would follow those values or not, there would be no problem. Conversely, if AIs were weak, then the human failings wouldn't matter so much.

The points about human values are relatively straightforward, but what's the problem with humans not being agents? Essentially, humans can be threatened, tricked, seduced, exhausted, drugged, modified, and so on, in order to act seemingly against our interests and values.

If humans were clearly defined agents, then what counts as a trick or a modification would be easy to define and exclude. But since this is not the case, we're reduced to trying to figure out the extent to which something like a heroin injection is a valid way to influence human preferences. This both makes humans susceptible to manipulation and makes human values hard to define.

Finally, the issue of humans having poor predictions of AI is more general than it seems. If you want to ensure that an AI has the same behaviour in the testing and training environment, then you're essentially trying to guarantee that you can predict that the testing environment behaviour will be the same as the (presumably safe) training environment behaviour.

 

How to classify methods and problems

That's well and good, but how do various traditional AI methods or problems fit into this framework? This should give us an idea as to whether the framework is useful.

It seems to me that:

 

  • Friendly AI is trying to solve the values problem directly.
  • IRL and Cooperative IRL are also trying to solve the values problem. The greatest weakness of these methods is the not agents problem.
  • Corrigibility/interruptibility are also addressing the issue of humans not knowing their own values, using the sub-issue that human values are clearer in retrospect. These methods also overlap with poor predictions.
  • AI transparency is aimed at getting round the poor predictions problem.
  • Laurent's work on carefully defining the properties of agents is mainly also about solving the poor predictions problem.
  • Low impact and Oracles are aimed squarely at preventing AIs from becoming powerful. Methods that restrict the Oracle's output implicitly accept that humans are not agents.
  • Robustness of the AI to changes between testing and training environment, degradation and corruption, etc... ensures that humans won't be making poor predictions about the AI.
  • Robustness to adversaries is dealing with the sub-issue that humanity is not an agent.
  • The modular approach of Eric Drexler is aimed at preventing AIs from becoming too powerful, while reducing our poor predictions.
  • Logical uncertainty, if solved, would reduce the scope for certain types of poor predictions about AIs.
  • Wireheading, when the AI takes control of reward channel, is a problem that humans don't know their values (and hence use an indirect reward) and that the humans make poor predictions about the AI's actions.
  • Wireheading, when the AI takes control of the human, is as above but also a problem that humans are not agents.
  • Incomplete specifications are either a problem of not knowing our own values (and hence missing something important in the reward/utility) or of making poor predictions (when we thought that a situation was covered by our specification, but it turned out not to be).
  • AIs modelling human knowledge seem to be mostly about getting round the fact that humans are not agents.

Putting this all in a table:

 

Method                         | Values | Not agents | Poor predictions | Powerful
-------------------------------|--------|------------|------------------|---------
Friendly AI                    |   X    |            |                  |
IRL and CIRL                   |   X    |            |                  |
Corrigibility/interruptibility |   X    |            |        X         |
AI transparency                |        |            |        X         |
Laurent's work                 |        |            |        X         |
Low impact and Oracles         |        |     X      |                  |    X
Robustness                     |        |            |        X         |
Robustness to adversaries      |        |     X      |                  |
Modular approach               |        |            |        X         |    X
Logical uncertainty            |        |            |        X         |
Wireheading (reward channel)   |   X    |            |        X         |
Wireheading (human)            |   X    |     X      |        X         |
Incomplete specifications      |   X    |            |        X         |
AIs modelling human knowledge  |        |     X      |                  |

 

Further refinements of the framework

It seems to me that the third category - poor predictions - is the most likely to be expandable. For the moment, it just incorporates all our lack of understanding about how AIs would behave, but it might be more useful to subdivide this.

[Link] How To Build A Community Full Of Lonely People

6 maia 17 May 2017 03:25PM

Hidden universal expansion: stopping runaways

5 Stuart_Armstrong 11 May 2017 09:01AM

We have a new paper out presenting the 'aestivation hypothesis'. It's another attempt to reconcile the fact that cosmic expansion seems very easy with the fact that we see no trace of any alien group doing it.

The idea is that civilizations expand rapidly, but then 'go to sleep', while they wait for the temperature to drop and it becomes possible to do computations with maximal efficiency.

There are a few problems with the theory, though - mainly, why would the civilizations conceal themselves? Even if they were sleeping, they should have some automated processes rounding up intergalactic gases, preventing stars from drifting out of galaxies, and so on.

But though it's hard to justify a civilization permanently hiding, there are reasons why a civilization might hide temporarily.

Consider the following diagram:

Here, a civilization is expanding from the red point, and will eventually reach Earth (drawn not entirely to scale). It's expanding at a decent fraction of light-speed. The red sphere is their physical expansion front, while the yellow sphere is the light expansion front. When the yellow front reaches Earth, we will generally be able to notice their expansion, and have some time to react to it - unless they conceal themselves as they expand.

Why would they want to do that? It's not as if we could counter their expansion, or have any chance of resisting. But there is one thing that we might be able to do: flee. Imagine that we got a hundred years warning; we might be able to rush AI, Dyson the sun, build escape ships and launch them at a significant fraction of light speed, etc. They might never be able to catch us, and, as we or our AIs fled, we could develop technologies to reduce or damage our pursuers.
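
The size of that warning is simple geometry (my numbers are illustrative): if the physical front travels at a fraction v of light-speed and stops hiding at launch, a civilization at distance d light-years sees the announcing light after d years but is only reached after d/v years, leaving d·(1/v - 1) years of warning.

    # Years of warning a civilization at distance d_ly (light-years) gets,
    # if the expansion front moves at v_frac_c of light-speed and is
    # visible from launch.  Illustrative numbers only.
    def warning_years(d_ly, v_frac_c):
        return d_ly * (1.0 / v_frac_c - 1.0)

    print(warning_years(1000, 0.90))  # ~111 years of warning
    print(warning_years(1000, 0.99))  # ~10 years: faster fronts warn less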

Therefore, it makes sense for the expanding civilization to conceal itself until it has any other civilizations completely surrounded. That means that Dyson swarms and other major feats of stellar engineering might be delayed by many years or decades by the red civilization, so that the 'noticeability front' - the distance at which other civilizations can see clear evidence of red's expansion - lags a bit behind the actual expansion front.

[Link] Cognitive Core Systems explaining intuitions behind belief in souls, free will, and creation myths

5 Kaj_Sotala 06 May 2017 12:13PM

AI arms race

5 Stuart_Armstrong 04 May 2017 10:59AM

Racing to the Precipice: a Model of Artificial Intelligence Development

by Stuart Armstrong, Nick Bostrom, and Carl Shulman

This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivised to finish first – by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI-disaster, especially if risk taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each others’ capabilities (and about their own), the more the danger increases.
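
A toy Monte Carlo illustration of one of these claims (mine, far simpler than the paper's Nash-equilibrium model): when risk-taking matters more than skill, the race is won more often by the team that skimped most on safety, so disasters become more likely.

    import random

    # Toy illustration (not the paper's actual model): each team has a random
    # skill and a random risk level (risk = 1 - safety).  Capability is
    # skill + r * risk; the winner's risk level is the disaster probability.
    def disaster_rate(n_teams, r, trials=100_000):
        disasters = 0
        for _ in range(trials):
            skills = [random.random() for _ in range(n_teams)]
            risks = [random.random() for _ in range(n_teams)]
            winner = max(range(n_teams), key=lambda i: skills[i] + r * risks[i])
            if random.random() < risks[winner]:
                disasters += 1
        return disasters / trials

    for r in (0.1, 1.0, 10.0):  # risk-taking matters more and more
        print(r, round(disaster_rate(n_teams=5, r=r), 3))  # danger rises with r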

 

[Link] Why Most Intentional Communities Fail (And Some Succeed)

4 AspiringRationalist 22 May 2017 03:04AM

AGI and Mainstream Culture

4 madhatter 21 May 2017 08:35AM

Hi all,

So, as you may know, a recent episode of Doctor Who, "Smile", was about a misaligned AI trying to maximize smiles (ish). And the latest, "Extremis", was about an alien race who instantiated conscious simulations to test battle strategies for invading Earth, of which the Doctor was a subroutine.

I thought the common thread of AGI was notable, although I'm guessing it's just a coincidence. More seriously, though, this ties in with an argument I thought of, and I want to know your take on it:

If we want to avoid an AI arms race, so that safety research has more time to catch up to AI progress, then we would want to prevent, if at all possible, these issues from becoming more mainstream. The reason is that if AGI in public perception becomes dissociated from the Terminator (i.e. laughable, nerdy, and unrealistic) and starts to look like a serious whoever-makes-this-first-can-take-over-the-world situation, then we will get an arms race faster.

I'm not sure I believe this argument myself. For one thing, being more mainstream has the benefit of attracting more safety research talent, government funding, etc. But maybe we shouldn't be spreading awareness without thinking this through some more.

 

Reaching out to people with the problems of friendly AI

4 Val 16 May 2017 07:30PM

There have been a few attempts to reach out to broader audiences in the past, but mostly in very politically/ideologically loaded topics.

After seeing several examples of how little understanding people have of the difficulties in creating a friendly AI, I'm horrified. And I'm not even talking about a farmer on some hidden ranch, but about people who should know about these things: researchers, software developers meddling with AI research, and so on.

What made me write this post was a highly voted answer on stackexchange.com which claims that the danger of superhuman AI is a non-issue, and that the only way for an AI to wipe out humanity is if "some insane human wanted that, and told the AI to find a way to do it". And the poster claims to be working in the AI field.

I've also seen a TEDx talk about AIs. The speaker hadn't even heard of the paperclip maximizer, and the talk was about the dangers presented by AIs as depicted in the movies, like the Terminator, where an AI "rebels". We can hope that AIs would not rebel, the speaker said, as they cannot feel emotion, so we should hope the events depicted in such movies will not happen; all we have to do is be ethical ourselves and not deliberately write malicious AI, and then everything will be OK.

The sheer and mind-boggling stupidity of this makes me want to scream.

We should find a way to increase public awareness of the difficulty of the problem. The paperclip maximizer should become part of public consciousness, a part of pop culture. Whenever there is a relevant discussion about the topic, we should mention it. We should increase awareness of old fairy tales with a jinn who misinterprets wishes. Whatever it takes to ingrain the importance of these problems into public consciousness.

There are many people graduating every year who've never heard about these problems. Or if they have, they dismiss them as a non-issue, a contradictory thought experiment which can be dismissed without a second thought:

A nuclear bomb isn't smart enough to override its programming, either. If such an AI isn't smart enough to understand people do not want to be starved or killed, then it doesn't have a human level of intelligence at any point, does it? The thought experiment is contradictory.

We don't want our future AI researches to start working with such a mentality.

 

What can we do to raise awareness? We don't have the funding to make a movie which becomes a cult classic. We might start downvoting and commenting on the aforementioned stackexchange post, but that would not solve much if anything.



Are causal decision theorists trying to outsmart conditional probabilities?

4 Caspar42 16 May 2017 08:01AM

Presumably this has been discussed somewhere in the past, but I wonder to what extent causal decision theorists (and many other non-evidential decision theorists, too) are trying to make better predictions than (what they think to be) their own conditional probabilities.

 

To state this question more clearly, let’s look at the generic Newcomb-like problem with two actions a1 and a2 (e.g., one-boxing and two-boxing, cooperating or defecting, not smoking or smoking) and two states s1 and s2 (specifying, e.g., whether there is money in both boxes, whether the other agent cooperates, whether one has cancer). The Newcomb-ness is the result of two properties:

  • No matter the state, it is better to take action a2, i.e. u(a2,s1)>u(a1,s1) and u(a2,s2)>u(a1,s2). (There are also problems without dominance where CDT and EDT nonetheless disagree. For simplicity I will assume dominance, here.)

  • The action cannot causally affect the state, but somehow taking a1 gives us evidence that we’re in the preferable state s1. That is, P(s1|a1)>P(s1|a2) and u(a1,s1)>u(a2,s2).

Then, if the latter two differences are large enough, it may be that

E[u|a1] > E[u|a2].

I.e.

P(s1|a1) * u(s1,a1) + P(s2|a1) * u(s2,a1) > P(s1|a2) * u(s1,a2) + P(s2|a2) * u(s2,a2),

despite the dominance.
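
Plugging in the classic Newcomb numbers (my choice of illustration) makes the tension concrete: a2 is better in every state, yet a1 has the higher conditional expectation.

    # a1 = one-box, a2 = two-box; s1 = money in the opaque box, s2 = empty.
    # Predictor is right 99% of the time; utilities in dollars.
    u = {("a1", "s1"): 1_000_000, ("a1", "s2"): 0,
         ("a2", "s1"): 1_001_000, ("a2", "s2"): 1_000}
    p = {("s1", "a1"): 0.99, ("s2", "a1"): 0.01,  # P(state | action)
         ("s1", "a2"): 0.01, ("s2", "a2"): 0.99}

    def expected_utility(a):
        return sum(p[(s, a)] * u[(a, s)] for s in ("s1", "s2"))

    print(expected_utility("a1"))  # ~990000
    print(expected_utility("a2"))  # ~11000, despite state-wise dominance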

 

Now, my question is: After having taken one of the two actions, say a1, but before having observed the state, do causal decision theorists really assign the probability P(s1|a1) (specified in the problem description) to being in state s1?

 

I used to think that this was the case. E.g., the way I learned about Newcomb’s problem is that causal decision theorists understand that, once they have said the words “both boxes for me, please”, they assign very low probability to getting the million. So, if there were a period between saying those words and receiving the payoff, they would bet at odds that reveal that they assign a low probability (namely P(s1|a2)) to money being under both boxes.

 

But now I think that some of the disagreement might implicitly be based on a belief that the conditional probabilities stated in the problem description are wrong, i.e. that you shouldn’t bet on them.

 

The first data point was the discussion of CDT in Pearl’s Causality. In sections 1.3.1 and 4.1.1 he emphasizes that he thinks his do-calculus is the correct way of predicting what happens upon taking some actions. (Note that in non-Newcomb-like situations, P(s|do(a)) and P(s|a) yield the same result, see ch. 3.2.2 of Pearl’s Causality.)
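
A tiny worked example (my construction, in the spirit of the smoking lesion) of how P(s|a) and P(s|do(a)) come apart under confounding:

    # Toy smoking-lesion numbers (made up): a lesion U causes both smoking (A)
    # and cancer (S); smoking itself is harmless in this toy model.
    P_U = {1: 0.5, 0: 0.5}
    P_A_given_U = {1: 0.8, 0: 0.2}   # P(A=smoke | U=u)
    P_S_given_U = {1: 0.9, 0: 0.1}   # P(S=cancer | U=u); no arrow A -> S

    # Observational: P(S=cancer | A=smoke) -- condition on A, infer U by Bayes.
    P_A = sum(P_A_given_U[u] * P_U[u] for u in (0, 1))
    P_U_given_A = {u: P_A_given_U[u] * P_U[u] / P_A for u in (0, 1)}
    P_S_given_A = sum(P_S_given_U[u] * P_U_given_A[u] for u in (0, 1))

    # Interventional: P(S=cancer | do(A=smoke)) -- back-door adjustment over U.
    P_S_do_A = sum(P_S_given_U[u] * P_U[u] for u in (0, 1))

    print(P_S_given_A)  # ~0.74: smokers are likelier to have the lesion
    print(P_S_do_A)     # 0.5: but *making* someone smoke changes nothing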

 

The second data point is that the smoking intuition in smoking lesion-type problems may often be based on the intuition that the conditional probabilities get it wrong. (This point is also inspired by Pearl’s discussion, but also by the discussion of an FB post by Johannes Treutlein. Also see the paragraph starting with “Then the above formula for deciding whether to pet the cat suggests...” in the computer scientist intro to logical decision theory on Arbital.)

 

Let’s take a specific version of the smoking lesion as an example. Some have argued that an evidential decision theorist shouldn’t go to the doctor because people who go to the doctor are more likely to be sick. If a1 denotes staying at home (or, rather, going anywhere but a doctor) and s1 denotes being healthy, then, so the argument goes, P(s1|a1) > P(s1|a2). I believe that in all practically relevant versions of this problem this difference in probabilities disappears once we take into account all the evidence we already have. This is known as the tickle defense. A version of it that I agree with is given in section 4.3 of Arif Ahmed’s Evidence, Decision and Causality. Anyway, let’s assume that the tickle defense somehow doesn’t apply, such that even if taking into account our entire knowledge base K, P(s1|a1,K) > P(s1|a2,K).

 

I think the reason why many people think one should go to the doctor might be that while asserting P(s1|a1,K) > P(s1|a2,K), they don’t upshift the probability of being sick when they sit in the waiting room. That is, when offered a bet in the waiting room, they wouldn’t accept exactly the betting odds that P(s1|a1,K) and P(s1|a2,K) suggest they should accept.

 

Maybe what is going on here is that people have some intuitive knowledge that they don’t propagate into their stated conditional probability distribution. E.g., their stated probability distribution may represent observed frequencies among people who make their decision without thinking about CDT vs. EDT. However, intuitively they realize that the correlation in the data doesn’t hold up in this naive way.

 

This would also explain why people are more open to EDT’s recommendation in cases where the causal structure is analogous to that in the smoking lesion, but tickle defenses (or, more generally, ways in which a stated probability distribution could differ from the real/intuitive one) don’t apply, e.g. the psychopath button, betting on the past, or the coin flip creation problem.

 

I’d be interested in your opinions. I also wonder whether this has already been discussed elsewhere.

Acknowledgment

Discussions with Johannes Treutlein informed my view on this topic.

How I'd Introduce LessWrong to an Outsider

4 adamzerner 03 May 2017 04:32AM

Note/edit: I'm imagining explaining this to a friend or family member who is at least somewhat charitable and trusting of my judgement. I am not imagining simply putting this on the About page. I should have made this clear from the beginning - my bad. However, I do believe that some (but not all) of the design decisions would be effective on something like the About page as well.


There's this guy named Eliezer Yudkowsky. He's really, really smart. He founded MIRI, wrote a popular fanfic of Harry Potter that centers around rationality, and has a particularly strong background in AI, probability theory, and decision theory. There's another guy named Robin Hanson. Hanson is an economics professor at George Mason, and has a background in physics, AI and statistics. He's also really, really smart.

Yudkowsky and Hanson started a blog called Overcoming Bias in November of 2006. They blogged about rationality. Later on, Yudkowsky left Overcoming Bias and started his own blog - LessWrong.

What is rationality? Well, for starters, it's incredibly interdisciplinary. It involves academic fields like probability theory, decision theory, logic, evolutionary psychology, cognitive biases, lots of philosophy, and AI. The goal of rationality is to help you be right about the things you believe. In other words, the goal of rationality is to be wrong less often. To be LessWrong.

Weird? Useful?

LessWrong may seem fringe-y and cult-y, but the teachings are usually things that aren't controversial at all. Again, rationality teaches you things like probability theory and evolutionary psychology. Things that academics all agree on. Things that academics have studied pretty thoroughly. Sometimes the findings haven't made it to mainstream culture yet, but they're almost always things that the experts agree on and consider to be pretty obvious. These aren't some weird nerds cooped up in their parents' basement preaching crazy ideas they came up with. These are early adopters who are taking things that have already been discovered, bringing them together, and showing us how the findings can help us be wrong less frequently.

Rationalists tend to be a little "weird" though. And they tend to believe a lot of "weird" things. A lot of science-fiction-y things. They believe we're going to blend with robots and become transhumans soon. They believe that we may be able to freeze ourselves before we die, and then be revived by future generations. They believe that we may be able to upload our consciousness to a computer and live as a simulation. They believe that computers are going to become super powerful and completely take over the world.

Personally, I don't understand these things well enough to really speak to their plausibility. My impression so far is that rationalists have very good reasons for believing what they believe, and that they're probably right. But perhaps you don't share this impression. Perhaps you think those conclusions are wacky and ridiculous. Even if you think this, it's still possible that the techniques may be useful to you, right? It's possible that rationalists have misapplied the techniques in some ways, but that if you learn the techniques and add them to your arsenal, they'll help you level up. Consider this before writing rationality off as wacky.

Overview

So, what does rationality teach you? Here's my overview:

  • The difference between reality, and our models of reality (see map vs. territory).
  • That things just are their components. Airplanes are made up of quarks. "Airplane" is a concept we created to model reality.
  • To think in gray. To say, "I sense that x is true" and "I'm pretty sure that x is true" instead of "X is true".
  • To update your beliefs incrementally. To say, "I still don't think X is true, but now that you've showed me Y, I'm somewhat less confident." On the other hand, a Black And White Thinker would say, "Eh, even though you showed me Y, I still just don't think X is true."
  • How much we should actually update our beliefs when we come across a new observation. A little? A lot? Bayes' theorem has the answers; it is a fundamental component of rationality. (See the short worked example just after this list.)
  • That science, as an institution, prevents you from updating your beliefs quickly enough. Why? Because it requires a lot of good data before you're allowed to update your beliefs at all. Even just a little bit. Of course you shouldn't update too much with bad data, but you should still nudge your beliefs a bit in the direction that the data point toward.
  • To make your beliefs about things that are actually observable. Think: if a tree falls in a forest and no one hears it, does it make a sound? Adding this technique to your arsenal will help you make sense of a lot of philosophical dilemmas. 
  • To make decisions based on consequences. To distinguish between your end goal, and the stepping stones you must pass on your way there. People often forget what it is that they are actually pursuing, and get tricked into pursuing the stepping stones alone. Ex. getting too caught up moving up the career ladder.
  • How evolution really works, and how it helps explain why we are the way we are today. Hint: it's slow and stupid.
  • How quantum physics really works.
  • How words can be wrong.
  • Utilitarian ethics.
  • That you have A LOT of biases. And that by understanding them, you could side-step the pain that they would otherwise have caused you.
  • Similarly, that you have A LOT of "failure modes", and that by understanding them, you could side-step a lot of the pain that they would otherwise have caused you.
  • Lots of healthy mindsets you should take. For example:
    • Tsuyoku Naratai - "I want to become stronger!"
    • Notice when you're confused.
    • Recognize that being wrong is exciting, and something you should embrace - it means you are about to learn something new and level up!
    • Don't just believe the opposite of what your stupid opponent believes out of frustration and spite. Sometimes they're right for the wrong reasons. Sometimes there's a third alternative you're not considering.
    • To give something a fair chance, be sure to think about it for five minutes by the clock.
    • When you're wrong, scream "OOPS!". That way, you could just move on in the right direction immediately. Don't just make minor concessions and rationalize why you were only partially wrong.
    • Don't be content with just trying. You'll give up too early if you do that.
    • "Impossible" things are often not actually impossible. Consider how impossible wireless communication would seem to someone who lived 500 years ago. Try studying something for a year or five before you claim that it is impossible.
    • Don't say things to sound cool, say them because they're true. Don't be overly humble. Don't try to sound wise by being overly neutral and cautious.
    • "Mere reality" is actually pretty awesome. You could vibrate air molecules in an extremely, extremely precise way, such that you could take the contents of your mind and put them inside another persons mind? What???? Yeah. It's called talking.
    • Shut up and calculate. Sometimes things aren't intuitive, and you just have to trust the math.
    • It doesn't matter how good you are relative to others, it matters how good you are in an absolute sense. Reality doesn't grade you on a curve.
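
To make the Bayes bullet above concrete, here's the classic mammography example often used in introductions to Bayes' theorem (the numbers are the standard illustrative ones):

    # Bayes' theorem: 1% prior on the hypothesis, 80% true-positive rate,
    # 9.6% false-positive rate -- the classic mammography illustration.
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        joint_h = prior * p_e_given_h
        joint_not_h = (1 - prior) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    print(posterior(0.01, 0.80, 0.096))  # ~0.078: a positive test moves you
                                         # from 1% to ~7.8%, far less than
                                         # most people intuitively guess
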
Sound interesting? Good! It is!

Eliezer wrote about all of this stuff in bite sized blog posts (he claims it helps him write faster). About one per day. Originally, the collection of posts were referred to as The Sequences, and were organized into categories. More recently, the posts were refined and brought together into a book - Rationality: From AI to Zombies.

Personally, I believe the writing is dense and difficult to follow. Things like AI are often used as examples in places where a more accessible example could have been used instead. Eliezer himself confesses that he needs to "aim lower". Still, the content is awesome, insightful, and useful, so if you can make your way past some of the less clear explanations, I think you have a lot to gain. Personally, I find the Wiki and the article summaries to be incredibly useful. There's also HPMOR - a fanfic Eliezer wrote to describe the teachings of rationality in a more accessible way.

Gaps

So far, there hasn't been enough of a focus on applying rationality to help you win in everyday life. Instead, it's been focusing on solving big, difficult, theoretical problems. Eliezer mentions this in the preface of Rationality: From AI to Zombies. Developing the more practical, applied part of The Art is definitely something that needs to be done.

Learning how to rationally work in groups is another thing that really needs to be done. Unfortunately, rationalists aren't particularly good at working together. So far.

Community

From 2009-2014 (excluding 2010), there were surveys of the LessWrong readership. There were usually about 1,500 responders, which tells you something about the size of the community (note that there are people who read/lurk/comment, but who didn't submit the survey). Readers live throughout the globe, and tend to come from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc. crowd. There are also a lot of effective altruists - people who try to do good for the world, and who try to do so as efficiently as possible. See the wiki's FAQ for results of these surveys.

There are meet-ups in many cities, and in many countries. Berkeley is considered to be the "hub". See How to Run a Successful LessWrong Meetup for a sense of what these meet-ups are like. Additionally, there is a Slack group, and an online study hall. Both are pretty active.

Community members mostly agree with the material described in The Sequences. This common jumping off point makes communication smoother and more productive. And often more fulfilling.

The culture amongst LessWrongians is something that may take some getting used to. Community members tend to:

  • Be polyamorous.
  • Drink Soylent.
  • Communicate explicitly. Eg. "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."
  • Be a bit socially awkward (about 1/4 are on the autism spectrum).
  • Use lots of odd expressions.
In addition... they're totally awesome! In my experience, I've found them to be particularly caring, altruistic, empathetic, open-minded, good at communicating, humble, intelligent, interesting, reasonable, hard-working, respectful and honest. Those are the kind of people I'd like to spend my time amongst.

Diaspora

LessWrong isn't nearly as active as it used to be. In "the golden era", Eliezer along with a group of other core contributors would post insightful things many times each week. Now, these core contributors have fled to work on their own projects and do their own things. There is much less posting on lesswrong.com than there used to be, but there is still some. And there is still related activity elsewhere. See the wiki's FAQ for more.

Related Organizations

MIRI - Tries to make sure AI is nice to humans.

CFAR - Runs workshops that focus on making rationality useful in everyday life.


Meta:

Of course, I may have misunderstood certain things. E.g. I don't feel that I have a great grasp on Bayesianism vs. science. If so, please let me know.

Note: in some places, I exaggerated slightly for the sake of a smoother narrative. I don't feel that the exaggerations interfere with the spirit of the points made (DH6). If you disagree, please let me know by commenting.

Acting on your intended preferences - What does that look like in practice? (critical introspective questions)

4 Elo 03 May 2017 12:51AM

Part 1: Exploration-Exploitation
Part 2: Bargaining Trade-offs to your brain
  Part 2a: Empirical time management
Part 3: The time that you have
  Part 3a: A purpose finding exercise
Part 4: What does that look like in practice?
    Part 4a: Lost purposes – Doing what’s easy or what’s important
    Part 4b.1: In support of yak shaving
    Part 4b.2: Yak shaving 2
  Part 4c: Filter on the way in, Filter on the way out…
    Part 4d.1: Scientific method
    Part 4d.2: Quantified self
Part 5: Call to action


It’s all well and good to know that you should be doing good work, deep work, hard work, and work that you really value.  That time is really, really running out, and that sometimes you have to wrangle your brain to get it to consider the world-problem space in the right terms.  That is, you have to take actions that are the right trade-offs between the actions you want to do and the other actions you want to do.  But how do you do that?  How do you keep at it?

I suggest first doing them – making a start at it – then constantly checking that you are still doing the high-value actions.  How do you do that?

I suggest critical questions.  You want to install critical questions in your running consciousness.


Is this task the most important task right now?

If you have ever heard of an Eisenhower matrix, you'll know it's a very powerful organisation tool that got a mention in both Getting Things Done and The Seven Habits of Highly Effective People.  An Eisenhower matrix is a Punnett-square-style 2x2 grid that crosses importance against urgency.

             Important    Not important
Urgent       Do now       Delegate
Not urgent   Schedule     Don't do

Knowing this table and the suggested response to each type of task is interesting, but it doesn't teach us to feel it in System 1.
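
To make the table concrete, here is a minimal sketch of the matrix as a decision procedure, in Python (the function and its names are my own illustration, not anything from GTD or 7 Habits):

    # Toy sketch of the Eisenhower matrix as a decision procedure.
    # Purely illustrative; the mapping follows the table above.
    def eisenhower(important: bool, urgent: bool) -> str:
        if important and urgent:
            return "Do now"
        if important:          # important but not urgent
            return "Schedule"
        if urgent:             # urgent but not important
            return "Delegate"
        return "Don't do"      # neither

    print(eisenhower(important=True, urgent=False))  # Schedule
    print(eisenhower(important=False, urgent=True))  # Delegate

The code is trivial on purpose – the whole point is that every task resolves to exactly one of four responses, and the hard part is training System 1 to run the lookup automatically.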

Is this conversation valuable?

If you are in a conversation, you should check whether it's giving anyone anything good.  You don't need to check in any way other than thinking about it briefly.  But this can save you from many kinds of failure modes.  (Future post – what happens if your check-in returns "no good is coming from this conversation".)

Do I know how to do that?

When I used to look at my to-do lists, there would from time to time be tasks that were not actions – "python" doesn't really explain the task of learning how to code in Python.  This question is about fighting applause lights: tasks that let you rest easy as though they're handled, when actually you still don't know how to do them, even though they're written on your to-do list.

If I started again, would I do it like this again?

Say you’re yak shaving.  This question can help you.  You've reached a point where the value of information has changed: you are already so far into the exploration process that you know it’s time to turn the horse around and ride in the other direction.  Do you delay?  Do you keep riding to the end of the day and then turn back?  Or do you high-tail it out of there and bolt in the right direction?  (Counterpoint: it’s okay to reach milestones along the way – like the next river – and then turn around.  But I tend to suggest keeping this in mind: what am I waiting for?)

What’s the obvious next step to write down on my list?

Not my advice, but strong advice.

What am I feeling and needing right now?

Taking a page out of NVC (Nonviolent Communication – watch the video at double speed): getting in touch with yourself and showing yourself much-needed compassion for your actions will make a big difference to how you feel along the way.  I know a great number of people who WILL themselves from action to action, taking mammoth amounts of energy to control every step.  But what if there was another way?  What if, instead of forcing yourself to take the next step, you waited until you wanted to take it?

The universe does not care how you feel on the inside as you take the next step.  There is no great reward for being a martyr to your cause, suffering and forcing yourself to move forward through the hardship.  The universe does not care about your goals.

It is possible to die alone and unfulfilled.

Morbid as it is, I come from a school of thought where I have to remind myself of this or else I forget.  (If this idea is uncomfortable for you, then you should read about applicable advice, and consider reversing the advice to something like, "I can win, there is hope for me yet".)  For my part – I forget that I can bury myself in Facebook, in gossip, in revealed preferences that do not line up with my goals.  I forget that I could die alone having accomplished nothing, and that the universe would not care.

The universe does not care in both directions.  It does not care that you suffer with each step when you force yourself through the task so that your revealed preferences match your goals.  It also does not care if you don't do that – if you pause to compose yourself before walking into battle, if you are actually prepared.

How can I connect with this person?

This is about the social context – why I want to be in the presence of others at all.  I have in the past found myself trapped in a superficial world of "how are you?  I'm good thanks", which doesn't really line up with what I care about.  So why don't I just skip that and get into what I want to share?

What does this person want with what they have said to me?

People are not always excellent at saying what they mean.  That's why we make use of concepts like the steelman.  That's why we need to consider the filters at play, and often echo back what someone is saying in order to confirm what we have heard.

Does this contribute to my goals?

I find this a hard question to grasp.  The concept of goals in my mind is such an applause light that I can't ask this question and expect my brain to give me a mindful answer.  (I am still working on this.)

If you are not doing the high-value tasks for yourself – who will?


Take these questions, or your own introspection questions – questions that get to the root of what is going on, of what you are doing and why.  Ask them regularly.  Make it your internal operating system to ask the critical questions.  (For one crude way to scaffold the habit, see the sketch after the two lists below.)  Calibrate/train your System 1 to seek out these feelings:

  • Passion that comes from doing what you care about.
  • Curiosity that comes when you notice yourself doing something not strategic, not goal-aligned.
  • Excitement that comes from discovering that you need to turn the horse around.
  • Pride that comes from doing what you care about.
  • Calm that comes from knowing you are on the right path.
  • Sadness for what you leave behind on the journey to better things.

Tune into the other feelings; take them as the cue to start riding in the other direction:

  • Dread that you are about to waste another hour of your life
  • Alarm that things are all wrong
  • Despair about being stuck where you are
  • Fluster when things surprise you
  • Distracted because you are not doing the most important thing right now

But don’t take my word for it.  Look at the feelings yourself.
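
If you want a crude external scaffold while the habit is forming, you could even have a machine do the prompting.  Here's a toy Python sketch (entirely my own construction – the interval and the question list are arbitrary choices, not part of this series):

    # Toy sketch: surface a random critical question at a fixed interval,
    # as an external stand-in until the internal habit is installed.
    import random
    import time

    QUESTIONS = [
        "Is this task the most important task right now?",
        "Is this conversation valuable?",
        "Do I know how to do that?",
        "If I started again, would I do it like this again?",
        "What am I feeling and needing right now?",
        "Does this contribute to my goals?",
    ]

    while True:
        time.sleep(45 * 60)  # arbitrary choice: prompt every 45 minutes
        print(random.choice(QUESTIONS))

A pop-up script is no substitute for the internal habit, but it can catch you in the moments when your running consciousness has wandered off.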


The scientific method

[Image: diagram of the scientific method.  By ArchonMagnus – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=42164616]

How can I be scientific about this process?

What actually works?  What makes true progress on the goals?


Do the high-value things first, and now, and forever.  Constantly check whether you are doing the high-value things.  Ask critical questions, then answer them when they come up!  Check in between your System 1 and System 2: use those S1 feelings to trigger your S2 into asking a critical question.  Make predictions; use the scientific method.
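
To make "make predictions" concrete: one simple way of being scientific about your own process is to write predictions down with probabilities and score them later.  A minimal sketch, again in Python (the example predictions and the Brier-score framing are my additions, not from the post):

    # Minimal sketch of prediction tracking.  The Brier score is the mean
    # squared error between stated probability and outcome (0 or 1);
    # lower is better, and always answering 50% scores 0.25.
    predictions = [
        # (claim, probability you assigned, did it happen?)
        ("I will finish the draft by Friday", 0.8, True),
        ("Skipping the review will cost me nothing", 0.6, False),
        ("The new schedule survives two weeks", 0.7, True),
    ]

    brier = sum((p - outcome) ** 2 for _, p, outcome in predictions) / len(predictions)
    print(f"Brier score: {brier:.3f}")  # here: 0.163

If the score drifts towards 0.25, your predictions about your own behaviour are no better than flipping a coin – which is exactly the kind of thing a check-in should reveal.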


Meta: this post has been a long time coming.  I had to reread my past posts in order to get my mind to continue the train of thought that I was aiming for.  This post is missing some of the “call to action” that I was hoping to impart in it.  There will need to be another post in order to complete the series.  This post probably took me 5 hours spread over several weeks.



[Link] Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them

3 Stefan_Schubert 22 May 2017 06:31PM

[Link] Keeping up with deep reinforcement learning research: /r/reinforcementlearning

3 gwern 16 May 2017 07:12PM

[Link] Anthropic uncertainty in the Evidential Blackmail problem

3 Johannes_Treutlein 14 May 2017 04:43PM

Open thread, May 8 - May 14, 2017

3 Thomas 08 May 2017 08:10AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

On-line google hangout on approaches to the control problem (2017/6/13 7PM UTC)

3 whpearson 07 May 2017 07:13PM

I'd like to get more discussion on-line about approaches to the control problem, so I'm hosting a hangout.

I'll run it as a lean-coffee-style meeting with a broad theme of what to do about the control problem. People propose topics, we vote on which topics to discuss, and then we have a set period of time to discuss the most popular ones, with each topic's proposer going first.

Message me with your email address for an invite.

Voting on whether to continue a topic will be done via Slack, so video won't be mandatory. Topic write-ups will be on a Trello board.

 

On "Overthinking" Concepts

2 Bound_up 27 May 2017 05:07PM

Related to http://lesswrong.com/lw/1mh/that_magical_click/1hd7

 

I've NOT been confused by the problem of overthinking in the middle of performing an action. I understand perfectly well the disadvantages of using System 2 in a situation where time is sufficiently limited.

And maybe there are some other failure modes where overthinking has disadvantages.

But there's one situation where I'd often be accused by someone of "overthinking" something when I didn't even understand what they might mean, and that was in understanding concepts. I would think "Huh? How can thinking less about the concept you're explaining help me understand that concept more? I don't currently understand it; I can't just stay here! Even if you thought I needed to take longer to try and understand this, or that I needed more experience or to shorten the inferential gap, all of that would mean doing more thinking, not less."

Then I would think "Well, I must be misunderstanding the way they're using the word 'overthinking,' that's all." I'd ask for a clear explanation and...

"You're overthinking it."

Now I was overthinking the meaning of overthinking. This was really not good for my social reputation (or for their reputation for competence in my own mind).

.

Now, I think I got it. At last, I got it, all on my own.

I'm asking them to help me draw precise lines around their concept in thingspace, and they're going along with it (at first) until they realize...they don't HAVE precise lines. There's nothing there TO understand, or if there is, they don't understand it, either. Then they use the get-out-of-jail-free card of "You're overthinking."

.

Honestly, most nerds probably take them at their word that the problem is on their end. They may be used to there being subtle social things going on that they just won't easily understand, and if they do try to understand, they just look worse (for "overthinking" again). So this is a pretty good strategy for getting out of admitting that you don't know what you're talking about.

[Link] As there are a number of podcasts by LWers now, I've made a wiki page for them

2 OpenThreadGuy 26 May 2017 07:34AM

[Link] Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?

2 korin43 23 May 2017 04:38PM

Open thread, May 22 - May 28, 2017

2 Thomas 22 May 2017 05:44AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Learning Deep Learning the EASY way, with Keras

2 morganism 21 May 2017 07:48PM
