How did my baby die and what is the probability that my next one will?
Summary: My son was stillborn and I don't know why. My wife and I would like to have another child, but would very much not like to try if the probability of this happening again is above a certain threshold (we have already settled on one). None of the 3 doctors I have consulted was able to give a definitive cause of death, nor was any willing to give a numerical estimate of the probability (whether for reasons of legal risk, or something else) that our next baby will be stillborn. I am likely too mind-killed to properly evaluate my situation and would very much appreciate an independent probability estimate of what caused my son to die and, given that cause, of the recurrence risk.
Background: V (L's and my only biologically related living son) had no complications during birth, nor has he shown any signs of poor health whatsoever. L has a cousin who has had two miscarriages, and I have an aunt who had several stillbirths followed by 3 live births of healthy children. We know of no other family members who have had similar misfortunes.
J (my deceased son) was the product of a 31-week gestation. L (my wife and J's mother) is 28 years old, gravida 2, para 1. L presented to the physician's office for routine prenatal care and noted that she had not felt any fetal movement for the last five to six days. No fetal heart tones were identified, and it was determined that there had been an intrauterine fetal demise. L was admitted on 11/05/2015 for induction and was delivered of a nonviable, normal-appearing male fetus at approximately 1:30 on 11/06/2015.
Pro-Con Reasoning: According to a leading obstetrics textbook [1], causes of stillbirth are commonly classified into 8 categories: obstetrical complications, placental abnormalities, fetal malformations, infection, umbilical cord abnormalities, hypertensive disorders, medical complications, and undetermined. Below, I list the percentage of stillbirths in each category (which may be used as prior probabilities) along with some reasons for or against each (a rough bookkeeping sketch follows the list).
Obstetrical complications (29%)
- Against: No abruption detected. No multifetal gestation. No ruptured preterm membranes at 20-24 weeks.
Placental abnormalities (24%)
- For: Excessive fibrin deposition (as concluded in the surgical pathology report). Early acute chorioamnionitis (also concluded in the surgical pathology report, though Dr. M claimed this was caused by the baby's death, not the reverse). L has gene variants associated with deep vein thrombosis (AG on rs2227589 per 23andme raw data).
- Against: No factor V Leiden mutation (GG on rs6025 per 23andme raw data, confirmed via independent lab test). No prothrombin gene mutation (GG on l3002432 per 23andme raw data, confirmed via independent lab test). L was negative for the prothrombin G20210A mutation (lab test). Antithrombin III activity was within the normal reference range (lab test). Protein C activity was within the normal reference range (lab test). Protein S activity was within the normal reference range (lab test). Protein S antigen (free and total) was within the normal reference ranges (lab test).
Infection (13%)
- For: During the last week of August, L visited the home of a nurse who works in a hospital that we now know had frequent cases of CMV infection. CMV antibody IgM, CMV IgG, and Parvovirus B-19 antibody IgG values were outside of normal reference ranges.
- Against: Dr. M discounted the viral test results as the cause of death, since the levels suggested the infection had occurred years ago, and therefore could not have caused J's death. Dr. F confirmed Dr. M's assessment.
Fetal malformations (14%)
- Against: No major structural abnormalities. No genetic abnormalities detected (CombiSNP Array for Pregnancy Loss results showed a normal male micro array profile).
Umbilical cord abnormalities (10%)
- Against: No prolapse. No stricture. No thrombosis.
Hypertensive disorder (9%)
- Against: No preeclampsia. No chronic hypertension.
Medical complications (8%)
- For: On the nights of 10/28 and 10/29, L experienced very painful abdominal pains that could have been contractions. L remembers waking up on her back a few nights between 10/20 and 11/05 (it is unclear whether this belongs in this category or somewhere else).
- Against: No antiphospholipid antibody syndrome detected (determined via Beta-2 Glycoprotein I Antibodies [IgG, IgA, IgM] test). No maternal diabetes detected (determined via glucose test on 10/20).
Undetermined (24%)
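To make the bookkeeping I have in mind explicit, here is a rough Python sketch that treats the category percentages above as priors, scales each by a likelihood factor summarizing the for/against evidence, and renormalizes (the categories overlap, so the raw percentages sum to more than 100%). The likelihood factors below are placeholders I invented purely to illustrate the mechanics; they are not medically derived values, and I would welcome corrections to them.

```python
# Rough sketch: textbook category percentages as priors, scaled by
# made-up likelihood factors and renormalized. Illustrative only.

priors = {
    "obstetrical complications":     0.29,
    "placental abnormalities":       0.24,
    "undetermined":                  0.24,
    "fetal malformations":           0.14,
    "infection":                     0.13,
    "umbilical cord abnormalities":  0.10,
    "hypertensive disorders":        0.09,
    "medical complications":         0.08,
}

# Hypothetical factors: > 1 if the evidence above favors the category,
# < 1 if it argues against. These specific numbers are invented.
factors = {
    "obstetrical complications":     0.3,  # no abruption, membranes intact
    "placental abnormalities":       2.0,  # fibrin deposition on pathology
    "undetermined":                  1.0,
    "fetal malformations":           0.1,  # normal microarray
    "infection":                     0.3,  # serology points to old infection
    "umbilical cord abnormalities":  0.5,  # no prolapse/stricture/thrombosis
    "hypertensive disorders":        0.1,  # no preeclampsia or hypertension
    "medical complications":         0.7,
}

weights = {c: priors[c] * factors[c] for c in priors}
total = sum(weights.values())  # also corrects for the priors summing to 1.31
posterior = {c: w / total for c, w in weights.items()}

for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{cause:>30}  {p:5.1%}")
```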
What is the most likely cause of death? How likely is that cause? Given that cause, if we choose to have another child, how likely is it to survive its birth? Are there any other ways I could reduce uncertainty (additional tests, etc.) that I haven't listed here? Are there any other forums where these questions are more likely to get good answers? Why won't doctors give probabilities? Help with any of these questions would be greatly appreciated. Thank you.
If your advice to me is to consult another expert (in addition to the 2 obstetricians and 1 high-risk obstetrician I already have consulted), please also provide concrete tactics as to how to find such an expert and validate their expertise.
Contact Information: If you would like to contact me, but don't want to create an account here, you can do so at deprimita.patro@gmail.com.
[1] Cunningham, F. (2014). Williams obstetrics. New York: McGraw-Hill Medical.
EDIT 1: Updated to make clear that both V and J are mine and L's biological sons.
EDIT 2: Updated to add information on family history.
EDIT 3: On
Instrumental behaviour: Inbox zero - A guide
This will be brief.
Inbox zero is a valuable thing to maintain. It is promoted around the web, roughly, as the practice of keeping an empty inbox.
An email inbox collects a few things:
- junk
- automatic mail sent to you
- personal mail sent to you
- work sent to you
- (maybe - work you send to yourself because that's the best way to store information for now)
One way to archive old mail is into folders such as:
- Old as all hell (or other friendly name)
- 2014
- 2015
- 2016
A note about calibration of confidence
Background
In a recent Slate Star Codex post (http://slatestarcodex.com/2016/01/02/2015-predictions-calibration-results/), Scott Alexander made a number of predictions with associated confidence levels, and at the end of the year scored his predictions to determine how well calibrated he is. In the comments, however, a controversy arose over how to deal with 50% confidence predictions. As an example, here is a sample of Scott's predictions, including three at 50% confidence:
| | Proposition | Scott's Prior | Result |
|---|---|---|---|
| A | Jeb Bush will be the top-polling Republican candidate | P(A) = 50% | A is False |
| B | Oil will end the year greater than $60 a barrel | P(B) = 50% | B is False |
| C | Scott will not get any new girlfriends | P(C) = 50% | C is False |
| D | At least one SSC post in the second half of 2015 will get > 100,000 hits | P(D) = 70% | D is False |
| E | Ebola will kill fewer people in the second half of 2015 than in the first half | P(E) = 95% | E is True |
Scott goes on to score himself as having made 0/3 correct predictions at the 50% confidence level, which looks like significant overconfidence. He addresses this by noting that 3 data points are not much to go by, and that he could easily have been correct if any of those results had turned out differently. His resulting calibration curve is this:
[Image: Scott's 2015 calibration curve]
However, the commenters had other objections about the anomaly at 50%. After all, P(A) = 50% implies P(~A) = 50%, so the choice of "I will not get any new girlfriends: 50% confidence" is logically equivalent to "I will get at least 1 new girlfriend: 50% confidence", except that one resolves as true and the other as false. Therefore, the result seems sensitive only to the particular phrasing chosen, independent of the outcome.
One commenter suggests that close to perfect calibration at 50% confidence can be achieved by choosing whether to represent propositions as positive or negative statements by flipping a fair coin. Another suggests replacing 50% confidence with 50.1% or some other number arbitrarily close to 50%, but not equal to it. Others suggest getting rid of the 50% confidence bin altogether.
Scott recognizes that predicting A and predicting ~A are logically equivalent, and that choosing one phrasing over the other is arbitrary. But by including only A in his data set rather than ~A, he creates a problem when P(A) = 50%: the equally arbitrary choice of phrasing the prediction as ~A would have changed the calibration results despite being the same prediction.
Symmetry
This conundrum illustrates an important point about these calibration exercises. By convention, Scott phrases all of his propositions as statements to which he assigns probability greater than or equal to 50%, recognizing that he doesn't need to separately calibrate probabilities below 50%: the upper half of the calibration curve captures all the relevant information.
This is because the calibration curve is symmetric about the 50% mark, as implied by the mathematical relation P(X) = 1 - P(~X) and, equivalently, P(~X) = 1 - P(X).
We can enforce that symmetry by recognizing that when we make the claim that proposition X has probability P(X), we are also simultaneously making the claim that proposition ~X has probability 1-P(X). So we add those to the list of predictions and do the bookkeeping on them too. Since we are making both claims, why not be clear about it in our bookkeeping?
When we do this, we get the full calibration curve, and the confusion about what to do about 50% probability disappears. Scott’s list of predictions looks like this:
| | Proposition | Scott's Prior | Result |
|---|---|---|---|
| A | Jeb Bush will be the top-polling Republican candidate | P(A) = 50% | A is False |
| ~A | Jeb Bush will not be the top-polling Republican candidate | P(~A) = 50% | ~A is True |
| B | Oil will end the year greater than $60 a barrel | P(B) = 50% | B is False |
| ~B | Oil will not end the year greater than $60 a barrel | P(~B) = 50% | ~B is True |
| C | Scott will not get any new girlfriends | P(C) = 50% | C is False |
| ~C | Scott will get new girlfriend(s) | P(~C) = 50% | ~C is True |
| D | At least one SSC post in the second half of 2015 will get > 100,000 hits | P(D) = 70% | D is False |
| ~D | No SSC post in the second half of 2015 will get > 100,000 hits | P(~D) = 30% | ~D is True |
| E | Ebola will kill fewer people in the second half of 2015 than in the first half | P(E) = 95% | E is True |
| ~E | Ebola will kill as many or more people in the second half of 2015 as in the first half | P(~E) = 5% | ~E is False |
You will by now have noticed that there will always be an even number of predictions, and that exactly half of them are always true and half always false. In most cases, as with E and ~E, that means you get a 95%-likely prediction that is true and a 5%-likely prediction that is false, which is what you would expect. However, every 50%-likely prediction is accompanied by another 50% prediction, exactly one of which is true and one of which is false. As a result, it is actually not possible to make a binary prediction at 50% confidence that is out of calibration.
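For concreteness, here is a minimal Python sketch of this bookkeeping (my own illustration, not code from Scott's post or the comments): mirror each prediction with its complement at 1 - confidence, then bin by stated confidence and compare against the observed frequency.

```python
from collections import defaultdict

# Each prediction is (proposition, stated confidence, outcome).
predictions = [
    ("Jeb Bush will be the top-polling Republican candidate", 0.50, False),
    ("Oil will end the year greater than $60 a barrel",       0.50, False),
    ("Scott will not get any new girlfriends",                0.50, False),
    ("At least one SSC post in H2 2015 will get >100k hits",  0.70, False),
    ("Ebola will kill fewer people in H2 2015 than in H1",    0.95, True),
]

# Mirror every prediction with its complement at 1 - confidence.
mirrored = predictions + [
    ("NOT: " + prop, round(1.0 - conf, 2), not outcome)
    for prop, conf, outcome in predictions
]

# Bin by stated confidence and compute the observed frequency per bin.
bins = defaultdict(list)
for _, conf, outcome in mirrored:
    bins[conf].append(outcome)

for conf in sorted(bins):
    outcomes = bins[conf]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {conf:.0%} -> observed {observed:.0%} "
          f"({len(outcomes)} predictions)")
```

However the individual propositions resolve, the 50% bin always contains matched true/false pairs, so its observed frequency is pinned at exactly 50%.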
The resulting calibration curve, applied to Scott’s predictions, looks like this:
[Image: the full calibration curve, including the complementary predictions]
Sensitivity
By the way, this graph doesn't tell the whole calibration story; as Scott noted, it's still sensitive to how many predictions were made in each bucket. We can add "error bars" showing what would have resulted if Scott had made one more prediction in each bucket, depending on whether that extra prediction had come out true or false. The result is the following graph:
[Image: the calibration curve with error bars]
Note that the error bars are zero at the 0.5 point. That's because adding one more prediction to that bucket would have had no effect: its complement would land in the same bucket, keeping it perfectly balanced. That point is fixed by the inherent symmetry (see the sketch below).
I believe that this kind of graph does a better job of showing someone’s true calibration. But it's not the whole story.
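Here is a minimal sketch of how those error bars can be computed (my reconstruction of the procedure described above, with the 50% bin special-cased to reflect the mirrored bookkeeping):

```python
def error_bar(n_true, n_total, confidence):
    """Bin frequency if one more prediction landed in this bin and
    came out false (low) or true (high)."""
    if confidence == 0.5:
        # Mirrored bookkeeping means a new 50% prediction brings its
        # complement along: one true and one false outcome are added,
        # so the bin frequency stays pinned at exactly 1/2.
        return 0.5, 0.5
    low = n_true / (n_total + 1)         # the extra prediction is false
    high = (n_true + 1) / (n_total + 1)  # the extra prediction is true
    return low, high

print(error_bar(3, 6, 0.50))  # (0.5, 0.5): fixed by symmetry
print(error_bar(1, 1, 0.95))  # (0.5, 1.0): a sparse bin swings wildly
```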
Ramifications for scoring calibration (updated)
Clearly, it is not possible to make a binary prediction at 50% confidence that is poorly calibrated. This shouldn't come as a surprise; a 50% prediction between two choices is the correct prior when you have no information that discriminates between X and ~X. However, that doesn't mean you can improve your ability to make correct predictions just by giving them all 50% confidence and claiming impeccable calibration! An easy way to "cheat" your way into apparently good calibration is to take a large number of predictions you are highly (>99%) confident about, negate a fraction of them, and falsely record a lower confidence for those. If we're going to measure calibration, we need a scoring method that encourages people to write down the probabilities they actually believe, rather than faking low confidence and ignoring their data. We want people to claim 50% confidence only when they genuinely have 50% confidence, and our scoring method needs to encourage exactly that.
A first guess would be to look at that graph and do the classic assessment of fit: sum of squared errors. We can sum the squared error of our predictions against the ideal linear calibration curve. If we did this, we would want to make sure we summed all the individual predictions, rather than the averages of the bins, so that the binning process itself doesn’t bias our score.
If we do this, then our overall prediction score can be summarized by one number:

$$S = \frac{1}{N}\sum_{i=1}^{N}\big(P(X_i) - X_i\big)^2$$

Here P(X_i) is the assigned confidence in the truth of X_i, and X_i is the i-th proposition, with value 1 if it is True and 0 if it is False. S is the prediction score, and lower is better. Note that because these are binary predictions, the sum of squared errors gives an optimal score if you assign the probabilities you actually believe (i.e., there is no way to "cheat" your way to a better score by stating false confidence).
In this case, Scott's score is S=0.139; much of this comes from the 0.4/0.6 bracket. The worst possible score is S=1, and the best possible score is S=0. Attempting to fake perfect calibration by claiming 50% confidence for every prediction, regardless of the information you actually have available, yields S=0.25 and therefore isn't a particularly good strategy (at least, it won't make you look better calibrated than Scott).
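As a sketch (mine, using only the five example predictions above rather than Scott's full list, so the printed score won't equal 0.139), the computation looks like this:

```python
def brier_score(predictions):
    """Mean squared error between stated confidence and outcome (0/1)."""
    return sum((p - int(outcome)) ** 2
               for p, outcome in predictions) / len(predictions)

# Scott's five example predictions plus their complements.
preds = [(0.50, False), (0.50, False), (0.50, False),
         (0.70, False), (0.95, True)]
preds += [(1 - p, not outcome) for p, outcome in preds]

print(brier_score(preds))                              # ~0.249 on the examples
print(brier_score([(0.5, o) for o in (True, False)]))  # all-50% cheat: 0.25
```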
Several of the commenters pointed out that log scoring is another scoring rule, and one that works better in the general case. Before posting this I worked through the calculus to confirm that least-squares error does encourage an optimal strategy of honestly reporting confidence, but I had a feeling that it was an ad-hoc scoring rule and that better ones must be out there.
The logarithmic scoring rule looks like this:

$$S = \frac{1}{N}\sum_{i=1}^{N}\Big[X_i \ln P(X_i) + (1 - X_i)\ln\big(1 - P(X_i)\big)\Big]$$

Here again X_i is the i-th proposition, with value 1 if it is True and 0 if it is False. The base of the logarithm is arbitrary, so I've chosen base e, which makes it easier to take derivatives. This scoring method gives a negative number, and the closer to zero the better. The log scoring rule has the same honesty-encouraging properties as the sum of squared errors, plus the additional nice property that it penalizes a wrong prediction at 100% or 0% confidence with an appropriate score of minus infinity. When you claim 100% confidence and are wrong, you are infinitely wrong. Don't claim 100% confidence!
In this case, Scott's score is calculated to be S=-0.42. For reference, the worst possible score would be minus-infinity, and claiming nothing but 50% confidence for every prediction results in a score of S=-0.69. This just goes to show that you can't win by cheating.
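The same sketch for the log rule (note that Python's math.log(0) raises an error rather than returning minus-infinity, which at least matches the spirit of the infinite penalty for a wrong 100% claim):

```python
import math

def log_score(predictions):
    """Average log-probability assigned to what actually happened."""
    total = 0.0
    for p, outcome in predictions:
        total += math.log(p) if outcome else math.log(1 - p)
    return total / len(predictions)

# Claiming 50% for everything scores ln(0.5) regardless of outcomes.
print(log_score([(0.5, True), (0.5, False)]))  # -0.693...
```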
Example: Pretend underconfidence to fake good calibration
In an attempt to appear like I have better calibration than Scott Alexander, I am going to make the following predictions. For clarity I have included the inverse propositions in the list (as those are also predictions that I am making), but at the end of the list so you can see the point I am getting at a bit better.
| | Proposition | Quoted Prior | Result |
|---|---|---|---|
| A | I will not win the lottery on Monday | P(A) = 50% | A is True |
| B | I will not win the lottery on Tuesday | P(B) = 66% | B is True |
| C | I will not win the lottery on Wednesday | P(C) = 66% | C is True |
| D | I will win the lottery on Thursday | P(D) = 66% | D is False |
| E | I will not win the lottery on Friday | P(E) = 75% | E is True |
| F | I will not win the lottery on Saturday | P(F) = 75% | F is True |
| G | I will not win the lottery on Sunday | P(G) = 75% | G is True |
| H | I will win the lottery next Monday | P(H) = 75% | H is False |
| … | | | |
| ~A | I will win the lottery on Monday | P(~A) = 50% | ~A is False |
| ~B | I will win the lottery on Tuesday | P(~B) = 34% | ~B is False |
| ~C | I will win the lottery on Wednesday | P(~C) = 34% | ~C is False |
| … | | | |
Look carefully at this table. I've thrown in a particular mix of predictions that I will or will not win the lottery on certain days, in order to use my extreme certainty about the result to generate a particular mix of correct and incorrect predictions.
To make things even easier for me, I'm not even planning to buy any lottery tickets. Knowing this, an honest estimate of the probability of me winning the lottery is astronomically small. The odds of winning are about 1 in 14 million (for the Canadian 6/49 lottery), and I'd have to win by accident (one of my relatives buying me a ticket?). Not only that, but since the lottery is only held on Wednesday and Saturday, most of these scenarios are even more implausible: the lottery corporation would have to hold the draw by mistake.
I am confident I could make at least 1 billion similar statements of this exact nature and get them all right, so my true confidence must be upwards of (100% - 0.0000001%).
If I assemble 50 of these types of strategically-underconfident predictions (and their 50 opposites) and plot them on a graph, here’s what I get:
[Image: calibration curve for the strategically underconfident lottery predictions]
You can see that the problem with cheating doesn’t occur only at 50%. It can occur anywhere!
But here’s the trick: The log scoring algorithm rates me -0.37. If I had made the same 100 predictions all at my true confidence (99.9999999%), then my score would have been -0.000000001. A much better score! My attempt to cheat in order to make a pretty graph has only sabotaged my score.
By the way, what if I had gotten one of those wrong, and actually won the lottery one of those times without even buying a ticket? In that case my score is -0.41 (the wrong prediction had a probability of 1 in 10^9 which is about 1 in e^21, so it’s worth -21 points, but then that averages down to -0.41 due to the 49 correct predictions that are collectively worth a negligible fraction of a point).* Not terrible! The log scoring rule is pretty gentle about being very badly wrong sometimes, just as long as you aren’t infinitely wrong. However, if I had been a little less confident and said the chance of winning each time was only 1 in a million, rather than 1 in a billion, my score would have improved to -0.28, and if I had expressed only 98% confidence I would have scored -0.098, the best possible score for someone who is wrong one in every fifty times.
This has another important ramification: if you're going to honestly test your calibration, you shouldn't pick which predictions you make. It is easy to improve your score by throwing in a couple of predictions you are very certain about, like not winning the lottery, and by making few predictions you are genuinely uncertain about. It is fairer to take a list of propositions generated by somebody else, and then pick your probabilities. Scott demonstrates his honesty by making public predictions about a mix of things he was genuinely uncertain about, but if he wanted to cook his way to a better score in the future, he would avoid making any predictions in the 50% category that he wasn't forced to.
Input and comments are welcome! Let me know what you think!
* This result surprises me enough that I would appreciate if someone in the comments can double-check it on their own. What is the proper score for being right 49 times with 1-1 in a billion certainty, but wrong once?
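Here is a quick Python check of that footnote under the stated assumptions (50 predictions total, 49 correct at 1 - 10^-9 confidence, one wrong):

```python
import math

p = 1 - 1e-9
score = (49 * math.log(p) + math.log(1 - p)) / 50
print(score)  # ~ -0.414, consistent with the -0.41 quoted above

# The hedged variants from the main text:
for q in (1 - 1e-6, 0.98):
    print((49 * math.log(q) + math.log(1 - q)) / 50)
# ~ -0.276 at one-in-a-million, ~ -0.098 at 98% confidence
```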
Open Thread, January 4-10, 2016
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
What EAO has been doing, what it is planning to do, and why donating to EAO is a good idea
EA Outreach is the organization behind the EA Global conference, the EffectiveAltruism.org website, and various other projects related to the development of the Effective Altruism community. This post explains a bit more about what we are working on, and why we think that donating to us might currently be the best use of marginal donations in the EA movement. It is both part of our annual fundraiser and a general attempt to communicate better what we have been working on and what we will be working on in the coming months.
What is EAO's core vision?
Ironically, our focus is not what our name might naively suggest. Though EAO was founded with the goal of rapidly growing the EA community, we have since realized that pure growth is not the best thing to focus on. Instead, our focus is much better summarized by the statement:
Understand the EA community, and help guide it towards the worlds in which it can have the most impact
Concretely, this means that EAO is trying to do two things:
- Do research on the composition, structure and dynamics of the EA community
- Build projects that steer the EA community towards a better future (using the previously acquired knowledge)
Keep in mind that these are our highest-level goals; in our everyday work we tackle projects that are much more concrete than this.
So what is EAO actually doing every day?
On any given day, there is a selection of concrete projects that we are working on. Most of our projects generate direct value while also helping us gather information about the structure and function of EA at large. Since I don't want to throw a giant wall of text at you, here is a diagram that summarizes the different projects we worked on in 2015. Feel free to ask more detailed questions in the comments or read our full plan for 2016.
[Image: diagram of EAO's 2015 projects]
The important thing to keep in mind in all of this is that EAO has existed for less than a year, and consisted of fewer than 3 people for most of that year. The above is quite impressive for a group that small, and I think we have basically kept up that level of productivity per individual as we expanded our team (to a total size of 6 people).
That said, though I think the above projects are somewhat indicative of what EAO does on any given day, they are also slightly misleading. The projects we tackled during 2015 were immediately valuable, but they weren't driven by much of a larger vision, or by a detailed model of where we want EA to go. This has changed in the last few months, and during 2016 we will be running projects that are both significantly larger in scope and backed by significantly more sophisticated models. Those projects, and why we think they are valuable, are what I will spend the rest of this article on (and something I am much more excited by than talking about the things we have already done).
What makes EAO a valuable target for donations?
Jonathon Smith, a donor in our most recent fundraiser, summarized his perspective on EAO as follows:
"A quick note on what encouraged me to donate to EAO.
I navigate robotic spacecraft to destinations in deep space at JPL. If you're trying to get somewhere like Jupiter or Saturn, the most important course corrections you can make are right after launch. We always have a crack team of analysts closely monitoring a spacecraft just after it leaves Earth, because the energy required to change the spacecraft's heading grows exponentially with time; point in the wrong direction too long and the mission is lost.
EA is moving really, really fast, and small adjustments to its development now are likely to have huge consequences down the road. With EAO, we have a team of talented people focused on nothing but making sure it's heading in the right direction. They are doing a lot of really impressive, concrete work (like book promotion, EAG, VIP outreach etc), but I think the greatest value in keeping them well funded is to have a vigilant eye watching for obstacles and helping navigate them at this very important, early stage of the movement."
(Thanks for the kind words Jonathon!)
I think Jonathon basically gets to the point here, and I want to expand a bit on what he said above.
Jonathon says that it is really important to have a group of people watching where EA at large is headed, and making appropriate adjustments to its course. This seems reasonable: it is unlikely that EA could maximize its impact without reflecting on its overall path, since coordination problems are common. But it clearly isn't the case that nobody in EA is reflecting on where EA is going. Quite the opposite! If the average discussion at EA Global is any indication, the overall composition and trajectory of EA is one of the most common topics of conversation in the EA community!
So the question arises: if we already have so many people thinking about where EA is headed, why add more cooks to the kitchen? And why found a whole organization dedicated to understanding and supporting EA's big-picture trajectory?
I think there are two main reasons for why a dedicated community organization like EAO should exist:
1. Coordination is difficult, and requires infrastructure and time
Right now, the different organizations in EA are doing a pretty good job at coordinating. As has repeatedly been mentioned during EA Global, almost all of the major organizations associated with EA are supporting each other. They encourage new potential hires to first check with other EA organizations to see whether other organizations might have a bigger need for their specific talents. They coordinate on fundraisers to avoid unhealthy competition, and they generally do a good job at exchanging new information and important considerations.
But all of this comes at a cost. EA organizations are growing rapidly, and it is becoming less and less feasible for most employees of EA organizations to talk to each other individually. Judging from the Google Alerts I have set up for Effective Altruism, over the past two months EA has been averaging something like two news headlines per day. Reading these, processing them, and chasing down the implications of each takes the EAO team a lot of time. Other organizations cannot spare that many resources; things like this distract them from the concrete problems they want to be working on.
A dedicated community organization can solve this problem. By creating infrastructure, summarizing and consolidating information and facilitating communication between different organizations, such an organization can significantly reduce the cognitive overhead for all other EA organizations. It can create periodical updates on the current state of the EA community, screen the onslaught of information for the most important bits, and keep a constant eye on whether two organizations are significantly duplicating efforts.
And facilitating that kind of coordination takes time. Right now, it is almost a half-time job to keep up with the new developments around EA. In the near future, it will be a full-time job, and soon after that it will take a multi-person team to keep up with the onslaught of information. An organization like EAO can make sure that this effort only needs to be exerted once.
It is important to note that that effort does not have to be exerted solely by a group of individuals on the EAO team. Healthy communities develop a collective intelligence on their own, and systems like the EA Forum, LessWrong or the Facebook upvote function serve as similar information filters that allow the community at large to be kept up to speed without everyone reading through all the information. But for this kind of collective intelligence to exist, we need infrastructure. We need to make sure that platforms like the EA Forum are well-maintained and are used in a way that allows the community at large to understand what is happening in EA. Again, a good community organization will notice that certain infrastructure is missing, and have the available resources and expertise to build whatever is lacking.
2. Thinking rationally about your own tribe is really really hard
One important fact to acknowledge is that being part of EA encourages the same kinds of irrational thought patterns that sports teams, political parties, and other communities tend to encourage. EA is a tribe, and thinking about your own tribe is hard. Humans evolved as social creatures, and we are extremely good at advocating for "fair" rules and guidelines that "accidentally" end up serving our own interests. There is a whole literature on self-serving bias, and in particular on how it extends to our opinions about social rules and guidelines.
This is a problem. This means that most of the time when I come up with an idea for what the EA community at large should do, and what kind of rules and virtues we should be endorsing, it will internally feel like I am proposing fair rules that everyone would obviously agree with, but unconsciously I am nudging the social context in a way that favors me. Noticing this kind of bias is extremely difficult (though some debiasing techniques appear to work at least a bit).
As most of these unconscious biases tend to work, the more hasty we are in our decisions, and the quicker we have to decide, the more we are affected by them. If we don't reflect on the reasons behind our sense of fairness, it is very likely that self-serving motivations will be one of the biggest driving forces behind it. Thinking rigorously, having externally verified frameworks as well as consulting many independent opinions from all over the community and outside of it, all help in mitigating the effect of this bias.
But again, most people and organizations do not have the time to build these kinds of frameworks, or to work through their implicit biases about what the EA community should do. And certainly none of them have the time to run frequent surveys that compile information from inside and outside the community to get access to a balanced viewpoint. Building these kinds of frameworks and expertise takes time. And in this domain our gut judgement will be wrong more often than not, making it key that someone has put the relevant work into this.
A community organization can again solve this problem. Such an organization can take the time to build formal frameworks of how EA works. It can put significant resources into getting a balanced viewpoint by talking to all the different parts of the community, and it can extend its reach out into the world at large and get inputs from experts in community organizing, sociological modeling, cognitive science and many other diverse viewpoints. It can focus on proposing good changes to the EA community, since it doesn't have to split its attention with another problem.
(That said, this is a really hard problem, and I don't know whether it is possible at all to avoid getting sucked into rationalization, political thinking, and ingroup/outgroup politics. This might just be an intractable problem, though it does seem likely to me that any organization that is not consciously aware of these problems will fall prey to them.)
Summary
Both coordinating large groups of people and setting up an environment to think rationally about your own tribe require a significant investment of time and resources that other EA organizations should not be distracted by. A dedicated community organization can take care of that distraction, and make sure that we create an infrastructure in EA that supports the intellectual development of the community, while taking precautions to not fall prey to self-serving biases when proposing those changes.
Can EAO be that organization?
The key question left now is: "Does EAO have the talent and capacity to be the organization I outlined above?"
I think the answer is yes. The EAO team has both shown that it is able to execute on the relevant tasks in the past, and its team composition features a rare combination of skills that makes the current EAO team particularly well suited to the role that I outlined above.
Here is a list of, I think, the most important facts about EAO when trying to assess whether it is suited for the role it is trying to play:
- We are a part of the Centre for Effective Altruism, which gives us direct access to many EA organizations
- Our team has extensive experience in organizing events for the EA community:
- Tyler Alterman has organized dozens of VIP dinners, talks and other events in the EA community
- I (Oliver Habryka) have helped organize both the EA summit of 2014 and EA Global in 2015, and have been organizing community events for the rationality community for over 2 years, such as the HPMOR wrap party, and the Bay Area Solstice in 2014.
- Julia Wise has organized meetups for the EA community for over 4 years
- Peter Buckley has co-founded multiple EA-Chapters at the University of Pennsylvania and has extensive experience in coordinating student chapters
- The EAO team is very well connected to many different branches of EA. Part of our team shares an office with the Center for Applied Rationality and the Machine Intelligence Research Institute, while being part of CEA directly connects us to everything happening in Britain. Having worked closely with the EAs in Australia during EA Global, we are also closely connected to the EA community there.
- We are very well connected not only to the community of active members, but also to the larger network of donors interested in effective interventions. With our work on EA Global, EA Ventures and our general VIP outreach we gained deeper insights into what the larger philanthropic community is interested in, what kind of opportunities entrepreneurs are interested in, and what projects are possible to run in the framework of Effective Altruism.
- The EAO team has both people with a web-development and web-design background, allowing it to create websites and web-applications from scratch without needing to rely on outside contractors. This significantly speeds up our ability to create infrastructure for the EA community.
I don't think there is right now any other group of people who would be as well suited to the job as the current EAO team is.
How are you going to do it?
Since this article is already quite long, and its purpose is more to explain the bigger picture around EA Outreach, I will try to summarize the concrete projects we have planned for 2016 relatively quickly. Here is another diagram summarizing the projects we are planning for 2016. If you are interested in more detail, please feel free to read our full plan for 2016.
[Image: diagram of EAO's planned 2016 projects]
How do I help?
EAO is right now running its first annual fundraiser, and we are still facing a significant funding gap, so donating money is extremely helpful. We are currently operating with less than a 12-month runway, and though the current members of EAO are quite comfortable with risk and instability, we would still like to be able to sustain our current level of operations and expand by hiring additional community organizers and building better EA infrastructure. If you think that what we are doing is valuable, please consider donating to us here.
It's also important to note that our current lack of runway creates a lot of strategic uncertainty, which might cause us to make worse decisions than we would have made otherwise. Having reliable funding and a decent runway allows us to build much more reliable infrastructure, since we can secure stability for our systems and new hires.
What is EAO going to do with my money?
If you are interested in a more detailed overview of EAO's expenses, you can read our full plan for 2016 here. The plan lays out a variety of funding levels, though right now we are still trying to cover our basic expenses. For those of you who don't want to read through another giant wall of text, here is a quick summary:
EAO's money will be spent on the following things, roughly in this order:
- Salaries of the current core team
- Contractors and hires for EA Global, EAGx and independent chapter building
- Equipment and tech for our web infrastructure and design work
- Scholarships for the most promising attendees to EA Global
Ok, but what about ...?
If you have any additional questions, please feel free to ask in the comments, send me an email at oliver@eaglobal.org or schedule a Skype chat with our CEO Kerry Vaughan here.