Celebrating All Who Are in Effective Altruism
Elitism and Effective Altruism
Many criticize Effective Altruists as elitist. While this criticism is vastly overblown, it unfortunately has some basis - not only from the outside looking in, but also within the movement itself, where some explicitly argue for elitism.
Within many EA circles, there are status games and competition around doing “as much as we can,” and in many cases even judging and shaming - usually implicit and unintended, but no less real - of those whom we might term softcore EAs. These are people who identify as EAs and donate money and time to effective charities, but otherwise lead regular lives, as opposed to devoting the brunt of their resources to advancing human flourishing, as hardcore EAs do. To be clear, there is no definitive, hard distinction between softcore and hardcore EAs, but it is a useful heuristic to employ, as long as we keep in mind that softcore and hardcore are poles on a spectrum rather than binary categories.
We should help softcore EAs feel proud of what they do, and beware of implying that being a softcore EA is somehow deficient, or simply the start of an inevitable path to becoming a hardcore EA. This sort of mentality has caused people I know to feel guilty and ashamed, and has led some to leave the EA movement. Remember that we all suffer from survivorship bias, since we see those who remained and not those who left - I specifically talked to people who left, and tried to get their takes on why they did so.
I suggest we aim to respect people wherever they are on the softcore/hardcore EA spectrum. I propose that, from a consequentialist perspective, negative attitudes toward softcore EAs are counterproductive for doing the most good for the world.
Why We Need Softcore EAs
Even if the individual contributions of softcore EAs are much less than the contributions of individual hardcore EAs, it’s irrational and anti-consequentialist to fail to acknowledge and celebrate the contributions of softcore EAs, and yet that is the status quo for the EA movement. As in any movement, the majority of EAs are not deeply committed activists, but are normal people for whom EA is a valuable but not primary identity category.
All of us were softcore EAs once - if you are a hardcore EA now, envision yourself back in those shoes. How would you have liked to be treated? Acknowledged and celebrated, or pushed to do more and more and more? How many softcore EAs around us are suffering right now under the pressure of expectations to ratchet up their contributions?
I get it. I myself am driven by powerful emotional urges to reduce human suffering and increase human flourishing. Besides my full-time job as a professor, which takes about 40 hours per week, I’ve been working 50-70 hours per week for the last year and a half as the leader of an EA and rationality-themed meta-charity. Like all people, when I don’t pay attention, I fall unthinkingly into the mind projection fallacy, assuming other people think like I do and share my values, as well as my capacity for productivity and impact. I have a knee-jerk pattern, as part of my emotional self, of identifying with and giving social status to fellow hardcore EAs, and considering us an in-group above softcore EAs.
These are natural human tendencies, but destructive ones. From a consequentialist perspective, it weakens our movement and undermines our capacity to build a better world and decrease suffering for current and future humans and other species.
More softcore EAs are vital for the movement itself to succeed. Softcore EAs can help fill talent gaps and donate to effective direct-action charities, having a strong positive impact on the outside world. Within the movement, they support the hardcore EAs emotionally by giving them a sense of belonging, safety, security, and encouragement, which are key for motivation and for mental and physical health. Softcore EAs also donate to and volunteer for EA-themed meta-charities, as well as providing advice and feedback, and serving as evangelists for the movement.
Moreover, softcore EAs remind hardcore EAs of the importance of self-care and taking time off for themselves. This is something we hardcore EAs must not ignore! I’m speaking from personal experience here.
Fermi Estimates of Hardcore and Softcore Contributions
If we add up the amount of resources contributed to the movement by softcore EAs, they will likely add up to substantially more than the resources contributed by hardcore EAs. For instance, the large majority of those who took the Giving What We Can and The Life You Can Save pledges are softcore EAs, and so are all the new entrants to the EA movement, by definition.
To attach some numbers to this claim, let’s do a Fermi Estimate that uses some educated guesses to get at the actual resources each group contributes. Say that for every 100 EAs, there are 5 hardcore EAs and 95 softcore EAs. We can describe softcore EAs as contributing anywhere from 1 to 10 percent of their resources to EA causes (this is the range from The Life You Can Save pledge to the Giving What We Can pledge), so let’s guesstimate around 5 percent. Hardcore EAs we can say give an average of 50% of their resources to the movement. Using the handy Guesstimate app, here is a link to a model that shows softcore EAs contribute 480 resources, and hardcore EAs contribute 250 resources per 100 EAs. Now, these are educated guesses, and you can use the model I put together to put in your own numbers for the number of hardcore and softcore EAs per 100 EAs, and also the percent of their resources contributed. In any case, you will find that softcore EAs contribute a substantial amount of resources.
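For readers who want to check the arithmetic without opening the Guesstimate model, here is a minimal Python sketch of the same calculation. The 100-unit resource budget per person is a normalization I'm assuming so that percentages map directly onto units; the point estimates give 475 rather than the ~480 the Guesstimate model reports, presumably because Guesstimate samples from ranges rather than using single values.

```python
# Point-estimate version of the Guesstimate model above.
# Assumes each EA controls a notional 100 resource units, so that
# "percent of resources given" converts directly into units.
PER_PERSON_RESOURCES = 100

n_hardcore, n_softcore = 5, 95               # per 100 EAs
share_hardcore, share_softcore = 0.50, 0.05  # fraction of resources contributed

hardcore_total = n_hardcore * PER_PERSON_RESOURCES * share_hardcore
softcore_total = n_softcore * PER_PERSON_RESOURCES * share_softcore

print(f"hardcore EAs: {hardcore_total:.0f} resources per 100 EAs")  # 250
print(f"softcore EAs: {softcore_total:.0f} resources per 100 EAs")  # 475 (~480 with sampled ranges)
```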
We should also compare the giving of softcore EAs to that of members of the general public, to get a better grasp of the benefits softcore EAs provide to the world. Let’s say a typical member of the general public contributes 3.5% of her resources to charitable causes, compared to 5% for softcore EAs. Being generous, we can estimate that the giving of non-EAs is 100 times less effective than that of EAs. Thus, using the same handy app, here is a link to a model that demonstrates the impact of giving by a typical member of the general public, 3.5, vs. the impact of giving by a softcore EA, 500. Now, the impact of giving by a hardcore EA is of course going to be higher, 5000 as opposed to 500, but again, we have to remember that there are many more softcore EAs who give resources. You’re welcome to plug in your own numbers if you think my suggested figures don’t match your intuitions. Regardless, you can see how high-impact a typical softcore EA is compared to a typical member of the general public.
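This second model can be sketched the same way; again, the per-person budget of 100 units is my own normalization, and the 1x/100x effectiveness multipliers are the figures assumed in the paragraph above.

```python
def impact(share_given, effectiveness, per_person=100):
    """Impact = resource units given times the relative effectiveness of the giving."""
    return per_person * share_given * effectiveness

print(impact(0.035, 1))   # typical member of the public:    3.5
print(impact(0.05, 100))  # softcore EA:                   500.0
print(impact(0.50, 100))  # hardcore EA:                  5000.0
```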
Effective Altruism, Mental Health, and Burnout: A Personal Account
About two years ago, in February 2014, my wife and I co-founded our meta-charity. In the summer of that year, she suffered a nervous breakdown due to burnout over running the organization. I had to - or to be accurate, chose to - take over both of our roles in managing the nonprofit, assuming the full burden of leadership.
In the Fall of 2014, I myself started to develop a mental disorder from the strain of doing both my professor job and running the organization, while also taking care of my wife. It started with heightened anxiety, which I did not recognize as something abnormal at the time - after all, with the love of my life recovering very slowly from a nervous breakdown and me running the organization, anxiety seemed natural. I was flinching away from my problem, not willing to recognize it and pretending it was fine, until some volunteers at the meta-charity I run – most of them softcore EAs – pointed it out to me.
I started to pay more attention to this, especially as I began to experience fatigue spells and panic attacks. With the encouragement of these volunteers, who essentially pushed me to get professional help, I began to see a therapist and take medication, which I continue to do to this day. I scaled back on the time I put into the nonprofit, from 70 hours per week on average to 50 hours per week. Well, to be honest, I occasionally put in more than 50, as I’m very emotionally motivated to help the world, but I try to restrain myself. The softcore volunteers at the meta-charity I run know about my workaholism and the danger of burnout for me, and remind me to take care of myself. I also need to remind myself constantly that doing good for the world is a marathon and not a sprint, and that in the long run, I will do much more good by taking it easy on myself.
Celebrating Everyone
As a consequentialist, I find that the analysis above, together with my personal experience, convinces me that the accomplishments of softcore EAs should be celebrated alongside those of hardcore EAs.
So what can we do? We should publicly showcase the importance of softcore EAs. For example, we can encourage the publication of articles that give softcore EAs the recognition they deserve, alongside articles about those who give a large portion of their earnings and time to charity. We can invite a softcore EA to speak about her or his experiences at the 2016 EA Global. We can publish interviews with softcore EAs. Now, I’m not suggesting that most speakers, articles, or interviews should feature softcore EAs. Overall, my take is that it’s appropriate to celebrate individual EAs in proportion to their labors, and as the numbers above show, hardcore EAs individually contribute quite a bit more than softcore EAs. Yet we as a movement need to go against the current norm of not celebrating softcore EAs, and these are just some specific steps that would help us achieve this goal.
Let’s celebrate all who engage in Effective Altruism. Everyone contributes in their own way. Everyone makes the world a better place.
Acknowledgments: For their feedback on draft versions of this post, I want to thank Linch (Linchuan) Zhang, Hunter Glenn, Denis Drescher, Kathy Forth, Scott Weathers, Jay Quigley, Chris Waterguy (Watkins), Ozzie Gooen, Will Kiely, and Jo Duyvestyn. I bear sole responsibility for any oversights and errors remaining in the post, of course.
A different version of this, without the Fermi estimates, was cross-posted on the EA Forum.
EDIT: added link to post explicitly arguing for EA elitism
Neutral hours: a tool for valuing time
Prioritisation is mostly about working out how to trade different resources off against one another. Prioritisation problems come at different scales: for individuals, for companies or organisations, for the world at large. At the Global Priorities Project we’re mostly interested in the large-scale questions. But we sometimes have something to say about smaller scale problems, too.
I’ve just tidied and released old research notes (mostly from 2013) on the personal prioritisation problem of how to value time spent on different activities. This is primarily of use for individuals making decisions about how to spend their time, money, and mental energy.
Abstract: We get lots of opportunities to convert between time and money, and it’s hard to know which ones to take, since they also use up other mental resources. I introduce the neutral hour as a tool for thinking about how to make these comparisons. A neutral hour is an hour spent such that your mental energy is at the same level at the start and at the end. I work through some examples of how to use this tool, look at implications for some common scenarios, and explore the theory behind them.
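To give a flavor of the tool before you open the full document, here is a toy sketch of one way it might be applied. The recovery-time model and the dollar figure are my own illustrative assumptions, not taken from the research notes.

```python
def neutral_hours(clock_hours, recovery_hours):
    # One simple reading of the tool: an activity's cost in neutral
    # hours is its clock time plus however long it takes to restore
    # your mental energy to its starting level.
    return clock_hours + recovery_hours

draining_meeting = neutral_hours(1.0, 0.5)  # 1.5 neutral hours
restful_walk     = neutral_hours(1.0, 0.0)  # 1.0 neutral hour (energy unchanged)

# If you value a neutral hour at, say, $20, the meeting costs $30 of
# time-and-energy, which is the number to weigh against what it buys you.
print(draining_meeting, restful_walk)
```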
There may be benefits for broader prioritisation questions. Since societies are composed of individuals, it could help to know how to value time savings or costs to individuals when performing cost-benefit analysis on larger projects. And there may be techniques for comparing between different resources that we could usefully apply in wider contexts. However, we think these benefits are secondary. We’re releasing this work now to let others take advantage of it, either for personal benefit or to build on it and release easier-to-use guidance or tools.
You can find the full document here. I'm happy to answer questions and I'd love to know if people have thoughts on this material.
Replace the Symbol with the Substance
Continuation of: Taboo Your Words
Followup to: Original Seeing, Lost Purposes
What does it take to—as in yesterday's example—see a "baseball game" as "An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions"? What does it take to play the rationalist version of Taboo, in which the goal is not to find a synonym that isn't on the card, but to find a way of describing without the standard concept-handle?
You have to visualize. You have to make your mind's eye see the details, as though looking for the first time. You have to perform an Original Seeing.
Is that a "bat"? No, it's a long, round, tapering, wooden rod, narrowing at one end so that a human can grasp and swing it.
Is that a "ball"? No, it's a leather-covered spheroid with a symmetrical stitching pattern, hard but not metal-hard, which someone can grasp and throw, or strike with the wooden rod, or catch.
Are those "bases"? No, they're fixed positions on a game field, that players try to run to as quickly as possible because of their safety within the game's artificial rules.
The chief obstacle to performing an original seeing is that your mind already has a nice neat summary, a nice little easy-to-use concept handle. Like the word "baseball", or "bat", or "base". It takes an effort to stop your mind from sliding down the familiar path, the easy path, the path of least resistance, where the small featureless word rushes in and obliterates the details you're trying to see. A word itself can have the destructive force of cliché; a word itself can carry the poison of a cached thought.
Giving What We Can - New Year drive
If you’ve been planning to get around to maybe thinking about Effective Altruism, we’re making your job easier. A group of UK students has set up a drive for people to sign up to the Giving What We Can pledge to donate 10% of their future income to charity. It does not specify the charities - that decision remains under your control. The pledge is not legally binding, but honour is a powerful force when it comes to promising to help. If 10% is a daunting number, or you don't want to sign away your future earnings in perpetuity, there is a Try Giving scheme in which you may donate less money for less time. I suggest five years (that is, from 2015 to 2020) of 5% as a suitable "silver" option to the 10%-until-retirement "gold medal".
We’re hoping to take advantage of the existing Schelling point of “new year” as a time for resolutions, as well as building the kind of community spirit that gets people signing up in groups. If you feel it’s a word worth spreading, please feel free to spread it. As of this writing, GWWC reported 41 new members this month, which is a record for monthly acquisitions (and we’re only halfway through the month, three days into the event).
If anyone has suggestions about how to better publicise this event (or Effective Altruism generally), please do let me know. We’re currently talking to various news outlets and high-profile philanthropists to see if they can give us a mention, but suggestions are always welcome. Likewise, comments on the effectiveness of this post itself will be gratefully noted.
About Giving What We Can: GWWC is under the umbrella of the Centre for Effective Altruism, was co-founded by a LessWronger, and received verbal praise from lukeprog in 2013.
Apptimize -- rationalist startup hiring engineers
Apptimize is a 2-year-old startup closely connected with the rationalist community, one of the first founded by CFAR alumni. We make “lean” possible for mobile apps -- our software lets mobile developers update or A/B test their apps in minutes, without submitting to the App Store. Our customers include big companies such as Nook and eBay, as well as Top 10 apps such as Flipagram. When companies evaluate our product against competitors, they’ve chosen us every time.
We work incredibly hard, and we’re striving to build the strongest engineering team in the Bay Area. If you’re a good developer, we have a lot to offer.
Team
- Our team of 14 includes 7 MIT alumni, 3 ex-Googlers, 1 Wharton MBA, 1 CMU CS alum, 1 Stanford alum, 2 MIT Masters, 1 MIT Ph.D. candidate, and 1 “20 Under 20” Thiel Fellow. Our CEO was also just named to the Forbes “30 Under 30”
- David Salamon, Anna Salamon’s brother, built much of our early product
- Our CEO is Nancy Hua, while our Android lead is “20 Under 20” Thiel Fellow James Koppel. They met after James spoke at the Singularity Summit
- HP:MoR is required reading for the entire company
- We evaluate candidates on curiosity even before evaluating them technically
- Seriously, our team is badass. Just look
Self Improvement
- You will have huge autonomy and ownership over your part of the product. You can set up new infrastructure and tools, expense business products and services, and even subcontract some of your tasks if you think it's a good idea
- You will learn to be a more goal-driven agent, and understand the impact of everything you do on the rest of the business
- Access to our library of over 50 books and audiobooks, and the freedom to purchase more
- Everyone shares insights they’ve had every week
- Self-improvement is so important to us that we only hire people committed to it. When we say that it’s a company value, we mean it
The Job
- Our mobile engineers dive into the dark, undocumented corners of iOS and Android, while our backend crunches data from billions of requests per day
- Engineers get giant monitors, a top-of-the-line MacBook Pro, and we’ll pay for whatever else is needed to get the job done
- We don’t demand prior experience, but we do demand the fearlessness to jump outside your comfort zone and job description. That said, our website uses AngularJS, jQuery, and nginx, while our backend uses AWS, Java (the good parts), and PostgreSQL
- We don’t have gratuitous perks, but we have what counts: free snacks and catered meals, an excellent health and dental plan, and free membership to a gym across the street
- Seriously, working here is awesome. As one engineer puts it, “we’re like a family bent on taking over the world”
If you’re interested, send some Bayesian evidence that you’re a good match to jobs@apptimize.com
Could you be Prof Nick Bostrom's sidekick?
If funding were available, the Centre for Effective Altruism would consider hiring someone to work closely with Prof Nick Bostrom to provide anything and everything he needs to be more productive. Bostrom is, of course, the Director of the Future of Humanity Institute at Oxford University, and author of Superintelligence, the best guide yet to the possible risks posed by artificial intelligence.
Nobody has yet confirmed they will fund this role, but we are nevertheless interested in getting expressions of interest from suitable candidates.
The list of required characteristics is hefty, and the position would be a challenging one:
- Willing to commit to the role for at least a year, and preferably several
- Able to live and work in Oxford during this time
- Conscientious and discreet
- Trustworthy
- Able to keep flexible hours (some days a lot of work, others not much)
- Highly competent at almost everything in life (for example, organising travel, media appearances, choosing good products, and so on)
- Will not screw up and look bad when dealing with external parties (e.g. media, event organisers, the university)
- Has a good personality 'fit' with Bostrom
- Willing to do some tasks that are not high-status
- Willing to help Bostrom with both his professional and personal life (to free up his attention)
- Can speak English well
- Knowledge of rationality, philosophy and artificial intelligence would also be helpful, and would allow you to also do more work as a research assistant.
The research Bostrom can do is unique; to my knowledge, no one else has made such significant strides in clarifying the biggest risks facing humanity as a whole. As a result, helping increase Bostrom's output by, say, 20% would be a major contribution. This person's work would also help the rest of the Future of Humanity Institute run smoothly.
2014 Survey of Effective Altruists
I'm pleased to announce the first annual survey of effective altruists. This is a short survey of around 40 questions (generally multiple choice), which several collaborators and I have put a great deal of work into, and which we would be very grateful if you took. I'll offer $250 of my own money to one participant.
Take the survey at http://survey.effectivealtruismhub.com/
The survey should yield some interesting results such as EAs' political and religious views, what actions they take, and the causes they favour and donate to. It will also enable useful applications which will be launched immediately afterwards, such as a map of EAs with contact details and a cause-neutral register of planned donations or pledges which can be verified each year. I'll also provide an open platform for followup surveys and other actions people can take. If you'd like to suggest questions, email me or comment.
Anonymised results will be shared publicly and not belong to any individual or organisation. The most robust privacy practices will be followed, with clear opt-ins and opt-outs.
I'd like to thank Jacy Anthis, Ben Landau-Taylor, David Moss and Peter Hurford for their help.
Other surveys' results, and predictions for this one
Other surveys have had intriguing results. For example, Joey Savoie and Xio Kikauka interviewed 42 often highly active EAs over Skype, and found that they generally had left-leaning parents, donated on average 10%, and were altruistic before becoming EAs. The time they spent on EA activities was correlated with the percentage they donated (0.4), the time their parents spent volunteering (0.3), and the percentage of their friends who were EAs (0.3).
80,000 Hours also released a questionnaire and, while this was mainly focused on their impact, it yielded a list of which careers people plan to pursue: 16% for academia, 9% for both finance and software engineering, and 8% for both medicine and non-profits.
I'd be curious to hear people's predictions as to what the results of this survey will be. You might enjoy reading or sharing them here. For my part, I'd imagine we have few conservatives or even libertarians, are over 70% male, and have directed most of our donations to poverty charities.
A critique of effective altruism
I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them. As you might have figured out, I believe strongly in effective altruism—the idea of applying evidence and reason to finding the best ways to improve the world. As such, I thought it would be productive to write a hypothetical apostasy on the effective altruism movement.
(EDIT: As per the comments of Vaniver, Carl Shulman, and others, this didn't quite come out as a hypothetical apostasy. I originally wrote it with that in mind, but decided that a focus on more plausible, more moderate criticisms would be more productive.)
Contents
- How to read this post
- Abstract
- Philosophical difficulties
- Poor cause choices
- Non-obviousness
- Efficient markets for giving
- Inconsistent attitude towards rigor
- Poor psychological understanding
- Historical analogues
- Monoculture
- Community problems
- Movement building issues
- Conclusion
- Are these problems solvable?
- Acknowledgments
How to read this post
(EDIT: the following two paragraphs were written before I softened the tone of the piece. They're less relevant to the more moderate version that I actually published.)
Hopefully this is clear, but as a disclaimer: this piece is written in a fairly critical tone. This was part of an attempt to get “in character”. This tone does not indicate my current mental state with regard to the effective altruism movement. I agree, to varying extents, with some of the critiques I present here, but I’m not about to give up on effective altruism or stop cooperating with the EA movement. The apostasy is purely hypothetical.
Also, because of the nature of a hypothetical apostasy, I’d guess that for effective altruist readers, the critical tone of this piece may be especially likely to trigger defensive rationalization. Please read through with this in mind. (A good way to counteract this effect might be, for instance, to imagine that you’re not an effective altruist, but your friend is, and it’s them reading through it: how should they update their beliefs?)
(End less relevant paragraphs.)
Finally, if you’ve never heard of effective altruism before, I don’t recommend making this piece your first impression of it! You’re going to get a very skewed view because I don’t bother to mention all the things that are awesome about the EA movement.
Abstract
Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.
By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
Is That Your True Rejection?
It happens every now and then, that the one encounters some of my transhumanist-side beliefs—as opposed to my ideas having to do with human rationality—strange, exotic-sounding ideas like superintelligence and Friendly AI. And the one rejects them.
If the one is called upon to explain the rejection, not uncommonly the one says,
"Why should I believe anything Yudkowsky says? He doesn't have a PhD!"
And occasionally someone else, hearing, says, "Oh, you should get a PhD, so that people will listen to you." Or this advice may even be offered by the same one who disbelieved, saying, "Come back when you have a PhD."
Now there are good and bad reasons to get a PhD, but this is one of the bad ones.
There are many reasons why someone actually has an adverse reaction to transhumanist theses. Most are matters of pattern recognition, rather than verbal thought: the thesis matches against "strange weird idea" or "science fiction" or "end-of-the-world cult" or "overenthusiastic youth".
So immediately, at the speed of perception, the idea is rejected. If, afterward, someone says "Why not?", this launches a search for justification. But this search will not necessarily hit on the true reason—by "true reason" I mean not the best reason that could be offered, but rather, whichever causes were decisive as a matter of historical fact, at the very first moment the rejection occurred.
Instead, the search for justification hits on the justifying-sounding fact, "This speaker does not have a PhD."
But I also don't have a PhD when I talk about human rationality, so why is the same objection not raised there?
And more to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say. Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.
They would say, "Why should I believe you? You're just some guy with a PhD! There are lots of those. Come back when you're well-known in your field and tenured at a major university."
The Epistemic Prisoner's Dilemma
Let us say you are a doctor, and you are dealing with a malaria epidemic in your village. You are faced with two problems. First, you have no access to the drugs needed for treatment. Second, you are one of two doctors in the village, and the two of you cannot agree on the nature of the disease itself. You, having carefully tested many patients, being a highly skilled, well-educated diagnostician, have proven to yourself that the disease in question is malaria. Of this you are >99% certain. Yet your colleague, the blinkered fool, insists that you are dealing with an outbreak of bird flu, and to this he assigns >99% certainty.
Well, it need hardly be said that someone here is failing at rationality. Rational agents do not have common knowledge of disagreements etc. But... what can we say? We're human, and it happens.
So, let's say that one day, Dr. House calls you both into his office and tells you that he knows, with certainty, which disease is afflicting the villagers. As confident as you both are in your own diagnoses, you are even more confident in House's abilities. House, however, will not tell you his diagnosis until you've played a game with him. He's going to put you in one room and your colleague in another. He's going to offer you a choice between 5,000 units of malaria medication and 10,000 units of bird-flu medication. At the same time, he's going to offer your colleague a choice between 5,000 units of bird-flu meds and 10,000 units of malaria meds.
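To make the structure of the dilemma concrete, here is a small sketch of the expected-value calculation implicit in the setup (my framing, not the post's): each doctor, trusting their own diagnosis, prefers the 5,000 doses of "their" drug, even though mutual trade would leave the village with twice the medicine whichever diagnosis is right.

```python
# From *your* perspective: P(malaria) = 0.99.
p_malaria = 0.99

# "Defect": take the 5,000 doses matching your own diagnosis.
# "Cooperate": take the 10,000 doses matching your colleague's.
defect    = 5_000 * p_malaria         # 4,950 expected useful doses
cooperate = 10_000 * (1 - p_malaria)  #   100 expected useful doses

print(f"defect:    {defect:,.0f} expected useful doses")
print(f"cooperate: {cooperate:,.0f} expected useful doses")

# Your colleague, 99% sure it's bird flu, computes the mirror image, so
# each of you individually prefers to defect. Yet if both cooperate, the
# village receives 10,000 doses of each drug instead of 5,000 of each.
```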