The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach
The Centre for Effective Altruism, the group behind 80,000 Hours, Giving What We Can, the Global Priorities Project, Effective Altruism Outreach, and to a lesser extent The Life You Can Save and Animal Charity Evaluators, is looking to grow its team with a number of new roles:
- Giving What We Can: Director of Research
- Giving What We Can: Communications Manager
- 80,000 Hours: Head of Research
- Central CEA: Chief Operating Officer
- Global Priorities Project: Research Fellow (accepting expressions of interest at this point)
- We are also looking for 'graduate volunteers' for Giving What We Can in 2015, particularly over the summer
We are so keen to find great people that if you introduce us to someone new who we end up hiring, we will pay you $1,000 for the favour! If you know anyone awesome who would be a good fit for us please let me know: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org. They can also book a short meeting with me directly.
We may be able to sponsor outstanding applicants from the USA.
Applications close Friday 5th December 2014.
Why is CEA an excellent place to work?
First and foremost, “making the world a better place” is our bottom line and central aim. We work on the projects we do because we think they’re the best way for us to make a contribution. But there’s more.
The specifics of what we are looking for depend on the role and details can be found in the job descriptions. In general, we're looking for people who have many of the following traits:
- Self-motivated, hard-working, and independent;
- Able to deal with pressure and unfamiliar problems;
- A strong desire for personal development;
- Able to quickly master complex, abstract ideas, and solve problems;
- Able to communicate clearly and persuasively in writing and in person;
- Comfortable working in a team and quick to get on with new people;
- Able to lead a team and manage a complex project;
- Keen to work with a young team in a startup environment;
- Deeply interested in making the world a better place in an effective way, using evidence and research;
- A good understanding of the aims of the Centre for Effective Altruism and its constituent organisations.
I hope to work at CEA in the future. What should I do now?
Of course this will depend on the role, but generally good ideas include:
- Study hard, including gaining useful knowledge and skills outside of the classroom.
- Degrees we have found provide useful training include: philosophy, statistics, economics, mathematics and physics. However, we are hoping to hire people from a more diverse range of academic and practical backgrounds in the future. In particular, we hope to find new members of the team who have worked in operations, or creative industries.
- Write regularly and consider starting a blog.
- Manage student and workplace clubs or societies.
- Work on exciting projects in your spare time.
- Found a start-up business or non-profit, or join someone else early in the life of a new project.
- Gain impressive professional experience in established organisations, such as those working in consulting, government, politics, advocacy, law, think-tanks, movement building, journalism, etc.
- Get experience promoting effective altruist ideas online, or to people you already know.
- Use 80,000 Hours' research to do a detailed analysis of your own future career plans.
Why don't we have a proper strategy against high risk threats?
I recently finished reading the Sequences, and they convinced me that the existential risk posed by an intelligence explosion is worth considering. There was one thing that bothered me, though: the strategy adopted by this community seems to be notably one-sided.
Advancing our community’s interests will require a full spectrum of direct and indirect leadership approaches. MIRI and CEA are only one side of the coin. It would not be wise to ignore high-reward approaches that might even hold a comparative advantage over our current strategy. We should be prepared to act as a security guarantor – preferably with partners and allies, but alone if necessary – to deter and defeat acts of significantly high risk-taking in the sector of artificial intelligence and related fields.
Why
Here's a quote by Yudkowsky:
This is crunch time for the whole human species, and not just for us but for the intergalactic civilization whose existence depends on us. If you confront it full on, then you can't really justify trading off any part of intergalactic civilization for any intrinsic thing you could get nowadays.
The fate of vast numbers of people depends on our actions. Yudkowsky observed that by consequentialist reasoning it is morally right to torture someone for 50 years if by doing so you can save a vast number of people from pain, however slight. The number of our descendants won't be 3^^^3 (unless the universe is spatially infinite and meets a few other conditions), but it will be big enough to outweigh some of the intrinsic things we care about nowadays. And not only can we save those people from non-existence, or in the worst-case scenario from eternal suffering at the hands of a UFAI, but we can possibly give those vast numbers of people vast amounts of positive utility. It may sound unintuitive, but even torture and murder pale in comparison with the fate of all of us and our descendants. Of course, the cost-benefit ratio of torture or murder is not ideal except under extreme circumstances with an imminent threat. It's true that some criminal organizations like Mexican drug cartels use torture in their operations, but that is mainly to instill fear, which we don't really have a need to do. But there is nothing that could raise the chance of FAI and lower the chance of UFAI that would not be morally right. Why haven't we taken full advantage of this fact?
tl;dr: we should try to affect policy decisions related to AI safety by whatever means possible, so that we can beat death, avoid eternal torture, and live forever in a fun utopia. You already know this, so why haven't you suggested this before? Mods here have made certain policy decisions because they believe they will increase the chance of a happy ending, so why not go beyond that?
How
I suggest some kind of paramilitary and intelligence-gathering organization alongside MIRI and CEA. In pursuing our objectives, this new organization would make critical contributions to AI safety beyond MIRI. CFAR could be transformed to partly support this organization - the boot-camp style of rationality training might be useful in other contexts too.
You might ask: what can a few individuals concerned about existential risks do without huge financial support and government backing? The answer is: quite a lot. Let's not underestimate our power. As gwern said in his article on the effectiveness of terrorism, it's actually quite easy to dismantle an organization if you're truly committed:
Suppose people angry at X were truly angry: so angry that they went beyond posturing and beyond acting against X's only if action were guaranteed to cost them nothing (like writing a blog post). If they ceased to care about whether legal proceedings might be filed against them; if they become obsessed with destroying X, if they devoted their lives to it and could ignore all bodily urges and creature comforts. If they could be, in a word, like Niven’s Protectors or Vinge’s Focused.
Could they do it? Could they destroy a 3 century old corporation with close to $1 trillion in assets, with sympathizers and former employees throughout the upper echelons of the United States Federal Government (itself the single most powerful entity in the world)?
Absolutely. It would be easy.
As I said, the destructive power of a human is great; let’s assume we have 100 fanatics - a vanishingly small fraction of those who have hated on X over the years - willing to engage even in assassination, a historically effective tactic and perhaps the single most effective tactic available to an individual or small group.
Julian Assange explains the basic theory of Wikileaks in a 2006 essay, “State and Terrorist Conspiracies” / “Conspiracy as Governance”: corporations and conspiracies form a graph network; the more efficiently communication flows, the more powerful a graph is; partition the graph, or impede communication (through leaks which cause self-inflicted wounds of secrecy & paranoia), and its power goes down. Carry this to its logical extreme…
"If all links between conspirators are cut then there is no conspiracy. This is usually hard to do, so we ask our first question: What is the minimum number of links that must be cut to separate the conspiracy into two groups of equal number? (divide and conquer). The answer depends on the structure of the conspiracy. Sometimes there are no alternative paths for conspiratorial information to flow between conspirators, other times there are many. This is a useful and interesting characteristic of a conspiracy. For instance, by assassinating one ‘bridge’ conspirator, it may be possible to split the conspiracy. But we want to say something about all conspiracies."
We don’t. We’re interested in shattering a specific conspiracy by the name of X. X has ~30,000 employees. Not all graphs are trees, but all trees are graphs, and corporations are usually structured as trees. If X’s hierarchy is similar to that of a binary tree, then to completely knock out the top 8 levels, one only needs to eliminate 2^8 − 1 = 255 nodes. The top 6 levels would require only 2^6 − 1 = 63 nodes.
If one knocked out the top 6 levels, then each of the remaining subtrees in level 7 has no priority over the rest. And there will be 2^7 − 2^6 = 64 such subtrees/nodes. It is safe to say that 64 sub-corporations, each potentially headed by someone who wants a battlefield promotion to heading the entire thing, would have trouble agreeing on how to reconstruct the hierarchy. The stockholders might be expected to step in at this point, but the Board of Directors would be included in the top of the hierarchy, and by definition, they represent the majority of stockholders.
One could launch the attack during a board meeting or similar gathering, and hope to have 1 fanatic take out 10 or 20 targets. But let’s be pessimistic and assume each fanatic can only account for 1 target - even if they spend months and years reconnoitering and preparing fanatically.
This leaves us with 36 fanatics. X will be at a minimum impaired during the attack; financial companies almost uniquely operate on such tight schedules that one day’s disruption can open the door to predation. We’ll assign 1 fanatic the task of researching emails and telephone numbers and addresses of X rivals; after a few years of constant schmoozing and FOIA requests and dumpster-diving, he ought to be able to reach major traders at said rivals. (This can be done by hiring or becoming a hacker group - as has already penetrated X - or possibly simply by open-source intelligence and sources like a Bloomberg Terminal.) When the hammer goes down, he’ll fire off notifications and suggestions to his contacts. (For bonus points, he will then go off on an additional suicide mission.)
X claims to have offices in all major financial hubs. Offhand, I would expect that to be no more than 10 or 20 offices worth attacking. We assign 20 of our remaining 35 fanatics the task of building Oklahoma City-sized truck bombs. (This will take a while because modern fertilizer is contaminated specifically to prevent this; our fanatics will have to research how to undo the contamination or acquire alternate explosives. The example of Anders Behring Breivik reminds us that simple guns may be better tools than bombs.) The 20 bombs may not eliminate the offices completely, but they should take care of demoralizing the 29,000 in the lower ranks and punch a number of holes in the surviving subtrees.
Let’s assume the 20 bomb-builders die during the bombing or remain to pick off survivors and obstruct rescue services as long as possible.
What shall we do with our remaining 15 agents? The offices lay in ruins. The corporate lords are dead. The lower ranks are running around in utter confusion, with long-oppressed subordinates waking to realize that becoming CEO is a live possibility. The rivals have been taking advantage of X’s disarray as much as possible (although likely the markets would be in the process of shutting down).
15 is almost enough to assign one per office. What else could one do besides attack the office and its contents? Data centers are a good choice, but hardware is very replaceable and attacking them might impede the rivals’ efforts. One would want to destroy the software X uses in trading, but to do that one would have to attack the source repositories; those are likely either in the offices already or difficult to trace. (You’ll notice that we haven’t assigned our fanatics anything particularly difficult or subtle so far. I do this to try to make it seem as feasible as possible; if I had fanatics becoming master hackers and infiltrating X’s networks to make disastrous trades that bankrupt the company, people might say ‘aw, they may be fanatically motivated, but they couldn’t really do that’.)
It’s not enough to simply damage X once. We must attack on the psychological plane: we must make it so that people fear to ever again work for anything related to X.
Let us postulate one of our 15 agents was assigned a research task. He was to get the addresses of all X employees. (We may have already needed this for our surgical strike.) He can do this by whatever means: being hired by X’s HR department, infiltrating electronically, breaking in and stealing random hard drives, open-source intelligence - whatever. Where there’s a will, there’s a way.
Divvy the addresses up into 14 areas centered around offices, and assign the remaining 14 agents to travel to each address in their area and kill anyone there. A man may be willing to risk his own life for fabulous gains in X - but will he risk his family? (And families are easy targets too. If the 14 agents begin before the main attacks, it will be a while before the X link becomes apparent. Shooting someone is easy; getting away with it is the hard part.)
I would be shocked if X could survive even half the agents.
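The binary-tree arithmetic the quoted passage leans on is just a geometric series: level k of a complete binary tree holds 2^(k−1) nodes, and the top k levels hold 2^k − 1 nodes in total. A minimal sketch in Python (the function names are my own, not from the essay):

```python
# Node counts in a complete binary tree, with the root at level 1.

def nodes_at_level(k: int) -> int:
    """Number of nodes at level k: 2^(k-1)."""
    return 2 ** (k - 1)

def nodes_in_top_levels(k: int) -> int:
    """Total nodes in levels 1..k: 1 + 2 + ... + 2^(k-1) = 2^k - 1."""
    return 2 ** k - 1

print(nodes_in_top_levels(8))  # 255 nodes in the top 8 levels
print(nodes_in_top_levels(6))  # 63 nodes in the top 6 levels
print(nodes_at_level(7))       # 64 subtree roots remain after removing the top 6 levels
```

The 64 leaderless subtrees follow directly: removing levels 1 through 6 leaves exactly the 2^7 − 2^6 = 64 nodes of level 7 as independent roots.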
The above description applies mainly to non-military organizations, but threats can also come from state actors more heavily backed by the military, which requires more preparation. Security agencies find themselves faced with a complex spectrum of conflict, and this might encourage them to continue expanding their capabilities and powers, including automated systems, which poses risks. State-sponsored and non-state actors further complicate matters by extending their reach through advanced technologies that were once solely the domain of states. High-risk threats in the non-military, non-state sector might be easier to neutralize, but we should not underprioritize possible state targets.
Of course, the scenario outlined above needs to be the absolute last resort, when there is nothing else you can do. You would need a truly 9/11-scale effort to dismantle an organization capable of creating a fooming AGI. This means that after completing that kind of mission our community could no more exist than al-Qaeda can in the US. Extreme solutions are bad partly because they are extreme solutions, and are only right under extreme circumstances. As you can see from this earlier thread on the effectiveness of terrorism, "perpetrating terrorist acts reportedly lowers the likelihood of government compliance, particularly as the civilian casualties rise." See also this thread and the first comment by Yvain: http://lesswrong.com/lw/2se/rational_terrorism_or_why_shouldnt_we_burn_down/
Instead, in support of our community's interests, our new unit would ideally take a strong role in efforts to safeguard access, sustain security, place undercover operatives in high-risk corporations and agencies, provide oversight and accountability, and promote responsible norms in AI safety. It would also facilitate cooperation in AI safety with transparent, routine, and predictable practices as part of our theater strategies. As technological capability and capacity increase among non-state technological actors, we will seek new ways to catalyze greater security cooperation.
When
It will probably take decades until something resembling Strong AI exists. This is a reason to start creating our new unit and recruiting and training potential candidates soon, so that we are ready and have all the necessary information when the time arrives. This may seem ambitious, but is it really more ambitious than MIRI's goal, which is itself one of the hardest tasks in the world?
Can someone explain why this is a bad idea? Since I'm not sure if you can publicly endorse this post I suggest the following approach: to signal agreement that you're willing to take what I said seriously and proceed with this strategy, remove this post in the next 24 hours, just like we planned.
[Poll] Method of Recruitment
In another thread, we have been discussing how people (especially female people) have come to find out about LessWrong. Instead of just guessing, I figured I would make a poll.
I remember in recent history there was a thread on the subject, but the answers were mainly "I got here from HPMoR" or "I've been here since OB". However, the question I want answered is:
How did you find HPMoR or OB in the first place?
Were you referred by a friend? Were you searching the internet for keywords like "rationality"? Were you linked from some other site you read?
Please answer! Even if you are a lurker; ESPECIALLY if you are a female reader! (There is a question where you can say you are a lurker, if you like!)
ETA- female *reader* and female *people*