Improving long-run civilisational robustness

11 RyanCarey 10 May 2016 11:15AM

People trying to guard civilisation against catastrophe usually focus on one specific kind of catastrophe at a time. This can be useful for building concrete knowledge with some certainty in order for others to build on it. However, there are disadvantages to this catastrophe-specific approach:

1. Catastrophe researchers (including Anders Sandberg and Nick Bostrom) think that there are substantial risks from catastrophes that have not yet been anticipated. Resilience-boosting measures may mitigate risks that have not yet been investigated.

2. Thinking about resilience measures in general may suggest new mitigation ideas that were missed by the catastrophe-specific approach.

One analogy: an intrusion (or hack) into a software system can arise from a combination of many minor security failures, each of which might appear innocuous in isolation. You can decrease the chance of an intrusion by adding extra security measures, even without a specific idea of the kind of attack that might be attempted. Being able to power down and reboot a system, storing a backup, and being able to run the system in a "safe" offline mode are all standard resilience measures for software. These measures aren't necessarily the first things that would come to mind if you were modelling a specific risk, like a stolen password or a hacker subverting administrative privileges, although they would be very useful in those cases. So mitigating risk doesn't necessarily require a precise idea of the risk to be mitigated. Sometimes it can be done instead by thinking about the principles required for the proper operation of a system - in the case of software, preservation of its clean code - and the avenues through which it is vulnerable, such as the internet.

So what would be good robustness measures for human civilisation? I have a bunch of proposals:

 

Disaster forecasting

Disaster research

* Build research labs to survey and study catastrophic risks (like the Future of Humanity Institute, the Open Philanthropy Project and others)

Disaster prediction

* Prediction contests (like IARPA's Aggregative Contingent Estimation "ACE" program)

* Expert aggregation and elicitation
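Expert aggregation can be made concrete. One standard approach from the forecasting literature is to pool individual probability estimates by averaging their log-odds (equivalently, taking the geometric mean of the odds), optionally "extremizing" the pooled result. This is a minimal sketch, not a recommendation of any particular scheme, and the example probabilities are made up:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def pool_forecasts(probs, extremize=1.0):
    """Aggregate probability forecasts by averaging log-odds
    (the geometric mean of odds). Setting extremize > 1 pushes
    the pooled forecast away from 0.5, a common adjustment in
    the forecasting literature."""
    mean_logodds = sum(logit(p) for p in probs) / len(probs)
    z = extremize * mean_logodds
    return 1 / (1 + math.exp(-z))

# Three hypothetical experts give probabilities for some scenario.
print(pool_forecasts([0.1, 0.2, 0.05]))
```

Note that the geometric-mean-of-odds pool is less dragged toward 0.5 by a single hedged forecast than a simple arithmetic average of probabilities would be.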

 

Disaster prevention

General prevention measures

* Build a culture of prudence in groups that run risky scientific experiments

* Lobby for these mitigation measures

* Improve the foresight and clear thinking of policymakers and other relevant decision-makers

* Build research labs to plan more risk-mitigation measures (including the Centre for Study of Existential Risk)

Preventing intentional violence

* Improve focused surveillance of people who might commit large-scale terrorism (this is controversial because excessive surveillance itself poses some risk)

* Improve cooperation between nations and large institutions

Preventing catastrophic errors

* Legislate for individuals to be held more accountable for large-scale catastrophic errors that they may make (including by requiring insurance premiums for any risky activities)

 

Disaster response

* Improve political systems to respond to new risks

* Improve vaccine development, quarantine and other pandemic response measures

* Build systems for disaster notification


Disaster recovery

Shelters

* Build underground bomb shelters

* Provide a sheltered place for people to live with air and water

* Provide (or store) food and farming technologies (cf. Dave Denkenberger's *Feeding Everyone No Matter What*)

* Store energy and energy-generators

* Store reproductive technologies (which could include IVF, artificial wombs or measures for increasing genetic diversity)

* Store information about building the above

* Store information about building a stable political system, and about mitigating future catastrophes

* Store other useful information about science and technology (e.g. reading and writing)

* Store some of the above in submarines

* (maybe) store biodiversity

 

Space travel

* Grow (or replicate) the international space station

* Improve humanity's capacity to travel to the Moon and Mars

* Build sustainable settlements on the Moon and Mars

 

Of course, some caveats are in order. 

To begin with, one could argue that surveilling terrorists is a measure specifically designed to reduce the risk from terrorism. But there are many different scenarios and methods through which a malicious actor could try to inflict major damage on civilisation, so I still regard this as a general robustness measure, granting that there is some subjectivity to all of this. If you knew absolutely nothing about the risks you might face, or about the structures in society that are to be preserved, the exercise would be futile. So some of the measures on this list will mitigate a smaller subset of risks than others; that's just how it is. Still, I think the list is quite different from the one people arrive at using a risk-specific paradigm, which is the point of the exercise.

Additionally, I'll note that some of these measures are already well funded, while others cannot be implemented cheaply or effectively. But many seem to me to be worth thinking more about.

Additional suggestions for this list are welcome in the comments, as are proposals for their implementation.

 

Related readings

https://www.academia.edu/7266845/Existential_Risks_Exploring_a_Robust_Risk_Reduction_Strategy

http://www.nickbostrom.com/existential/risks.pdf

http://users.physics.harvard.edu/~wilson/pmpmta/Mahoney_extinction.pdf

http://gcrinstitute.org/aftermath

http://sethbaum.com/ac/2015_Food.html

http://the-knowledge.org

http://lesswrong.com/lw/ma8/roadmap_plan_of_action_to_prevent_human/

Reducing Catastrophic Risks, A Practical Introduction

6 RyanCarey 09 September 2015 10:39PM

While thinking about my own next career steps, I've been writing down some of my thoughts about what's in an impactful career.

In the process, I wrote an introductory report on what seem to me to be practical approaches to problems in catastrophic risks. It's intended to complement the analysis that 80,000 Hours provides by thinking about what general roles we ought to perform, rather than analysing specific careers and jobs, and by focusing specifically on existential risks.

I'm happy to receive feedback on it, positive and negative. 

Here it is: Reducing Catastrophic Risks, A Practical Introduction.

The Effective Altruism Handbook

14 RyanCarey 24 April 2015 12:30AM

Lots of people want to help others but lack information about how to do so effectively. Thanks to the growing effective altruism movement, lots of essays have been written around the topic of charity effectiveness over the last five years. And many of the key insights are gathered together in the Effective Altruism Handbook, which has become available today.

The Effective Altruism Handbook includes an introduction by William MacAskill and Peter Singer followed by five sections. The first section motivates the rest of the book, giving an overview of why people care about effectiveness. The second through fourth sections address tricky decisions involved in helping others: evaluating charities, choosing a career and prioritizing causes. In the final section, the leaders of seven organizations describe why they're doing what they're doing, and describe the kinds of activities they consider especially helpful.

A lot of conversations have gone into picking out the materials for this compilation, so I hope you enjoy reading it! Or, for those who are already familiar with its concepts, sharing it with friends.

The Effective Altruism Handbook can be freely downloaded here.

There are also epub and mobi versions for readers using ebook devices, although their formatting has not been edited as carefully.

Thanks to all of the authors in this compilation for writing their essays in the first place, as well as for making them available for the Handbook. Thanks to Alex Vermeer from MIRI, whose experience and assistance in producing a LaTeX book was invaluable. Thanks also to Bastian Stern, the Centre for Effective Altruism, Peter Orr (for proofreading), and Lauryn Vaughan for cover design. Also, thanks kindly to Agata Sagan who is helping by making a Polish translation! It is always good to see useful ideas spread to a more linguistically diverse audience.

Lastly, here’s the full table of contents:

  • Introduction, Peter Singer and William MacAskill

I. WHAT IS EFFECTIVE ALTRUISM?

  • The Drowning Child and the Expanding Circle, Peter Singer 
  • What is Effective Altruism, William MacAskill 
  • Scope Neglect, Eliezer Yudkowsky 
  • Tradeoffs, Julia Wise

II. CHARITY EVALUATION

  • Efficient Charity: Do Unto Others, Scott Alexander
  • “Efficiency” Measures Miss the Point, Dan Pallotta
  • How Not to Be a “White in Shining Armor”, Holden Karnofsky
  • Estimation Is the Best We Have, Katja Grace
  • Our Updated Top Charities, Elie Hassenfeld

III. CAREER CHOICE

  • Don’t Get a Job at a Charity: Work on Wall Street, William MacAskill
  • High Impact Science, Carl Shulman
  • How to Assess the Impact of a Career, Ben Todd

IV. CAUSE SELECTION

  • Your Dollar Goes Further Overseas, GiveWell
  • The Haste Consideration, Matt Wage
  • Preventing Human Extinction, Nick Beckstead, Peter Singer & Matt Wage
  • Speciesism, Peter Singer
  • Four Focus Areas of Effective Altruism, Luke Muehlhauser

V. ORGANIZATIONS

  • GiveWell, GiveWell
  • Giving What We Can, Michelle Hutchinson
  • The Life You Can Save, Charlie Bresler
  • 80,000 Hours, Ben Todd
  • Charity Science, Xiomara Kikauka
  • The Machine Intelligence Research Institute, Luke Muehlhauser
  • Animal Charity Evaluators, Jon Bockman

One week left for CSER researcher applications

10 RyanCarey 17 April 2015 12:40AM

This is the last week to apply for one of four postdoctoral research positions at the Centre for the Study of Existential Risk. We are seeking researchers in disciplines including: economics, science and technology studies, science policy, arms control policy, expert elicitation and aggregation, conservation studies and philosophy.

The application requires a research proposal of no more than 1500 words from an individual with a relevant doctorate.

"We are looking for outstanding and highly-committed researchers, interested in working as part of a growing research community, with research projects relevant to any aspect of the project. We invite applicants to explain their project to us, and to demonstrate their commitment to the study of extreme technological risks.

We have several shovel-ready projects for which we are looking for suitable postdoctoral researchers. These include:

1. Ethics and evaluation of extreme technological risk (ETR) (with Sir Partha Dasgupta);

2. Horizon-scanning and foresight for extreme technological risks (with Professor William Sutherland);

3. Responsible innovation and extreme technological risk (with Dr Robert Doubleday and the Centre for Science and Policy).

However, recruitment will not necessarily be limited to these subprojects, and our main selection criterion is suitability of candidates and their proposed research projects to CSER’s broad aims."

More details are available here. Applications close on April 24th.

- Sean OH and Ryan

SETI-related: fast radio bursts

2 RyanCarey 06 April 2015 10:23AM

Recently, there has been some media coverage of a recent sample of fast radio bursts (FRBs) that are an unusual regular integer spacing apart, raising the possibility of intelligent life.

The original paper:

We have noted a potential discrete spacing in DM of FRBs. Identified steps are integer multiples of 187.5 pc cm⁻³

... 

In case this would hold, an extragalactic origin would seem unlikely, as high (random) DMs would be added by intergalactic dust. A more likely option could be a galactic source producing quantized chirped signals, but this seems most surprising. If both of these options could be excluded, only an artificial source (human or non-human) must be considered, particularly since most bursts have been observed in only one location (Parkes radio telescope). A re-assessment of man-made phenomena, such as perytons (Burke-Spolaor et al. 2011), would then be required. Failing some observational bias, the suggestive correlation with terrestrial time standards seems to nearly clinch the case for human association of these peculiar phenomena.

Usefully sceptical coverage by Gizmodo:

No matter how you slice it, eleven data points is a small sample set to draw any meaningful conclusions from. A handful of deviant observations could cause the entire pattern to unravel. And that’s exactly what seems to be happening. As Nadia Drake reports for National Geographic, newer observations, not included in the latest scientific report or other popular media articles, don’t fit:

“There are five fast radio bursts to be reported,” says Michael Kramer of Germany’s Max Planck Institute for Radioastronomy. “They do not fit the pattern.”

Instead of aliens, unexpected astrophysics, or even Earthly interference, the mysterious mathematical pattern is probably an artifact produced by a small sample size, Ransom says. When working with a limited amount of data – say, a population of 11 fast radio bursts – it’s easy to draw lines that connect the dots. Often, however, those lines disappear when more dots are added.

“My prediction is that this pattern will be washed out quite quickly once more fast radio bursts are found,” says West Virginia University’s Duncan Lorimer, who reported the first burst in 2007. “It’s a good example of how apparently significant results can be found in sparse data sets.”
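Lorimer's point about sparse data can be illustrated with a toy Monte Carlo simulation (entirely made up, not from the paper): generate small samples of pure noise, scan a grid of candidate spacings as a pattern-hunter would, and see how "quantized" the best-fitting spacing makes the data look. The ranges below are arbitrary illustrative choices:

```python
import random

def frac_fitting(values, spacing, tol=0.1):
    """Fraction of values lying within tol (as a fraction of the
    spacing) of an integer multiple of the candidate spacing."""
    return sum(abs(v / spacing - round(v / spacing)) < tol
               for v in values) / len(values)

def best_fit_fraction(values, spacings):
    """After scanning many candidate spacings, how quantized does
    the data look under the best-fitting one?"""
    return max(frac_fitting(values, s) for s in spacings)

def average_spurious_fit(n_values=11, trials=500, seed=0):
    """Average best-fit fraction for samples of pure noise -
    showing how multiple comparisons manufacture 'patterns'."""
    rng = random.Random(seed)
    spacings = range(50, 400, 5)
    total = 0.0
    for _ in range(trials):
        vals = [rng.uniform(100, 1600) for _ in range(n_values)]
        total += best_fit_fraction(vals, spacings)
    return total / trials

print(average_spurious_fit())
```

With only eleven values and dozens of candidate spacings, the best candidate typically makes around half of purely random values look like integer multiples, which is why such a pattern can evaporate as more bursts are observed.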

Who here knows more about this?

GCRI: Updated Strategy and AMA on EA Forum next Tuesday

7 RyanCarey 23 February 2015 12:35PM

Just announcing for those interested that Seth Baum from the Global Catastrophic Risk Institute (GCRI) will be coming to the Effective Altruism Forum to answer a wide range of questions (like a Reddit "Ask Me Anything") next week at 7pm US ET on March 3.

Seth is an interesting case - more of a 'mere mortal' than Bostrom and Yudkowsky. (Clarification: his background is more standard, and he's probably more emulate-able!) He has a PhD in geography, and came to a maximising consequentialist view in which GCR-reduction is overwhelmingly important. So three years ago, with risk analyst Tony Barrett, he cofounded the Global Catastrophic Risk Institute - one of the handful of places working on these particularly important problems. Since then, it has done some academic outreach and has covered issues like double-catastrophe/recovery from catastrophe, bioengineering, food security and AI.

Just last week, they've updated their strategy, giving the following announcement:

Dear friends,

I am delighted to announce important changes in GCRI’s identity and direction. GCRI is now just over three years old. In these years we have learned a lot about how we can best contribute to the issue of global catastrophic risk. Initially, GCRI aimed to lead a large global catastrophic risk community while also performing original research. This aim is captured in GCRI’s original mission statement, to help mobilize the world’s intellectual and professional resources to meet humanity’s gravest threats.

Our community building has been successful, but our research has simply gone farther. Our research has been published in leading academic journals. It has taken us around the world for important talks. And it has helped us publish in the popular media. GCRI will increasingly focus on in-house research.

Our research will also be increasingly focused, as will our other activities. The single most important GCR research question is: What are the best ways to reduce the risk of global catastrophe? To that end, GCRI is launching a GCR Integrated Assessment as our new flagship project. The Integrated Assessment puts all the GCRs into one integrated study in order to assess the best ways of reducing the risk. And we are changing our mission statement accordingly, to develop the best ways to confront humanity’s gravest threats.

So 7pm ET Tuesday, March 3 is the time to come online and post your questions about any topic you like, and Seth will remain online until at least 9 to answer as many questions as he can. Questions in the comments here can also be ported across.

On the topic of risk organisations, I'll also mention that i) video is available from CSER's recent seminar, in which Mark Lipsitch and Derek Smith discussed potentially pandemic pathogens, and ii) I'm helping Sean to write up an update on CSER's progress for LessWrong and effective altruists, which will go online soon.

Certificates of Impact [Paul Christiano; Link]

6 RyanCarey 11 November 2014 04:58PM

Paul proposes that we could create a market for certificates of impact. The certificates would be created whenever someone does something that has a positive impact in the world.

Recovery Manual for Civilization

6 RyanCarey 31 October 2014 09:36AM

I was wondering how seriously we've considered storing useful information to improve the chance of rebounding from a global catastrophe. I'm sure this has been discussed previously, but not in sufficient depth that I could find it in a short search of the site. If we value future civilisation, then it may be worth going to significant lengths to reduce existential risks.

Some interventions will target specific risky technologies, like AI and synthetic biology. However, just as many of today's risks could not have been identified a century ago, we should expect some emerging risks of the coming decades to catch us by surprise. As argued by Karim Jebari, even when risks are not identifiable, we can take general-purpose measures to reduce them, by analogy to the principles of robustness and safety factors in engineering. One such idea is to create a store of the kind of items one would want in order to recover from catastrophe. This idea varies based on which items are chosen and where they are stored.

Nick Beckstead has investigated bunkers, and he basically rejected bunker-improvement because the strength of a bunker would not improve our resilience to known risks like AI, nuclear weapons or biowarfare. However, his analysis was fairly limited in scope. He focused largely on where to put people, food and walls in order to manage known risks. It would be useful for further analysis to consider where you could put other items, like books, batteries or 3D printers, across a range of scenarios arising from known or unknown risks. Though we can't currently identify many specific risks that would leave us without 99% of civilisation, that remains a plausible situation, and one it would be good to equip ourselves to recover from. What information would we store?

*The Knowledge: How to Rebuild Our World From Scratch* would be a good candidate based on its title alone, and a quick skim of io9's review. One could bury Wikipedia, the Internet Archive, or a bunch of other items suggested by The Long Now Foundation. A computer with a battery, perhaps? Perhaps all of the above, to ward against the possibility that we miscalculate. Where would we store it? Again, the principle of resilience would seem to dictate that we should store these at a variety of sites. They could be underground and overground, marked and unmarked, at busy and deserted sites of varying climate, and with various levels of security. In general, this seems neglected, cheap, and unusually valuable, so I would be interested to hear whether LessWrong has any further ideas about how this could be done well.

Further relevant reading: Adaptation to and Recovery From Global Catastrophe, Svalbard Global Seed Vault (a biodiversity store in the far North of Norway, started by Gates and others).

Announcing The Effective Altruism Forum

29 RyanCarey 24 August 2014 08:07AM

The Effective Altruism Forum will be launched at effective-altruism.com on September 10, British time.

Now seems like a good time to discuss why we might need an Effective Altruism Forum, and how it might compare to LessWrong.

About the Effective Altruism Forum

The motivation for the Effective Altruism Forum is to improve the quality of effective altruist discussion and coordination. A big part of this is to give many of the useful features of LessWrong to effective altruists, including:

 

  • Archived, searchable content (this will begin with archived content from effective-altruism.com)
  • Meetups
  • Nested comments
  • A karma system
  • A dynamically updated list of external effective altruist blogs
  • Introductory materials (this will begin with these articles)

 

The Effective Altruism Forum has been designed by Mihai Badic. Over the last month, it has been developed by Trike Apps, who have built the new site using the LessWrong codebase. I'm glad to report that it is now basically ready, looks nice, and is easy to use.

I expect that at the new forum, as on the effective altruist Facebook and Reddit pages, people will want to discuss which intellectual procedures to use to pick effective actions. I also expect some proposals of effective altruist projects, and offers of resources. So users of the new forum will share LessWrong's interest in instrumental and epistemic rationality. On the other hand, I expect that few of its users will want to discuss the technical aspects of artificial intelligence, anthropics or decision theory, and to the extent that they do so, they will want to do it at LessWrong. As a result, I expect the new forum to cause:

 

  • A bunch of materials on effective altruism and instrumental rationality to be collated for new effective altruists
  • Discussion of old LessWrong materials to resurface
  • A slight increase to the number of users of LessWrong, possibly offset by some users spending more of their time posting at the new forum.

 

At least initially, the new forum won't have a wiki or a Main/Discussion split and won't have any institutional affiliations.

Next Steps:

It's really important to make sure that the Effective Altruism Forum is established with a beneficial culture. If people want to help that process by writing some seed materials, to be posted around the time of the site's launch, then they can contact me at ry [dot] duff [at] gmail.com. Alternatively, they can wait a short while until they automatically receive posting privileges.

It's also important that the Effective Altruism Forum helps the shared goals of rationalists and effective altruists, and has net positive effects on LessWrong in particular. Any suggestions for improving the odds of success for the effective altruism forum are most welcome.

Testing my cognition

6 RyanCarey 19 February 2014 10:30PM

Hi all, I'm doing my first quantified-self experiment. How does this design sound to you?

 

Metrics

A 20-minute sample of tests from the Cambridge Brain Sciences site: spatial span, double trouble, object reasoning, rotations, Hampshire tree task, spatial slider.

Ten USMLE Rx questions in 11 minutes, covering neurology, psychiatry, cognitive sciences and epidemiology, randomised from a pool of 400 questions.

Subjective report of cognitive ability from one to ten.

These will be taken daily at noon.

 

Intervention

Take nothing for one week.

Take creatine 5 g daily for two weeks.

Then take nothing for two weeks.

 

I'm starting with creatine because I'm vegetarian. Then I'll report my findings, re-evaluate the value of further experiments, and proceed to some or all of piracetam+choline, Lumosity and dual n-back.
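For analysing the daily scores afterwards, one simple option is a two-sample permutation test on the difference in mean scores between the no-supplement days and the creatine days. This is only a sketch with made-up scores (it ignores learning effects and washout, which the A-B-A design is meant to address):

```python
import random

def permutation_test(baseline, treatment, trials=10000, seed=0):
    """Two-sided permutation test on the difference in means:
    how often does random relabelling of the pooled scores
    produce a difference at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = (sum(treatment) / len(treatment)
                - sum(baseline) / len(baseline))
    pooled = baseline + treatment
    n = len(baseline)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = (sum(pooled[n:]) / len(treatment)
                - sum(pooled[:n]) / n)
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / trials

# Hypothetical daily subjective scores (out of 10): one baseline
# week, then two creatine weeks, as in the design above.
baseline = [6, 7, 6, 5, 7, 6, 6]
creatine = [7, 7, 8, 6, 7, 8, 7, 6, 8, 7, 7, 8, 7, 7]
print(permutation_test(baseline, creatine))
```

A small p-value would suggest the score change is unlikely under chance alone, though with samples this small, a null result would not show much either way.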

 

Thoughts on how I can improve this?
