
Existential Risk and Existential Hope: Definitions

7 owencb 10 January 2015 07:09PM

I'm pleased to announce Existential Risk and Existential Hope: Definitions, a short new FHI technical report.

Abstract:
We look at the strengths and weaknesses of two existing definitions of existential risk, and suggest a new definition based on expected value. This leads to a parallel concept: ‘existential hope’, the chance of something extremely good happening.
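As a rough sketch of how a definition based on expected value might be set up (my own illustration; the report's exact formulation may differ), write $V_t$ for the expected value of the long-run future, given the information available at time $t$, under some value measure $U$:

\[ V_t \;=\; \mathbb{E}\big[\, U \mid \text{information available at time } t \,\big] \]

On this reading, an event at time $t$ would count as an existential catastrophe if it destroys most of this expected value ($V_{t^+} \ll V_{t^-}$), and as an existential eucatastrophe if it produces a comparably large gain ($V_{t^+} \gg V_{t^-}$); existential risk and existential hope are then the chances of each.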

I think MIRI and CSER may naturally be understood as organisations trying to reduce existential risk and increase existential hope respectively (although if MIRI is aiming to build a safe AI, this is also seeking to increase existential hope). What other world states could we aim for that would increase existential hope?

Singleton: the risks and benefits of one world governments

1 Stuart_Armstrong 05 July 2013 02:05PM

Many thanks to all those whose conversations have contributed to forming these ideas.

Will the singleton save us?

For most of the large existential risks that we deal with here, the situation would be improved by a single world government (a singleton), or at least by greater global coordination. The risk of nuclear war would fade, and pandemics would be met with a comprehensive global strategy rather than a mess of national priorities. Workable regulations for technological risks such as synthetic biology or AI would become at least conceivable. All in all, a great improvement in safety...

...with one important exception. A stable tyrannical one-world government, empowered by future mass surveillance, is itself an existential risk (it might not destroy humanity, but it would “permanently and drastically curtail its potential”). So to decide whether to oppose or advocate greater global coordination, we need to assess how likely such a despotic government would be.

This is the kind of research I would love to do if I had the time to develop the relevant domain skills. In the meantime, I’ll just take all my thoughts on the subject and form them into a “proto-research project plan”, in the hope that someone can make use of them in a real research project. Please contact me if you want to do research on this and would fancy a chat.

Defining “acceptable”

Before we can talk about the likelihood of a good outcome, we need to define what a good outcome actually is. For this analysis, I will take the definition that:

  • A singleton regime is acceptable if it is at least as good as any developed democratic government of today.

How can I reduce existential risk from AI?

46 lukeprog 13 November 2012 09:56PM

Suppose you think that reducing the risk of human extinction is the highest-value thing you can do. Or maybe you want to reduce "x-risk" because you're already a comfortable First-Worlder like me and so you might as well do something epic and cool, or because you like the community of people who are doing it already, or whatever.

Suppose also that you think AI is the most pressing x-risk, because (1) mitigating AI risk could mitigate all other existential risks, but not vice-versa, and because (2) AI is plausibly the first existential risk that will occur.

In that case, what should you do? How can you reduce AI x-risk?

It's complicated, but I get this question a lot, so let me try to provide some kind of answer.

 

Meta-work, strategy work, and direct work

When you're facing a problem and you don't know what to do about it, there are two things you can do:

1. Meta-work: Amass wealth and other resources. Build your community. Make yourself stronger. Meta-work of this sort will be useful regardless of which "direct work" interventions turn out to be useful for tackling the problem you face. Meta-work also empowers you to do strategic work.

2. Strategy work: Purchase a better strategic understanding of the problem you're facing, so you can see more clearly what should be done. Usually, this will consist of getting smart and self-critical people to honestly assess the strategic situation, build models, make predictions about the effects of different possible interventions, and so on. If done well, these analyses can shed light on which kinds of "direct work" will help you deal with the problem you're trying to solve.

When you have enough strategic insight to have discovered some interventions that you're confident will help you tackle the problem you're facing, then you can also engage in:

3. Direct work: Directly attack the problem you're facing, whether this involves technical research, political action, particular kinds of technological development, or something else.

Thinking with these categories can be useful even though the lines between them are fuzzy. For example, you might have to do some basic awareness-raising in order to amass funds for your cause, and then once you've spent those funds on strategy work, your strategy work might tell you that a specific form of awareness-raising is useful for political action that counts as "direct work." Also, some forms of strategy work can feel like direct work, depending on the type of problem you're tackling.


9/26 is Petrov Day

101 Eliezer_Yudkowsky 26 September 2007 04:14PM

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983.  Wherever you are, whatever you're doing, take a minute to not destroy the world.

The story begins on September 1st, 1983, when Soviet jet interceptors shot down a Korean Air Lines civilian airliner after the aircraft crossed into Soviet airspace and then, for reasons still unknown, failed to respond to radio hails.  269 passengers and crew died, including US Congressman Lawrence McDonald.  Ronald Reagan called it "barbarism", "inhuman brutality", "a crime against humanity that must never be forgotten".  Note that this was already a very, very poor time for US/USSR relations.  Andropov, the ailing Soviet leader, was half-convinced the US was planning a first strike.  The KGB sent a flash message to its operatives warning them to prepare for possible nuclear war.

On September 26th, 1983, Lieutenant Colonel Stanislav Yevgrafovich Petrov was the officer on duty when the warning system reported a US missile launch.  Petrov kept calm, suspecting a computer error.

Then the system reported another US missile launch.

And another, and another, and another.

