
Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

34 AnnaSalamon 12 December 2016 07:39PM

Follow-up to: CFAR’s new focus, and AI Safety

In the days since we published our previous post, a number of people have come up to me and expressed concerns about our new mission.  Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”

I would here like to reply to these people and others, and to clarify what is and isn’t entailed by our new focus on AI safety.

continue reading »

CFAR’s new focus, and AI Safety

30 AnnaSalamon 03 December 2016 06:09PM

A bit about our last few months:

  • We’ve been working on getting a simple clear mission and an organization that actually works.  We think of our goal as analogous to the transition that the old Singularity Institute underwent under Lukeprog (during which chaos was replaced by a simple, intelligible structure that made it easier to turn effort into forward motion).
  • As part of that, we’ll need to find a way to be intelligible.
  • This is the first of several blog posts aimed at causing our new form to be visible from outside.  (If you're in the Bay Area, you can also come meet us at tonight's open house.) (We'll be talking more about the causes of this mission-change; the extent to which it is in fact a change, etc. in an upcoming post.)

Here's a short explanation of our new mission:
  • We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.

  • Also, we[1] believe such efforts are bottlenecked more by our collective epistemology, than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.

  • Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together.  And to do this among the relatively small sets of people tackling existential risk. 


continue reading »

Stable and Unstable Risks

5 katydee 25 November 2013 06:12PM

Related: Existential Risk, 9/26 is Petrov Day

Existential risks—risks that, in the words of Nick Bostrom, would "either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential"—are a significant threat to the world as we know it. In fact, they may be one of the most pressing issues facing humanity today.

The likelihood of some risks may stay relatively constant over time—a basic view of asteroid impact is that there is a certain probability that a "killer asteroid" hits the Earth and that this probability is more or less the same every year. This is what I refer to as a "stable risk."
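As a concrete illustration of how a stable risk accumulates, here is a minimal sketch (the annual probability is a made-up placeholder, not an actual asteroid estimate):

    # Illustrative only: a constant ("stable") annual risk compounds over time.
    # The one-in-a-million figure is a placeholder, not an asteroid-impact estimate.
    annual_probability = 1e-6

    def cumulative_risk(p_per_year, years):
        """Probability of at least one catastrophe within `years`, assuming the
        per-year probability is constant and years are independent."""
        return 1 - (1 - p_per_year) ** years

    for horizon in (100, 10_000, 1_000_000):
        print(f"{horizon:>9} years: {cumulative_risk(annual_probability, horizon):.4%}")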

However, the likelihood of other existential risks seems to fluctuate, often quite dramatically. Many of these "unstable risks" are related to human activity.

For instance, the likelihood of a nuclear war at sufficient scale to be an existential threat seems contingent on various geopolitical factors that are difficult to predict in advance. That said, the likelihood of this risk has clearly changed throughout recent history. Nuclear war was obviously not an existential risk before nuclear weapons were invented, and was fairly clearly more of a risk during the Cuban Missile Crisis than it is today.

Many of these unstable, human-created risks seem based largely on advanced technology. Potential risks like gray goo rely on theorized technologies that have yet to be developed (and indeed may never be developed). While this is good news for the present day, it also means that we have to be vigilant for the emergence of potential new threats as human technology increases.

GiveWell's recent conversation with Carl Shulman contains some arguments as to why the risk of human extinction may be decreasing over time. However, it strikes me as perhaps more likely that the risk of human extinction is increasing over time—or at the very least becoming less stable—as technology increases the amount of power available to individuals and civilizations.

After all, the very concept of human-created unstable existential risks is a recent one. Even if Julius Caesar, Genghis Khan, or Queen Victoria had for some reason decided to destroy human civilization, it seems almost certain that they would have failed, even given all the resources of their empires.

The same cannot be said for Kennedy or Khrushchev.

A Proposed Adjustment to the Astronomical Waste Argument

19 Nick_Beckstead 27 May 2013 03:39AM

This article has been cross-posted at http://effective-altruism.com/.

An existential risk is a risk “that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom, 2013). Nick Bostrom has argued that

“[T]he loss in expected value resulting from an existential catastrophe is so enormous that the objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole. It may be useful to adopt the following rule of thumb for such impersonal moral action:

Maxipok: Maximize the probability of an “OK outcome,” where an OK outcome is any outcome that avoids existential catastrophe.”

There are a number of people in the effective altruism community who accept this view and cite Bostrom’s argument as their primary justification. Many of these people also believe that the best ways of minimizing existential risk involve making plans to prevent specific existential catastrophes from occurring, and believe that the best giving opportunities must be with charities that primarily focus on reducing existential risk. They also appeal to Bostrom’s argument to support their views. (Edited to add: Note that Bostrom himself sees maxipok as neutral on the question of whether the best methods of reducing existential risk are very broad and general, or highly targeted and specific.) For one example of this, see Luke Muehlhauser’s comment:

“Many humans living today value both current and future people enough that if existential catastrophe is plausible this century, then upon reflection (e.g. after counteracting their unconscious, default scope insensitivity) they would conclude that reducing the risk of existential catastrophe is the most valuable thing they can do — whether through direct work or by donating to support direct work.”

I now think these views require some significant adjustments and qualifications, and given these adjustments and qualifications, their practical implications become very uncertain. I still believe that what matters most about what we do is how our actions affect humanity’s long-term future potential, and I still believe that targeted existential risk reduction and research is a promising cause, but it now seems unclear whether targeted existential risk reduction is the best area to look for ways of making the distant future go as well as possible. It may be and it may not be, and which is right probably depends on many messy details about specific opportunities, as well as general methodological considerations which are, at this point, highly uncertain. Various considerations played a role in my reasoning about this, and I intend to talk about more of them in greater detail in the future. I’ll talk about just a couple of these considerations in this post.

In this post, I argue that:

  1. Though Bostrom’s argument supports the conclusion that maximizing humanity’s long term potential is extremely important, it does not provide strong evidence that reducing existential risk is the best way of maximizing humanity’s future potential. There is a much broader class of actions which may affect humanity’s long-term potential, and Bostrom’s argument does not uniquely favor existential risk over other members in this class.
  2. A version of Bostrom’s argument better supports a more general view: what matters most is that we make path-dependent aspects of the far future go as well as possible. There are important questions about whether we should accept this more general view and what its practical significance is, but this more general view seems to be a strict improvement on the view that minimizing existential risk is what matters most.
  3. The above points favor very broad, general, and indirect approaches to shaping the far future for the better, rather than thinking about very specific risks and responses, though there are many relevant considerations and the issue is far from settled.

I think some prominent advocates of existential risk reduction already agree with these general points, and believe that other arguments, or other arguments together with Bostrom’s argument, establish that direct existential risk reduction is what matters most. This post is most relevant to people who currently think Bostrom’s arguments may settle the issues discussed above.

Path-dependence and trajectory changes

In thinking about how we might affect the far future, I've found it useful to use the concept of the world's development trajectory, or just trajectory for short. The world's development trajectory, as I use the term, is a rough summary of the way the future will unfold over time. The summary includes various facts about the world that matter from a macro perspective, such as how rich people are, what technologies are available, how happy people are, how developed our science and culture is along various dimensions, and how well things are going all-things-considered at different points of time. It may help to think of the trajectory as a collection of graphs, where each graph in the collection has time on the x-axis and one of these other variables on the y-axis.
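To make the "collection of graphs" picture concrete, here is a minimal sketch of a trajectory as a set of named time series (the variables and numbers are purely illustrative):

    # A development trajectory as a collection of named time series:
    # one value per variable per point in time. All names and numbers are illustrative.
    trajectory = {
        "wealth_per_capita":   {2020: 1.0, 2100: 3.5, 2500: 40.0},
        "average_happiness":   {2020: 0.6, 2100: 0.7, 2500: 0.9},
        "scientific_progress": {2020: 1.0, 2100: 2.5, 2500: 30.0},
    }
    # Each key corresponds to one "graph" with time on the x-axis
    # and the variable on the y-axis.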

With that concept in place, consider three different types of benefits from doing good. First, doing something good might have proximate benefits—this is the name I give to the fairly short-run, fairly predictable benefits that we ordinarily think about when we cure some child's blindness, save a life, or help an old lady cross the street. Second, there are benefits from speeding up development. In many cases, ripple effects from good ordinary actions speed up development. For example, saving some child's life might cause his country's economy to develop very slightly more quickly, or make certain technological or cultural innovations arrive more quickly. Third, our actions may slightly or significantly alter the world's development trajectory. I call these shifts trajectory changes. If we ever prevent an existential catastrophe, that would be an extreme example of a trajectory change. There may also be smaller trajectory changes. For example, if some species of dolphins that we really loved were destroyed, that would be a much smaller trajectory change.

The concept of a trajectory change is closely related to the concept of path dependence in the social sciences, though when I talk about trajectory changes I am interested in effects that persist much longer than standard examples of path dependence. A classic example of path dependence is our use of QWERTY keyboards. Our keyboards could have been arranged in any number of other possible ways. A large part of the explanation of why we use QWERTY keyboards is that it happened to be convenient for making typewriters, that a lot of people learned to use these keyboards, and there are advantages to having most people use the same kind of keyboard. In essence, there is path dependence whenever some aspect of the future could easily have been way X, but it is arranged in way Y due to something that happened in the past, and now it would be hard or impossible to switch to way X. Path dependence is especially interesting when way X would have been better than way Y. Some political scientists have argued that path dependence is very common in politics. For example, in an influential paper (with over 3000 citations) Pierson (2000, p. 251) argues that:

Specific patterns of timing and sequence matter; a wide range of social outcomes may be possible; large consequences may result from relatively small or contingent events; particular courses of action, once introduced, can be almost impossible to reverse; and consequently, political development is punctuated by critical moments or junctures that shape the basic contours of social life.

The concept of a trajectory change is also closely related to the concept of a historical contingency. If Thomas Edison had not invented the light bulb, someone else would have done it later. In this sense, it is not historically contingent that we have light bulbs, and the most obvious benefits from Thomas Edison inventing the light bulb are proximate benefits and benefits from speeding up development. Something analogous is probably true of many other technological innovations such as computers, candles, wheelbarrows, object-oriented programming, and the printing press. Some important examples of historical contingencies: the rise of Christianity, the creation of the US Constitution, and the writings of Karl Marx. Various aspects of Christian morality influence the world today in significant ways, but the fact that those aspects of morality, in exactly those ways, were part of a dominant world religion was historically contingent. And therefore events like Jesus's death and Paul writing his epistles are examples of trajectory changes. Likewise, the US Constitution was the product of deliberation among a specific set of men; the document affects government policy today and will affect it for the foreseeable future, but it could easily have been a different document. And now that the document exists in its specific legal and historical context, it is challenging to make changes to it, so the change is somewhat self-reinforcing.

Some small trajectory changes could be suboptimal

Persistent trajectory changes that do not involve existential catastrophes could have great significance for shaping the far future. It is unlikely that the far future will inherit many of our institutions exactly as they are, but various aspects of the far future—including social norms, values, political systems, and perhaps even some technologies—may be path dependent on what happens now, and sometimes in suboptimal ways. In general, it is reasonable to assume that if there is some problem that might exist in the future and we can do something to fix it now, future people would also be able to solve that problem. But if values or social norms change, they might not agree that some things we think are problems really are problems. Or, if people make the wrong decisions now, certain standards or conventions may get entrenched, and resulting problems may be too expensive to be worth fixing. For further categories of examples of path-dependent aspects of the far future, see these posts by Robin Hanson.

The astronomical waste argument and trajectory changes

Bostrom’s argument only works if reducing existential risk is the most effective way of maximizing humanity’s future potential. But there is no robust argument that trying to reduce existential risk is a more effective way of shaping the far future than trying to create other positive trajectory changes. Bostrom’s argument for the overwhelming importance of reducing existential risk can be summarized as follows:

  1. The expected size of humanity's future influence is astronomically great.
  2. If the expected size of humanity's future influence is astronomically great, then the expected value of the future is astronomically great.
  3. If the expected value of the future is astronomically great, then what matters most is that we maximize humanity’s long-term potential.
  4. Some of our actions are expected to reduce existential risk in not-ridiculously-small ways.
  5. If what matters most is that we maximize humanity’s future potential and some of our actions are expected to reduce existential risk in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.
  6. Therefore, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.

Call that the “astronomical waste” argument.

It is unclear whether premise (5) is true because it is unclear whether trying to reduce existential risk is the most effective way of maximizing humanity’s future potential. For all we know, it could be more effective to try to create other positive trajectory changes. Clearly, it would be better to prevent extinction than to improve our social norms in a way that indirectly makes the future go one millionth better, but, in general, “X is a bigger problem than Y” is only a weak argument that “trying to address X is more important than trying to address Y.” To be strong, the argument must be supplemented by looking at many other considerations related to X and Y, such as how much effort is going into solving X and Y, how tractable X and Y are, how much X and Y could use additional resources, and whether there are subsets of X or Y that are especially strong in terms of these considerations.
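One crude way to see why, as a sketch with entirely invented numbers: even if X is a much bigger problem than Y, a marginal contribution to Y can have higher expected value if Y is more tractable or more neglected.

    # Toy comparison of the expected value of one additional unit of effort.
    # Every number is invented purely to illustrate the structure of the argument.

    def marginal_value(problem_size, tractability, share_solved_per_unit_of_effort):
        """Size of the problem * how solvable it is * how much one unit of effort moves it."""
        return problem_size * tractability * share_solved_per_unit_of_effort

    x_big_but_crowded     = marginal_value(1e9, 0.1, 1e-9)  # huge problem, little traction per person
    y_small_but_neglected = marginal_value(1e7, 0.5, 1e-6)  # smaller problem, much more traction

    print(x_big_but_crowded, y_small_but_neglected)  # here the "smaller" problem wins on the margin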

Bostrom does have arguments that speeding up development and providing proximate benefits are not as important, in themselves, as reducing existential risk. And these arguments, I believe, have some plausibility. Since we don’t have an argument that reducing existential risk is better than trying to create other positive trajectory changes, and since an existential catastrophe is one type of trajectory change, it seems more reasonable for defenders of the astronomical waste argument to focus on trajectory changes in general. It would be better to replace the last two steps of the above argument with:

4’.  Some of our actions are expected to change our development trajectory in not-ridiculously-small ways.

5’.  If what matters most is that we maximize humanity’s future potential and some of our actions are expected to change our development trajectory in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

6’.  Therefore, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

This seems to be a strictly more plausible claim than the original one, though it is less focused.

In response to the arguments in this post, which I e-mailed him in advance, Bostrom wrote a reply (see the end of the post). The key comment, from my perspective, is:

“Many trajectory changes are already encompassed within the notion of an existential catastrophe.  Becoming permanently locked into some radically suboptimal state is an xrisk.  The notion is more useful to the extent that likely scenarios fall relatively sharply into two distinct categories---very good ones and very bad ones.  To the extent that there is a wide range of scenarios that are roughly equally plausible and that vary continuously in the degree to which the trajectory is good, the existential risk concept will be a less useful tool for thinking about our choices.  One would then have to resort to a more complicated calculation.  However, extinction is quite dichotomous, and there is also a thought that many sufficiently good future civilizations would over time asymptote to the optimal track.”

I agree that a key question here is whether there is a very large range of plausible equilibria for advanced civilizations, or whether civilizations that manage to survive long enough naturally converge on something close to the best possible outcome. The more confidence one has in the second possibility, the more interesting existential risk is as a concept. The less confidence one has in the second possibility, the more interesting trajectory changes in general are. However, I would emphasize that unless we can be highly confident in the second possibility, it seems that we cannot be confident that reducing existential risk is more important than creating other positive trajectory changes because of the astronomical waste argument alone. This would turn on further considerations of the sort I described above.

Broad and narrow strategies for shaping the far future

Both the astronomical waste argument and the fixed up version of that argument conclude that what matters most is how our actions affect the far future. I am very sympathetic to this viewpoint, abstractly considered, but I think its practical implications are highly uncertain. There is a spectrum of strategies for shaping the far future that ranges from the very targeted (e.g., stop that asteroid from hitting the Earth) to very broad (e.g., create economic growth, help the poor, provide education programs for talented youth), with options like “tell powerful people about the importance of shaping the far future” in between. The limiting case of breadth might be just optimizing for proximate benefits or for speeding up development. Defenders of the astronomical waste argument tend to be on the highly targeted end of this spectrum. I think it’s a very interesting question where on this spectrum we should prefer to be, other things being equal, and it’s a topic I plan to return to in the future.

The arguments I’ve offered above favor broader strategies for shaping the far future, though they don’t settle the issue. The main reason I say this is that the best ways of creating positive trajectory changes may be very broad and general, whereas the best ways of reducing existential risk may be more narrow and specific. For example, it may be reasonable to try to assess, in detail, questions like, “What are the largest specific existential risks?” and, “What are the most effective ways of reducing those specific risks?” In contrast, it seems less promising to try to make specific guesses about how we might create smaller positive trajectory changes because there are so many possibilities and many trajectory changes do not have significance that is predictable in advance. No one could have predicted the persistent ripple effects that Jesus's life had, for example. In other cases—such as the framing of the US Constitution—it's clear that a decision has trajectory change potential, but it would be hard to specify, in advance, which concrete measures should be taken. In general, it seems that the worse you are at predicting some phenomenon that is critical to your plans, the less your plans should depend on specific predictions about that phenomenon. Because of this, promising ways to create positive trajectory changes in the world may be more broad than the most promising ways of trying to reduce existential risk specifically. Improving education, improving parenting, improving science, improving our political system, spreading humanitarian values, or otherwise improving our collective wisdom as stewards of the future could, I believe, create many small, unpredictable positive trajectory changes.

I do not mean to suggest that broad approaches are necessarily best, only that people interested in shaping the far future should take them more seriously than they currently do. The way I see the trade-off between highly targeted strategies and highly broad strategies is as follows. Highly targeted strategies for shaping the far future often depend on highly speculative plans, often with many steps, which are hard to execute. We often have very little sense of whether we are making valuable progress on AI risk research or geo-engineering research. On the other hand, highly broad strategies must rely on implicit assumptions about the ripple effects of doing good in more ordinary ways. It is very subtle and speculative to say how ordinary actions are related to positive trajectory changes, and estimating magnitudes seems extremely challenging. Considering these trade-offs in specific cases seems like a promising area for additional research.

Summary

In this post, I argued that:

  1. The astronomical waste argument becomes strictly more plausible if we replace the idea of minimizing existential risk with the idea of creating positive trajectory changes.
  2. There are many ways in which our actions could unpredictably affect our general development trajectory, and therefore many ways in which our actions could shape the far future for the better. This is one reason to favor broad strategies for shaping the far future.

The trajectory change perspective may have other strategic implications for people who are concerned about maximizing humanity’s long-term potential. I plan to write about these implications in the future.[i]

Comment from Nick Bostrom on this post

[What follows is an e-mail response from Nick Bostrom. He suggested that I share his comment along with the post. Note that I added a couple of small clarifications to this post (noted above) in response to Bostrom's comment.]

One can arrive at a more probably correct principle by weakening, eventually arriving at something like 'do what is best' or 'maximize expected good'.  There the well-trained analytic philosopher could rest, having achieved perfect sterility.  Of course, to get something fruitful, one has to look at the world not just at our concepts.

Many trajectory changes are already encompassed within the notion of an existential catastrophe.  Becoming permanently locked into some radically suboptimal state is an xrisk.  The notion is more useful to the extent that likely scenarios fall relatively sharply into two distinct categories---very good ones and very bad ones.  To the extent that there is a wide range of scenarios that are roughly equally plausible and that vary continuously in the degree to which the trajectory is good, the existential risk concept will be a less useful tool for thinking about our choices.  One would then have to resort to a more complicated calculation.  However, extinction is quite dichotomous, and there is also a thought that many sufficiently good future civilizations would over time asymptote to the optimal track.

In a more extended and careful analysis there are good reasons to consider second-order effects that are not captured by the simple concept of existential risk.  Reducing the probability of negative-value outcomes is obviously important, and some parameters such as global values and coordination may admit of more-or-less continuous variation in a certain class of scenarios and might affect the value of the long-term outcome in correspondingly continuous ways.  (The degree to which these complications loom large also depends on some unsettled issues in axiology; so in an all-things-considered assessment, the proper handling of normative uncertainty becomes important.  In fact, creating a future civilization that can be entrusted to resolve normative uncertainty well wherever an epistemic resolution is possible, and to find widely acceptable and mutually beneficial compromises to the extent such resolution is not possible---this seems to me like a promising convergence point for action.)

It is not part of the xrisk concept or the maxipok principle that we ought to adopt some maximally direct and concrete method of reducing existential risk (such as asteroid defense): whether one best reduces xrisk through direct or indirect means is an altogether separate question.

 


[i] I am grateful to Nick Bostrom, Paul Christiano, Luke Muehlhauser, Vipul Naik, Carl Shulman, and Jonah Sinick for feedback on earlier drafts of this post.

Don't Build Fallout Shelters

27 katydee 07 January 2013 02:38PM

Related: Circular Altruism

One thing that many people misunderstand is the concept of personal versus societal safety. These concepts are often conflated despite the appropriate mindsets being quite different.

Simply put, personal safety is personal.

In other words, the appropriate actions to take for personal safety are whichever actions reduce your chance of being injured or killed within reasonable cost boundaries. These actions are largely based on situational factors because the elements of risk that two given people experience may be wildly disparate.

For instance, if you are currently a young computer programmer living in a typical American city, you may want to look at eating better, driving your car less often, and giving up unhealthy habits like smoking. However, if you are currently an infantryman about to deploy to Afghanistan, you may want to look at improving your reaction time, training your situational awareness, and practicing rifle shooting under stressful conditions.

One common mistake is to attempt to preserve personal safety for extreme circumstances such as nuclear wars. Some individuals invest sizeable amounts of money into fallout shelters, years' worth of emergency supplies, etc.

While it is certainly true that a nuclear war would kill or severely disrupt you if it occurred, this is not necessarily a fully convincing argument in favor of building a fallout shelter. One has to consider the cost of building a fallout shelter, the chance that your fallout shelter will actually save you in the event of a nuclear war, and the odds of a nuclear war actually occurring.

Further, one must consider the quality of life reduction that one would likely experience in a post-nuclear war world. It's also important to remember that, in the long run, your survival is contingent on access to medicine and scientific progress. Future medical advances may even extend your lifespan very dramatically, and potentially provide very large amounts of utility. Unfortunately, full-scale nuclear war is very likely to impair medicine and science for quite some time, perhaps permanently.

Thus even if your fallout shelter succeeds, you will likely live a shorter and less pleasant life than you would otherwise. In the end, building a fallout shelter looks like an unwise investment unless you are extremely confident that a nuclear war will occur shortly-- and if you are, I want to see your data!
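A minimal expected-value sketch of the considerations above, with every figure invented for illustration (the cost, war probability, survival odds, and post-war quality of life are all placeholders):

    # All figures are illustrative placeholders, not estimates.
    shelter_cost        = 0.02   # cost of the shelter, as a fraction of a normal remaining life's value
    p_nuclear_war       = 0.01   # chance of full-scale nuclear war during your remaining lifetime
    p_shelter_saves_you = 0.30   # chance the shelter actually saves you, given a war
    post_war_life_value = 0.40   # value of a post-war life relative to a normal one (shorter, harsher)

    # Expected value, in units of "one normal remaining life".
    ev_without_shelter = (1 - p_nuclear_war) * 1.0
    ev_with_shelter = ((1 - p_nuclear_war) * 1.0
                       + p_nuclear_war * p_shelter_saves_you * post_war_life_value
                       - shelter_cost)

    print(f"without: {ev_without_shelter:.4f}  with: {ev_with_shelter:.4f}")
    # With these placeholders the expected benefit (0.01 * 0.3 * 0.4 = 0.0012)
    # is smaller than the cost (0.02), matching the post's conclusion.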

When taking personal precautionary measures, worrying about such catastrophes is generally silly, especially given the risks we all take on a regular basis-- risks that, in most cases, are much easier to avoid than nuclear wars. Societal disasters are generally extremely expensive for the individual to protect against, and carry a large amount of disutility even if protections succeed.

To make matters worse, if there's a nuclear war tomorrow and your house is hit directly, you'll be just as dead as if you fall off your bike and break your neck. Dying in a more dramatic fashion does not, generally speaking, produce more disutility than dying in a mundane fashion does. In other words, when optimizing for personal safety, focus on accidents, not nuclear wars; buy a bike helmet, not a fallout shelter.

The flip side to this, of course, is that if there is a full-scale nuclear war, hundreds of millions-- if not billions-- of people will die and society will be permanently disrupted. If you die in a bike accident tomorrow, perhaps a half dozen people will be killed at most. So when we focus on non-selfish actions, the big picture is far, far, far more important. If you can reduce the odds of a nuclear war by one one-thousandth of one percent, more lives will be saved on average than if you can prevent hundreds of fatal accidents.
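The arithmetic behind that comparison, as a quick sketch with a rough placeholder death toll:

    # Rough placeholder figures for the scale comparison in the paragraph above.
    deaths_in_full_scale_nuclear_war = 1_000_000_000  # "hundreds of millions -- if not billions"
    risk_reduction = 0.001 / 100                      # one one-thousandth of one percent

    expected_lives_saved = deaths_in_full_scale_nuclear_war * risk_reduction
    print(expected_lives_saved)  # 10,000 expected lives -- versus a few hundred from preventing hundreds of accidents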

When optimizing for overall safety, focus on the biggest possible threats that you can have an impact on. In other words, when dealing with societal-level risks, your projected impact will be much higher if you try to focus on protecting society instead of protecting yourself.

In the end, building fallout shelters is probably silly, but attempting to reduce the risk of nuclear war sure as hell isn't. And if you do end up worrying about whether a nuclear war is about to happen, remember that if you can reduce the risk of said war-- which might be as easy as making a movie-- your actions will have a much, much greater overall impact than building a shelter ever could.

Existential Risk

28 lukeprog 15 November 2011 02:23PM

This is a "basics" article, intended for introducing people to the concept of existential risk.

On September 26, 1983, Soviet officer Stanislav Petrov saved the world.

Three weeks earlier, Soviet interceptors had shot down a commercial jet, thinking it was on a spy mission. All 269 passengers were killed, including sitting U.S. congressman Lawrence McDonald. Earlier that year, President Reagan had called the Soviet Union an “evil empire.” It was one of the most intense periods of the Cold War.

Just after midnight on September 26, Petrov sat in a secret bunker, monitoring early warning systems. He did this only twice a month, and it wasn’t his usual shift; he was filling in for the shift crew leader.

One after another, five missiles from the USA appeared on the screen. A siren wailed, and the words "ракетном нападении" ("Missile Attack") appeared in red letters. Petrov checked with his crew, who reported that all systems were operating properly. The missiles would reach their targets in Russia in mere minutes.

Protocol dictated that he press the flashing red button before him to inform his superiors of the attack so they could decide whether to launch a nuclear counterattack. More than 100 crew members stood in silence behind him, awaiting his decision.

"I thought for about a minute," Petrov recalled. "I thought I’d go crazy... It was as if I was sitting on a bed of hot coals."

continue reading »

Be a Visiting Fellow at the Singularity Institute

26 AnnaSalamon 19 May 2010 08:00AM

Now is the very last minute to apply for a Summer 2010 Visiting Fellowship.  If you’ve been interested in SIAI for a while, but haven’t quite managed to make contact -- or if you’re just looking for a good way to spend a week or more of your summer -- drop us a line.  See what an SIAI summer might do for you and the world. 

(SIAI’s Visiting Fellow program brings volunteers to SIAI for anywhere from a week to three months, to learn, teach, and collaborate.  Flights and room and board are covered.  We’ve been rolling since June of 2009, with good success.)

Apply because:

  • SIAI is tackling the world’s most important task -- the task of shaping the Singularity.  The task of averting human extinction. We aren’t the only people tackling this, but the total set is frighteningly small.
  • When numbers are this small, it’s actually plausible that you can tip the balance.
  • SIAI has some amazing people to learn from -- many report learning and growing more here than in any other period of their lives.
  • SIAI also has major gaps, and much that desperately needs doing but that we haven’t noticed yet, or have noticed but haven’t managed to fix -- gaps where your own skills, talents, and energy can come into play.

continue reading »

Call for new SIAI Visiting Fellows, on a rolling basis

29 AnnaSalamon 01 December 2009 01:42AM

Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF bay area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.

Now, the new and better version has arrived.  We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long term life paths.  Working with this crowd transformed my world; it felt like I was learning to think.  I wouldn’t be surprised if it can transform yours.

continue reading »

Optimal Strategies for Reducing Existential Risk

3 FrankAdamek 31 August 2009 03:52PM

I've been talking to a variety of people about this recently, and it was suggested that people (including myself) might benefit from a LessWrong discussion on the topic. I've been thinking about it on my own for a year, which took me through Neuroscience, Computer Science, and International Security Policy. I'm hoping and finding that through discussion, a much greater variety of options can be proposed and considered, and those with particular experience or observations can have others benefit from their knowledge. I've been very happy to find there are a number of people seriously working towards this already (still far fewer than we might need), and their deliberations and learning would be particularly valuable.

This is primarily about careers and other long term focused efforts (academic research and writing on the side, etc), not smaller incremental tools such as motivation and akrasia discussions. Where you should be applying your efforts, not how (much). Unless there's a lot of interest, it might also be good to otherwise avoid discussions on self-improvement in general and how to best realize these long term concerns, bringing those up elsewhere or in a separate post.

A few initial thoughts:

  • What's the marginal value of one's own contribution? How much less would the "next person" be able to accomplish, i.e. the person who would get that job if you didn't?
  • What are efforts that provide compounding benefit over time, rather than a constant static contribution? Most things are likely to provide some compounding benefit, but where is this most significant?
  • We may not need or desire to do a lot of discussion on this, but it's worth keeping in mind that as time goes on and we learn more, collectively and individually, charitable donations could become much more effective. This suggests that it may be best to save money until better applications are discovered, to be balanced of course by the compounding effects earlier donations can have (see the sketch after this list). See Alan Dawrst's paper on the value of making money, which besides being a worthwhile read has a point about this in the 3rd note. If others know of more thorough treatments of this, please mention them.
  • What are the particular existential risks that have the largest expected marginal reduction for a given amount of effort? What dangers are particularly underserved, by society in general and by this community? Is there more marginal use in focusing on particular dangers or in efforts that reduce all of them at once (such as better general institutional treatment of them)?
  • How likely is it that an increased effort to avoid risks will itself increase the danger of certain risks? If this is considerably likely and significant, can it be avoided, and how?
  • How should we assess the risk of various efforts? Some efforts might provide small but very likely reductions in risk, and in others you may be likely to accomplish nothing, but a success could be immensely helpful.
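As a rough sketch of the "donate now vs. save and donate later" trade-off raised above (the growth rates and horizon are invented, and the sketch ignores the chance of discovering better giving opportunities later):

    # Invented numbers: compare donating now (the funded work compounds -- recruitment,
    # research building on research) against investing and donating the larger sum later.
    donation          = 1000
    years             = 10
    investment_return = 0.05   # annual return on money you save and donate later
    work_compounding  = 0.10   # assumed annual compounding of work funded today

    donate_later = donation * (1 + investment_return) ** years
    donate_now   = donation * (1 + work_compounding) ** years

    print(f"donate later: {donate_later:.0f}   donate now: {donate_now:.0f}")
    # Whichever rate is higher wins; the hard part is estimating the work's compounding rate.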

Cookies vs Existential Risk

8 FrankAdamek 30 August 2009 03:56AM

I've been thinking for a while now about the possible trade-offs between present recreation and small reductions in existential risk, and I've finally gotten around to a (consequentialist) utilitarian analysis.

ETA: Most of the similar mathematical treatments I've seen assume a sort of duty to unrealized people, such as Bostrom's "Astronomical Waste" paper. In addition to avoiding that assumption, my aim was to provide a more general formula for someone to use, in which they can enter differing beliefs and hypotheses. Lastly I include 3 examples using widely varying ideas, and explore the results.

Let's say that you've got a mind to make a batch of cookies. That action has a certain amount of utility, from the process itself and/or the delicious cookies. But it might lessen (or increase) the chances of you reducing existential risk, and hence affect the chance of existential disaster itself. Now if these cookies will help x-risk reduction efforts (networking!) and be enjoyable, the decision is an easy one. Same thing if they'll hurt your efforts and you hate making, eating, and giving away cookies. Any conflict arises when cookie making/eating is in opposition to x-risk reduction. If you were sufficiently egoist then risk of death would be comparable to existential disaster, and you should consider the two risks together. For readability I’ll refer simply to existential risk.

The question I'll attempt to answer is: what reduction in the probability of existential disaster makes refraining from an activity an equally good choice in terms of expected utility? If you think that by refraining and doing something else you would reduce the risk at least that much, then rationally you should pursue the alternative. If refraining would cut risk by less than this value, then head to the kitchen.
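The full post works the formula out; as a rough sketch of the kind of break-even calculation being described (the quantities and numbers below are my own illustrative placeholders, not the author's):

    # Break-even sketch: how much must refraining reduce the probability of existential
    # disaster to match the utility of just making the cookies? All values are placeholders.
    utility_of_cookies     = 1.0         # utility of making and eating the cookies
    utility_of_good_future = 1_000_000   # utility you assign to the future, conditional on no disaster

    # Refraining is worth it when risk_reduction * utility_of_good_future >= utility_of_cookies.
    break_even_risk_reduction = utility_of_cookies / utility_of_good_future
    print(break_even_risk_reduction)  # 1e-06: cut the risk by more than this and you should skip the cookies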

continue reading »
