I've just released a Future of Humanity Institute technical report, written as part of the Global Priorities Project.
This article is about priority-setting for work aiming to reduce existential risk. Its chief claim is that, all else being equal, we should prefer to do work earlier and prefer to work on risks that might come early. This is because we are uncertain about when we will have to face different risks, because we expect diminishing returns from extra work, and because we expect that more people will work on these risks in the future.
I explore this claim both qualitatively and with explicit models. I consider its implications for two questions: first, “When is it best to do different kinds of work?”; second, “Which risks should we focus on?”.
As a major application, I look at the case of risk from artificial intelligence. The best strategies for reducing this risk depend on when the risk is coming. I argue that we may be underinvesting in scenarios where AI comes soon even though these scenarios are relatively unlikely, because we will not have time later to address them.
You can read the full report here: Allocating risk mitigation across time.
I'm pleased to announce Existential Risk and Existential Hope: Definitions, a short new FHI technical report.
We look at the strengths and weaknesses of two existing definitions of existential risk, and suggest a new definition based on expected value. This leads to a parallel concept: ‘existential hope’, the chance of something extremely good happening.
As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself.
Our idea was to create a hub on the US East Coast to bring together people who care about x-risk and the future of life. FLI is currently run entirely by volunteers, and is based on brainstorming meetings where the members come together and discuss active and potential projects. The attendees are a mix of local scientists, researchers and rationalists, which results in a diversity of skills and ideas. We also hold more narrowly focused meetings where smaller groups work on specific projects. We have projects in the pipeline ranging from improving Wikipedia resources related to x-risk, to bringing together AI researchers in order to develop safety guidelines and make the topic of AI safety more mainstream.
Max has assembled an impressive advisory board that includes Stuart Russell, George Church and Stephen Hawking. The advisory board is not just for prestige - the local members attend our meetings, and some others participate in our projects remotely. We consider ourselves a sister organization to FHI, CSER and MIRI, and touch base with them often.
We recently held our launch event, a panel discussion "The Future of Technology: Benefits and Risks" at MIT. The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn. The discussion covered a broad range of topics from the future of bioengineering and personal genetics, to autonomous weapons, AI ethics and the Singularity. A video and transcript are available.
FLI is a grassroots organization that thrives on contributions from awesome people like the LW community - here are some ways you can help:
- If you have ideas for research or outreach we could be doing, or improvements to what we're already doing, please let us know (in the comments to this post, or by contacting me directly).
- If you are in the Boston area and are interested in getting involved, you are especially encouraged to get in touch with us!
- Support in the form of donations is much appreciated. (We are grateful for seed funding provided by Jaan Tallinn and Matt Wage.)
- This is not a duplicate of the original Less Wrong x-risk primer. I like lukeprog's article just fine, but it works mostly as a punch in the gut for anyone who needs a wake up call. Very little of the actual research on x-risk is discussed in that article, so the gap that was there before it was published was largely there after. My article and his would work well being read together.
- This was originally written to accompany a presentation I gave, hence the random inclusion of both hyperlinks and citations. It also lives, with minor differences, here.
- Summary: For various reasons the future is scarier than a lot of people realize. All sorts of things could lead to the destruction of the human species, ranging from asteroid impacts to runaway AIs, and these things are united by the fact that any one of them could destroy the value of the future from a human perspective. The dangers can be separated into bangs (very sudden extinction), crunches (not fatal but crippling), shrieks (mostly curse with a little blessing), and whimpers (a long, slow fading), though there is nothing sacred about these categories. Some humans are trying to prevent this, though their methods are still in their infancy. Much more should be done to support them.
In the beginning
I want to start this off with a quote, which nicely captures both how I used to feel about the idea of human extinction and how I feel about it now:
I think many atheists still trust in God. They say there is no God, but …[a]sk them how they think the future will go, especially with regards to Moral Progress, Human Evolution, Technological Progress, etc. There are a few different answers you will get: Some people just don’t know or don’t care. Some people will tell you stories of glorious progress… The ones who tell stories are the ones who haven’t quite internalized that there is no god. The people who don’t care aren’t paying attention. The correct answer is not nervous excitement, or world-weary cynicism, it is fear. -Nyan Sandwich
Back when I was a Christian I gave some thought to the rapture, which is not entirely unlike extinction as far as most ten-year-olds can tell. Sometime during this period I found a slim little book of fiction which portrayed a damned soul's experience of burning in hell forever, and that did scare me. Such torment, as luck would have it, is easy enough to avoid if you just call god the right name and ask forgiveness often enough.
When I was old enough to contemplate possible secular origins of the apocalypse, I was both an atheist and one of the people who tell glorious stories about the future. The potential fruits of technological development, from the end of aging to the creation of a benevolent super-human AI, excited me, and still excite me now. No doubt I would've admitted the possibility of human extinction; I don't really remember. But there wasn't the kind of internal siren that should go off when you start thinking seriously about one of the Worst Possible Outcomes. That I would remember.
But as I've gotten older I've come to appreciate that most of us are not afraid enough of the future. Those who are afraid are often afraid for the wrong reasons.
What is an Existential Risk?
An existential risk or x-risk (to use a common abbreviation) is "...one that threatens to annihilate Earth-originating intelligent life or permanently and drastically to curtail its potential" (Bostrom 2006). The definition contains some subtlety, as not all x-risks involve the outright death of every human. Some could take eons to play out, and some are even survivable. Positioning x-risks within the broader landscape of risks yields something like this chart:
At the top right extreme is where Cthulhu sleeps: risks that carry the potential to drastically and negatively affect this and every subsequent human generation. So as not to keep everyone in suspense, let's use this chart to put a face on the shadows.
Four Types of Existential Risks
Philosopher Nick Bostrom has outlined four broad categories of x-risk. In more recent papers he hasn't used the terminology that I'm using here, so maybe he thinks the names are obsolete. I find them evocative and useful, however, so I'll stick with them until I have a reason to change.
Bangs are probably the easiest risks to conceptualize. Any event which causes the sudden and complete extinction of humanity would count as a Bang. Think asteroid impacts, supervolcanic eruptions, or intentionally misused nanoweapons.
Crunches are risks which humans survive but which leave us permanently unable to navigate to a more valuable future. An example might be depleting our planetary resources before we manage to build the infrastructure needed to mine asteroids or colonize other planets. After all the die-offs and fighting, some remnant of humanity could probably survive indefinitely, but it wouldn't be a world you'd want to wake up in.
Shrieks occur when a post-human civilization develops but only manages to realize a small amount of its potential. Shrieks are very difficult to effectively categorize, and I'm going to leave examples until the discussion below.
Whimpers are really long-term existential risks. The most straightforward is the heat death of the universe; within our current understanding of physics, no matter how advanced we get we will eventually be unable to escape the ravages of entropy. Another could be if we encounter a hostile alien civilization that decides to conquer us after we've already colonized the galaxy. Such a process could take a long time, and thus would count as a whimper.
Just because whimpers are so much less immediate than other categories of risk and x-risk doesn't automatically mean we can just ignore them; it has been argued that affecting the far future is one of the most important projects facing humanity, and thus we should take the time to do it right.
Sharp readers will no doubt have noticed that there is quite a bit of fuzziness to these classifications. Where, for example, should we put all-out nuclear war, the establishment of an oppressive global dictatorship, or the development of a dangerous and uncontrollable superintelligent AI? If everyone dies in the war it counts as a bang, but if it makes a nightmare of the biosphere while leaving a good fraction of humanity intact it would be a crunch. A global dictatorship wouldn't be an x-risk unless it used some (probably technological) means to achieve near-total control and long-term stability, in which case it would be a crunch. But it isn't hard to imagine such a situation in which some parts of life did get better, like if a violently oppressive government continued to develop advanced medicines so that citizens were universally healthier and longer-lived than people today. If that happened, it would be a Shriek. A similar analysis applies to the AI, with the possible outcomes being Bang, Crunch, and Shriek depending on just how badly we misprogrammed it.
What Ties These Threads Together?
Even if you think existential threats deserve more attention, the rationale for treating them as a diverse but unified phenomenon may not be obvious. In addition to the crucial but (relatively) straightforward work of, say, tracking Near-Earth Objects (NEOs), existential risk researchers also think seriously about alien invasions and rogue AIs. With such a range of speculativeness, why group x-risks together at all?
It turns out that they share a cluster of features which gives them some cohesion and makes them worth studying under a single label, not all of which I discuss here. First and most obvious is that should any of them occur, the consequences would be truly vast relative to any other kind of risk. To see why, think about the difference between a catastrophe that kills 99% of humanity and one that kills 100%. As big a tragedy as the former would be, there's a chance humans could recover and build a post-human civilization. But if every person dies, then the entire value of our future is lost (Bostrom 2013).
Second, these are not risks which admit of a trial and error approach. Pretty much by definition a collision with an x-risk will spell doom for humanity, and so we must be more proactive in our strategies for reducing them. Related to this, we as a species have neither the cultural nor biological instincts needed to prepare us for the possibility of extinction. A group of people might live through several droughts and thus develop strong collective norms towards planning ahead and keeping generous food reserves. But they cannot have gone extinct multiple times, and thus they can't rely on their shared experience and cultural memory to guide them in the future. I certainly hope we can develop a set of norms and institutions which makes us all safer, but we can't wait to learn from history. We're going to have to start well in advance, or we won't survive.
A final commonality I'll mention is that the solutions to quite a number of x-risks are themselves x-risks. A powerful enough government could effectively halt research into dangerous pathogens or nano-replicators. But given how States have generally comported themselves in the past, one would do well to be cautious before investing them with that kind of power. Ditto for a superhuman AI, which could set up an infrastructure to protect us from asteroids, nuclear war, or even other less Friendly AI. Get the coding just a little wrong, though, and it might reuse your carbon to make paperclips.
It is indeed a knife edge along which we creep towards the future.
Measuring the Monsters
A first step is getting straight about how likely survival is. The reader may have encountered predictions of the "we have only a 50% chance of surviving the next hundred years" variety. Examining the validity of such estimates is worth doing, but I won't be taking up that challenge here; I tend to agree that these figures involve a lot of subjective judgement, but that even if the chances were very small it would still be worth taking seriously (Bostrom 2006). At any rate, it seems to me that trying to calculate an overall likelihood of human extinction is going to be premature before we've nailed down probabilities for some of the different possible extinction scenarios. It is to the techniques which x-risk researchers rely on to try and do this that I now turn.
X-risk-assessments rely on both direct and indirect methods (Bostrom 2002). Using a direct method involves building a detailed causal model of the phenomenon and using that to generate a risk probability, while indirect methods include arguments, thought experiments, and information that we use to constrain and refine our guesses.
As far as I know, for some x-risks we could use direct methods if we just had a way to gather the relevant information. If we knew where all the NEOs were, we could use settled physics to predict whether any of them posed a threat and then prioritize accordingly. But we don't know where they all are, so we might instead examine the frequency of impacts throughout the history of the Earth and then reason about whether or not we think an impact will happen soon. It would be nice to use direct methods exclusively, but we supplement with indirect methods when we can't, and of course for x-risks like AI we are in an even more uncertain position than we are for NEOs.
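The frequency-based reasoning above can be turned into a toy calculation. If we assume impacts arrive as a Poisson process at a constant average rate inferred from the geological record (the rate below is purely illustrative, not a real estimate), a sketch looks like this:

```python
import math

def impact_probability(rate_per_year, horizon_years):
    """P(at least one impact within the horizon), assuming impacts
    follow a Poisson process with a constant average rate."""
    expected_impacts = rate_per_year * horizon_years
    return 1.0 - math.exp(-expected_impacts)

# Illustrative only: suppose the record suggested one
# civilization-threatening impact per 100 million years on average.
p = impact_probability(1e-8, 100)
print(f"Chance of such an impact in the next century: {p:.1e}")
```

The model is crude (impact rates need not be constant), but it shows how a historical frequency plus an explicit statistical assumption yields a number we can argue about, which is exactly what the indirect methods are for.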
The Fermi Paradox
Applying indirect methods can lead to some strange and counter-intuitive territory, an example of which is the mysteries surrounding the Fermi Paradox. The central question is: in a universe with so many potential hotbeds of life, why is it that when we listen for stirring in the void all we hear is silence? Many feel that the universe must be teeming with life, some of it intelligent, so why haven't we seen any sign of it yet?
Observing this state of affairs, some have postulated the existence of at least one Great Filter: a step in the chain of development from the first organisms to space-faring civilizations that must be extremely hard to achieve. Musing about possible solutions to the Fermi Paradox can be a lot of fun, and it's worth pointing out that we haven't been looking that long or that hard for signals yet. Nevertheless, I think the Great Filter argument has some meat to it.
This is cause for concern because the Great Filter could be in front of us or behind us. Let me explain: imagine a continuum with the simplest self-replicating molecules on one side and the Star Trek Enterprise on the other. From our position on the continuum we want to know whether or not we have already passed one of the hardest steps, but we have only our own planet to look at. So imagine that we send out probes to thousands of different worlds in the hopes that we will learn something.
If we find lots of simple eukaryotes that means that the Great Filter is probably not before the development of membrane-bound organelles. The list of possible places on the continuum the Great Filter could be shrinks just a little bit. If instead we find lots of mammals and reptiles (or creatures that are very different but about as advanced), that means the Great Filter is probably not before the rise of complex organisms, so the places the Great Filter might be hiding shrinks again. Worst of all would be if we find the dead ruins of many different advanced civilizations. This would imply that the real killer is yet to come, and we will almost certainly not survive it.
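The probe thought experiment can be sketched as a crude Bayesian update. The stage names and the uniform prior below are my own illustrative assumptions for the example, not anything from the Great Filter literature:

```python
# Toy model of the probe thought experiment: a uniform prior over where
# a single Great Filter sits among developmental stages, updated when
# probes find life that has already passed some stage on many worlds.
STAGES = ["abiogenesis", "eukaryotes", "complex life",
          "intelligence", "spacefaring civilization"]

def update_filter_prior(prior, observed_stage_index):
    """If life commonly reaches stage i on other worlds, the filter is
    unlikely to lie at or before stage i; zero those out, renormalize."""
    posterior = [0.0 if i <= observed_stage_index else p
                 for i, p in enumerate(prior)]
    total = sum(posterior)
    return [p / total for p in posterior]

prior = [1 / len(STAGES)] * len(STAGES)
# Probes find lots of complex organisms (index 2) on other worlds:
posterior = update_filter_prior(prior, 2)
# All remaining probability mass now sits on the later, scarier stages.
```

The "worst of all" case in the paragraph above corresponds to observing ruins at the final stage: nearly every candidate location for the Filter gets eliminated except the ones still ahead of us.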
As happy as many people would be to discover evidence of life in the universe, a case has been made that we should hope to find only barren rocks waiting for us in the final frontier. If not even simple bacteria evolve on most worlds, then there is still a chance that the Great Filter is behind us, and we can worry only about the new challenges ahead, which may or may not be Filters as great as the ones in the past.
If all this seems really abstract and out there, that's because it is. But I hope it is clear how this sort of thinking can help us interpret new data, make better guesses, form new hypotheses, and so on. When dealing with stakes this high and information this limited, we must do the best we can with what's available.
What priority should we place on reducing existential risk and how can we do that? I don't know of anyone who thinks all our effort should go towards mitigating x-risks; there are lots of pressing issues which are not x-risks that are worth our attention, like abject poverty or geopolitical instability. But I feel comfortable saying we aren't doing nearly as much as we should be. Given the stakes and the fact that there probably won't be a second chance we are going to have to meet x-risks head on and be aggressively proactive in mitigating them.
Suppose we taboo 'aggressively proactive'; what's left? Well the first step, as it so often is, will be just to get the right people to be aware of the problem (Bostrom 2002). Thankfully this is starting to happen as more funding and brain power go into existential risk reduction. We have to get to a point where we are spending at least as much time, energy, and effort making new technology safe as we do making it more powerful. More international cooperation on these matters will be necessary, and there should be some sort of mechanism by which efforts to develop existentially-threatening technologies like super-virulent pathogens can be stopped. I don't like recommending this at all, but almost anything is preferable to extinction.
In the meantime both research that directly reduces x-risk (like NEO detection), as well as research that will help elucidate deep and foundational issues in x-risk (FHI and MIRI) should be encouraged. It's a stereotype that research papers always end with a call for more research, but as was pointed out by lukeprog in a talk he gave, there's more research done on lipstick than on friendly AI. This generalizes to x-risk more broadly, and represents the truly worrying state of our priorities.
Though I maintain we should be more fearful of what's to come, that should not obscure the fact that the human potential is vast and truly exciting. If the right steps are taken, we and our descendants will have a future better than most can even dream of. Life spans measured in eons could be spent learning and loving in ways our terrestrial languages don't even have words for yet. The vision of a post-human civilization flinging its trillions of descendants into the universe to light up the dark is tremendously inspiring. It's worth fighting for.
But we have much work ahead of us.
12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. Should be done by the end of the week or sooner.
- I am not affiliated with the Singularity Institute for Artificial Intelligence.
- I have not donated to the SIAI prior to writing this.
- I made this pledge prior to writing this document.
- Images are now hosted on LessWrong.com.
- The 2010 Form 990 data will be available later this month.
- It is not my intent to propagate misinformation. Errors will be corrected as soon as they are identified.
Acting on gwern's suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and more recently the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more.
I wrote on the mini-boot camp page a pledge that I would donate enough to send someone to rationality mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn’t sure. I read gwern’s article and realized that I could easily get more information to clarify my thinking.
So I downloaded the SIAI’s Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it.
My goal is not to convince you to donate to the SIAI. My goal is to provide you with information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.
Most of us want to make the world a better place. But what should we do if we want to generate the most positive impact possible? It’s definitely not an easy problem. Lots of smart, talented people with the best of intentions have tried to end war, eliminate poverty, cure disease, stop hunger, prevent animal suffering, and save the environment. As you may have noticed, we’re still working on all of those. So the track record of people trying to permanently solve the world's biggest problems isn’t that spectacular. This isn’t just a “look to your left, look to your right, one of you won’t be here next year”-kind of thing, this is more like “behold the trail of dead and dying who line the path before you, and despair”. So how can you make your attempt to save the world turn out significantly better than the generations of others who've tried this already?
It turns out there actually are a number of things we can do to substantially increase our odds of doing the most good. Here's a brief summary of some of the most crucial considerations that one needs to take into account when soberly approaching the task of doing the most good possible (aka "saving the world").
1. Patch your moral intuition (with math!) - Human moral intuition is really useful. But it tends to fail us at precisely the wrong times -- like when a problem gets too big [“millions of people dying? *yawn*”] or when it involves uncertainty [“you can only save 60% of them? call me when you can save everyone!”]. Unfortunately, these happen to be the defining characteristics of the world’s most difficult problems. Think about it. If your standard moral intuition were enough to confront the world’s biggest challenges, they wouldn’t be the world’s biggest challenges anymore... they’d be “those problems we solved already cause they were natural for us to understand”. If you’re trying to do things that have never been done before, use all the tools available to you. That means setting aside your emotional numbness by using math to feel what your moral intuition can’t. You can also do better by acquainting yourself with some of the more common human biases. It turns out your brain isn't always right. Yes, even your brain. So knowing the ways in which it systematically gets things wrong is a good way to avoid making the most obvious errors when setting out to help save the world.
2. Identify a cause with lots of leverage - It's noble to try and save the world, but it's ineffective and unrealistic to try and do it all on your own. So let's start out by joining forces with an established organization that's already working on what you care about. Seriously, unless you're already ridiculously rich + brilliant or ludicrously influential, going solo or further fragmenting the philanthropic world by creating US-Charity#1,238,202 is almost certainly a mistake. Now that we're all working together here, let's keep in mind that only a few charitable organizations are truly great investments -- and the vast majority just aren't. So maximize your leverage by investing your time and money into supporting the best non-profits with the largest expected pay-offs.
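The "patch your intuition with math" point in item 1 can be made concrete with a minimal expected-value comparison. The numbers here are invented for illustration: a 60% chance of saving 3,000 people beats a certainty of saving 1,000 in expectation, even though intuition often balks at the uncertainty.

```python
def expected_lives_saved(probability, lives):
    """Expected value: weight each outcome by its probability."""
    return probability * lives

sure_thing = expected_lives_saved(1.0, 1000)  # the intuitively comfortable choice
gamble = expected_lives_saved(0.6, 3000)      # the mathematically better one

# 1800 expected lives versus 1000: the "call me when you can save
# everyone" reflex leaves 800 expected lives on the table.
print(sure_thing, gamble)
```

This is exactly the kind of case where scope insensitivity and ambiguity aversion push against the arithmetic, which is why the arithmetic is worth doing explicitly.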
Summary: Whenever human beings seek to achieve goals far beyond their individual ability, they use leverage of some kind or another. Creating organizations to achieve goals is a very powerful source of leverage. However, due to their nature, organizations are imperfect levers, and the primary purpose is often lost. The inertia of present forms and processes dominates beyond its useful period. The present system of the world has many such imperfect organizations in power, and any of them developing near-general intelligence without significant redesign of their utility function could be a source of existential risk or values risk.
Major update here.
Related to: Should I believe what the SIAI claims?
... pointing out that something scary is possible, is a very different thing from having an argument that it’s likely. — Ben Goertzel
What I ask for:
I want the SIAI or someone who is convinced of the Scary Idea1 to state concisely and mathematically (and with possible extensive references if necessary) the decision procedure that led them to make the development of friendly artificial intelligence their top priority. I want them to state the numbers of their subjective probability distributions2 and exemplify their chain of reasoning: how they came up with those numbers and not others by way of sober calculations.
The paper should also account for the following uncertainties:
- Comparison with other existential risks and how catastrophic risks from artificial intelligence outweigh them.
- Potential negative consequences3 of slowing down research on artificial intelligence (a risks and benefits analysis).
- The likelihood of a gradual and controllable development versus the likelihood of an intelligence explosion.
- The likelihood of unfriendly AI4 versus friendly and respectively abulic5 AI.
- Whether superhuman intelligence and cognitive flexibility alone constitute a serious risk, given the absence of enabling technologies like advanced nanotechnology.
- The feasibility of “provably non-dangerous AGI”.
- The disagreement of the overwhelming majority of scientists working on artificial intelligence.
- That some people who are aware of the SIAI’s perspective do not accept it (e.g. Robin Hanson, Ben Goertzel, Nick Bostrom, Ray Kurzweil and Greg Egan).
- Possible conclusions that can be drawn from the Fermi paradox6 regarding risks associated with superhuman AI versus other potential risks ahead.
Further I would like the paper to include and lay out a formal and systematic summary of what the SIAI expects researchers who work on artificial general intelligence to do and why they should do so. I would like to see a clear logical argument for why people working on artificial general intelligence should listen to what the SIAI has to say.
Here are two examples of what I'm looking for:
The first example is Robin Hanson demonstrating his estimation of the simulation argument. The second example is Tyler Cowen and Alex Tabarrok presenting the reasons for their evaluation of the importance of asteroid deflection.
I'm wary of using inferences derived from reasonable but unproven hypotheses as foundations for further speculative thinking and calls for action. Although the SIAI does a good job of stating reasons to justify its existence and monetary support, it neither substantiates its initial premises to an extent that an outsider could draw conclusions about the probability of the associated risks, nor clarifies its position regarding contemporary research in a concise and systematic way. Nevertheless such estimations are given, such as a high likelihood of humanity's demise if we develop superhuman artificial general intelligence without first defining, mathematically, how to prove the benevolence of the former. But those estimations are not derived in the open: no decision procedure is provided for how to arrive at the given numbers, and one cannot reassess the estimations without the necessary variables and formulas. This I believe is unsatisfactory; it lacks transparency and a foundational, reproducible corroboration of one's first principles. This is not to say that it is wrong to state probability estimates and update them given new evidence, but that although those ideas can very well serve as an urge to caution, they are not compelling without further substantiation.
1. If anyone actively trying to build advanced AGI succeeds, we're highly likely to cause an involuntary end to the human race.
2. Stop taking the numbers so damn seriously, and think in terms of subjective probability distributions [...], Michael Anissimov (existential.ieet.org mailing list, 2010-07-11)
3. Could being overcautious be itself an existential risk that might significantly outweigh the risk(s) posed by the subject of caution? Suppose that most civilizations err on the side of caution. This might cause them either to evolve much more slowly, so that the chance of a fatal natural disaster occurring before sufficient technology is developed to survive it rises to 100%, or to stop evolving at all, being unable to prove anything 100% safe before trying it and thus never taking the necessary steps to become less vulnerable to naturally existing existential risks. Further reading: Why safety is not safe
4. If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low.
5. Loss or impairment of the ability to make decisions or act independently.
6. The Fermi paradox provides the only data we can analyze that amounts to empirical criticism of concepts like the Paperclip maximizer, and of general risks from superhuman AIs with non-human values, without working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering. Given the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI is not the most dangerous existential risk that we should worry about.
Can we talk about changing the world? Or saving the world?
I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it. So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make. Given that existential risk is also one of the major themes of Overcoming Bias and of Eliezer's work, it's striking that we don't talk about it more here.
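The arithmetic behind that claim is worth spelling out. Both figures below are illustrative placeholders rather than estimates, but they show why even a tiny reduction in extinction probability dominates in expectation if the future could hold very many people:

```python
# Toy version of the expected-value argument for x-risk reduction.
# Both numbers are illustrative assumptions, not estimates.
future_people = 1e16   # potential future population if humanity survives
risk_reduction = 1e-6  # shaving one millionth off extinction probability

expected_future_lives = future_people * risk_reduction
print(f"{expected_future_lives:.0e} expected future lives")  # 1e+10
```

Anyone who rejects the conclusion has to attack one of the inputs (the value placed on future people, the size of the achievable risk reduction), not the multiplication.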
One reason of course was the bar, until yesterday, on talking about artificial general intelligence; another is the many who state plainly that they are not concerned about their contribution to humanity. But I think a third is that many of the things we might do to address existential risk, or other issues of concern to all humanity, get us into politics, and we've all had too much of a certain kind of argument about politics online that descends into a stale rehashing of talking points and point-scoring.
If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening. How can we use what we discuss here to be able to talk about politics without spiralling down the plughole?
I think it will help in several ways that we are largely a community of materialists and expected utility consequentialists. For a start, we are freed from the concept of "deserving" that dogs political arguments on inequality, on human rights, on criminal sentencing and so many other issues; while I can imagine a consequentialism that valued the "deserving" more than the "undeserving", I don't get the impression that's a popular position among materialists because of the Phineas Gage problem. We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.
For example, framed this way inequality of wealth is not justice or injustice. The consequentialist defence of the market recognises that because of the diminishing marginal utility of wealth, today's unequal distribution of wealth has a cost in utility compared to the same wealth divided equally, a cost that we could in principle measure given a wealth/utility curve, and goes on to argue that the total extra output resulting from this inequality more than pays for it.
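The diminishing-marginal-utility point can be illustrated with a standard logarithmic utility function; both the functional form and the wealth figures are assumptions chosen for the sake of the example:

```python
import math

def total_utility(wealths):
    """Sum of log utilities: a standard model in which each extra
    dollar matters less the richer you already are."""
    return sum(math.log(w) for w in wealths)

# Same total wealth, two distributions:
equal = total_utility([50_000, 50_000])
unequal = total_utility([99_000, 1_000])

# The even split yields more total utility; the consequentialist defence
# of the market must argue that the extra output produced under
# inequality more than pays for this gap.
print(equal > unequal)  # True
```

This is the "cost in utility" the paragraph refers to: it is measurable in principle once you commit to a wealth/utility curve, which is what makes the defence of the market an empirical argument rather than a moral axiom.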
However, I'm more confident of the need to talk about this question than I am of my own answers. There's very little we can do about existential risk that doesn't have to do with changing the decisions made by public servants, businesses, and/or large numbers of people, and all of these activities get us straight into the world of politics, as well as the world of going out and changing minds. There has to be a way for rationalists to talk about it and actually make a difference. Before we start to talk about specific ideas to do with what one does in order to change or save the world, what traps can we defuse in advance?