The Effective Altruism community has been an unexpected and pleasant surprise. I remember wishing there were a group out there that shared at least one of my ideals. Instead, I found one that shares three: global reduction of suffering, rationality, and longtermism. However, with each conference I attend, post I read on the forum, and organization I see created, I notice that most fall into a few distinct categories: global development/health, animal welfare, biosecurity, climate change, nuclear risk/global conflict, and AI Safety. Don't get me wrong, these are some of the most important areas one could possibly be working on (I'm currently focusing 90% of my energy on AI Safety myself). But I think there are at least five other areas that could benefit substantially from a small growth in interest.
Interplanetary Species Expansion
This might be the most surprising entry on the list. After all, space exploration is expensive and difficult. But very few people are actually working on how to change humanity from being a Single Point of Failure System. If we are serious about longtermism and truly decreasing x-risk, this might be one of the most crucial achievements needed. Almost every x-risk is greatly reduced by it, perhaps even AGI*. Since this will be a very slow process, the sooner it begins, the greater the reduction in risk. One comparatively low-cost research area is biospheres: how a separate ecosystem and climate could be created in complete isolation. And this can be studied right here on Earth. It's been decades since anyone attempted to create a closed ecological system, and advances here could even improve our chances of surviving on Earth if the climate proves inhospitable.
Life Extension
~100,000 people die from age-related diseases every day. ~100 billion people have died in our history. (Read that again.) Aging causes an immense amount of suffering, both to those who suffer from it for years and to those who must grieve. It also causes irrecoverable loss, and is perhaps the greatest tragedy that we treat as normal. If every death from a preventable disease like malaria is a tragedy, I do not see why deaths from other preventable causes are any less tragic. Even if you believe extending the human lifespan is not important, consider the case where you're wrong: if your perspective is incorrect, then ~100,000 more tragedies occur for every day we delay solving this.
Cryonics
This is related to Life Extension, but even more neglected, and probably even more impactful. The number of people actually working on cryonics to preserve human minds is easily below 100. A key advance by a single individual in research, technology, or organization could have an enormous impact. The reason for doing this goes back to the idea of irrecoverable loss of sentient minds. As with life extension, if you do not believe cryonics is important or even possible, consider the case where you're wrong. If one day we do manage to bring people back from suspended animation, I believe humanity will weep for all those who were needlessly thrown in the dirt or the fire: they are the ones for whom there is no hope, an irreversible tragedy. The main reason I think this isn't worked on more is that it is even "weirder" than most EA causes, despite making a good deal of sense.
Nanotechnology
A survey listed on the 80k website** places nanotechnology at a 5% chance of causing human extinction, the same as artificial superintelligence*** and four percentage points higher than nuclear war. Many do not seem to dispute the possible danger of nanoweapons. Many agree that nanoweapons are possible. Many agree that nanotechnology is advancing, even if it's no longer in the news. So, where are all the EAs tackling nanotech? Where are the organizations devoted to it? Where are the research institutions?**** Despite so many seeming to agree that this cause is important, there is a perplexing lack of pursuit.
Coordination Failures
Most of humanity's problems come from coordination failures. Nuclear war and proliferation is a coordination failure: everyone would be safer if there were no nukes in the world, and very few people (with some obvious exceptions in the current world) actually benefit from many entities having them. Climate change is partially a coordination failure: everyone wants the benefits of reducing it, but no one wants to be the only one footing the bill. A large amount of AGI risk will likely come from coordination failures: everyone will be so concerned about others building dangerous AGI that they will be incentivized to build dangerous AGI first. Finding fundamental ways to solve this would not only radically decrease x-risk, but would probably make everyone's lives unbelievably better. This is a big ask, though. Most attempts will likely fail, but I think even a 1-5% chance of success is worth far more effort than we currently give it.

We have already seen some achievements. As Eliezer Yudkowsky notes in Inadequate Equilibria, Kickstarter created a way for people to contribute to a project only if it received enough funding to actually be created, so that no one ended up wasting their own money. Nakamoto consensus created a way for strangers to transact and enforce agreements without the need for government coercion. These were insights from a few individuals, drawing inspiration from a wide variety of domains. It is likely there are many others waiting to be discovered.
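To make the Kickstarter example concrete, here is a minimal illustrative sketch of that kind of assurance contract (the class and names are my own invention, not taken from any real platform): pledges are only collected if the funding goal is reached, so no individual risks wasting money on a project that never happens.

```python
# Minimal sketch of an assurance contract, the Kickstarter-style mechanism
# described above. All names here are illustrative, not a real API.

class AssuranceContract:
    def __init__(self, goal: float):
        self.goal = goal      # funding threshold that must be reached
        self.pledges = {}     # backer -> amount pledged (nothing collected yet)

    def pledge(self, backer: str, amount: float) -> None:
        """Record a pledge; no money changes hands at this point."""
        self.pledges[backer] = self.pledges.get(backer, 0.0) + amount

    def settle(self) -> dict:
        """Collect pledges only if the goal is met; otherwise refund everyone.

        Because a pledge costs nothing unless the project actually happens,
        'contribute only if enough others do too' becomes a safe choice,
        which is how the mechanism routes around the coordination failure.
        """
        total = sum(self.pledges.values())
        if total >= self.goal:
            return {"funded": True, "collected": dict(self.pledges)}
        return {"funded": False, "collected": {}}


# Usage: three backers pledge toward a 100-unit goal.
contract = AssuranceContract(goal=100)
contract.pledge("alice", 40)
contract.pledge("bob", 35)
contract.pledge("carol", 30)
print(contract.settle())  # total is 105 >= 100, so the pledges are collected
```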
*I do not think AGI risk is prevented by having multiple human bases, but I think the high uncertainty about how an AGI might kill us all leaves a chance that other home worlds would be safe from it. This is contingent on (1) the AGI not wishing to expand exponentially, and (2) the AGI not being specifically interested in our extinction. All other x-risks I know of (nuclear war, climate change, bioweapons, etc.) are substantially reduced by having other bases.
**80k actually places AI risk closer to 10%, and nanoweapons much lower.
***I believe this is far too low for AGI.
****There are a few. But institutions such as the Center for Responsible Nanotechnology don't seem to have many people or much funding, and haven't published anything in years.
This is a very interesting line of argument that I wish were true, but I'm not sure it's very convincing as it stands. We can hypothesize about capabilities researchers who rely on making advances in AI in order to make a mark during their finite lifespans, or so that the AI can cure aging-related disease and save them from dying. But how many capabilities researchers are actually primarily motivated by these factors, such that solving aging would significantly move the needle in convincing them not to work on AI?
What's also missing is acknowledgement that some of the forces could push in the other direction: solving the diseases of old age could contribute to greater AI risk in various ways. Aubrey de Grey, for example, is a highly prominent figure in life extension and aging-related disease who was originally an AI capabilities researcher, and he only changed careers because he thought aging was both more neglected and more important.
Another possibility is that solving aging-related disease could extend the productive lifespan of capabilities researchers. John Carmack, for example, is a prodigious software engineer in his 50s who has recently decided to put all of his energy into AI capabilities research, and he is pushing on with this despite people trying to convince him of the risks[1]. Morbid and tasteless as it might sound, it's possible in principle that success in life extension and aging-related-disease research would give people like him enough additional productive and healthy years to become the creator of doom, whereas in worlds like ours, where such breakthroughs are not made, they are limited by when they are struck down by death or dementia.
Those are very small examples, but in any case it isn't obvious to me where things would balance out, considering the myriad complicated nth-order effects of such a massive change. You could speculate all day about these: maybe the sheer surplus of economic resources and growth from no longer suffering the massive human-capital loss and turnover caused by aging-related disease would mean significantly more resources going into capabilities research, speeding up timelines. There are plenty of ways things could go.
[1] Eliezer Yudkowsky has personally tried to convince him about AI risk without success. This despite Carmack being an HPMOR fan.