Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)


In the second paper, you mention radical negative utilitarians as a force that could be motivated to kill everyone, but similar considerations seem to apply to utilitarianism in general. Hedonistic utilitarians would want to convert the world into orgasmium (killing everyone in the process); varieties of preference utilitarianism might want to rewire everyone's brains so that those brains experience maximum preference satisfaction (thus effectively killing everyone); etc.

You could argue that mere destruction would be easier than converting everything to orgasmium, but both seem hard enough to basically require a superintelligence. And if you can set the goals of a superintelligence, it's not clear that one of the goals would be much harder than the other.

You could argue that mere destruction would be easier than converting everything to orgasmium, but both seem hard enough to basically require a superintelligence.

We can kill everyone today or in the near future by diverting a large asteroid to crash into Earth, or by engineering a super-plague. Doing either would take significant resources but isn't anywhere near requiring a superintelligence. In comparison, converting everything to orgasmium seems much harder and is far beyond our current technological capabilities.

On super-plagues, I've understood the consensus position to be that even though you could create one with a really big death toll, actual human extinction would be very unlikely. E.g.:

Asked by Representative Christopher Shays (R-Conn.) whether a pathogen could be engineered that would be virulent enough to “wipe out all of humanity,” Fauci and other top officials at the hearing said such an agent was technically feasible but in practice unlikely.

Centers for Disease Control and Prevention Director Julie Gerberding said a deadly agent could be engineered with relative ease that could spread throughout the world if left unchecked, but that the outbreak would be unlikely to defeat countries’ detection and response systems.

“The technical obstacles are really trivial,” Gerberding said. “What’s difficult is the distribution of agents in ways that would bypass our capacity to recognize and intervene effectively.”

Fauci said creating an agent whose transmissibility could be sustained on such a scale, even as authorities worked to counter it, would be a daunting task.

“Would you end up with a microbe that functionally will … essentially wipe out everyone from the face of the Earth? … It would be very, very difficult to do that,” he said.

Asteroid strikes do sound more plausible, though there too I would expect a lot of people to be aware of the possibility and thus devote considerable effort to ensuring the safety of any space operations capable of actually diverting asteroids.

I'm not an expert on bioweapons, but I note that the paper you cite is dated 2005, before the advent of synthetic biology. The recent report from FHI seems to consider bioweapons to be a realistic existential risk.

Thanks, I hadn't seen that. Interesting (and scary).

The problem with this consensus position is that it fails to imagine that several deadly pandemics could run simultaneously, and that existential terrorists could deliberately organize this by manipulating several viruses. A rather simple AI could help engineer deadly plagues in droves; it would not need to be superintelligent to do so.

Personally, I see it as a big failure of the whole x-risk community that such risks are ignored and not even discussed.

Is there anything we can realistically do about it? Without crippling the whole of biotech?

Perhaps have every bioprinter, or other such tool, be constantly connected to a narrow AI, to make sure it doesn't accidentally, or intentionally, print ANY viruses, bacteria, or prions.

Jump ASAP to friendly AI or to another global control system, maybe using many interconnected narrow AIs as an AI police. Basically, if we don't create a global control system, we are doomed. But it could be decentralised to avoid the worst aspects of totalitarianism.

Regarding FAI research, it is a catch-22. If we slow down AI research effectively, biorisks will start to dominate. If we accelerate AI, we will more likely create it before the implementation of AI safety theory is ready.

I could send anyone interested my article about these biorisks, which I don't want to publish openly on the internet, as I'm hoping for a journal publication.

Interesting topics :) About your second paper:

You say you provide “a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges.” So, it sounds like the list excludes those whose morality implies that it would be right to kill everyone, or who may want to kill everyone, but who have simply kept quiet about it.

In footnote 2, you write "Note that, taken to its extreme, classical utilitarianism could also, arguably, engender an existential risk," and you refer to an argument by David Pearce. That's an important note. It also goes beyond individuals who themselves have "expressed omnicidal urges," since the argument is from Pearce, not from a classical utilitarian reporting her own urges. By the way, I think it is fine to say that "classical utilitarianism could also, arguably, engender an existential risk." But the risk here is about killing everyone, which need not be an existential risk (in the sense that Earth-originating intelligent life goes away, or fails to realize its potential): if there is a risk, the main one is presumably that classical utilitarianism implies it would be right to kill all of us in order to spread well-being beyond Earth, and that would not be an existential catastrophe in the sense just mentioned.

A fun and important exercise would be to start from your, the author’s, morality, and analyze if it implies that it would be right to kill everyone. Without knowing much at all about your morality, I guess that one could make a case for it, and that it would be a complex investigation to see if the arguments you could give as replies are really successful.

First article TL;DR: space colonisation will produce star wars and result in enormous suffering, that is, s-risk.

My two cents: Maxipok is mostly not about space colonisation, but about preventing total extinction. I also hold the (not widely shared) opinion that death is the worst form of suffering, as it is really bad. Pain and suffering are part of life and are okay if they are diluted by much larger pleasure. Surely space wars are possible (without a singleton), but life is intrinsically good, and most of the time there will be no wars, but some form of very sophisticated space pleasures, which will dilute the suffering from wars.

But I also don't share the Maxipok interpretation that we should start space colonisation as soon as possible to bring the maximum number of possible people into existence. Firstly, all possible people already exist somewhere else in the infinite multiverse. Also, it is better to be slow but sure.

"My 5 dollars: maxipoc is mostly not about space colonisation, but prevention of total extinction." But the goal of avoiding an x-catastrophe is to reach technological maturity, and reaching technological maturity would require space colonization (to satisfy the requirement that we have "total control" over nature). Right?

I am not sure that the goal of x-risk prevention is reaching technological maturity; maybe it is about preserving Homo sapiens, our culture, and our civilization indefinitely, which may be unreachable without some level of technology. But technology is not the terminal goal.

Third article TL;DR: It is clear that a superintelligent singleton is the most obvious solution for preventing all non-AI risks.

However, the main problem is that there are risks in creating such a singleton (risks of unfriendly AI), risks in implementing it (the AI would probably have to fight a war for global domination against other AIs, nuclear nation-states, etc.), and risks of singleton failure (if it halts, it halts forever).

As a result, we only move risks from one side of the equation to the other, and even replace known risks with unknown risks.

I think that other possible solutions exist, where many agents unite in some kind of police force to monitor each other, as David Brin suggested in his transparent society. Such a police force might consist not of citizens, but of AIs.

Yes, good points. As for "As a result, we only move risks from one side of the equation to the other, and even replace known risks with unknown risks," another way to put the paper's thesis is this: insofar as the threat of unilateralism becomes widespread, thus requiring a centralized surveillance apparatus, solving the control problem is that much more important! I.e., it's an argument for why MIRI's work matters.

I think that unilateralist biological risks will soon be here. I modeled their development in my unpublished article about a multipandemic, and compared their number with the historical number of computer viruses: roughly 1 new virus per year in the early 1980s, about 1,000 per year by 1990, millions per year in the 2000s, and millions of new malware samples per day in the 2010s, according to a report on CNN. But the peak of damage was in the 1990s, when viruses were more destructive, aimed at data deletion, and few antivirus tools were available. Thus it takes around 10 years to move from the technical possibility of creating a virus at home to a global multipandemic.
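
For what it's worth, here is a minimal back-of-the-envelope sketch of that extrapolation, assuming the computer-virus analogy holds and that growth is roughly exponential. The exact years, counts, and the "dangerous" threshold below are illustrative assumptions based on the figures above, not measurements.

```python
# Back-of-the-envelope extrapolation: fit an exponential to the rough
# computer-virus counts cited above, then ask how long a similar curve would
# take to go from ~1 home-made agent per year to ~1,000 per year (an arbitrary
# proxy for "many simultaneous pandemics"). All numbers are illustrative.
import math

# (year, approximate number of new computer viruses per year)
data = [(1982, 1), (1990, 1_000), (2005, 1_000_000)]

# Simple exponential N(t) = N0 * exp(r * (t - t0)) through first and last points.
(t0, n0), (t1, n1) = data[0], data[-1]
r = math.log(n1 / n0) / (t1 - t0)          # implied growth rate per year
doubling_time = math.log(2) / r

print(f"implied growth rate: {r:.2f}/year (doubling every {doubling_time:.1f} years)")

years_to_threshold = math.log(1_000 / 1) / r
print(f"years from first home-made agent to ~1,000/year: {years_to_threshold:.0f}")
# With these assumed data points, the answer comes out at roughly a decade,
# consistent with the "around 10 years" estimate above.
```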

It would seem that increasing certainty about "Does the rocket launch successfully?" would be more important than "How early does the rocket launch?". Most acts that shoot for an early launch would seem to increase the risk that something goes wrong in the launch, or that the launched colonization would be insufficient or suicidal. Otherwise it just seems like the logic of "better to die soon to get to heaven faster, to have 3 days + infinity instead of just infinity in it." I think that I ought to turn down any offers of shady moral actions for however many virgins in heaven (and this should not be sensitive, at least greatly, to the number of virgins). So if it is used for "let's get seriously rockety," I don't think the analysis adds anything beyond "rockets are cool."

Because of the expansion of space, I think that if you get far enough away from Earth, you will never be able to return to Earth even if you travel at the speed of light. If we become a super-advanced civilization, we could say that if you want to colonize another solar system, we will put you on a ship that won't stop until it is sufficiently far from Earth that neither you nor any of your children will be able to return. Given relativity, if this ship can move fast enough, it won't take too long in ship time to reach such a point. (I haven't read everything at the links, so please forgive me if you have already mentioned this idea.)
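
To give a rough sense of "it won't take too long in ship time," here is a minimal sketch using the standard relativistic rocket formula for constant 1g proper acceleration. It treats roughly 16 billion light-years (about the present cosmic event horizon) as the point of no return and ignores the details of cosmological expansion; both are simplifying assumptions.

```python
# Proper (ship) time to cover a distance d at constant proper acceleration a,
# from the relativistic rocket equation: tau = (c/a) * acosh(1 + a*d/c^2).
# The 16-billion-light-year distance and the neglect of cosmological expansion
# are simplifying assumptions for illustration.
import math

LY = 9.4607e15            # metres per light-year
c = 2.998e8               # speed of light, m/s
g = 9.81                  # 1g proper acceleration, m/s^2
SECONDS_PER_YEAR = 3.156e7

d = 16e9 * LY             # assumed distance beyond which return is impossible

tau = (c / g) * math.acosh(1 + g * d / c**2)   # proper time in seconds
print(f"ship time at constant 1g: {tau / SECONDS_PER_YEAR:.1f} years")
# Comes out to roughly two decades of ship time, even though the distance
# is billions of light-years.
```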

If there was a decentralized singularity and offence proved stronger than defense I would consider moving to a light cone that couldn't ever intersect with the light cone of anyone I didn't trust.

1) the math may work out for this, but you're giving up a lot of potential-existence-time to do so (halfway or more to the heat-death of the universe).

2) we haven't gotten off this planet, let alone to another star, so it seems a bit premature to plan to get out of many-eon light cones.

3) If there is an event that shows offence stronger than defense (and you're a defender), it's too late to get away.

4) Wherever you go, you're bringing the seeds of such an event with you - there's nothing that will make you or your colony immune from whatever went wrong for the rest of the known intelligent life in the universe.

(1) Agreed, although I would get vastly more resources to personally consume! Free energy is probably the binding limitation on computational time which probably is the post-singularity binding limit on meaningful lifespan.

(2) An intelligence explosion might collapse to minutes the time between when humans could walk on Mars and when my idea becomes practical to implement.

(3) Today offense is stronger than defense, yet I put a high probability on my personally being able to survive another year.

(4) Perhaps. But what might go wrong is a struggle for limited resources among people with sharply conflicting values. If, today, a small group of people carefully chosen by some leader such as Scott Alexander could move to an alternate earth in another Hubble volume, and he picked me to be in the group, I would greatly increase the estimate of the civilization I'm part of surviving a million years.

While biotech risks are existential at the current time, they lessen as we get more technology. If we can have hermetically sealable living quarters and bioscanners that sequence and look for novel viruses and bacteria, we should be able to detect and lock down infected areas, without requiring brain scanners and red goo.

I think we can make similar interventions for most other classes of existential risk. The only one you really need invasive surveillance for is AI. How dangerous tool AI is depends on what intelligence actually is, which is an open question. So I don't think red goo and brain scanners will become a necessity, conditional on my view of intelligence being correct.

I think they will grow before they diminish, because the tech is getting cheaper.

The tech is getting cheaper, but I think there are a lot more resources going into developing biotech to fight viruses and bacteria than into developing genetically engineered bioweapons.

Fighting many viruses is known to be difficult. We still have no vaccine for HIV.

Yeah, that's very true. But in the future, I think that we're going to get to a point where we figure out how to use the new tools of biotechnology to deal with viruses in a more direct way.

For example, there was some interesting research a few months ago about using CRISPR to remove HIV from live mice, by carefully snipping out the HIV DNA from infected cells directly.

http://sci-hub.io/10.1016/j.ymthe.2017.03.012

I'm not sure if that specific research will turn out to be significant or not, but in the long run, I think that biotech research is going to give us many new tools to deal with both viruses and bacteria in general, and that those will also be effective against bio-weapons.

But what about using CRISPR for new types of bioweapons? It is all part of the sword-and-shield arms race.

Sure, that's true. It's hard to say for sure, but like I said, if overall research on how to treat viruses and diseases gets more resources than bioweapons research, it should be able to pull ahead, I would think. I think we are likely to eventually get to a point where new diseases just aren't that much of an issue, because they get picked up and dealt with quickly, or because we have specially engineered defenses against them built into our bodies, etc., and then it wouldn't matter if they're natural mutations or genetically engineered diseases.

I think there's a bigger threat of someone recreating smallpox or Spanish influenza or something in their basement before we get to that point, and that could be catastrophic if we don't yet have the tools to deal with it, but that's not actually an existential threat, although it could kill millions. Creating a truly novel disease that would be both contagious and fatal enough to actually be an existential threat, it seems to me, would be a much more difficult challenge; not that it's impossible, but I don't see someone doing it with a "CRISPR at home" kit in their basement anytime soon.

I think that the real existential threat is something which could be described by one word: multipandemic.

That is, many simultaneous deadly pandemics, possibly organised artificially, or arising from a rapid growth in the number of bioterrorists and the availability of synthetic biology. I wrote an article about it, but it needs major revision.

Hmm. I could see that being a serious threat, at least a potentially civilization-ending one.

Again though, would you agree that the best way to reduce the risk of this threat is biotech research itself?

I would say that the best protection is to jump to the next technological level as quickly as possible, such as the creation of nanotech and benevolent AI. But these have their own risks.

This may also apply to biotech research, if protective measures grow quickly enough.

Second article TL;DR: there are a lot of humans who want to kill all humans.

My take: the desire to end the world is an intrinsic human value. Half of Hollywood movies are about the apocalypse. It probably has some cultural or evolutionary-psychology explanation: humans want revolution in their tribe, or to move to a new place from time to time, and culturally present this as the end of the "old world". I hope future AI will not extrapolate this desire.

Ok, reading your first essay, my first thought is this:

Let's say that you are correct and the future will see a wide variety of human and post-human species, cultures, and civilizations, which look at the universe in very different ways, have very different mindsets, and use technology and science in entirely different ways that may totally baffle outside observers. To quote your essay:

The point is that different species may be in the same situation with respect to each others’ ability to manipulate the physical world. A species X could observe something happening in the universe but have no way, in principle, of understanding the causal mechanisms behind the explanandum-phenomenon. By the same token, species X could wiggle some feature of the universe in a way that species Y finds utterly perplexing. The result could be a very odd and potentially catastrophic sort of “mutually asymmetrical warfare,” where the asymmetry here refers to fundamental differences in how various species understand the universe and, therefore, are able to weaponize it. Unlike a technologically “advanced” civilization on Earth fighting a more technologically “primitive” society, such space conflicts would be more like Homo sapiens engaged in an all-out war with bonobos—except that this asymmetry would be differently mirrored back toward us.

If that is true, then it seems to me that the civilizations with the biggest advantages would be very diverse civilizations, wouldn't it? Say that at some point in the future a certain civilization (call it the "inner solar system civilization") has a dozen totally different varieties of humans, transhumans, post-humans, maybe uplifted animals, and maybe some kinds of AIs, living in wildly different environments, with very different goals and ideas and ways of looking at the universe, and these different groups develop in contact with each other in such a way that they remain generally on good terms and share ideas and information (even when they don't really understand each other). That diverse civilization would seem to have a huge advantage in any kind of conflict with a monopolar civilization where everyone shares the same worldview. The diverse civilization would have a dozen different types of technologies, worldviews, and ways of "wiggling some feature of the universe," while the monopolar civilization would have only one; the diverse civilization would probably also advance more quickly overall.

So, if that is true, then I would think that in that kind of future, the civilizations with the greatest advantage in any possible conflict would be the very diverse ones, with a large number of different sub-civilizations living in harmony with each other; and those civilizations would, I suspect, also tend to be the most peaceful and the least likely to start an interstellar war just because another civilization seemed different or weird to them. More likely, a diverse civilization that already shares ideas among a dozen different species of posthumans would be more interested in sharing ideas and knowledge with more distant civilizations than in engaging in war with them.

Maybe I'm just being overly optimistic, but it seems like that may be a way out of the problem you are talking about.