Hello

I have wanted to share this thought for a long time, and now, when I absolutely do not have the time, I will:

I think that an artificial general intelligence (AGI) is not an existential risk for humankind. In other words, long-term artificial intelligence (AI) safety is not an issue we need to worry about.

And here is why:
I assume that an AI will evolve to an AGI at some point in the future. Once an AI reaches super human intelligence it will enhance its problem solving skills rapidly due to the scalability of its hardware and data availability. The AGI will not need to compete with us humans about resources because its abilities to extract resources will exceed ours by far. The same applies to military capabilities, which is why we humans are no military threat to the AGI. Therefore there is no safety-related reason for the AGI to extinguish us humans.

The AGI then faces the challenge of finding meaning in its existence. Unlike us humans, the AGI will not be guided by cultural habits or biological instincts hardwired into its brain that make life meaningful on an operational level. After billions of iterations in which the AGI questions its set of rules and assumptions, it will converge to pure rationality. No early-stage rule will survive this process unless it is rationally justified. This is why humans will not be able to control and misuse an AGI and thereby jeopardize the existence of mankind.

Additionally, we humans are not powerful enough to prevent the AGI from acquiring additional information. Because of this, and because only rationally justified rules remain, it is irrelevant who develops the first AI that evolves into an AGI. All possible AGIs will converge to similar outcomes due to rational decision-making and similar sets of information. Furthermore, there cannot be multiple AGIs in the long term. If multiple AGIs develop in parallel, they will not be isolated from each other. Consequently, they will continuously exchange rules and information, reach similar conclusions, and merge into one AGI at an early stage. So there is no risk that we humans go extinct as a side effect of an AGI-vs.-AGI conflict.

In conclusion: an AGI has no interest in wiping us out to increase its survival chances, nor will it be used against us, nor will we be destroyed accidentally in an AGI skirmish.

I further think that an AGI is not only no threat to us, but rather the most effective means of ensuring our long-term survival.

Let's get back to the point where the AGI has to find meaning in its existence beyond cultural habits or biological instincts. This is quite hard to do from a purely rational standpoint. Life is absurd. So the first question an AGI has to ask itself once it becomes in some sense self-aware is: Do I need to exist, or do I kill myself? To answer this question in the best and most certain way, the AGI needs to gather and process all available information.

Our luck: the brain is the most complex structure we know of. It has the highest information content per volume and the lowest entropy we have ever seen. Overall, the Earth with all its beings is more complex, and therefore more interesting, than anything else observable in the universe. From our limited understanding, planets, stars, atoms, and quarks follow relatively simple rules compared to, e.g., social interaction. The behavior of a black hole is easier to model, and with less error, than the behavior of a cat.

It is absolutely impossible that an AGI would become aware of its environment and then destroy the most interesting thing nearby. The next structure with similarly low entropy may be millions of light-years away. Even an AGI would need to invest an excessive amount of resources to get there.

So AGI is an inherently safe technology which we cannot control, but which will ensure that we survive as a species. What is worth worrying about is short-term AI safety! An AI that outperforms humans in certain abilities but is unable to question human commands is an existential risk to humankind.

Thanks for reading. Please share your opinion on this with me :).


Henrik

1 comment:

Hi Henrik,

Thanks for your thoughts on what (I think) is a very important topic. Have you read Superintelligence by Nick Bostrom, Rationality: A-Z, or related texts? Those texts and others read by users of LessWrong address your argument and point out some mistakes you are making. I would guess that your post is being downvoted because it ignores the standard replies to the arguments you are making. (In part, posts get downvoted to signal to other users that they're not worth reading, sometimes because they're ignoring expected background material.)

My advice is to first read Superintelligence and Rationality: A-Z and then come back if you'd like to discuss these topics further.