This is a bit of a rough draft. I would appreciate any constructive comments, especially important arguments that I may have overlooked.

§ 1. Introduction

Summary. I argue, from the perspective of biology, that superhuman AI is a very low-hanging fruit. I believe this argument is very solid. I briefly consider reasons why superhuman (or vastly superhuman) AI might not arise. I then contrast AI with other futurist technologies like human brain emulation, radical life extension & space colonization. I argue that these technologies are in a different category & plausibly impossible to achieve in the way commonly envisioned. This also has some relevance for EA cause prioritization.

In my experience, certain arguments are ignored b/c they are too straightforward. People have perhaps heard similar arguments previously, so the argument isn't exciting. B/c people understand the argument, they feel empowered to disagree with it, which is not necessarily a bad thing. But they may believe that complicated new arguments, which they don't actually understand well, have debunked the straightforward argument, even tho the straightforward argument may be easily salvageable or not even debunked in the 1st place!

§ 2. Superhuman AI is a very low-hanging fruit

The reasons why superhuman AI is a very low-hanging fruit are pretty obvious.

1) The human brain is meager in terms of energy consumption & matter. 2,000 calories per day is approximately 100 watts, & the brain itself uses only about a fifth of that. Moreover the brain weighs only about 3 pounds. (A quick numerical check follows below.)

So we know for certain that human-level intelligence is possible with meager energy & matter requirements. It follows that superhuman intelligence should be achievable, especially if we're able to use orders of magnitude more energy & matter, which we are.
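As a quick sanity check on those numbers, here's a minimal sketch (my own arithmetic; the ~20% brain share is the standard physiology figure, not something argued for in this post):

```python
# Convert a daily food-energy budget into average power draw.
KCAL_TO_JOULES = 4184            # 1 food calorie (kcal) = 4184 J
SECONDS_PER_DAY = 24 * 60 * 60

def kcal_per_day_to_watts(kcal: float) -> float:
    """Average power in watts implied by a daily energy intake."""
    return kcal * KCAL_TO_JOULES / SECONDS_PER_DAY

body_watts = kcal_per_day_to_watts(2000)
brain_watts = 0.20 * body_watts  # brain uses roughly 20% of resting energy

print(f"whole body: ~{body_watts:.0f} W")   # ~97 W
print(f"brain:      ~{brain_watts:.0f} W")  # ~19 W
```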

2) Humans did not evolve to do calculus, computer programming & things like that.

Even Terence Tao did not evolve to do complicated math. Of course you can nitpick this to death by saying humans evolved to do many complex reasoning tasks. But we didn't actually evolve to do tasks requiring such high levels of mathematical reasoning ability, which is why there's such large variability in mathematical intelligence. Even with 3 pound brains, we could all have been as talented as (or even far more talented than) Terence Tao had the selective pressure for such things been strong.

3) Evolution is not efficient.

Evolution is not like gradient descent. It's a bit more like Nelder-Mead. Much of evolution is just purging bad mutations & selection on standing diversity in response to environmental change. A fitness-enhancing gain-of-function mutation is relatively very rare.
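To make the analogy concrete, here's a minimal sketch (my own toy example, not from the post) contrasting the two on a simple quadratic bowl: the gradient method exploits slope information directly, while Nelder-Mead only compares function values, roughly the way selection only compares the fitness of existing variants:

```python
import numpy as np
from scipy.optimize import minimize

# Toy fitness landscape: a quadratic bowl with its optimum at (3, -2).
def loss(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2

x0 = np.zeros(2)

# Gradient-based (BFGS): uses derivative information to head downhill.
grad_based = minimize(loss, x0, method="BFGS")

# Nelder-Mead: derivative-free; it only ranks candidate points by value,
# loosely analogous to selection ranking variants by fitness.
derivative_free = minimize(loss, x0, method="Nelder-Mead")

print("BFGS        function evals:", grad_based.nfev)
print("Nelder-Mead function evals:", derivative_free.nfev)  # typically several times more
```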

A) Evolution does not act at the level of the synapse.

The human genome is far, far too short. Instead the genome acts as a set of metaparameters that determine human learning in response to the environment. I think this point cuts both ways, which is why I'm labeling it A rather than 4. Detailed analysis of this point is far beyond the scope of this post. But I'm inclined to believe that such an approach is maybe not quite as inefficient as Nelder-Mead applied at the level of the synapse, but more limited in its ability to optimize. A toy illustration of this two-level setup is sketched below.
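Here's that two-level structure in miniature (entirely my own construction, with made-up sizes): an outer derivative-free search plays the role of evolution tuning a "genome" of metaparameters, while an inner gradient loop plays the role of lifetime learning at the level of synapses:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # the 'environment': inputs
y = X @ np.array([1.5, -2.0, 0.5])       # the task the learner must fit

def lifetime_learning(lr: float, steps: int = 100) -> float:
    """Inner loop: gradient descent on the weights ('synapses')."""
    w = np.zeros(3)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return float(np.mean((X @ w - y) ** 2))  # loss after a 'lifetime'

# Outer loop: derivative-free search over the 'genome' -- here a single
# metaparameter, the learning rate -- scored by post-learning performance.
result = minimize(lambda g: lifetime_learning(g[0]), x0=[0.01],
                  method="Nelder-Mead", bounds=[(1e-4, 0.5)])
print("evolved learning rate:", result.x[0])
```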

§ 3. Possible obstacles to superhuman AI

I see only a few reasons why superhuman AI might not happen.

1) An intelligence-obedience tradeoff. Obviously companies want AI to be obedient. Even a harmless AI which just thinks about incomprehensible AI stuff all day long is not obviously a good investment. It would be the corniest thing ever if humans' tendency to be free gave us an insurmountable advantage over AI. I doubt this is the case, but it wouldn't be surprising if there were some (not necessarily insurmountable) intelligence-obedience tradeoff.

2) Good ideas are not tried b/c of high costs. I feel like I have possibly good ideas about how to train AI, but I just don't have a spare 1 billion dollars.

3) Hardware improvements hit a wall.

4) Societal collapse.

Realistically I think at least 2 of these are needed to stop superhuman AI.

§ 4. Human brain emulation

In § 2 I argue that superhuman AI is quite an easy task. Up until quite recently I would sometimes encounter claims that human brain emulation is actually easier than superhuman AI. I think that line of thinking puts somewhat too much faith in evolution. The problem with human brain emulation is that the artificial neural network would need to model various peculiarities & quirks of neurons. A way of functioning that is easy & efficient for a neuron is not necessarily easy & efficient for an artificial neuron, & vice versa. Adding up a bunch of inputs & putting the result into a ReLU is obviously not what a neuron does, but how complex would that function need to be to capture all of a neuron's important quirks? Some people seem to think that the complexity of this function would match its superior utility relative to an artificial neuron [N1]. But this is not the case; the neuron is simply doing what is easy for a neuron to do; likewise for the artificial neuron. Actually the artificial neuron has 2 big advantages over the neuron -- it is easier to optimize & it is not spatially constrained.
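A minimal sketch of the two objects being compared (my own toy code; the MLP's layer sizes are arbitrary stand-ins): the simple half-precision ReLU neuron from [N1], next to a small MLP standing in for an emulation of one biological neuron's input-output quirks:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

# The artificial neuron: a weighted sum fed into ReLU, in half precision
# (as in note N1). This is the entire computation.
def artificial_neuron(x, w, b):
    x, w, b = (a.astype(np.float16) for a in (x, w, b))
    return relu(w @ x + b)

# A stand-in for emulating one biological neuron: per the 'neuron is more
# like a MLP' claim in N1, a small network is needed just to approximate
# a single cell's quirks (sizes here are purely illustrative).
def emulated_bio_neuron(x, layers):
    h = x
    for W, b in layers:              # several weight matrices per neuron
        h = relu(W @ h + b)
    return h[0]

k = 8                                # inputs from k upstream neurons
x = np.random.randn(k)
print(artificial_neuron(x, np.random.randn(k), np.zeros(1)))
layers = [(np.random.randn(16, k), np.zeros(16)),
          (np.random.randn(1, 16), np.zeros(1))]
print(emulated_bio_neuron(x, layers))
```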

If human brain emulation is unavailable, then a certain vision of mind uploading is impossible. But an AI copying aspects of a person's personality, like the Magi system from that TV show, is not something that I doubt [N2].

N1. I've sometimes heard the claim that a neuron is more like a MLP. I would go so far as to claim that an artificial neuron with input from k artificial neurons, using a simple activation function like ReLU & half precision, is going to be functionally superior to a neuron with input from k neurons b/c of greater optimization & lack of spatial constraints. But simulating the latter is going to be way more difficult.

N2. Also worth noting: an AI could remember details of your life better than you can, & in that sense be more you than you could possibly be.

§ 5. Radical life extension & space colonization

Life spans longer than humans' are definitely possible & have been reported for bowhead whales, Greenland sharks & a quahog named Ming. But the number of genetic changes necessary for humans to have such long life spans is probably high. And it's unclear whether non-genetic interventions will be highly effective given the centrality of DNA in biology.

The energy costs alone of sending enough stuff into space to bootstrap a civilization are intimidating. Perhaps advances like fusion or improved 3D printing will solve this problem.
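For a rough sense of scale, a back-of-the-envelope sketch (my own numbers; the payload mass is made up purely for illustration, & real rockets expend a large multiple of this idealized figure):

```python
# Idealized energy per kg to low Earth orbit: kinetic + potential only,
# ignoring gravity losses, drag & rocket inefficiency.
V_ORBIT = 7_800.0      # m/s, typical LEO orbital speed
G = 9.81               # m/s^2
ALTITUDE = 400_000.0   # m, rough LEO altitude

per_kg = 0.5 * V_ORBIT ** 2 + G * ALTITUDE   # J per kg
print(f"~{per_kg / 1e6:.0f} MJ per kg")      # ~34 MJ/kg

# Scaling to a (made-up) 1-million-tonne bootstrap payload:
payload_kg = 1e9
total_twh = per_kg * payload_kg / 3.6e15     # 1 TWh = 3.6e15 J
print(f"~{total_twh:.0f} TWh, ideal case")   # ~10 TWh, before inefficiencies
```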

§ 6. Conclusions

Won't superhuman AI make it all possible?

I'm not claiming that human brain emulation, radical life extension & space colonization are definitely impossible.

But in the case of superhuman AI, we're merely trying to best something that is definitely possible & that runs on meager physical inputs.

On the other hand, human brain emulation, radical life extension & space colonization may be possible, or they may be too physically constrained, i.e. constrained by the laws of physics.

What is the significance of this beyond just the technical points? I'm not proposing that people preemptively give up on these goals. Some elements of human brain emulation will not require the simulation to be accurate at the neuronal level. Radical life extension via genetics seems achievable in principle but maybe not desirable or worthwhile. My point is that a future with superhuman (or vastly superhuman) AI seems likely, but the time lag between vastly superhuman AI & those other technologies may be very substantial or infinite. Hence the importance of AI & humans living harmoniously & happily during that extended period is possibly paramount [N3], & the required cultural & political changes for such a coexistence are likely substantial.

N3. If things progress in a pleasant direction, this could be an opportunity for humans to have more free time with AI doing most (but not all) of the work.

Hzn
