Followup to: Is That Your True Rejection?
I expected from the beginning that the difficult part of two rationalists reconciling a persistent disagreement would be for them to expose the true sources of their beliefs.
One suspects that this will only work if each party takes responsibility for their own end; it's very hard to see inside someone else's head. Yesterday I exhausted myself mentally while out on my daily walk, asking myself the Question "What do you think you know, and why do you think you know it?" with respect to "How much of the AI problem compresses to large insights, and how much of it is unavoidable nitty-gritty?" Trying to either understand why my brain believed what it believed, or else force my brain to experience enough genuine doubt that I could reconsider the question and arrive at a real justification that way. It's hard to see how Robin Hanson could have done any of this work for me.
Presumably a symmetrical fact holds about my lack of access to the real reasons why Robin believes what he believes. To understand the true source of a disagreement, you have to know why both sides believe what they believe - one reason why disagreements are hard to resolve.
Nonetheless, here's my guess as to what this Disagreement is about:
If I had to pinpoint a single thing that strikes me as "disagree-able" about the way Robin frames his analyses, it's that there are a lot of opaque agents running around, little black boxes assumed to be similar to humans, but there are more of them and they're less expensive to build/teach/run. They aren't even any faster, let alone smarter. (I don't think that standard economics says that doubling the population halves the doubling time, so it matters whether you're making more minds or faster ones.)
This is Robin's model for uploads/ems, and his model for AIs doesn't seem to look any different. So that world looks like this one, except that the cost of "human capital" and labor is dropping according to (exogenous) Moore's Law, and economic growth ends up doubling every month instead of every sixteen years - but that's it. Not being an economist myself, I will say that this does look to me like a viewpoint with a distinctly economic zeitgeist.
In my world, you look inside the black box. (And, to be symmetrical, I don't spend much time thinking about more than one box at a time - if I have more hardware, it means I have to figure out how to scale a bigger brain.)
The human brain is a haphazard thing, thrown together by idiot evolution, as an incremental layer of icing on a chimpanzee cake that never evolved to be generally intelligent, adapted in a distant world devoid of elaborate scientific arguments or computer programs or professional specializations.
It's amazing we can get anywhere using the damn thing. But it's worth remembering that if there were any smaller modification of a chimpanzee that spontaneously gave rise to a technological civilization, we would be having this conversation at that lower level of intelligence instead.
Human neurons run at less than a millionth the speed of transistors, transmit spikes at less than a millionth the speed of light, and dissipate around a million times the heat per synaptic operation as the thermodynamic minimum for a one-bit operation at room temperature. Physically speaking, it ought to be possible to run a brain at a million times the speed without shrinking it, cooling it, or invoking reversible computing or quantum computing.
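For readers who want to check where those factors of a million come from, here is a minimal back-of-the-envelope sketch. The input figures - neuron timescales, axon conduction speed, synapse count, total brain power - are rough order-of-magnitude values assumed for illustration, not numbers argued for in this post, and the conclusion only needs to hold to within an order of magnitude.

```python
# Back-of-the-envelope check of the hardware comparison above.
# Every input figure below is a rough assumed value, not a number from the post.

import math

neuron_time = 1e-3           # ~1 ms characteristic neuron firing/recovery time (assumed)
transistor_time = 1e-9       # ~1 ns switching time for an unhurried transistor (assumed)

spike_speed = 1e2            # ~100 m/s for a fast myelinated axon (assumed)
light_speed = 3e8            # m/s

brain_power = 20.0           # ~20 W total brain power (assumed)
synaptic_ops_per_sec = 1e15  # ~10^14 synapses firing at ~10 Hz on average (assumed)
energy_per_op = brain_power / synaptic_ops_per_sec

k_B, T = 1.38e-23, 300.0
landauer_limit = k_B * T * math.log(2)  # minimum energy to erase one bit at room temperature

print(f"speed ratio (transistor vs. neuron): {neuron_time / transistor_time:.0e}")
print(f"signal ratio (light vs. spike):      {light_speed / spike_speed:.0e}")
print(f"heat ratio (synapse vs. Landauer):   {energy_per_op / landauer_limit:.0e}")
```

The exact ratios shift with the assumed inputs, but all three come out within an order of magnitude of a million.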
There's no reason to think that the brain's software is any closer to the limits of the possible than its hardware, and indeed, if you've been following along on Overcoming Bias this whole time, you should be well aware of the manifold known ways in which our high-level thought processes fumble even the simplest problems.
Most of these are not deep, inherent flaws of intelligence, or limits of what you can do with a mere hundred trillion computing elements. They are the results of a really stupid process that designed the retina backward, slapping together a brain we now use in contexts way outside its ancestral environment.
Ten thousand researchers working for one year cannot do the same work as a hundred researchers working for a hundred years; a chimpanzee's brain is one-fourth the volume of a human's, but four chimps do not equal one human; a chimpanzee shares 95% of our DNA, but a chimpanzee cannot understand 95% of what a human can. The scaling law for population is not the scaling law for time is not the scaling law for brain size is not the scaling law for mind design.
There's a parable I sometimes use, about how the first replicator was not quite the end of the era of stable accidents, because the pattern of the first replicator was, of necessity, something that could happen by accident. It is only the second replicating pattern that you would never have seen without many copies of the first replicator around to give birth to it; only the second replicator that was part of the world of evolution, something you wouldn't see in a world of accidents.
That first replicator must have looked like one of the most bizarre things in the whole history of time - this replicator created purely by chance. But the history of time could never have been set in motion, otherwise.
And what a bizarre thing a human must be, a mind born entirely of evolution, a mind that was not created by another mind.
We haven't yet begun to see the shape of the era of intelligence.
Most of the universe is far more extreme than this gentle place, Earth's cradle. Cold vacuum or the interior of stars; either is far more common than the temperate weather of Earth's surface, where life first arose, in the balance between the extremes. And most possible intelligences are not balanced, like these first humans, in that strange small region of temperate weather between an amoeba and a Jupiter Brain.
This is the challenge of my own profession - to break yourself loose of the tiny human dot in mind design space, in which we have lived our whole lives, our imaginations lulled to sleep by too-narrow experiences.
For example, Robin says:
Eliezer guesses that within a few weeks a single AI could grow via largely internal means from weak and unnoticed to so strong it takes over the world [his italics]
I suppose that to a human a "week" sounds like a temporal constant describing a "short period of time", but it's actually 10^49 Planck intervals, or enough time for a population of 2 GHz processor cores to each perform 10^15 serial operations, one after the other.
Perhaps the thesis would sound less shocking if Robin had said, "Eliezer guesses that 10^15 sequential operations might be enough to..."
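The arithmetic behind both figures is easy to verify - the Planck time (~5.4 × 10^-44 seconds) is a standard physical constant, and the rest is multiplication:

```python
# Checking the two figures above; the Planck time is a standard constant,
# the rest is multiplication.

week_seconds = 7 * 24 * 3600     # 604,800 seconds in a week
planck_time = 5.39e-44           # Planck time in seconds
clock_rate = 2e9                 # 2 GHz = 2e9 cycles per second

print(f"one week = {week_seconds / planck_time:.1e} Planck intervals")       # ~1.1e49
print(f"one week = {week_seconds * clock_rate:.1e} serial 2 GHz operations")  # ~1.2e15
```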
One should also bear in mind that the human brain was not designed for the primary purpose of scientific insight, and does not spend its power efficiently on having many insights in minimum time - but this issue is harder to understand than CPU clock speeds.
Robin says he doesn't like "unvetted abstractions". Okay. That's a strong point. I get it. Unvetted abstractions go kerplooie, yes they do indeed. But something's wrong with using that as a justification for models where there are lots of little black boxes just like humans scurrying around, and we never pry open the black box and scale the brain bigger or redesign its software or even just speed up the damn thing. The interesting part of the problem is harder to analyze, yes - more distant from the safety rails of overwhelming evidence - but this is no excuse for refusing to take it into account.
And in truth I do suspect that a strict policy against "unvetted abstractions" is not the real issue here. I constructed a simple model of an upload civilization running on the computers their economy creates: If a non-upload civilization has an exponential Moore's Law, y = e^t, then, naively, an upload civilization ought to have dy/dt = e^y -> y = -ln(C - t). Not necessarily up to infinity, but for as long as Moore's Law would otherwise stay exponential in a biological civilization. I walked through the implications of this model, showing that in many senses it behaves "just like we would expect" for describing a civilization running on its own computers.
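To spell out the integration step: separating variables in dy/dt = e^y gives e^{-y} dy = dt, which integrates to -e^{-y} = t - C, i.e. y = -ln(C - t). Below is a minimal numerical sketch of that toy model; the initial condition y(0) = 0 (hence C = 1) is my choice for illustration, not part of the original argument. The point it illustrates is that, unlike y = e^t, the solution runs away to infinity in finite time - which is why the caveat "not necessarily up to infinity" matters.

```python
# Toy model from the paragraph above: integrate dy/dt = e^y numerically and
# compare against the closed form y = -ln(C - t).  The initial condition
# y(0) = 0 (so C = 1) is chosen purely for illustration.

import math

def closed_form(t, C=1.0):
    return -math.log(C - t)

def euler(t_end, dt=1e-6):
    # Plain forward-Euler integration of dy/dt = e^y from y(0) = 0.
    y, t = 0.0, 0.0
    while t < t_end:
        y += math.exp(y) * dt
        t += dt
    return y

for t_end in (0.5, 0.9, 0.99):
    print(f"t = {t_end:4}:  numeric y = {euler(t_end):.3f},  "
          f"closed form y = {closed_form(t_end):.3f}")

# Unlike y = e^t, this solution diverges as t -> C: growth on this model is
# hyperbolic rather than exponential, "blowing up" in finite time.
```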
Compare this to Robin Hanson's "Economic Growth Given Machine Intelligence", which Robin describes as using "one of the simplest endogenous growth models to explore how Moore's Law changes with computer-based workers. It is an early but crude attempt, but it is the sort of approach I think promising." Take a quick look at that paper.
Now, consider the abstractions used in my Moore's Researchers scenario, versus the abstractions used in Hanson's paper above, and ask yourself only the question of which looks more "vetted by experience" - given that both are models of a sort that haven't been used before, in domains not actually observed, and that both give results quite different from the world we see and that would probably cause the vast majority of actual economists to say "Naaaah."
Moore's Researchers versus Economic Growth Given Machine Intelligence - if you didn't think about the conclusions in advance of the reasoning; and if you also neglected that one of these has been written up in a way that is more impressive to economics journals; and you just asked the question, "To what extent is the math used here constrained by our prior experience?" - then I would think that the race would at best be even. Or possibly favoring "Moore's Researchers" as being simpler and more intuitive, and involving less novel math as measured in additional quantities and laws introduced.
I ask in all humility whether Robin's true rejection is a strictly and evenhandedly applied rule that rejects unvetted abstractions. Or whether, in fact, Robin finds my conclusions, and the sort of premises I use, to be objectionable for other reasons - which, so far as we know at this point, may well be valid objections - and so it appears to him that my abstractions bear a larger burden of proof than the sort of mathematical steps he takes in "Economic Growth Given Machine Intelligence". But rather than offering the reasons why the burden of proof appears larger to him, he says instead that it is "not vetted enough".
One should understand that "Your abstractions are unvetted!" makes it difficult for me to engage properly. The core of my argument has to do with what happens when you pry open the black boxes that are your economic agents, and start fiddling with their brain designs, and leave the tiny human dot in mind design space. If all such possibilities are rejected on the basis of their being "unvetted" by experience, it doesn't leave me with much to talk about.
Why not just accept the rejection? Because I expect that to give the wrong answer - I expect it to ignore the dominating factor in the Future, even if the dominating factor is harder to analyze.
It shouldn't be surprising if a persistent disagreement ends up resting on the point where your attempt to take the other person's view into account runs up against some question of simple fact on which, it seems to you, you know the other view can't possibly be right.
For me, that point is reached when trying to visualize a model of interacting black boxes that behave like humans except they're cheaper to make. The world, which shattered once with the first replicator, and shattered for the second time with the emergence of human intelligence, somehow does not shatter a third time. Even in the face of blowups of brain size far greater than the size transition from chimpanzee brain to human brain; and changes in design far larger than the design transition from chimpanzee brains to human brains; and simple serial thinking speeds that are, maybe even right from the beginning, thousands or millions of times faster.
That's the point where I, having spent my career trying to look inside the black box, trying to wrap my tiny brain around the rest of mind design space that isn't like our small region of temperate weather, just can't make myself believe that the Robin-world is really truly actually the way the future will be.
There are other things that seem like probable nodes of disagreement:
Robin Hanson's description of Friendly AI development as "total war" that is harmful to even discuss, or his description of a realized Friendly AI as "a God to rule us all". Robin must be visualizing an in-practice outcome very different from the one I visualize, and this seems like a likely source of emotional fuel for the disagreement as well.
Conversely, Robin Hanson seems to approve of a scenario where lots of AIs, of arbitrary motives, constitute the vast part of the economic productivity of the Solar System, because he thinks that humans will be protected under the legacy legal system that grew continuously out of the modern world, and that the AIs will be unable to coordinate to transgress the legacy legal system for fear of losing their own legal protections. I tend to visualize a somewhat different outcome, to put it mildly; and would symmetrically be suspected of emotional unwillingness to accept that outcome as inexorable.
Robin doesn't dismiss Cyc out of hand and even "hearts" it, which implies that we have an extremely different picture of how intelligence works.
Like Robin, I'm also feeling burned on this conversation, and I doubt we'll finish it; but I should write at least two more posts to try to describe what I've learned, and some of the rules that I think I've been following.