Reply to: Abstraction, Not Analogy
Eliezer, have I completely failed to communicate here? You have previously said nothing is similar enough to this new event for analogy to be useful, so all we have is "causal modeling" (though you haven't explained what you mean by this in this context). This post is a reply saying, no, there are more ways to reason using abstractions; analogy and causal modeling are two particular ways to reason via abstractions, but there are many other ways.
Well... it shouldn't be surprising if you've communicated less than you thought. Two people, both of whom know that disagreement is not allowed, have a persistent disagreement. It doesn't excuse anything, but - wouldn't it be more surprising if their disagreement rested on intuitions that were easy to convey in words, and points readily dragged into the light?
I didn't think from the beginning that I was succeeding in communicating. Analogizing Doug Engelbart's mouse to a self-improving AI is for me such a flabbergasting notion - indicating such completely different ways of thinking about the problem - that I am trying to step back and find the differing sources of our differing intuitions.
(Is that such an odd thing to do, if we're really following down the path of not agreeing to disagree?)
"Abstraction", for me, is a word that means a partitioning of possibility - a boundary around possible things, events, patterns. Abstractions are in no sense neutral; they act as signposts saying "lump these things together for predictive purposes". To use the word "singularity" as ranging over human brains, farming, industry, and self-improving AI is very nearly to finish your thesis right there.
I wouldn't be surprised to find that, in a real AI, 80% of the actual computing crunch goes into drawing the right boundaries to make the actual reasoning possible. The question "Where do abstractions come from?" cannot be taken for granted.
Boundaries are drawn by appealing to other boundaries. To draw the boundary "human" around things that wear clothes and speak language and have a certain shape, you must have previously noticed the boundaries around clothing and language. And your visual cortex already has a (damned sophisticated) system for categorizing visual scenes into shapes, and the shapes into categories.
It's very much worth distinguishing between boundaries drawn by noticing a set of similarities, and boundaries drawn by reasoning about causal interactions.
There's a big difference between saying "I predict that Socrates, like other humans I've observed, will fall into the class of 'things that die when drinking hemlock'" and saying "I predict that Socrates, whose biochemistry I've observed to have this-and-such characteristics, will have his neuromuscular junction disrupted by the coniine in the hemlock - even though I've never seen that happen, I've seen lots of organic molecules and I know how they behave."
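The contrast between the two kinds of boundary-drawing can be made concrete. The following is a purely illustrative sketch, not anything from the original discussion: all data, feature names, and the toy "mechanism" are hypothetical, and the lethal-dose threshold is made up.

```python
# Toy contrast: similarity-based vs causal-model-based prediction.
# All names, features, and numbers here are hypothetical illustrations.

# Similarity route: Socrates is lumped into the class of previously
# observed humans, and inherits the observed outcome of that class.
observed_humans = [
    {"wears_clothes": True, "speaks": True, "dies_from_hemlock": True},
    {"wears_clothes": True, "speaks": True, "dies_from_hemlock": True},
]

def predict_by_similarity(entity, observed):
    # Boundary drawn by noticing shared surface features.
    matches = [o for o in observed
               if o["wears_clothes"] == entity["wears_clothes"]
               and o["speaks"] == entity["speaks"]]
    return bool(matches) and all(o["dies_from_hemlock"] for o in matches)

# Causal route: reason from mechanism (coniine disrupting the
# neuromuscular junction) even with zero prior hemlock observations.
def predict_by_causal_model(has_nicotinic_receptors, dose_mg,
                            lethal_dose_mg=150):
    # Made-up threshold: enough blockade causes respiratory paralysis.
    return has_nicotinic_receptors and dose_mg >= lethal_dose_mg

socrates = {"wears_clothes": True, "speaks": True}
print(predict_by_similarity(socrates, observed_humans))  # True
print(predict_by_causal_model(True, dose_mg=200))        # True
print(predict_by_causal_model(True, dose_mg=10))         # False
```

The similarity route can only extend an outcome it has already seen inside the class boundary; the causal route can predict an outcome never before observed, because the boundary it uses ("molecules that disrupt this mechanism") was drawn by reasoning about interactions rather than by matching past cases.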
But above all - ask where the abstraction comes from!
To see that a hammer is not good to hold high in a lightning storm, we draw on the pre-existing generalization that you're not supposed to hold electrically conductive things at high altitudes - a predrawn boundary, found by us in books; probably originally learned from experience and then further explained by theory. We just test the hammer to see if it fits in a pre-existing boundary - that is, a boundary we drew before we ever thought about the hammer.
To evaluate the cost of carrying a hammer in a tool kit, you probably visualized the process of putting the hammer in the kit, and the process of carrying it. Its mass determines the strain on your arm muscles. Its volume and shape - not just "volume", as you can see as soon as that is pointed out - determine the difficulty of fitting it into the kit. You said "volume and mass" but that was an approximation, and as soon as I say "volume and mass and shape" you say, "Oh, of course that's what I meant" - based on a causal visualization of trying to fit some weirdly shaped object into a toolkit, or e.g. a thin ten-foot pin of low volume and high annoyance. So you're redrawing the boundary based on a causal visualization which shows that other characteristics can be relevant to the consequence you care about.
None of your examples involves drawing new conclusions about the hammer by analogizing it to other things, rather than directly assessing its characteristics in their own right; so the hammer is not a good precedent for making predictions about self-improving AI by putting it into a group of similar things that includes farming or industry.
But drawing that particular boundary would already rest on causal reasoning that tells you which abstraction to use. Very much an Inside View, and a Weak Inside View, even if you try to go with an Outside View after that.
Using an "abstraction" that covers such massively different things will often be met by a differing intuition that makes a different abstraction, based on a different causal visualization behind the scenes. That's what you want to drag into the light - not just say, "Well, I expect this Singularity to resemble past Singularities."
I am of course open to different ways to conceive of "the previous major singularities". I have previously tried to conceive of them in terms of sudden growth speedups.
Is that the root source for your abstraction - "things that do sudden growth speedups"? I mean... is that really what you want to go with here?