or: Why our universe has already had its one and only foom

In the late 1980s, I added half a megabyte of RAM to my Amiga 500. A few months ago, I added 2048 megabytes of RAM to my Dell PC. The latter upgrade was four thousand times larger, yet subjectively they felt about the same, and in practice they conferred about the same benefit. Why? Because each was a factor of two increase, and it is a general rule that each doubling tends to bring about the same increase in capability.

That's a pretty important rule, so let's test it by looking at some more examples.

How does the performance of a chess program vary with the amount of computing power you can apply to the task? The answer is that each doubling of computing power adds roughly the same number of ELO rating points. The curve must flatten off eventually (after all, the computation required to fully solve chess is finite, albeit large), yet it remains surprisingly constant over a surprisingly wide range.
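To make the rule concrete, here is a toy sketch in Python; the base rating and the points-per-doubling figure are illustrative assumptions, not measured values:

    import math

    def chess_rating(compute, base_compute=1.0, base_rating=2000, points_per_doubling=60):
        """Toy model: rating grows by a fixed number of ELO points
        per doubling of computing power (illustrative numbers only)."""
        doublings = math.log2(compute / base_compute)
        return base_rating + points_per_doubling * doublings

    for c in (1, 2, 4, 1024, 2048):
        print(c, round(chess_rating(c)))
    # Going from 1024x to 2048x compute buys roughly the same ~60 points
    # as going from 1x to 2x -- the same increase per doubling.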

Is that idiosyncratic to chess? Let's look at Go, a more difficult game that must be solved by different methods, where the alpha-beta minimax algorithm that served chess so well breaks down. For a long time, the curve of capability also broke down: in the 90s and early 00s, the strongest Go programs were based on hand-coded knowledge, such that some of them literally did not know what to do with extra computing power; additional CPU speed resulted in zero improvement.

The breakthrough came in the second half of last decade, with Monte Carlo tree search algorithms. It wasn't just that they provided a performance improvement, it was that they were scalable. Computer Go is now on the same curve of capability as computer chess: whether measured on the ELO or the kyu/dan scale, each doubling of power gives a roughly constant rating improvement.
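For readers who haven't seen it, the core of the Monte Carlo approach is simple enough to sketch. Here is a deliberately minimal "flat" Monte Carlo player for a toy Nim game, not real computer Go (actual Go programs add a tree policy such as UCT on top of the playouts); the point is only that the quality of play improves smoothly as the playout budget doubles:

    import random

    def legal_moves(stones):
        return [n for n in (1, 2, 3) if n <= stones]

    def random_playout(stones):
        """Play random moves to the end (last stone wins). Return True if the
        player to move at the start of the playout takes the last stone."""
        first_player_moving = True
        while True:
            stones -= random.choice(legal_moves(stones))
            if stones == 0:
                return first_player_moving
            first_player_moving = not first_player_moving

    def monte_carlo_move(stones, playouts):
        """Choose the move whose random playouts look best for us.
        More playouts -> less noisy estimates -> stronger play."""
        best_move, best_score = None, -1.0
        for move in legal_moves(stones):
            remaining = stones - move
            if remaining == 0:
                return move  # taking the last stone wins outright
            # After our move the opponent moves first in the playout,
            # so an opponent win in the playout is a loss for us.
            opponent_wins = sum(random_playout(remaining) for _ in range(playouts))
            score = 1.0 - opponent_wins / playouts
            if score > best_score:
                best_move, best_score = move, score
        return best_move

    for budget in (10, 100, 1000):
        print(budget, monte_carlo_move(10, playouts=budget))
    # Larger budgets give sharper estimates; the chosen move tends toward the
    # game-theoretically best one (take 2, leaving a multiple of 4).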

Where do these doublings come from? Moore's Law is driven by improvements in a number of technologies, one of which is chip design. Each generation of computers is used, among other things, to design the next generation. Each generation needs twice the computing power of the last generation to design in a given amount of time.
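Spelled out as a toy recurrence (the growth figures are just the assumptions stated above, not data), that feedback loop gives steady exponential growth rather than an accelerating takeoff:

    def generation_design_times(work_growth, generations=8):
        """Each new chip generation has 2x the compute of the last; designing it
        takes (design work) / (current compute) units of wall-clock time.
        The assumption above is work_growth = 2: the design work doubles too,
        so every doubling takes the same time. A work_growth below 2 would mean
        shrinking doubling times -- the accelerating 'foom' pattern."""
        compute, work, times = 1.0, 1.0, []
        for _ in range(generations):
            times.append(work / compute)
            compute *= 2.0
            work *= work_growth
        return times

    print(generation_design_times(2.0))  # constant times: ordinary Moore's Law
    print(generation_design_times(1.5))  # shrinking times: what a foom would look like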

Looking away from computers to one of the other big success stories of 20th-century technology, space travel: from Goddard's first crude liquid-fuel rockets, to the V2, to Sputnik, to the half a million people who worked on Apollo, we again find that successive qualitative improvements in capability required order-of-magnitude after order-of-magnitude increases in the energy a rocket could deliver to its payload, with corresponding increases in the labor input.

What about the nuclear bomb? Surely that at least was discontinuous?

At the simplest physical level it was: nuclear explosives have six orders of magnitude more energy density than chemical explosives. But what about the effects? Those are what we care about, after all.

The death tolls from the bombings of Hiroshima and Nagasaki have been estimated respectively at 90,000-166,000 and 60,000-80,000. That from the firebombing of Hamburg in 1943 has been estimated at 42,600; that from the firebombing of Tokyo on the 10th of March 1945 alone has been estimated at over 100,000. So the actual effects were in the same league as other major bombing raids of World War II. To be sure, the destruction was now being carried out with single bombs, but what of it? The production of those bombs took the labor of 130,000 people, the industrial infrastructure of the world's most powerful nation, and $2 billion of investment in 1945 dollars; nor did even that investment at that time gain the US the ability to produce additional nuclear weapons in large numbers at short notice. The construction of the massive nuclear arsenals of the later Cold War took additional decades.

(To digress for a moment from the curve of capability itself, we may also note that destructive power, unlike constructive power, is purely relative. The death toll from the Mongol sack of Baghdad in 1258 was several hundred thousand; the total from the Mongol invasions was several tens of millions. The raw numbers, of course, do not fully capture the effect on a world whose population was much smaller than today's.)

Does the same pattern apply to software as hardware? Indeed it does. There's a significant difference between the capability of a program you can write in one day versus two days. On a larger scale, there's a significant difference between the capability of a program you can write in one year versus two years. But there is no significant difference between the capability of a program you can write in 365 days versus 366 days. Looking away from programming to the task of writing an essay or a short story, a textbook or a novel, the rule holds true: each significant increase in capability requires a doubling, not a mere linear addition. And if we look at pure science, continued progress over the last few centuries has been driven by exponentially greater inputs both in number of trained human minds applied and in the capabilities of the tools used.

If this is such a general law, should it not apply outside human endeavor? Indeed it does. From protozoa which pack a minimal learning mechanism into a single cell, to C. elegans with hundreds of neurons, to insects with thousands, to vertebrates with millions and then billions, each increase in capability takes an exponential increase in brain size, not the mere addition of a constant number of neurons.

But, some readers are probably thinking at this point, what about...

... what about the elephant at the dining table? The one exception that so spectacularly broke the law?

Over the last five or six million years, our lineage upgraded computing power (brain size) by about a factor of three, and upgraded firmware to an extent that is unknown but was surely more like a percentage than an order of magnitude. The result was not a corresponding improvement in capability. It was a jump from almost no to fully general symbolic intelligence, which took us from a small niche to mastery of the world. How? Why?

To answer that question, consider what an extraordinary thing is a chimpanzee. In raw computing power, it leaves our greatest supercomputers in the dust; in perception, motor control, spatial and social reasoning, it has performance our engineers can only dream about. Yet even chimpanzees trained in sign language cannot parse a sentence as well as the Infocom text adventures that ran on the Commodore 64. They are incapable of arithmetic that would be trivial with an abacus let alone an early pocket calculator.

The solution to the paradox is that a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided.

(Is there an explanation why this state of affairs came about in the first place? I think there is - in a nutshell, most conscious observers should expect to live in a universe where it happens exactly once - but that would require a digression into philosophy and anthropic reasoning, so it really belongs in another post; let me know if there's interest, and I'll have a go at writing that post.)

Can such a thing happen again? In particular, is it possible for AI to go foom the way humanity did?

If such lopsidedness were to repeat itself... well, even then, the answer is probably no. After all, an essential part of what we mean by foom in the first place - why it's so scarily attractive - is that it involves a small group accelerating in power away from the rest of the world. But the reason why that happened in human evolution is that genetic innovations mostly don't transfer across species. The dolphins couldn't say: hey, these apes are on to something, let's snarf the code for this symbolic intelligence thing, oh, and the hands too, we're going to need manipulators for the toolmaking application, or maybe octopus tentacles would work better in the marine environment. Human engineers carry out exactly this sort of technology transfer on a routine basis.

But it doesn't matter, because the lopsidedness is not occurring. Obviously computer technology hasn't lagged in symbol processing - quite the contrary. Nor has it really lagged in areas like vision and pattern matching - a lot of work has gone into those, and our best efforts aren't clearly worse than would be expected given the available development effort and computing power. And some of us are making progress on actually developing AGI - very slow, as would be expected if the theory outlined here is correct, but progress nonetheless.

The only way to create the conditions for any sort of foom would be to shun a key area completely for a long time, so that ultimately it could be rapidly plugged into a system that is very highly developed in other ways. Hitherto no such shunning has occurred: every even slightly promising path has had people working on it. I advocate continuing to make progress across the board as rapidly as possible, because every year that drips away may be an irreplaceable loss; but if you believe there is a potential threat from unfriendly AI, then such continued progress becomes the one reliable safeguard.

 


While it is true that exponential improvements in computer speed and memory often have the sort of limited impact you are describing, algorithmic improvements are frequently much more helpful. When RSA-129 was published as a factoring challenge, it was estimated that even assuming Moore's law it would take a very long time to factor (the classic estimate was that it would take on the order of 10^15 years assuming that one could do modular arithmetic operations at one per nanosecond. Assuming a steady progress of Moore's law one got an estimate in the range of hundreds of years at minimum.) However, it was factored only a few years later because new algorithms made factoring much much easier. In particular, the quadratic sieve and the number field sieve were both subexponential. The analogy here is roughly to the jump in Go programs that occurred when the new Monte Carlo methods were introduced.
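To put rough numbers on the size of that algorithmic jump, here is a sketch using the standard heuristic running-time formulas for the three approaches; constants and lower-order terms are dropped, so treat the outputs as orders of magnitude only:

    import math

    def factoring_ops_estimates(digits):
        """Heuristic operation counts for factoring a `digits`-digit number.
        These are the textbook asymptotic forms with constants dropped."""
        ln_n = digits * math.log(10)        # ln N for a d-digit number N
        lnln = math.log(ln_n)
        trial = math.exp(ln_n / 2)          # trial division: ~sqrt(N)
        qs = math.exp(math.sqrt(ln_n * lnln))                                   # quadratic sieve
        gnfs = math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * lnln ** (2 / 3))  # number field sieve
        return trial, qs, gnfs

    for d in (50, 100, 129, 200):
        t, q, g = factoring_ops_estimates(d)
        print(d, f"{t:.1e}", f"{q:.1e}", f"{g:.1e}")
    # A few years of Moore's Law shaves a handful of doublings off these
    # counts; changing algorithms knocks off dozens of orders of magnitude.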

An AI that is a very good mathematician, and can come up with lots of good algorithms, might plausibly go FOOM. For example, if it has internet access and finds a practical polynomial-time factoring algorithm it will control much of the internet quite quickly. This is not the only example... (read more)

6paulfchristiano13y
If you believe that FOOM is comparably probable to P = NP, I think you should be breathing pretty easily. Based on purely mathematical arguments it would be extraordinarily surprising if P = NP (and even more surprising if SAT had a solution with low degree polynomial running time), but even setting this aside, if there were a fast (say, quadratic) algorithm for SAT, then it would probably allow a smart human to go FOOM within a few weeks. Your concern at this point isn't really about an AI, it's just that a universe where worst-case SAT can be practically solved is absolutely unlike the universe we are used to. If anyone has doubts about this assertion I would be happy to argue, although I think it's an argument that has been made on the internet before. So I guess maybe you shouldn't breathe easy, but you should at least have a different set of worries. In reality I would bet my life against a dollar on the assertion P != NP, but I don't think this makes it difficult for FOOM to occur. I don't think the possibility of a FOOMing AI existing on modern computers is even worth debating, it's just how likely humans are to stumble upon it. If anyone wants to challenge the assertion that a FOOMing AI can exist, as a statement about the natural world rather than a statement about human capabilities, I would be happy to argue with that as well. As an aside, it seems likely that a reasonably powerful AI would be able to build a quantum computer good enough to break most encryption used in practice in the near future. I don't think this is really a serious issue, since breaking RSA seems like about the least threatening thing a reasonably intelligent agent could do to our world as it stands right now.
5JoshuaZ13y
Yes, I'm familiar with these arguments. I find that suggestive but not nearly as persuasive as others seem to. I estimate about a 1% chance that P=NP is provable in ZFC, around a 2% chance that P=NP is undecidable in ZFC (this is a fairly recent update. This number used to be much smaller. I am willing to discuss reasons for it if anyone cares.) and a 97% chance that P != NP. Since this is close to my area of expertise, I think I can make these estimates fairly safely. Absolutely not. Humans can't do good FOOM. We evolved in circumstances where we very rarely had to solve NP-hard or NP-complete problems. And our self-modification system is essentially unconscious. There's little historical evolutionary incentive to take advantage of fast SAT solving. If one doesn't believe this, just look at how much trouble humans have doing all sorts of very tiny instances of simple computational problems like multiplying small numbers, or factoring small integers (say under 10 digits). Really? In that case we have a sharply different probability estimate. Would you care to make an actual bet? Is it fair to say that you are putting an estimate of less than 10^-6 that P=NP? If an AI can make quantum computers that can do that, then it has so much matter manipulation ability that it has likely already won (although I doubt that even a reasonably powerful AI could necessarily do this, simply because quantum computers are so finicky and unstable.) But if P=NP in a practical way, RSA cracking is just one of the many things the AI will have fun with. Many crypto systems, not just RSA, will be vulnerable. The AI might quickly control many computer systems, increasing its intelligence and data input drastically. Many sensitive systems will likely fall under its control. And if P=NP then the AI also has shortcuts to all sorts of other things that could help it, like designing new circuits for it to use (and chip factories are close to automated at this point), and lots of neat biologic
6paulfchristiano13y
I now agree that I was overconfident in P != NP. I was thinking only of failures where my general understanding of and intuition about math and computer science are correct. In fact most of the failure probability comes from the case where I (and most computer scientists) are completely off base and don't know at all what is going on. I think that worlds like this are unlikely, but probably not 1 in a million.
2paulfchristiano13y
We have very different beliefs about P != NP. I would be willing to make the following wager, if it could be suitably enforced with sufficiently low overhead. If a proof that P != NP reaches general acceptance, you will pay me $10000 with probability 1/1000000 (expectation $.01). If an algorithm provably solves 3SAT on n variables in time O(n^4) or less, I will pay you $1000000. This bet is particularly attractive to me, because if such a fast algorithm for SAT appears I will probably cease to care about the million dollars. My actual probability estimate is somewhere in the ballpark of 10^-6, though it's hard to be precise about probabilities so small. Perhaps it was not clear what I meant by an individual going FOOM, which is fair since it was a bit of a misuse. I mean just that an individual with access to such an algorithm could quickly amplify their own power and then exert a dominant influence on human society. I don't imagine a human will physically alter themselves. It might be entertaining to develop and write up a plan of attack for this contingency. I think the step I assume is possible that you don't is the use of a SAT solver to search for a compact program whose behavior satisfies some desired property, which can be used to better leverage your SAT solver. A similar thought experiment my friends and I have occasionally contemplated is: given a machine which can run any C program in exactly 1 second (or report an infinite loop), how many seconds would it take you to ?
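As a toy illustration of the program-search idea described above (this is a brute-force stand-in, not an actual SAT encoding; the "programs" here are single gates over two boolean inputs, and the grammar and specification are invented for the example):

    from itertools import product

    # Enumerate tiny "programs" and return one whose behavior matches a
    # desired specification. A real SAT-based search would encode the same
    # enumeration as clauses and hand it to the solver instead.
    OPS = {
        "and": lambda a, b: a and b,
        "or": lambda a, b: a or b,
        "nand": lambda a, b: not (a and b),
    }

    def behavior(op_name):
        """Truth table of a one-gate program over inputs (a, b)."""
        return tuple(OPS[op_name](a, b) for a, b in product([False, True], repeat=2))

    def find_program(spec):
        """Return the first one-gate program whose truth table equals spec."""
        for name in OPS:
            if behavior(name) == spec:
                return name
        return None

    # Specification: output True exactly when the inputs are not both True.
    print(find_program((True, True, True, False)))  # -> 'nand'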
0JoshuaZ13y
Replying a second time to remind you of this subthread in case you still have any interest in making a bet. If we change it to degree 7 rather than degree 4 and changed the monetary aspects as I outlined I'm ok with the bet.
0JoshuaZ13y
We may need to adjust the terms. The most worrisome parts of that bet are twofold: 1) I don't have 10^5 dollars, so on my end, paying $1000 with probability 1/100000, which has the same expectation, probably makes more sense. 2) I'm not willing to agree to that O(n^4) simply because there are many problems which are in P where our best known algorithm is much worse than that. For example, the AKS primality test is O(n^6) and deterministic Miller-Rabin might be O(n^4) but only if one makes strong assumptions corresponding to a generalized Riemann hypothesis. Not necessarily. While such an algorithm would be useful, many of the more effective uses would only last as long as one kept quiet that one had such a fast SAT solver. So to take full advantage requires some subtlety. There's a limit to how much you can do with this, since general questions about properties of algorithms are still strongly not decidable.
2Douglas_Knight13y
I'm not sure I'm parsing that correctly. Is that 2% for undecidable or undecidable+true? Don't most people consider undecidability evidence against? All crypto systems would be vulnerable. At least, all that have ever been deployed on a computer.
3JoshuaZ13y
2% is undecidable in general. Most of that probability mass is for "There's no polynomial time solver for solving an NP complete problem but that is not provable in ZFC" (obviously one then needs to be careful about what one means by saying such a thing doesn't exist, but I don't want to have to deal with those details). A tiny part of that 2% is the possibility that there's a polynomial time algorithm for solving some NP complete problem but one can't prove in ZFC that the algorithm is polynomial time. That's such a weird option that I'm not sure how small a probability to give it, other than "very unlikely." Actually, no. There are some that would not. For example, one-time pads have been deployed on computer systems (among other methods, using USB flash drives to deliver the secure bits). One-time pads are provably secure. But all public key cryptography would be vulnerable, which means most forms of modern crypto.
2Douglas_Knight13y
I forgot about one-time pads, which certainly are deployed, but which I don't think of as crypto in the sense of "turning small shared secrets into large shared secrets." My point was that breaks not just public-key cryptography, but also symmetric cryptography, which tends to be formalizable as equivalent to one-way functions.
2PhilGoetz13y
Agreed. (Though it would be kinda cool in the long run if P = NP.)
0Liron13y
P vs NP has nothing to do with AI FOOM. P = NP is effectively indistinguishable from P = ALL. Like PaulFChristiano said, if P = NP (in a practical way), then FOOMs are a dime a dozen. And assuming P != NP, the fact that NP-complete problems aren't efficiently solvable in the general case doesn't mean paperclipping the universe is the least bit difficult.
3JoshuaZ13y
Nothing to do with it at all? I'm curious as to how you reach that conclusion. No. There are many fairly natural problems which are not in P. For example, given a specific Go position, does white or black have a win? This is in EXP. The difficulty of FOOMing for most entities such as humans, even given practical P=NP, is still severe. See my discussion with Paul. I'm puzzled at how you can reach such a conclusion. Many natural problems that an entity trying to FOOM would want to solve are NP complete. For example, graph coloring comes up in memory optimization and traveling salesman comes up in circuit design. Now, in a prior conversation I had with Cousin It he made the excellent point that a FOOMing AI might not need to actually deal with worst case instances of these problems. But that is a more subtle issue than what you seem to be claiming.
3soreff13y
I'm confused or about to display my ignorance. When you write: Are you saying that evaluating a Go position has exponential (time?) cost even on a nondeterministic machine? I thought that any given string of moves to the end of the game could be evaluated in polynomial (N^2?) time. I thought that the full set of possible strings of moves could be evaluated (naively) in exponential time on a deterministic machine and polynomial time on a nondeterministic machine - so I thought Go position evaluation would be in NP. I think you are saying that it is more costly than that. Are you saying that, and what am I getting wrong?
8darius13y
"Does this position win?" has a structure like "Is there a move such that, for each possible reply there is a move such that, for each possible reply... you win." -- where existential and universal quantifiers alternate in the nesting. In a SAT problem on the other hand you just have a nest of existentials. I don't know about Go specifically, but that's my understanding of the usual core difference between games and SAT.
0soreff13y
Much appreciated! So the NP solution to SAT is basically an OR over all of the possible assignments of the variables, where here (or for alternating move games in general), we've got alternating ORs and ANDs on sequential moves.
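A minimal (and deliberately exponential-time) sketch of that contrast, with the clause encoding and the toy game chosen just for illustration:

    from itertools import product

    def brute_force_sat(clauses, n_vars):
        """SAT is a single layer of 'there EXISTS an assignment' -- one big OR
        over all assignments. Clauses use signed 1-based variable indices."""
        def satisfied(assign):
            return all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
                       for clause in clauses)
        return any(satisfied(a) for a in product([False, True], repeat=n_vars))

    def to_move_wins(position, moves):
        """A game tree is 'there EXISTS a move such that FOR ALL replies ...'.
        The alternation is the any(...) plus the negation of the recursive call
        (De Morgan turns the opponent's 'exists' into our 'for all')."""
        return any(not to_move_wins(nxt, moves) for nxt in moves(position))

    print(brute_force_sat([[1, 2], [-1, 2]], 2))        # True: x2 = True works
    print(brute_force_sat([[1, 2], [-1, 2], [-2]], 2))  # False: unsatisfiable

    # Toy Nim: take 1-3 stones, whoever takes the last stone wins
    # (no moves left means the previous player took it and you lost).
    nim_moves = lambda stones: [stones - k for k in (1, 2, 3) if k <= stones]
    print([n for n in range(1, 13) if not to_move_wins(n, nim_moves)])  # [4, 8, 12]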
3Perplexed13y
I'm not sure what you are getting wrong. My initial inclination was to think as you do. But then I thought of this: You are probably imagining N to be the number of moves deep that need to be searched. Which you probably think is roughly the square of the board size (nominally, 19). The trouble is, what N really means is the size of the specific problem instance. So, it is possible to imagine Go problems on 1001x1001 boards where the number of stones already played is small. I.e. N is much less than a million, but the amount of computation needed to search the tree is on the order of 1000000^1000000. ETA: This explanation is wrong. Darius got it right.
0soreff13y
Much appreciated! I was taking N to be the number of squares on the board. My current thought is that, as you said, the number of possible move sequences on an N square board is of the order of N^N (actually, I think slightly smaller: N!). As you said, N may be much larger than the number of stones already played. My current understanding is that board size is fixed for any given Go problem. Is that true or false? If it is true, then I'd think that the factor of N branching at each step in the tree of moves is just what gets swept into the nondeterministic part of NP.
2Douglas_Knight13y
If one assumes everything that is conjectured, then yes. To say that it is EXP-hard is to say that it takes exponential time on a deterministic machine. This does not immediately say how much time it takes on a non-deterministic machine. It is not ruled out that NP=EXP, but it is extremely implausible. Also doubted, though more plausible, is that PSPACE=EXP. PSPACE doesn't care about determinism.
0JoshuaZ13y
I'm not completely sure what you mean. Darius's response seems relevant (in particular, you may want to look at the difference between a general non-deterministic Turing machine and an alternating Turing machine). However, there seems to be possibly another issue here: When mathematicians and computer scientists discuss polynomial time, they are talking about polynomials of the length of the input, not polynomials of the input (similarly, for exponential time and other classes). Thus, for example, to say that PRIMES is in P we mean that there's an algorithm that answers whether a given integer is prime that is time bounded by a polynomial of log_2 p (assuming p is written in base 2).
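A quick sketch of why the distinction matters, using trial-division primality testing as the example (the bit sizes are arbitrary):

    import math

    def trial_division_is_prime(p):
        """Naive primality test: about sqrt(p) divisions."""
        if p < 2:
            return False
        return all(p % d for d in range(2, math.isqrt(p) + 1))

    # Measured the way complexity theory measures it -- in the number of bits
    # of the input -- sqrt(p) is 2**(bits/2): exponential in the input length.
    # That is why trial division does not put PRIMES in P, while AKS, which
    # runs in time polynomial in the number of bits, does.
    for bits in (16, 32, 64, 128):
        print(bits, "~2^%d divisions" % (bits // 2))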
-2Liron13y
There are indeed natural problems outside of NP, but an AI will be able to quickly answer any such queries in a way that, to a lesser intelligence, looks indistinguishable from an oracle's answers.
1JoshuaZ13y
How do you know that? What makes you reach that conclusion? Do you mean an AI that has already FOOMed? If so, this is sort of a trivial claim. If you are talking about a pre-FOOM AI then I don't see why you think this is such an obvious conclusion. And the issue at hand is precisely what the AI could do leading up to and during FOOM.
-2paulfchristiano13y
Determining whether white or black wins at Go is certainly not in P (in fact certainly not in NP, I think, if the game can be exponentially long), but in the real world you don't care whether white or black wins. You care about whether you win in the particular game of Go you are playing, which is in NP (although with bad constants, since you have to simulate whoever you are playing against). There is a compelling argument to be made that any problem you care about is in NP, although in general the constants will be impractical in the same sense that building a computer to simulate the universe is impractical, even if the problem is in P. In fact, this question doesn't really matter, because P is not actually the class of problems which can be solved in practice. It is a convenient approximation which allows us to state some theorems we have a chance of proving in the next century. I think the existence of computational lower bounds is clearly of extreme importance to anything clever enough to discover optimal algorithms (and probably also to humans in the very long term for similar reasons). P != NP is basically the crudest such question, and even though I am fairly certain I know which way that question goes, the probability of an AI fooming depends on much subtler problems which I can't even begin to understand. In fact, basically the only reason I personally am interested in the P vs NP question is because I think it involves techniques which will eventually help us address these more difficult problems.
0JoshuaZ13y
Huh? I don't follow this at all. The question of who would win any fixed game is trivially in NP because it is doable in constant time. Any single question is always answerable in constant time. Am I misunderstanding you?
-4paulfchristiano13y
Suppose I want to choose how to behave to achieve some goal. Either what genes to put in a cell I am growing or what moves to play in a game of go or etc. Presumably I can determine whether any fixed prescription will cause me to attain my goal---I can simulate the universe and check the outcome at the end. Thus checking whether a particular sequence of actions (or a particular design, strategy, etc.) has the desired property is in P. Thus finding one with the desired property is in NP. The same applies to determining how to build a cell with desired properties, or how to beat the world's best go player, etc. None of this is to say that P = NP sheds light on how easy these questions actually are, but P = NP is the normal theoretical interpretation, and in fact the only theoretical interpretation that makes sense if you are going to stick with the position that P is precisely the class of problems that an AI can solve.
0JoshuaZ13y
I'm having some trouble parsing what you have written. I don't follow this line of reasoning at all. Whether a problem is in P is a statement about the length of time it takes in general to solve instances. Also, a problem for this purpose is a collection of questions of the form "for given input N, does N have property A?" I'm not sure what you mean by this. First of all, the general consensus is that P != NP. Second of all, in no interpretation is P somehow precisely the set of problems that an AI can solve. It seems you are failing to distinguish between instances of problems and the general problems. Thus for example, the traveling salesman problem is NP complete. Even if P != NP, I can still solve individual traveling salesman problems (you can probably solve any instance with fewer than five nodes more or less by hand without too much effort). Similarly, even if factoring turns out to be not in P, it doesn't mean anyone is going to have trouble factoring 15.
-1rwallace13y
Right, but one of the reasons for the curve of capability is the general version of Amdahl's law. A particular new algorithm may make a particular computational task much easier, but if that task was only 5% of the total problem you are trying to solve, then even an infinite speedup on that task will only give you about 5% overall improvement. The upshot is that new algorithms only make enough of a splash to be heard of by computer scientists, whereas the proverbial man on the street has heard of Moore's Law. But I will grant you that a constructive proof of P=NP would... be interesting. I don't know that it would enable AI to go foom (that would still have other difficulties to overcome), but it would be of such wide applicability that my argument from the curve of capability against AI foom would be invalidated. I share the consensus view that P=NP doesn't seem to be on the cards, but I agree it would be better to have a proof of P!=NP.
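The standard form of Amdahl's law makes the 5% figure concrete; a minimal sketch (the fractions and speedup factors are just example values):

    def amdahl_speedup(p, s):
        """Overall speedup when a fraction p of the work is sped up by factor s."""
        return 1.0 / ((1.0 - p) + p / s)

    print(amdahl_speedup(0.05, 10))            # ~1.047: barely noticeable
    print(amdahl_speedup(0.05, float("inf")))  # ~1.053: the ~5% ceiling
    print(amdahl_speedup(0.95, 10))            # ~6.9: large only when p is large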
4Jordan13y
There are plenty of instances where an algorithmic breakthrough didn't just apply to 5% of a problem, but the major part of it. My field (applied math) is riddled with such breakthroughs.
0rwallace13y
Yeah, but it's fractal. For every such problem, there is a bigger problem such that the original one was a 5% subgoal. This is one of the reasons why the curve of capability manifests so consistently: you always find yourself hitting the general version of Amdahl's law in the context of a larger problem.

For every such problem, there is a bigger problem such that the original one was a 5% subgoal.

Sure, you can always cook up some ad hoc problem such that my perfect 100% solution to problem A is just a measly 5% subcomponent of problem B. That doesn't change the fact that I've solved problem A, and all the ramifications that come along with it. You're just relabeling things to automate a moving goal post. Luckily, an algorithm by any other name would still smell as sweet.

3magfrump13y
I think the point he was trying to make was that the set of expanding subgoals that an AI would have to make its way through would be sufficient to slow it down to within the exponential we've all been working with. Phrased this way, however, it's a much stronger point and I think it would require more discussion to be meaningful.
-2rwallace13y
Bear in mind that if we zoom in to sufficiently fine granularity, we can get literally infinite speedup -- some of my most productive programming days have been when I've found a way to make a section of code unnecessary, and delete it entirely. To show this part of my argument is not infinitely flexible, I will say one algorithmic breakthrough that would invalidate it, would be a constructive proof of P=NP (or a way to make quantum computers solve NP-complete problems in polynomial time - I'm told the latter has been proven impossible, but I don't know how certain it is that there aren't any ways around that).
3paulfchristiano13y
There are no known strong results concerning the relationship between BQP (roughly the analogue of P for quantum computers) and NP. There is strong consensus that BQP does not contain NP, but it is not as strong as the overwhelming consensus that P != NP.
2bentarm13y
Presumably because P = NP would imply that NP is contained in BQP, so you can't believe the first of your statements without believing the second.
0Psy-Kosh13y
It's not even known if NP contains BQP?
5bentarm13y
No. The best we can do is that both contain BPP and are contained in PP, as far as I recall.
3wnoise13y
And there exist oracles relative to which BQP is not contained in MA (which contains NP).
2JoshuaZ13y
May I suggest that the reasons the proverbial person on the street has heard of Moore's Law are more that 1) it is easier to understand and 2) it has a more visibly obvious impact on their lives? Edit: Also, regarding 5%, sometimes the entire problem is just in the algorithm. For example, in the one I gave, factoring, the entire problem is: can you factor integers quickly?

This is an interesting case, and reason enough to form your hypothesis, but I don't think observation backs up the hypothesis:

The difference in intelligence between the smartest academics and even another academic is phenomenal, to say nothing of the difference between the smartest academics and an average person. Nonetheless, the brain size of all these people is more or less the same. The difference in effectiveness is similar to the gulf between men and apes. The accomplishments of the smartest people are beyond the reach of average people. There are things smart people can do that average people couldn't, regardless of their numbers or resources.

The only way to create the conditions for any sort of foom would be to shun a key area completely for a long time, so that ultimately it could be rapidly plugged into a system that is very highly developed in other ways.

Such as, for instance, the fact that our brains aren't likely optimal computing machines, and could be greatly accelerated on silicon.

Forget about recursive FOOMs for a minute. Do you not think a greatly accelerated human would be orders of magnitude more useful (more powerful) than a regular human?

1rwallace13y
You raise an interesting point about differences among humans. It seems to me that, caveats aside about there being a lot of exceptions, different kinds of intelligence, IQ being an imperfect measure etc., there is indeed a large difference in typical effectiveness between, say, IQ 80 and 130... ... and yet not such a large difference between IQ 130 and IQ 180. Last I heard, the world's highest IQ person wasn't cracking problems the rest of us had found intractable, she was just writing self-help books. I find this counterintuitive. One possible explanation is the general version of Amdahl's law: maybe by the time you get to IQ 130, it's not so much the limiting factor. It's also been suggested that to get a human brain into very high IQ levels, you have to make trade-offs; I don't know whether there's much evidence for or against this. As for uploading, yes, I think it would be great if we could hitch human thought to Moore's Law, and I don't see any reason why this shouldn't eventually be possible.

Correlation between IQ and effectiveness does break down at higher IQs, you're right. Nonetheless, there doesn't appear to be any sharp limit to effectiveness itself. This suggests to me that it is IQ that is breaking down, rather than us reaching some point of diminishing returns.

As for uploading, yes, I think it would be great if we could hitch human thought to Moore's Law, and I don't see any reason why this shouldn't eventually be possible.

My point here was that human minds are lopsided, to use your terminology. They are sorely lacking in certain hardware optimizations that could render them thousands or millions of times faster (this is contentious, but I think reasonable). Exposing human minds to Moore's Law doesn't just give them the continued benefit of exponential growth, it gives them a huge one-off explosion in capability.

For all intents and purposes, an uploaded IQ 150 person accelerated a million times might as well be a FOOM in terms of capability. Likewise an artificially constructed AI with similar abilities.

(Edit: To be clear, I'm skeptical of true recursive FOOMs as well. However, I don't think something that powerful is needed in practice for a hard take off to occur, and think arguments for FAI carry through just as well even if self modifying AIs hit a ceiling after the first or second round of self modification.)

2CarlShulman13y
Average performance in science and income keeps improving substantially with IQ well past 130: http://www.vanderbilt.edu/Peabody/SMPY/Top1in10000.pdf Some sources of high intelligence likely work (and aren't fixed throughout the population) because of other psychological tradeoffs.
0Jordan13y
Thanks for the correction!
2rwallace13y
Sure, I'm not saying there is a sharp limit to effectiveness, at least not one we have nearly reached, only that improvements in effectiveness will continue to be hard-won. As for accelerating human minds, I'm skeptical about a factor of millions, but thousands, yes, I could see that ultimately happening. But getting to that point is not going to be a one-off event. Even after we have the technology for uploading, there's going to be an awful lot of work just debugging the first uploaded minds, let alone getting them to the point where they're not orders of magnitude slower than the originals. Only then will the question of doubling their speed every couple of years even arise.
5Jordan13y
My original example about academics was to demonstrate that there are huge jumps in effectiveness between individuals, on the order of the gap between man and ape. This goes against your claim that the jump from ape to man was a one time bonanza. The question isn't if additional gains are hard-won or not, but how discontinuous their effects are. There is a striking discontinuity between the effectiveness of different people. That is one possible future. Here's another one: Small animal brains are uploaded first, and the kinks and bugs are largely worked out there. The original models are incredibly detailed and high fidelity (because no one knows what details to throw out). Once the animal brain emulations are working well, a plethora of simplifications to the model are found which preserve the qualitative behavior of the mind, allowing for orders of magnitude speed ups. Human uploads quickly follow, and intense pressure to optimize leads to additional orders of magnitude speed up. Within a year the fastest uploads are well beyond what meatspace humans can compete with. The uploads then leverage their power to pursue additional research in software and hardware optimization, further securing an enormous lead. (If Moore's Law continued to hold in their subjective time frame, then even if they are only 1000x faster they would double in speed every day. In fact, if Moore's Law held indefinitely they would create a literal singularity in 2 days. That's absurd, of course. But the point is that what the future Moore's Law looks like could be unexpected once uploads arrive). There's a million other possible futures, of course. I'm just pointing out that you can't look at one thing (Moore's Law) and expect to capture the whole picture.

Because each was a factor of two increase, and it is a general rule that each doubling tends to bring about the same increase in capability.

I found this an interesting rule, and thought that after the examples you would establish some firmer theoretical basis for why it might work the way it does. But you didn't do that, and instead jumped to talking about AGI. It feels like you're trying to apply a rule before having established how and why it works, which raises "possible reasoning by surface analogy" warning bells in my head. The tone in your post is a lot more confident than it should be.

3rwallace13y
Fair enough; it's true that I don't have a rigorous mathematical model, nor do I expect to have one anytime soon. My current best sketch at an explanation is that it's a combination of: 1. P != NP, i.e. the familiar exponential difficulty of finding solutions given only a way to evaluate them - I think this is the part that contributes the overall exponential shape. 2. The general version of Amdahl's law, if subproblem X was only e.g. 10% of the overall job, then no improvement in X can by itself give more than a 10% overall improvement - I think this is the part that makes it so robust; as I mentioned in the subthread about algorithmic improvements, even if you can break the curve of capability in a subproblem, it persists at the next level up.

it is a general rule that each doubling tends to bring about the same increase in capability.

This rule smells awfully like a product of positive/confirmation bias to me.

Did you pre-select a wide range of various endeavours and projects and afterwards analyse their rate of progress, thus isolating this common pattern;

or did you notice a similarity between a couple of diminishing-returns phenomena, and then kept coming up with more examples?

My guess is strongly on the latter.

4rwallace13y
You guess incorrectly -- bear in mind that when I first looked into AI go foom it was not with a view to disproving it! Actually, I went out of my way to choose examples from areas that were already noteworthy for their rate of progress. The hard part of writing this post wasn't finding examples -- there were plenty of those -- it was understanding the reason for and implications of the one big exception.
6Perplexed13y
I'm not convinced you have nailed the reasons for and implications of the exception. The cognitive significance of human language is not simply that it forced brain evolution into developing new kinds of algorithms (symbolic processing). Rather language enabled culture, which resulted in an explosion of intra-species co-evolution. But although this led to a rapid jump (roughly 4x) in the capacity of brain hardware, the significant thing for mankind is not the increased individual smarts - it is the greatly increased collective smarts that makes us what we are today.
2rwallace13y
Well yes -- that's exactly why adding language to a chimpanzee led to such a dramatic increase in capability.
9Perplexed13y
But, as EY points out, there may be some upcoming aspects of AI technology evolution which can have the same dramatic effects. Not self-modifying code, but maybe high bandwidth networks or ultra-fine-grained parallel processing. Eliezer hasn't convinced me that a FOOM is inevitable, but you have come nowhere near convincing me that another one is very unlikely.
3rwallace13y
High-bandwidth networks and parallel processing have fit perfectly well within the curve of capability thus far. If you aren't convinced yet that another one is very unlikely, okay, what would convince you? Formal proof of a negative isn't possible outside pure mathematics.

If you aren't convinced yet that another one is very unlikely, okay, what would convince you?

I'm open to the usual kinds of Bayesian evidence. Let's see. H is "there will be no more FOOMs". What do you have in mind as a good E? Hmm, let's see. How will the world be observably different if you are right, from how it will look if you are wrong?

Point out such an E, and then observe it, and you may sway me to your side.

Removing my tongue from my cheek, I will make an observation. I'm sure that you have heard the statement "Extraordinary claims require extraordinary evidence." Well, there is another kind of claim that requires extraordinary evidence. Claims of the form "We don't have to worry about that, anymore."

1orthonormal13y
IWICUTT. (I Wish I Could Upvote This Twice.)
1rwallace13y
If I'm wrong, then wherever we can make use of some degree of recursive self-improvement -- to the extent that we can close the loop, feed the output of an optimization process into the process itself, as in e.g. programming tools, chip design and Eurisko -- we should be able to break the curve of capability and demonstrate sustained faster than exponential improvement. If I'm right, then the curve of capability should hold in all cases, even when some degree of recursive self-improvement is in operation, and steady exponential improvement should remain the best we can get. All the evidence we have thus far, supports the latter case, but I'm open to -- and would very much like -- demonstrations of the contrary. I address that position here and here.
9Perplexed13y
Then maybe I misunderstood your claim, because I thought you had claimed that there are no kinds of recursive self-improvement that break your curve. Or at least no kinds of recursive self-improvement that are relevant to a FOOM. To be honest, my intuition is that recursive self-improvement opportunities generating several orders of magnitude of improvement must be very rare. And where they do exist, there probably must be significant "overhang" already in place to make them FOOM-capable. So a FOOM strikes me as unlikely. But your posting here hasn't led me to consider it any less likely than I had before. Your "curve of capability" strikes me as a rediscovery of something economists have known about for years - the "law of diminishing returns". Since my economics education took place more than 40 years ago, "diminishing returns" is burnt deep into my intuitions. The trouble is that "diminishing returns" is not really a law. It is, like your capability curve, more of a rough empirical observation - though admittedly one with lots of examples to support it. What I hear is that, since I got my degree and formed my intuitions, economists have been exploring the possibility of "increasing returns". And they find examples of it practically everywhere that people are climbing up a new-technology learning curve. In places like electronics and biotech. They are seeing the phenomenon in almost every new technology. Even without invoking recursive self-improvement. But so far, not in AI. That seems to be the one new industry that is still stumbling around in the dark. Kind of makes you wonder what will happen when you guys finally find the light switch.
1rwallace13y
That is what I'm claiming, so if you can demonstrate one, you'll have falsified my theory. I don't think so, I think that's a different thing. In fact... ... I would've liked to use the law of increasing returns as a positive example, but I couldn't find a citation. The version I remember reading about (in a paper book, back in the 90s) said that every doubling of the number of widgets you make lets you improve the process/cut costs/whatever by a certain amount; and that this was remarkably consistent across industries -- so once again we have the same pattern, double the optimization effort and you get a certain degree of improvement.
1NancyLebovitz13y
I think I read that, too, and the claimed improvement was 20% with each doubling.
0Perplexed13y
That would look linear on a log-log graph. A power-law response. I understood rwallace to be hawking a "curve of capability" which looks linear on a semi-log graph. A logarithmic response. Of course, one of the problems with rwallace's hypothesis is that it becomes vague when you try to quantify it. "Capability increases by the same amount with each doubling of resources" can be interpreted in two ways. "Same amount" meaning "same percentage", or meaning literally "same amount".
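A small sketch of the two readings just distinguished (the 20% figure and the one-unit increment are illustrative):

    import math

    def learning_curve_cost(units, ratio_per_doubling=0.8):
        """Power-law response: cost falls 20% with each doubling of cumulative
        output -> a straight line on a log-log plot."""
        return units ** math.log2(ratio_per_doubling)

    def log_capability(resources, increment_per_doubling=1.0):
        """Logarithmic response: 'the same increment per doubling' read
        literally -> a straight line on a semi-log plot."""
        return increment_per_doubling * math.log2(resources)

    for r in (1, 2, 4, 8, 16):
        print(r, round(learning_curve_cost(r), 3), round(log_capability(r), 3))
    # learning_curve_cost: 1.0, 0.8, 0.64, 0.512, 0.41  (constant ratio per doubling)
    # log_capability:      0.0, 1.0, 2.0,  3.0,   4.0   (constant increment per doubling)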
-1rwallace13y
Right, to clarify, I'm saying the curve of capability is a straight line on a log-log graph, perhaps the clearest example being the one I gave of chip design, which gives repeated doublings of output for doublings of input. I'm arguing against the "AI foom" notion of faster growth than that, e.g. each doubling taking half the time of the previous one.
0JGWeissman13y
So this could be falsified by continuous capability curves that curve upwards on log-log graphs, and your arguments in various other threads that the discussed situations result in continuous capability curves are not strong enough to support your theory.
0JoshuaZ13y
Some models of communication equipment suggest high return rates for new devices since the number of possible options increases at the square of the number of people with the communication system. I don't know if anyone has looked at this in any real detail although I would naively guess that someone must have.
1wedrifid13y
Yet somehow the post is managing to hover in the single figure positives!

It seems like you're entirely ignoring feedback effects from more and better intelligence being better at creating more and better intelligence, as argued in Yudkowsky's side of the FOOM debate.

And hardware overhang (faster computers developed before general cognitive algorithms, first AGI taking over all the supercomputers on the Internet) and fast infrastructure (molecular nanotechnology) and many other inconvenient ideas.

Also if you strip away the talk about "imbalance" what it works out to is that there's a self-contained functioning creature, the chimpanzee, and natural selection burps into it a percentage more complexity and quadruple the computing power, and it makes a huge jump in capability. Nothing is offered to support the assertion that this is the only such jump which exists, except the bare assertion itself. Chimpanzees were not "lopsided", they were complete packages designed for an environment; it turned out there were things that could be done which created a huge increase in optimization power (calling this "symbolic processing" assumes a particular theory of mind, and I think it is mistaken) and perhaps there are yet more things like that, such as, oh, say, self-modification of code.

calling this "symbolic processing" assumes a particular theory of mind, and I think it is mistaken

Interesting. Can you elaborate or link to something?

I'm not Eliezer, but will try to guess what he'd have answered. The awesome powers of your mind only feel like they're about "symbols", because symbols are available to the surface layer of your mind, while most of the real (difficult) processing is hidden. Relevant posts: Detached Lever Fallacy, Words as Mental Paintbrush Handles.

1xamdam13y
Thanks. The posts (at least the second one) seem to suggest that symbolic reasoning is overstated and at least some reasoning is clearly non-symbolic (e.g. visual). In this context the question is whether the symbolic processing (there is definitely some - math, for example) gave pre-humans the boost that allowed the huge increase in computing power, so I am not seeing the contradiction.
2Perplexed13y
Speech is a kind of symbolic processing, and is probably an important capability in mankind's intellectual evolution, even if symbolic processing for the purpose of reasoning (as in syllogisms and such) is an ineffectual modern invention.
0timtyler13y
Susan Blackmore argues that what originally caused the "huge increase in optimization power" was memes - not symbolic processing - which probably started up a bit later than the human cranium's expansion did.
5Will_Sawin13y
What's clearly fundamental about the human/chimpanzee advantage, the thing that made us go FOOM and take over the world, is that we can, extremely efficiently, share knowledge. This is not as good as fusing all our brains into a giant brain, but it's much much better than just having a brain. This analysis possibly suggests that "taking over the world's computing resources" is the most likely FOOM, because it is similar to the past FOOM, but that is weak evidence.
4XiXiDu13y
The genetic difference between a chimp and a human amounts to ~40–45 million bases that are present in humans and missing from chimps. And that number is irrespective of the difference in gene expression between humans and chimps. So it's not like you're adding a tiny bit of code and getting a superapish intelligence. Nothing is offered to support the assertion that there is another such jump. If you were to assert this then another premise of yours, that a universal computing device can simulate every physical process, could be questioned based on the same principle. So here is an antiprediction: humans are on equal footing with any other intelligence who can master abstract reasoning (that does not necessarily include speed or overcoming bias).
2Vladimir_Nesov13y
In a public debate, it makes sense to defend both sides of an argument, because each of the debaters actually tries to convince a passive third party whose beliefs are not clearly known. But any given person that you are trying to convince doesn't have a duty to convince you that the argument you offer is incorrect. It's just not an efficient thing to do. A person should always be allowed to refute an argument on the grounds that they don't currently believe it to be true. They can be called on contradicting assertions of not believing or believing certain things, but never required to prove a belief. The latter would open a separate argument, maybe one worth engaging in, but often a distraction from the original one, especially when the new argument being separate is not explicitly acknowledged.
3XiXiDu13y
I agree with some of what you wrote although I'm not sure why you wrote it. Anyway, I was giving an argumentative inverse of what Yudkowsky asserted and thereby echoed his own rhetoric. Someone claimed A and in return Yudkowsky claimed that A is a bare assertion, therefore ¬A, whereupon I claimed that ¬A is a bare assertion, therefore the truth-value of A is again ~unknown. This of course could have been inferred from Yudkowsky's statement alone, if interpreted as a predictive inverse (antiprediction), if not for the last sentence which states, "[...] and perhaps there are yet more things like that, such as, oh, say, self-modification of code." [1] Perhaps yes, perhaps not. Given that his comment already scored 16 when I replied, I believed that highlighting that it offered no convincing evidence for or against A would be justified by one sentence alone. Here we may disagree, but note that my comment included more information than that particular sentence alone. 1. Self-modification of code does not necessarily amount to a superhuman level of abstract reasoning similar to that between humans and chimps but might very well be unfeasible as it demands self-knowledge requiring resources exceeding that of any given intelligence. This would agree with the line of argumentation in the original post, namely that the next step (e.g. an improved AGI created by the existing AGI) will require a doubling of resources. Hereby we are on par again, two different predictions canceling each other out.
4Vladimir_Nesov13y
You should keep track of whose beliefs you are talking about, as it's not always useful or possible to work with the actual truth of informal statements where you analyze correctness of debate. A person holding a wrong belief for wrong reasons can still be correct in rejecting an incorrect argument for incorrectness of those wrong reasons. If A believes X, then (NOT X) is a "bare assertion", not enough to justify A changing their belief. For B, who believes (NOT X), stating "X" is also a bare assertion, not enough to justify changing the belief. There is no inferential link between refuted assertions and beliefs that were held all along. A believes X not because "(NOT X) is a bare assertion", even though A believes both that "(NOT X) is a bare assertion" (correctly) and X (of unknown truth).
0XiXiDu13y
That is true. Yet for a third party, one that is unaware of any additional substantiation not featured in the debate itself, a prediction and its antiprediction cancel each other out. As a result no conclusion can be drawn by an uninformed bystander. This I tried to highlight without having to side with one party.
2Vladimir_Nesov13y
They don't cancel each other out, as they both lack convincing power, being equally irrelevant. It's an error to state as arguments what you know your audience won't agree with (change their mind in response to). At the same time, explicitly rejecting an argument that failed to convince is entirely correct.
0XiXiDu13y
Let's assume that you contemplate the possibility of an outcome Z. Now you come across a discussion between agent A and agent B discussing the prediction that Z is true. If agent B proclaims the argument X in favor of Z being true and you believe that X is not convincing, then this still gives you new information about agent B and the likelihood of Z being true. You might now conclude that Z is slightly more likely to be true because of additional information in favor of Z and the confidence of agent B necessary to proclaim that Z is true. Agent A does however proclaim argument Y in favor of Z being false, and you believe that Y is just as unconvincing as argument X in favor of Z being true. You might now conclude again that the truth-value of Z is ~unknown as each argument and the confidence of its proponent ~outweigh each other. Therefore no information is irrelevant if it is the only information about any given outcome in question. Your judgement might weigh less than the confidence of an agent with possible unknown additional substantiation in favor of its argument. If you are unable to judge the truth-value of an exclusive disjunction, then the fact that any given argument about it is not compelling tells more about you than about the agent proclaiming it. Any argument alone has to be taken into account, if only due to its logical consequence. Every argument should be incorporated into your probability estimates, because it signals a certain confidence (by being proclaimed at all) on the part of the agent uttering it. Yet if there exists a counterargument that is inverse to the original argument you'll have to take that into account as well. This counterargument might very well outweigh the original argument. Therefore there are no arguments that lack the power to convince, however small, yet arguments can outweigh and trump each other. ETA: Fixed the logic, thanks Vladimir_Nesov.
3Vladimir_Nesov13y
Z XOR ¬Z is always TRUE. (I know what you mean, but it looks funny.)
0XiXiDu13y
Fixed it now (I hope), thanks.
0Vladimir_Nesov13y
I think it has become more confused now. With C and D unrelated, why do you care about (C XOR D)? For the same reason, you can't now expect evidence for C to always be counter-evidence for D.
2XiXiDu13y
Thanks for your patience and feedback, I updated it again. I hope it is now somewhat more clear what I'm trying to state.
0XiXiDu13y
Whoops, I'm just learning the basics (some practice here). I took NOT Z as an independent proposition. I guess there is no simple way to express this if you do not assign the negation of Z its own variable, in case you want it to be an independent proposition?
1Vladimir_Nesov13y
B believes that X argues for Z, but you might well believe that X argues against Z. (You are considering a model of a public debate, while this comment was more about principles for an argument between two people.) Also, it's strange that you are contemplating levels of belief in Z while A and B assert it as purely true or false. How overconfident of them. (Haven't yet got around to a complete reply rectifying the model, but will do eventually.)
4rwallace13y
See my reply to saturn on recursive self-improvement. Potential hardware overhang, I already addressed. Nanotechnology is thus far following the curve of capability, and there is every reason to expect it will continue to do so in the future. I already explained the sense in which chimpanzees were lopsided. Self modification of code has been around for decades.
2David_Gerard13y
This may be slightly off-topic: nothing Drexler hypothesised has, as far as I know, even been started. As I understand it, the state of things is still that we have literally no idea how to get there from here, and what's called "nanotechnology" is materials science or synthetic biology. Do you have details of what you're describing as following the curve?
4timtyler13y
Perhaps start here, with his early work on the potential of hypertext ;-)
1rwallace13y
A good source of such details is Drexler's blog, where he has written some good articles about -- and seems to consider highly relevant -- topics like protein design and DNA origami.
7David_Gerard13y
(cough) I'm sure Drexler has much detail on Drexler's ideas. Assume I'm familiar with the advocates. I'm speaking of third-party sources, from the working worlds of physics, chemistry, physical chemistry and materials science, for example. As far as I know - and I have looked - there's little or nothing. No progress to nanobots, no progress to nanofactories. The curve in this case is a flat line at zero. Hence asking you specifically for detail on what you are plotting on your graph.
2lsparrish13y
There has been some impressive-sounding research done on simulated diamondoid tooltips for this kind of thing. (Admittedly, done by advocates.) I suspect that when these things do arrive, they will tend to have hard vacuum, cryogenic temperatures, and flat surfaces as design constraints.
1rwallace13y
Well, that's a bit like saying figuring out how to smelt iron constituted no progress toward the Industrial Revolution. These things have to go a step at a time, and my point in referring to Drexler's blog was that he seems to think e.g. protein design and DNA origami do constitute real progress. As for things you could plot on a graph, consider the exponentially increasing amount of computing power put into molecular modeling simulations, not just by nanotechnology advocates, but by people who actually do e.g. protein design for a living today.
1rwallace13y
Also, I'm not sure what you mean by "symbolic processing" assuming a particular theory of mind -- theories of mind differ on the importance thereof, but I'm not aware of any that dispute its existence. I'll second the request for elaboration on this. I'll also ask, assuming I'm right, is there any weight of evidence whatsoever that would convince you of this? Or is AI go foom for you a matter of absolute, unshakable faith?

I'll also ask, assuming I'm right, is there any weight of evidence whatsoever that would convince you of this? Or is AI go foom for you a matter of absolute, unshakable faith?

It would be better if you waited until you had made somewhat of a solid argument before you resorted to that appeal. Even Robin's "Trust me, I'm an Economist!" is more persuasive.

The Bottom Line is one of the earliest posts in Eliezer's own rationality sequences and describes approximately this objection. You'll note that he added an Addendum:

This is intended as a caution for your own thinking, not a Fully General Counterargument against conclusions you don't like.

-3rwallace13y
I'm resisting the temptation to say "trust me, I'm an AGI researcher" :-) Bear in mind that my bottom line was actually the pro "AI go foom" side; it's still what I would like to believe. But my theory is clearly falsifiable. I stand by my position that it's fair to ask you and Eliezer whether your theory is falsifiable, and if so, what evidence you would agree to have falsified it.

I'm resisting the temptation to say "trust me, I'm an AGI researcher" :-)

But barely. ;)

You would not believe how little that would impress me. Well, I suppose you would - I've been talking with XiXi about Ben, after all. I wouldn't exactly say that your status incentives promote neutral reasoning on this position - or Robin's on the same. It is also slightly outside the core of your expertise, which is exactly where the judgement of experts is notoriously demonstrated to be poor.

Bear in mind that my bottom line was actually the pro "AI go foom" side; it's still what I would like to believe.

You are trying to create AGI without friendliness and you would like to believe it will go foom? And this is supposed to make us trust your judgement with respect to AI risks?

Incidentally, 'the bottom line' accusation here was yours, not the other way around. The reference was to question its premature use as a fully general counterargument.

But my theory is clearly falsifiable. I stand by my position that it's fair to ask you and Eliezer whether your theory is falsifiable, and if so, what evidence you would agree to have falsified it.

We are talking here about pred... (read more)

2rwallace13y
No indeed, they very strongly promote belief in AI foom - that's why I bought into that belief system for a while, because if true, it would make me a potential superhero. Nope, it's exactly in the core of my expertise. Not that I'm expecting you to believe my conclusions for that reason. When I believed in foom, I was working on Friendly AI. Now that I no longer believe that, I've reluctantly accepted that human-level AI in the near future is not possible, and I'm working on smarter tool AI instead - well short of human equivalence, but hopefully, with enough persistence and luck, better than what we have today. That is what falsifiability refers to, yes. My theory makes the prediction that even when recursive self-improvement is used, the results will be within the curve of capability, and will not produce more than a steady exponential rate of improvement. Are you saying your theory makes no other predictions than this?
9wedrifid13y
RWallace, you made a suggestion of unfalsifiability, a ridiculous claim. I humored you by giving the most significant, obvious and overwhelmingly critical way to falsify (or confirm) the theory. You now presume to suggest that such a reply amounts to a claim that this is the only prediction that could be made. This is, to put it in the most polite terms I am willing, disingenuous.
-8rwallace13y
0wedrifid13y
You are not an expert on recursive self-improvement, either as it relates to AGI or as a phenomenon in general.
4JoshuaZ13y
In fairness, I'm not sure anyone is really an expert on this (although this doesn't detract from your point at all).
8wedrifid13y
You are right, and I would certainly not expect anyone to have such expertise for me to take their thoughts seriously. I am simply wary of Economists (Robin) or AGI creator hopefuls claiming that their expertise should be deferred to (only relevant here as a hypothetical pseudo-claim). Professions will naturally try to claim more territory than would be objectively appropriate. This isn't because the professionals are actively deceptive but rather because it is the natural outcome of tribal instincts. Let's face it - intellectual disciplines and fields of expertise are mostly about pissing on trees, but with better hygiene.
-2XiXiDu13y
Yes, but why would the antipredictions of AGI researchers not outweigh yours, as they are directly inverse? Further, if your predictions are not falsifiable then they cannot be refuted at all. Therefore it is not unreasonable to ask what would disqualify your predictions, so that we can argue on the basis of our diverging opinions here. Otherwise, as I said above, we'll have two inverse predictions that outweigh each other, and not the discussion about risk estimates we should be having.
0wedrifid13y
The claim being countered was falsifiability. Your reply here is beyond irrelevant to the comment you quote.
-2XiXiDu13y
rwallace said it all in his comment that has been downvoted. Since I'm unable to find anything wrong with his comment and don't understand yours at all, which has for unknown reasons been upvoted, there's no way for me to counter what you say beyond what I've already said. Here's a wild guess at what I believe the positions to be. rwallace asks you what information would make you update or abandon your predictions. You in turn seem to believe that predictions are just that: utterances of what might be possible, unquestionable and not subject to any empirical criticism. I believe I'm at least smarter than the general public, although I haven't read a lot of Less Wrong yet. Further, I'm always willing to announce that I have been wrong and to change my mind. This should at least make you question your communication skills regarding outsiders, a little bit. Theories are collections of proofs, and a hypothesis is a prediction or collection of predictions that must be falsifiable, or proven in order to become the collection of proofs that is a theory. It is not absurd at all to challenge predictions based on their refutability, as any prediction that isn't falsifiable will be eternal and therefore useless.
-1wedrifid13y
The Wikipedia article on falsifiability would be a good place to start if you wish to understand what is wrong with the way falsification has been used (or misused) here. With falsifiability understood, seeing the problem should be straightforward.
2XiXiDu13y
I'll just back out and withdraw my previous statements here. I had already been reading that Wiki entry when you replied. It would certainly take too long to figure out where I might be wrong here. I thought falsifiability was sufficiently clear to me to ask what would change someone's mind when I believe a given prediction is insufficiently specific. I'll have to immerse myself in the shallows that are the foundations of falsifiability (philosophy). I have done so in the past and will continue to do so, but that will take time. Nothing so far has really convinced me that an unfalsifiable idea can provide more than hints of what might be possible, and therefore something new to try. Yet empirical criticism, in the form of the eventual realization of one's ideas, or a proof of contradiction (or inconsistency), seems to be the best grounding of any truth-value (at least in retrospect to a prediction). That is why I like to ask what information would change one's mind about an idea, prediction or hypothesis. I call this falsifiability. If one replied, "nothing, falsifiability is misused here", I would conclude that the idea is unfalsifiable. Maybe wrongly so!
0wedrifid13y
Thou art wise.
0[anonymous]13y
I'd like to know if you disagree with this comment. It would help me to figure out where we disagree or what exactly I'm missing or misunderstand with regard to falsifiability and the value of predictions.

The answer is that each doubling of computing power adds roughly the same number of ELO rating points.

When you make a sound carry 10 times as much energy, it only sounds a bit louder.

If your unit of measure already compensates for huge leaps in underlying power, then you'll tend to ignore that the leaps in power are huge.

How you feel about a RAM upgrade is one such measure, because you don't feel everything that happens inside your computer. You're measuring benefit by how it works today vs. yesterday, instead of what "it" is doing today vs. 20 years ago.

Can such a thing happen again? In particular, is it possible for AI to go foom the way humanity did?

If such lopsidedness were to repeat itself... well even then, the answer is probably no.

Isn't that lopsidedness the state computers are currently in? So the first computer that gets the 'general intelligence' thing will have a huge advantage even before any fancy self-modification.

You seem to be arguing from the claim that there are exponentially diminishing returns within a certain range, to the claim that there are no phase transitions outside that range. You explain away the one phase transition you've noticed (ape to human) in an unconvincing manner, and ignore three other discontinuous phase transitions: agriculture, industrialization, and computerization.

Also, this statement would require extraordinary evidence that is not present:

every even slightly promising path has had people working on it

It seems to me that the first part of your post, which examines PC memory and such, simply states that (1) there are some algorithms (real-world process patterns) that take an exponential amount of resources, and (2) if capability and resources are roughly the same thing, then we can unpack the concept of "significant improvement in capability" as "roughly a doubling in capability", or, given that resources are similar to capability in such cases, "roughly a doubling in resources".

Yes, such things exist. There are also other kinds of things.

I'm not convinced; but it's interesting. A lot hinges on the next-to-last paragraph, which is dubious and handwavy.

One weakness is that, when you say chimpanzees looked like well-developed creatures but really had a huge unknown gap in their capabilities, which we filled in, I don't read that as evidence that we are now fully balanced creatures with no gaps. I wonder where the next gap is. (EDIT: See jimrandomh's excellent comment below.)

What if an AI invents quantum computing? Or, I don't know, is rational?

Another weakness is the assumption that the various scales you measure things on, like Go ratings, are "linear". Go ratings, at least, are not. A decrease of 1 kyu is supposed to mean an increase in the likelihood ratio of winning by a factor of 3. Also, by your logic, it should take twice as long to go from 29 kyu to 28 as from 30 to 29; no one should ever reach 10 kyu.

Over the last five or six million years, our lineage upgraded computing power (brain size) by about a factor of three, and upgraded firmware to an extent that is unknown but was surely more like a percentage than an order of magnitude. The result was not a corresponding imp

... (read more)
-2rwallace13y
Mind you, I don't think we, in isolation, are close to fully balanced; we are still quite deficient in areas like accurate data storage and arithmetic. Fortunately we have computers to fill those gaps for us. Your question is then essentially, are there big gaps in the combined system of humans plus computers -- in other words, are there big opportunities we're overlooking, important application domains within the reach of present-day technology, not yet exploited? I think the answer is no; of course, outside pure mathematics it's not possible to prove a negative, only to keep accumulating absence of evidence to the point where it becomes evidence of absence. But I would certainly be interested in any ideas for such gaps. No indeed! I should clarify that exponential inputs can certainly produce exponential outputs -- an example I gave is chip design, where the outputs feed back into the inputs, getting us fairly smooth exponential growth. Put another way, the curve of capability is a straight line on a log-log graph; I'm merely arguing against the existence of steeper growth than that. QFT. Necessary conditions are not, unfortunately, sufficient conditions :P

You seem to have completely glossed over the idea of recursive self-improvement.

4rwallace13y
Not at all -- we already have recursive self-improvement, each new generation of computers is taking on more of the work of designing the next generation. But -- as an observation of empirical fact, not just a conjecture -- this does not produce foom, it produces steady exponential growth; as I explained, this is because recursive self-improvement is still subject to the curve of capability.

Whenever you have a process that requires multiple steps to complete, you can't go any faster than the slowest step. Unless Intel's R&D department actually does nothing but press the "make a new CPU design" button every few months, I think the limiting factor is still the step that involves unimproved human brains.

Elsewhere in this thread you talk about other bottlenecks, but as far as I know FOOM was never meant to imply unbounded speed of progress, only fast enough that humans have no hope of keeping up.
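To put a number on the slowest-step point above (my gloss, not part of the original comment): if the unimproved human step takes a fraction $f$ of each design cycle and every other step is sped up by a factor $s$, the overall speedup is bounded by Amdahl's law,

$$\text{speedup} = \frac{1}{f + (1 - f)/s} \;\le\; \frac{1}{f},$$

so no amount of acceleration of the automated steps can shrink the cycle time below the share taken by the human step.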

7magfrump13y
I saw a discussion somewhere with a link to a CPU design researcher discussing the level of innovation required to press that "make a new CPU design" button, and how much of the time is spent waiting for compiler results and bug testing new designs. I'm getting confused trying to remember it off the top of my head, but here are some links to what I could dig up with a quick search. Anyway I haven't reread the whole discussion but the weak inside view seems to be that speeding up humans would not be quite as big of a gap as the outside view predicts--of course with a half dozen caveats that mean a FOOM could still happen. soreff's comment below also seems related.
1saturn13y
As long as nontrivial human reasoning is a necessary part of the process, even if it's a small part, the process as a whole will stay at least somewhat "anchored" to a human-tractable time scale. Progress can't speed up past human comprehension as long as human comprehension is required for progress. If the human bottleneck goes away there's no guarantee that other bottlenecks will always conveniently appear in the right places to have the same effect.
1magfrump13y
I agree, but that is a factual and falsifiable claim that can be tested (albeit only in very weak ways) by looking at where current research is most bottlenecked by human comprehension.
4JGWeissman13y
No. We have only observed weak recursive self improvement, where the bottleneck is outside the part being improved.
5soreff13y
I work in CAD, and I can assure you that the users of my department's code are very interested in performance improvements. The CAD software run time is a major bottleneck in designing the next generation of computers.
8wedrifid13y
Might I suggest that the greatest improvements in CAD performance will come from R&D into CAD techniques and software? At least, that seems to be the case basically everywhere else. Not to belittle computing power, but the software to use the raw power tends to be far more critical. If this weren't the case, and given that much of the CAD task can be parallelized, recursive self-improvement in computer chip design would result in an exponentially expanding number of exponentially improving supercomputer clusters. On the scale of "We're using 30% of the chips we create to make new supercomputers instead of selling them. We'll keep doing that until our processors are sufficiently ahead of ASUS that we can reinvest less of our production capacity and still be improving faster than all competitors". I think folk like yourself and the researchers into CAD are the greater bottleneck here. That's a compliment to the importance of your work. I think. Or maybe an insult to your human frailty. I never can tell. :)
2Douglas_Knight13y
Jed Harris says similar things in the comments here, but this seems to make predictions that don't seem borne out to me (cf wedrifid). If serial runtime is a recursive bottleneck, then the end of exponentially increasing clock speed should cause problems for the chip design process and then also break exponential transistor density. But if these processes can be parallelized, then they should have been parallelized long ago. A way to reconcile some of these claims is that serial clock speed has only recently become a bottleneck, as a result of the clock speed plateau.
5wedrifid13y
It is interesting to note that rw first expends effort to argue that something could kind of be considered recursive improvement, so as to go on and show how weakly recursive it is. That's not even 'reference class tennis'... it's reference class Aikido!
-1rwallace13y
I'll take that as a compliment :-) but to clarify, I'm not saying it's weakly recursive. I'm saying it's quite strongly recursive -- and noting that recursion isn't magic fairy dust; the curve of capability limits the rate of progress even when you do have recursive self-improvement.
3wedrifid13y
I suppose 'quite' is a relative term. It's improvement with a bottleneck that resides firmly in the human brain. Of course it does. Which is why it matters so much how steep the curve of recursion is compared to the curve of capability. It is trivial maths.
-2rwallace13y
But there will always be a bottleneck outside the part being improved, if nothing else because the ultimate source of information is feedback from the real world. (Well, that might stop being true if it turns out to be possible to Sublime into hyperspace or something like that. But it will remain true as long as we are talking about entities that exist in the physical universe.)
3JGWeissman13y
An AGI could FOOM and surpass us before it runs into that limit. It's not like we are making observations anywhere near as fast as physically possible, nor are we drawing all the conclusions we ideally could from the data we do observe.
0rwallace13y
"That limit"? The mathematically ultimate limit is Solomonoff induction on an infinitely powerful computer, but that's of no physical relevance. I'm talking about the observed bound on rates of progress, including rates of successive removal of bottlenecks. To be sure, there may -- hopefully will! -- someday exist entities capable of making much better use of data than we can today; but there is no reason to believe the process of getting to that stage will be in any way discontinuous, and plenty of reason to believe it will not.
0JGWeissman13y
Are you being deliberately obtuse? "That limit" refers to the thing you brought up: the rate at which observations can be made.
-2rwallace13y
Yes, but you were the one who started talking about it as something you can "run into", together with terms like "as fast as physically possible" and "ideally could from the data" - that last term in particular has previously been used in conversations like this to refer to Solomonoff induction on an infinitely powerful computer. My point is that at any given moment an awful lot of things will be bottlenecks, including real-world data. The curve of capability is already observed in cases where you are free to optimize whatever variable is the lowest-hanging fruit at the moment. In other words, you are already "into" the current data limit; if you could get better performance by using less data and substituting e.g. more computation, you would already be doing it. As time goes by, the amount of data you need for a given degree of performance will drop as you obtain more computing power, better algorithms etc. (But of course, better performance still will be obtainable by using more data.) However, 1. the amount of data needed won't drop below some lower bound, and 2. more to the point, the rate at which the amount needed drops is itself bounded by the curve of capability.

Even if AGI had only a very small comparative advantage (in skill and ability to recursively self-improve) over humans supported by the then-best available computer technology, and thus over their ability to self-improve, it would eventually, probably even then quite fast, overpower humans totally and utterly. And it seems fairly likely that eventually you could build a fully artificial agent that was strictly superior (or could recursively self-update to become one) to humans. This intuition is fairly likely given that humans are not ultimately designed to be the singularity-su... (read more)

3Vladimir_Nesov13y
Yup. Intelligence explosion is pretty much irrelevant at this point (even if in fact real). Given the moral weight of the consequences, one doesn't need impending doom to argue for the high marginal worth of pursuing Friendly AI. Abstract arguments get stronger by discarding irrelevant detail, even correct detail. (It's unclear which is more difficult to argue, intelligence explosion or the expected utility of starting to work on possibly long-term Friendly AI right now. But using both abstract arguments lets you convince even if only one of them gets accepted.)

There is no law of nature that requires consequences to be commensurate with their causes. One can build a doom machine that is activated by pressing a single button. A mouse exerts gravitational attraction on all of the galaxies in the future light cone.

Counterexample-thinking-of time.

What has capability that's super-logarithmic?

  • Money. Interest and return on investment are linear at my current point in money-space.
  • Vacuum emptiness. Each tiny bit better vacuum we could make brought a large increase in usefulness. Now we've hit diminishing returns, but there was definitely a historical foom in the usefulness of vacuum.
  • Physical understanding. One thing just leads to another thing, which leads to another thing... Multiple definitions of capability here, but what I'm thinking of is the fact that there are

... (read more)
8David_Gerard13y
Most growth curves are sigmoid. They start off looking like a FOOM and finish with diminishing returns.
1Manfred13y
Most growth curves of things that grow because they are self-replicating are sigmoid. "Capability functions" get to do whatever the heck they want, limited only by the whims of the humans (mostly me) I'm basing my estimates on.
-1rabidchicken13y
If it is a finite universe, there will never be a "foom" (I love our technical lingo)
0JoshuaZ13y
What does the size of the universe have to do with this?
0rabidchicken13y
David's post implied that we should only consider something to be a FOOM if it follows exponential growth and never sees diminishing returns. In that case, we cannot have a true foom if energy and matter are finite. No matter how intelligent a computer gets, it will eventually slow down and stop increasing its capacity because energy and matter are both limiting factors. I don't recall seeing a definition of foom anywhere on this site, but it seems there is some inconsistency in how people use the word.
0JoshuaZ13y
Hmm, you must be reading David's remarks differently. David's observation about sigmoids seemed to be more of an observation that in practice growth curves do eventually slow down and that they generally slow down well before the most optimistic and naive estimates would say so.

Looking away from programming to the task of writing an essay or a short story, a textbook or a novel, the rule holds true: each significant increase in capability requires a doubling, not a mere linear addition.

If this is such a general law, should it not apply outside human endeavor?

You're reaching. There is no such general law. There is just the observation that whatever is at the top of the competition in any area is probably subject to some diminishing returns from that local optimum. This is an interesting generalization and could provide insight... (read more)

-1rwallace13y
Certainly if someone can come up with a potent approach towards AI that we haven't yet conceived of, I will be very interested in looking at it!

in a nutshell, most conscious observers should expect to live in a universe where it happens exactly once - but that would require a digression into philosophy and anthropic reasoning, so it really belongs in another post; let me know if there's interest, and I'll have a go at writing that post.

This, to me, could be a much more compelling argument, if presented well. So there's definitely a lot of interest from me.

This one is a little bit like my "The Intelligence Explosion Is Happening Now" essay.

Surely my essay's perspective is a more realistic one, though.

6Jonii13y
Your essay was the first thing that came to my mind after reading this post. I think the core argument is the same, and valid. It does seem to me that people vastly underestimate what computer-supported humans will be capable of doing by the time AGI becomes more feasible, and thus overestimate how superior AGI would actually be, even after continuously updating its own software and hardware.
2timtyler13y
That sounds like a bit of a different subject - one that I didn't really discuss at all. Human Go skill goes from 30 kyu to 1 kyu and then from 1 dan up to 9 dan. God is supposed to be 11 dan. Not that much smarter than the smartest human. How common is that sort of thing?
0Jonii13y
Dunno how to answer, so I'll just stay quiet. I communicated badly there, and I don't see a fast way out of here.
0timtyler13y
Your comment seems absolutely fine to me. Thanks for thinking of my essay - and thanks for the feedback!
3rwallace13y
I see your excellent essay as being a complement to my post, and I should really have remembered to refer to it -- upvoted, thanks for the reminder!
2Thomas13y
You are correct there, so I have no choice but to upvote your post.

Essentially, this is an argument that a FOOM would be a black swan event in the history of optimization power. That provides some evidence against overconfident Singularity forecasts, but doesn't give us enough reason to dismiss FOOM as an existential risk.

The ELO rating scheme is calculated on a logistic curve - and so includes an exponent - see details here. It gets harder to climb up the ratings the higher you get.

It's the same with traditional go kyu/dan ratings - 9 kyu to 8 kyu is easy, 8 dan to 9 dan is very difficult.
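For reference, the standard ELO model spelling out the exponent referred to here (an editorial gloss, not quoted from the comment): the expected score of player A against player B is

$$E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}},$$

so a fixed rating gap corresponds to fixed odds of winning, and each additional 400 rating points multiplies those odds by a factor of 10. A constant rating gain per doubling therefore corresponds to a constant multiplicative gain in win odds against a fixed opponent.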

2Perplexed13y
Actually, the argument could be turned around. A 5 kyu player can give a 9 kyu player a 4 stone handicap and still have a good chance of winning. A 9 dan, offering a 4 stone handicap to a 5 dan, will be crushed. By this metric, the distance between levels becomes smaller at the higher levels of skill. It is unclear whether odds of winning, log odds of winning, number of handicap stones required to equalize odds of winning, or number of komi points required to equalize odds of winning ought to be the metric of relative skill.
0timtyler13y
Probably not by very much. One of the main motivations behind the grading system is to allow people of different grades to easily calculate the handicap needed to produce a fair game - e.g. see here. You may be right that the system is flawed - but I don't think it is hugely flawed.
0skepsci12y
The difference is between amateur and professional ratings. Amateur dan ratings, just like kyu ratings, are designed so that a difference of n ranks corresponds to the suitability of an n-stone handicap, but pro dan ratings are more bunched together. See Wikipedia:Go pro.
0wedrifid13y
Does this suggest anything except that the scale is mostly useless at the top end?
2timtyler13y
The idea in the post was: My observation is that ELO ratings are calculated on a logistic curve - and so contain a "hidden" exponent - so the "constant rating improvement" should be taken with a pinch of salt.
[anonymous]12y50

This is a very interesting proposal, but as a programmer and with my knowledge of statistical analysis I have to disagree:

The increase in computing power that you regrettably cannot observe at user level is a fairly minor quirk of the development; pray tell, do the user interfaces of your Amiga system's OS and your Dell system's OS look alike? The reason why modern computers don't feel faster is that the programs we run on them are wasteful and gimmicky. However, in terms of raw mathematics, we have 3D games with millions and millions of polygons, we ... (read more)

This post is missing the part where the observations made support the conclusion to any significant degree.

Computer chips being used to assist practically identical human brains in tweaking computer chips bears at best a superficial similarity to a potential foom.

1rwallace13y
Current computer-aided design is already far from mere tweaking, and it gets further with each generation.
2wedrifid13y
It would become a significantly relevant 'fooming' anecdote when the new generation of chips were themselves inventing new computer-aided design software and techniques.
0rwallace13y
When that happens, I'll be pointing to that as an example of the curve of capability, and FAI believers will be saying it will become significantly relevant when the chips are themselves inventing new ways to invent new computer-aided design techniques. Etc.
-1wedrifid13y
No, shortly after that happens all the FAI believers will be dead, with the rest of humanity. ;) We'll have about enough time to be the "O(h shit!)" in FOOM.
-4rwallace13y
So is there any way at all to falsify your theory?
1wedrifid13y
Are you serious? You presented your own hypothesis for the outcome of the experiment before I gave mine! They are both obviously falsifiable. I think, in no uncertain terms, that this rhetorical question was an extremely poor one.
-3rwallace13y
Clearly we aren't going to agree on whether my question was a good one or a poor one, so agreeing to differ on that, what exactly would falsify your theory?
1wedrifid13y
The grandparent answered that question quite clearly. You make a prediction here of what would happen if this happened. I reply that that would actually happen instead. You falsify each of these theories by making this happen and observing the results. I note that you are trying to play the 'unfalsifiable card' in two different places here and I am treating them differently because you question different predictions. I note this to avoid confusion if you meant them to be a single challenge to the overall position. So see other branch if you mean only to say "FOOM is unfalsifiable".
-1rwallace13y
Ah, then I'm asking whether "in situation X, the world will end" is your theory's only prediction - since that's the same question I've ended up asking in the other branch, let's pursue it in the other branch.

The answer is that each doubling of computing power adds roughly the same number of ELO rating points.

Well, ELO rating is a logarithmic scale, after all.

A very familiar class of discontinuities in engineering refers to functionality being successfully implemented for the first time. This is a moment where a design finally comes together, all its components adjusted to fit with each other, the show-stopping bugs removed from the system. And then it just works, where it didn't before.

-1rwallace13y
And yet, counterintuitively, these discontinuities don't lead to discontinuities in capability, because it is invariably the case that the first prototype is not dramatically more useful than what came before -- often less so -- and requires many rounds of debugging and improvement before the new technology can fulfill its potential.
2Vladimir_Nesov13y
Arguably, you can call the whole process of development "planning", and look only at the final product. So, your argument seems to depend on an arbitrarily chosen line between cognitive work and effects that are to be judged for "discontinuity".

Following the logic of the article: our current mind-designs, both humans and existing AIs, are lopsided in that they can't effectively modify themselves. There is theory of mind, but as far as maps go, this one is plenty far from the territory. Once this obstacle is dealt with, there might be another FOOM.

I got some questions regarding self-improvement:

  • Is it possible for an intelligence to simulate itself?
  • Is self-modification practicable without running simulations to prove goal stability?
  • Does creating an improved second version of oneself demand proof of friendliness?

I'm asking because I sense that those questions might be important in regard to the argument in the original post that each significant increase in capability requires a doubling of resources.

2Perplexed13y
The naive and direct answer is "Yes, but only at reduced speed and perhaps with the use of external simulated memory". A smarter answer is "Of course it can, if it can afford to pay for the necessary computing resources." I would say that self-modification is incredibly stupid without proof of much more fundamental things than goal-stability. And you really can't prove anything by simulation, since you cannot exercise a simulation with all possible test data, and you want a proof that good things happen for all data. So you want to prove the correctness of the self-modification symbolically. That seems to be a wise demand to make of any powerful AI. Could you explain why you sense that? I will comment that, in the course of this thread, I have become more and more puzzled at the usual assumption here that we are concerned with one big AI with unlimited power which self-modifies, rather than being concerned with an entire ecosystem of AIs, no single one of which is overwhelmingly powerful, which don't self-modify but rather build machines of the next generation, and which are entirely capable of proving the friendliness of next-generation machines before those new machines are allowed to have any significant power.
6XiXiDu13y
I was curious whether there might exist something analogous to the halting problem regarding the provability of goal-stability and general friendliness. If every AGI has to prove the friendliness of its successor, this might demand considerable effort and therefore resources, as it would require great care and extensive simulations or, as your answer suggests, proving the correctness of the self-modification symbolically. In both cases the AGI would have to acquire new resources slowly, as it couldn't just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources; therefore the AGI could not profit from its ability to self-improve when it comes to the acquisition of the very resources needed to self-improve in the first place.
8Perplexed13y
There is a lot of confusion out there as to whether the logic of the halting problem means that the idea of proving program correctness is somehow seriously limited. It does not mean that. All it means is that there can be programs so badly written that it is impossible to say what they do. However, if programs are written for some purpose, and their creator can say why he expects the programs to do what he meant them to do, then that understanding of the programs can be turned into a checkable proof of program correctness. It may be the case that it is impossible to decide whether an arbitrary program even halts. But we are not interested in arbitrary programs. We are interested in well-written programs accompanied by a proof of their correctness. Checking such a proof is not only a feasible task, it is an almost trivial task. To say this again, it may well be the case that there are programs for which it is impossible to find a proof that they work. But no one sane would run such a program with any particular expectations regarding its behavior. However, if a program was constructed for some purpose, and the constructor can give reasons for believing that the program does what it is supposed to do, then a machine can check whether those reasons are sufficient. It is a reasonable conjecture that for every correct program there exists a provably correct program which does the same thing and requires only marginally more resources to do it. Huh? If that is meant to be an argument against a singleton AI being able to FOOM without using external resources, then you may be right. But why would it not have access to external resources? And why limit consideration to singleton AIs?
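A loose analogy for the checking-vs-finding asymmetry described above (an editorial illustration, not from the comment; the helper names are made up): verifying a supplied certificate is typically trivial, while discovering one can be expensive.

    # Loose analogy (illustrative only): checking a claimed factorization is a
    # single multiplication, while finding one may require a long search.
    # This mirrors "checking a proof of correctness is almost trivial" versus
    # "finding the proof (or the program) is the hard part".

    def check_factorization(n, factors):
        """Cheap verification: do the claimed factors multiply back to n?"""
        product = 1
        for f in factors:
            if f < 2:
                return False
            product *= f
        return product == n

    def find_factorization(n):
        """Expensive discovery: naive trial division."""
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(check_factorization(91, [7, 13]))  # True, verified in one multiplication
    print(find_factorization(91))            # [7, 13], found by search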
2wnoise13y
It's true that the result is "there exist programs such that", and it says nothing about the relative sizes of those that can be proved to terminate vs. those that can't within the subset we care about and are likely to write. Sometimes that purpose is to test mathematical conjectures. In that case, the straightforward and clear way to write the program actually is the one used, and it often apparently does fall into this subclass. (And then there are all the programs that aren't designed to normally terminate, and then we should be thinking about codata and coinduction.)
3[anonymous]13y
Upvoted because I think this best captures what the halting problem actually means. The programs for which we really want to know if they halt aren't recursive functions that might be missing a base case or something. Rather, they're like the program that enumerates the non-trivial zeroes of the Riemann zeta function until it finds one with real part not equal to 1/2. If we could test whether such programs halted, we would be able to solve any mathematical existence problem ever that we could appropriately describe to a computer.
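To make that concrete, here is a minimal sketch of the kind of program in question (an editorial illustration, using Goldbach's conjecture rather than the Riemann Hypothesis so it needs no special libraries): it halts if and only if the conjecture is false, so a general halting decider would settle the conjecture.

    # Illustrative sketch: a program whose halting behaviour encodes an open
    # conjecture. It halts iff some even number > 2 is not a sum of two primes,
    # i.e. iff Goldbach's conjecture is false.

    def is_prime(k):
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            print("Goldbach counterexample:", n)
            break  # reached only if the conjecture fails
        n += 2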
3Perplexed13y
What you mean 'we,' white man? People researching program verification for purposes of improving software engineering really are interested in the kinds of mistakes programmers make, like "recursive functions that might be missing a base case or something". Or like unbounded iteration for which we can prove termination, but really ought to double-check that proof. Unbounded iteration for which we cannot (currently) prove termination may be of interest to mathematicians, but even a mathematician isn't stupid enough to attempt to resolve the Zeta function conjecture by mapping the problem to a question about halting computer programs. That is a step in the wrong direction. And if we build an AGI that attempts to resolve the conjecture by actually enumerating the roots in order, then I think that we have an AI that is either no longer rational or has too much time on its hands.
3[anonymous]13y
I suppose I wasn't precise enough. Actual researchers in that area are, rightly, interested in more-or-less reasonable halting questions. But these don't contradict our intuition about what halting problems are. In fact, I would guess that a large class of reasonable programming mistakes can be checked algorithmically. But as for the consequences of there being, hypothetically, a solution to the general halting problem... the really significant result would be the existence of a general "problem-solving algorithm". Sure, Turing's undecidability theorem proves that the halting problem is undecidable... but this is my best attempt at capturing the intuition of why it's not even a little bit close to being decidable. Of course, even in a world where we had a halting solver, we probably wouldn't want to use it to prove the Riemann Hypothesis. But your comment actually reminds me of a clever little problem I once saw. It went as follows: (the connection to your comment is that even if P=NP, we wouldn't actually use the solution to the above problem in practical applications. We'd try our best to find something better)
2Perplexed13y
Which raises a question I have never considered ... Is there something similar to the halting-problem theorem for this kind of computation? That is, can we classify programs as either productive or non-productive, and then prove the non-existence of a program that can perform that classification?
0wnoise13y
That's a great question. I haven't found anything on a brief search, but it seems like we can fairly readily embed a normal program inside a coinductive-style one and have it be productive after the normal program terminates.
2JGWeissman13y
I would like to see all introductions to the Halting Problem explain this point. Unfortunately, it seems that "computer scientists" have only limited interest in real physical computers and how people program them.
3fiddlemath13y
I'm working on my Ph.D. in program verification. Every problem we're trying to solve is as hard as the halting problem, and so we make the assumption, essentially, that we're operating over real programs: programs that humans are likely to write, and actually want to run. It's the only way we can get any purchase on the problem. Trouble is, the field doesn't have any recognizable standard for what makes a program "human-writable", so we don't talk much about that assumption. We should really get a formal model, so we have some basis for expecting that a particular formal method will work well before we implement it... but that would be harder to publish, so no one in academia is likely to do it.
2andreas13y
Similarly, inference (conditioning) is incomputable in general, even if your prior is computable. However, if you assume that observations are corrupted by independent, absolutely continuous noise, conditioning becomes computable.
0gwern13y
Offhand, I don't see any good way of specifying it in general, either (even the weirdest program might be written as part of some sort of arcane security exploit). Why don't you guys limit yourselves to some empirical subset of programs that humans have written, like 'Debian 5.0'?
1XiXiDu13y
Interesting! I have read that many mathematical proofs today require computer analysis. If it is such a trivial task to check the correctness of code that gives rise to a level of intelligence above ours, then my questions were not relevant to the argument of the original post. I mistakenly expected that no intelligence is able to easily and conclusively verify the workings of another intelligence a level above its own without painstakingly acquiring resources, devising the necessary tools and building the required infrastructure. I expected that what applies to humans creating superhuman AGI would also hold for an AGI creating its successor. Humans first had to invent science, bumble through the industrial revolution and develop computers to be able to prove modern mathematical theorems. So I thought a superhuman AGI would have to invent meta-science, advanced nanotechnology, and figure out how to create an intelligence that could solve problems it couldn't solve itself.
3Perplexed13y
Careful! I said checking the validity of a proof of code correctness, not checking the correctness of code. The two are very different in theory, but not (I conjecture) very different in practice, because code is almost never correct unless the author of the code has at least a sketch proof that it does what he intended.
3XiXiDu13y
I see; it wasn't my intention to misquote you, I was simply not aware of a significant distinction. By the way, does the checking of the validity of a proof of code correctness have to be validated or proven as well, or is there some agreed limit to the depth of verification sufficient to be sure that nothing unexpected will happen if you run the code?
4fiddlemath13y
The usual approach, taken in tools like Coq and HOL, is to reduce the proof checker to a small component of the overall proof assistant, with the hope that these small pieces can be reasonably understood by humans, and proved correct without machine assistance. This keeps the human correctness-checking burden low, while also keeping the piece that must be verified in a domain that's reasonable for humans to work with, as opposed to, say, writing meta-meta-meta proofs. The consequence of this approach is that, since the underlying logic is very simple, the resulting proof language is very low-level. Human users give higher-level specifications, and the bulk of the proof assistant compiles those into low-level proofs.
1Perplexed13y
There are a couple of branches worth mentioning in the "meta-proof". One is the proof of correctness of the proof-checking software. I believe that fiddlemath is addressing this problem. The "specification" of the proof-checking software is essentially a presentation of an "inference system". The second is the proof of the soundness of the inference system with respect to the programming language and specification language. In principle, this is a task which is done once - when the languages are defined. The difficulty of this task is (AFAIK) roughly like that of a proof of cut-elimination - that is O(n^2) where n is the number of productions in the language grammars.

there's a significant difference between the capability of a program you can write in one year versus two years

The program that the AI is writing is itself, so the second half of those two years takes less than one year - as determined by the factor of "significant difference". And if there's a significant difference from 1 to 2, there ought to be a significant difference from 2 to 4 as well, no? But the time taken to get from 2 to 4 is not two years; it is 2 years divided by the square of whatever integer you care to represent "significa... (read more)

1rwallace13y
Compound interest is a fine analogy: it delivers a smooth exponential growth curve, and we have seen technological progress do likewise. What I am arguing against is the "AI foom" claim that you can get faster growth than this, e.g. each successive doubling taking half the time. The reason this doesn't work is that each successive doubling is exponentially harder. What if the output feeds back into the input, so the system as a whole is developing itself? Then you get the curve of capability, which is a straight line on a log-log graph, which again manifests itself as smooth exponential growth.
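To spell out the contrast (an editorial gloss on the two scenarios): if every doubling takes a constant time $T$, capability after time $t$ is $2^{t/T}$, ordinary exponential growth. If instead the $n$-th doubling takes time $T/2^n$, the total time for all doublings is

$$\sum_{n \ge 0} \frac{T}{2^n} = 2T,$$

so capability diverges in finite time; that finite-time blow-up is what the foom claim amounts to, and what the curve-of-capability argument denies.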
0PeterS13y
Suppose A designs B, which then designs C. Why does it follow that C is more capable than B (logically, disregarding any hardware advances made between B and C)? Alternatively, why couldn't A have designed C initially?
3khafra13y
It does not necessarily follow; but the FOOM contention is that once A can design a B more capable than itself, B's increased capability will include the capability to design C, which would have been impossible for A. C can then design D, which would have been impossible for B and even more impossible for A. Currently, each round of technology aids in developing the next, but the feedback isn't quite this strong.
0shokwave13y
As per khafra's post, though I would add that it looks likely: after all, that we as humans are capable of any kind of AI at all is proof that designing intelligent agents is the work of intelligent agents. It would be surprising if there was some hard cap on how intelligent an agent you can make - like if it topped out at exactly your level or below.

Could the current computing power we have be obtained from enough Commodore 64 machines?

Maybe, but we would need a power supply many orders of magnitude bigger for that. And this is just the beginning. The cell phones based on those processors could weigh a ton each.

In other words, an impossibility for a whole bunch of reasons.

We have already had a foom stretched over the last 30 years. Not the first and not the last one, if we are going to proceed as planned.

What is the Relationship between Language, Analogy, and Cognition?

What makes us so smart as a species, and what makes children such rapid learners? We argue that the answer to both questions lies in a mutual bootstrapping system comprised of (1) our exceptional capacity for relational cognition and (2) symbolic systems that augment this capacity. The ability to carry out structure-mapping processes of alignment and inference is inherent in human cognition. It is arguably the key inherent difference between humans and other great apes. But an equally impo

... (read more)

Side issue: should the military effectiveness of bombs be measured by the death toll?

I tentatively suggest that atomic and nuclear bombs are of a different kind than chemical explosives, as shown by the former changing the world politically.

I'm not sure exactly why-- it may have been the shock of novelty.

Atomic and nuclear bombs combine explosion, fire, and poison, but I see no reason to think there would have been the same sort of widespread revulsion against chemical explosives; the world outlawed poison gas in WWI and just kept on going with chemical explosives.

I tentatively suggest that atomic and nuclear bombs are of a different kind than chemical explosives, as shown by the former changing the world politically.

I'm not sure exactly why-- it may have been the shock of novelty.

Chemical explosives changed the world politically, just longer ago. Particularly when they put the chemicals in a confined area and put lead pellets on top...

3wedrifid13y
I've never really got that. If you are going to kill people, kill them well. War isn't nice and people get injured horribly. That's kind of the point.
6CronoDAS13y
I think that when both sides use poison gas in warfare, the net effect is that everyone's soldiers end up having to fight while wearing rubber suits, which offer an effective defense against gas but are damn inconvenient to fight in. So it just ends up making things worse for everyone. Furthermore, being the first to use poison gas, before your enemy starts to defend against it and retaliate in kind, doesn't really provide that big of an advantage. In the end, I guess that the reason gas was successfully banned after WWI was that everyone involved agreed that it was more trouble than it was worth. I suppose that, even in warfare, not everything is zero-sum.
1Jordan13y
Seems like a classic iterated prisoner's dilemma.

The neglected missing piece is understanding of intelligence. (Not understanding how to solve a Rubik's cube, but understanding the generalized process that, upon seeing a Rubik's cube for the first time and hearing the goal of the puzzle, figures out how to solve it.)

1rwallace13y
The point you are missing is that understanding intelligence is not a binary thing. It is a task that will be subject to the curve of capability like any other complex task.
3JGWeissman13y
What is binary is whether or not you understand intelligence enough to implement it on a computer that then will not need to rely on any much slower human brain for any aspect of cognition.
7rwallace13y
A useful analogy here is self-replicating factories. At one time they were proposed as a binary thing: just make a single self-replicating factory that does not need to rely on humans for any aspect of manufacturing, and you have an unlimited supply of manufactured goods thereafter. It was discovered that in reality it's about as far from binary as you can get. While in principle such a thing must be possible, in practice it's so far off as to be entirely irrelevant to today's plans; what is real is a continuous curve of increasing automation. By the time our distant descendants are in a position to automate the last tiny fraction, it may not even matter anymore. Regardless, it doesn't matter. Computers are bound by the curve of capability just as much as humans are. There is no evidence that they're going to possess any special sauce that will allow them past it, and plenty of evidence that they aren't. My theory is falsifiable. Is yours? If so, what evidence would you agree would falsify it, if said evidence were obtained?

Do you have a citation for the value of the Go performance improvement per hardware doubling?

3gwern12y
There is now reportedly a Go program (available as a bot on KGS), which has hit 5-dan as of 2012.
2CarlShulman12y
This computer Go poll, conducted in 1997 to estimate arrival times for shodan and world champion level software play, is interesting: the programmers were a bit too optimistic, but the actual time required for shodan level was very close to the median estimate.
0gwern12y
That doesn't bode too well for reaching world champion level - if I toss the list into tr -s '[:space:]' | cut -d ' ' -f 3 | sort -g | head -14, the median estimate is 2050! Personally, I expect it by 2030.
3rwallace13y
Yes - look at the last table.

The solution to the paradox is that a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided. [...]

Can such a thing happen again? In particular, is it possible for AI to go foom the way humanity did?

If such lopsidedness were to repeat itself... well even then, the answer is probably no.

Chimpanzee brains wer... (read more)

Snagged from a thread that's gone under a fold:

The beginning of this thread was Eliezer making a comment to the effect that symbolic logic is something computers can do, so it must not be what makes humans more special than chimps.

One thing people do that neither chimps nor computers have managed is invent symbolic logic.[1]

Maybe it's in the sequences somewhere, but what does it take to notice gaps in one's models and oddities that might be systematizable?

[1] If I'm going to avoid P=0, then I'll say it's slightly more likely that chimps have done significant intellectual invention than computers.

7Eliezer Yudkowsky13y
The quote is wrong.
0NancyLebovitz13y
My apologies-- I should have caught that-- the quote didn't seem to be an accurate match for what you said, but I was having too much fun bouncing off the misquote to track that aspect.
2Vladimir_Nesov13y
Link to the replied-to comment.
0Vladimir_Nesov13y
Also lipstick. Don't forget lipstick. (Your comment isn't very clear, so I'm not sure what you intended to say by the statement I cited.)
0NancyLebovitz13y
Thanks for posting the link. My point was that some of the most interesting things people do aren't obviously algorithmic. It's impressive that programs beat chess grandmasters. It would be more impressive (and more evidential that self-optimization is possible) if a computer could invent a popular game.
0Vladimir_Nesov13y
What is this statement intended as an argument for? (What do you mean by "algorithmic"? It's a human category, just like "interesting".)