The arguments Eliezer describes are made, and his reactions are fair. But the actual research community "grew out" of most of this stuff a while back. CYC and the "common sense" efforts were always a sideshow (in terms of research money and staff, not to mention results). Neural networks were a metonym for statistical learning for a while; then serious researchers figured out they needed to address statistical learning explicitly. Etc.
Admittedly there's always excessive enthusiasm for the current hot thing. A few years ago I...
Regarding serial vs. parallel:
The effect on progress is indirect and therefore hard to assess with confidence.
We have gradually learned how to get nearly linear speedups from large numbers of cores. We can now manage linear speedups over dozens of cores for fairly structured computations, and linear speedups over hundreds of cores are possible in many cases. This is well beyond the near-future number of cores per chip. For the purposes of this analysis I think we can assume that Intel can get linear speedups from increasing processors per chip, say...
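The scaling intuition above can be sketched with Amdahl's law. The serial fractions below are illustrative assumptions, not Intel data; the point is just that "fairly structured" computations (tiny serial fraction) stay near-linear well past dozens of cores.

```python
def amdahl_speedup(cores, serial_fraction):
    """Speedup on `cores` cores when `serial_fraction` of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Assume a well-structured computation with 1% serial work:
for cores in (12, 48, 192):
    print(cores, round(amdahl_speedup(cores, 0.01), 1))
# 12 cores  -> ~10.8x (near-linear)
# 48 cores  -> ~32.7x (still strong)
# 192 cores -> ~66.0x (serial fraction starts to bite)
```

The sketch also shows why "linear speedup over hundreds of cores" requires driving the serial fraction very close to zero.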
I'll try to estimate as requested, but substituting fixed computing power for "riding the curve" (as Intel does now) is a bit of an apples to fruit cocktail comparison, so I'm not sure how useful it is. A more direct comparison would be with always having a computing infrastructure from 10 years in the future or past.
Even with this amendment, the (necessary) changes to design, test, and debugging processes make this hard to answer...
I'll think out loud a bit.
Here's the first quick guess I can make that I'm moderately sure of: The length of time ...
I did work at Intel, and two years of that was in the process engineering area (running the AI lab, perhaps ironically).
The short answer is that more computing power leads to more rapid progress. Probably the relationship is close to linear, and the multiplier is not small.
Two examples:
On the one hand, Eliezer is right in terms of historical and technical specifics.
On the other hand, neural networks are for many a metonym for continuous computations vs. the discrete computations of logic. This was my reaction when the two PDP volumes came out in the '80s. It wasn't "Here's the Way." It was "Here's an example of how to do things differently that will certainly work better."
Note also that the GOFAI folks were not trying to use just one point in logic space. In the 70s we already knew that monotonic logic was not goo...
The "500 bits" only works if you take a hidden variable or Bohmian position on quantum mechanics. If (as the current consensus would say) non-linear dynamics can amplify quantum noise then enormous amounts of new information are being "produced" locally everywhere all the time. The current state of the universe incorporates much or all of that information. (Someone who understands the debates about black holes and the holographic principle should chime in with more precise analysis.)
I couldn't follow the whole argument so I'm not sure how this affects it, but given that Eliezer keeps referring to this claim I guess it is important.
Poke's comment is interesting and I agree with his / her discussion of cultural evolution. But it is also possible to turn this point around to indicate a possible sweet spot in the fitness landscape that we are probably approaching. However, I think the character of this sweet spot indicates scant likelihood of a very rapidly self-bootstrapping AGI.
Probably the most important and distinctive aspect of humans is our ability and desire to coordinate (express ourselves to others, imitate others, work with others, etc.). That ability and desire...
I largely agree with Robin's point that smaller incremental steps are necessary.
But Eliezer's point about big jumps deserves a reply. The transitions to humans and to atomic bombs do indicate something to think about -- and for that matter, so does the emergence of computers.
These all seem to me to be cases where the gradually rising or shifting capacities encounter a new "sweet spot" in the fitness landscape. Other examples are the evolution of flight, or of eyes, both of which happened several times. Or trees, a morphological innovation that...
PK, Phil Goetz, and Larry D'Anna are making a crucial point here, but I'm afraid it is getting somewhat lost in the noise. The point is (in my words) that lookup tables are a philosophical red herring. To emulate a human being they can't just map external inputs to external outputs. They also have to map a big internal state to the next version of that big internal state. (That's what Larry's equations mean.)
If there was no internal state like this, a GLUT couldn't emulate a person with any memory at all. But by hypothesis, it does emulate a person (pe...
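The point can be made concrete with a toy sketch (all the names and entries here are illustrative, not anyone's actual proposal). A stateless table maps input to output and so cannot remember anything; a person-emulating GLUT must map (internal state, input) to (next internal state, output).

```python
# Stateless table: input -> output. It answers the same way
# regardless of history, so it has no memory at all.
stateless = {"hello": "hi", "what did I say?": "no idea"}

# Stateful table: (state, input) -> (next_state, output).
# This is the shape a GLUT needs to emulate someone with memory.
stateful = {
    ("start", "hello"): ("heard_hello", "hi"),
    ("heard_hello", "what did I say?"): ("heard_hello", "you said hello"),
    ("start", "what did I say?"): ("start", "nothing yet"),
}

def run(table, inputs, state="start"):
    """Thread the internal state through a sequence of lookups."""
    outputs = []
    for inp in inputs:
        state, out = table[(state, inp)]
        outputs.append(out)
    return outputs

print(run(stateful, ["hello", "what did I say?"]))
# -> ['hi', 'you said hello']
```

The stateful table's second answer depends on what came before; no input-to-output table can reproduce that, which is why the internal state is essential to the argument.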
Thanks for taking the time and effort to hash out this zombie argument. Often people don't seem to get the extreme derangement of the argument that Chalmers actually makes, and imagine that because it is discussed in respectable circles it must make sense.
Even the people who do "understand" the argument and still support it don't let themselves see the full consequences. Some of your quotes from Richard Chappell are very revealing in this respect. I think you don't engage with them as directly as you could.
At one point, you quote Chappell:
It's mislea...
Eliezer sayeth: "'I want to be individually empowered by producing neato effects myself, without large capital investments and many specialists helping' ... [is] in principle doable - you can get this with, say, the right kind of nanotechnology, or (ahem) other sufficiently advanced tech, and bring it to a large user base..."
Agreed. But as you hint, Eliezer, this case is indistinguishable from magic. So arguably the class of fantasies I mention are equivalent to living in some interesting future. In any case they don't seem to match the sc...
There are a number of fantasy stories where the protagonist is very good at something, largely because they work hard at it, and then they enter a magical world and discover that their skills and work have a lot more impact. Often they have to work hard after they get there to apply their skills. Often the protagonist is a computer hacker and their skills, which in our world only work inside of computers, in a magical context can alter physical / consensual reality. (Examples: Broken Crescent, Web Mage. There are many others. Arguably this pattern goe...
MIT Press has just published Peter Grünwald's The Minimum Description Length Principle. His Preface, Chapter 1, and Chapter 17 are available at that link. Chapter 17 is a comparison of different conceptions of induction.
I don't know this area well enough to judge Peter's work, but it is certainly informative. Many of his points echo Eliezer's. If you find this topic interesting, Peter's book is definitely worth checking out.
Thanks, Eliezer. Regarding your questions:
I do think there is a good deal of commonality among the reasonable comments about what emergence is, and I also feel the force of Eliezer's request for negative examples.
I'll try to summarize (and of course over-simplify).
When we have a large collection of interacting elements, and we can measure a property of the collection as a whole, in some cases we'd like to call that property emergent, and in some cases we wouldn't.
I can think of three important cases:
The discussion about the "dissipation" of knowledge from generation to generation (or of piety and trust in God, as ZH says) reminds me of Elizabeth Eisenstein's history of the transition to printing. Manual copying (on average) reduces the accuracy of manuscripts. Printing (on average) increases the accuracy, because printers can keep the type made up into pages, and can fix errors as they are found. Thus a type-set manuscript becomes a (more or less reliable) nexus for the accumulation of increasingly reliable judgments.
Eisenstein's account ...
Great discussion! Regarding majoritarianism and markets, they are both specific judgment aggregation mechanisms with specific domains of application. We need a general theory of judgment aggregation, but I don't know of any under development.
In a purely speculative market (i.e. no consumption, just looking to maximize return) prices reflect majoritarian averages, weighted by endowment. Of course endowments change over time based on how good or lucky an investor is, so there is some intrinsic reputation effect. Also, investors can go bankrupt, ...
Nick Bostrom's point is important: We should regard the induced competition as a negative externality of the process that induces the competition -- grant writing, consideration for promotion, etc. The "correct" solution as Bostrom points out is to internalize the cost.
I think good companies do this quite carefully with the inducements they build into their culture -- they are looking to only generate competition that will produce net benefits to the company (not always the individuals).
Conversely, there are well known shop floor self-management...
I should mention that the NIPS '08 papers aren't online yet, but all previous conferences do have the papers, tutorials, slides, background material, etc. online. For example, here's last year's.