Again, I invite your feedback on this snippet from an intelligence explosion analysis that Anna Salamon and I have been working on. This section is less complete than the others; missing text is indicated with brackets: [].

_____

 

Many predictions of human-level digital intelligence have been wrong.1 On the other hand, machines surpass human ability at new tasks with some regularity (Kurzweil 2005). For example, machines recently achieved superiority at visually identifying traffic signs at low resolution (Sermanet and LeCun 2011), diagnosing cardiovascular problems from some types of MRI scan images (Li et al. 2009), and playing Jeopardy! (Markoff 2011). Below, we consider several factors that, taken together, appear to increase the odds that we will develop digital intelligence as the century progresses.

More hardware. For at least four decades, computing power2 has increased exponentially, in accordance with Moore’s law.3 Experts disagree on how much longer Moore’s law will hold (e.g. Mack 2011; Lundstrom 2003), but if it holds for two more decades then we may have enough computing power to emulate human brains by 2029.4 Even if Moore’s law fails to hold, our hardware should become much more powerful in the coming decades.5 More hardware doesn’t by itself give us digital intelligence, but it contributes to the development of digital intelligence in several ways:
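As a back-of-the-envelope illustration (assuming an 18-month doubling time, which is one common statement of Moore's law rather than a figure from the text), two more decades of exponential growth amount to roughly thirteen doublings, or about four orders of magnitude more computing power:

```python
# Rough sketch only: "two more decades of Moore's law" under an assumed
# 18-month doubling time (the exact period varies by formulation).
doubling_months = 18
years = 20
doublings = years * 12 / doubling_months   # about 13.3 doublings
growth_factor = 2 ** doublings             # about 1e4, i.e. ~4 orders of magnitude
```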

Powerful hardware may improve performance simply by allowing existing “brute force” solutions to run faster (Moravec 1976). Where such solutions do not yet exist, abundant hardware to exploit may give researchers an incentive to develop them quickly. Cheap computing may enable much more extensive experimentation in algorithm design, tweaking parameters or using methods such as genetic algorithms. Indirectly, cheap computing may enable the production and processing of enormous datasets to improve AI performance (Halevy, Norvig, and Pereira 2009), or result in an expansion of the information technology industry and of the quantity of researchers in the field.6
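To make the "extensive experimentation" point concrete, here is a toy sketch (not from the text; the fitness function and all parameters are made up for illustration) of a genetic algorithm tuning a single parameter. Cheaper computing directly buys larger populations and more generations:

```python
import random

# Toy illustration: evolve a single tuning parameter toward the value that
# maximizes a made-up fitness function peaking at x = 0.7.
def fitness(x):
    return -(x - 0.7) ** 2

def evolve(pop_size=50, generations=100, mutation=0.05, seed=0):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = [
            (rng.choice(parents) + rng.choice(parents)) / 2  # crossover: average two parents
            + rng.gauss(0, mutation)                         # mutation: small random nudge
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # converges near 0.7; more hardware buys bigger pop_size and generations
```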

Massive datasets. The greatest leaps forward in speech recognition and translation software have come not from faster hardware or smarter hand-coded algorithms, but from access to massive data sets of human-transcribed and human-translated words (Halevy, Norvig, and Pereira 2009). [add sentence about how datasets are expected to increase massively, or have been increasing massively and trends are expected to continue] [Possibly a sentence about Watson or usefulness of data for AI]

Better algorithms. Mathematical insights can reduce the computation time of a program by many orders of magnitude without additional hardware. For example, IBM’s Deep Blue played chess at the level of world champion Garry Kasparov in 1997 using about 1.5 trillion instructions per second (TIPS), but a program called Deep Junior did it in 2003 using only 0.015 TIPS. Thus, the power of the chess algorithms increased by a factor of 100 in only six years, or 3.33 orders of magnitude per decade (Richard and Shaw 2004). [add sentence about how this sort of improvement is not uncommon, with citations]
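The per-decade conversion above is simple logarithm arithmetic: a factor of 100 is two orders of magnitude, and spreading two orders of magnitude over 0.6 decades gives the quoted 3.33:

```python
import math

factor = 100  # claimed algorithmic speedup, Deep Blue (1997) to Deep Junior (2003)
years = 6
oom = math.log10(factor)             # 2.0 orders of magnitude
oom_per_decade = oom / (years / 10)  # about 3.33 orders of magnitude per decade
```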

Progress in neuroscience. [neuroscientists have figured out brain algorithms X, Y, and Z that are related to intelligence.] New insights into how the brain achieves human-level intelligence can inform our attempts to build human-level intelligence with silicon (van der Velde 2010; Koene 2011). 

Accelerated science. A growing First World will mean that more researchers at well-funded universities will be available to do research relevant to digital intelligence. The world’s scientific output (in publications) grew by a third from 2002 to 2007 alone, much of this growth driven by the rapid rise of scientific output in developing nations like China and India (Smith 2011). New tools can accelerate particular fields, just as fMRI accelerated neuroscience in the 1990s. Finally, the effectiveness of scientists themselves can potentially be increased with cognitive enhancement drugs (Sandberg and Bostrom 2009) and brain-computer interfaces that allow direct neural access to large databases (Groß 2009). Better collaboration tools like blogs and Google Scholar are already yielding results (Nielsen 2011).

Automated science. Early attempts at automated science — e.g., using data mining algorithms to make discoveries from existing data (Szalay and Gray 2006), or having a machine with no physics knowledge correctly infer natural laws from motion-tracking data (Schmidt and Lipson 2009) — were limited by the slowest part of the process: the human in the loop. Recently, the first “closed-loop” robot scientist successfully devised its own hypotheses (about yeast genomics), conducted experiments to test those hypotheses, assessed the results, and made novel scientific discoveries, all without human intervention (King et al. 2009). Current closed-loop robot scientists can only work on a narrow set of scientific problems, but future advances may allow for scalable, automated scientific discovery (Sparkes et al. 2010).

Embryo selection for better scientists. At age 8, Terence Tao scored 760 on the math SAT, one of only [2?3?] children ever to do this at such an age; he later went on to [have a lot of impact on math]. Studies of similar kids convince researchers that there is a large “aptitude” component to mathematical achievement, even at the high end.7 How rapidly would mathematics or AI progress if we could create hundreds of thousands of Terence Taos? This is a serious question because the creation of large numbers of exceptional scientists is an engineering project that we know in principle how to do. The plummeting costs of genetic sequencing [expected to go below AMOUNT per genome by SOONYEAR e.g. 2015] will soon make it feasible to compare the characteristics of an entire population of adults with those adults’ full genomes and, thereby, to unravel the heritable components of intelligence, diligence, and other contributors to scientific achievement. To make large numbers of babies with scientific abilities near the top of the current human range8 would then require only the ability to combine known alleles onto a single genome; procedures that can do this have already been developed for mice. China, at least, appears interested in this prospect.9

It isn’t clear which of these factors will ease progress toward digital intelligence, but it seems likely that — across a broad range of scenarios — some of these inputs will do so.

 

 

____

1 For example, Simon (1965, 96) predicted that “machines will be capable, within twenty years, of doing any work a man can do.”

2 The technical measure predicted by Moore’s law is the density of components on an integrated circuit, but this is closely tied to affordable computing power.

3 For important qualifications, see Nagy et al. (2010); Mack (2011).

4 This calculation depends on the “level of emulation” expected to be necessary for successful WBE. Sandberg and Bostrom (2008) report that attendees at a workshop on WBE tended to expect that emulation at the level of the brain’s spiking neural network, perhaps including membrane states and concentrations of metabolites and neurotransmitters, would be required for successful WBE. They estimate that if Moore’s law continues, we will have the computational capacity to emulate a human brain at the level of its spiking neural network by 2019, or at the level of metabolites and neurotransmitters by 2029.

5 Quantum computing may also emerge during this period. Early worries that quantum computing may not be feasible have been overcome, but it is hard to predict whether quantum computing will contribute significantly to the development of digital intelligence because progress in quantum computing depends heavily on unpredictable insights in quantum algorithms (Rieffel and Polak 2011).

6 Shulman and Sandberg (2010).

7 [Benbow etc. on study of exceptional talent; genetics of g; genetics of conscientiousness and openness, pref. w/ any data linking conscientiousness or openness to scientific achievement.  Try to frame in a way that highlights hard work type variables, so as to alienate people less.]

8 [folks with very top scientific achievement likely had lucky circumstances as well as initial gifts (so that, say, new kids with Einstein’s genome would be expected to average perhaps .8 times as exceptional).  However, one could probably identify genomes better than Einstein’s, both because these technologies would let genomes be combined that had unheard of, vastly statistically unlikely amounts of luck, and because e.g. there are likely genomes out there that are substantially better than Einstein (but on folks who had worse environmental luck).]

9 [find source]

 

 

_____

All references, including the ones used above:

 

 

  • Bainbridge 2006 managing nano-bio-info-cogno innovations
  • Baum Goertzel Goertzel 2011 how long until human-level ai
  • Bostrom 2003 ethical issues in advanced artificial intelligence
  • Legg 2008 machine super intelligence
  • Caplan 2008 the totalitarian threat
  • Sandberg & Bostrom 2011 machine intelligence survey
  • Chalmers 2010 singularity philosophical analysis
  • Turing 1950 computing machinery and intelligence
  • Good 1965 speculations concerning...
  • Von Neumann 1966 theory of self-reproducing automata
  • Solomonoff 1985 the time scale of artificial intelligence
  • Vinge 1993 coming technological singularity
  • Yudkowsky 2001 creating friendly ai
  • Yudkowsky 2008a negative and positive factor in global risk
  • Yudkowsky 2008b cognitive biases potentially affecting
  • Russell Norvig 2010 artificial intelligence a modern approach 3e
  • Nordman 2007 If and then: a critique of speculative nanoethics
  • Moore and Healy the trouble with overconfidence
  • Tversky Kahneman 2002 extensional versus intuitive reasoning, the conjunction fallacy
  • Nickerson 1998 Confirmation Bias; A Ubiquitous Phenomenon in Many Guises
  • Dreyfus 1972 what computers can't do
  • Rhodes 1995 making of the atomic bomb
  • Arrhenius 1896 On the Influence of Carbonic Acid in the Air Upon the Temperature
  • Crawford 1997  Arrhenius' 1896 model of the greenhouse effect in context
  • Rasmussen 1975 WASH-1400 report
  • McGrayne 2011 theory that would not die
  • Lundstrom 2003 Enhanced: Moore’s law forever? 
  • Tversky and Kahneman 1974 Judgment under uncertainty: Heuristics and biases
  • Horgan 1997 end of science
  • Sutton and Barto 1998 reinforcement learning
  • Hutter 2004 universal ai
  • Schmidhuber 2007 godel machines
  • Dewey 2011 learning what to value
  • Simon 1965 The Shape of Automation for Men and Management
  • Marcus 2008 kluge
  • Sandberg Bostrom 2008 whole brain emulation
  • Kurzweil 2005 singularity is near
  • Sermanet LeCun 2011 traffic sign recognition with multi-scale convolutional networks
  • Li et al. 2009 optimizing a medical image analysis system using
  • Markoff 2011 watson trivial it's not
  • Smith 2011 Knowledge networks and nations
  • Sandberg Bostrom 2009 cognitive enhancement regulatory issues
  • Groß 2009 Blessing or Curse? Neurocognitive Enhancement by “Brain Engineering”
  • Williams 2011 prediction markets theory and applications
  • Nielsen 2011 reinventing discovery
  • Tetlock 2005 expert judgment
  • Green & Armstrong 2007 The Ombudsman: Value of Expertise for Forecasting 
  • Weinberg et al. 2010 philosophers expert intuiters
  • Szalay and Gray 2006 science in an exponential world
  • Schmidt Lipson 2009 distilling free-form natural laws from experimental data
  • King et al. 2009 the automation of science
  • Sparkes et al. 2010 Towards Robot Scientists for autonomous scientific discovery
  • Stanovich 2010 rationality and the reflective mind
  • Lilienfeld, Ammirati, and Landfield 2009 giving debiasing away
  • Lipman 1983 Thinking Skills Fostered by Philosophy for Children
  • Fong et al 1986 The effects of statistical training on thinking about everyday problems
  • Shoemaker (1979). The role of statistical knowledge in gambling decisions
  • Larrick 2004 debiasing
  • Gordon 2007 reasoning about the future of nanotechnology
  • Landeta 2006 Current validity of the delphi method in social sciences
  • Maddison 2001 the world economy a millennial perspective
  • Niparko 2009 cochlear implants principles and practices
  • Bostrom 2002 existential risks
  • Joyce 2007 moral anti-realism stanford encyclopedia of philosophy
  • Portmore 2011 commonsense consequentialism
  • Martin 1971 brief proposal on immortality
  • Bostrom Cirkovic 2008 global catastrophic risks
  • National Academy of Sciences 2010 persistent forecasting of disruptive technologies
  • Donohoe and Needham 2009 Moving best practice forward, Delphi characteristics
  • Gordon 1994 the delphi method
  • Kesten, Armstrong, and Graefe 2007 Methods to Elicit Forecasts from Groups
  • Woudenberg 1991 an evaluation of delphi
  • Armstrong 2006 Findings from evidence-based forecasting
  • Armstrong 1985 Long-Range Forecasting: From Crystal Ball to Computer, 2nd edition
  • Anderson and Anderson-Parente 2011 A case study of long-term Delphi accuracy
  • Bixby 2002 Solving real-world linear programs: A decade and more of progress
  • Fox 2011 the limits of intelligence
  • Friedman 1953 The Methodology of Positive Economics
  • Schneider 2010 homo economicus, or more like Homer Simpsons
  • Cartwright 2011 behavioral economics
  • Bacon and Van Dam 2010 recent progress in quantum algorithms
  • Rieffel Polak 2011 quantum computing a gentle introduction
  • Mack 2011 fifty years of moore’s law
  • Nagy et al. 2010 testing laws of technological progress
  • Shulman Sandberg 2010 implications of a software-limited singularity
  • Moravec 1976 The Role of raw power in intelligence
  • Alberth 2008 forecasting technology costs via the experience curve
  • Omohundro 2007 the nature of self-improving AI
  • Kurzban 2011 why everyone (else) is a hypocrite: evolution and the modular mind
  • Richard Shaw 2004 chips architectures and algorithms
  • Yudkowsky 2010 timeless decision theory
  • De Blanc Ontological Crises in Artificial Agents' Value Systems
  • Halevy, Norvig, and Pereira 2009 the unreasonable effectiveness of data
  • Ramachandran 2011 the tell-tale brain
  • van der Velde 2010 Where Artificial Intelligence and Neuroscience Meet
  • Koene 2011 AGI and neuroscience: Open sourcing the brain (in AGI-11 proceedings)
  • Lichtenstein, Fischhoff, and Phillips 1982 calibration of probabilities the state of the art to 1980
  • Griffin and Tversky 1992 The weighing of evidence and the determinants of confidence
  • Yates, Lee, Sieck, Choi, Price 2002 Probability judgment across cultures
  • Murphy and Winkler 1984 probability forecasting in meteorology
  • Grove and Meehl 1996 Comparative Efficiency of Informal...
  • Grove et al. 2000 Clinical versus mechanical prediction: A meta-analysis
  • Kandel et al. 2000 principles of neural science, 4th edition
  • Shulman 2010 Omohundro's "Basic AI Drives" and Catastrophic Risks
  • Friedman 1993 Problems of Coordination in Economic Activity
  • Cooke 1991 experts on uncertainty
  • Yampolskiy forthcoming Leakproofing the Singularity
  • Lampson 1973 a note on the confinement problem
  • Schaeffer 1997 one jump ahead
  • Dolan and Sharot 2011 Neuroscience of preference and choice

 

 



27 comments:

For some of these, you can play the "magic wand" game to probe the connections between nodes in your belief network:

  • More hardware - suppose you waved a magic wand just now, and suddenly there were 10 times as many computers around (or they all got 10 times faster and bigger), how do you suppose would that get us closer to digital intelligence?
  • Bigger data - magic wand gives you access to every word ever spoken by a human, magically transcribed; how does that get us closer? From the perspective of AGI, statistical machine translation, no matter how wondrous-looking, is just plain dumb - it does not even pretend to be able to generalize insights.
  • Better algorithms - this should really be "faster algorithms"; by definition "better" is what gets us closer to AGI. But short of a breakthrough in complexity theory, optimized algorithms are just an equivalent of faster hardware. Precisely which algorithms would bring us closer to AI if we could speed them up a lot with the magic wand? I can't really see a quicker sort, or matrix inverse, or even a faster traveling salesman (if that was the only algorithm in that class we knew to speed up).

More hardware [...] how do you suppose would that get us closer to digital intelligence?

IF Minsky's "Society of Mind" is near to accurate, then if we had enough separate "narrow" agents operating, we could solve all problems that could be encountered -- call this the "Eusocial Generalization" approach. That is, rather than actually solving the problem of general intelligence, just make programs that solve every last problem we can think of, individually -- and then run them all at once.

Horridly inefficient, but if we had magically infinite computational power available we could at least implement it.

As to the "bigger data" -- an element can be part of the solution without being capable of providing the entire solution. Highly rigorous relational databases allow pattern-matching algorithms to at least perform superior analysis.

folks with very top scientific achievement likely had lucky circumstances as well as initial gifts (so that, say, new kids with Einstein’s genome would be expected to average perhaps .8 times as exceptional)

.8 sounds like a lot, though it depends on what ".8 times as exceptional" means.

For example, IBM’s Deep Blue played chess at the level of world champion Garry Kasparov in 1997 using about 1.5 trillion instructions per second (TIPS), but a program called Deep Junior did it in 2003 using only 0.15 TIPS. Thus, the power of the chess algorithms increased by a factor of 100 in only six years

I'm seeing a factor of 10...

Yep. The source says:

Moravec claims [Deep Blue] to be equivalent to a general-purpose processor having throughput on the order of 1-3 trillion instructions per second (TIPS). [...] The host computer [that Deep Blue's successor Deep Junior ran on] was capable of a peak throughput of approximately 15 billion instructions per second (GIPS).

If we consider the Deep Blue machine to be a 1.5 TIPS machine for arithmetic convenience [....]

So "only 0.15 TIPS" should have been "only 0.015 TIPS".

This calculation depends on the “level of emulation” expected to be necessary for successful WBE. Sandberg and Bostrom (2008) report that attendees to a workshop on WBE tended to expect that emulation at the level of the brain’s spiking neural network, perhaps including membrane states and concentrations of metabolites and neurotransmitters, would be required for successful WBE. They estimate that if Moore’s law continues, we will have the computational capacity to emulate a human brain at the level of its spiking neural network by 2019, or at the level of metabolites and neurotransmitters by 2029.

The roadmap does estimate that we could do a spiking neural network emulation in 2019, but the target dates for the more detailed levels of emulation come later: 2033 for the electrophysiology level, 2044 for the metabolome level. The 2029 estimate is right if you only look at the demands for memory (on page 79), but the demands are higher for processing power (on page 80).

Embryo selection for better scientists.

Breeding for a single ability tends to be detrimental to others. E.g. fastest breeds of dogs/horses are often stupid and/or sickly. I would hate to see an army of geniuses who lack some essential qualities like compassion.

If people get bored of arguing about torture v. specks, for variety one could substitute the hypothetical of creating a human whose superintelligence would benefit the rest of the species in a way similar to an FAI, but at a cost of an existence that was extraordinarily miserable in some way on a personal level.

Under "Progress in Neuroscience", is this the sort of thing you are referring to ?

Buesing L, Bill J, Nessler B, Maass W (2011) Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons. PLoS Comput Biol 7(11): e1002211. doi:10.1371/journal.pcbi.1002211

Great paper; thanks!

Embryo selection for better scientists. At age 8, Terrence Tao scored 760 on the math SAT, one of only [2?3?] children ever to do this at such an age; he later went on to [have a lot of impact on math]. Studies of similar kids convince researchers that there is a large “aptitude” component to mathematical achievement, even at the high end.7 How rapidly would mathematics or AI progress if we could create hundreds of thousands of Terrence Tao’s?

Though I think agree with the general point that you're trying to make here (that there's a large "aptitude" component to the skills relevant to AI research and that embryo selection technology could massively increase the number of people who have high aptitude), I don't think that it's so easy to argue:

(a) The math that Terence Tao does is arguably quite remote from AI research.

(b) More broadly, the relevance of mathematical skills to AI research skills is not clear cut.

(c) The SAT tests mathematical aptitude only very obliquely.

(d) Correlation is not causation; my own guess is that high mathematical aptitude as measured by conventional metrics (e.g. mathematical olympiads) is usually necessary but seldom sufficient for the highest levels of success as a mathematical researcher.

(e) Terence Tao is a single example

7 [Benbow etc. on study of exceptional talent; genetics of g; genetics of conscientiousness and openness, pref. w/ any data linking conscientiousness or openness to scientific achievement. Try to frame in a way that highlights hard work type variables, so as to alienate people less.]

Is there really high quality empirical data here? I vaguely remember Carl referencing a study about people at the one in ten thousand level of IQ having more success becoming professors than others, but my impression is that there's not much research in the way of the genetics of high achieving scientists.

For what it's worth I think that the main relevant variable here is a tendency (almost involuntary) to work in a highly focused way for protracted amounts of time. This seems to me much more likely to be the limiting factor than g.

I think that one would be on more solid footing both rhetorically and factually just saying something like "capacity for scientific achievement appears to have a large genetic component and it may be possible to select for genes relevant to high scientific achievement by studying the genes of high achieving scientists."

[anonymous]:

Studies of similar kids convince researchers that there is a large “aptitude” component to mathematical achievement, even at the high end.

This sentence seems strange, because aptitude isn't the best choice of word. I know you are trying not to alienate people, but really if someone doesn't accept the existence of innate intelligence with genetic causes, they shouldn't be part of your target audience for this stuff. I would just go ahead and say "there is a large genetically determined component to mathematical achievement".

Also, "even at the high end" should be replaced with "even amongst the most capable individuals", or otherwise changed to be more precise.

This is a serious question because the creation of large numbers of exceptional scientists is an engineering project that we know in principle how to do.

This sentence seems awkward to me.

Perhaps: How rapidly would mathematics or AI progress if we could create hundreds of thousands of Terrence Tao’s? This is not an idle question, because in principle the creation of large numbers of exceptional scientists is a feasible engineering project.

Finally, it seems strange to me that your list of references is longer than the actual excerpt. Is anyone actually going to look at those? It reminds me of Jaynes in PT:TLoS criticising a phd student giving a presentation who spent all of his allotted time setting out his mathematical definitions and being outstandingly rigorous, and never got round to actually demonstrating his findings.

You also have a lot of references in parentheses, which make the piece frustrating to read - hopefully you'll use little numbers instead.

Finally, it seems strange to me that your list of references is longer than the actual excerpt.

The list of references seems to also include the references for the previous (and perhaps also the following?) excerpts.

You also have a lot of references in parentheses, which make the piece frustrating to read - hopefully you'll use little numbers instead.

This will depend entirely on the guidelines of the publication where they'll submit this, but I'll regardless note that I prefer parentheses in the text.

Better algorithms. Mathematical insights can reduce the computation time of a program by many orders of magnitude without additional hardware. For example, IBM’s Deep Blue played chess at the level of world champion Garry Kasparov in 1997 using about 1.5 trillion instructions per second (TIPS), but a program called Deep Junior did it in 2003 using only 0.15 TIPS. Thus, the power of the chess algorithms increased by a factor of 100 in only six years, or 3.33 orders of magnitude per decade (Richard and Shaw 2004). [add sentence about how this sort of improvement is not uncommon, with citations]

One good example is linear programming and related algorithms. Kaj discussed this earlier here (PDF):

In the past, improvements in algorithms have sometimes been even more important than improvements in hardware. The President's Council of Advisors on Science and Technology [2010] mentions that performance on a benchmark production planning model improved by a factor of 43 million between 1988 and 2003. Out of the improvement, a factor of roughly 1,000 was due to better hardware and a factor of roughly 43,000 was due to improvements in algorithms. Also mentioned is an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.
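The quoted decomposition multiplies out as claimed: the hardware and algorithm contributions are treated as roughly independent factors whose product is the overall speedup:

```python
# Sanity check of the figures quoted from the President's Council report.
hardware_factor = 1_000       # reported hardware contribution, 1988-2003
algorithm_factor = 43_000     # reported algorithmic contribution
total = hardware_factor * algorithm_factor  # 43,000,000: the "factor of 43 million"
```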

This is a serious question because the creation of large numbers of exceptional scientists is an engineering project that we know in principle how to do. The plummeting costs of genetic sequencing [expected to go below AMOUNT per genome by SOONYEAR e.g. 2015] will soon make it feasible to compare the characteristics of an entire population of adults with those adults’ full genomes, and, thereby, to unravel the heritable components of intelligence, diligence, and other contributors to scientific achievement.

Way too optimistic. The plummeting costs of genetic sequencing have already made available the full genomes for individuals of many organisms, including humans. However, the results derived from the Human Genome Project, at least as summarised here are rather underwhelming, as far as engineering projects are concerned. What you are proposing is not an engineering project, it is basic research -- that is, no-one knows what the results are going to be until they find them.

The blocking point on that is monumentally massive replication. This has not yet happened.

That's one obstacle, of course, but I'm going with the original supposition of cheap and fast readout of whole genomes being available. If it was, what research proposal would you write? What questions would you expect to be able to answer?

You can 'go with the original supposition of cheap and fast readout of whole genomes being available', but in that case the counterargument is malformed - it's way cheaper than it was, but still way too expensive for monumentally massive replication, so failure to have done so is still expected.

So, what can you do once you have it super-cheap? The main thing to do is to do a huge association fishing-expedition studies, with the enormous numbers being sufficient to make up for the huge numbers of hypotheses being tested, which then lead into studies to determine the nature of the association. The HGP tested what? A few dozen people? That's not going to be statistically significant for just about anything.

When the genome gets cheap enough that it's insignificant compared to the other costs, then it changes the cost analysis for ordinary experiment design in two ways. First, you can add genomic data to existing experiments just to clarify the controls. Secondly, in genomic experiments, it enables you to expand your cohort. This in turn shifts cost-saving focus to the other per-person elements. An experiment could take, say, the full genome, an online IQ test, and several proxies for intelligence, and sample many people, rather than do multiple batteries of IQ tests conducted in person. If a genome costs $5, you can afford to have a cohort that will make the experiment worth something. If a genome costs $1k, you're not going to be able to afford the massive replication, no matter how cheap you make the profiling. Even if you maintain your profiling standards, saving that much money will let you expand your cohort.

They estimate that if Moore’s law continues, we will have the computational capacity to emulate a human brain at the level of its spiking neural network by 2019, or at the level of metabolites and neurotransmitters by 2029.

...for one million dollars. Note that that would not be cost-competitive with humans.

Typical humans, no.

That said, there are individuals that corporations pay more than a million dollars to rent the time of. If we assume that decision is cost-effective (which is a big "if"), getting to own those individuals outright for a million dollars might be a bargain.

Here, we risk crossing over from the realm of wondering "how much computer power it would take" into the bizarre fantasy realm - where emulations actually happen before engineered machine intelligence does.

Agreed. OTOH, to my mind we'd already made that crossover earlier in the discussion, as well... once we have engineered human-level machine intelligences in the mix, all assumptions about how much anything costs are just pretty-sounding numbers, so to talk about emulations costing a million dollars (or any other particular number) already presumes that we don't have engineered human-level machine intelligences yet.

Massive datasets.

Using “captchas” to digitize books

A growing First World will mean

E.g. Romania joins the first world, or Germany grows its economy?

However, one could probably identify genomes better than Einstein’s, both because these technologies would let genomes be combined that had unheard of, vastly statistically unlikely amounts of luck, and because e.g. there are likely genomes out there that are substantially better than Einstein (but on folks who had worse environmental luck).

We have every reason to believe that humans' maximum potential intelligence is greater than any achieved by anyone in the past, be it Archimedes, Einstein, or anyone else.

Because academic achievement is the result of many biological, environmental, social, and other factors, those we can identify as having had the most outstandingly productive and creative minds were likely not as smart as some members of the vast majority of humanity that had no opportunity to noticeably intellectually distinguish itself.

Much more importantly, the vast majority of possible combinations of human alleles have never been combined in a person. As scientists learn more and more about different combinations of alleles and their effects on intelligence, diligence, and other traits, they will be able to make combinations of alleles that would have been extremely unlikely to have ever occurred in nature.

Perhaps consider breaking "Accelerated science" down into: more scientists, more technologists, more programmers, and better programming tools. Also, consider scratching embryo selection as being too low down on the list.

because these technologies would let genomes be combined that had unheard of, vastly statistically unlikely amounts of luck

This sounds like you mean a Teela Brown gene. Do you actually mean that? If so, that's kind of crazy. If not, rephrase?

I think ey means something like this:

because these technologies would let genomes be combined that would have taken unheard of amounts of luck to arise naturally at any significant frequency.

I understood this to mean "would let genomes be [deliberately] combined that [would require] unheard of, vastly statistically unlikely amounts of luck [to occur naturally]"