What personal factors, if any, cause some people to tend towards one direction or another in some of these key prognostications?
For example, do economists tend more towards multiagent scenarios while computer scientists or ethicists tend more towards singleton prognostications?
Do neuroscientists tend more towards thinking that WBE will come first and AI folks more towards AGI, or the opposite?
Do professional technologists tend to have earlier timelines and others later timelines, or vice versa?
Do tendencies towards the political left or right influence s...
...Lifelong depression of intelligence due to iodine deficiency remains widespread in many impoverished inland areas of the world -- an outrage, given that the condition can be prevented by fortifying table salt at a cost of a few cents per person per year.
According to the World Health Organization in 2007, nearly 2 billion individuals have insufficient iodine intake. Severe iodine deficiency hinders neurological development and can lead to cretinism, which involves an average loss of about 12.5 IQ points. The condition can be easily and inexpensively prevented through salt iodization.
As of July 30, GiveWell considers the International Council for the Control of Iodine Deficiency Disorders Global Network (ICCIDD) a contender for their 2014 recommendation, according to their ongoing review. They also mention that they're considering the Global Alliance for Improved Nutrition (GAIN), which they've had their eye on for a few years. They describe some remaining uncertainties -- this has been a major philanthropic success for the past couple decades, so why is there a funding gap now, well before the work is finished? Is it some sort of donor fatigue, or are the remaining countries that need iodization harder to work in, or is it something else?
(Also, average gains from intervention seem to be more like 3-4 IQ points.)
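The figures above invite a rough cost-effectiveness calculation. Here is a back-of-envelope sketch; the coverage duration and exact per-person cost are my own illustrative assumptions, not GiveWell's or the WHO's figures:

```python
# Rough cost per IQ point for salt iodization, using the figures quoted above.
cost_per_person_year = 0.05  # "a few cents per person per year" (assumed $0.05)
years_of_coverage = 20       # assumed: cover a person from conception through childhood
avg_iq_gain = 3.5            # midpoint of the 3-4 point average gain noted above

total_cost = cost_per_person_year * years_of_coverage
cost_per_iq_point = total_cost / avg_iq_gain
print(f"~${cost_per_iq_point:.2f} per IQ point gained")  # ~$0.29
```

Even if these assumptions are off by an order of magnitude, the cost per IQ point stays remarkably low, which is what makes the existence of a funding gap here puzzling.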
Do you have a preferred explanation for the Flynn effect?
The Norwegian military conscripts above were the subject of a paper suggesting an interesting theory I hadn't heard before: that children born into larger families are less intelligent on average, and so intelligence has risen as family sizes have shrunk.
'Let an ultraintelligent person be defined as a person who can far surpass all the intellectual activities of any other person however clever. Since the improvement of people is one of these intellectual activities, an ultraintelligent person could produce even better people; there would then unquestionably be an 'intelligence explosion,' and the intelligence of ordinary people would be left far behind. Thus the first ultraintelligent person is the last invention that people need ever make, provided that the person is docile enough to tell us how to keep them under control.'
Does this work?
Economic history suggests big changes are plausible.
Sure, but it is hard to predict what changes are going to happen and when.
In particular, major economic changes are typically precipitated by technological breakthroughs. It doesn't seem that we can predict these breakthroughs looking at the economy, since the causal relationship is mostly the other way.
AI progress is ongoing.
Ok.
AI progress is hard to predict, but AI experts tend to expect human-level AI in mid-century.
But AI experts have a notoriously poor track record at predicting human-leve...
I'd like to propose another possible in-depth investigation: How efficiently can money and research be turned into faster development of biological cognitive enhancement techniques such as iterated embryo selection? My motivation for asking that question is that, since extreme biological cognitive enhancement could reduce existential risk and other problems by creating people smart enough to be able to solve them (assuming we last long enough for them to mature, of course), it might make sense to pursue it if it can be done efficiently. Given the scarcity ...
Brain-computer interfaces for healthy people don't seem to help much, according to Bostrom. Can you think of BCIs that might plausibly exist before human-level machine intelligence, which you would expect to be substantially useful? (p46)
If ten percent of the population used a technology that made their children 10 IQ points smarter, how strong do you think the pressure would be for others to take it up? (p43)
If parents had strong embryo selection available to them, how would the world be different, other than via increased intelligence?
Ambiguities around 'intelligence' often complicate discussions about superintelligence, so it seems good to think about them a little.
Some common concerns: is 'intelligence' really a thing? Can intelligence be measured meaningfully as a single dimension? Is intelligence the kind of thing that can characterize a wide variety of systems, or is it only well-defined for things that are much like humans? (Kruel's interviewees bring up these points several times)
What do we have to assume about intelligence to accept Bostrom's arguments? For instance, does the cl...
What are the trends in those things that make groups of humans smarter? e.g. How will world capacity for information communication change over the coming decades? (Hilbert and Lopez's work is probably relevant)
A social / economic / political system is not just analogous to, but is, an artificial intelligence. Its purpose is to sense the environment and use that information to choose actions that further its goals. The best way to make groups of humans smarter would be to consciously apply what we've learned from artificial intelligence to human organiza...
If a technology existed that could make your children 10 IQ points smarter, how willing do you think people would be to use it? (p42-3)
On the other hand, we could point to Down syndrome eugenics: while it's true that the incidence of Down's has fallen a lot in America thanks to selective abortion, it's also true that Down's has not disappeared, and the details make me pessimistic about any widespread use in America of embryo selection for relatively modest gains.
An interesting paper: "Decision Making Following a Prenatal Diagnosis of Down Syndrome: An Integrative Review", Choi et al 2012 (excerpts). To summarize:
the people who do abort tend to be motivated to do so out of fear: fear that a Down's child will be too demanding and wreck their life.
Not out of concern for the child's reduced quality of life, nor because Down's syndrome is extremely expensive to society, nor because sufferers go senile in their 40s, nor because they're depriving a healthy child of the chance to live, etc. -- but out of personal selfishness.
Add onto this:
most people see and endorse a strong asymmetry between 'healing the sick' and 'improving the healthy'
You can see this in the citation in Shulman & Bostrom.
If parents could choose their preferred embryos from a large number, with good knowledge of the characteristics the children would have, how much of this selection power do you think those who used this power would spend on intelligence? (p39)
This chapter seems like the right place to add this to the conversation. Near the end of the book, Bostrom suggests that a lot of work should be put into generating crucial considerations for superintelligent AI. I've made a draft list of some crucial considerations, but it's definitely the sort of thing that should grow and change as other people make their own versions of it. Biological superintelligences didn't really make my list at all yet.
Bostrom says it is probably infeasible to 'download' large chunks of data from one brain to another, because brains are idiosyncratically formatted and meaning is likely spread holistically through patterns in a large number of neurons (p46). Do you agree? Do you think this puts such technology out of reach until after human-level machine intelligence?
Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. (p45-6)
This seems far from obvious to me. Firstly, why suppose that making sense of the data is such a bottleneck? And then even if making sense is a bottleneck, if the data is in a different form it might be easier to make sense of.
Intuitively, things that are already inside one's head are much ea...
Do you think the consequences listed in Table 6 (Possible impacts from genetic selection in different scenarios) are accurate? (p40) What does 'posthumanity' look like? What other consequences might you expect in these scenarios?
Intra-individual neuroplasticity and IQ - Something we can do for ourselves (and those we care about) right now
Sorry to get this one in at the last minute, but better late than..., and some of you will see this.
Many will be familiar with the Harvard psychiatrist, neuroscience researcher, and professor of medicine John Ratey, MD, from seeing his NYT-bestselling books in recent years. He excels at writing for the intelligent lay audience without dumbing down his books to the point where they are useless to those of us who read above the layman's level in...
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we finish chapter 2 with three more routes to superintelligence: enhancement of biological cognition, brain-computer interfaces, and well-organized networks of intelligent agents. This corresponds to the fourth section in the reading guide, Biological Cognition, BCIs, Organizations.
This post summarizes the section and offers a few relevant notes and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: “Biological Cognition” and the rest of Chapter 2 (p36-51)
Summary
Biological intelligence
Brain-computer interfaces
Networks and organizations
The book so far
Here's a recap of what we have seen so far, now at the end of Chapter 2:
Do you disagree with any of these points? Tell us about it in the comments.
Notes
Snake Oil Supplements? is a nice illustration of scientific evidence for different supplements, here filtered for those with purported mental effects, many of which relate to intelligence. I don't know how accurate it is, or where to find a summary of apparent effect sizes rather than evidence, which I think would be more interesting.
Ryan Carey and I talked to Gwern Branwen - an independent researcher with an interest in nootropics - about prospects for substantial intelligence amplification. I was most surprised that Gwern would not be surprised if creatine gave normal people an extra 3 IQ points.
And some more health-specific ones.
People have apparently been getting smarter by about 3 IQ points per decade for much of the twentieth century, though this trend may be ending. Several explanations have been proposed. The effect's namesake, James Flynn, has a TED talk on the phenomenon. It is strangely hard to find a good summary picture of these changes, but here's a table from Flynn's classic 1987 paper of measured increases at that point:
Here are changes in IQ test scores over time in a set of Polish teenagers, and a set of Norwegian military conscripts respectively:
This study uses 'Genome-wide Complex Trait Analysis' (GCTA) to estimate that about half of the variation in fluid intelligence in adults is explained by common genetic variation (childhood intelligence may be less heritable). These studies use genetic data to predict 1% of variation in intelligence, and this genome-wide association study (GWAS) allowed prediction of 2% of variation in education and IQ. This study finds several common genetic variants associated with cognitive performance.

Stephen Hsu very roughly estimates that you would need a million samples to characterize the relationship between intelligence and genetics. According to Robertson et al, even among students in the top 1% of quantitative ability, cognitive performance predicts differences in occupational outcomes later in life. The Social Science Genetics Association Consortium (SSGAC) leads research efforts on the genetics of education and intelligence, and is also investigating the genetics of other 'social science traits' such as self-employment, happiness and fertility.

Carl Shulman and Nick Bostrom provide some estimates of the feasibility and impact of genetic selection for intelligence, along with a discussion of reproductive technologies that might facilitate more extreme selection. Robert Sparrow writes about 'in vitro eugenics'. Stephen Hsu also had an interesting interview with Luke Muehlhauser about several of these topics, and summarizes research on genetics and intelligence in a Google Tech Talk.
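Estimates of gains from embryo selection, like Shulman and Bostrom's, rest on a simple order-statistics model: pick the embryo with the best polygenic score out of n, and the expected gain scales with how much of the trait variance the predictor captures. Here is a minimal Monte Carlo sketch of that model; the variance-captured figure is an assumption for illustration, and the numbers are not meant to reproduce their table:

```python
import random
import statistics

def expected_max_std_normal(n, trials=20_000):
    """Monte Carlo estimate of E[max of n draws from N(0, 1)]."""
    rng = random.Random(0)  # fixed seed for reproducibility
    return statistics.fmean(
        max(rng.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)
    )

SD_IQ = 15.0        # IQ points per population standard deviation
VAR_CAPTURED = 0.5  # assumed: fraction of IQ variance the predictor captures
                    # (roughly the GCTA estimate above; today's GWAS
                    # predictors capture more like 1-2%)

# Expected IQ gain from keeping the top-scoring embryo out of n.
for n in (2, 10, 100):
    gain = SD_IQ * VAR_CAPTURED ** 0.5 * expected_max_std_normal(n)
    print(f"best of {n:>3}: ~{gain:.1f} IQ points")
```

Under these (generous) assumptions, selecting 1-in-10 yields a double-digit gain; with a predictor capturing only 1-2% of variance, the same selection yields only a few points, which is the gap Hsu's million-sample estimate is about closing.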
For Parkinson's disease relief, allowing locked-in patients to communicate, handwriting, and controlling robot arms.
Big ones I can think of include innovations in using text (writing, printing, digital text editing), communicating better in other ways (faster, further, more reliably), increasing population size (population growth, or connection between disjoint populations), systems for trade (e.g. currency, finance, different kinds of marketplace), innovations in business organization, improvements in governance, and forces leading to reduced conflict.
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about 'forms of superintelligence', in the sense of different dimensions in which general intelligence might be scaled up. To prepare, read Chapter 3, Forms of Superintelligence (p52-61). The discussion will go live at 6pm Pacific time next Monday 13 October. Sign up to be notified here.