This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the twenty-sixth section in the reading guide: Science and technology strategy. Sorry for posting late—my car broke down.

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remembered), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Science and technology strategy” from Chapter 14


Summary

  1. This section introduces concepts that are useful for thinking about long-term issues in science and technology (p228)
  2. Person-affecting perspective: one should act in the best interests of everyone who already exists, or who will exist independent of one's choices (p228) 
  3. Impersonal perspective: one should act in the best interests of everyone, including those who may be brought into existence by one's choices. (p228)
  4. Technological completion conjecture: "If scientific and technological development efforts do not cease, then all important basic capabilities that could be obtained through some possible technology will be obtained." (p229)
    1. This does not imply that it is futile to try to steer technology. Efforts may cease. It might also matter exactly when things are developed, who develops them, and in what context.
  5. Principle of differential technological development: one should slow the development of dangerous and harmful technologies relative to beneficial technologies (p230)
  6. We have a preferred order for some technologies, e.g. it is better to have superintelligence later relative to social progress, but earlier relative to other existential risks. (p230-233)
  7. If a macrostructural development accelerator is a magic lever which speeds up the large-scale features of history (e.g. technological change, geopolitical dynamics) while leaving the small-scale features the same, then we can ask whether pulling the lever would be a good idea (p233). Bostrom concludes that it matters mainly through affecting how well prepared humanity is for future transitions.
  8. State risk: a risk that persists while you are in a certain situation, such that the amount of risk you incur is a function of the time spent there, e.g. the risk from asteroids while we lack the technology to redirect them. (p233-4)
  9. Step risk: a risk arising from a transition, where the amount of risk is mostly not a function of how long the transition takes, e.g. traversing a minefield: crossing faster is not especially safer. (p234) (A toy numerical sketch contrasting state and step risks follows this list.)
  10. Technology coupling: a predictable timing relationship between two technologies, such that hastening the first technology will hasten the second, either because the second is a precursor or because it is a natural consequence. (p236-8) e.g. brain emulation is plausibly coupled to 'neuromorphic' AI, because the understanding required to emulate a brain might allow one to more quickly create an AI on similar principles.
  11. Second guessing: acting as if "by treating others as irrational and playing to their biases and misconceptions it is possible to elicit a response from them that is more competent than if a case had been presented honestly and forthrightly to their rational faculties" (p238-40)
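
The state/step distinction lends itself to a simple numerical illustration. Below is a minimal sketch in Python (my own toy model, not from the book); the hazard rate and transition probability are made-up numbers, and the model assumes a constant annual hazard while in the risky state:

# Toy model: state risk compounds with exposure time; step risk does not.
# All numbers are hypothetical, for illustration only.

def state_risk(annual_hazard, years):
    # Probability of catastrophe from spending 'years' in a risky state,
    # assuming a constant annual hazard rate:
    # P = 1 - (1 - annual_hazard) ** years
    return 1 - (1 - annual_hazard) ** years

def step_risk(transition_prob):
    # Probability of catastrophe from making a risky transition; roughly
    # independent of how long the transition takes (the minefield: running
    # faster doesn't make the crossing much safer).
    return transition_prob

print(state_risk(0.001, 50))  # ~0.049: a 0.1% annual hazard borne for 50 years
print(step_risk(0.10))        # 0.10, however quickly the step is taken

On this toy model, delaying a risky transition by fifty years pays off only if the extra preparation reduces the step risk by more than the roughly 5% of state risk accumulated while waiting, which is essentially how Bostrom frames the question of whether it is better to face a transition like superintelligence sooner or later.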

Another view

There is a common view which says we should not act on detailed abstract arguments about the far future like those of this section. Here Holden Karnofsky exemplifies it:

I have often been challenged to explain how one could possibly reconcile (a) caring a great deal about the far future with (b) donating to one of GiveWell’s top charities. My general response is that in the face of sufficient uncertainty about one’s options, and lack of conviction that there are good (in the sense of high expected value) opportunities to make an enormous difference, it is rational to try to make a smaller but robustly positive difference, whether or not one can trace a specific causal pathway from doing this small amount of good to making a large impact on the far future. A few brief arguments in support of this position:

  • I believe that the track record of “taking robustly strong opportunities to do ‘something good'” is far better than the track record of “taking actions whose value is contingent on high-uncertainty arguments about where the highest utility lies, and/or arguments about what is likely to happen in the far future.” This is true even when one evaluates track record only in terms of seeming impact on the far future. The developments that seem most positive in retrospect – from large ones like the development of the steam engine to small ones like the many economic contributions that facilitated strong overall growth – seem to have been driven by the former approach, and I’m not aware of many examples in which the latter approach has yielded great benefits.
  • I see some sense in which the world’s overall civilizational ecosystem seems to have done a better job optimizing for the far future than any of the world’s individual minds. It’s often the case that people acting on relatively short-term, tangible considerations (especially when they did so with creativity, integrity, transparency, consensuality, and pursuit of gain via value creation rather than value transfer) have done good in ways they themselves wouldn’t have been able to foresee. If this is correct, it seems to imply that one should be focused on “playing one’s role as well as possible” – on finding opportunities to “beat the broad market” (to do more good than people with similar goals would be able to) rather than pouring one’s resources into the areas that non-robust estimates have indicated as most important to the far future.
  • The process of trying to accomplish tangible good can lead to a great deal of learning and unexpected positive developments, more so (in my view) than the process of putting resources into a low-feedback endeavor based on one’s current best-guess theory. In my conversation with Luke and Eliezer, the two of them hypothesized that the greatest positive benefit of supporting GiveWell’s top charities may have been to raise the profile, influence, and learning abilities of GiveWell. If this were true, I don’t believe it would be an inexplicable stroke of luck for donors to top charities; rather, it would be the sort of development (facilitating feedback loops that lead to learning, organizational development, growing influence, etc.) that is often associated with “doing something well” as opposed to “doing the most worthwhile thing poorly.”
  • I see multiple reasons to believe that contributing to general human empowerment mitigates global catastrophic risks. I laid some of these out in a blog post and discussed them further in my conversation with Luke and Eliezer.

Notes

1. Technological completion timelines game
The technological completion conjecture says that all the basic technological capabilities will eventually be developed. But when is 'eventually', usually? Do things get developed basically as soon as developing them is not prohibitively expensive, or is thinking of the thing often a bottleneck? This is relevant to how much we can hope to influence the timing of technological developments.

Here is a fun game: How many things can you find that could have been profitably developed much earlier than they were?

Some starting suggestions, which I haven't looked into:

Wheeled luggage: invented in the 1970s, though humanity had had both wheels and luggage for a while.

Hot air balloons: flying paper lanterns using the same principle were apparently used before 200 AD, while a manned balloon wasn't used until 1783.

Penicillin: mould was apparently traditionally used for antibacterial properties in several cultures, but lots of things are traditionally used for lots of things. By the 1870s many scientists had noted that specific moulds inhibited bacterial growth.

Wheels: Early toys from the Americas appear to have had wheels (here and pictured is one from 1-900 AD; Wikipedia claims such toys were around as early as 1500 BC). However, wheels were apparently not used for more substantial transport in the Americas until much later.

Image: "Remojadas Wheeled Figurine"

There are also cases where humanity has forgotten important insights and then rediscovered them much later, which strongly suggests that they could have been developed earlier.

2. How does economic growth affect AI risk?

Eliezer Yudkowsky argues that economic growth increases risk. I argue that he has the sign wrong. Others argue that probably lots of other factors matter more anyway. Luke Muehlhauser expects that cognitive enhancement is bad, largely based on Eliezer's aforementioned claim. He also points out that smarter people are different from more rational people. Paul Christiano outlines his own evaluation of the effect of economic growth in general on humanity's long-run welfare. He also discusses the value of continued technological, economic and social progress more comprehensively here.

3. The person-affecting perspective

Some interesting critiques: the non-identity problem, and the observation that taking additional people to be neutral makes other good or bad things neutral too, if you try to be consistent in natural ways.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Is macro-structural acceleration good or bad on net for AI safety? 
  2. Choose a particular anticipated technology. Is its development good or bad for AI safety on net?
  3. What is the overall current level of “state risk” from existential threats? 
  4. What are the major existential-threat “step risks” ahead of us, besides those from superintelligence? 
  5. What are some additional “technology couplings,” in addition to those named in Superintelligence, ch. 14?
  6. What are further preferred orderings for technologies not mentioned in this section?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the desirability of hardware progress, and progress toward brain emulation. To prepare, read “Pathways and enablers” from Chapter 14. The discussion will go live at 6pm Pacific time next Monday 16th March. Sign up to be notified here.

Comments

How high do you think state risks are at the moment?

Isn't there empirical data on this all over the place?

How plausible do you find the key points in this chapter? (see list above)

Bostrom lists a number of serious potential risks from technologies other than AI on page 231, but he apparently stops short of saying that science in general may soon reach a point where it will be too dangerous to be allowed to develop without strict controls. He considers whether AGI could be the tool that prevents these other technologies from being used catastrophically, but the unseen elephant in this room is the total surveillance state that would be required to prevent misuse of these technologies in the near future, for as long as humans remain recognizably human and there is something left to be lost from UFAI. Is the centralized surveillance of everything, everywhere, the future with the least existential risk?

[anonymous]

Penicillin: it is even weirder that they didn't put honey on wounds before that. Alexander's corpse was already preserved in a coffin filled with honey in 323 BCE; they knew it prevented rot.

[anonymous]

Wheels in the mountainous and wooded parts of the Americas would not have had much of a point without draft animals or long, level paths.

Wheelbarrows are useful even if all you have is short mostly-level paths, even if you don't have paths much longer than the width of a construction site. Then once those are in use, the incentive (and the ability) to lengthen and flatten other paths is greatly increased.

Wooded parts of the Americas did have some famously long paths, though I don't know how passable they would be for carts.

Liso

Jared Diamond wrote that North America did not have good animals for domestication (sorry, I don't remember in which book). That could be a showstopper for using wheels on a massive scale.

[anonymous]

Wheelbarrows and hand carts are still massively useful. I used to help out with construction; it is hard enough with wheelbarrows. We did not use them on roads, just around the site.

Wheelbarrows and hand carts are still massively useful.

If you construct things out of bricks or stone, yes. If you live in a wigwam or a hut of sticks and dry leaves, no.

I'd still expect carts to be useful in cultures that use loads of wood, or maybe to transport larger quantities of materials for trade. For example, this Seneca story has a man burning logs down to a size he can easily carry. Some northern peoples used dogs as pack animals, but the only land vehicles I'm aware of were sleds.

I'd still expect carts to be useful in cultures that use loads of wood

Wood generally comes from the forest and carts are not all that useful in a forest...

[anonymous]

Absolutely, I had Tenochtitlan in mind, not Winnetou.

What do you think of Holden's view?

Probably because of the curse of perspective - that is, that funny sensation that others just must have the same point of view as I do, since it is so blatantly obvious - I tend to read Karnofsky as if he were the elected sacrificial goat: the person who ended up having to pretend in public to hold an opinion that everyone agrees it is important for someone to hold, but that no one is in a position, or willing to pay the cost, to actually hold. Chalmers espouses a theory of consciousness that seems to fall short of his gargantuan intellectual capability in papers about anything else, perhaps - some say - to encourage other smart people to tackle the problem of consciousness head-on, so that eventually someone comes up with a good way to formulate it. Same goes for Karnofsky. It's hard to believe he believes it, but I understand his fundamental role in the ecosystem, and am glad that his position is as well defended as it is. The curse of perspective, like the curse of knowledge, can make us very patronizing, unfortunately.

It seems like building a group of people who have some interest in reducing x-risk (like the EA movement) is a strategy that is less likely to backfire and more likely to produce positive outcomes than the technology pathway interventions discussed in this chapter. Does anyone think this is not the case?

What was your favorite part of this section?

The definitions of state risk and step risk, and the idea of defining "levers" one might pull on development (like the macro-structural development accelerator) - these start to break down the technology strategy problem, which previously seemed like a big goopy mess. (It still seems like a big goopy mess, but maybe we can find things in it that are useful.)

Do you think increased prosperity now is good for the long term?

Prosperity going onwards, even incrementally, as a forever process is impossible to maintain. This happened nearly 80 years ago with the big Crash [1929], when the perfect society [USA] couldn't save itself from a mega disaster of major proportions. Yet across the Atlantic, neither Italy nor Germany [after '33] suffered. So it is a matter of applying collective intelligence to this reward system. The 1950s achieved a dream run that stalled 20 years later, and the oil shock was a symptom, not a cause. Collective failure was the debilitating source of this slowdown. During the 80s Australia had a progressive government [Labor] which adjusted to the neo-conservatives, whilst America and the UK caused nothing but grief.

The point is that it is attitude that creates prosperity. Even ideology. We have to think and apply macro models in micro ways, like universal health care, which works in several countries. That too can be considered prosperity, because in New Zealand you won't go broke going to a hospital. Owning a car is not prosperity, but spin-masters want this criterion included for purely monetary advantage. Car ownership is a burden on one's pocket. Government-subsidized public transport is not. It too works. [I used to be a bus driver in Sydney.]

Increased prosperity is a false vision. Secure prosperity might be a better ideological attitude. America's unemployment percentage might be low, but when the basic wage is so low you can't exist on it, there won't be much prosperity for the workers; in countries that have collective bargaining entrenched in the social contract, such as Germany, prosperity is assured. The Anglo-American mercantile model of rapacious, exploitative capitalism guarantees almost next to nothing, whilst the European model, though not perfect even by their own standards, guarantees its participants a better overall quality of life that is missing from purely monetary gauges, which seem to miss the essential: the human equation, which is not a mere adjunct to investors who have no conscience about their earnings possibly coming from child labour in some unfortunate re-developing country. Secure prosperity should be the foundation of society.

[anonymous]

A contribution to the technological completion timelines game:

Anaesthesia: We could have had it literally two thousand years before we did.

It was worthwhile to look up the arguments made in favor of surgical, labor, and other categories of pain before anaesthesia's widespread use in the 20th century. We have modern surgery and the associated decline in mortality and morbidity because of the germ theory of disease (which brought us sterilization), penicillin, anaesthesia, and to a lesser extent, power tools. Anaesthesia only entered the popular consciousness because an American dentist decided 'pain-free dentistry' might be an effective marketing ploy, and it was not accepted in that dentist's lifetime.

Interestingly enough, some of the surgical papers proposing many medical procedures that are now common include something to the effect of "we weren't sure how to proceed so we went to the local hardware store and..."

My takeaway from reading up on the case for pain was as follows: if "something something self-discipline" is the best argument in favor of something, it's a problem waiting to be solved.

[This comment is no longer endorsed by its author]