Bostrom lists a number of serious potential risks from technologies other than AI on page 231, but he apparently stops short of saying that science in general may soon reach a point where it is too dangerous to be allowed to develop without strict controls. He considers whether AGI could be the tool that prevents these other technologies from being used catastrophically, but the unseen elephant in this room is the total surveillance state that would be required to prevent misuse of these technologies, both in the near future and for as long as humans remain recognizably human and there is something left to lose to UFAI. Is the centralized surveillance of everything, everywhere the future with the least existential risk?
Penicillin: it is even weirder that people didn't put honey on wounds before that. Alexander's corpse was already being preserved in a coffin filled with honey in 323 BCE; they knew it prevented rot.
Wheels in the mountainous and wooded parts of the Americas would not have had much of a point without draft animals or long, level paths.
Wheelbarrows are useful even if all you have is short mostly-level paths, even if you don't have paths much longer than the width of a construction site. Then once those are in use, the incentive (and the ability) to lengthen and flatten other paths is greatly increased.
Wooded parts of the Americas did have some famously long paths, though I don't know how passable they would have been for carts.
Jared Diamond wrote that North America had no good animals for domestication (sorry, I don't remember in which book). That could be a showstopper for using the wheel on a large scale.
Wheelbarrows and hand carts are still massively useful. I used to help out with construction; it is hard enough even with wheelbarrows. We did not use them on roads, just around the site.
Wheelbarrows and hand carts are still massively useful.
If you construct things out of bricks or stone, yes. If you live in a wigwam or a hut of sticks and dry leaves, no.
I'd still expect carts to be useful in cultures that use loads of wood, or maybe to transport larger quantities of materials for trade. For example, this Seneca story has a man burning logs down to a size he can easily carry. Some northern peoples used dogs as pack animals, but the only land vehicles I'm aware of were sleds.
I'd still expect carts to be useful in cultures that use loads of wood
Wood generally comes from the forest, and carts are not all that useful in a forest...
Probably because of the curse of perspective (that funny sensation that others just must share my point of view, since it is so blatantly obvious), I tend to read Karnofsky as if he were the elected sacrificial goat: the person who ended up having to pretend in public to hold a position everyone agrees it is important that someone hold, but that no one is in a position, or willing to pay the cost, to actually hold. Chalmers espouses a theory of consciousness that seems to fall short of the gargantuan intellectual capability he shows in papers on everything else, perhaps (some say) to encourage other smart people to tackle the problem of consciousness head-on, so that eventually someone comes up with a good way to formulate it, and so on. Same goes for Karnofsky. It's hard to believe he believes it, but I understand his fundamental role in the ecosystem, and I am glad that his position is as well defended as it is. The curse of perspective, like the curse of knowledge, can unfortunately make us very patronizing.
It seems like building a group of people who have some interest in reducing x-risk (like the EA movement) is a strategy that is less likely to backfire and more likely to produce positive outcomes than the technology pathway interventions discussed in this chapter. Does anyone think this is not the case?
The definitions of state risk and step risk, and the idea of "levers" one might pull on development (like the macro-structural development accelerator), start to break down the technology strategy problem, which previously seemed like a big goopy mess. (It still seems like a big goopy mess, but maybe we can find things in it that are useful.)
Prosperity going even incrementally onwards as a forever process is impossible to maintain. This happened nearly 80 years ago with the big Crash [1929], when the perfect society [USA] couldn't save itself from a mega disaster of major proportions. Yet across the Atlantic neither Italy nor Germany [after '33] suffered. So it is a matter of applying collective intelligence to this reward system. The 1950s achieved a dream run that stalled 20 years later, and the oil shock was a symptom, not a cause. Collective failure was the debilitating source of this slowdown. During the 80s Australia had a progressive government [Labor] which adjusted to the neo-conservatives, whilst America and the UK caused nothing but grief.

The point is that it is attitude that creates prosperity. Even ideology. We have to think and apply macro models in micro ways, like universal health care, which works in several countries. That too can be considered prosperity, because in New Zealand you won't go broke going to a hospital. Owning a car is not prosperity, but spin-masters want this criterion included for purely monetary advantage. Car ownership is a burden on one's pocket. Government-subsidized public transport is not, and it too works. [I used to be a bus driver in Sydney.]

Increased prosperity is a false vision; secure prosperity might be a better ideological attitude. America's unemployment rate might be low, but when the basic wage is so low that you can't exist on it, there won't be much prosperity for the workers. In countries such as Germany, where collective bargaining is entrenched in the social contract, prosperity is assured. The Anglo-American mercantile model of rapacious, exploitative capitalism guarantees next to nothing, whilst the European model, though not perfect even by its own standards, guarantees its participants a better overall quality of life that purely monetary gauges miss. They miss the essential: the human equation, which is not a mere adjunct to investors who have no conscience about whether their earnings come from child labour in some unfortunate re-developing country. Secure prosperity should be the foundation of society.
Contribution to the technological conjectures game:
Anaesthesia: We could have had it literally two thousand years before we did.
It was worthwhile to look up the arguments made in favor of surgical, labor, and other categories of pain before anaesthesia's widespread use in the 20th century. We have modern surgery and the associated decline in mortality and morbidity because of the germ theory of disease (which brought us sterilization), penicillin, anaesthesia, and to a lesser extent, power tools. Anaesthesia only entered the popular consciousness because an American dentist decided "pain-free dentistry" might be an effective marketing ploy, and it was not accepted in that dentist's lifetime.
Interestingly enough, some of the surgical papers proposing many medical procedures that are now common include something to the effect of "we weren't sure how to proceed so we went to the local hardware store and..."
My takeaway from reading up on the case for pain was as follows: if "something something self-discipline" is the best argument in favor of something, it's a problem waiting to be solved.
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the twenty-sixth section in the reading guide: Science and technology strategy. Sorry for posting late—my car broke down.
This post summarizes the section, and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: “Science and technology strategy” from Chapter 14
Summary
Another view
There is a common view which says we should not act on detailed abstract arguments about the far future like those of this section. Here Holden Karnofsky exemplifies it:
Notes
1. Technological completion timelines game
The technological completion conjecture says that all the basic technological capabilities will eventually be developed. But when is 'eventually', usually? Do things get developed basically as soon as developing them is not prohibitively expensive, or is thinking of the thing often a bottleneck? This is relevant to how much we can hope to influence the timing of technological developments.
Here is a fun game: How many things can you find that could have been profitably developed much earlier than they were?
Some starting suggestions, which I haven't looked into:
Wheeled luggage: invented in the 1970s, though humanity had had both wheels and luggage for a while.
Hot air balloons: flying paper lanterns using the same principle were apparently used before 200AD, while a manned balloon wasn't used until 1783.
Penicillin: mould was apparently traditionally used for antibacterial properties in several cultures, but lots of things are traditionally used for lots of things. By the 1870s many scientists had noted that specific moulds inhibited bacterial growth.
Wheels: Early toys from the Americas appear to have had wheels (here and pictured is one from 1–900 AD; Wikipedia claims such toys were around as early as 1500 BC). However, wheels were apparently not used for more substantial transport in the Americas until much later.
Image: "Remojadas Wheeled Figurine"
There are also cases where humanity has forgotten important insights and rediscovered them much later, which strongly suggests that they could have been developed earlier.
2. How does economic growth affect AI risk?
Eliezer Yudkowsky argues that economic growth increases risk. I argue that he has the sign wrong. Others argue that lots of other factors probably matter more anyway. Luke Muehlhauser expects that cognitive enhancement is bad, largely based on Eliezer's aforementioned claim. He also points out that smarter people are different from more rational people. Paul Christiano outlines his own evaluation of the effect of economic growth in general on humanity's long-run welfare. He also discusses the value of continued technological, economic and social progress more comprehensively here.
3. The person affecting perspective
Some interesting critiques: the non-identity problem, and the observation that taking additional people to be neutral makes other good or bad things neutral too, if you try to be consistent in natural ways.
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about the desirability of hardware progress, and progress toward brain emulation. To prepare, read “Pathways and enablers” from Chapter 14. The discussion will go live at 6pm Pacific time next Monday 16th March. Sign up to be notified here.