Dr. Jubjub: Sir, I have been running some calculations and I’m worried about the way our slithy toves are heading.

Prof. Bandersnatch: Huh? Why? The toves seem fine to me. Just look at them, gyring and gimbling in the wabe over there.

Dr. Jubjub: Yes, but there is a distinct negative trend in my data. The toves are gradually losing their slithiness.

Prof. Bandersnatch: Hmm, okay. That does sound serious. How long until it becomes a problem?

Dr. Jubjub: Well, I’d argue that it’s already having negative effects but I’d say we will reach a real crisis in around 120 years.

Prof. Bandersnatch: Phew, okay, you had me worried there for a moment. But it sounds like this is actually a non-problem. We can carry on working on the important stuff – technology will bail us out here in time.

Dr. Jubjub: Sir! We already have the technology to fix the toves. The most straightforward way would be to whiffle their tulgey wood but we could also...

Prof. Bandersnatch: What?? Whiffle their tulgey wood? Do you have any idea what that would cost? And besides, people won’t stand for it – slithy toves with unwhiffled tulgey wood are a part of our way of life.

Dr. Jubjub: So, when you say technology will bail us out you mean you expect a solution that will be cheap, socially acceptable and developed soon?

Prof. Bandersnatch: Of course! Prof. Jabberwock assures me the singularity will be here around tea-time on Tuesday. That is, if we roll up our sleeves and don’t waste time with trivialities like your tove issue.

Maybe it’s just me but I feel like I run into a lot of conversations like this around here. On any problem that won’t become an absolute crisis in the next few decades, someone will take the Bandersnatch view that it will be more easily solved later (with cheaper or more socially acceptable technology) so we shouldn’t work directly on it now. The way out is forward - let’s step on the gas and get to the finish line before any annoying problems catch up with us.

For all I know, Bandersnatch is absolutely right. But my natural inclination is to take the Jubjub view. I think the chances of a basically business-as-usual future for the next 200 or 300 years are not epsilon. They may not be very high but they seem like they need to be seriously taken into account. Problems may prove harder than they look. Apparently promising technology may not become practical. Maybe we'll have the capacity for AI in 50 years - but need another 500 years to make it friendly. I'd prefer humanity to plan in such a way that things will gradually improve rather than gradually deteriorate, even in a slow-technology scenario.

65 comments

I felt that the nonsense words in this post were a good idea, both in avoiding explicit references to mindkilling subjects, and because they sound cute.

Business as usual for the next 300 years with, say, 3% growth per year makes us around seven thousand times richer. This can't happen without technologies that are so disruptive that we don't have business as usual.
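As a rough sanity check of that figure (a minimal sketch; the 3% rate and 300-year horizon are simply the numbers assumed above):

```python
# Back-of-the-envelope check of the "seven thousand times richer" claim:
# compound growth of 3% per year sustained for 300 years.
growth_rate = 0.03   # assumed annual growth rate
years = 300          # assumed horizon

multiplier = (1 + growth_rate) ** years
print(f"Wealth multiplier after {years} years: {multiplier:,.0f}x")
# Prints roughly 7,098x, i.e. "around seven thousand times richer".
```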

Even if we don't get AI, eugenics or intelligence enhancement which creates lots of people at least as smart as John von Neumann would give us a massively improved capacity to solve problems.

Maybe we'll have the capacity for AI in 50 years - but need another 500 years to make it friendly.

Unfortunately, if that's true, we all lose.

Probably. But put me down for a last-ditch Butlerian Jihad.

Interesting. I didn't know that there were Dune sequels: http://en.wikipedia.org/wiki/Dune:_The_Butlerian_Jihad

They're not very good. (The House books are decent, though)

Agreed that we should treat the chances of a non-singularity future as being significant, but even with no singularity technological advance is a normal part of our society. Bandersnatch can be right even if Jabberwock is wrong.

even with no singularity technological advance is a normal part of our society

Depends what time scale you're talking about.

[anonymous]

And on what you mean by 'advance'.

The time frames mentioned in the post were 50, 120, 200, 300 and 500 years. Over all of those scales I would expect significant technological advance.

Take any 500-year window that contains the year 2014. How typical would you say it is of all 500-year intervals during which tool-using humans existed?

How typical does it need to be? We generally discount data more the further away from the present it is, for exactly this reason.

The current 500-year window needs to be VERY typical if it's the main evidence in support of the statement that "even with no singularity technological advance is a normal part of our society".

This is like someone in the 1990s saying that constantly increasing share price "is a normal part of Microsoft".

I think technological progress is desirable and hope that it will continue for a long time. All I'm saying is that being overconfident about future rates of technological progress is one of this community's most glaring weaknesses.

[anonymous]

The sheer number of ways the last 500 years are atypical in ways that will never be repeated does boggle the imagination.

Microsoft quit growing because of market saturation and legal challenges. The former seems unlikely with regards to technology, and the latter nearly impossible. It is possible for tech to stop growing, yes, but the cause of it would need to be either a massive cultural shift across most of the world, or a civilization-collapsing event. It took a very long time to develop a technological mindset, even with its obvious superiority, so I would expect it to take even longer to eliminate it.

I fully agree.

Are you thinking on the margin? You don't control the priority function for humanity, only for yourself and those you influence. Even if slithiness were the most pressing issue, it doesn't follow that everyone should be working on it.

I think it's more about how annoying it is when someone shuts down or tries to shut down someone working on a shorter term issue by saying it won't matter anyway. It's safer to have people working both on the singularity and on how to not die from diseases even if the singularity will cure all diseases when it happens.

The question is why should we care about slithy toves? How high is the utility of protecting them?

You need to answer those questions to get me to care about slithy toves.

And besides, people won’t stand for it – slithy toves with unwhiffled tulgey wood are a part of our way of life.

In general, culture changes when there is a need for the change. You don't have to worry too much about technology being "socially acceptable" to present-day values.

Prof. Bandersnatch: Of course! Prof. Jabberwock assures me the singularity will be here around tea-time on Tuesday. That is, if we roll up our sleeves and don’t waste time with trivialities like your tove issue.

I don't think that's the case. As far as I remember from the last survey, the average LW participant predicts the singularity after 2100. I see very few people arguing that we shouldn't fight aging because we will have the singularity before it matters.

But my natural inclination is to take the Jubjub view. I think the chances of a basically business-as-usual future for the next 200 or 300 years are not epsilon

It depends on what you mean by business-as-usual. In general, history shows us that a lot of changes do happen and things don't stay constant.

The question is why should we care about slithy toves? How high is the utility of protecting them? You need to answer those questions to get me to care about slithy toves.

In my parable, the two scientists agree that slithiness is important. If I were to convince you of it we would of course have to exit the parable and discuss some particular real world problem on the merits.

It depends on what you mean by business-as-usual.

Which in turn depends on the particular Jubjub problem we are discussing. If it's global warming, for example, then developments in energy technology will be important.

Which in turn depends on the particular Jubjub problem we are discussing. If it's global warming, for example, then developments in energy technology will be important.

By business-as-usual, do you mean that we should plan on the cost of solar energy continuing to halve every 7 years?

I don't have the expertise to predict anything of interest about future developments in solar technology. My general inclination is simply that we should have plans that do not lead to disaster if hoped-for technological advances fail to materialize. If we could make our civilization robust enough that it could continue to function for an indefinite time without any significant technological advances, that would be awesome.

A robust thing doesn't change when you exert pressure, until you exert enough pressure to break it. Resilient systems do change when you apply pressure, but they don't break.

Robust things tend to break in awful ways. Resilience is a better principle for designing systems that you want to survive.

I don't think the concern of making society work in a scenario without significant technological advances is pressing. We had a lot of significant technological advances in the last 100 years, and even if Peter Thiel is right and we aren't doing much innovation at the moment, we still do change things. It makes much more sense to focus on surviving scenarios with significant technological advances.

It makes sense to avoid betting society on a single technological change, but doing future planning that expects no technological change is not very helpful.

The distinction you are making between robustness and resilience was not previously familiar to me but seems useful. Thank you.

Obviously, "no significant technological advances" is a basically impossible scenario. I just mean it as a baseline. If you're able to handle techno-stagnation in all domains you're able to handle any permutation of stagnating domains.

I think the distinction is quite important. People frequently centralize systems to make them more robust. Too-big-to-fail banks are more robust than smaller banks.

On the other hand, they don't provide resilience. If one breaks down, you're screwed.

Italy's political system isn't as robust as Saudi Arabia's, but it is probably more resilient.

There are often cases where systems get more robust if you reduce diversity but that also reduces resilience.

If you're able to handle techno-stagnation in all domains you're able to handle any permutation of stagnating domains.

Not necessarily. If technology A poses risk X and you need technology B to prevent risk X, you are screwed in a world with A but not B, yet okay in a world with neither A nor B.

When doing future planning, it's better to take a bunch of different scenarios of how the future could look and see what your proposals do in each of those, than to take the status quo as a scenario.

they don't provide resilience. If one breaks down, you're screwed.

Everything can be broken. It's a misleading approach to think of robust systems as breakable and resilient systems as not breakable.

Both kinds of systems will break with sufficient damage. Ceteris paribus you can't even say which one will break first. The difference is basically in how they deal with incoming force: the robust systems will ignore it and resilient systems will attempt to adjust to it. But without looking at specific circumstances you can't tell beforehand which kind will be able to survive longer or under more severe stress.

There is also the related concept of graceful degradation, by the way.

Everything can be broken. It's a misleading approach to think of robust systems as breakable and resilient systems as not breakable.

I think that model works quite well for a lot of practical interventions where people do things to increase robustness that cost resilience.

But you are right that not every robust system will break earlier than every resilient one.

In general, culture changes when there is a need for the change.

That's not at all clear, i.e., there isn't a general optimization process that optimizes culture for what's needed. There's memetic evolution, but that has the usual problems. In particular states and even entire civilizations have collapsed in the past.

That depends on the particular cultural change.

In a case like using genetic engineering to produce superior humans, there are pressures. If a few people are doing it and they get benefits, there is cultural pressure for other people to also want the benefits.

In this respect a hypothetical positive future works exactly opposite to a hypothetical negative future (apocalypse). The latter causes action now. The point is: if you lack a risk/cost model, then the more powerful future wins. Here obviously the singularity wins over e.g. climate change.

Compare to http://xkcd.com/989/ (if everyone thinks a great future is coming, it will not come; a kind of self-defeating prophecy).

Yes, you'd better have a risk/cost-model.

I avoided mentioning any specific Jubjub problems to try to minimize mind-killing effects. But since you've brought up climate change, which is usually coded as a left-wing issue in our political discourse, I'd just like to mention that there are also Jubjub problems with a very different political coding, e.g. dysgenic pressure, a reactionary right-wing issue.

[gwern]

I'm amused that when I was reading this, it didn't even occur to me that this might be about global warming - I just assumed it was about eugenics.

But fundamentally, I do think that the basic observation is right: our planning horizons should be fairly short, because we just don't know enough about future technology and developments to spend large amounts of resources on things with low option value. There are countless past crises that did not materialize or were averted by other developments; to give an imperfect list off the top of my head: horse shit in the streets of cities, the looming ice age, the degradation of the environment with industrialization, Kessler catastrophe, Y2K, and Hannu Kari's Internet apocalypse.

I am reminded of a story Kelly tells in The Clock of the Long Now about a Scandinavian country which set aside an island for growing big trees for making wooden warships in the 1900s, which was completely wrong since by that point, warships had switched to metal, and so the island became a nature preserve; this is a cute story of how their grossly mistaken forecasts had an unanticipated benefit, but being mistaken is not usually a good way of going about life, and the story would be a lot less cute if the action had involved something more serious like taxation or military drafts or criminal justice or economy-wide regulation.

a Scandinavian country which set aside an island for growing big trees for making wooden warships in the 1900s, which was completely wrong since by that point, warships had switched to metal, and so the island became a nature preserve;

This was probably Sweden planting lots of oaks in the early 19th century. 34,000 oaks were planted on Djurgården for shipbuilding in 1830. As it takes over a hundred years for an oak to mature, they weren't used, and that bit of the island is now a nature preserve. Amusingly, when the parliament was deciding this issue, it seems some of the members already doubted whether oak would remain a good material to build ships from for so long.

Also observe that 1900s ≠ 19th century, so they weren't that silly.

Had some trouble finding English references for this, but this (p 4) gives some history and numbers are available in Swedish Wikipedia.

[Pfft]

Also observe that 1900s ≠ 19th century, so they weren't that silly.

I guess gwern meant the construction was planned to take place in the 1900s.

Quite possible. I didn't intend for that sentence to come across in a hostile way.

Since in Swedish we usually talk about the 1800s and the 1900s instead of the 19th and 20th century, I thought something could have been lost in translation somewhere between the original sources, the book by Kelly and gwern's comment, which is itself ambiguous as to whether it is intended as (set aside an island for growing big trees for making wooden warships) (in the 1900s) or as (set aside an island for growing big trees for (making wooden warships in the 1900s)). (I assumed the former)

As usual, gwern has made a great comment. But I'm going to bite the bullet and come out in favor of the tree plan. Let's go back to the 1830s.

My fellow Swedes! I have a plan to plant 34,000 oak trees. In 120 years we will be able to use them to build mighty warships. My analysis here shows that the cost is modest while the benefits will be quite substantial. But, I hear you say, what if some other material is used to build warships in 120 years? Well, we will always have the option of using the wood to build warships and if we won't take that option it will be because some even better option will have presented itself. That seems like a happy outcome to me. And wood has been useful for thousands of years - it will surely not be completely obsolete in a century. We could always build other things from it, or use it for firewood or designate the forest as a recreational area for esteemed noblemen such as ourselves. Or maybe the future will have some use for forests we cannot yet anticipate [carbon sequestration]. I don't see how we can really go wrong with trees.

Back to the present. I'm concerned with avoiding disasters. "The benefits of this long-term plan were not realized because something even better happened" is only a disaster if the cost of the plan was disastrous. Of course, some people argue that the costs of addressing some of Dr. Jubjub's problems are disastrous and that's something we can discuss on the merits.

My analysis here shows that the cost is modest while the benefits will be quite substantial.

Do show your analysis :-) Don't forget about discounting and opportunity costs :-D

For the sake of argument I'm assuming the plan made prima facie sense and was only defeated by technological developments. Sufficiently familiarizing myself with the state of affairs in 1830s Sweden to materially address the question would, I think, be excessively time-consuming.

be excessively time-consuming

Correct, but then you shouldn't handwave into existence an assertion which is really at the core of the dispute.

The issue is whether this was a good decision and let's say "good" is defined as low-cost and high-benefit. You are saying "let's assume 'the cost is modest while the benefits will be quite substantial' and then, hey, it's a good decision!".

Correct, but then you shouldn't handwave into existence an assertion which is really at the core of the dispute.

The argument I am trying to approach is about proposals which make sense under the assumption of little or no relevant technological development but may fail to make sense once disruptive new technology enters the picture. I'm assuming the tree plan made sense in the first way - the cost of planting and tending trees is such and such, the cost of quality wood is such and such and the problems with importing it (our enemies might seek to control the supply) are such and such. Other projects we could spend the same resources on have such and such cost benefit-evaluations of their own. And so on and so forth. In this thought experiment you could assume a very sophisticated analysis which comes up smelling like roses. The only thing it doesn't take into account is disruptive new technology. That's the specific issue I'm trying to address here so that's why I'm willing to assume all the other stuff works for the sake of argument.

In actual history, maybe the tree plan never even made any sense to begin with - maybe wood was cheap and plentiful and planting the oak trees was difficult and expensive. For all I know the whole thing was a ridiculous boondoggle which didn't make sense under any assumption. But that's just an uninteresting case which need not detain us.

a Scandinavian country which set aside an island for growing big trees for making wooden warships in the 1900s,

One could also see this as part of a diversified investment strategy. Putting aside some existing resources for future use is surely not a bad idea. The intended purpose may have been 'wrong'. But as you say: it can have an unanticipated benefit.

One could also see this as part of a diversified investment strategy.

And that seeing would be an excellent example of a post-factum justification of an error.

or an argument that we should act so that even if we are in error the consequences are not dire.

I submit that none of us has a clue as to the consequences in a hundred years of what we are doing now.

Really? Is this something you've said before and I've missed it? If true, it has huge implications.

I don't think I've said it before in these words but I may have expressed the same idea.

Why do you think there are huge implications?

If I believe that, I would forget about AI, x-risk and just focus on third-world poverty.

Well, it's up to you to decide how much the uncertainty of outcome should influence your willingness to do something. It's OK to think it's worthwhile to follow a certain path even if you don't know where would it ultimately lead.

"Uncertainty" is different than "no clue." Or maybe I'm assuming too much about what you mean by "no clue" - to my ear it sounds like saying we have no basis for action.

Large amounts of uncertainty including the paradoxical possibility of black swans == no clue.

it sounds like saying we have no basis for action

You have no basis for action if you are going to evaluate your actions on the basis of consequences in a hundred years.

[anonymous]

You don't have more information about the hundred-year effects of your third-world poverty options than you do about the hundred-year effects of your AI options.

Effects of work on AI are all about the long run. Working on third-world poverty, on the other hand, has important and measurable short-run benefits.

[anonymous]

Good point!

Sure, if you intended it for one special purpose and just got lucky with another purpose, it would be a good excuse. We don't know what the Scandinavians reasoned other than the possibly often-retold warship story.

The lesson: if you reserve resources for a specific purpose, either make sure to allow more general usage or reserve multiple different resources for other purposes too.

And I was more specific-- I thought it was a response to my comment that if you expect a Singularity within a hundred years, you shouldn't be bothering with most eugenics.

It was, in part. But I certainly also had climate change in mind, where I've argued the Jubjub case for years with my friends. I've also seen the "Future tech will make your concerns irrelevant" viewpoint in discussions of resource depletion and overpopulation.

Oddly, I usually see dysgenic pressure as an argument from left-wingers. It's a pretty idiosyncratic opinion, though, so I could imagine a right-wing variant as well.

[Shmi]

On any problem that won’t become an absolute crisis in the next few decades, someone will take the Bandersnatch view that it will be more easily solved later (with cheaper or more socially acceptable technology) so we shouldn’t work directly on it now. The way out is forward - let’s step on the gas and get to the finish line before any annoying problems catch up with us.

Ah, the black and white fallacy. The approach more likely to succeed is to start mapping out and working toward the potential solutions, among other things, dedicating proportionately more resources to the more immediate and dangerous problems, while still keeping in mind the long-term issues. There are exceptions, of course, but Adams's law of slow-moving disasters seems to hold pretty well.

Ah, the fallacy of gray. When any particular person evaluates any particular issue, the details are going to matter. That doesn't mean it's a fallacy to identify two generic approaches to problems like this.

[Shmi]

That doesn't mean it's a fallacy to identify two generic approaches to problems like this.

No, but it's a fallacy to insist that they are the only two approaches possible.

I'm by no means insisting on that. Of course you can hedge your bets.

If we assume a scenario without AGI and without a Hansonian upload economy, it seems quite likely that there are large currently unexpected obstacles for both AGI and uploading. Computing power seems to be just about sufficient right now (if we look at supercomputers), so it probably isn't the problem. So it will probably be a conceptual limitation for AGI and a scanning or conceptual limitation for uploads.

Conceptual limitations for uploads seem unlikely, because we're just taking a system, cutting it up into smaller pieces, and solving differential equations on a computer. Lots of small problems to solve, but no major conceptual ones. We could run into problems related to measuring quantum systems when doing the scanning (I believe Scott Aaronson wrote something about this suspicion lately). Note that this also puts a bound on the level of nano-technology we could have achieved: if we had neuron-sized scanning robots, we would be able to scan a brain and start the Hansonian scenario. Note that this does not preclude slightly larger-scale manufacturing technologies, which would probably come from successive miniaturisations of 3D printers.

Conceptual difficulties in creating AGI are more or less expected by everyone around here, but if AGI is delayed by over a century we should get quite worried about other existential risks on our way there. Major contenders are global conflict and terrorism, especially involving nuclear, nano-technological or biological weapons. Even if nano-technology does not reach the level described in sci-fi, the bounds given above still allow for sufficient development to make advanced weapons a question of blueprints and materials. Low-probability, huge-impact risks from global warming are also worth mentioning, if only to note that there are a lot of other people working on them.

What does this tell us about analysing long-term risks like the slithy toves? Well, I don't know anything about slithy toves, but let's look at the eugenics stuff discussed earlier and consider how it would influence the probability of major global conflicts. The question is not whether it would increase the risk of global conflict, but how much it would increase it. On the other hand, if AI safety is already taken care of, it becomes a priority to develop AGI as soon as humanly possible, and then it would be really good if "humanly possible" were a sigma or so better than today. Still, it wouldn't be great, since most of the risks we would be facing at that point would be quite small for each year (as it seems today; we could of course get other information on our way there). It's really quite hard to say what the proper balance between more intelligent people and more available time would be at that point. We could say that if we've already had a century to solve the problem, more time can't be that useful; on the other hand, we could say that if we still haven't solved the problem in a century, there are loads of sequential steps to get right and we need all the time we can buy.

tldr: No AGI & No Uploads => most X-risk from different types of conflict => eugenics or any kind of superhumans increases X-risk due to risk of war between enhanced and old-school humans