We make decisions based on our expectations of the future 10 to 20 years out. However, we don't have good systematic ways of making such predictions; we rely on pundits and experts in their own fields, who may ignore changes in other fields.

We are currently experiencing a period of low growth in the western world, which shows that reliable economic growth is not an iron law. The projects and organisations we start should therefore depend on what we expect of the future. If the parts of the world we live in or have influence over are unlikely to have a wealthy future, we might do well to consider whether we can do things to improve our prospects, since our ability to shape the future depends on our wealth.

A lot of current discussion on lesswrong and research by institutions such as FHI is predicated on a high-wealth, high-tech, high-energy society. But what happens if this doesn't come to pass? There is not much point discussing existential risk or life extension if the next 40 years have us scrabbling for resources to transition away from oil, coping with the demographic transition, and weathering social upheaval caused by continued outsourcing and mechanisation. Even if concentrating on existential risks were the right thing to do rationally, it would be politically untenable and might become a low-status activity if there are more obvious problems.

So it is not obvious to me that directly grappling with existential risk is the best thing we can do. It is currently neglected, but maybe the best thing we can do for the future is to grapple with problems on the 10-20 year time scale, so that growth is maintained at the current rate or, if a severe depression is likely, it is mitigated as much as possible. This would give our future selves more resources to cope with existential risk.

I single out the 10-20 year time scale because problems on that scale are not normally tackled by government or business, yet we still have some inkling of what we will be dealing with. These issues are currently neglected, as both have incentives to look at problems on the 3-5 year scale and so may be ignoring low-hanging fruit.

So what can we do to shape the future?

Have a unified model of the earth - everyone already models the future when making decisions about what subjects to study or where to live. They just do it in their heads, generally using snippets of ideas from news sources that ignore important issues. Even experts who model the future only look at the particular aspects they understand. So we could integrate different people's expertise into a coherent model of the world.

We should be able to improve on this. Perhaps a wikipedia-inspired approach would work, where people are allowed to suggest changes to the model; perhaps several competing models should be created. (A rough sketch of what such a shared model might look like is given after this list.)

Improving social institutions rationality - if we manage to create useful models of the future world, we can create charities and companies to deal with the problems we identify. However, it will be important to make sure that the charities do actually solve the problems we want and don't become self-serving. So some experimentation with different forms of control might be useful.

Creating technologies - the obvious one, which might include better methods of communication or collaboration, or open-sourcing the design of something to avoid too much control.
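To make the unified-model idea above a little more concrete, here is a minimal, purely illustrative Python sketch of what a wiki-style shared model might look like as a data structure: named assumptions with estimates, change proposals from contributors, and a reviewer step that merges accepted proposals. All names and numbers here are made up for illustration; this is not a description of any existing tool.

```python
# Toy sketch of a "wiki-style" shared world model: named assumptions with
# numeric estimates, plus change proposals that can be reviewed and merged.
# Everything here is hypothetical illustration, not an existing system.
from dataclasses import dataclass, field


@dataclass
class Assumption:
    name: str            # e.g. "annual GDP growth, OECD"
    estimate: float      # current consensus value
    rationale: str       # why the model currently holds this value


@dataclass
class ChangeProposal:
    assumption: str      # name of the assumption to revise
    new_estimate: float
    argument: str        # evidence offered by the proposer


@dataclass
class WorldModel:
    assumptions: dict = field(default_factory=dict)
    pending: list = field(default_factory=list)

    def add(self, a: Assumption) -> None:
        self.assumptions[a.name] = a

    def propose(self, p: ChangeProposal) -> None:
        self.pending.append(p)

    def merge(self, p: ChangeProposal) -> None:
        # A reviewer accepts the proposal; the shared estimate is updated.
        current = self.assumptions[p.assumption]
        current.estimate = p.new_estimate
        current.rationale = p.argument
        self.pending.remove(p)


model = WorldModel()
model.add(Assumption("annual GDP growth, OECD", 0.02, "historical trend"))
proposal = ChangeProposal("annual GDP growth, OECD", 0.01, "recent slowdown")
model.propose(proposal)
model.merge(proposal)
print(model.assumptions["annual GDP growth, OECD"].estimate)  # 0.01
```

The interesting design questions are exactly the ones raised above: who gets to merge a proposal, and whether several competing models should be maintained in parallel rather than a single canonical one.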

While none of these activities has the force multiplier of creating a friendly artificial intelligence, they do have the advantage of being something the average person can see the utility of, so they might become more mainstream and get more resources thrown at them.

Comments

So it is not obvious to me that directly grappling with existential risk is the best thing we can do.

This surely depends on what your aims are. If you are a normal biological organism, spending very much of your time grappling with existential risk is a pretty powerful sign that your memetic immune system has been compromised - and that your brain has been hijacked by apocalyptic memes.

It is currently neglected

Near-total neglect + diminishing returns = decision determined

maybe the best thing we can do for the future is to grapple with problems on the 10-20 year time scale, so that growth is maintained at the current rate or, if a severe depression is likely, it is mitigated as much as possible

I doubt it.

Improving social institutions rationality

Taboo that word.

it will be important to make sure that the charities do actually solve the problems we want and don't become self-serving.

Better: make their being self-serving actually solve the problems we want.

avoid too much control

You mean unified control, or similar?

Near-total neglect + diminishing returns = decision determined

I don't see how diminishing returns fit in here... that says nothing about the initial payoff for investing.

I doubt it.

That article seems like a giant exercise in motivated cognition.

Improving social institutions rationality

Taboo that word.

Rationality? Well, I meant it in the sense of the organisation following its stated goals.

Better: make their being self-serving actually solve the problems we want.

I meant self-serving as in using resources given to the charity to maintain "useless" jobs or projects. But I can see how you misinterpreted my comment.

You mean unified control, or similar?

I meant avoiding agencies gaining monopolies on power that may not act as we would wish.

diminishing returns

This is only relevant because we are talking about the relative utility of focusing on issues, as you say:

I single out the 10-20 year time scale because problems on that scale are not normally tackled by government or business, yet we still have some inkling of what we will be dealing with. These issues are currently neglected, as both have incentives to look at problems on the 3-5 year scale and so may be ignoring low-hanging fruit.

I think other organizations are thinking in the 10-20 year range, particularly the largest, most important companies and non-profits. This is a guess.

That article seems like a giant exercise in motivated cognition.

This is a good point.

What do you have to say about its conclusion?

Rationality? Well, I meant it in the sense of the organisation following its stated goals.

I totally misread what you wrote. I read "rationality" as "rationally". My criticism there does not apply at all. If it had read "institutions' rationality" I might not have made that mistake.

I meant avoiding agencies gaining monopolies on power that may not act as we would wish.

This seems like an undue focus on one of many ways that an organization could be suboptimal.

When one lives under the King of England, it's easy to think that the Articles of Confederation would be a good way of organizing government.

[anonymous]

A lot of current discussion on lesswrong and research by institutions such as FHI is predicated on a high-wealth, high-tech, high-energy society.

It sounds like a completely irrelevant consideration, actually. What would you expect to see (as opposed to having grounds for arguing, as you go on to do after this quote) if it were the other way around?

[This comment is no longer endorsed by its author.]