Visualizing "The Future According to You"

The Uncertain Future is a future technology and world-modeling project by the Singularity Institute for Artificial Intelligence. Its goal is to allow those interested in future technology to form their own rigorous, mathematically consistent model of how the development of advanced technologies will affect the evolution of civilization over the next hundred years. To facilitate this, we have gathered data on what experts think is going to happen, in such fields as semiconductor development, biotechnology, global security, Artificial Intelligence and neuroscience. We invite you, the user, to read about the opinions of these experts, and then come to your own conclusion about the likely destiny of mankind.

Link: theuncertainfuture.com


I'm curious to see other people's quiz results and interpretations, preferably formed before seeing mine, so that averaging results is more meaningful. If you're inclined that way, please go take the quiz before reading below :-)




I went through this quiz a while ago, and just did it again now. Mostly I pick answers with big error bars, trying to have my 90% (green and red) lines contain the green and red lines of all the example quotes that map onto concrete suggested values, unless I think someone is just glaringly wrong (a tiny sketch of this appears after the list below). Here is what the quiz says I should expect, if it isn't flawed and I'm properly consistent:

  • A5: Probability of brain-emulation-based AI on or before 2070 is 70%.

  • A6: Cumulative probability of some AI on or before 2070 (by eyeball) is 90%.

  • A8: Cumulative probability of a nuclear or non-nuclear accident that curtails research/progress (so the point is moot) on or before 2070 (by eyeball) is 48%.

  • A9 and A10: At each decade mark -> cumulative probability that smart AI will have arrived -> cumulative probability that AI, IA, or a more prosaic catastrophe will have forced a major recalculation:

    • 2020 -> 12% -> 23%
    • 2030 -> 37% -> 52%
    • 2040 -> 66% -> 78%
    • 2050 -> 85% -> 91%
    • 2060 -> 92% -> 96%
    • 2070 -> 95% -> 98%
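
Here is the tiny sketch of that wide-error-bars strategy I mentioned above; the expert intervals are made up, purely to illustrate the rule:

```python
# Sketch of the wide-error-bars strategy: take as my own 90% interval whatever
# range spans all of the experts' suggested ranges for a given quantity.
# (These intervals are invented, just to show the rule.)
expert_intervals = [(2030, 2065), (2040, 2090), (2025, 2075)]

my_low = min(low for low, _ in expert_intervals)
my_high = max(high for _, high in expert_intervals)
print(f"my 90% interval: {my_low} to {my_high}")   # 2025 to 2090
```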

I don't think I "deeply endorse" these numbers, because arriving at them felt like making gross simplifications for the sake of producing simple linear values.

For example, I had put a relatively large amount of probability on the idea that if designer babies come about, the net effect might be that more people are intellectually shallow but very "shiny", such that research actually declines. My green line for "Q9-3" was off the graph into negative territory, because the question didn't seem designed to represent pessimism about our ability to design babies wisely. When imagining nuclear scenarios, I didn't just have uncertainty about whether they would happen, but about whether they would hurt research (for obvious reasons) or help it (e.g. small exchanges that create hostility and fear and are used to justify military research spending).

In a couple of other places it seemed like the time axis was being jerked around, where I was being anchored on 2070 as "the end of time" when maybe I should be considering the broad sweep of time out to "2694 and beyond" (to pick a random date). Also, it was striking to me that there were no questions about things like per capita joule consumption (i.e. peak oil issues) and what this is likely to do to R&D progress, nor about the interconnectedness of the economy and the way algorithmic trading is increasingly important (and potentially bug-prone).

One thing I find interesting (though I might just not be correctly cashing out what the cumulative probabilities mean) is that it looks like I should expect a serious surprise, with a median arrival time around 2029. And the later the surprise arrives, the more likely it is to be an AI surprise.
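
Here is a rough sanity check on that reading, done outside the applet (a sketch only: it treats my eyeballed cumulative percentages as exact and interpolates linearly between decade marks):

```python
# Rough sanity check of the reading above, using the eyeballed cumulative
# percentages from my table and interpolating linearly between decades.
years = [2020, 2030, 2040, 2050, 2060, 2070]
p_ai  = [12, 37, 66, 85, 92, 95]   # cumulative P(smart AI has arrived), in %
p_any = [23, 52, 78, 91, 96, 98]   # cumulative P(AI, IA, or prosaic catastrophe), in %

# Median arrival of the first big surprise: the year where p_any crosses 50%.
for (y0, y1), (q0, q1) in zip(zip(years, years[1:]), zip(p_any, p_any[1:])):
    if q0 < 50 <= q1:
        median_year = y0 + (y1 - y0) * (50 - q0) / (q1 - q0)
        print(f"median surprise year: ~{median_year:.0f}")   # ~2029
        break

# Of the surprise probability accumulated by each date, how much is AI?
for year, ai, any_p in zip(years, p_ai, p_any):
    print(f"{year}: {ai / any_p:.0%} of the cumulative surprise mass is AI")
```

The 50% crossing lands around 2029, and the AI share of the cumulative surprise mass climbs from about half in 2020 to nearly all of it by 2070, which is where the "later surprises are probably AI surprises" reading comes from.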

...which almost suggests to me that maybe the optimal political policy to advocate is for things that reduce the likelihood and scope of prosaic disasters (arms race prevention, de-coupled financial markets, etc.) while delaying AGI work... like being against government funding (or megadollar philanthropic donations) for actual programming, and instead promoting international confidence-building measures or something.

Am I reading my results wrong? Help appreciated!

...which almost suggests to me that maybe the optimal political policy to advocate is for things that reduce the likelihood and scope of prosaic disasters ...

Am I reading my results wrong?

No, I think you are reading them right. If your projection places the AI singularity more than a few decades out (given business-as-usual), then some other serious civilization-collapsing disaster is likely to arise before the FAI arrives to save us.
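
A toy way to see the shape of that argument (the 1.5%-per-year hazard rate below is invented purely for illustration, not an estimate):

```python
# Toy competing-risks illustration: with some constant annual chance of a
# civilization-level disaster, the probability that disaster strikes before
# an AI that arrives T years from now grows quickly with T.
hazard = 0.015   # assumed annual disaster probability (made up for the example)

for t_years in (20, 40, 60, 80):
    p_disaster_first = 1 - (1 - hazard) ** t_years
    print(f"AI in {t_years} years -> P(disaster comes first) ~ {p_disaster_first:.0%}")
```

Push the AI arrival out past mid-century and, even with a modest hazard rate, the disaster-first branch becomes the more likely one.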

But the scenario that most frightens me is that the near prospect of an FAI might itself be the trigger for a collapse, due to something like de Garis's 'Artilect War'.

De Garis has Luddites on one side of his war. That group has historically been impoverished and has lacked power. If they start causing trouble, the government may well just declare them undesirable terrorists and stomp on them.

Look at the environmental movement today: they are usually fairly peace-loving. It doesn't seem terribly likely to me that their descendants will go into battle.

Every time I type in a box and click in the next box, the previous box reverts to its former value. How do I make it stop?

Edit: This is the least of my problems. The script doesn't work very well on my OS, and after about Q6 it stopped telling me what the values I was putting in the boxes were supposed to mean. So, forget it.

In 2013 Java in the browser is effectively dead. Any chance this tool will be made available in a different form that isn't based on applets?

In 2013 Java in the browser is effectively dead. Any chance this tool will be made available in a different form that isn't based on applets?

Is the source for the current version available? Is the tool considered useful still and so worth reproducing?