Inspired by the talk by Anna Salamon, I decided to do my own calculations about the future. This post is a place for discussion of my calculations and others'.

As far as I can tell, there are two likely paths for the development of intelligence.

World 1) Fast and conceptually clean. Intelligence is a concrete quantity, like the number of neutrons in a reactor. I assign this a 20% chance.

World 2) Slow and messy. Intelligence is contextual, much like, say, fitness in evolutionary biology. Proofs of a system's intelligence can only be carried out by a much more intelligent entity, since they have to take the complex environment into account. I'd assign about a 60% chance to this.

World 3) Other. The remaining 20% covers all the scenarios that fit neither of the above.

Both types of AI have the potential to change the world, and both could destroy humanity if we don't handle them correctly, so the rewards are the same in both cases.

For World 1, I'll go with the same figures as Anna Salamon, because I can't find strong arguments against them (and it will serve as a refresher):

Probability of an eventual AI (before humanity dies otherwise) = 80%

Probability that AI will kill us = 80%

Probability that we manage safeguards = 40%

Probability that current work will save us = 30%

So we get 7% * 20%, which gives us 1.4%.
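Spelling out the multiplication behind that figure (a rough sketch only; I'm treating the probabilities above as independent, which is itself a simplification):

```python
# World 1 (clean AI), Salamon-style figures, multiplied as if independent.
p_ai        = 0.80  # eventual AI before humanity dies otherwise
p_kill      = 0.80  # AI without safeguards kills us
p_safeguard = 0.40  # we manage safeguards
p_work      = 0.30  # current safety work is what saves us

p_saved_in_world1 = p_ai * p_kill * p_safeguard * p_work  # ~0.077, i.e. roughly 7%
p_world1 = 0.20
print(p_saved_in_world1 * p_world1)  # ~0.015, i.e. the ~1.4% above, after rounding
```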

Now for World 2. Assume we have an SIAI-like organisation working on the problem of making messy AI Friendly, or at least as Friendly as possible. In this world it seems less likely that we would create AI at all, and harder to create safeguards, since they would have to act over a longer period of time.

Probability of an eventual AI (before humanity dies otherwise) = 70%

Probability that AI will kill us (and/or that we will have to give up our humanity due to hardscrabble evolution) = 80%

Probability that we manage safeguards = 30%

Probability that current work will save us = 20%

So we get a factor of roughly 3%, and 3% times 60% gives 1.8%.
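The same multiplication for World 2, under the same independence assumption:

```python
# World 2 (messy AI), my figures, multiplied as if independent.
p_ai        = 0.70  # eventual AI before humanity dies otherwise
p_kill      = 0.80  # AI kills us, or forces us to give up our humanity
p_safeguard = 0.30  # we manage safeguards
p_work      = 0.20  # current safety work is what saves us

p_saved_in_world2 = p_ai * p_kill * p_safeguard * p_work  # ~0.034, i.e. roughly 3%
p_world2 = 0.60
print(p_saved_in_world2 * p_world2)  # ~0.02, i.e. the ~1.8% above, after rounding
```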

Both are multiplied by the same factor of 7 billion lives times n, so that can be discounted. They weigh pretty much the same, or as near as dammit for back-of-the-envelope calculations, considering that my meta-uncertainty is high as well.

They do, however, interfere: the right action in World 1 is not the same as the right action in World 2. Working on Friendliness for conceptually clean AI while suppressing all work and discussion on messy AI hurts World 2, as it increases the chance we end up with messy UFAI. There is no Singularity Institute for messy AI in that world, and I doubt there will be one if SIAI becomes somewhat mainstream in AI communities, so giving money to SIAI hurts World 2; it might have a small negative expected cost in lives. Working on Friendliness for messy AI wouldn't interfere with the clean-AI world, as long as it didn't do anything stupidly experimental before the messy/clean divide is resolved. This tips the scales somewhat towards working on messy FAI and how it should be deployed. World 3 is so varied that I can't really say much about it.

So for me, the most valuable information to seek is more information on the messy/clean divide. This is why I keep asking whether SIAI has a way of making sure it is on the right track with the decision-theory/conceptually-clean path.

So how do the rest of you run the numbers on the singularity?

Comments

I am going to say something here that is probably not going to be popular, but why are we so worried about humanity surviving?

Isn't the continuation of intelligent life (regardless its form), more important than the type of intelligent life?

(real questions, BTW)

We care about the survival of complex fragile humane values and the sort of intelligent life they value, which category is broader than humanity but far narrower than 'all intelligent minds'.

The "complex fragile humane values" post seems to be muddled to me. I don't agree with it. Robin Hanson doesn't agree with it (see the comments). It appears to be a controversial topic.

Figuring out what we want is the hardest problem. Intelligence isn't IMHO the most important thing. I also want consciousness, empathy, love, artistic and aesthetic sense, fun, and a continuing ability to evolve.

OK... Yes... I see that those are important to the current intelligence dominating the planet.

And, I guess that they are not necessarily a part of any possible intelligence. It is not hard to find humans who lack empathy, for instance.

However, all of those things are currently defined from our anthropomorphic point of view. It may be that another intelligence would not need them, or that they would exist in forms we might not recognize (for instance, some of my family are stunned that I find puzzles such as Sudoku, or doing propositional logic, to be fun)... But that is really just quibbling over the semantics of those concepts.

On the one hand, you keep speaking of "intelligent life" as being the only thing of importance, from which I infer that you believe intelligence is the only quality you value. On the other hand, you criticize my adding things to the list besides intelligence as being not inclusive enough. You seem to be arguing simultaneously for two viewpoints bracketing mine.

I'm probably misinterpreting you; but why do you keep saying "intelligent life" instead of "life"?

I am a little unsure how to answer this response.

I am speaking of intelligent life as the only thing of importance. I mentioned that the things you list are important to us (humanity), and I said that these qualities may not be important for all forms of intelligent life. It may be that humanity will have run its course and will need to make room for another form of intelligence. I find that a little disturbing, but I would rather see intelligence continue in some form than not at all.

And, I said that even among humans the qualities you mention are neither universal nor homogeneous. So I am not sure what you are asking... It may be the case that I am arguing for two different points of view.

Oops, I edited my comment before noticing you'd replied to it.

The fact that you keep saying "intelligent life" indicates to me that you think intelligence is the one important thing, and my list is too long. But you also say the list is anthropomorphic, implying it is incomplete.

I think now that you're still saying intelligence is the One Big Thing that you care about. We (in the futurist community) do this all the time. We routinely say, for instance, that pigs are more intelligent than cows, and therefore it's ethically worse to eat pork than to eat beef. It seems to me like a very tall person assuming that tallness is the most important virtue.

As for why I say "intelligent life" rather than just "life": I think that life itself will continue in one fashion or another regardless of what we do.

I do think that intelligence is important now that we have it, and in some eyes that may be similar to a tall man assuming that tallness is the most important virtue (although I find the analogy a stretch - tallness is obviously a disadvantage at many times, and I could probably find a good reason to favor shortness... but that aside...).

I don't know why intelligence would not be (or, to use a word that I hate, should not be) the most valued characteristic. Is there a reason that intelligence should not be the most important factor or characteristic of life?

taw:

Downvoted for pulling numbers out of your ass and going entirely against the outside view about something you know nothing about.

Statements like "Probability that AI will kill us = 80%" are entirely devoid of content.

going entirely against the outside view about something you know nothing about.

Downvoted for misuse of 'the outside view'. Choosing a particular outside view on a topic which the poster allegedly 'knows nothing about' would be 'pulling a superficial similarity out of his arse'.

Replace the 'outside view' reference with the far more relevant reference to 'expert consensus'.

The whole point of the outside view is that "pulling a superficial similarity out of your arse" often works better than delving into complicated object-level arguments. At least a superficial similarity is more entangled with reality, more objective, than that 80% number I made up in the shower. If you want to delude yourself, it's easier to do with long chains of reasoning than with surface similarities.

The "Probability that AI will kill us = 80%" is not a figure the poster pulled out of their ass. It is Anna Salmon's figure from her talk: "Probability that AI without safeguards will kill us = 80%"- and the poster attributed it to her.

Anna Salamon may well have pulled the figure out of her ass - but that seems like a different issue.

I wrote the post while tired last night, probably not a good idea.

The numbers were not what I was trying to get across (you could make them a lot smaller across the board and I wouldn't have a problem). What matters is the general shape of the problem, and the way the right actions for each world interfere with each other.

Do you think our knowledge of AI is so limited that we shouldn't even try to think about shaping its development?

Roko:

Working on Friendliness for Messy AI

What would you do?

A lot depends on what sort of system an AI implementation ends up having to be. These are only examples of things you might need to do:

1) Prove that the messy AI does not have the same sort of security holes that modern computers do. A true AI botnet would be a very scary thing.

2) If it has a component that evolves, prove that optimizing what you want optimized is an evolutionarily stable strategy for whatever is evolving.

There might also be work done to try to computationally categorize how real neural nets differ from computers, and how human brains differ from those of other animals - anything to help us push back the "here be dragons" signs on portions of messy AI, so we know what we can and can't use. And if we can't figure out how to use the human-level bits safely, we shouldn't use them until we have no other choice.

There are also questions about how best to deploy AI (closed source/design gives you more control; open source means more minds checking the code).

If this is a bit jumbled, it is because messy AI covers a huge number of possibilities and we are really pretty ignorant about it.

Roko:

By the way, what do you mean by "messy AI"?

The short version: AI that uses experimentation (as well as proof) to navigate the space (or subspaces) of Turing machines in its internals.

Experimentation implies to me things like compartmentalization of parts of the AI in order to contain mistakes, and potential conflict between compartments, since they haven't been proved to work well together. So, vaguely brain-like.

I.e. provable correctness.

We can already see fairly clearly how crippling a limitation that is. Ask a robot builder whether their software is "provably correct" and you will likely get laughed back into kindergarten.

I made this a while back to make the math easier. The default values should be Salamon's numbers. The calculations argue for more spending on AI research, but huge returns happen with almost any sort of existential risk research. Successful AI has the advantage of solving all other existential risk problems though.
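Something along these lines (a hypothetical sketch only, not the linked tool; the function name and arguments are illustrative, with defaults set to the Salamon-style figures from the post):

```python
# Hypothetical sketch of an expected-value calculator for this kind of estimate.
# Not the tool linked above; names and defaults are illustrative only.
def expected_lives_saved(p_ai=0.8, p_kill=0.8, p_safeguard=0.4, p_work=0.3,
                         p_world=0.2, lives_at_stake=7e9):
    """Expected lives saved by current safety work under one 'world' scenario."""
    return p_ai * p_kill * p_safeguard * p_work * p_world * lives_at_stake

print(expected_lives_saved())                          # World 1 defaults: ~1.1e8 lives
print(expected_lives_saved(0.7, 0.8, 0.3, 0.2, 0.6))   # World 2 figures:  ~1.4e8 lives
```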

Working on Friendliness for conceptually clean AI while suppressing all work and discussion on messy AI hurts World 2, as it increases the chance we end up with messy UFAI.

So don't suppress that sort of research unless you're sure it's harmful. I don't see why you'd want to suppress "messy" AI research in a world where "clean" AI is the solution (or vice-versa). If one type isn't the solution, then it won't work. You're also neglecting that future-successful ideas can draw inspiration or useful information from research on unsuccessful ideas. Jürgen Schmidhuber says he gets ideas from both messy and clean AI.

If you look at the advice people are giving in the thread on advice to AI makers, it mainly revolves around clean AI, with most other research discouraged. I believe this is indicative of the stance SIAI takes and promotes at the moment. It is not the only stance they could take.

There is no reason why both research projects shouldn't be able to coexist (apart from the funding conflict which is a minor conflict).

I think SIAI is focusing on clean AI because most academic research is on messy AI. If you look at the margins, one more unit of research on clean AI is probably more beneficial than one more unit of research on messy AI.

Oh and there's the selection effect of LW posters replying in an AI thread.

There are very few people working on making messy AI Friendly, though. Most of it is charging forward recklessly.

So we get 7% * 20%, which gives us 1.4%.

Not only have you not said what you are calculating, but none of those numbers have appeared earlier in your post, so I really have no idea what that 1.4% is supposed to mean.

It was based on the talk by Anna Salamon that I linked to originally. It is required watching to understand what I was going on about.

The 7% was the chance she calculated that the current work done by SIAI would save the world, based on multiplying out all the probabilities given above: that AI is created, that AI destroys the world, that humans do something to guard against it, and so on.

The 20% was the chance that we are in World 1 - that is, that clean AI is possible - which I stated earlier. So in total that is my back-of-the-envelope estimate of the chance that helping SIAI would help save humanity (ignoring the negative factor if we are in World 2).

I have a ton of posts that I don't post, because I never get them up to scratch. This should have been one of those.

So how do the rest of you run the numbers on the singularity?

I think we are probably screwed (and that is as close as I can get to an actual quantification with what I know).

Even so, the more important figures for the purpose of my decision making would be how much difference my own possible actions could make on the margin. (It is probably still worth shutting up to do the impossible.)