
Related to: lesswrong.com/lw/fk/survey_results/

I am currently emailing experts in order to estimate, and raise, academic awareness and perception of risks from AI, and to ask them for permission to publish and discuss their responses. User:Thomas suggested also asking you, everyone who is reading lesswrong.com, and I thought this was a great idea. If I ask experts to publicly answer questions, and to have their answers published and discussed here on LW, I think it is only fair to do the same myself.

Answering the questions below will help the SIAI, and everyone interested in mitigating risks from AI, to estimate how effectively those risks are being communicated.

Questions:

  1. Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer 'never' if you believe such a milestone will never be reached.
  2. What probability do you assign to the possibility of a negative/extremely negative Singularity as a result of badly done AI?
  3. What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
  4. Does friendly AI research, as conducted by the SIAI, currently require less/no more/little more/much more/vastly more support?
  5. Do risks from AI outweigh other existential risks, e.g. advanced nanotechnology? Please answer with yes/no/don't know.
  6. Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?

Note: Please do not downvote comments that are solely answering the above questions.

[anonymous]

1) 10% - within fifty years; 50% - no idea; 90% - I don't see myself as confident enough that it will ever be developed. I think it will, given the assumption, but I can't say I'm 90% sure it will.

2) Very, very close to zero, while still being a real number and thus worthy of attention.

3) No idea. It is clearly possible, but other than that I don't know.

4) No idea. The SIAI are extraordinarily secretive about FAI research, for what appear to be (if you accept their initial argument) extremely good reasons. But this could mean that they have got 99.999% of the way to a solution and just need that extra dollar to save the universe, or they could be sitting around playing Minesweeper all day. For what it's worth, I suspect they're doing some interesting, possibly useful work, but I can't know.

5) No

6) No

  1. define "global catastrophe halts progress"
  2. probability of what exactly conditional on what exactly?
  3. probability of what exactly conditional on what exactly?
  4. define "require"
  5. define "outweigh"

ETA: Since multiple people seem to find this comment objectionable for some reason I don't understand, let me clarify a little. For 1 it would make some difference to my estimate whether we're conditioning on literal halting of progress or just significant slowing, and things like how global the event needs to be. (This is a relatively minor ambiguity, but 90th percentiles can be pretty sensitive to such things.) For 2 it's not clear to me whether it's asking for the probability that a negative singularity happens conditional on nothing, or conditional on no disaster, or conditional on badly-done AI, or whether it's asking for the probability that it's possible that such a singularity will happen. All these would have strongly different answers. For 3 something similar. For 4 it's not clear whether to interpret "require" as "it would be nice", or "it would be the best use of marginal resources", or "without it there's essentially no chance of success", or something else. For 5 "outweigh" could mean outweigh in probability or outweigh in marginal value of risk reduction, or outweigh in expected negative value, or something else.

  1. P(human-level AI by ? (year) | no wars ∧ no natural disasters ∧ beneficial political and economic development) = 10%/50%/90%/0%
  2. P(negative Singularity | badly done AI) = ?; P(extremely negative Singularity | badly done AI) = ? (where 'negative' = human extinction and 'extremely negative' = humans suffer).
  3. P(superhuman intelligence within hours | human-level AI on supercomputer with Internet connection) = ?; P(superhuman intelligence within days | human-level AI on supercomputer with Internet connection) = ?; P(superhuman intelligence within < 5 years | human-level AI on supercomputer with Internet connection) = ?
  4. How much money does the SIAI currently (this year) require to be instrumental in maximizing your personal long-term goals (e.g. surviving the Singularity by solving friendly AI): less/no more/little more/much more/vastly more?
  5. What existential risk is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?
  6. Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?
  1. 2025/2040/2080, modulo a fair degree of uncertainty about that estimate (a great deal depends on implementation and unknown details of cognitive science)

  2. Roughly 30% for net negative consequences and 10% for extinction or worse, contingent on the existence of a singularity (note that this is apparently a different interpretation than XiXiDu's), details dependent on singularity type. My estimates would have been higher a couple of years ago, but the concerns behind friendly AI have become sufficiently well-known that I view it as likely that major AI teams will be taking them properly into consideration by the time true AGI is on the table. Negative consequences are semi-likely thanks to goal stability problems or subtle incompatibilities between human and machine implicit utility functions, but catastrophic consequences are only likely if serious mistakes are made. One important contributing factor is that I'm pretty sure a goal-unstable AI is far more likely to end up wireheading itself than tiling the world with anything, although the latter is still a possible outcome.

  3. Can't answer this with any confidence. The answer depends almost entirely on how well bounded various aspects of intelligence are by computational resources, which is a question cognitive science hasn't answered with precision yet as far as I know.

  4. Somewhere between "little more" and "much more" -- but I'd like to see the bulk of that support going into non-SIAI research. The SIAI is doing good work and could use more support, but even a 5% chance of existential consequences is way too important a topic for one research group to monopolize.

  5. Don't know. Not enough knowledge of other existential risks.

  6. Several, the most basic being that I'd expect human-level AI to be developed within five years of the functional simulation of any reasonably large mammalian brain (the brute-force approach, in other words). I'd put roughly 50% confidence on human-level AI within five years if efficient algorithms for humanlike language acquisition or a similarly broad machine-learning problem are developed, but there are a lot more unknowns in that scenario.

  1. 2025, 2040, never.

  2. P(negative Singularity & badly done AGI) = 10%. P(negative Singularity | badly done AGI) ranges from 30% to 97%, depending on the specific definition of AGI. I'm not sure what 'extremely negative' means.

  3. 'Human level' is extremely fuzzy. An AGI could be far above humans in terms of mind design but less capable due to inferior hardware or vice versa.

  4. Vastly more.

  5. Other risks, including nanotech, are more likely, though a FAI could obviously manage nanotech risks.

  6. I'm going to answer this for a Singularity in 5 years, due to my dispute of the phrase 'human-level'. A solution to logical uncertainty would be more likely than anything else I can think of to result in a Singularity in 5 years, but I still would not expect it to happen, especially if the researchers were competent. Extreme interest from a major tech company or a government in the most promising approaches would be more likely to cause a Singularity in 5 years, but I doubt that fits the implied criteria for a milestone.

I have too much meta-uncertainty about my own abilities as a rationalist, and about the general reliability of my mind, to make claims I can honestly classify as probabilities, but my intuitive "strength of anticipation" (which is probably what a large chunk of everyone who THINKS they are giving probabilities is giving anyway) is as follows:

  1. 2025, 2040, 2060

  2. GIVEN the AI is bad? ~100% (again, this is not an actual probability)

  3. 99%?

  4. Immensely, vastly more. A large chunk of the world economy should probably be dedicated to it.

  5. yes.

  6. Plenty. The first that comes to mind is uploading the brain of some fairly smart animal.

  1. 10% for 2015 or earlier, 50% for 2020 or earlier, 90% for 2030 or earlier.

  2. At least 50%.

  3. I think no human level AGI is necessary for that. A well-calibrated worm-level AGI could be enough. I am nearly sure that it is possible; the actual creation (accidental or not) of self-enhancing "worms" is at least 50% probable by 2030. It needn't be a catastrophe, but it may be. 50-50 prior again. The speed is almost certain to be fast. Say, within days after launch.

  4. I am not sure what they could do about this. FAI as a defense will most probably be too late anyway.

  5. Yes.

  6. Many. A theorem-proving Watson is just one of them. Or a WolframAlpha programmer, for example.

A bug. I can count 1, 2, 3, 4, 5, 6, and did so in the above post. The numbers are visible under the Edit option, but not when published. Funny.

[anonymous]

3, I think no human level AGI

Why is there a comma after the 3?

THNX. Now that the comma has gone, the numbers are okay.

  1. 2012/2050/2100
  2. 8%/16% where 16% is Extremely Negative
  3. 0.1%, 0.5%, 5%
  4. Vastly More Support
  5. Yes
  6. Brain uploading - i.e. the capability to upload a mind and retain the level of variables required to create the belief of consciousness.
asr

1) 2025, 2040, No prediction. (I don't trust myself to figure out what the long-tail possibilities look like that fall short of "global catastrophe" but that still might abort AI research indefinitely.)

2) < 5%.

3) < 5% for hours/days. < 10% for self-modification within a few years. About 50% chance for "helps humans develop and build super-human AI within 5 years".

4) No more.

5) No, not even close. Nuclear war or genetically engineered epidemics worry me more.

6) Neuron-level simulation of a mammalian brain, within a factor of 10 of real-time.

[anonymous]
  1. 2030 / 2050 / never (I assign around 10% that not enough people want it enough to ever pull it off)
  2. 20 % / 5 %
  3. negligible / 1% / 20%
  4. don't care. I think this question will be raised throughout the AI community soon enough should it become relevant.
  5. Don't think so. There are other doomsday scenarios, both human-made and natural, with probabilities in the same ballpark.
  6. No. I guess computers will have human-level intelligence, but not human-like intelligence, before we recognize it as such.
  1. 10% at 2030. 50% at 2050. 90% at 2082 (the year I turn 100).

  2. The probability that the Singularity Institute fails in the bad way? Hmm. I'd say 40%.

  3. Hours, 5%. Days, 30%. Less than 5 years, 75%. If it can't do it in the time it takes for your average person to make it through high school, then I don't think it will be able to do it at all. Or in some other respect, it isn't even trying.

  4. much more. I don't think we have too many chefs in the kitchen at this point.

  5. Seriously don't know. It seems like a very open question, like asking if a bear is more dangerous than a tiger. Are we talking worst case? Then no, I think they both end the same for humans. Are we talking likely case? Then I don't know enough about nanotech or AI to say.

  6. Realistically? I suppose if, in the future, consumer-grade computers had the computational power of our current best supercomputers, and there were some equivalent of the X-Prize for developing a human-level AI, I would expect someone to win the prize within 5 years.

[anonymous]
  1. 2060 (10%), 2110 (50%), 2210 (90%).

  2. It depends on what you mean by "badly done". If it's "good, but not good enough", 99%. (It's possible for an AI that hasn't been carefully designed for invariant-preserving self-modification to nevertheless choose an invariant that we'd consider nice. It's just not very likely.)

  3. Hours: vanishingly small. Days: 5%. Less than 5 years: 90%. (I believe that the bottleneck would be constructing better hardware. You could always try to eat the Internet, but it wouldn't be very tasty.)

  4. More.

  5. Yes - mostly because true existential risks are few and far between. There are only a few good ways to thoroughly smash civilization (e.g. global thermonuclear war, doomsday asteroids which we'll see coming).

  6. No. This is essentially asking for a very hard problem that almost, but not quite, requires the full capability of human intelligence to solve. I suspect that, like chess and Jeopardy and Go, every individual very hard problem can be attacked with a special-case solution that doesn't resemble human intelligence. (Even things like automated novel writing/movie production/game development. Something like perfect machine translation is trivial in comparison.) And of course, the hardest problem we know of - interacting with real humans for an unbounded length of time - is just the Turing test.

  1. 2030/2060/2100
  2. 10%/0.5%
  3. 0.01%/0.1%/20%
  4. little more
  5. don't know
  6. Invention of an adaptable algorithm capable of making novel and valuable discoveries in science and mathematics given limited resources.

Some annotations:

2.) I assign a lower probability to an extremely negative outcome because I believe it to be more likely that we will just die rather than survive and suffer. And in the case that someone only gets their AI partly right, I don't think it will be extremely negative. All in all, an extremely negative outcome seems rather unlikely. But negative (we're all dead) is already pretty negative.

4.) I believe that the SIAI currently only needs a little more support because they haven't said what they would do with a lot more support (money...) right now. I also believe we need a partly empirical approach, as suggested by Ben Goertzel, to learn more about the nature of intelligence.

5.) I don't have the education and time to research how likely other existential risks are, compared to risks from AI.

Since I'm in a skeptical and contrarian mood today...

  1. Never. AI is Cargo Cultism. Intelligence requires "secret sauce" that our machines can't replicate.
  2. 0
  3. 0
  4. Friendly AI research deserves no support whatsoever
  5. AI risks outweigh nothing because 0 is not greater than any non-negative real number
  6. The only important milestone is the day when people realize AI is an impossible and/or insane goal and stop trying to achieve it.
[anonymous]

Upvoted because this appears to be an honest answer to the question, but it would be useful if you said why you consider it an absolute certainty that no machine will ever show human-level intelligence. Personally I wouldn't assign probability 0 even to events that appear to contradict the most basic laws of physics, since I don't have 100% confidence in my own understanding of physics...