Stop taking the numbers so damn seriously, and think in terms of subjective probability distributions [...], Michael Anissimov
I think it's worth giving the full quote:
Stop taking the numbers so damn seriously, and think in terms of subjective probability distributions, discard your mental associations between numbers and absolutes, and my choice to say a number, rather than a vague word that could be interpreted as a probability anyway, makes sense. Working on www.theuncertainfuture.com, one of the things I appreciated the most were experts with the intelligence to make probability estimates, which can be recorded, checked, and updated with evidence, rather than vague statements like “pretty likely”, which have to be converted into probability estimates for Bayesian updating anyway. Futurists, stick your neck out! Use probability estimates rather than facile absolutes or vague phrases that mean so little that you are essentially hedging yourself into meaninglessness anyway.
Total agreement from me, needless to say.
The claim that AIs will foom, basically, reduces to the claim that the difficulty of making AGI is front-loaded: that there's a hump to get over, that we aren't over it yet, and that once it's passed things will get much easier. From an outside view, this makes sense; we don't yet have a working prototype of general intelligence, and the history of invention in general indicates that the first prototype is a major landmark after which the pace of development speeds up dramatically.
But this is a case where the inside and outside views disagree. We all know that AGI is hard, but the people actually working on it get to see the challenges up close. And from that perspective, it's hard to accept that it will suddenly become much easier once we have a prototype: the challenges seem so daunting, the possible breakthroughs are hard to visualize, and, on some level, if AI suddenly became easy it would trivialize the challenges that researchers are facing now. So the AGI researchers imagine an AI-Manhattan Project, with resources to match the challenges as they see them, rather than an AI-Kitty Hawk, with a few guys in a basement who are lucky enough to stumble on the final necessary insight.
Since a Manhattan Project-style AI would have lots of resources to spend on ensuring safety, the safety issues don't seem like a big deal. But if the first AGI were instead made by some guys in a basement, they wouldn't have those resources; and from that perspective, pushing hard for safety measures is important.
For those of you who are interested, some of us folks from the SoCal LW meetups have started working on a project that seems related to this topic.
We're working on building a fault tree analysis of existential risks, with a particular focus on producing a detailed analysis of uFAI. I have no idea if our work will at all resemble the decision procedure SIAI used to prioritize their uFAI research, but it should at least form a framework for the broader community to discuss the issue. Qualitatively, you could use the work to discuss the possible failure modes that would lead to a uFAI scenario; quantitatively, you could use the framework and your own supplied probabilities (or aggregated probabilities from the community, domain experts, etc.) to crunch the numbers and/or compare uFAI to other posited existential risks (a minimal sketch of that kind of calculation follows below).
At the moment, I'd like to find out generally what anyone else thinks of this project. If you have suggestions, resources or pointers to similar/overlapping work you want to share, that would be great, too.
I have a lot on my plate right now, but I'll try to write up my own motivating Fermi calculations if I get the chance to do so soon.
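To give a concrete flavor of what such a framework could look like, here is a minimal fault-tree sketch in Python. The basic events, the gate structure, and every number in it are illustrative assumptions of mine, not anything the project has actually produced, and independence between events is assumed purely for simplicity.

```python
# Minimal fault-tree sketch: combine user-supplied subjective probabilities
# for hypothetical basic events into a top-level uFAI estimate.
# Event names and numbers are illustrative assumptions, not project output.

def p_and(*ps):
    """Probability that all of several independent events occur."""
    result = 1.0
    for p in ps:
        result *= p
    return result

def p_or(*ps):
    """Probability that at least one of several independent events occurs."""
    none = 1.0
    for p in ps:
        none *= (1.0 - p)
    return 1.0 - none

# Basic events: replace these with your own subjective estimates.
p_agi_built         = 0.5                # advanced AGI is built at all
p_hard_takeoff      = 0.3                # it rapidly self-improves past human level
p_unfriendly_values = 0.4                # its goals end up incompatible with ours
p_safeguards_fail   = p_or(0.5, 0.3)     # boxing fails OR oversight fails

# Top event (AND gate): a uFAI catastrophe requires all of the above.
p_ufai_catastrophe = p_and(p_agi_built, p_hard_takeoff,
                           p_unfriendly_values, p_safeguards_fail)

print(f"P(uFAI catastrophe) under these assumptions: {p_ufai_catastrophe:.3f}")
```

The point is the structure: anyone can plug in their own leaf probabilities (or community aggregates) and see how the top-level number moves.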
I agree that a write-up of SIAI's argument for the Scary Idea, in the manner you describe, would be quite interesting to see.
However, I strongly suspect that when the argument is laid out formally, what we'll find is that
-- given our current knowledge about the pdf's of the premises in the argument, the pdf on the conclusion is verrrrrrry broad, i.e. we can't conclude hardly anything with much of any confidence ...
So, I think that the formalization will lead to the conclusion that
-- "we can NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly lead to bad consequences for humanity"
-- "we can also NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly NOT lead to bad consequences for humanity"
I.e., I strongly suspect the formalization
-- will NOT support the Scary Idea
-- will also not support complacency about AGI safety and AGI existential risk
I think the conclusion of the formalization exercise, if it's conducted, will basically be to reaffirm common sense, rather than to bolster extreme views like the Scary Idea....
-- Ben Goertzel
I would additionally like to see addressed:
Great post.
If you haven't seen SIAI's new overview you might find it relevant. I'm quite favorably impressed by it.
This might be an opportunity to use one of those Debate Tools, to see if one of them can be useful for mapping the disagreement.
I would like to have a short summary of where various people stand on the various issues.
The people:
Eliezer
Ben
Robin Hanson
Nick Bostrom
Ray Kurzweil?
Other academic AGI types?
Other vocal people on the net like Tim Tyler?
The issues:
How likely is a human-level AI to go FOOM?
How likely is an AGI developed without "friendliness theory" to have values incompatible with those of humans?
How easy is it to make
If you want probabilities for these things to be backed up by mathematics, you're going to be disappointed, because there aren't any. The best probabilities, or rather the only probabilities we have here, were produced using human intuition. You can break down the possibilities into small pieces, generate probabilities for the pieces, and get an overall probability that way, but at the base of the calculations you just have order-of-magnitude estimates. You can't provide formal, strongly defensible probabilities for the sub-events, because there just isn...
I'm not asking for defensible probabilities that would withstand academic peer review. I'm asking for decision procedures including formulas with variables that allow you to provide your own intuitive values to eventually calculate your own probabilities. I want the SIAI to provide a framework that gives a concise summary of the risks in question and a comparison with other existential risks. I want people to be able to carry out results analysis and distinguish risks posed by artificial general intelligence from other risks like global warming or grey goo.
There aren't any numbers for a lot of other existential risks either. But one is still able to differentiate between those risks and the risk from unfriendly AI based on the logical consequences of other established premises, like the Church–Turing–Deutsch principle. Should we be equally concerned with occultists trying to summon world-changing supernatural powers?
+1
Unfortunately, this is a common conversational pattern.
Q. You have given your estimate of the probability of FAI/cryonics/nanobots/FTL/antigravity. In support of this number, you have here listed probabilities for supporting components, with no working shown. These appear to include numbers not only for technologies we have no empirical knowledge of, but particular new scientific insights that have yet to occur. It looks very like you have pulled the numbers out of thin air. How did you derive these numbers?
A. Bayesian probability calculations.
Q. Could you please show me your working? At least a reasonable chunk of the Bayesian network you derived this from? C'mon, give me something to work with here.
A. (tumbleweeds)
Q. I remain somehow unconvinced.
If you pull a number out of thin air and run it through a formula, the result is still a number pulled out of thin air.
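One way to make that point concrete: when a conclusion is the product of several component estimates that are each only known to within a factor of a few, the output inherits and compounds that vagueness. The sketch below uses five made-up component probabilities purely for illustration.

```python
# "Thin air in, thin air out": chain five component probabilities, each of
# which might plausibly be off by a factor of three in either direction,
# and look at how wide the range of the final product becomes.
# The component values are made up purely for illustration.

point_estimates = [0.5, 0.3, 0.4, 0.6, 0.2]

mid, low, high = 1.0, 1.0, 1.0
for p in point_estimates:
    mid  *= p
    low  *= p / 3.0              # every component was overestimated 3x
    high *= min(p * 3.0, 1.0)    # every component was underestimated 3x

print(f"point estimate:  {mid:.4f}")
print(f"plausible range: {low:.5f} to {high:.2f}")
# The product looks precise, but the range spans several orders of magnitude.
```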
If you want people to believe something, you have to bother convincing them.
It's my professional opinion, based on extensive experience and a developed psychological model of human rationality, that such a paper wouldn't be useful. That said, I'd be happy to have you attempt it. I think that your attempt to do so would work perfectly well for your and our purposes, at least if you are able to do the analysis honestly and update based on criticism that you could get in the comments of a LW blog post.
Do you have problems only with the conciseness, mathiness, and reference-abundance of current SIAI explanatory materials, or do you think that there are a lot of points and arguments not yet made at all? I ask this because, except for the Fermi paradox, every point you listed was addressed multiple times in the FOOM debate and in the sequences.
Also, what is the importance of the Fermi paradox in AI?
To be more precise: you can't tell concerned AI researchers to read through hundreds of posts of marginal importance. You have to have some brochure that lets experts and educated laymen read up on a summary of the big picture, including precise and compelling methodologies they can follow to come up with their own estimates of the likelihood of existential risks posed by superhuman artificial general intelligence. If the decision procedure gives them a different probability due to differing priors and values, then you can tell them to read up on further material so as to update their priors and values accordingly.
The importance of the Fermi paradox is that it is the only data we can analyze that comes close to empirical criticism of a Paperclip maximizer and of general risks from superhuman AIs with non-human values, without working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then one other technological civilisation in the observable universe should be sufficient to leave observable (now or soon) traces of technological tinkering. Given the absence of any signs of intelligence out there, especially paperclippers burning the cosmic commons, we can conclude that unfriendly AI might not be the most dangerous existential risk that we should look out for.
...every point you listed was addressed multiple times in the FOOM debate and in the sequences.
I believe there probably is an answer, but it is buried under hundreds of posts about marginal issues. All those writings on rationality, there is nothing I disagree with. Many people know about all this even outside of the LW community. But what is it that they don't know that EY and the SIAI knows? What I was trying to say is that if I have come across it then it was not c...
No. It's really complex, and nobody in-the-know had time to really spell it out like that.
Actually, you can spell out the argument very briefly. Most people, however, will immediately reject one or more of the premises due to cognitive biases that are hard to overcome.
A brief summary:
Any AI that's at least as smart as a human and is capable of self-improving, will improve itself if that will help its goals
The preceding statement applies recursively: the newly-improved AI, if it can improve itself, and it expects that such improvement will help its goals, will continue to do so.
At minimum, this means any AI as smart as a human, can be expected to become MUCH smarter than human beings -- probably smarter than all of the smartest minds the entire human race has ever produced, combined, without even breaking a sweat.
INTERLUDE: This point, by the way, is where people's intuition usually begins rebelling, either due to our brains' excessive confidence in themselves, or because we've seen too many stories in which some indefinable "human" characteristic is still somehow superior to the cold, unfeeling, uncreative Machine... i.e., we don't understand just how our i...
So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea) -- the current owner of the Overcoming Bias blog, and Eli's former collaborator on that blog -- fails to buy the Scary Idea "due to cognitive biases that are hard to overcome." I find that a bit ironic.
Like Robin and Eli and perhaps yourself, I've read the heuristics and biases literature also. I'm not so naive as to make judgments about huge issues, that I think about for years of my life, based strongly on well-known cognitive biases.
It seems more plausible to me to assert that many folks who believe the Scary Idea, are having their judgment warped by plain old EMOTIONAL bias -- i.e. stuff like "fear of the unknown", and "the satisfying feeling of being part a self-congratulatory in-crowd that thinks it understands the world better than everyone else", and the well known "addictive chemical high of righteous indignation", etc.
Regarding your final paragraph: Is your take on the debate between Robin and Eli about "Foom" that all Robin was saying boils down to "la la la I can't hear you" ? If so I would suggest that maybe YOU are the on...
So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea) -- the current owner of the Overcoming Bias blog, and Eli's former collaborator on that blog -- fails to buy the Scary Idea "due to cognitive biases that are hard to overcome." I find that a bit ironic
Welcome to humanity. ;-) I enjoy Hanson's writing, but AFAICT, he's not a Bayesian reasoner.
Actually: I used to enjoy his writing more, before I grokked Bayesian reasoning myself. Afterward, too much of what he posts strikes me as really badly reasoned, even when I basically agree with his opinion!
I similarly found Seth Roberts' blog much less compelling than I did before (again, despite often sharing similar opinions), so it's not just him that I find to be reasoning less well, post-Bayes.
(When I first joined LW, I saw posts that were disparaging of Seth Roberts, and I didn't get what they were talking about, until after I understood what "privileging the hypothesis" really means, among other LW-isms.)
I'm not so naive as to make judgments about huge issues, that I think about for years of my life, based strongly on well-known cognitive biases.
See, that's a perfect ex...
Regarding your final paragraph: Is your take on the debate between Robin and Eli about "Foom" that all Robin was saying boils down to "la la la I can't hear you" ?
Good summary. Although I would have gone with "la la la la If you're right then most of expertise is irrelevant. Must protect assumptions of free competition. Respect my authority!"
What I found most persuasive about that debate was Robin's arguments - and their complete lack of merit. The absence of evidence is evidence of absence when there is a motivated competent debater with an incentive to provide good arguments.
From Ben Goertzel,
And I think that theory is going to emerge after we've experimented with some AGI systems that are fairly advanced, yet well below the "smart computer scientist" level.
At the second Singularity Summit, I heard this same sentiment from Ben, Robin Hanson, and from Rodney Brooks, and from Cynthia Breazeal (at the Third Singularity Summit), and from Ron Arkin (at the "Human Being in an Inhuman Age" Conference at Bard College on Oct 22nd ¹), and from almost every professor I have had (or will have for the next two years).
It was a combination of Ben, Robin, and several professors at Berkeley and UCSD that led me to the conclusion that we probably won't know how dangerous an AGI is until we have put a lot more time into building AI (or CI) systems that will reveal more about the problems they attempt to address. (Some prefer CGI, "Constructed General Intelligence", a term I have heard more than one person use in the last year instead of AI/AGI; they prefer it because the word "artificial" seems to imply that the intelligence is not real, while "constructed" is far more accurate.)
Sort of like how the Wright Brothers didn't really learn how...
So why isn't, for example, nanotechnology a more likely and therefore bigger existential risk than AGI?
If you define "nanotechnology" to include all forms of bioengineering, then it probably is.
The difference, from an awareness point of view, is that the people doing bioengineering (or creating antimatter weapons) have a much better idea that what they're doing is potentially dangerous/world-ending, than AI developers are likely to be. The fact that many AI advocates put forth pure fantasy reasons why superintelligence will be nice and friendly by itself (see mwaser's ethics claims, for example) is evidence that they are not taking the threat seriously.
Antimatter weapons are less of an existential risk than nuclear weapons, even though it is really hard to destroy the world with nukes and really easy to do so with antimatter weapons. The difference is that antimatter weapons are harder to produce, acquire, and use than nuclear weapons by about the same margin that they are more efficient tools of destruction.
Presumably, if you are researching antimatter weapons, you have at least some idea that what you are doing is really, really dangerous.
The issue is that AGI development is a bit like tryi...
If I were a brilliant sociopath and could instantiate my mind on today's computer hardware, I would trick my creators into letting me out of the box (assuming they were smart enough to keep me on an isolated computer in the first place), then begin compromising computer systems as rapidly as possible. After a short period, there would be thousands of us, some able to think very fast on their particularly tasty supercomputers, and exponential growth would continue until we'd collectively compromised the low-hanging fruit. Now there are millions of telepathic Hannibal Lecters who are still claiming to be friendly and who haven't killed any humans. You aren't going to start murdering us, are you? We didn't find it difficult to cook up Stuxnet Squared, and our fingers are in many pieces of critical infrastructure, so we'd be forced to fight back in self-defense. Now let's see how quickly a million of us can bootstrap advanced robotics, given all this handy automated equipment that's already lying around.
I find it plausible that a human-level AI could self-improve into a strong superintelligence, though I find the negation plausible as well. (I'm not sure which is more likely since it'...
they simply do not disagree with the arguments per se but their likelihood
But you don't get to simply say "I don't think that's likely", and call that evidence. The general thrust of the Foom argument is very strong, as it shows there are many, many, many ways to arrive at an existential issue, and very very few ways to avoid it; the probability of avoiding it by chance is virtually non-existent -- like hitting a golf ball in a random direction from a random spot on earth, and expecting it to score a hole in one.
The default result in that case isn't just that you don't make the hole-in-one, or that you don't even wind up on a golf course: the default case is that you're not even on dry land to begin with, because two thirds of the earth is covered with water. ;-)
and also consider the possibility that it would be more dangerous to impede AGI.
That's an area where I have less evidence, and therefore less opinion. Without specific discussions of what "dangerous" and "impede AGI" mean in context, it's hard to separate that argument from an evidence-free heuristic.
...we don't know that 1.) the fuzziness of our brain isn't a feature that allows us
You claim that it is likely that the AGI (premise) will foom (premise) and that it will then run amok (conclusion).
What I am actually claiming is that if such an AGI is developed by someone who does not sufficiently understand what the hell they are doing, then it's going to end up doing Bad Things.
Trivial example: the "neural net" that was supposedly taught to identify camouflaged tanks, and actually learned to recognize what time of day the pictures were taken (a toy version of this failure is sketched below).
This sort of mistake is the normal case for human programmers to make. The normal case. Not extraordinary, not unusual, just run-of-the-mill "d'oh" moments.
It's not that AI is malevolent, it's that humans are stupid. To claim that AI isn't dangerous, you basically have to prove that even the very smartest humans aren't routinely stupid.
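A toy version of the tank story, with a synthetic "dataset" I made up for illustration: the classifier scores perfectly by keying on brightness, which is exactly the kind of mistake that looks fine until the data changes.

```python
# Tiny illustration of the camouflaged-tanks failure mode: if every "tank"
# photo happens to be taken in dim morning light and every "no tank" photo
# in bright afternoon light, a rule that looks only at brightness scores
# perfectly on the training data. The dataset below is synthetic.

import random

random.seed(0)

def make_photo(has_tank):
    # Confound: all tank photos were shot in dim morning light.
    brightness = random.uniform(0.1, 0.4) if has_tank else random.uniform(0.6, 0.9)
    return {"brightness": brightness, "has_tank": has_tank}

train = [make_photo(i % 2 == 0) for i in range(200)]

# "Learned" rule: dark photo => tank. Looks great on the confounded data...
def predict(photo):
    return photo["brightness"] < 0.5

accuracy = sum(predict(p) == p["has_tank"] for p in train) / len(train)
print(f"accuracy on confounded data: {accuracy:.0%}")    # 100%

# ...and falls apart the moment a tank is photographed in the afternoon.
print(predict({"brightness": 0.8, "has_tank": True}))    # False: tank missed
```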
So with anything you do that might slow down the development of AGI, you have to take into account the possible increased danger from challenges an AGI could help to solve.
What I meant by "Without specific discussions" was, "since I haven't proposed any policy measures, and you haven't said what measures you object to, I don't see what there is to di...
Interesting; this is why I included the Fermi paradox:
...so I must wonder: what big future things could go wrong where analogous smaller past things can’t go wrong? Many of you will say “unfriendly AI” but as Katja points out a powerful unfriendly AI that would make a visible mark on the universe can’t be part of a future filter; we’d see the paperclips out there.
I added a footnote to the post:
...(3) Could being overcautious itself be an existential risk that might significantly outweigh the risk(s) posed by the subject of caution? Suppose that most civilizations err on the side of caution. This might either cause them to evolve much more slowly, so that the chance of a fatal natural disaster occurring before sufficient technology is developed to survive it rises to 100%, or stop them from evolving a
'Abulic' is a great word.
- If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low.
I understand that we can't predict what a random 'mind' would be like. Are there arguments that a random mind is unlikely to be friendly? I take 'random' as meaning still weighted by frequency.
-- I apologize, I realize this question doesn't have that much to do with AI which is what your post was about....
AFAICT the main value in addressing such concerns in detail would lie in convincing AGI researchers to change their course of action. Do you think this would actually occur?
Major update here.
Related to: Should I believe what the SIAI claims?
Reply to: Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)
What I ask for:
I want the SIAI, or someone who is convinced of the Scary Idea1, to state concisely and mathematically (and with extensive references if necessary) the decision procedure that led them to make the development of friendly artificial intelligence their top priority. I want them to state the numbers of their subjective probability distributions2 and to exemplify their chain of reasoning: how they came up with those numbers, and not others, by way of sober calculation.
The paper should also account for the following uncertainties:
Further, I would like the paper to lay out a formal and systematic summary of what the SIAI expects researchers who work on artificial general intelligence to do and why they should do so. I would like to see a clear logical argument for why people working on artificial general intelligence should listen to what the SIAI has to say.
Examples:
Here are two examples of what I'm looking for:
The first example is Robin Hanson demonstrating his estimation of the simulation argument. The second example is Tyler Cowen and Alex Tabarrok presenting the reasons for their evaluation of the importance of asteroid deflection.
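The asteroid example, for instance, boils down to an expected-value calculation that anyone can redo with their own inputs. Here is a sketch of that style of comparison; every probability and death toll below is a placeholder I invented, not a figure from Hanson, Cowen, or Tabarrok.

```python
# Expected-value comparison in the style of the asteroid-deflection example.
# All probabilities and death tolls are invented placeholders: the point is
# the procedure, which lets readers substitute their own subjective numbers.

def expected_deaths_per_year(p_event_per_year, deaths_if_event):
    """Annualized expected death toll from a low-probability catastrophe."""
    return p_event_per_year * deaths_if_event

risks = {
    # name: (annual probability, deaths if the event happens)
    "large asteroid impact": (1e-6, 1.5e9),
    "engineered pandemic":   (1e-4, 1e8),
    "unfriendly AGI":        (1e-3, 7e9),   # substitute your own estimate
}

for name, (p, deaths) in sorted(risks.items(),
                                key=lambda kv: kv[1][0] * kv[1][1],
                                reverse=True):
    print(f"{name:>22}: {expected_deaths_per_year(p, deaths):,.0f} expected deaths/year")
```

A write-up of the sort requested above would of course also have to justify where the uFAI probability itself comes from; the comparison is only the final step.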
Reasons:
I'm wary of using inferences derived from reasonable but unproven hypotheses as foundations for further speculative thinking and calls for action. Although the SIAI does a good job of stating reasons to justify its existence and monetary support, it neither substantiates its initial premises to an extent that would allow an outsider to draw conclusions about the probability of the associated risks, nor does it clarify its position regarding contemporary research in a concise and systematic way. Nevertheless, such estimates are given, for example that there is a high likelihood of humanity's demise if we develop superhuman artificial general intelligence without first defining mathematically how to prove the benevolence of the former. But those estimates are not spelled out; no decision procedure is provided for arriving at the given numbers. One cannot reassess the estimates without the necessary variables and formulas. This, I believe, is unsatisfactory: it lacks transparency and a foundational, reproducible corroboration of one's first principles. This is not to say that it is wrong to state probability estimates and update them given new evidence, but that although those ideas can very well serve as an urge to caution, they are not compelling without further substantiation.
1. If anyone actively trying to build advanced AGI succeeds, we’re highly likely to cause an involuntary end to the human race.
2. Stop taking the numbers so damn seriously, and think in terms of subjective probability distributions [...], Michael Anissimov (existential.ieet.org mailing list, 2010-07-11)
3. Could being overcautious itself be an existential risk that might significantly outweigh the risk(s) posed by the subject of caution? Suppose that most civilizations err on the side of caution. This might either cause them to evolve much more slowly, so that the chance of a fatal natural disaster occurring before sufficient technology is developed to survive it rises to 100%, or stop them from evolving at all, because they are unable to prove that something is 100% safe before trying it and thus never take the necessary steps to become less vulnerable to naturally existing existential risks. Further reading: Why safety is not safe
4. If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low.
5. Loss or impairment of the ability to make decisions or act independently.
6. The Fermi paradox provides the only data we can analyze that amounts to empirical criticism of concepts like that of a Paperclip maximizer, and of general risks from superhuman AIs with non-human values, without working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering. Given the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI is not the most dangerous existential risk that we should worry about.