Nick Szabo on acting on extremely long odds with claimed high payoffs:

Beware of what I call Pascal's scams: movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences. (The name comes of course from the infinite-reward Wager proposed by Pascal: these days the large-but-finite versions are far more pernicious.) Naive expected value reasoning implies that they are worth the effort: if the odds are 1 in 1,000 that I could win $1 billion, and I am risk and time neutral, then I should expend up to nearly $1 million worth of effort to gain this boon. The problems with these beliefs tend to be at least threefold, all stemming from the general uncertainty, i.e. the poor information or lack of information, from which we abstracted the low probability estimate in the first place: because in the messy real world the low probability estimate is almost always due to low or poor evidence rather than being a lottery with well-defined odds.
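
For concreteness, here is a minimal sketch of the naive expected-value arithmetic the quoted passage warns against (the dollar figures are just the quote's illustrative numbers):

```python
# Naive expected-value reasoning, taking the claimed odds at face value.
p_win = 1 / 1000            # claimed probability of the huge payoff
payoff = 1_000_000_000      # $1 billion

expected_value = p_win * payoff
print(f"Naive expected value: ${expected_value:,.0f}")  # $1,000,000
# A risk- and time-neutral agent would therefore spend up to ~$1 million
# of effort chasing the payoff -- the problem being that in the real world
# "1 in 1,000" is usually guessed from poor evidence, not a known lottery.
```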

Nick clarifies in the comments that he is indeed talking about singularitarians, including his GMU colleague Robin Hanson. This post appears to revisit a comment on an earlier post:

In other words, just because one comes up with quasi-plausible catastrophic scenarios does not put the burden of proof on the skeptics to debunk them or else cough up substantial funds to supposedly combat these alleged threats.


This is a terrible misrepresentation. SI does not argue for donations on these grounds; Eliezer and other SI staff have explicitly rejected such Pascalian reasoning and have instead argued that the risks they wish to avert are quite probable.

Then it constitutes a serious PR problem.

0Manfred
Or is at least a symptom of bad PR.
2JaneQ
It also has to be probable that their work averts those risks, which seems incredibly improbable by any reasonable estimate. If an alternative Earth were to adopt a strategy of ignoring prophetic groups of 'idea guys' similar to SI, and ignore their pleas for donations to hire competent researchers to pursue their ideas, I do not think that such a decision would increase the risk by more than a minuscule amount.
4Vladimir_Nesov
People currently understand the physical world sufficiently to see that supernatural claims are bogus, and so there is certainty about the impossibility of developments predicated on the supernatural. People know robust and general laws of physics that imply the impossibility of perpetual motion, and so we can conclude in advance with great certainty that any perpetual motion engineering project is going to fail. Some long-standing problems in mathematics were attacked unsuccessfully for a long time, and so we know that making further progress on them is hard. In all these cases, there are specific pieces of positive knowledge that enable the inference of impossibility or futility of certain endeavors. In contrast, a lot of questions concerning Friendly AI remain confusing and unexplored. It might turn out to be impossibly difficult to make progress on them, or else a simple matter of figuring out how to apply standard tools of mainstream mathematics. We don't know, but neither do we have positive knowledge that implies impossibility or extreme difficulty of progress on these questions. In particular, the enormity of consequences does not imply extreme improbability of influencing those consequences. It looks plausible that the problem can be solved.
0JenniferRM
This kind of seems like political slander to me. Maybe I'm miscalibrated? But it seems like you're thinking of "reasonable estimates" as things produced by groups or factions, treating SI as a single "estimate" in this sense, and lumping them with a vaguely negative but non-specified reference class of "prophetic groups". The packaged claims function to reduce SI's organizational credibility, and yet they reference no external evidence and make no testable claims. For your "prophetic groups" reference class, does it include 1930's nuclear activists, 1950's environmentalists, or 1970's nanotechnology activists? Those examples come from the socio-political reference class I generally think of SI as belonging to, and I think of them in a mostly positive way. Personally, I prefer to think of "estimates" as specific predictions produced by specific processes at specific times, and they seem like they should be classified as "reasonable" or not on the basis of their mechanisms and grounding in observables in the past and the future. The politics and social dynamics surrounding an issue can give you hints about what's worth thinking about, but ultimately you have to deal with the object-level issues, and the object-level issues will screen off the politics and social dynamics once you process them. The most reasonable publicly available tool I'm aware of for extracting a "coherent opinion" from someone on the subject of AGI is the Uncertain Future. (Endgame: Singularity is a more interesting tool in some respects. It's interesting for building intuitions about certain kinds of reality/observable correlations because it has you play as a weak but essentially benevolent AGI rather than as humanity, but (1) it is ridiculously over-specific as a prediction tool, and (2) it seems to give the AGI certain unrealistic advantages and disadvantages for the sake of making it more fun as a game. I've had a vague thought to fork it, try to change it to be more realis
2Steve_Rayhawk
(Note that the Uncertain Future software is mostly supposed to be a conceptual demonstration; as mentioned in the accompanying conference paper, a better probabilistic forecasting guide would take historical observations and uncertainty about constant underlying factors into account more directly, with Bayesian model structure. The most important part of this would be stochastic differential equation model components that could account for both parameter and state uncertainty in nonlinear models of future economic development from past observations, especially of technology performance curves and learning curves. Robin Hanson's analysis of the random properties of technological growth modes has something of a similar spirit.)
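A toy illustration (my own sketch, not the Uncertain Future code) of what carrying both parameter uncertainty and state uncertainty through a stochastic growth model might look like, assuming a simple geometric-Brownian-motion performance curve with made-up priors:

```python
import numpy as np

# Toy forecast: sample uncertain parameters, then simulate a stochastic
# path for each sample, so the output distribution reflects both kinds
# of uncertainty. All priors and numbers here are invented.
rng = np.random.default_rng(0)
n_samples, n_years = 10_000, 30
x0 = 1.0  # current performance level, arbitrary units

mu = rng.normal(0.10, 0.03, n_samples)              # uncertain annual drift
sigma = np.abs(rng.normal(0.15, 0.05, n_samples))   # uncertain volatility

shocks = rng.standard_normal((n_samples, n_years))  # state (path) uncertainty
log_paths = np.log(x0) + np.cumsum(
    (mu[:, None] - 0.5 * sigma[:, None] ** 2) + sigma[:, None] * shocks,
    axis=1,
)
final = np.exp(log_paths[:, -1])

print("median level after 30 years:", np.median(final))
print("90% interval:", np.percentile(final, [5, 95]))
```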
0Paul Crowley
I think your estimate of their chances of success is low. But even given that estimate, I don't think it's Pascalian. To me, it's Pascalian when you say "my model says the chances of this are zero, but I have to give it non-zero odds because there may be an unknown failing in my model". I think Heaven and Hell are actually impossible, I'm just not 100% confident of that. By contrast, it would be a bit odd if your model of the world said "there is this risk to us all, but the odds of a group of people causing a change that averts that risk are actually zero".
3JaneQ
It is not just their chances of success. For the donations to matter, you need SI to succeed where, without SI, there would be failure. It is as if you had a basket of eggs and all the good-looking eggs were rotten inside while one fairly rotten-looking egg was fresh. Even if a rotten-looking egg is more likely to be fresh inside than one would believe, that is a highly unlikely situation.
0Paul Crowley
I'm afraid I'm not getting your meaning. Could you fill out what corresponds to what in the analogy? What are all the other eggs? In what way do they look good compared to SI?
4JaneQ
All the other people and organizations that are no less capable of identifying the preventable risks (if those exist) and addressing them have to be unable to prevent the destruction of mankind without SI. Just as in Pascal's original wager, Thor and other deities are ignored by omission. As for how SI does not look good: it does not look good to Holden Karnofsky, or to me for that matter. His point about SI's resistance to feedback loops is an extremely strong one. On the rationality movement, here's a quote from Holden.
1Viliam_Bur
Could you give me some examples of other people and organizations trying to prevent the risk of an Unfriendly AI? Because for me, it's not that I believe SI has a great chance to develop the theory and prevent the danger, but rather that they are the only people who even care about this specific risk (which I believe to be real). As soon as the message becomes widely known, and smart people and organizations start rationally discussing the dangers of Unfriendly AI and how to make a Friendly AI (avoiding some obvious errors, such as "a smart AI simply must develop a human-compatible morality, because it would be too horrible to think otherwise"), there is a pretty good chance that some of those organizations will be more capable than SI of reaching that goal: more smart people, better funding, etc. But at this moment, SI seems to be the only one paying attention to this topic.
4Decius
It's a crooked game, but it's the only game in town? None of that is evidence that SI would be more effective if it had more money. Assign odds to hostile AI becoming extant given low funding for SI, and compare them to the odds of hostile AI becoming extant given high funding for SI. The difference between those two is proportional to the value of SI (with regard to preventing hostile AI).
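A sketch of that comparison with made-up numbers (none of these figures come from the thread; they only show the shape of the calculation):

```python
# Value of marginal SI funding = (change in probability of the bad
# outcome) x (disvalue of that outcome). All inputs are hypothetical.
p_bad_low_funding = 0.050    # hypothetical P(hostile AI | low funding)
p_bad_high_funding = 0.049   # hypothetical P(hostile AI | high funding)
disvalue = 1e15              # hypothetical dollar-equivalent of the catastrophe

delta_p = p_bad_low_funding - p_bad_high_funding
print(f"Expected value of the extra funding: ${delta_p * disvalue:,.0f}")
# The whole dispute is over whether delta_p is meaningfully above zero,
# and how anyone could estimate it from evidence.
```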
3JaneQ
SI being the only one ought to lower your probability that this whole enterprise is worthwhile in any way. With regard to the 'message', I think you grossly overestimate the value of a rather easy insight that anyone who has watched Terminator could have. With regard to "rationally discussing", what I have seen so far here is pure rationalization and very little, if any, rationality. What SI has on its track record is, once again, a lot of rationalization and not enough rationality to even have had an accountant through its first 10 years and its first two million-plus dollars of other people's money.
3David_Gerard
Note that that second paragraph is one of Holden Karnofsky's objections to SIAI: a high opinion of its own rationality that has so far not been substantiated from the outside view.
6JaneQ
Yes. I am sure Holden is being very polite, which is generally good, but I've been getting the impression that the point he was making did not fully carry across the same barrier that produced the above-mentioned high opinion of its own rationality, despite a complete lack of results for which rationality would be a better explanation than irrationality (and the presence of results which set a rather low ceiling on that rationality). The 'resistance to feedback' is an even stronger point, suggesting that the belief in its own rationality is, at least to some extent, combined with an expectation that it won't pass the test, and hence with avoidance (rather than seeking) of tests; as when psychics believe in their powers but avoid any reliable test.
-1Eugine_Nier
Really, last time I checked Eliezer was refusing to name either a probability or a time scale.
2Paul Crowley
I'm not seeing how you get from "doesn't state an explicit probability or timescale publicly" to "argues that SI should be supported on Pascalian grounds".
3David_Gerard
It looked like just a response to you saying "instead argued that the risks that they wish to avert are quite probable."
-5private_messaging
-1Richard_Kennaway
Ah well, those can't be the singularitarians he's talking about then. He doesn't name any names, leaving it to Anonymous to do so, then responds by saying "I wasn't going to name names, but..." and then continuing not to name names. I predict a no true Scotsman path of retreat if you take your argument to him.
5David_Gerard
It's not clear to me how approaching your response with an assumption of bad faith will convince him or his readers of the correctness of your position. Let us know how it works out for you.
-2Richard_Kennaway
I'm not assuming bad faith, just observing a lack of specifics about who he is talking about. But I'm not intending to make any response there, not being as informed as, say, ciphergoth on the SI's position.
-4[anonymous]
I would have preferred that he use my (even more passive-aggressive) approach, which is to say, "I'm not going to name any names[1]", and then have a footnote saying "[1] A 'name' is an identifier used to reference a proper noun. An example of a name might be 'Singularity Institute'." Get it? You're not "naming names", you're just giving an example of name in the exact neighborhood of the accusation! Tee hee! (Of course, it's even better if you actually make the accusation directly, but that's obviously not an option here.)

could win $1 billion, and I am risk and time neutral

Who has constant marginal utility of money up to $1,000,000,000?

The biggest problem with these schemes is that, the closer to infinitesimal probability, and thus usually to infinitesimal quality or quantity of evidence, one gets, the closer to infinity the possible extreme-consequence schemes one can dream up

Consequences can't be inflated to make up for arbitrarily low probabilities. Consequences are connected: if averting human extinction by proliferation of morally valueless machinery is super valuable because of future generations, then the gains of averting human extinction by asteroids, or engineered diseases, will be on the same scale.

It cost roughly $100 million to launch a big search for asteroids that has now located 90%+ of large (dinosaur-killer size) asteroids, and such big impacts happen every hundred million years or so, accompanied by mass extinctions, particularly of large animals. If working on AI, before AI is clearly near and better understood, had a lower probability of averting x-risk per unit cost than asteroid defense, or than adding to the multibillion-dollar annual anti-nuclear-proliferation or biosecurity budgets, or than some other intervention, then it would lose.

"Some nonzero chance" isn't enough, it has to be a "chance per cost better than the alternatives."

6nickLW
I should have said something about marginal utility there. Doesn't change the three tests for a Pascal scam though. The asteroid threat is a good example of a low-probability disaster that is probably not a Pascal scam. On point (1) it is fairly lottery-like, insofar as asteroid orbits are relatively predictable -- the unknowns are primarily "known unknowns", being deviations from very simple functions -- so it's possible to compute odds from actual data, rather than merely guessing them from a morass of "unknown unknowns". It passes test (2) as we have good ways to simulate with reasonable accuracy and (at some expense, only if needed) actually test solutions. And best of all it passes test (3) -- experiments or observations can be done to improve our information about those odds. Most of the funding has, quite properly, gone to those empirical observations, not towards speculating about solutions before the problem has been well characterized. Alas, most alleged futuristic threats and hopes don't fall into such a clean category: the evidence is hopelessly equivocal (even if declared with false certainty) or missing, and those advocating that our attention and other resources be devoted to them usually fail to propose experiments or observations that would improve that evidence and thus reduce our uncertainty to levels that would distinguish them from the near-infinity of plausible disaster scenarios we could imagine. (Even with just the robot apocalypse, there is a near-infinity of ways one can plausibly imagine it playing out.) The same goes, generally speaking, for future diseases -- there may well be a threat lying in there, but we don't have any general ways of clearly characterizing specifically what those threats might be and thus distinguishing them from the near-infinity of threats we could plausibly imagine (again generally speaking -- there are obviously some well-characterized specific diseases for which we do have such knowledge).
1buybuydandavis
Who has constant marginal utility of people up to 1,000,000,000 people? (To answer the rhetorical question: no one.) This reminds me of Jaynes and transformation groups -- establish your prior based on transforms that leave you with the same problem. I find this makes short work of arbitrary assertions that want to be taken seriously.
0A1987dM
Someone who's already got many billions? (But then again, for such a person a 1/1000 chance of getting one more billion wouldn't even be worth the time spent to participate in such a lottery, I suppose.)
1CarlShulman
From zero up to $1,000,000,000.
1Decius
I do, in that there are nonfatal actions that I would not take in exchange for that much money. Of course, at amounts over several hundred thousand dollars, money loses per-unit utility very fast. One billion dollars has significantly less than one thousand times the value to me of one million dollars, because the things I can buy with a billion dollars are less than one thousand times as valuable to me as the things I can buy with a million.
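Purely as an illustration of that diminishing marginal utility, assuming a logarithmic utility of wealth and a made-up starting wealth:

```python
import math

base_wealth = 50_000  # hypothetical starting wealth in dollars

def utility_gain(windfall, w0=base_wealth):
    # Log utility: a standard stand-in for diminishing marginal utility.
    return math.log(w0 + windfall) - math.log(w0)

ratio = utility_gain(1_000_000_000) / utility_gain(1_000_000)
print(f"$1B gain is worth about {ratio:.1f}x a $1M gain (not 1000x)")
# Roughly 3x under these particular assumptions.
```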
[-][anonymous]70

Seems like there is more going on than just "Do transhumanists endorse Pascalian bargains?" Because of confidence levels inside and outside an argument, the fact that someone (e.g. SI) makes an argument that a particular risk has a non-negligible probability does not mean that someone examining this claim should assign a non-negligible probability. It's very possible for someone thinking about (e.g.) AI risk to assign low probabilities and thus find themselves in a Pascalian situation even if SI argues that the probability of AI risk is high.
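
A minimal sketch of the "confidence inside vs. outside an argument" point, with hypothetical numbers:

```python
# The observer's overall probability mixes the argument's internal
# conclusion with their credence that the argument itself is sound.
p_risk_if_sound = 0.30     # hypothetical: the risk estimate the argument yields
p_argument_sound = 0.05    # hypothetical: observer's credence in the argument
p_risk_otherwise = 0.001   # hypothetical: observer's prior without the argument

p_overall = (p_argument_sound * p_risk_if_sound
             + (1 - p_argument_sound) * p_risk_otherwise)
print(f"Observer's overall probability: {p_overall:.4f}")
# Even a confidently argued risk can look Pascalian to the observer.
```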

6nickLW
Indeed. As to why I find extreme consequences from general AI highly unlikely, see here. Alas, my main reason is partly buried in the comments (I really need to do a new post on this subject). It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones. Specialized algorithms are what we should hope for or fear, and their positive and negative consequences occur a little at a time -- and have been occurring for a long time already, so we have many actual real-world observations to go by. They can be addressed specifically, each passing tests 1-3, so that we can solve these problems and achieve these hopes one specialized task at a time, as well as induce general theories from these experiences (e.g. of security), without getting sucked into any of the near-infinity of Pascal scams one could dream up about the future of computing and robotics.

It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones.

It would be better to present, as your main reason, "the kinds of general algorithms that humans are likely to develop and implement, even absent impediments caused by AI-existential risk activism, will almost certainly be far inferior to specialized ones". That there exist general-purpose algorithms which subsume the competitive abilities of all existing human-engineered special-purpose algorithms, given sufficient advantages of scale in number of problem domains, is trivial by the existence proof constituted by the human economy.

Put another way: There is some currently-unsubstitutable aspect of the economy which is contained strictly within human cognition and communication. Consider the case where the intellectual difficulties involved in understanding the essence of this unsubstitutable function were overcome, and it were implemented in silico, with an initial level of self-engineering insight already equal to that which was used to create it, and with starting capital and education sufficient to overcome transient ... (read more)

3Wei Dai
I don't understand your reasoning here. If you have a general AI, it can always choose to apply or invent a specialized algorithm when the situation calls for that, but if all you have is a collection of specialized algorithms, then you have to try to choose/invent the right algorithm yourself, and will likely do a worse (possibly much worse) job than the general AI if it is smarter than you are. So why do we not have to worry about "extreme consequences from general AI"?
5nickLW
Skill at making such choices is itself a specialty, and doesn't mean you'll be good at other things. Indeed, the ability to properly choose algorithms in one problem domain often doesn't make you an expert at choosing them for a different problem domain. And as the software economy becomes more sophisticated these distinctions will grow ever sharper (basic Adam Smith here -- the division of labor grows with the size of the market). Such software choosers will come in dazzling variety: they, like other useful or threatening software, will not be general purpose. And who will choose the choosers? No sentient entity at all -- they'll be chosen the way they are today, by a wide variety of markets, except that there too the variety will be far greater. Such markets and technologies are already far beyond the ability of any single human to comprehend, and that gap between economic and technological reality and our ability to comprehend and predict it grows wider every year. In that sense, the singularity already happened, long ago.
3Steve_Rayhawk
Can you expand on this? The way you say it suggests that it might be your core objection to the thesis of economically explosive strong AI. Put into words, the way the emotional charge would hook into the argument here would be: "Such a strong AI would have to be at least as smart as the market, and yet it would have been designed by humans, which would mean there had to be a human at least as smart as the market: and belief in this possibility is always hubris, and is characteristically disastrous for its bearer -- something you always want to be on the opposite side of an argument from"? (Where "smart" here is meant to express something metaphorically similar to a proof system's strength: "the system successfully uses unknowably diverse strategies that a lesser system would either never think to invent or never correctly decide how much to trust".) I guess, for this explanation to work, it also has to be your core objection to Friendly AI as a mitigation strategy: "No human-conceived AI architecture can subsume or substitute for all the lines of innovation that the future of the economy should produce, much less control such an economy to preserve any predicate relating to human values. Any preservation we are going to get is going to have to be built incrementally from empirical experience with incremental software economic threats to those values, each of which we will necessarily be able to overcome if there had ever been any hope for humankind to begin with; and it would be hubris, and throwing away any true hope we have, to cling to a chimerical hope of anything less partial, uncertain, or temporary."
0Wei Dai
Would you agree that humans are in general not very good at inventing new algorithms, many useful algorithms remain undiscovered, and as a result many jobs are still being done by humans instead of specialized algorithms? Isn't it possible that this situation (i.e., many jobs still being done by humans, including the jobs of inventing new algorithms) is still largely the case by the time that a general AI smarter than human (for example, an upload of John von Neumann running at 10 times human speed) is created, which at a minimum results in many humans suddenly losing their jobs and at a maximum allows the AI or its creators to take over the world? Do you have an argument why this isn't possible or isn't worth worrying about (or hoping for)?
1David_Gerard
To address your second sentence: one consideration is that it is highly questionable whether scanning and uploading is even possible in any practical sense. People who actually work with brain preservation on a daily basis, and who would love to be able to extract state from the preserved material, seem to consider it "possible" philosophically but not at all practically. This suggests that its feasibility is low enough at present that even paying serious attention to it may be a waste of time of the "Pascal's scam" form described in the linked post (whether the word "scam" is fair or not).
-1Wei Dai
If uploads are infeasible, what about other possible ways to build AGIs? In any case, I'm responding to Nick's argument that we do not have to worry about extreme consequences from AGIs because "specialized algorithms are generally far superior to general ones", which seems to be a separate argument from whether AGIs are feasible.
2nickLW
When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond recognition long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there's already a growth trend in such jobs today). The robot apocalypse, in other words, will arrive and is arriving one algorithm at a time. It's a process we can observe unfolding, since it has been going on for a long time already, and learn from -- real data rather than imagination. Targeting an imaginary future algorithm does nothing to stop it. If, for example, you can't make current algorithms "friendly", it's highly unlikely that you're going to make the even more hyperspecialized algorithms of the future friendly either. Instead of postulating imaginary solutions to imaginary problems, it's much more useful to work empirically, e.g. on computer security that mathematically prevents algorithms in general from violating particular desired rights. Recognize real problems and demonstrate real solutions to them.
3Vladimir_Nesov
The phrasing suggests a level of certainty that's uncalled for for a claim that's so detailed and given without supporting evidence. I'm not sure there is enough support for even paying attention to this hypothesis. Where does it come from? (Obvious counterexample that doesn't seem unlikely: AGI is invented early, so all the cultural changes you've listed aren't present at that time.)
2nickLW
All of these kinds of futuristic speculations are stated with false certainty -- especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above "see here" link -- extensive economic observations have been done on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.
5Wei Dai
Given the benefits of specialization, how do you explain the existence of general intelligence (i.e. humans)? Why weren't all the evolutionary niches that humans currently occupy already taken by organisms with more specialized intelligence? My explanation is that generalized algorithms may be less efficient than specialized algorithms when specialized algorithms are available, but inventing specialized algorithms is hard (both for us and for evolution), so specialized algorithms often simply aren't available. You don't seem to have responded to this line of argument...
4Vladimir_Nesov
The belief that an error is commonly made doesn't make it OK in any particular case. (When, for example, I say that I believe that AGI is dangerous, this isn't false certainty, in the sense that I do believe that it's very likely the case. If I'm wrong on this point, at least my words accurately reflect my state of belief. Having an incorrect belief and incorrectly communicating a belief are two separate unrelated potential errors. If you don't believe that something is likely, but state it in the language that suggests that it is, you are being unnecessarily misleading.)
1Wei Dai
To rephrase my question, how confident are you of this, and why? It seems to me quite possible that by the time someone builds an AGI, there are still plenty of human jobs that have not been taken over by specialized algorithms due to humans not being smart enough to have invented the necessary specialized algorithms yet. Do you have a reason to think this can't be true? ETA: My reply is a bit redundant given Nesov's sibling comment. I didn't see his when I posted mine.
6nickLW
I am far more confident in it than I am in the AGI-is-important argument. Which of course isn't anywhere close to saying that I am highly confident in it. Just that the evidence for AGI-is-unimportant far outweighs that for AGI-is-important.
-2Risto_Saarelma
The upload thread talks about the difficulties in making an upload of a single specific adult human, which would have the acquired memories and skills from the biological human reproduced exactly. (Admittedly, "an upload of John von Neumann", taken literally, is exactly this.) A neuromorphic AI that skips the problem of engineering a general intelligence by copying the general structure of the human brain and running it in emulation doesn't need to be based on any specific person, though, just a general really very good understanding of the human brain, and it only needs to be built to the level of a baby with the capability to learn in place, instead of somehow having memories from a biological human transferred to it. The biggest showstopper for practical brain preservation seems to be preserving, retrieving and interpreting stored memories, so this approach seems quite a bit more viable. You could still have your von Neumann army, you'd just have to raise the first one yourself and then start making copies of him.

I don't think Nick made a good case for why these movements/belief systems deserve to be called "scams" and, more importantly, deserve to be ignored (in favor of "spending more time learning about what has actually happened in the real world"). The fact that certain hopes and threats share certain properties (which Nick numbered 1-3 in his post) is unfortunate, but I didn't find any convincing arguments in his post showing why these hopes and threats should therefore be ignored.

(My overall position, which I'll repeat in case anyone is c... (read more)