Followup to: The Outside View's Domain, Conversation Halters
Reply to: Reference class of the unclassreferenceable

In "conversation halters", I pointed out a number of arguments which are particularly pernicious, not just because of their inherent flaws, but because they attempt to chop off further debate - an "argument stops here!" traffic sign, with some implicit penalty (at least in the mind of the speaker) for trying to continue further.

This is not the right traffic signal to send, unless the state of knowledge is such as to make an actual halt a good idea.  Maybe if you've got a replicable, replicated series of experiments that squarely target the issue and settle it with strong significance and large effect sizes (or great power and null effects), you could say, "Now we know."  Or if the other is blatantly privileging the hypothesis - starting with something improbable, and offering no positive evidence to believe it - then it may be time to throw up hands and walk away.  (Privileging the hypothesis is the state people tend to be driven to, when they start with a bad idea and then witness the defeat of all the positive arguments they thought they had.)  Or you could simply run out of time, but then you just say, "I'm out of time", not "here the gathering of arguments should end."

But there's also another justification for ending argument-gathering that has recently seen some advocacy on Less Wrong.

An experimental group of subjects were asked to describe highly specific plans for their Christmas shopping:  Where, when, and how.  On average, this group expected to finish shopping more than a week before Christmas.  Another group was simply asked when they expected to finish their Christmas shopping, with an average response of 4 days.  Both groups finished an average of 3 days before Christmas.  Similarly, Japanese students who expected to finish their essays 10 days before deadline, actually finished 1 day before deadline; and when asked when they had previously completed similar tasks, replied, "1 day before deadline."  (See this post.)

Those and similar experiments seem to show us a class of cases where you can do better by asking a certain specific question and then halting:  Namely, the students could have produced better estimates by asking themselves "When did I finish last time?" and then ceasing to consider further arguments, without trying to take into account the specifics of where, when, and how they expected to do better than last time.

From this we learn, allegedly, that "the 'outside view' is better than the 'inside view'"; from which it follows that when you're faced with a difficult problem, you should find a reference class of similar cases, use that as your estimate, and deliberately not take into account any arguments about specifics.  But this generalization, I fear, is somewhat more questionable...

For example, taw alleged upon this very blog that belief in the 'Singularity' (a term I usually take to refer to the intelligence explosion) ought to be dismissed out of hand, because it is part of the reference class "beliefs in coming of a new world, be it good or evil", with a historical success rate of (allegedly) 0%.

Of course Robin Hanson has a different idea of what constitutes the reference class and so makes a rather different prediction - a problem I refer to as "reference class tennis":

Taking a long historical view, we see steady total growth rates punctuated by rare transitions when new faster growth modes appeared with little warning.  We know of perhaps four such "singularities": animal brains (~600MYA), humans (~2MYA), farming (~10KYA), and industry (~0.2KYA)...

Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities.  People usually justify this via reasons why the current case is exceptional.  (Remember how all the old rules didn’t apply to the new dotcom economy?)  So expect to hear excuses why the next singularity is also an exception where outside view estimates are misleading.  Let’s keep an open mind, but a wary open mind.

If I were to play the game of reference class tennis, I'd put recursively self-improving AI in the reference class "huge mother#$%@ing changes in the nature of the optimization game" whose other two instances are the divide between life and nonlife and the divide between human design and evolutionary design; and I'd draw the lesson "If you try to predict that things will just go on sorta the way they did before, you are going to end up looking pathetically overconservative".

And if we do have a local hard takeoff, as I predict, then there will be nothing to say afterward except "This was similar to the origin of life and dissimilar to the invention of agriculture".  And if there is a nonlocal economic acceleration, as Robin Hanson predicts, we just say "This was similar to the invention of agriculture and dissimilar to the origin of life".  And if nothing happens, as taw seems to predict, then we must say "The whole foofaraw was similar to the apocalypse of Daniel, and dissimilar to the origin of life or the invention of agriculture".  This is why I don't like reference class tennis.

But mostly I would simply decline to reason by analogy, preferring to drop back into causal reasoning in order to make weak, vague predictions.  In the end, the dawn of recursive self-improvement is not the dawn of life and it is not the dawn of human intelligence, it is the dawn of recursive self-improvement.  And it's not the invention of agriculture either, and I am not the prophet Daniel.  Point out a "similarity" with this many differences, and reality is liable to respond "So what?"

I sometimes say that the fundamental question of rationality is "Why do you believe what you believe?" or "What do you think you know and how do you think you know it?"

And when you're asking a question like that, one of the most useful tools is zooming in on the map by replacing summary-phrases with the concepts and chains of inferences that they stand for.

Consider what inference we're actually carrying out, when we cry "Outside view!" on a case of a student turning in homework.  How do we think we know what we believe?

Our information looks something like this:

  • In January 2009, student X1 predicted they would finish their homework 10 days before deadline, and actually finished 1 day before deadline.
  • In February 2009, student X1 predicted they would finish their homework 9 days before deadline, and actually finished 2 days before deadline.
  • In March 2009, student X1 predicted they would finish their homework 9 days before deadline, and actually finished 1 day before deadline.
  • In January 2009, student X2 predicted they would finish their homework 8 days before deadline, and actually finished 2 days before deadline.
  • And so on through 157 other cases.
  • Furthermore, in another 121 cases, asking students to visualize specifics actually made them more optimistic.

Therefore, when new student X279 comes along, even though we've never actually tested them before, we ask:

"How long before deadline did you plan to complete your last three assignments?"

They say:  "10 days, 9 days, and 10 days."

We ask:  "How long before did you actually complete them?"

They reply:  "1 day, 1 day, and 2 days".

We ask:  "How long before deadline do you plan to complete this assignment?"

They say:  "8 days."

Having gathered this information, we now think we know enough to make this prediction:

"You'll probably finish 1 day before deadline."

They say:  "No, this time will be different because -"

We say:  "Would you care to make a side bet on that?"

We now believe that previous cases have given us strong, veridical information about how this student functions - how long before deadline they tend to complete assignments - and about the unreliability of the student's planning attempts, as well.  The chain of "What do you think you know and how do you think you know it?" is clear and strong, both with respect to the prediction, and with respect to ceasing to gather information.  We have historical cases aplenty, and they are all as similar to each other as they are similar to this new case.  We might not know all the details of how the inner forces work, but we suspect that it's pretty much the same inner forces inside the black box each time, or the same rough group of inner forces, varying no more in this new case than has been observed on the previous cases that are as similar to each other as they are to this new case, selected by no different a criterion than we used to select this new case.  And so we think it'll be the same outcome all over again.

You're just drawing another ball, at random, from the same barrel that produced a lot of similar balls in previous random draws, and those previous balls told you a lot about the barrel.  Even if your estimate is a probability distribution rather than a point mass, it's a solid, stable probability distribution based on plenty of samples from a process that is, if not independent and identically distributed, still pretty much blind draws from the same big barrel.
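As a concrete sketch of the inference being described (with invented numbers, not the actual study data), the "outside view" estimator here amounts to nothing more than reading a prediction off the empirical distribution of previous outcomes; the new student's stated plan never enters the calculation:

    # Minimal sketch of the outside-view prediction in the homework case.
    # The numbers are made up for illustration; they are not the study data.
    from collections import Counter

    past_days_before_deadline = [1, 2, 1, 2, 1, 3, 1, 2]  # previous, similar cases
    new_student_plan = 8  # "I plan to finish 8 days early" -- deliberately ignored

    counts = Counter(past_days_before_deadline)
    n = len(past_days_before_deadline)
    distribution = {days: count / n for days, count in sorted(counts.items())}
    print(distribution)                               # {1: 0.5, 2: 0.375, 3: 0.125}
    print(max(distribution, key=distribution.get))    # most likely outcome: 1 day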

You've got strong information, and it's not that strange to think of stopping and making a prediction.

But now consider the analogous chain of inferences, the what do you think you know and how do you think you know it, of trying to take an outside view on self-improving AI.

What is our data?  Well, according to Robin Hanson:

  • Animal brains showed up in 550M BC and doubled in size every 34M years
  • Human hunters showed up in 2M BC, doubled in population every 230Ky
  • Farmers, showing up in 4700 BC, doubled every 860 years
  • Starting in 1730 or so, the economy started doubling faster, from 58 years in the beginning to a 15-year approximate doubling time now.

From this, Robin extrapolates, the next big growth mode will have a doubling time of 1-2 weeks.
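For concreteness, here is one crude way to mechanize that style of extrapolation - a geometric-mean projection over the listed doubling times. This is an illustration of the shape of the argument, not necessarily the calculation Robin actually performed; different choices about which modes to include or how to average the transition ratios shift the answer by a factor of a few (Robin's own figure is the 1-2 weeks quoted above):

    # Illustrative reconstruction only; not Hanson's actual procedure.
    doubling_times_years = [34e6, 230e3, 860, 15]   # the four modes listed above

    # How much the doubling time shrank at each transition.
    ratios = [a / b for a, b in zip(doubling_times_years, doubling_times_years[1:])]
    # ratios ~ [148, 267, 57]

    geo_mean = 1.0
    for r in ratios:
        geo_mean *= r
    geo_mean **= 1.0 / len(ratios)                  # ~130x shrinkage per transition

    next_doubling_days = doubling_times_years[-1] * 365.25 / geo_mean
    print(round(next_doubling_days))                # on the order of weeks (~6 with these numbers)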

So far we have an interesting argument, though I wouldn't really buy it myself, because the distances of difference are too large... but in any case, Robin then goes on to say:  We should accept this estimate flat, we have probably just gathered all the evidence we should use.  Taking into account other arguments... well, there's something to be said for considering them, keeping an open mind and all that; but if, foolishly, we actually accept those arguments, our estimates will probably get worse.  We might be tempted to try and adjust the estimate Robin has given us, but we should resist that temptation, since it comes from a desire to show off insider knowledge and abilities.

And how do we know that?  How do we know this much more interesting proposition that it is now time to stop and make an estimate - that Robin's facts were the relevant arguments, and that other arguments, especially attempts to think about the interior of an AI undergoing recursive self-improvement, are not relevant?

Well... because...

  • In January 2009, student X1 predicted they would finish their homework 10 days before deadline, and actually finished 1 day before deadline.
  • In February 2009, student X1 predicted they would finish their homework 9 days before deadline, and actually finished 2 days before deadline.
  • In March 2009, student X1 predicted they would finish their homework 9 days before deadline, and actually finished 1 day before deadline.
  • In January 2009, student X2 predicted they would finish their homework 8 days before deadline, and actually finished 2 days before deadline...

It seems to me that once you subtract out the scary labels "inside view" and "outside view" and look at what is actually being inferred from what - ask "What do you think you know and how do you think you know it?" - it doesn't really follow very well.  The Outside View that experiment has shown works better than the Inside View is pretty far removed from the "Outside View!" that taw cites in support of predicting against any epoch.  My own similarity metric puts the latter closer to the analogies of Greek philosophers, actually.  And I'd also say that trying to use causal reasoning to produce weak, vague, qualitative predictions like "Eventually, some AI will go FOOM, locally self-improvingly rather than global-economically" is a bit different from "I will complete this homework assignment 10 days before deadline".  (The Weak Inside View.)

I don't think that "Outside View!  Stop here!" is a good cognitive traffic signal to use so far beyond the realm of homework - or other cases of many draws from the same barrel, no more dissimilar to the next case than to each other, and with similarly structured forces at work in each case.

After all, the wider reference class of cases of telling people to stop gathering arguments, is one of which we should all be wary...

Comments (103)

I've put far more time than most into engaging your singularity arguments, my responses have consisted of a lot more than just projecting a new growth jump from stats on the last three jumps, and I've continued to engage the topic long after that June 2008 post. So it seems to me unfair to describe me as someone arguing "for ending argument-gathering" on the basis that this one projection says all that can be said.

I agree that inside vs outside viewing is a continuum, not a dichotomy. I'd describe the key parameter as the sort of abstractions used, and the key issue is how well grounded those abstractions are. Outside views tend to pretty directly use abstractions based on more "surface" features of widely known value. Inside views tend to use more "internal" abstractions, and inferences with longer chains.

Responding most directly to your arguments my main critiques have been about the appropriateness of your abstractions. You may disagree with them, and think I try to take the conversation in the wrong direction, but I don't see how I can be described as trying to halt the conversation.

I don't see how I can be described as trying to halt the conversation.

Allow me to disclaim that you usually don't. But that particular referenced post did, and taw tried it even more blatantly - to label further conversation as suspect, ill-advised, and evidence of morally nonvirtuous (giving in to pride and the temptation to show off) "inside viewing". I was frustrated with this at the time, but had too many higher-priority things to say before I could get around to describing exactly what frustrated me about it.

It's also not clear to me how you think someone should be allowed to proceed from the point where you say "My abstractions are closer to the surface than yours, so my reference class is better", or if you think you just win outright at that point. I tend to think that it's still a pretty good idea to list out the underlying events being used as alleged evidence, stripped of labels and presented as naked facts, and see how much they seem to tell us about the future event at hand, once the covering labels are gone. I think that under these circumstances the force of implication from agriculture to self-improving AI tends to sound pretty weak.

8Tyrrell_McAllister14y
I think that we should distinguish (1) trying to halt the conversation, from (2) predicting that your evidence will probably be of low quality if it takes a certain form. Robin seems to think that some of your evidence is a causal analysis of mechanisms based on poorly-grounded abstractions. Given that it's not logically rude for him to think that your abstractions are poorly grounded, it's not logically rude for him to predict that they will probably offer poor evidence, and so to predict that they will probably not change his beliefs significantly. I'm not commenting here on whose predictions are higher-quality. I just don't think that Robin was being logically rude. If anything, he was helpfully reporting which arguments are most likely to sway him. Furthermore, he seems to welcome your trying to persuade him to give other arguments more weight. He probably expects that you won't succeed, but, so long as he welcomes the attempt, I don't think that he can be accused of trying to halt the conversation.
6xamdam14y
Can someone please link to the posts in question for the latecomers?
2Cyan14y
Thanks to the OB/LW split, it's pretty awkward to try to find all the posts in sequence. I think Total Nano Domination is the first one*, and Total Tech Wars was Robin's reply. They went back and forth after that for a few days (you can follow along in the archives), and then restored the congenial atmosphere by jointly advocating cryonics. In fall 2009 they got into it again in a comment thread on OB. * maybe it was prompted by Abstract/Distant Future Bias.
7wedrifid14y
Don't neglect the surrounding context. The underlying disagreements have been echoing about all over the place in the form of "Contrarians boo vs Correct Contrarians yay!" and "here is a stupid view that can be classed as an inside view therefore inside view sucks!" vs "high status makes you stupid" and "let's play reference class tennis".
0Cyan14y
Good point. Hard to track down the links though.
2RobinHanson14y
There are many sorts of arguments that tend to be weak, and weak-tending arguments deserve to be treated warily, especially if their weakness tends not to be noticed. But pointing that out is not the same as trying to end a conversation. It seems to me the way to proceed is to talk frankly about various possible abstractions, including their reliability, ambiguity, and track records of use. You favor the abstractions "intelligence" and "self-improving" - be clear about what sort of detail those summaries neglect, why that neglect seems to you reasonable in this case, and look at the track record of others trying to use those abstractions. Consider other abstractions one might use instead.

I've got no problem with it phrased that way. To be clear, the part that struck me as unfair was this:

Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities. People usually justify this via reasons why the current case is exceptional. (Remember how all the old rules didn’t apply to the new dotcom economy?) So expect to hear excuses why the next singularity is also an exception where outside view estimates are misleading. Let’s keep an open mind, but a wary open mind.

9wedrifid14y
Another example that made me die a little inside when I read it was this: The inside-view tells me that is an idiotic assumption to make.
7Eliezer Yudkowsky14y
Agreed that this is the conflict of two inside views, not an inside view versus an outside view. You could as easily argue that most stars don't seem to have been eaten, therefore, the outside view suggests that any aliens within radio range are environmentalists. And certainly Robin is judging one view right and the other wrong using an inside view, not an outside view. I simply don't see the justification for claiming the power and glory of the Outside View at all in cases like this, let alone claiming that there exists a unique obvious reference class and you have it.
-1RobinHanson14y
It seems to me the obvious outside view of future contact is previous examples of contact. Yes, uneaten stars are also an outside stat, which does (weakly) suggest aliens don't eat stars. I certainly don't mean to imply there is always a unique inside view.

Why isn't the obvious outside view to draw a line showing the increased peacefulness of contacts with the increasing technological development of the parties involved, and extrapolate to super-peaceful aliens? Isn't this more or less exactly why you argue that AIs will inevitably trade with us? Why extrapolate for AIs but not for aliens?

To be clear on this, I don't simply distrust the advice of an obvious outside view, I think that in cases like these, people perform a selective search for a reference class that supports a foregone conclusion (and then cry "Outside View!"). This foregone conclusion is based on inside viewing in the best case; in the worst case it is based entirely on motivated cognition or wishful thinking. Thus, to cry "Outside View!" is just to conceal the potentially very flawed thinking that went into the choice of reference class.

9JGWeissman14y
Weakly? What are your conditional probabilities that we would observe stars being eaten, given that there exist star-eating aliens (within range of our attempts at communication), and given that such aliens do not exist? Or, if you prefer, what is your likelihood ratio?
4Eliezer Yudkowsky14y
This is an excellently put objection - putting it this way makes it clear just how strong the objection is. The likelihood ratio to me sounds like it should be more or less T/F, where for the sake of conservatism T might equal .99 and F might equal .01. If we knew for a fact that there were aliens in our radio range, wouldn't this item of evidence wash out any priors we had about them eating stars? We don't see the stars being eaten!
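Plugging those conservative likelihoods into Bayes' rule makes the point numerically (the prior here is an arbitrary illustration, not a figure from the thread):

    # Toy calculation: the .99/.01 likelihoods are from the comment above;
    # the 0.5 prior is purely illustrative.
    prior_eaters = 0.5            # P(aliens in range are star-eaters)
    p_obs_given_eaters = 0.01     # P(we see no eaten stars | star-eaters)
    p_obs_given_not = 0.99        # P(we see no eaten stars | no star-eaters)

    posterior = (prior_eaters * p_obs_given_eaters) / (
        prior_eaters * p_obs_given_eaters + (1 - prior_eaters) * p_obs_given_not
    )
    print(round(posterior, 3))    # 0.01 -- the observation nearly washes out the prior
    # Even starting from a 0.9 prior, the posterior only reaches ~0.08.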
0Tyrrell_McAllister14y
You think that it is idiotic to believe that many (not all) "broadcasters" expect that any aliens advanced enough to harm us will be peaceful?
2wedrifid14y
No. The idiotic assumption is that which is described as the 'standard' assumption. No indirection.
-2RobinHanson14y
To clarify, I meant an excess of reliance on the view, not of exploration of the view.

Forbidding "reliance" on pain of loss of prestige isn't all that much better than forbidding "exploration" on pain of loss of prestige. People are allowed to talk about my arguments but of course not take them seriously? Whereas it's perfectly okay to rely on your "outside view" estimate? I don't think the quoted paragraph is one I can let stand no matter how you reframe it...

3wedrifid14y
It could even be somewhat worse. Forbidden things seem to be higher status 'bad' than things that can be casually dismissed.
0Tyrrell_McAllister14y
The "on pain of loss of prestige" was implicit, if it was there at all. All that was explicit was that Robin considered your evidence to be of lower quality than you thought. Insofar as there was an implicit threat to lower status, such a threat would be implicit in any assertion that your evidence is low-quality. You seem to be saying that it is logically rude for Robin to say that he has considered your evidence and to explain why he found it wanting.

"inside view" and "outside view" seem misleading labels for things that are actually "bayesian reasoning" and "bayesian reasoning deliberately ignoring some evidence to account for flawed cognitive machinery". The only reason for applying the "outside view" is to compensate for our flawed machinery, so to attack an "inside view", one needs to actually give a reasonable argument that the inside view has fallen prey to bias. This argument should come first, it should not be assumed.

2RobinHanson14y
Obviously the distinction depends on being able to distinguish inside from outside considerations in any particular context. But given such a distinction there is no asymmetry - both views are not full views, but instead focus on their respective considerations.
3Alex Flint14y
Well an ideal Bayesian would unashamedly use all available evidence. It's only our flawed cognitive machinery that suggests ignoring some evidence might sometimes be beneficial. But the burden of proof should be on the one who suggests that a particular situation warrants throwing away some evidence, rather than on the one who reasons earnestly from all evidence.
4wedrifid14y
I don't think ideal Bayesians use burden of proof either. Who has the burden of proof in demonstrating that burden of proof is required in a particular instance?
2Alex Flint14y
Occam's razor: the more complicated hypothesis acquires a burden of proof.
2Eliezer Yudkowsky14y
In which case there's some specific amount of distinguishing evidence that promotes the hypothesis over the less complicated one, in which case, I suppose, the other would acquire this "burden of proof" of which you speak?
3Alex Flint14y
Not sure that I understand (I'm not being insolent, I just haven't had my coffee this morning). Claiming that "humans are likely to over-estimate the chance of a hard-takeoff singularity in the next 50 years and should therefore discount inside view arguments on this topic" requires evidence, and I'm not convinced that the standard optimism bias literature applies here. In the absence of such evidence one should accept all arguments on their merits and just do Bayesian updating.
3RobinHanson14y
If we are going to have any heuristics that say that some kinds of evidence tend to be overused or underused, we have to be able to talk about sets of evidence that are less than the total set. The whole point here is to warn people about our evidence that suggests people tend to over-rely on inside evidence relative to outside evidence.
3Alex Flint14y
Agreed. My objection is to cases where inside view arguments are discounted completely on the basis of experiments that have shown optimism bias among humans, but where it isn't clear that optimism bias actually applies to the subject matter at hand. So my disagreement is about degrees rather than absolutes: How widely can the empirical support for optimism bias be generalized? How much should inside view arguments be discounted? My answers would be, roughly, "not very widely" and "not much outside traditional forecasting situations". I think these are tangible (even empirical) questions and I will try to write a top-level post on this topic.
2jimmy14y
What would you call the classic drug testing example where you use the outside view as a prior and update based on the test results? If the test is sufficiently powerful, it seems like you'd call it using the "inside view" for sure, even though it really uses both, and is a full view. I think the issue is not that one ignores the outside view when using the inside view- I think it's that in many cases the outside view only makes very weak predictions that are easily dwarfed by the amount of information one has at hand for using the inside view. In these cases, it only makes sense to believe something close to the outside view if you don't trust your ability to use more information without shooting yourself in the foot- which is alexflint's point.
1RobinHanson14y
I really can't see why a prior would correspond more to an outside view. The issue is not when the evidence arrived, it is more about whether the evidence is based on a track record or reasoning about process details.
5jimmy14y
Well, you can switch around the order in which you update anyway, so that's not really the important part. My point was that in most cases, the outside view gives a much weaker prediction than the inside view taken at face value. In these cases using both views is pretty much the same as using the inside view by itself, so advocating "use the outside view!" would be better translated as "don't trust yourself to use the inside view!"
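A toy Gaussian version of this point (numbers invented): when the outside view is a wide, weak distribution and the inside view is sharp, the standard precision-weighted combination lands almost exactly on the inside view, so "use both" is nearly indistinguishable from "use the inside view":

    # Made-up numbers for illustration.
    prior_mean, prior_sd = 0.0, 10.0      # outside view: vague and wide
    inside_mean, inside_sd = 5.0, 1.0     # inside view (e.g. a powerful test): sharp

    prior_prec = prior_sd ** -2
    inside_prec = inside_sd ** -2
    post_prec = prior_prec + inside_prec
    post_mean = (prior_mean * prior_prec + inside_mean * inside_prec) / post_prec
    post_sd = post_prec ** -0.5

    print(round(post_mean, 2), round(post_sd, 2))   # 4.95 1.0 -- essentially the inside view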
1RobinHanson14y
I can't imagine what evidence you think there is for your claim "in most cases, the outside view gives a much weaker prediction."
5Eliezer Yudkowsky14y
Weakness as in the force of the claim, not how well-supported the claim may be.
2JGWeissman14y
This confuses me. What force of a claim should I feel, that does not come from it being well-supported?
8Eliezer Yudkowsky14y
Okay, rephrase: Suppose I pull a crazy idea out of my hat and scream "I am 100% confident that every human being on earth will grow a tail in the next five minutes!" Then I am making a very forceful claim, which is not well-supported by the evidence. The idea is that the outside view generally makes less forceful claims than the inside view - allowing for a wider range of possible outcomes, not being very detailed or precise or claiming a great deal of confidence. If we were to take both outside view and inside view perfectly at face value, giving them equal credence, the sum of the outside view and the inside view would be mostly the inside view. So saying that the sum of the outside view and the inside view equals mostly the outside view must imply that we think the inside view is not to be trusted in the strength it says its claims should have, which is indeed the argument being made.
2JGWeissman14y
Thank you, I understand that much better.

We say: "Would you care to make a side bet on that?"

And I'd say... "Sure! I recognize that I normally plan to finish 9 to 10 days early to ensure that I finish before the deadline and that I normally "fail" and only finish a day or two early (but still succeed at the real deadline)... but now, you've changed the incentive structure (i.e. the entire problem), so I will now plan to finish 9 or 10 days before my new deadline (necessary to take your money) of 9 or 10 days before the real deadline. Are you sure that you really want to make that side bet?"

I note also that "Would you care to make a side bet on that? is interesting as a potential conversation-filter but can also, unfortunately, act as a conversation-diverter.

7wedrifid14y
Good point. But do you also show people your cards when playing poker? I'd say 'Perhaps' and get straight into settling on acceptable odds and escrow system and throwing down the collateral.

Eliezer, the 'outside view' concept can also naturally be used to describe the work of Philip Tetlock, who found that political/foreign affairs experts were generally beaten by what Robin Dawes calls "the robust beauty of simple linear models." Experts relying on coherent ideologies (EDIT: hedgehogs) did particularly badly.

Those political events were affected by big systemic pressures that someone could have predicted using inside view considerations, e.g. understanding the instability of the Soviet Union, but in practice acknowledged experts were not good enough at making use of such insights to generate net improvements on average.

Now, we still need to assign probabilities over different models, not all of which should be so simple, but I think it's something of a caricature to focus so much on the homework/curriculum planning problems.

(It's foxes who know many things and do better; the hedgehog knows one big thing.)

I haven't read Tetlock's book yet. I'm certainly not surprised to hear that foreign affairs "experts" are full of crap on average; their incentives are dreadful. I'm much more surprised to hear that situations like the instability of the Soviet Union could be described and successfully predicted by simple linear models, and I'm extremely suspicious if the linear models were constructed in retrospect. Wasn't this more like the kind of model-based forecasting that was actually done in advance?

Conversely if the result is just that hedgehogs did worse than foxes, I'm not surprised because hedgehogs have worse incentives - internal incentives, that is, there are no external incentives AFAICT.

I have read Dawes on medical experts being beaten by improper linear models (i.e., linear models with made-up -1-or-1 weights and normalized inputs, if I understand correctly) whose factors are the judgments of the same experts on the facets of the problem. This ought to count as the triumph or failure of something but it's not quite isomorphic to outside view versus inside view.

I think there probably is a good reference class for predictions surrounding the singularity. When you posted on "what is wrong with our thoughts?" you identified it: the class of instances of the human mind attempting to think and act outside of its epistemologically nurturing environment of clear feedback from everyday activities.

See, e.g. how smart humans like Stephen Hawking, Ray Kurzweil, Kevin Warwick, Kevin Kelly, Eric Horowitz, etc have all managed to say patently absurd things about the issue, and hold mutually contradictory positions, with massive overconfidence in some cases. I do not exclude myself from the group of people who have said absurd things about the Singularity, and I think we shouldn't exclude Eliezer either. At least Eliezer has put in massive amounts of work for what may well be the greater good of humanity, which is morally commendable.

To escape from this reference class, and therefore from the default prediction of insanity, I think that bringing in better feedback and a large diverse community of researchers might work. Of course, more feedback and more researchers = more risk according to our understanding of AI motivations. But ultimately, that's an unavoidable trade-off; the lone madman versus the global tragedy of the commons.

Large communities don't constitute help or progress on the "beyond the realm of feedback" problem. In the absence of feedback, how is a community supposed to know when one of its members has made progress? Even with feedback we have cases like psychotherapy and dietary science where experimental results are simply ignored. Look at the case of physics and many-worlds. What has "diversity" done for the Singularity so far? Kurzweil has gotten more people talking about "the Singularity" - and lo, the average wit of the majority hath fallen. If anything, trying to throw a large community at the problem just guarantees that you get the average result of failure, rather than being able to notice one of the rare individuals or minority communities that can make progress using lower amounts of evidence.

I may even go so far as to call "applause light" or "unrelated charge of positive affect" on the invocation of a "diverse community" here, because of the degree to which the solution fails to address the problem.

6Roko14y
Good question. It seems that academic philosophy does, to an extent, achieve this. The mechanism seems to be that it is easier to check an argument for correctness than to generate it. And it is easier to check whether a claimed flaw in an argument really is a flaw, and so on. In this case, a mechanism where everyone in the community tries to think of arguments, and tries to think of flaws in others' arguments, and tries to think of flaws in the criticisms of arguments, etc, means that as the community size --> infinity, the field converges on the truth.
2wedrifid14y
With some of my engagements with academic philosophers in mind I have at times been tempted to lament that that 'extent' wasn't rather a lot greater. Of course, that may be 'the glass is half empty' thinking. I intuit that there is potential for a larger body of contributors to have even more of a correcting influence of the kind that you mention than what we see in practice!
3Roko14y
Philosophy has made some pretty significant progress in many areas. However, sometimes disciplines of that form can get "stuck" in an inescapable pit of nonsense, e.g. postmodernism or theology. In a sense, the philosophy community is trying to re-do what the theologians have failed at: answering questions such as "how should I live", etc.
1CarlShulman14y
Many-worlds has made steady progress since it was invented. Especially early on, trying to bring in diversity would get you some many-worlds proponents rather than none, and their views would tend to spread.
1Eliezer Yudkowsky14y
Think of how much more progress could have been made if the early many-worlds proponents had gotten together and formed a private colloquium of the sane, providing only that they had access to the same amount of per capita grant funding (this latter point being not about a need for diversity but a need to pander to gatekeepers).
8Roko14y
It isn't clear to me that the MWI-only group would have achieved anything extra - do you think that they would have done?
3whpearson14y
Logged in to vote this up... However I wouldn't go the lots-of-people route either. At least not until decent research norms had been created. The research methodology that has been mouldering away in my brain for the past few years is the following: We can agree that computational systems might be dangerous (in the FOOM sense). So let us start from the basics and prove that bits of computer space aren't dangerous, either by experiments we have already done (or that have been done by nature) or by formal proof. Humanity has played around with basic computers and networked computers in a variety of configurations; if our theories say that they are dangerous then our theories are probably wrong. Nature has created, and is also in the process of creating, many computational systems. The gene networks I mentioned earlier are one example; and if you want to look at the air around you as a giant quantum billiard ball computer of sorts, then giant ephemeral networks of "if molecule A collides with molecule B then molecule A will collide with molecule C" type calculations are being performed all around you without danger. The proof section is more controversial. There are certain mathematical properties I would expect powerful systems to have. The ability to accept recursive languages and also modify internal state (especially state that controls how they accept languages) based on them seems crucial to me. If we could build up a list of properties like this we can prove that certain systems don't have them and aren't going to be dangerous. You can also correlate the dangerousness of parts of computational space with other parts of computational space. One way of looking at self-modifying systems is that they are equivalent to non-self-modifying systems with infinite program memory and a bit of optimisation. As, if you can write a program that changes function X to Y when it sees input Z, you can write a program that chooses to perform function X rather than Y if input Z has been seen usi
2Roko14y
Thanks! I think you may be overestimating how much work formal proof can do here. For example, could formal proof have proved that early hominids would cause the human explosion?
0whpearson14y
Data about the world is very important in my view of intelligence. Hominid brains were collecting lots of information about the world, then losing it all when they were dying, because they couldn't pass it all on. They could only pass on what they could demonstrate directly. (Lots of other species were doing so as well, so this argument applies to them as well.) The species that managed to keep a hold of this lost information and spread it far and wide, you could probably prove would have a different learning pattern to the "start from scratch-learn/mimic-die" model of most animals, and potentially explode as "things with brains" had before. Could you have proven it would be hominids? Possibly; you would need to know more about how the systems could realistically spread information between them, including protection from lying and manipulation. And whether hominids had the properties that made them more likely to explode.

Hanson's argument was interesting but ultimately I think it's just numerology - there's no real physical reason to expect that pattern to continue, especially given how different/loosely-related the 3 previous changes were.

This is an excellent post, thank you.

An earlier comment of yours pointed out that one compensates for overconfidence not by adjusting one's probability towards 50%, but by adjusting it towards the probability that a broader reference class would give. In this instance, the game of reference class tennis seems harder to avoid.
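One simple way to cash out "adjust towards the reference class rather than towards 50%" is to shrink in log-odds space; the particular numbers and weighting scheme below are arbitrary illustrations, not something from the original comment:

    import math

    def logit(p):
        return math.log(p / (1 - p))

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    inside_view_p = 0.9         # made-up confident inside-view estimate
    reference_class_p = 0.3     # made-up base rate from a broader reference class
    trust_in_inside_view = 0.5  # how much we trust our own calibration

    # Shrink towards the reference class in log-odds space.
    combined = sigmoid(
        trust_in_inside_view * logit(inside_view_p)
        + (1 - trust_in_inside_view) * logit(reference_class_p)
    )
    print(round(combined, 2))   # 0.66 -- pulled towards 0.3, not towards 0.5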

4RobinZ14y
Speaking of that post: It didn't occur to me when I was replying to your comment there, but if you're arguing about reference classes, you're arguing about the term in your equation representing ignorance. I think that is very nearly the canonical case for dropping the argument until better data comes in.
4arundelo14y
My introduction to that idea was RobinZ's "The Prediction Hierarchy".
0Eliezer Yudkowsky14y
Agreed.
3komponisto14y
So what do we do about it?

It seems like the Outside View should only be considered in situations which have repeatedly provided consistent results. The procrastinating student is an example of this: the event has been repeated numerous times with closely similar outcomes.

If the data is so insufficient that you have a hard time casting it to a reference class, that would imply that you don't have enough examples to make a reference and that you should find some other line of argument.

This whole idea of outside view is analogous to instance based learning or case based reasoning. Y... (read more)

2taw14y
... then the data is most likely insufficient for reasoning in any other way. Reference class of smart people's predictions of the future performs extremely badly, even though they all had some real good inside view reasons for them.
6Matt_Stevenson14y
I'm not sure what you are trying to argue here? I am saying that trying to use a reference class prediction in a situation where you don't have many examples of what you are referencing is a bad idea and will likely result in a flawed prediction. You should only try and use the Outside View if you are in a situation that you have been in over and over and over again, with the same concrete results. If you are presented with a question about a post-singularity world, and the only admissible evidence (reference class) is the one taw cites, then I'm sorry, but I am not going to trust any conclusion you draw. That is a really small class to draw from, small enough that we could probably name each instance individually. I don't care how smart the person is. If they are assigning probabilities from sparse data, it is just guessing. And if they are smart, they should know better than to call it anything else. There have been no repeated trials of singularities with consistent unquestionable results. This is not like procrastinating students and shoppers, or estimations in software. Without enough data, you are more likely to invent a reference class than anything else. I think the Outside View is only useful when your predictions for a specific event have been repeatedly wrong, and the actual outcome is consistent. The point of the technique is to correct for a bias. I would like to know that I actually have a bias before correcting it. And, I'd like to know which way to correct. Edit: formatting
5Alex Flint14y
I don't think they all had "good inside view reasons" if they were all, in fact, wrong! Perhaps they thought they had good reasons, but you can't conclude from this all future "good-sounding" arguments are incorrect.

I may be overlooking something, but I'd certainly consider Robin's estimate of 1-2 week doublings a FOOM. Is that really a big difference compared with Eliezer's estimates? Maybe the point in contention is not the time it takes for super-intelligence to surpass human ability, but the local vs. global nature of the singularity event; the local event taking place in some lab, and the global event taking place in a distributed fashion among different corporations, hobbyists, and/or governments through market mediated participation. Even this difference isn... (read more)

6wedrifid14y
I think Eliezer estimates 1-2 weeks until game over. An intelligence that has undeniable, unassailable dominance over the planet. This makes measures of economic output almost meaningless. I think you're right on the mark with this one. My thinking diverges with yours here. The global scenario gives a fundamentally different outcome than a local event. If participation is market mediated then the influence is determined by typical competitive forces. Whereas a local foom gives a singularity and full control to whatever effective utility function is embedded in the machine, as opposed to a rapid degeneration into a hardscrapple hell. More directly, in the local scenario that Eliezer predicts, outside contributions stop once 'foom' starts. Nobody else's help is needed. Except, of course, as cat's paws while bootstrapping.

If the Japanese students had put as much effort into their predictions as Eliezer has put into thinking about the singularity then I dare say they would have been rather more accurate, perhaps even more so than the "outside view" prediction.

I prefer the outside view when speaking with good friends, because they know me well enough to gather what I'm really saying isn't 'Stop Here!' but rather 'Explain to me why I shouldn't stop here?'

Perhaps this isn't really the outside view but the trappings of the outside view used rhetorically to test whether the other party is willing to put some effort into explaining their views. The Outside View as a test of your discussion partner.

The Inside View can be a conversation halter as well; going 'farther inside' or 'farther outside' than your partner can d... (read more)

Someone should point out that "That is a conversation-halter" is very often a conversation halter.

5grouchymusicologist14y
Very often? Really? Any examples you could cite for us?
2Liron14y
An even bigger conversation halter is pointing out meta-inconsistency.
6Eliezer Yudkowsky14y
Um... no they're not? Refuting a specific point is not the same as trying to halt a debate and stop at the current answer.
0[anonymous]14y
Um... no it's not?
0wedrifid14y
By 'often' do you mean 'since Eliezer introduced the term he has used it as a conversation halter in every post he has made'? Although even then it didn't so much halt the conversation as it did preface an extensive argument with reasoning.

The outside view technique is as follows:

You are given an estimation problem f(x) = ?, where x is noisy and you don't know all of the internals of f. First choose any set of functions F containing f. Then find a huge subset G of F and a set of inputs Y such that for every g in G and every y in Y, g(y) is (say) bounded to some nice range R. Now find your probability p that x is in Y and your probability q that f is in G. Then with probability p*q, f(x) is in R, and this particular technique says nothing about f(x) in the remaining 1-p*q of your distribution.

Sometimes this is extremely helpfu... (read more)
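The probability bookkeeping in that recipe can be checked with a toy Monte Carlo run; all numbers below are invented, and independence of the two probability judgments is assumed:

    import random

    p, q = 0.9, 0.8        # made-up: P(x in Y) and P(f in G), judged independently
    trials = 100_000

    covered = sum(
        1 for _ in range(trials)
        if random.random() < p and random.random() < q
    )
    # In the "covered" fraction of trials, f(x) is guaranteed to land in R;
    # in the remaining trials this particular technique is simply silent.
    print(covered / trials)   # ~0.72, i.e. roughly p * q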

Again, even if I don't have time for it myself, I think it would be useful to gather data on particular disagreements based on such "outside views" with vague similarities, and then see how often each side happened to be right.

Of course, even if the "outside view" happened to be right in most such cases, you might just respond with the same argument that all of these particular disagreements are still radically different from your case. But it might not be a very good response, in that situation.

9Eliezer Yudkowsky14y
But the original problem was that, in order to carry an argument about the "outside view" in case of what (arguendo) was so drastic a break with the past as to have only two possible analogues if that, data was being presented about students guessing their homework times. If instead you gather data about predictions made in cases of, say, the industrial revolution or the invention of the printing press, you're still building a certain amount of your conclusion into your choice of data. What I would expect to find is that predictive accuracy falls off with dissimilarity and attempted jumps across causal structural gaps, that the outside view wielded with reasonable competence becomes steadily less competitive with the Weak Inside View wielded with reasonable competence. And this could perhaps be extrapolated across larger putative differences, so as to say, "if the difference is this large, this is what will happen". But I would also worry that outside-view advocates would take the best predictions and reinterpret them as predictions the outside view "could" have made (by post-hoc choice of reference class) or because the successful predictor referenced historical cases in making their argument (which they easily could have done on the basis of having a particular inside view that caused them to argue for that conclusion), and comparing this performance to a lot of wacky prophets being viewed as "the average performance of the inside view" when a rationalist of the times would have been skeptical in advance even without benefit of hindsight. (And the corresponding wackos who tried to cite historical cases in support not being considered as average outside viewers - though it is true that if you're going crazy anyway, it's a bit easier to go crazy with new ideas than with historical precedents.) And outside-view advocates could justly worry that by selecting famous historical cases, we are likely to be selecting anomalous breaks with the past that (arguendo) could
0[anonymous]14y
Which outside view do you propose to use to evaluate the intelligence explosion, if you find that "the" outside view is usually right in such cases?

I always love reading Less Wrong. I am just sometimes confused, for many days, about what exactly I have read. Until, something pertinent comes along and reveals the salience of what I had read, and then I say "OH! Now I get it!"

At present, I am between those two states... Waiting for the Now I get it moment.

2gwern13y
Perhaps it would be better to wait until you get it and then post about how you got it, than to comment that you don't get it. That would be much more interesting to read.
0MatthewB13y
But it would also not have the function of letting others who may struggle with certain concepts know that they are not alone in struggling.

"Eventually, some AI will go FOOM, locally self-improvingly rather than global-economically"

Ouch. This statement smells to me like backtracking from your actual position. If you honestly have no time estimate beyond "eventually", why does the SIAI exist? Don't you have any more urgent good things to do?

(edited to remove unrelated arguments, there will be time for them later)

I honestly don't understand your objection. "Eventually" means "sometime between tomorrow and X years from now" where my probability distribution over X peaks in the 20-40 range and then starts dropping off but with a long tail because hey, gotta widen those confidence intervals.

If I knew for an absolute fact that nothing was going to happen for the next 100 years, it would still be a pretty damned urgent problem, you wouldn't want to just let things slide until we ended up in a position as awful as the one we probably occupy in real life.

I still feel shocked when I read something like this and remember how short people's time horizons are, how they live in a world that is so much tinier than known space and time, a world without a history or a future or an intergalactic civilization that bottlenecks through it. Human civilization has been around for thousands of years. Anything within the next century constitutes the last minutes of the endgame.

0[anonymous]14y
20 to 40 years? Didn't you disavow that?
2Vladimir_Nesov14y
There is a path of retreat from belief in sudden FOOM, that still calls for working on FAI (no matter what is feasible, we still need to preserve human value as effectively as possible, and FAI is pretty much this project, FOOM or not): * Relevance of intelligence explosion

It's a bit off topic, but I've been meaning to ask Eliezer this for a while. I think I get the basic logic behind "FOOM." If a brain as smart as ours could evolve from pretty much nothing, then it seems likely that sooner or later (and I have not the slightest idea whether it will be sooner or later) we should be able to use the smarts we have to design a mind that is smarter. And if we can make a mind smarter than ours, it seems likely that that mind should be able to make one smarter than it, and so on. And this process should be pretty explosi... (read more)

Once again, Bayesian reasoning comes to the rescue. The assertion to stop updating based on new data (ignore the inside view!) is just plain wrong.

However a reminder to be careful and objective about the probability one might assign to a new bit of data (Inside view data is not privileged over outside view data! And it might be really bad!) is helpful.

The assertion to stop updating based on new data (ignore the inside view!) is just plain wrong.

I'd like to be able to say that, but there actually is research showing how human beings get more optimistic about their Christmas shopping estimates as they try to visualize the details of when, where, and how.

Your statement is certainly true of an ideal rational agent, but it may not carry over into human practice.

1Cyan14y
Are updating based on new data and updating based on introspection equivalent? If not, then LongInTheTooth equivocated by calling ignoring the inside view a failure to update based on new data. But maybe they are equivalent under a non-logical-omniscience view of updating, and it's necessary to factor in meta-information about the quality and reliability of the introspection.
2LongInTheTooth14y
"But maybe they are equivalent under a non-logical-omniscience view of updating, and it's necessary to factor in meta-information about the quality and reliability of the introspection." Yes, that is what I was thinking in a wishy-washy intuitive way, rather than an explicit and clearly stated way, as you have helpfully provided. The act of visualizing the future and planning how long a task will take based on guesses about how long the subtasks will take, I would call generating new data which one might use to update a probability of finishing the task on a specific date. (FogBugz Evidence Based Scheduling does exactly this, although with Monte Carlo simulation, rather than Bayesian math) But research shows that when doing this exercise for homework assignments and Christmas shopping (and, incidentally, software projects), the data is terrible. Good point! Don't lend much weight to this data for those projects. I see Eliezer saying that sometimes the internally generated data isn't bad after all. So, applying a Bayesian perspective, the answer is: Be aware of your biases for internally generated data (inside view), and update accordingly. And generalizing from my own experience, I would say, "Good luck with that!"

I see a couple of problems with classifying "intelligent design by machines" as a big evolutionary leap away from "intelligent design by humans".

The main difference is one of performance. Performance has been increasing gradually anyway - what is happening now is that it is now increasing faster.

Also, humans routinely augment their intelligence by using machine tools - blurring any proposed line between the machine-augmented humans that we have today and machine intelligence.

My favoured evolutionary transition classification scheme is ... (read more)

You seem to be phrasing this as an either/or decision.

Remembering that all decisions are best formalized as functions, the outside view asks, "What other phenomena are like this one, and what did their functions look like?" without looking at the equation. The inside view tries to analyze the function. The hardcore inside view tries to actually plot points on the function; the not-quite-so-ambitious inside view just tries to find its zeros, inflection points, regions where it must be positive or negative, etc.

For complex issues, you should go b... (read more)

7wedrifid14y
That is the real conversation halter. "Appeal to the outside view" is usually just a bad argument, making something silly regardless of the answer, that's a conversation halter and a mind killer.
2PhilGoetz14y
If something really is silly, then saying so is a mind-freer, not a mind-killer. If we were actually having that conversation, and I said "That's silly" at the start, rather than at the end, you might accuse me of halting the conversation. This is a brief summary of a complicated position, not an argument.
1MichaelVassar14y
I don't think that many people expect the elimination of resource constraints. Regarding the issue of FAI as Santa, wouldn't the same statement apply to the industrial revolution? Regarding reviving the cryonically suspended, yes, probably a confusion, but not clearly, and working within the best model we have, the answer is "plausibly" which is all that anyone claims.
-2PhilGoetz14y
The industrial revolution gives you stuff. Santa gives you what you want. When I read people's dreams of a future in which an all-knowing benevolent Friendly AI provides them with everything they want forever, it weirds me out to think these are the same people who ridicule Christians. I've read interpretations of Friendly AI that suggest a Friendly AI is so smart, it can provide people with things that are logically inconsistent. But can a Friendly AI make a rock so big that it can't move it?
1MichaelVassar14y
Citation needed. The economy gives you what you want. Cultural snobbishness gives you what you should want. Next...
[-][anonymous]14y-10

The "Outside View" is the best predictor of timeframes and happiness (Dan Gilbert 2007)

The reason why people have chosen to study students and what they think about homework is probably because they supposed there were mistakes in average students' declarations. Same for studying affective forecasting (the technical term for 'predicting future happiness given X').

I suspect this kind of mistake does not happen in engineering companies when they evaluate next year's profits given their new technical environment... Therefore no one bothers to study engineers' inside views.
