I think some of our recent arguments against applying the outside view are wrong.

1. In response to taw's post, Eliezer paints the outside view argument against the Singularity thus:

...because experiments show that people could do better at predicting how long it will take them to do their Christmas shopping by asking "How long did it take last time?" instead of trying to visualize the details.

This is an unfair representation. One of the poster-child cases for the outside view (mentioned by Eliezer, no less!) dealt with students trying to estimate completion times for their academic projects. And what is AGI if not a research project? One might say AGI is too large for the analogy to work, but outside view helpfully tells us that large projects aren't any more immune to failures and schedule overruns :-)

2. In response to my comment claiming that Dennett didn't solve the problem of consciousness "because philosophers don't solve problems", ciphergoth writes:

This "outside view abuse" is getting a little extreme. Next it will tell you that Barack Obama isn't President, because people don't become President.

The outside view may be rephrased as "argument from typicality". If we'd just heard of this random dude named Barack Obama, we'd be perfectly justified in saying he won't become President! Which would be the proper analogy to first hearing about Dennett and his work. Another casual application of the outside view corroborates the conclusion: what other problems has Dennett solved? Is the problem of consciousness the first problem he solved? Does this seem typical of anything?
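A back-of-envelope version of the base rate involved (the figures below are rough assumptions, just to show the order of magnitude):

```python
# Rough base rate for "a randomly named American becomes President":
# on the order of one new president every four to eight years, drawn from
# a pool of a couple hundred million eligible adults (assumed figures).
new_presidents_per_decade = 2
eligible_adults = 200_000_000

base_rate = new_presidents_per_decade / eligible_adults
print(f"P(random person becomes President) ~ {base_rate:.0e}")  # roughly 1e-8
```

Against a prior on the order of one in a hundred million, "he won't become President" is the right default guess about any name you've just heard for the first time.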

3. Technologos attacks taw's post, again, with the following argument:

"beliefs that the future will be just like the past" have a zero success rate.

For each particular highly speculative technology, we can assert with high confidence (say 90%) that it won't appear. But this doesn't mean the future will be the same in all respects! The conjunction of many 90%-statements (X won't appear, AND Y won't appear, and so on) gets assigned the product of their probabilities, a very low confidence, as it should. We're sure that some new technologies will arise; we just don't know which ones. Fusion power? Flying cars? We've been on the fast track to those for some time now, and they still sound less far out than the Singularity! Anyone who's worked with tech for any length of time can recite a looooong list of Real Soon Now technologies that never materialized.
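To make the arithmetic concrete, here is a minimal sketch of how independent per-technology confidences multiply out (the 90% figure is the illustrative assumption from above; the counts are arbitrary):

```python
# Toy calculation: assume each of n speculative technologies independently
# has a 90% chance of NOT materializing. The probability that none of them
# materializes is the product, which shrinks quickly as n grows.
p_wont_appear = 0.9  # assumed per-technology confidence

for n in (1, 5, 10, 20, 50):
    p_none = p_wont_appear ** n
    print(f"{n:2d} technologies: P(none appear) = {p_none:.3f}")
```

With 50 candidate technologies the product is about half a percent, which is why betting against every specific technology is compatible with being nearly certain that some of them will arrive.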

4. In response to a pro-outside-view comment by taw, wedrifid snaps:

Choosing a particular outside view on a topic which the poster allegedly 'knows nothing about' would be 'pulling a superficial similarity out of his arse'.

Well, duh. If the red pill doesn't make you offended about your pet project, you aren't taking enough of it :-) The method works with nonzero efficiency as long as we're pattern-matching on relevant traits or any traits causally connected to relevant traits, which means pretty much every superficial similarity gives you nonzero information. And the conjunction rule applies, so the more similar stuff you can find, the better. 'Pulling a similarity out of your arse' isn't something to be ashamed of - it's the whole point of the outside view. Even a superficial similarity is harder to fake, more entangled with reality, more objective than a long chain of reasoning or a credence percentage you came up with. In real-world reasoning, parallel beats sequential.
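To sketch what "the conjunction rule applies" means here (the prior and likelihood ratios below are made-up illustrative numbers): treat each superficial similarity to past failed projects as a weak, independent piece of evidence; in odds form the likelihood ratios simply multiply, so several weak matches add up to a substantial update.

```python
# Toy Bayesian update in odds form (illustrative numbers only).
# Each "superficial similarity" to past failures is modeled as independent
# evidence with a likelihood ratio slightly below 1 for project success.
def posterior_probability(prior_prob, likelihood_ratios):
    """Combine a prior with independent likelihood ratios via odds."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.30                    # hypothetical prior that the project succeeds
similarities = [0.7, 0.7, 0.8]  # each weak match nudges the odds downward

print(f"P(success | similarities) ~ {posterior_probability(prior, similarities):.2f}")
```

No single match is decisive, but three of them together move a 30% prior down to roughly 14%.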

In conclusion, let's grant the advocates of inside-view, object-level arguments the benefit of the doubt one last time. Conveniently, the handful of people who say we must believe in the Singularity are all doing work in the AGI field. We can gauge exactly how believable their object-level arguments are by examining their past claims about the schedules of their own projects - the perfect case for the inside view if there ever was one... No, I won't spell out the sordid collection of hyperlinks here. Every reader is encouraged to Google on their own for past announcements by Doug Lenat, Ben Goertzel, Eliezer Yudkowsky (those are actually the heroes of the bunch), or other people that I'm afraid to name at the moment.

Comments (29)

You forgot to subscript; I think you meant Eliezer_1998, who had just turned old enough to vote, believed in ontologically basic human-external morality, and was still babbling about Moore's Law in unquestioning imitation of his elders. I really get offended when people compare the two of us.

Growing up on the Internet is like walking around with your baby pictures stapled to your forehead.

I also consider it an extremely basic fallacy, and extremely annoying, to lump together "people who predict AI arriving in 10 years" and "people who predict AI arriving at some unknown point in the future" into the same reference class, so that the previous failure of the former class of predictions becomes an argument for the failure of the latter class; that is, since some AI scientists have overpromised in the short run, AI must be physically impossible in the long run. After all, it's the same charge of negative affect in both cases, right?

[anonymous]:

After reading your comment like 20 times I still have no idea what you object to. Your timeframes might have grown more flexible since 1998, but you're still making statements about the future of technology, so my reference class includes you. Personally I think AI is physically possible, but something else will happen instead, like with flying cars.

Which reference to you calls for that subscript?

Every reader is encouraged to Google on their own for past announcements by Doug Lenat, Ben Goertzel, Eliezer Yudkowsky (those are actually the heroes of the bunch), or other people that I'm afraid to name at the moment.

Presumably.

[anonymous]:

to lump together "people who predict AI arriving in 10 years" and "people who predict AI arriving at some unknown point in the future"

My bad, I thought you belonged to the first group (replacing 10 with <=50). Perhaps you should consider joining it anyway, because the second group is making an unfalsifiable prediction.

This whole debate looks like a red herring to me. The entire distinction makes no sense: all views are outside views. Our only knowledge of the future comes from knowledge of regularities. So all arguments are arguments from typicality. Some of that knowledge comes from surveys of how long it takes someone to finish a project. Some comes from experimental science. Some of that knowledge comes from repeated personal experience, say completing lots of projects on time. Some of it is innate, driven into us through generations of evolution. But all of it is outside view. The so-called "inside view" arguments are just a lot harder to express by pointing to a single reference class. We believe Barack Obama is President because widely held beliefs about who holds important government positions are usually accurate, because the media doesn't lie about such things, because the people who get referred to as "President X" usually are President, etc.

Those who are saying they are taking the outside view are just ignoring some of the relevant regularities for these big issues. Now there might be reason to disregard some of those regularities. For example, it seems clear that people are too biased to estimate how long it will take them to complete certain kinds of tasks. In these cases then, it makes sense to disregard their self-estimations. It turns out, in other words, that self-estimation isn't a very reliable regularity. There are other biases that will cause us to think something is reliable evidence when it isn't (or isn't once better evidence is considered). But the right approach is to identify those biases, not just assume some data isn't good evidence because it is part of this mysterious "inside view".

If AGI researchers are all suffering from a bias that leads them to conclude AGI will happen when they shouldn't, I'm sure they would appreciate knowing that. If this is the case, someone should describe the bias and point to examples. But you can't just ignore their arguments that AGI will happen and claim higher ground with "the outside view". Every view is an outside view; the question is which views are biased.

One of the poster-child cases for the outside view (mentioned by Eliezer, no less!) dealt with students trying to estimate completion times for their academic projects. And what is AGI if not a research project? One might say AGI is too large for the analogy to work, but outside view helpfully tells us that large projects aren't any more immune to failures and schedule overruns :-)

Hence Eliezer_2009's refusal to make quantitative predictions.

If we'd just heard of this random dude named Barack Obama, we'd be perfectly justified in saying he won't become President! Which would be the proper analogy to first hearing about Dennett and his work.

If you're not going to look at Dennett's arguments and you're not going to take into account others' claims that Dennett is interesting, how will your opinion ever change?

I will update if/when I see a short convincing explanation of his solution or a substantial number of other people acknowledge that it's correct. My little experience with Dennett's writing hasn't yet turned up anything interesting, much less anything novel+correct.

Really? Reading a new Dennett book or paper is always a joy for me.

If you'd like to get more Dennett in short bits, this page has a nice archive.

There are a lot of links on that page; that's a joy to a Dennett fan such as myself (I upvoted!), but for someone not previously interested, pointing out a couple of particularly enjoyable ones might be helpful.

Edit: I like Explaining the 'Magic' of Consciousness - it, too, is relevant to the recent remarks on consciousness, and it follows Dennett's analytical style quite closely.

I'd recommend clicking on links with interesting titles, but some ones to check out:

I find the intentional stance interesting, although we won't get much further with AI unless we can switch to the design stance toward an intelligence.

One of the poster-child cases for the outside view (mentioned by Eliezer, no less!) dealt with students trying to estimate completion times for their academic projects. And what is AGI if not a research project? One might say AGI is too large for the analogy to work, but outside view helpfully tells us that large projects aren't any more immune to failures and schedule overruns :-)

Thus, you would be perfectly within your rights to say that most people predicting AI believe that it will arrive long before it actually does. You can use the outside view to deny timeframes and specific projects, not the basic possibility of transhuman AI.

This "outside view abuse" is getting a little extreme. Next it will tell you that Barack Obama isn't President, because people don't become President.

The outside view may be rephrased as "argument from typicality". If we'd just heard of this random dude named Barack Obama, we'd be perfectly justified in saying he won't become President!

So what you're saying is that the outside view quickly has to defer to information from the inside view, such as that Barack Obama is in fact President - or that, for example, the plausibility that Romney might be President is much higher than one in 300 million.

Yes, completely agreed about Romney. If someone shows how Dennett stands out from the crowd of philosophers who claim to understand consciousness (which includes Hofstadter and Penrose), or how the Singularity stands out from the crowd of failed tech predictions, this will convince me. But evidence of the "just believe me I'm special" variety won't, unless it's as impeccable as 2*2=4.

But evidence of the "just believe me I'm special" variety won't.

Oh come on, no-one is arguing that this should convince you. Obviously the outside view is the correct position in the absence of inside view evidence, no-one disputes that. The dispute is this: some people seem to believe that any effort to look into the details of the claim rather than taking the outside view is simply a self-serving effort to avoid the conclusions the outside view brings you to. Taken to extremes this leads to the position I'm mocking. If you agree that inside view evidence is worth examining, then we're on the same side in this discussion.

If you agree that inside view evidence is worth examining, then we're on the same side in this discussion.

Isn't the whole point of the outside view, as laid out in Eliezer's original post, that sometimes you can get a better prediction by deliberately ignoring relevant inside view evidence? We need an algorithm to determine which inside view evidence to ignore, and the optimal algorithm clearly can't be either "all" or "none".

My instinct is to be conservative in how much inside view evidence to ignore. That is, only adopt the outside view in circumstances substantially similar to one of the experiments showing an advantage for the outside view.

In the case of cousin_it's claim about Dennett not having solved the problem of consciousness, he seems to be saying that we should ignore the evidence that is constituted by the words in Dennett's book, but take into account personal information about Dennett as a philosopher. I don't see how this position is supportable by the empirical evidence.

Isn't the whole point of the outside view, as laid out in Eliezer's original post, that sometimes you can get a better prediction by deliberately ignoring relevant inside view evidence? We need an algorithm to determine which inside view evidence to ignore, and the optimal algorithm clearly can't be either "all" or "none".

This is exactly the right way to state it. The question is, when is it better to ignore evidence? More precisely, when is it better for a human to ignore evidence? What, if any, biases and limitations of the human mind make inside-view reasoning dangerous?

This is an empirical question, to be settled by a study of human cognition. It's not an abstract epistemological question that can be settled by arm-chair reasoning.

This is exactly the right way to state it. The question is, when is it better to ignore evidence? More precisely, when is it better for a human to ignore evidence? What, if any, biases and limitations of the human mind make inside-view reasoning dangerous?

I failed to convey an idea of my answer in "Consider representative data sets". Prototypes that come to mind in planning are not representative (they are about efficient if-all-goes-well plans and not their real-world outcomes), and so should either be complemented by more concepts to make up representative data sets (which doesn't work in practice), or forcefully excluded from consideration. The deeper the inside view the better, unless you are arriving at an answer intuitively under the conditions of predictably tilted availability.

Dennett's discussion of consciousness proceeds from a strongly empirical standpoint, with an emphasis on using current scientific knowledge as well as inventing further experiments. This gives his conclusions a reasonable backing in observation, unlike those of most philosophers, and not just on the topic of consciousness.

He is further known in the field of action and responsibility, particularly with his book Elbow Room: The Varieties of Free Will Worth Wanting, so it's not like he burst into the field from nowhere.

Dennett and Hofstadter agree in large part, and even collaborated on a book about consciousness, so it's strange to lump them in with Penrose.

Penrose furthermore being a mathematical physicist...

[anonymous]:

Penrose furthermore being a physicist...

Well, duh. If the red pill doesn't make you offended about your pet project, you aren't taking enough of it

You appear confused about both my attitude ('contemptuous and condescending' would be more accurate than 'offended') and my position. Quoting the more relevant sentence from the same comment:

Replace the 'outside view' reference with the far more relevant reference to 'expert consensus'.

If you do, in fact, 'know nothing about' something, it is best to base your opinion on the majority opinion of experts. It is silly to just throw about with confidence either numbers or analogies that happen to best fit your intuition.

Choosing a particular outside view on a topic which the poster allegedly 'knows nothing about' would be 'pulling a superficial similarity out of his arse'.

You are quoting out of context a sentence that consists almost entirely of quotes that ironically reference the context. Don't do that.

Hmm, I wonder what would be appropriate outside views to give a good estimate of the dangers of AI?

Number of species made extinct by competition rather than natural disaster? (Assume AI is something like a new species)

How well humans can control and predict technologies?

I'm willing to believe that if AI-roughly-as-described-by-Eliezer gets developed, it will be able to exterminate humanity, because we apparently have already invented weapons that can exterminate humanity. As for the chance of such AI getting developed at all, why not apply the usual reference classes of futuristic technology?

ETA: or, more specifically, futuristic software.

Judging from people's previous predictions about when we will get futuristic software, I am quite happy to push the likelihood of them being on the right track to quite low levels*. Which is why I am interested in ways of ruling out approaches experimentally, if at all possible.

However, even if we eliminate their approaches, we still don't know the chances of non-Eliezer-like (or Goertzelian, etc.) futuristic software wiping out humanity. So we are back to square one.

*Any work on possible AIs that I want to explore, I mainly view as trying to rule out a possible angle of implementation. And I consider AI part of a multi-generational, humanity-wide effort to understand the human brain.

To be clear, I wasn't arguing against applying the outside view--just against the belief that the outside view gives AGI a prior/outside view expected chance of success of (effectively) zero. The outside view should incorporate the fact that some material number of technologies not originally anticipated or even conceived do indeed materialize: we expected flying cars, but we got the internet. Even a 5% chance of Singularity seems more in line with the outside view than the 0% claimed in the reference class article, no?

I agree with your comment on the previous post, incidentally, that the probability of the Singularity as conceived by any individual or even LW in general is low; the possible types of Singularity are so numerous that it would be rather shocking if we could get it right from our current perspective. Again, I was responding only to the assertion that the outside view shows no successes for the class of breakthroughs containing AGI/cryo/Singularity.

I should note too that the entirety of the quotation you ascribe to me is originally from Eliezer, as the omitted beginning of the quoted sentence indicates.

You're right. Some reference classes containing the Singularity have a 0% success rate, some fare better. I don't assign the Singularity exactly zero credence, and I don't think taw does either.