I think some of our recent arguments against applying the outside view are wrong.
1. In response to taw's post, Eliezer paints the outside view argument against the Singularity thus:
...because experiments show that people could do better at predicting how long it will take them to do their Christmas shopping by asking "How long did it take last time?" instead of trying to visualize the details.
This is an unfair representation. One of the poster-child cases for the outside view (mentioned by Eliezer, no less!) dealt with students trying to estimate completion times for their academic projects. And what is AGI if not a research project? One might say AGI is too large for the analogy to work, but the outside view helpfully tells us that large projects aren't any more immune to failures and schedule overruns :-)
2. In response to my comment claiming that Dennett didn't solve the problem of consciousness "because philosophers don't solve problems", ciphergoth writes:
This "outside view abuse" is getting a little extreme. Next it will tell you that Barack Obama isn't President, because people don't become President.
The outside view may be rephrased as "argument from typicality". If we'd just heard of this random dude named Barack Obama, we'd be perfectly justified in saying he won't become President! That would be the proper analogy to first hearing about Dennett and his work. Another casual application of the outside view corroborates the conclusion: what other problems has Dennett solved? Is the problem of consciousness the first problem he ever solved? Does this seem typical of anything?
3. Technologos attacks taw's post, again, with the following argument:
"beliefs that the future will be just like the past" have a zero success rate.
For each particular highly speculative technology, we can assert with high confidence (let's say 90%) that it won't appear. But this doesn't mean the future will be the same in all respects! The conjunction of many 90%-statements (X won't appear, AND Y won't appear, AND so on) gets assigned the product, a very low confidence, as it should. We're sure that some new technologies will arise; we just don't know which ones. Fusion power? Flying cars? We've been on the fast track to those for some time now, and they still sound less far out than the Singularity! Anyone who's worked with tech for any length of time can recite a looooong list of Real Soon Now technologies that never materialized.
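(A back-of-the-envelope illustration, with made-up numbers: if each of twenty speculative technologies independently has a 90% chance of not appearing, the chance that none of them appears is 0.9^20 ≈ 0.12. Betting against any single one is sound, while betting that the future holds no surprises at all is a losing proposition.)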
4. In response to a pro-outside-view comment by taw, wedrifid snaps:
Choosing a particular outside view on a topic which the poster allegedly 'knows nothing about' would be 'pulling a superficial similarity out of his arse'.
Well, duh. If the red pill doesn't make you offended about your pet project, you aren't taking enough of it :-) The method works with nonzero efficiency as long as we're pattern-matching on relevant traits or any traits causally connected to relevant traits, which means pretty much every superficial similarity gives you nonzero information. And the conjunction rule applies, so the more similar stuff you can find, the better. 'Pulling a similarity out of your arse' isn't something to be ashamed of - it's the whole point of the outside view. Even a superficial similarity is harder to fake, more entangled with reality, more objective than a long chain of reasoning or a credence percentage you came up with. In real-world reasoning, parallel beats sequential.
In conclusion, let's grant the advocates of inside-view, object-level arguments the benefit of the doubt one last time. Conveniently, the handful of people who say we must believe in the Singularity are all doing work in the AGI field. We can gauge exactly how believable their object-level arguments are by examining their past claims about the schedules of their own projects - the perfect case for the inside view if there ever was one... No, I won't spell out the sordid collection of hyperlinks here. Every reader is encouraged to Google on their own for past announcements by Doug Lenat, Ben Goertzel, Eliezer Yudkowsky (those are actually the heroes of the bunch), or other people whom I'm afraid to name at the moment.
Eliezer replies:
You forgot to subscript; I think you meant Eliezer_1998, who had just turned old enough to vote, believed in ontologically basic human-external morality, and was still babbling about Moore's Law in unquestioning imitation of his elders. I really get offended when people compare the two of us.
Growing up on the Internet is like walking around with your baby pictures stapled to your forehead.
I also consider it an extremely basic fallacy, and extremely annoying, to lump together "people who predict AI arriving in 10 years" and "people who predict AI arriving at some unknown point in the future" into the same reference class, so that the previous failure of the former class of predictions is an argument for the failure of the latter class - that is, since some AI scientists have overpromised in the short run, AI must be physically impossible in the long run. After all, it's the same charge of negative affect in both cases, right?
To which I can only reply: after reading your comment like 20 times, I still have no idea what you object to. Your timeframes might have grown more flexible since 1998, but you're still making statements about the future of technology, so my reference class includes you. Personally, I think AI is physically possible, but something else will happen instead, like with flying cars.