I agree -- the answer given in the FAQ isn't a complete and valid response to the critics of the Singularity. But it was never meant to be; it was meant to be "short answers to common questions." The SI's longer responses to critics of the Singularity are mostly in peer-reviewed research; for example, in:
Luke Muehlhauser and Anna Salamon (2012). Intelligence Explosion: Evidence and Import. In The Singularity Hypothesis, Springer. (http://singularity.org/files/IE-EI.pdf)
Carl Shulman and Nick Bostrom (2012). How Hard is Artificial Intelligence? In Journal of Consciousness Studies, Imprint Academic. (http://www.nickbostrom.com/aievolution.pdf)
David Chalmers (2010). The Singularity: A Philosophical Analysis. In Journal of Consciousness Studies 17: 7-65. (http://consc.net/papers/singularity.pdf)
Kaj Sotala (2012). Advantages of Artificial Intelligences, Uploads, and Digital Minds. In International Journal of Machine Consciousness 4(1): 275-291. (http://kajsotala.fi/Papers/DigitalAdvantages.pdf)
Of course, now I feel pretty bad for linking you to several hundred pages of arguments -- which are often overlapping and repetitive, and which still don't represent everything the SI h...
Was the main post edited?
Yes, this version can, for most intents and purposes, be considered an entirely new post. I can imagine your confusion!
The comments seem entirely disconnected from the article.
We could perhaps consider this a rather startling success for those comments. Usually it is only possible to influence future posts and current perceptions.
It might be hard at first to tell the difference, so I'm going to have to use some examples. I'd ask that you try and suspend any emotional reactions you have to the examples I chose and just look at which approach seems more rational.
Bullshit. You aren't providing an example because it is "hard to tell the difference at first". You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.
Contrast this with the Singularity Institute. A skeptic might well ask whether the Singularity is actually going to occur. Well, the SIAI FAQ addresses this, but only to summarily dismiss a couple of objections in a cursory paragraph (one that evades most of the force of the objections). And that's the closest the FAQ gets to any sort of skepticism; the rest of it is just a straight and confident summary that tries to persuade you of SIAI beliefs.
The FAQ on the website is not the place to signal humility and argue against your own conclusions. All that would demonstrate is naivety and incompetence. You are demanding something that should not exist. This isn't to say that there aren't valid cr...
You started with an intent to associate SIAI with self delusion
I see, he must be one of those innately evil enemies of ours, eh?
My current model of aaronsw is something like this: He's a fairly rational person who's a fan of Givewell. He's read about SI and thinks the singularity is woo, but he's self-skeptical enough to start reading SI's website. He finds a question in their FAQ where they fail to address points made by those who disagree, reinforcing the woo impression. At this point he could just say "yeah, they're woo like I thought". But he's heard they run a blog on rationality, so he makes a post pointing out the self-skepticism failure in case there's something he's missing.
The FAQ on the website is not the place to signal humility and argue against your own conclusions.
Why not? I think it's an excellent place to do that. Signalling humility and arguing against your own conclusions is a good way to be taken seriously.
Overall, I thought aaronsw's post had a much higher information-to-accusations ratio than your comment, for whatever that's worth. As criticism goes, his is pretty polite and intelligent.
Also, aaronsw is not the first person I've seen on...
FWIW, I don't think the Singularity Institute is woo and my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.
my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.
I like the way you phrase it (the "lukeprog" charity). Probably true at that.
I agree with your model of aaronsw, and think wedrifid's comments are over the top. But wedrifid is surely dead right about one important thing: aaronsw presented his article as "here is a general point about rationality, and I find that I have to think up some examples so here they are ..." but it's extremely obvious (especially if you look at a few of his other recent articles and comments) that that's simply dishonest: he started with the examples and fitted the general point about rationality around them.
(I have no idea what sort of process would make someone as smart as aaronsw think that was a good approach.)
If you find yourself responding with tu quoque, then it is probably about time you re-evaluated the hypothesis that you are in mind-kill territory.
In this particular context, I think a more appropriate label would be the "Appeal to Come on, gimme a friggen' break!"
The comment he was responding to was quite loaded with connotation, intentionally or not, despite the "mostly true" and "arguably within the realm of likely possibilities" denotations that would make the assertion technically valid.
Being compared, even as a metaphorical hypothesis, to sophistry-flinging rhetoric-centric politicians is just about the most mind-killer-loaded subtext assault you could throw at someone.
I see, he must be one of those innately evil enemies of ours, eh?
I made no such claim. I do claim that the specific quote I was replying to is a transparent falsehood. Do you actually disagree?
Far from being innately evil aaronsw appears to be acting just like any reasonably socially competent human with some debate skills can be expected to act when they wish to persuade people of something. It just so happens that doing so violates norms against bullshit, undesired forms of rhetoric and the use of arguments as soldiers without applying those same arguments to his own position.
Forget "innately evil". In fact, forget about the author entirely. What matters is that the post and the reasoning contained therein is below the standard I would like to see on lesswrong. Posts like it need to be weeded out to make room for better posts. This includes room for better reasoned criticisms of SIAI or lesswrong, if people are sufficiently interested (whether authored by aaronsw or someone new).
...The FAQ on the website is not the place to signal humility and argue against your own conclusions.
Why not? I think it's an excellent place to do that. Signalling humility and arguing ag
He's also someone with an actual track record of achievement. Could we do with some of those on LW?
Some of which are quite dangerous. Either the JSTOR or PACER incidents could have killed any associated small nonprofit with legal bills. (JSTOR's annual revenue is something like 53x that of SIAI.)
As fun as it is to watch Swartz's activities (from a safe distance), I would not want such antics conducted on a website I enjoy reading and would like to see continue.
As fun as it is to watch Swartz's activities (from a safe distance), I would not want such antics conducted on a website I enjoy reading and would like to see continue.
Wait, are you saying this aaronsw is the same guy as the guy currently being (tragically, comically) prosecuted for fraud? That's kinda cool!
I don't think it's fair -- I think it's a bit motivated -- to mention these as mysterious controversies and antics, without also mentioning that his actions could reasonably be interpreted as heroic. I was applauding when I read about the JSTOR incident, and only wish he'd gotten away with downloading the whole thing and distributing it.
I agree they were heroic and good things, and I was disgusted when I looked into JSTOR's financial filings (not that I was happy with the WMF either).
But there's a difference between admiring the first penguin off the ice and noting that this is a good thing to do, and wanting to be that penguin or near enough that penguin that one might fall off as well. And this is especially true for organizations.
Even if so, one should still at least mention, in a debate on character, that the controversy in question just happened to be about an attempted heroic good deed.
You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.
No, I'd love another example to use so that people don't have this kind of emotional reaction. Please suggest one if you have one.
UPDATE: I thought of a better example on the train today and changed it.
Reply to revised OP: Policy debates should not appear one-sided (because there's no systematic reason why good policies should have only positive consequences) but epistemic debates are frequently normatively one-sided (because if your picture of likelihood ratios is at all well-calibrated, your updates should trend in a particular direction and you should almost never find strong evidence on more than one side of an epistemic debate; "strong evidence" by Bayesian definition is just that sort of evidence which we almost never see when the hypothesis is wrong).
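(A sketch of the bound behind that last clause, added for clarity; the notation is mine, not the comment's. Write the likelihood ratio of evidence $e$ as $LR(e) = P(e \mid H)/P(e \mid \neg H)$. Then the probability of observing evidence at least $k$ times more likely under $H$ when $H$ is in fact false satisfies

$$P\bigl(LR(E) \ge k \mid \neg H\bigr) \;=\; \sum_{e \,:\, LR(e) \ge k} P(e \mid \neg H) \;\le\; \sum_{e \,:\, LR(e) \ge k} \frac{P(e \mid H)}{k} \;\le\; \frac{1}{k}.$$

So 20:1 evidence for a false hypothesis turns up at most 5% of the time, which is why a well-calibrated reasoner's updates in an epistemic debate should almost always trend one way.)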
In my own experience, self-skepticism isn't sufficient. It's bloody useful, of course, but it's also an exceptional time sink -- occasionally to the point where I'll forget to actually think of solutions to the problem.
Does anyone have any algorithms they use to balance self-skepticism with actually solving the problem?
Responding to the new version of this article, I'll observe that the intent and competence of the Cochrane vs. Heritage folks seem to suggest:
CC signal intellectual honesty and are willing to sacrifice their impact on the beliefs of the masses, expecting in compensation to have better academic reputations, and perhaps influence a few more rationalists.
Heritage signal "don't oppose us or we'll embarrass you" and "our side is going to win. don't even think about switching" and perhaps "we all understand that politics is war. we'll b...
outside tests showed the new techniques only caused reading scores to go down.
The Feynman article you link says they didn't test them, not that they tested them and failed.
Related posts: The Proper Use of Humility and How To Be More Confident... That You're Wrong.
BTW, what justifies calling self-skepticism "the first principle of rationality"? Feynman called it that, but Eliezer doesn't seem to think that self-skepticism is as important as, for example, having something to protect.
I agree with the general principle that self-skepticism is vital, as a personality trait, for rationalism. However, I don't think that a single written work necessarily gives very much evidence of its author's self-skepticism; it's common to make lots of objections and fail to put them in writing, or to put them in different places. I have also noticed that objections to futurist and singularitarian topics, more so than other things, tend to lead into clouds of ambiguity and uncertainty which cannot be resolved in either direction, which don't suggest such ...
The reason I used the term "Dark Arts" is that your post cleverly generalizes from unknown examples, infers this to be true for an overwhelming majority of cases, and then proposes this as a Fully General Counterargument that any LWer attempting to revise themselves is merely plugging in "self-doubt" while generalizing.
Your argument effectively proves by axiom that there are very few LWers if any actually using real epistemic rationality skills, and by hidden connotation also shows that we're all Gray, and thus because Gray isn't White...
I hope SI will agree that the FAQ answer you linked is inadequate (either overlooking some common objections, or lumping them together dismissively as unspecified obstacles that will be revealed in the future). For example, "building an AI seems hard. no human (even given much longer lifespans) or team of humans will ever be smart enough to build something that leads to an intelligence explosion", or "computing devices that can realistically model an entire human brain (even taking shortcuts on parts that turn out to be irrelevant to intelli...
I hope SI will agree that the FAQ answer you linked is inadequate
As Randaly notes, an FAQ of short answers to common questions is the wrong place to look for in-depth analysis and detailed self-skepticism! Also, the FAQ links directly to papers that do respond in some detail to the objections mentioned.
Another point to make is that SI has enough of a culture of self-skepticism that its current mission (something like "put off the singularity until we can make it go well") is nearly the opposite of its original mission ("make the singularity happen as quickly as possible"). The story of that transition is here.
When Richard Feynman started investigating irrationality in the 1970s, he quickly began to realize the problem wasn't limited to the obvious irrationalists.
Uri Geller claimed he could bend keys with his mind. But was he really any different from the academics who insisted their special techniques could teach children to read? Both failed the crucial scientific test of skeptical experiment: Geller's keys failed to bend in Feynman's hands; outside tests showed the new techniques only caused reading scores to go down.
What mattered was not how smart the people were, or whether they wore lab coats or used long words, but whether they followed what he concluded was the crucial principle of truly scientific thought: "a kind of utter honesty--a kind of leaning over backwards" to prove yourself wrong. In a word: self-skepticism.
As Feynman wrote, "The first principle is that you must not fool yourself -- and you are the easiest person to fool." Our beliefs always seem correct to us -- after all, that's why they're our beliefs -- so we have to work extra-hard to try to prove them wrong. This means constantly looking for ways to test them against reality and to think of reasons our tests might be insufficient.
When I think of the most rational people I know, it's this quality of theirs that's most pronounced. They are constantly trying to prove themselves wrong -- they attack their beliefs with everything they can find and when they run out of weapons they go out and search for more. The result is that by the time I come around, they not only acknowledge all my criticisms but propose several more I hadn't even thought of.
And when I think of the least rational people I know, what's striking is how they do the exact opposite: instead of viciously attacking their beliefs, they try desperately to defend them. They too have responses to all my critiques, but instead of acknowledging and agreeing, they viciously attack my critique so it never touches their precious belief.
Since these two can be hard to distinguish, it's best to look at some examples. The Cochrane Collaboration argues that support from hospital nurses may be helpful in getting people to quit smoking. How do they know that? you might ask. Well, they found this was the result of a meta-analysis of 31 different studies. But maybe they chose a biased selection of studies? Well, they systematically searched "MEDLINE, EMBASE and PsycINFO [along with] hand searching of specialist journals, conference proceedings, and reference lists of previous trials and overviews." But did the studies they picked suffer from selection bias? Well, they searched for that -- along with three other kinds of systematic bias. And so on. But even after all this careful work, they are still only confident enough to conclude that "the results ... support a modest but positive effect ... with caution ... these meta-analysis findings need to be interpreted carefully in light of the methodological limitations".
Compare this to the Heritage Foundation's argument for the bipartisan Wyden–Ryan premium support plan. Their report also discusses lots of objections to the proposal, but confidently knocks down each one: "this analysis relies on two highly implausible assumptions ... All these predictions were dead wrong. ... this perspective completely ignores the history of Medicare" Their conclusion is similarly confident: "The arguments used by opponents of premium support are weak and flawed." Apparently there's just not a single reason to be cautious about their enormous government policy proposal!
Now, of course, the Cochrane authors might be secretly quite confident and the Heritage Foundation might be wringing their hands with self-skepticism behind the scenes. But let's imagine for a moment that these aren't just reports intended to persuade others of a belief, and are instead accurate portrayals of how these two different groups approached the question. Now ask: which style of thinking is more likely to lead the authors to the right answer? Which attitude seems more like Richard Feynman? Which seems more like Uri Geller?