timtyler comments on What I would like the SIAI to publish - Less Wrong
Welcome to humanity. ;-) I enjoy Hanson's writing, but AFAICT, he's not a Bayesian reasoner.
Actually: I used to enjoy his writing more, before I grokked Bayesian reasoning myself. Afterward, too much of what he posts strikes me as really badly reasoned, even when I basically agree with his opinion!
I similarly found Seth Roberts' blog much less compelling than I did before (again, despite often sharing similar opinions), so it's not just Hanson whom I find to be reasoning less well, post-Bayes.
(When I first joined LW, I saw posts that were disparaging of Seth Roberts, and I didn't get what they were talking about, until after I understood what "privileging the hypothesis" really means, among other LW-isms.)
See, that's a perfect example of a "la la la I can't hear you" argument. You're essentially claiming that you're not a human being -- an extraordinary claim, requiring extraordinary proof.
Simply knowing about biases does very nearly zero for your ability to overcome them, or to spot them in yourself (vs. spotting them in others, where it's easy to do all day long).
Since you said "many", I'll say that I agree with you that that is possible. In principle, it could be possible for me as well, but...
To be clear on my own position: I am an FAI skeptic, in the sense that I have a great many doubts about its feasibility -- too many to present or argue here. All I'm saying in this discussion is that to believe AI is dangerous, one need only believe that humans are terminally stupid, and there is more than ample evidence for that proposition. ;-)
Also, more relevant to the issue of emotional bias: I don't primarily identify as an LW-ite; in fact I think that a substantial portion of the LW community has its head up its ass in overvaluing epistemic (vs. instrumental) rationality, and that many people here are emulating a level of reasoning they don't personally comprehend... and before I understood the reasoning myself, I thought the entire thing was a cult of personality, and wondered why everybody was making such a religious-sounding fuss over a minor bit of mathematics used for spam filtering. ;-)
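(The "minor bit of mathematics" here is Bayes' theorem, which a spam filter applies, roughly, one word at a time: P(spam | word) = P(word | spam) P(spam) / [P(word | spam) P(spam) + P(word | not-spam) P(not-spam)] -- that is, a message gets judged more likely to be spam the more its words show up in spam relative to legitimate mail.)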
My take is that before the debate, I was wary of AI dangers, but skeptical of fooming. Afterward, I was convinced fooming was nearly inevitable, given the ability to create a decent AI using a reasonably small amount of computing resources.
And a big part of that convincing was that Robin never seemed to engage with any of Eliezer's arguments, and instead either attacked Eliezer or said, "but look, other things happen this other way".
It seems to me that it'd be hard to do a worse job of convincing people of the anti-foom position, without being an idiot or a troll.
That is, AFAICT, Robin argued the way a lawyer argues when they know their client is guilty: pounding on the facts when the law is against them, pounding on the law when the facts are against them, and pounding on the table when both the facts and the law are against them.
Yep.
I'm curious what stronger assertion you think is necessary. I would personally add, "Humans are bad at programming, no nontrivial program is bug-free, and an AI is a nontrivial program", but I don't think there's a lack of evidence for any of these propositions. ;-)
[Edited to add the "given" qualification on "nearly inevitable", as that's been a background assumption I may not have made clear in my position on this thread.]
I looked briefly at the evidence for that. Most of it seemed to be from the so-called "self-serving bias" - which looks like an adaptive signalling system to me - and so is not really much of a "bias" at all.
People are unlikely to change existing adaptive behaviour just because someone points it out and says it is a form of "bias". The more obvious thing to do is to conclude that they don't know what they are talking about - or that they are trying to manipulate you.