pjeby comments on What I would like the SIAI to publish - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Welcome to humanity. ;-) I enjoy Hanson's writing, but AFAICT, he's not a Bayesian reasoner.
Actually: I used to enjoy his writing more, before I grokked Bayesian reasoning myself. Afterward, too much of what he posts strikes me as really badly reasoned, even when I basically agree with his opinion!
I similarly found Seth Roberts' blog much less compelling than I did before (again, despite often sharing similar opinions), so it's not just him that I find to be reasoning less well, post-Bayes.
(When I first joined LW, I saw posts that were disparaging of Seth Roberts, and I didn't get what they were talking about, until after I understood what "privileging the hypothesis" really means, among other LW-isms.)
See, that's a perfect example of a "la la la I can't hear you" argument. You're essentially claiming that you're not a human being -- an extraordinary claim, requiring extraordinary proof.
Simply knowing about biases does very nearly zero for your ability to overcome them, or to spot them in yourself (vs. spotting them in others, where it's easy to do all day long).
Since you said "many", I'll say that I agree with you that that is possible. In principle, it could be possible for me as well, but...
To be clear on my own position: I am a FAI skeptic, in the sense that I have a great many doubts about its feasibility -- too many to present or argue here. All I'm saying in this discussion is that to believe AI is dangerous, one need only believe that humans are terminally stupid, and there is more than ample evidence for that proposition. ;-)
Also, more relevant to the issue of emotional bias: I don't primarily identify as an LW-ite; in fact I think that a substantial portion of the LW community has its head up its ass in overvaluing epistemic (vs. instrumental) rationality, and that many people here are emulating a level of reasoning they don't personally comprehend... and before I understood the reasoning myself, I thought the entire thing was a cult of personality, and wondered why everybody was making such a religious-sounding fuss over a minor bit of mathematics used for spam filtering. ;-)
My take is that before the debate, I was wary of AI dangers, but skeptical of fooming. Afterward, I was convinced fooming was near inevitable, given the ability to create a decent AI using a reasonably small amount of computing resources.
And a big part of that convincing was that Robin never seemed to engage with any of Eliezer's arguments, and instead either attacked Eliezer or said, "but look, other things happen this other way".
It seems to me that it'd be hard to do a worse job of convincing people of the anti-foom position, without being an idiot or a troll.
That is, AFAICT, Robin argued the way a lawyer argues when they know the client is guilty: pounding on the facts when the law is against them, pounding on the law when the facts are against them, and pounding on the table when the facts and the law are both against them.
Yep.
I'm curious what stronger assertion you think is necessary. I would personally add, "Humans are bad at programming, no nontrivial program is bug-free, and an AI is a nontrivial program", but I don't think there's a lack of evidence for any of these propositions. ;-)
[Edited to add the "given" qualification on "nearly inevitable", as that's been a background assumption I may not have made clear in my position on this thread.]
I looked briefly at the evidence for that. Most of it seemed to be from the so-called "self-serving bias" - which looks like an adaptive signalling system to me - and so is not really much of a "bias" at all.
People are unlikely to change existing adaptive behaviour just because someone points it out and says it is a form of "bias". The more obvious thing to do is to conclude that they don't know what they are talking about -- or that they are trying to manipulate you.
PJ, I'd love to drag you off topic slightly and ask you about this:
What is it that you now understand, that you didn't before?
That is annoyingly difficult to describe. Of central importance, I think, is the notion of privileging the hypothesis, and what that really means. Why what we naively consider "evidence" for a position, really isn't.
ISTM that this is the core of grasping Bayesianism: not understanding what reasoning is, so much as understanding why what we all naively think is reasoning and evidence, usually isn't.
That hasn't really helped... would you try again?
(What does privileging the hypothesis really mean? and why is reasoning and evidence usually ... not?)
Have you come across the post by that name? Without reading that it may be hard to reverse engineer the meaning from the jargon.
The intro gives a solid intuitive description:
That is privileging the hypothesis. When you start looking for evidence and taking an idea seriously when you have no good reason to consider it instead of countless others that are just as likely.
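The quoted idea can be sketched numerically (a hypothetical toy model; the suspect names and probabilities are invented for illustration): with many equally plausible hypotheses, singling one out and finding evidence "consistent" with it does nothing to raise it above the rest.

```python
# Privileging the hypothesis, as a toy Bayesian update over 100 suspects.

def posterior(priors, likelihoods):
    """Bayes' rule over a finite hypothesis space."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

suspects = [f"suspect_{i}" for i in range(100)]
priors = {s: 1 / 100 for s in suspects}

# "Suspect_0 was in town that day" -- but so was everyone else:
# the observation is equally likely under every hypothesis.
likelihoods = {s: 0.9 for s in suspects}

post = posterior(priors, likelihoods)
print(post["suspect_0"])  # still ~0.01 -- no suspect has been favored
```

Gathering more evidence of this kind never moves suspect_0 above 1-in-100; the investigation was lost at the moment one suspect was picked out without a likelihood-based reason.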
I have come across that post, and the story of the murder investigation, and I have an understanding of what the term means.
The obvious answer to the murder quote is that you look harder for evidence around the crime scene, and go where the evidence leads, and there only. The more realistic answer is that you look for recent similar murders, for people who had a grudge against the dead person, for criminals known to commit murder in that city... and use those to progress the investigation, because those are useful places to start.
I'm wondering what pjeby has realised, which turns this naive yet straightforward understanding into wrongthought worth commenting on.
If evidence is not facts which reveal some result-options to be more likely true and others less likely true, then what is it?
Consider a hypothesis, H1. If a piece of evidence E1 is consistent with H1, the naive interpretation is that E1 is an argument in favor of H1.
In truth, this isn't an argument in favor of H1 -- it's merely the absence of an argument against H1.
That, in a nutshell, is the difference between Bayesian reasoning and naive argumentation -- also known as "confirmation bias".
To really support H1, you need to show that E1 would be unlikely under H2, H3, etc., and you need to look for disconfirmations D1, D2, etc. that would invalidate H1, to make sure they're not there.
Before I really grokked Bayesianism, the above all made logical sense to me, but it didn't seem as important as Eliezer claimed. It seemed like just another degree of rigor, rather than reasoning of a different quality.
Now that I "get it", the other sort of evidence seems more-obviously inadequate -- not just lower-quality evidence, but non-evidence.
ISTM that this is a good way to test at least one level of how well you grasp Bayes: does simple supporting evidence still feel like evidence to you? If so, you probably haven't "gotten" it yet.
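The point about supporting evidence can be made concrete (a minimal sketch with illustrative numbers, not anything from the original discussion): belief only moves insofar as the evidence is *more* likely under H1 than under the alternatives. "E is consistent with H1" fixes P(E|H1) alone, and that is not enough.

```python
# Bayesian update for H1 against its complement.

def update(prior_h1, p_e_given_h1, p_e_given_not_h1):
    """Posterior P(H1|E) via Bayes' rule with a binary partition."""
    num = prior_h1 * p_e_given_h1
    den = num + (1 - prior_h1) * p_e_given_not_h1
    return num / den

# E1 is consistent with H1, but just as likely if H1 is false:
print(update(0.5, 0.8, 0.8))  # 0.5 -- no update; non-evidence

# E2 is consistent with H1 AND unlikely otherwise:
print(update(0.5, 0.8, 0.1))  # ~0.889 -- genuine evidence
```

The likelihood ratio P(E|H1) / P(E|not-H1) is the whole story: when it equals 1, "consistency" with H1 carries zero information, which is exactly the naive-evidence failure described above.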
That is from 'You can’t prove the null by not rejecting it'.
That isn't a wrongthought. Factors like you mention here are all good reason to assign credence to a hypothesis.
Yes, no, maybe... that is exactly what it is! An example of an error would be having some preferred opinion and then finding all the evidence that supports that particular opinion. Or, say, encountering a piece of evidence and noticing that it supports your favourite position but neglecting that it supports positions X, Y and Z just as well.
I don't believe it's a meaningful property (as used in this context), and you would do well to taboo it (possibly, to convince me it's actually meaningful).
True enough; it would be more precise to say that he argues positions based on evidence which can also support other positions, and therefore isn't convincing evidence to a Bayesian.
What do you mean? Evidence can't support both sides of an argument, so how can one inappropriately use such impossible evidence?
What do you mean, "both"?
It would be a mistake to assume that PJ was limiting his evaluation to positions selected from one of those 'both sides' of a clear dichotomy. Particularly since PJ has just been emphasizing the relevance of 'privileging the hypothesis' to Bayesian reasoning, and also said 'other positions', plural. This being the case, no 'impossible evidence' is involved.
I see. But in that case, there is no problem with use of such evidence.
That's true. I believe that PJ was commenting on how such evidence is used. In this context that means PJ would require that the evidence be weighed against all competing hypotheses, rather than marshalled only for a chosen position. The difference between a 'Traditional Rationalist' debater and a (non-existent, idealized) unbiased Bayesian.