Armok_GoB comments on Stupid Questions Open Thread - Less Wrong Discussion

42 Post author: Costanza 29 December 2011 11:23PM

Comments (265)

Comment author: Armok_GoB 30 December 2011 03:31:48PM 3 points [-]

How do I stop my brain from going: "I believe P and I believe something that implies not P -> principle of explosion -> all statements are true!" and instead have it go: "I believe P and I believe something that implies not P -> one of my beliefs is incorrect"? It doesn't happen too often, but it'd be nice to have an actual formal refutation for when it does.

Comment author: endoself 30 December 2011 08:39:27PM 4 points [-]

Do you actually do this - "Oh, not P! I must be the pope." - or do you just notice this - "Not P, so everything's true. Where do I go from here?"

If you want to know why you shouldn't do this, it's because you never really learn not P; you just learn evidence against P, which you should update on with Bayes' rule. If you want to understand this process more intuitively (and you've already read the sequences and are still confused), I would recommend this short tutorial or studying belief propagation in Bayesian networks. I don't know a great source for the intuitions behind the latter, but units 3 and 4 of the online Stanford AI class might help.
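[A minimal sketch of the update endoself describes, with made-up numbers: rather than ever setting your belief in P to exactly 0 ("learning not P"), evidence against P shifts the posterior via Bayes' rule. The prior and likelihoods below are hypothetical, chosen only for illustration.]

```python
def bayes_update(prior, likelihood_given_p, likelihood_given_not_p):
    """Return P(P | evidence) via Bayes' rule."""
    numerator = likelihood_given_p * prior
    denominator = numerator + likelihood_given_not_p * (1 - prior)
    return numerator / denominator

prior = 0.9                                # you believed P fairly strongly
posterior = bayes_update(prior, 0.1, 0.8)  # evidence 8x as likely if not P
print(posterior)                           # ~0.53: belief in P drops, but never hits 0
```

Because the posterior is a ratio of strictly positive terms, no finite amount of evidence drives it all the way to 0 or 1, which is exactly why "I believe P" and "I believe not P" never literally coexist as certainties.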

Comment author: Armok_GoB 31 December 2011 12:03:38AM 2 points [-]

I've actually done that class and gotten really good grades.

Looking at it, it seems I have automatic generation of nodes for new statements, and the creation of a new node does not check for an already existing node for its negation.

To complicate matters further, I don't go "I'm the pope" nor "all statements are true." I go "NOT Bayes' theorem, NOT induction, and NOT Occam's razor!"

Comment author: endoself 31 December 2011 04:08:37AM *  1 point [-]

Well, one mathematically right thing to do is to make a new node descending from both other nodes representing E = (P and not P) and then observe not E.

Did you read the first tutorial? Do you find the process of belief-updating on causal nets intuitive, or do you just understand the math? How hard would it be for you to explain why it works in the language of the first tutorial?

Strictly speaking, causal networks only apply to situations where the number of variables does not change, but the intuitions carry over.

Comment author: Armok_GoB 31 December 2011 01:22:35PM 1 point [-]

That's what I try to do; the problem is I end up observing E to be true, and E leads to an "everything" node.

I'm not sure how well I understand the math, but I feel like I probably do...

Comment author: endoself 31 December 2011 07:32:38PM 1 point [-]

You don't observe E to be true, you infer it to be (very likely) true by propagating from P and from not P. You observe it to be false using the law of noncontradiction.

Parsimony suggests that if you think you understand the math, it's because you understand it. Understanding Bayesianism seems easier than fixing a badly-understood flaw in your brain's implementation of it.

Comment author: Armok_GoB 31 December 2011 07:51:58PM 1 point [-]

How can I get this law of noncontradiction? It seems like a useful thing to have.

Comment author: Vladimir_Nesov 30 December 2011 05:15:32PM *  3 points [-]

The reason is that you don't believe anything with logical conviction: if your "axioms" imply absurdity, you discard the "axioms" as untrustworthy, thus refuting the arguments for their usefulness (arguments that always precede any beliefs, if you look for them). Why do I believe this? My brain tells me so, and its reasoning is potentially suspect.

Comment author: Armok_GoB 30 December 2011 05:57:36PM 0 points [-]

I think I've found the problem: I don't have any good intuitive notion of absurdity. The only clear association I have with it is under "absurdity heuristic" as "a thing to ignore".

That is: It's not self evident to me that what it implies IS absurd. After all, it was implied by a chain of logic I grok and can find no flaw in.

Comment author: Vladimir_Nesov 30 December 2011 06:47:35PM 0 points [-]

I used "absurdity" in the technical math sense.

Comment author: orthonormal 30 December 2011 06:32:12PM *  1 point [-]

To the (mostly social) extent that concepts were useful to your ancestors, one is going to lead to better decisions than the other, and so you should expect to have evolved the latter intuition. (You trust two friends, and then one of them tells you the other is lying- you feel some consternation of the first kind, but then you start trying to figure out which one is trustworthy.)

Comment author: Armok_GoB 30 December 2011 06:51:27PM 0 points [-]

It seems a lot of intuitions all humans are supposed to have were overwritten by noise at some point...