Gabriel comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM




Comment author: Gabriel 14 March 2012 11:19:27AM

Then you should decompose it like this:

P(FOOM) = P(premise_1) * P(premise_2 | premise_1) * ... * P(premise_n-1 | premise_1, ..., premise_n-2) * P(premise_n | premise_1, ..., premise_n-1)
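As a toy sketch of the decomposition (not from the comment itself), the chain rule just multiplies each premise's conditional probability given all earlier premises. The four numbers below are invented purely for illustration:

```python
# Hypothetical conditional probabilities for a chain of premises:
# P(p1), P(p2 | p1), P(p3 | p1, p2), P(p4 | p1, p2, p3).
conditional_probs = [0.9, 0.7, 0.5, 0.4]

# Chain rule: the joint probability of all premises is the product
# of the conditional probabilities, so each factor shows exactly
# how much probability penalty that premise adds.
p_foom = 1.0
for p in conditional_probs:
    p_foom *= p

print(p_foom)  # product of the four factors, approximately 0.126
```

Each factor less than 1 shrinks the product, which is the sense in which every non-redundant premise introduces an explicit, measurable penalty.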

That way you measure precisely the additional probability penalty each premise introduces. And if premise PX implies premise PY, you can throw out PY for simplicity: any upper bound on P(PX) is exactly the same upper bound on P(PX & PY), since P(PX & PY) = P(PX). You can't make the bound stronger by reordering and writing P(PX & PY) = P(PY) * P(PX | PY) and then saying 'but PY doesn't imply PX, so there'.
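The implication point can be checked by brute force on a toy probability space (my example, not the comment's): when PX implies PY, the conjunct PY is redundant and P(PX & PY) equals P(PX) exactly.

```python
from fractions import Fraction
from itertools import product

# Toy world: two fair coin flips, uniform over the four outcomes.
worlds = list(product(["H", "T"], repeat=2))
prob = Fraction(1, len(worlds))

# PX = "both flips are heads", PY = "the first flip is heads".
# PX implies PY, so conjoining PY adds no extra probability penalty.
PX = lambda w: w == ("H", "H")
PY = lambda w: w[0] == "H"

p_px = sum(prob for w in worlds if PX(w))
p_px_and_py = sum(prob for w in worlds if PX(w) and PY(w))

assert p_px == p_px_and_py == Fraction(1, 4)
```

Because PX picks out a subset of the worlds where PY holds, the conjunction selects the same set of worlds as PX alone, so no reordering of the factors can tighten the bound.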

Invoking the conjunction fallacy looks disingenuous when the conjuncts have strong dependencies.