Gabriel comments on Risks from AI and Charitable Giving - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Then you should decompose it like this:
P(FOOM) = P(premise_n | premises_1..n-1) * P(premise_n-1 | premises_1..n-2) * ... * P(premise_2 | premise_1) * P(premise_1)
Then you are measuring precisely the additional probability penalty each premise introduces. And if premise PX implies premise PY, you can throw out PY for simplicity: since PX implies PY, P(PX & PY) = P(PX), so any upper bound on P(PX) is exactly the same upper bound on P(PX & PY). You can't make the bound stronger by reordering, writing P(PX & PY) = P(PY) * P(PX | PY), and then saying 'but PY doesn't imply PX, so there'.
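Both points can be sketched in a few lines of Python. The conditional probabilities and the toy probability space below are made-up illustrations, not anything from the post; the sketch just shows the chain-rule product and the fact that when event X implies event Y (X is a subset of Y), P(X & Y) equals P(X).

```python
from fractions import Fraction
from math import prod

# --- Chain-rule decomposition ---
# Hypothetical conditional probabilities (illustration only):
# P(p1), P(p2 | p1), P(p3 | p1, p2)
conditionals = [Fraction(9, 10), Fraction(4, 5), Fraction(1, 2)]
p_all = prod(conditionals)  # P(p1 & p2 & p3) by the chain rule

# --- Implication collapses the conjunction ---
# Toy probability space: four equally weighted worlds; events are sets.
worlds = {"a": Fraction(1, 4), "b": Fraction(1, 4),
          "c": Fraction(1, 4), "d": Fraction(1, 4)}
PX = {"a"}        # event X
PY = {"a", "b"}   # event Y; X is a subset of Y, i.e. X implies Y

def p(event):
    """Probability of an event = total weight of its worlds."""
    return sum(worlds[w] for w in event)

# Because X implies Y, the conjunction adds no extra penalty:
assert p(PX & PY) == p(PX)
```

The assertion is the whole point: an upper bound on P(PX) already bounds P(PX & PY), so listing PY as a separate conjunct inflates the apparent number of independent penalties without changing the probability.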
Invoking the conjunction fallacy looks disingenuous when the conjuncts have strong dependencies.