XiXiDu comments on Risks from AI and Charitable Giving - Less Wrong

2 Post author: XiXiDu 13 March 2012 01:54PM




Comment author: XiXiDu 14 March 2012 09:48:15AM * 1 point

It wasn't my intention to double-count probabilities; insofar as what I wrote suggested that, I was simply wrong. My intention was to show that risks from AI are less likely than their logical possibility and less likely than their physical possibility: there are various premises that all need to be true, and each one introduces an additional probability penalty.

Comment author: Gabriel 14 March 2012 11:19:27AM * 3 points

Then you should decompose it like this:

P(FOOM) = P(premise_n | premises_1..n-1) * P(premise_n-1 | premises_1..n-2) * ... * P(premise_2 | premise_1) * P(premise_1)

That way you're measuring precisely the additional probability penalty each premise introduces. And if premise PX implies premise PY, you throw out PY for simplicity: any upper bound you can give on P(PX) is automatically the same upper bound on P(PX & PY). You can't make the bound stronger by reordering and writing P(PX & PY) = P(PY) * P(PX | PY) and then saying 'but PY doesn't imply PX, so there'.
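The decomposition above can be sketched numerically. The conditional probabilities here are made-up illustrative numbers, not estimates of anything; the point is only that the conjunction's probability is the product of the conditionals, and that an implied premise adds no further penalty:

```python
# Chain rule for a conjunction of premises:
# P(premise_1 & ... & premise_n)
#   = P(premise_1) * P(premise_2 | premise_1) * ... * P(premise_n | premises_1..n-1)
# Hypothetical conditional probabilities, one per premise:
conditionals = [0.9, 0.8, 0.95, 0.7]  # P(premise_k | all earlier premises)

p_conjunction = 1.0
for p in conditionals:
    p_conjunction *= p
print(p_conjunction)  # product of the conditionals, ~0.4788

# If premise PX implies premise PY, then PX & PY is the same event as PX,
# so an upper bound on P(PX) is already an upper bound on P(PX & PY):
p_px_upper = 0.3          # some upper bound on P(PX)
p_px_and_py_upper = p_px_upper  # PX implies PY, so the conjunct PY adds no penalty
assert p_px_and_py_upper <= p_px_upper
```

Reordering the product changes which factors are conditionals, but never the product itself, which is why shuffling the premises can't tighten the bound.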

Invoking the conjunction fallacy looks disingenuous when the conjuncts have strong dependencies.