gwern comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM


Comment author: gwern 13 March 2012 09:21:25PM (24 points)

The point is that P2 does not imply P3, yet P2 has to be true in the first place.

By counting premises which are subsumed or implied by other premises, you are engaging in one of the more effective ways to bias a conjunctive or necessary-condition analysis: you increase the number of premises and double-count probabilities. By the conjunction rule, this usually decreases the final probability.

I've pointed out before that use of the conjunction approach can yield arbitrarily small probabilities based on how many conjuncts one wishes to include.

For example, suppose I argue on methodological grounds that one could never have greater than 99% confidence in a theory, and therefore that no premise may be assigned more than 0.99. A theory of 2 conjuncts then has a maximum confidence of 0.99^2 ≈ 98%, and I can knock it down to 94% solely by splitting each conjunct into 3 premises ('this premise conceals a great deal of complexity; let us estimate it by taking a closer look at 3 equivalent but finer-grained propositions...') and claiming each is at most 99%, since 0.99^6 ≈ 0.941.

With your 5 premises, that'd start at 0.99^5 ≈ 95%, and I can knock it down to 0.99^10 ≈ 90% by splitting each premise in two, which I could do very easily and have already implied in my first criticism about the self-improvement premise.
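The arithmetic above is easy to check; the 0.99 cap and the premise counts are the ones used in this comment:

```python
# Cap every conjunct at 0.99 and multiply: the product shrinks purely
# as a function of how finely the same premises are split.
CAP = 0.99

print(round(CAP ** 2, 3))   # 2 conjuncts: 0.98
print(round(CAP ** 6, 3))   # each split into 3: 0.941
print(round(CAP ** 5, 3))   # 5 premises: 0.951
print(round(CAP ** 10, 3))  # each split in two: 0.904
```

Nothing about the theory changed between the paired lines; only the number of conjuncts did.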

You can do this quite easily with cryonics as well. One attempt I saw included premises like transportation to the hospital, and used no probability greater than 99%! Needless to say, the person concluded cryonics was a ludicrously bad idea.

It's a strange kind of analysis that only allows the final probability to get smaller and smaller and smaller...

(Obviously, since putting upper bounds on probabilities violates Cox's theorems, this procedure lets us get Dutch-booked.)

Comment author: XiXiDu 14 March 2012 09:48:15AM (1 point)

It wasn't my intention to double-count probabilities. Insofar as what I wrote suggested that, I am simply wrong. My intention was to show that risks from AI are not as probable as their mere logical possibility, or even their physical possibility, would suggest: various premises need to be true, and each one introduces an additional probability penalty.

Comment author: Gabriel 14 March 2012 11:19:27AM (3 points)

Then you should decompose it like this:

P(FOOM) = P(premise_1) * P(premise_2 | premise_1) * ... * P(premise_n | premise_1, ..., premise_(n-1))

Then you're precisely measuring the additional probability penalty each premise introduces. And if premise PX implies premise PY, you throw out PY for simplicity: any upper bound you can give on P(PX) is then exactly an upper bound on P(PX & PY). You can't make the bound stronger by reordering, writing P(PX & PY) = P(PY) * P(PX | PY), and then saying 'but PY doesn't imply PX, so there'.
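A toy joint distribution makes this concrete (the numbers here are illustrative, not from the thread). When PX implies PY, the conjunction P(PX & PY) collapses to P(PX), so PY adds no extra penalty, while multiplying the two marginals as if they were independent double-counts it:

```python
from fractions import Fraction

# Joint distribution over (PX, PY); PX implies PY by construction,
# since the world (PX=True, PY=False) is simply absent (probability 0).
worlds = {
    (True,  True):  Fraction(3, 10),
    (False, True):  Fraction(2, 10),
    (False, False): Fraction(5, 10),
}

p_x  = sum(p for (x, _), p in worlds.items() if x)        # P(PX)      = 3/10
p_y  = sum(p for (_, y), p in worlds.items() if y)        # P(PY)      = 1/2
p_xy = sum(p for (x, y), p in worlds.items() if x and y)  # P(PX & PY) = 3/10

print(p_xy == p_x)         # True: the conjunction costs nothing beyond P(PX)
print(p_x * p_y)           # 3/20, the misleading "independent" product
print(p_y * (p_xy / p_y))  # 3/10, the chain rule recovers the exact value
```

The chain-rule decomposition always gives back the true joint probability; only the independence shortcut inflates the penalty.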

Talking about the conjunction fallacy looks disingenuous when the conjuncts have strong dependencies.