TheAncientGeek comments on Snowdenizing UFAI - Less Wrong

5 Post author: JoshuaFox 05 December 2013 02:42PM




Comment author: TheAncientGeek 10 December 2013 05:54:45PM 1 point [-]

It's not strong in the sense of reducing the likelihood of uFAI to 0. It's strong enough to disprove a confident "will be unfriendly". Note that the combination of low likelihood and high impact (and asking for money to solve the problem) is a Pascal's mugging.

Comment author: JoshuaZ 10 December 2013 07:19:10PM 0 points [-]

So how low a likelihood do you need before it is a Pascal's Mugging? 70%? 50%? 10%? 1%? Something lower?

Comment author: TheAncientGeek 10 December 2013 07:43:13PM 0 points [-]

That's not my problem. It's MIRI's problem to argue that the likelihood is above their threshold.

Comment author: linkhyrule5 17 December 2013 12:09:37PM 1 point [-]

... nnnot if your goal is "find out whether or not AI existential risk is a problem," and not "win an argument with MIRI".

Comment author: JoshuaZ 11 December 2013 12:51:19AM 0 points [-]

You've argued that this is a Pascal's mugging. So where do you set that threshold?

Comment author: TheAncientGeek 11 December 2013 09:45:31AM 0 points [-]

I argue that a sufficiently low likelihood is a Pascal's mugging, by MIRI's own definition, so MIRI needs to show the likelihood is above that threshold.

Comment author: JoshuaZ 11 December 2013 03:07:46PM *  0 points [-]

I fail to follow that logic. There's not some magic opinion associated with MIRI that's relevant to this claim. MIRI's existence, and its opinions on how to approach this, don't alter at all whether or not this is an existential threat that needs to be taken seriously, or whether the orthogonality thesis is plausible, or any of the other issues. That's an example of the genetic fallacy.

Comment author: TheAncientGeek 11 December 2013 04:16:06PM 0 points [-]

Whether or not anyone should believe that this is an existential threat that needs to be taken seriously depends on whether or not the claim can be justified, and only MIRI is making this specific version of the claim. You are trying to argue "never mind the justification, look at the truth", but truth is not knowable except by justifying claims. If MIRI/LW is making a kind of claim, a Pascal's mugging (as defined by MIRI/LW), that MIRI/LW separately maintains is not a kind of claim that should be believed, then MIRI/LW is making incoherent claims (like "Don't believe holy books, but believe the Bible").