SaidAchmiz comments on No Universally Compelling Arguments in Math or Science - Less Wrong

Post author: ChrisHallquist 05 November 2013 03:32AM




Comment author: SaidAchmiz 05 November 2013 06:51:44PM 0 points

So... what if you try to build a rational/persuadable AGI, but fail, because building an AGI is hard and complicated?

This idea that because AI researchers are aiming for the rational/persuadable chunk of mindspace, they will therefore of course hit their target, seems to me absurd on its face. The entire point is that we don't know exactly how to build an AGI with the precise properties we want it to have, and AGIs with properties different from the ones we want it to have will possibly kill us.

Comment author: TheAncientGeek 05 November 2013 06:54:39PM 0 points

> So... what if you try to build a rational/persuadable AGI, but fail, because building an AGI is hard and complicated?

What if you try to hardwire in friendliness and fail? Out of the two, the latter seems more brittle to me -- if it fails, it'll fail hard. A merely irrational AI would be about as dangerous as David Icke.

> This idea that because AI researchers are aiming for the rational/persuadable chunk of mindspace, they will therefore of course hit their target, seems to me absurd on its face.

If you phrase it, as I didn't, in terms of necessity, yes. The actual point was that our probability of hitting a point in mindspace will be heavily weighted by what we are trying to do, and how we are doing it. An unweighted mindspace may be populated with many Lovecraftian horrors, but that theoretical possibility is no more significant than p-zombies.

> AGIs with properties different from the ones we want it to have will possibly kill us.

Possibly, but with low probability, is a Pascal's Mugging. MIRI needs significant probability.

Comment author: SaidAchmiz 05 November 2013 07:14:44PM 0 points

I see. Well, that reduces to the earlier argument, and I refer you to the mounds of stuff that Eliezer et al. have written on this topic. (If you've read it and are unsatisfied, well, that is in any case a different topic.)

Comment author: TheAncientGeek 05 November 2013 07:16:57PM -2 points

I refer you to the many unanswered objections.