Nick_Tarleton comments on Sarah Connor and Existential Risk - Less Wrong

-9 [deleted] 01 May 2011 06:28PM


Comments (77)


Comment author: fubarobfusco 01 May 2011 08:22:55PM 5 points

"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."

But I'll propose a possibly even more scarily cultish idea:

Why attempt to perfect human rationality? Because someone's going to invent uploading sometime. And if the first uploaded person is not sufficiently rational, they will rapidly become Unfriendly AI; but if they are sufficiently rational, then there's a chance they will become Friendly AI.

(The same argument can be used for increasing human compassion, of course. Sufficiently advanced compassion requires rationality, though.)

Comment author: Nick_Tarleton 02 May 2011 02:44:32AM * 9 points

(Tangentially:)

And if the first uploaded person is not sufficiently rational, they will rapidly become Unfriendly AI

"Will" is far too strong. Becoming UFAI at least requires that an upload be given sufficient ability to self-modify (or be sufficiently modified from outside), and that intelligence amplification (IA) up to superintelligence on uploads be not only tractable (likely but not guaranteed) but, if it's going to be the first upload, easy enough that lots more uploads don't get made first. Digital intelligences are not intrinsically, automatically hard takeoff risks, which it sounds like you're modeling them as. (Not to mention that, up to a point, insufficient rationality would make an upload less likely to ever successfully increase its intelligence.)

(That said, there are lots of risks and horrible scenarios involving uploads that don't require strong superintelligence, just subjective speedup or copiability.)