MichaelAnissimov comments on A Prodigy of Refutation - Less Wrong

18 Post author: Eliezer_Yudkowsky 18 September 2008 01:57AM


Comment author: MichaelAnissimov 18 September 2008 07:19:24AM 0 points

So, what if it becomes clear that human intelligence is not enough to implement FAI with the desirable degree of confidence, and transhuman intelligence is necessary? After all, the universe has no special obligation to set the problem up to be humanly achievable.

If so, then instead of coming up with some elaborate weighting scheme like CEV, it'd be easier to pursue IA or have the AI suck the utility function directly out of some human -- the latter being "at least as good" as an IA Singularity.

If programmer X can never be confident that the FAI will actually work, with the threat of a Hell Outcome or Near Miss constantly looming, they might decide that the easiest way out is just to blow everything up.