eli_sennesh comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong

Post author: Richard_Loosemore 05 May 2015 02:46AM




Comment author: Richard_Loosemore 11 May 2015 11:42:49PM -2 points

So, to be clear, it is permitted for you to invoke "all-powerful" capability in the AGI, if that particular all-powerful capability allows you to make an outrageous assertion that wins the argument....

But when I ask you to be consistent and take this supposed "all-powerfulness" to its logical conclusion, all of a sudden you want to explain that there might be all kinds of limitations to the all-powerfulness .... like, it might not actually be able to do time travel, after all?

Oh dear.

Comment author: [deleted] 16 May 2015 12:47:24AM 1 point

> So, to be clear, it is permitted for you to invoke "all-powerful" capability in the AGI, if that particular all-powerful capability allows you to make an outrageous assertion that wins the argument....

Well, on some level, of course. We're not trying to design something that will be weak and stupid, you know. There's no point in an FAI if you only apply it to tasks that a human and a brute-force computer could handle alone. We damn well intend that it become significantly more powerful than we can contain, because that is how powerful it has to be to fix the problems we intend it to fix and yield the benefits we intend it to yield!