eli_sennesh comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
So, to be clear, it is permitted for you to invoke "all-powerful" capability in the AGI, if that particular all-powerful capability allows you to make an outrageous assertion that wins the argument...
But when I ask you to be consistent and take this supposed "all-powerfulness" to its logical conclusion, all of a sudden you want to explain that there might be all kinds of limitations to the all-powerfulness... like, it might not actually be able to do time travel, after all?
Oh dear.
Well, on some level, of course. We're not trying to design something that will be weak and stupid, you know. There's no point in an FAI if you only apply it to tasks a human and a brute computer could handle alone. We damn well intend that it become significantly more powerful than we can contain, because that is how powerful it has to be to fix the problems we intend it to fix and yield the benefits we intend it to yield!