cousin_it comments on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) - Less Wrong
Roko's originally proposed basilisk is not, and never was, the problem in Roko's post. I don't expect it to be part of CEV, and it would be caught by generic procedures meant to prevent CEV from running if 80% of humanity turns out to be selfish bastards, like the Last Jury procedure (as renamed by Bostrom) or extrapolating a weighted donor CEV with a binary veto over the whole procedure.
EDIT: I affirm all of Nesov's answers (that I've seen so far) in the threads below.
wedrifid is right: if you're now counting on failsafes to stop CEV from doing the wrong thing, that means you could apply the same procedures to any other proposed AI, so the real value of your life's work is in the failsafe, not in CEV. What happened to all your clever arguments saying you can't put external chains on an AI? I just don't understand this at all.
Since my name was mentioned, I had better confirm that I generally agree with your point, but I would have left out this sentence:

"What happened to all your clever arguments saying you can't put external chains on an AI?"
I don't disagree with the principle of having a failsafe - and I don't think it is incompatible with the aforementioned clever arguments. But I do agree that "but there is a failsafe" is an utterly abysmal argument for preferring CEV<humanity> over an alternative AI goal system.
Tell me about it. With most people, if they keep asking the same question when the answer is staring them in the face, and then act oblivious as it is told to them repeatedly, I dismiss them in short order as either disingenuous or (possibly selectively) stupid. But, to borrow wisdom from HP:MoR:
Any given FAI design can turn out to be unable to do the right thing, which corresponds to tripping the failsafes, but to be an FAI it must also be potentially capable (for all we know) of doing the right thing. An adequate failsafe should just turn off an ordinary AGI immediately, so failsafes won't work as an AI-in-chains FAI solution. You can't make an AI do the right thing just by adding failsafes; you also need to have a chance of winning.
Affirmed.