khafra comments on How can we ensure that a Friendly AI team will be sane enough? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If you had two such teams, working independently, who came to the same conclusions for the same reasons, that would be at least weak evidence that they're both being rational.
Perhaps it would be best to have as many teams as possible working on different pieces independently, with some form of arithmetic coding operating over their output.
Could you clarify what you mean by "arithmetic coding operating over their output"?
The point of having teams work independently on the same project is that they're unlikely to make exactly the same mistakes. Publishers do this for proofreading: have two proofreaders return error-sets A and B, and estimate the number of uncaught errors as a function of |A\B|, |B\A| and |A∩B|. If A=B, that would be strong evidence that there are no errors left. (But depending on priors, it might not bring P(no errors left) close to 1.) If two proofreaders worked on different parts of the book, you couldn't use the same technique.
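The estimation technique described above is essentially capture-recapture (the Lincoln-Petersen estimator). A minimal sketch, assuming that form of the estimate (the comment doesn't pin down the exact formula):

```python
def estimate_uncaught_errors(a, b):
    """Lincoln-Petersen capture-recapture estimate of errors that
    neither proofreader caught.  a, b: sets of errors each one found."""
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("no overlap between error sets: estimate undefined")
    estimated_total = len(a) * len(b) / overlap  # estimated errors in the book
    return estimated_total - len(a | b)          # estimated errors still uncaught

# Hypothetical error sets: proofreader A found six errors, B found five,
# and three were found by both.
a = {1, 2, 3, 4, 5, 6}
b = {4, 5, 6, 7, 8}
print(estimate_uncaught_errors(a, b))  # 6*5/3 - 8 = 2.0
```

Note that when A = B the estimate of uncaught errors is zero, matching the "strong evidence that there are no errors left" case; and the estimator needs the two readers to cover the same text, which is why the technique fails if they proofread different parts of the book.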
Could the arithmetic coding make checks like this unnecessary?
Unlikely to make exactly the same mistakes, but their mistakes wouldn't be statistically independent either. See "Are N average software versions better than 1 good version?", Hatton 1997.
No, it would just be a more efficient and error-resistant way to do those checks, with overlapping sections of the work, than a straight duplication of effort. Arithmetic coding has a Wikipedia article; error-correcting output coding doesn't seem to, but it's closer to the actual implementation a team of teams could use.
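One way to picture "overlapping sections" without straight duplication (a hypothetical sketch, not the commenter's actual scheme): cover each section with a pair of teams, cycling the pairs so every team overlaps with every other. Disagreements then localize probable errors to specific sections:

```python
from itertools import combinations

def overlapping_assignment(n_sections, teams, cover=2):
    """Assign each section to `cover` distinct teams, cycling through the
    possible team combinations so overlap is spread across all teams."""
    groups = list(combinations(teams, cover))
    return {s: groups[s % len(groups)] for s in range(n_sections)}

def flag_disagreements(assignment, outputs):
    """outputs maps (team, section) -> that team's result for the section.
    Return the sections whose assigned teams disagree; only those need
    a full re-check, rather than re-checking everything."""
    return [s for s, ts in assignment.items()
            if len({outputs[(t, s)] for t in ts}) > 1]

# Hypothetical run: three teams, six sections, two teams per section.
teams = ["T1", "T2", "T3"]
asg = overlapping_assignment(6, teams, cover=2)
outputs = {(t, s): "ok" for s, ts in asg.items() for t in ts}
outputs[(asg[4][0], 4)] = "mistake"       # one team errs on section 4
print(flag_disagreements(asg, outputs))   # [4]
```

This buys less assurance than full duplication (a section is only cross-checked by its own pair), which is where the error-estimation technique from the proofreading example would come in: the observed disagreement rates on the overlaps let you estimate how many errors the scheme is missing.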
(edited because the link to ECOC disappeared)