Viliam_Bur comments on Thoughts on the Singularity Institute (SI) - Less Wrong
Here it is. The process of revision may itself prefer some outputs/futures over others. Inconsistent ones will be iterated away, and the more consistent ones will replace them.
A possible future "X happens" will be removed from the report if the Oracle AI realizes that printing the report "X happens" would prevent X from happening (even though X might happen in an alternative future where the Oracle AI reports nothing). A possible future "Y happens" will stay in the report if the Oracle AI realizes that printing the report "Y happens" really does lead to Y happening. Here a utility function is born: it prefers Y to X.
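The revision process above can be sketched as a toy filter (all names and the `world` dynamics are hypothetical, purely for illustration): candidate predictions whose own announcement would falsify them get dropped, and only self-fulfilling ones survive.

```python
def world(report):
    """Toy world dynamics: what actually happens, given what the oracle prints.
    Reporting "crash" makes people act to avert it; reporting "boom" is
    self-fulfilling; any other report leads to "crash"."""
    if report == "boom":
        return "boom"
    return "boom" if report == "crash" else "crash"

def revise(candidates, world):
    """Keep only self-consistent predictions: futures that still occur
    when the oracle reports them. Inconsistent ones are iterated away."""
    return [f for f in candidates if world(f) == f]

print(revise(["crash", "boom"], world))  # -> ['boom']
```

Here "crash" (the X of the example) is removed because reporting it prevents it, while "boom" (the Y) survives because reporting it brings it about; the surviving set is exactly the implicit preference.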
We can dance around the words "utility" and "prefer", or we can ground them out in math/algorithms.
Take the AIXI formalism for example. "Utility function" has a specific meaning as a term in the optimization process. You can remove the utility term so that the algorithm 'prefers' only (probable) futures, instead of 'preferring' (useful × probable) futures. This is what we mean by "Oracle AI".
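A minimal sketch of that distinction (a hypothetical toy, not the actual AIXI equations): the same argmax over candidate futures, with and without the utility term in the objective.

```python
# Each candidate future is (label, probability, utility); values are made up.
futures = [("X", 0.6, 0.0), ("Y", 0.4, 1.0)]

def agent_pick(futures):
    # Full optimizer: prefers (useful * probable) futures.
    return max(futures, key=lambda f: f[1] * f[2])[0]

def oracle_pick(futures):
    # Utility term removed: 'prefers' only (probable) futures.
    return max(futures, key=lambda f: f[1])[0]

print(agent_pick(futures))   # -> 'Y'  (0.4 * 1.0 beats 0.6 * 0.0)
print(oracle_pick(futures))  # -> 'X'  (0.6 beats 0.4)
```

Deleting one factor from the objective is all that separates the two: the "oracle" version reports the most probable future regardless of how desirable it is.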