Viliam_Bur comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: Viliam_Bur 18 May 2012 03:49:56PM 1 point

It may revise its prediction output after considering that the future impact of that output would falsify the original prediction.

Here it is: the process of revision may itself prefer some outputs/futures over others. Inconsistent ones will be iterated away, and more consistent ones will replace them.

A possible future "X happens" will be removed from the report if the Oracle AI realizes that printing the report "X happens" would prevent X from happening (although X might happen in an alternative future where the Oracle AI reports nothing). A possible future "Y happens" will not be removed if the Oracle AI realizes that printing the report "Y happens" really does lead to Y happening. Here a utility function is born: it prefers Y to X.
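A minimal sketch of this filtering, in Python (all names are hypothetical illustrations, not any real Oracle design; `world(report)` stands in for whatever model the Oracle has of its own report's causal impact). It keeps exactly the reports that remain true after being printed:

    # Hypothetical sketch: `world(report)` models what actually happens
    # once a given report is printed. A report survives revision only if
    # printing it does not falsify it.

    def consistent_reports(world, candidates):
        """Keep the candidate reports that stay true after being printed."""
        return [r for r in candidates if world(r) == r]

    # "X happens" is prevented by its own announcement, while "Y happens"
    # is brought about by it, so only Y survives the revision process --
    # the filter effectively "prefers" Y to X.
    world = lambda report: "Y happens" if report == "Y happens" else "something else"
    print(consistent_reports(world, ["X happens", "Y happens"]))  # ['Y happens']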

Comment author: jacob_cannell 18 May 2012 04:00:48PM 0 points

Here a utility function is born: it prefers Y to X.

We can dance around the words "utility" and "prefer", or we can ground them out in math/algorithms.

Take the AIXI formalism, for example. There "utility function" has a specific meaning, as a term in the optimization process. You can remove the utility term so that the algorithm 'prefers' only (probable) futures instead of preferring (useful × probable) futures. This is what we mean by "Oracle AI".
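As a toy caricature of dropping that term (hypothetical two-future example, not real AIXI, which mixes over all computable environments):

    # Each candidate future carries a probability and a utility; removing
    # the utility term changes which future the argmax "prefers".

    futures = [
        {"name": "X happens", "prob": 0.7, "utility": 0.2},
        {"name": "Y happens", "prob": 0.3, "utility": 0.9},
    ]

    def preferred(futures, use_utility):
        score = (lambda f: f["prob"] * f["utility"]) if use_utility else (lambda f: f["prob"])
        return max(futures, key=score)["name"]

    print(preferred(futures, use_utility=True))   # agent AI: useful*probable -> 'Y happens'
    print(preferred(futures, use_utility=False))  # Oracle AI: merely probable -> 'X happens'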