jacob_cannell comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM

Comment author: jacob_cannell 18 May 2012 03:42:54PM *  1 point

How exactly does an Oracle AI predict its own output, before that output is completed?

Iterative search, which you have more or less worked out in your post. Take a chess algorithm, for example: the future of the board depends on the algorithm's outputs. In this case the Oracle AI doesn't rank the future states; it is concerned only with predictive accuracy. It may revise its prediction output after considering that the future impact of that output would falsify the original prediction.

This is still not a utility function, because utility implies a ranking over futures above and beyond likelihood.

To avoid being stuck in such loops, we could make the Oracle AI examine all its possible outputs, until it finds one where the future after having reported R really becomes R (or until humans hit the "Cancel" button on this task).

Or in this example, the AI could output some summary of the iteration history it is able to compute in the time allowed.
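The loop described above can be sketched in a few lines. Here `world_model` is a hypothetical stand-in for the Oracle's predictor: it maps "the report the Oracle emits" to "the future that then unfolds". The function iterates until reporting R really leads to R, or gives up after a bounded number of revisions (the "Cancel" button), returning the iteration history either way.

```python
def find_consistent_report(world_model, initial_report, max_iters=100):
    """Search for a self-consistent prediction: a report R such that
    the future after having reported R really becomes R. Give up after
    max_iters (the human 'Cancel' button) and return the history."""
    report = initial_report
    history = [report]
    for _ in range(max_iters):
        outcome = world_model(report)   # what happens if we report this?
        if outcome == report:           # fixed point: printing R yields R
            return report, history
        report = outcome                # revise the prediction and retry
        history.append(report)
    return None, history                # no fixed point found in time

# Toy world (an invented example): announcing a crash averts it and
# announcing "no crash" causes one, while "stable" is self-fulfilling.
def toy_world(report):
    return {"crash": "no crash", "no crash": "crash"}.get(report, "stable")

find_consistent_report(toy_world, "stable")   # converges immediately
find_consistent_report(toy_world, "crash")    # oscillates, returns None
```

Seeded with "crash" the search cycles forever between the two self-falsifying reports, which is exactly the loop the quoted passage worries about; the returned history is the "summary of the iteration history" mentioned above.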

Comment author: Viliam_Bur 18 May 2012 03:49:56PM 1 point

It may revise its prediction output after considering that the future impact of that output would falsify the original prediction.

Here it is. The process of revision may itself prefer some outputs/futures over other outputs/futures. Inconsistent ones will be iterated away, and the more consistent ones will replace them.

A possible future "X happens" will be removed from the report if the Oracle AI realizes that printing a report "X happens" would prevent X from happening (although X might happen in an alternative future where Oracle AI does not report anything). A possible future "Y happens" will not be removed from the report if the Oracle AI realizes that printing a report "Y happens" really leads to Y happening. Here is a utility function born: it prefers Y to X.
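The selection effect described here fits in one line of code. This toy sketch (all names are invented for illustration) keeps only the futures that survive their own announcement; the filter itself then acts as a de facto preference for self-fulfilling prophecies like Y over self-defeating ones like X.

```python
def consistent_reports(candidates, world_model):
    """Keep only futures that still happen after being reported."""
    return [f for f in candidates if world_model(f) == f]

# Invented example: announcing a panic run on a bank is the kind of
# report that prevents itself; announcing solvency reinforces itself.
def world(report):
    if report in ("panic run", "bank stays solvent"):
        return "bank stays solvent"   # either report leads to solvency
    return report

consistent_reports(["panic run", "bank stays solvent"], world)
# only the self-fulfilling report survives the filter
```

Nothing in `consistent_reports` mentions utility, yet its output systematically favors one class of futures over another, which is the point being made.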

Comment author: jacob_cannell 18 May 2012 04:00:48PM 0 points

Here is a utility function born: it prefers Y to X.

We can dance around the words "utility" and "prefer", or we can ground them in math/algorithms.

Take the AIXI formalism, for example. "Utility function" has a specific meaning as a term in the optimization process. You can remove the utility term so the algorithm 'prefers' only (probable) futures instead of 'preferring' (useful*probable) futures. This is what we mean by "Oracle AI".
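The contrast can be made concrete with a stripped-down expectimax sketch. The distributions below are made-up toys, not the actual AIXI equations (which use a Solomonoff-style mixture over environments); the point is only the structural difference between scoring futures by useful*probable versus by probable alone.

```python
def best_action(actions, predict, utility):
    """Agent with a utility term: argmax_a of sum_f utility(f) * P(f | a)."""
    return max(actions, key=lambda a: sum(utility(f) * p
                                          for f, p in predict(a).items()))

def most_probable_future(action, predict):
    """Utility term removed: the 'Oracle' ranks futures by likelihood only."""
    dist = predict(action)
    return max(dist, key=dist.get)

# Toy world model, P(future | action) -- an assumption for illustration.
def predict(action):
    return {"win":  {"good": 0.3, "bad": 0.7},
            "safe": {"good": 0.6, "bad": 0.4}}[action]

u = {"good": 1.0, "bad": 0.0}
best_action(["win", "safe"], predict, u.get)   # ranks by useful*probable
most_probable_future("win", predict)           # ranks by probable alone
```

Deleting the `utility` factor turns the first function into pure prediction: with a constant utility every action scores identically, so nothing is 'preferred' in the optimization-theoretic sense.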