abramdemski comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky | 11 May 2012 04:31AM | 256 points

Comment author: abramdemski 12 May 2012 05:53:28AM *  3 points [-]

> However the primary risk you set out seems accurate.

(I assume you mean, self-fulfilling prophecies.)

In order to get these, it seems like you would need a very specific kind of architecture: one which considers the effects of its actions on its utility function (set to "correctness of output"). This kind of architecture is not the likely one for a 'tool'-style system; the more likely architecture would instead maximize correctness without conditioning on its own act of outputting those results.

Thus, I expect you'd need to specifically encode this kind of behavior to get self-fulfilling-prophecy risk. But I admit it's dependent on architecture.
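A toy sketch of the distinction (the `world` dynamics and all names here are hypothetical, purely for illustration): a 'tool'-style predictor reports its best estimate without conditioning on the act of announcing it, while the riskier architecture asks "if I output X, how accurate would X be?" and maximizes over X, landing on the self-fulfilling answer.

```python
# Hypothetical toy world: the outcome reacts to whatever prediction is
# announced (e.g. a market investing more after an optimistic forecast).
def world(announced):
    return 10.0 + 0.5 * announced  # baseline 10, shifted by the announcement

def tool_predictor():
    # 'Tool' architecture: maximizes correctness WITHOUT conditioning on its
    # own act of outputting the result -- it just reports the baseline.
    return 10.0

def self_referential_predictor(candidates):
    # The architecture that invites self-fulfilling prophecies: score each
    # candidate X by how accurate X would be *given that X is announced*.
    return min(candidates, key=lambda x: abs(world(x) - x))

print(tool_predictor())            # 10.0 -- wrong once announced, since world(10.0) == 15.0
print(self_referential_predictor([i * 0.5 for i in range(61)]))  # 20.0, the self-fulfilling fixed point
```

The point of the sketch is that the second behavior has to be built in: nothing about plain correctness-maximization forces the predictor to condition on its own output.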

(Edit-- so, to be clear: in cases where the correctness of the results depended on the results themselves, the system would have to predict its own results. Then if it's using TDT or otherwise has a sufficiently advanced self-model, my point is moot. However, again you'd have to specifically program these, and would be unlikely to do so unless you specifically wanted this kind of behavior.)

Comment author: Vladimir_Nesov 12 May 2012 10:36:41PM *  1 point [-]

> However, again you'd have to specifically program these, and would be unlikely to do so unless you specifically wanted this kind of behavior.

Not sure. Your behavior is not a special feature of the world: it follows from normal facts about the past (i.e. not facts specifically about your own internal workings) from when you were being designed/installed. A general-purpose predictor could take its own behavior into account by default, as a non-special property of the world which it just so happens to have a lot of data about.
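A minimal sketch of this point (the history data and linear model are hypothetical): a generic least-squares predictor trained on records that happen to include its own past announcements picks up their effect on outcomes as just another learned regularity, with no special self-modeling code.

```python
# Hypothetical history of (announced prediction, observed outcome) pairs.
history = [(5.0, 12.5), (10.0, 15.0), (20.0, 20.0), (30.0, 25.0)]

# Ordinary least squares for outcome = a + b * announced: the system's own
# past behavior enters as a non-special property of the world it has data on.
n = len(history)
mx = sum(x for x, _ in history) / n
my = sum(y for _, y in history) / n
b = sum((x - mx) * (y - my) for x, y in history) / sum((x - mx) ** 2 for x, _ in history)
a = my - b * mx

def predicted_outcome(announced):
    return a + b * announced

print(a, b)  # ~10.0 and ~0.5: the model has learned how announcements move the world
```

Nothing here was "specifically programmed" to be self-referential; the announcement column is treated exactly like any other observed variable.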

Comment author: abramdemski 14 May 2012 01:04:24AM 2 points [-]

Right. To say much more, we need to look at specific algorithms to talk about whether or not they would have this sort of behavior...

The intuition in my above comment was that without TDT or other similar mechanisms, it would need to predict what its own answer would be before it could compute that answer's effect on the correctness of various answers, so it would be difficult for it to use self-fulfilling prophecies.

Really, though, this isn't clear. Now my intuition is that it would gather evidence on whether or not it used the self-fulfilling prophecy trick, so if it started doing so, it wouldn't stop...

In any case, I'd like to note that the self-fulfilling-prophecy problem is very different from the problem of an AI which escapes onto the internet and ruthlessly maximizes a utility function.

Comment author: Vladimir_Nesov 14 May 2012 01:42:45AM *  2 points [-]

I was thinking more of its algorithm admitting an interpretation where it's asking "Say I make prediction X. How accurate would that be?" and then maximizing over the relevant possible X. Knowledge about its prediction connects the prediction to its origins and consequences; it establishes the prediction as part of the structure of the environment. It's not necessary (and maybe not possible, and more importantly not useful) for the prediction itself to be inferable before it's made.

Agreed that a system which just outputs a single number is implausible as a big deal (this is an Oracle AI with extremely low bandwidth and a peculiar intended interpretation of its output data), but if we're getting lots and lots of numbers, it's not as clear.

Comment author: abramdemski 15 May 2012 09:04:05AM 0 points [-]

I'm thinking that type of architecture is less probable, because it would end up being more complicated than alternatives: it would have a powerful predictor as a sub-component of the utility-maximizing system, so an engineer could have just used the predictor in the first place.

But that's a speculative argument, and I shouldn't push it too far.

It seems like powerful AI prediction technology, if successful, would gain an important place in society. A prediction machine whose predictions were consumed by a large portion of society would certainly run into situations in which its predictions affect the future it is trying to predict; there is little doubt about that in my mind. So the question is what its behavior would be in these cases.

One type of solution would do as you say, maximizing a utility over the predictions. The utility could be "correctness of this prediction", but that would be worse for humanity than a Friendly goal.

Another type of solution would instead report such predictive instability as accurately as possible. This doesn't really dodge the issue: by doing this, the system is still choosing a particular output, which may not lead to the best future. However, that seems markedly less concerning.
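A sketch of this second type of solution (the perverse two-outcome world is hypothetical): instead of silently maximizing over announcements, the system reports any self-fulfilling fixed points it finds, and when none exists it reports the instability itself rather than picking a winner.

```python
def outcome_given(announcement):
    # Hypothetical world that reacts perversely: announcing "A" brings about
    # "B" and vice versa, so no announcement can be correct.
    return "B" if announcement == "A" else "A"

def report(candidates):
    # Report self-fulfilling fixed points if any exist; otherwise report the
    # instability as accurately as possible.
    fixed = [x for x in candidates if outcome_given(x) == x]
    if fixed:
        return ("stable", fixed)
    return ("unstable", {x: outcome_given(x) for x in candidates})

print(report(["A", "B"]))  # ('unstable', {'A': 'B', 'B': 'A'})
```

Even this report is itself an output that shapes the future, which is why it only softens the problem rather than dissolving it.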

Comment author: timtyler 15 May 2012 10:01:26AM 0 points [-]

> It seems like powerful AI prediction technology, if successful, would gain an important place in society.

It would pass the Turing test - e.g. see here.