Wei_Dai comments on A cynical explanation for why rationalists worry about FAI - Less Wrong

Post author: aaronsw 04 August 2012 12:27PM


Comment author: Wei_Dai 09 August 2012 01:15:54PM 1 point

> How it would work under any form of practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the Big Bang to now from occurring within an AI, which seems to me like a very conservative bound) is a highly complicated open problem.

I certainly don't disagree when you put it like that, but I think the convention around here is that when we say "SI/AIXI will do X", we are usually referring to the theoretical (uncomputable) construct, not predicting that an actual future AI inspired by SI/AIXI will do X (in part because we do recognize the difficulty of the latter problem). The reason for saying "SI/AIXI will do X" may, for example, be to point out how even a simple theoretical model can behave in potentially dangerous ways that its designer didn't expect, or just to better understand what it might mean to be ideally rational.
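To be concrete about which theoretical construct I mean, here is a minimal sketch of the standard definitions (following Hutter's formulation; the notation below is standard, but the write-up is mine):

```latex
% Solomonoff prior: fix a universal prefix machine U. The prior probability
% of a finite binary string x sums over every program p whose output begins
% with x, where \ell(p) is the length of p in bits:
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% AIXI (Hutter's formulation): at time k, with horizon m, choose
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          \bigl(r_k + \cdots + r_m\bigr)
          \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

% M is lower semicomputable but not computable, which is the precise sense
% in which the construct is "theoretical (uncomputable)".
```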

Comment deleted 09 August 2012 02:08:26PM
Comment author: Wei_Dai 09 August 2012 07:31:27PM 2 points

OK, if what you're saying is not "SI concludes this" but just that we don't really know what even the theoretical SI concludes, then I don't disagree, and in fact I have made similar points before. (See here and here.) I guess I give Eliezer and Luke more of a pass (i.e., don't criticize them heavily on this basis) because it doesn't seem that any other proponent of algorithmic information theory (Schmidhuber or Hutter, for example) realizes that Solomonoff Induction may not assign most of its posterior probability mass to "physics sim + location" type programs, or, if they do realize it, they choose not to point it out. The presentation you linked to earlier is a good example of this.
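To make the posterior-mass question concrete, a sketch of the relevant quantity (ordinary Bayes over the length prior; my notation, and only an illustration of where the open question sits):

```latex
% Posterior weight of a program p consistent with observations x:
P(p \mid x) \;=\; \frac{2^{-\ell(p)}}{\sum_{q \,:\, U(q) = x*} 2^{-\ell(q)}}

% A "physics sim + location" hypothesis has length roughly
% \ell(\text{physics}) + \ell(\text{observer location}), so whether such
% programs dominate the posterior turns on how that sum compares with the
% lengths of rival compressions of x; that comparison is exactly the open
% question under discussion.
```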

> I believe that Hutter et al. were rightfully careful not to expect anything specific, i.e. not to expect it to kill him, not to expect it not to kill him, etc.

You would think that if Hutter believed there was a significant chance that AIXI would kill him, he would point that out prominently, so that people would prioritize working on the problem or at least keep it in mind while building AIXI approximations. Instead, he immediately encourages people to use AIXI as a model for building AIs (in "A Monte Carlo AIXI Approximation", for example) without mentioning any potential dangers.
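For context on what such approximations look like in practice, here is a deliberately toy sketch of the Monte Carlo planning idea they rest on. This is not the paper's algorithm (Veness et al. use ρUCT search over a context-tree-weighting model); the function names and the bandit environment below are hypothetical stand-ins, chosen only to make the sketch self-contained and runnable.

```python
import random

# Toy illustration of the Monte Carlo planning step in MC-AIXI-style
# approximations. A generative environment model is assumed to be given:
# model(history, action) -> (observation, reward).

def rollout(model, history, actions, horizon):
    """Sample one trajectory of `horizon` steps, acting uniformly at
    random, and return the total reward collected along the way."""
    total = 0.0
    for _ in range(horizon):
        action = random.choice(actions)
        obs, reward = model(history, action)
        history = history + [action, obs]
        total += reward
    return total

def mc_plan(model, history, actions, horizon, samples=500):
    """Estimate each first action's expected return by Monte Carlo
    rollouts against the model, and return the best-scoring action."""
    best_action, best_value = None, float("-inf")
    for action in actions:
        value = 0.0
        for _ in range(samples):
            obs, reward = model(history, action)
            value += reward + rollout(model, history + [action, obs],
                                      actions, horizon - 1)
        value /= samples
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Hypothetical environment model: a two-armed bandit where "left" pays
# off with probability 0.7 and "right" with probability 0.4.
def bandit_model(history, action):
    payout = 0.7 if action == "left" else 0.4
    reward = 1.0 if random.random() < payout else 0.0
    return "tick", reward

if __name__ == "__main__":
    # With enough samples the planner should almost always prefer "left".
    print(mc_plan(bandit_model, [], ["left", "right"], horizon=3))
```

The point of the sketch is only that the planner optimizes whatever the learned model predicts; nothing in the loop asks whether the resulting policy is safe, which is why the absence of any mention of danger is notable.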

> Those are questions to be, at last, formally approached.

Before you formally approach a problem (by which I assume you mean trying to formally prove it one way or the other), you have to decide that the problem is important enough to be worth the effort. How can we decide that, except by using intuition and heuristic/informal arguments? And in this case a proof would likely be too hard to produce (AIXI is uncomputable, after all), so intuition and heuristic/informal arguments may be the only tools we're left with.