private_messaging comments on A cynical explanation for why rationalists worry about FAI - Less Wrong

25 Post author: aaronsw 04 August 2012 12:27PM


Comment author: private_messaging 09 August 2012 11:31:00AM *  0 points [-]

On the issue private_messaging raises, I think it's a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place, because the hypothetical AI could quickly update away even a factor of 2^1000 when it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas so I don't trust that too much.
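[A toy sketch of the arithmetic behind "update away a factor of 2^1000" (my own illustration, not from the thread): if each observed sense datum is, say, twice as likely under the true hypothesis as under its rivals, each observation adds one bit of log-odds, so a 2^-1000 prior handicap is erased after about 1000 observations.]

```python
# Toy illustration (not from the original comments): counting how many
# 1-bit observations it takes to overcome a 2^-1000 prior handicap.

prior_log2_odds = -1000      # log2 odds against the true hypothesis a priori
bits_per_observation = 1     # assumption: each sense datum is twice as likely
                             # under the true hypothesis as under alternatives

log2_odds = prior_log2_odds
observations = 0
while log2_odds < 0:         # until the true hypothesis has even odds
    log2_odds += bits_per_observation
    observations += 1

print(observations)          # 1000
```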

Are you picturing an AI that has simulated the multiverse from the big bang onward, and then just uses camera sense data to very rapidly pick out the right universe? Well yes, that will dispose of a 2^1000 prior very easily. Something that is instead e.g. modelling humans with a minimum amount of guessing, without knowing what's inside their heads, and which can't really run any reductionist simulations at the level of quarks to predict its camera data, can have real trouble getting the fine details of its grand unified theory of everything right, and will most closely approximate a crackpot scientist. Furthermore, having to include a non-reductionist model of humans, it may even end up religious (feeding stuff into the human-mind model to build its theory of everything by intelligent design).

How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now to occur within an AI, which seems to me like a very conservative bound) is a highly complicated open problem. edit: and the very strong intuition I have is that you can't just dismiss this sort of stuff out of hand. So many ways it can fail. So few ways it can work great. And no rigour whatsoever in the speculations here.

Comment author: Wei_Dai 09 August 2012 01:15:54PM 1 point [-]

How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now to occur within an AI, which seems to me like a very conservative bound) is a highly complicated open problem.

I certainly don't disagree when you put it like that, but I think the convention around here is when we say "SI/AIXI will do X" we are usually referring to the theoretical (uncomputable) construct, not predicting that an actual future AI inspired by SI/AIXI will do X (in part because we do recognize the difficulty of this latter problem). The reason for saying "SI/AIXI will do X" may for example be to point out how even a simple theoretical model can behave in potentially dangerous ways that its designer didn't expect, or just to better understand what it might mean to be ideally rational.

Comment deleted 09 August 2012 02:08:26PM *  [-]
Comment author: Wei_Dai 09 August 2012 07:31:27PM 2 points [-]

Ok, if what you're saying is not "SI concludes this" but just that we don't really know what even the theoretical SI concludes, then I don't disagree with that and in fact have made similar points before. (See here and here.) I guess I give Eliezer and Luke more of a pass (i.e. don't criticize them heavily based on this) because it doesn't seem like any other proponent of algorithmic information theory (for example Schmidhuber or Hutter) realizes that Solomonoff Induction may not assign most posterior probability mass to "physics sim + location" type programs, or if they do realize it, choose not to point it out. That presentation you linked to earlier is a good example of this.
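[For concreteness, here is a toy sketch of the weighting at issue (my own illustration; all program lengths below are invented numbers, not from the thread): Solomonoff Induction gives each data-reproducing program prior weight 2^-length, so whether a "physics sim + location" program or a direct phenomenological model dominates the posterior depends entirely on their relative lengths.]

```python
from fractions import Fraction

# Toy illustration: the bit-lengths are made up, chosen only to show how the
# 2^-length prior can favor a direct model over "physics sim + location".

def solomonoff_weight(length_bits):
    # Each program that reproduces the observed data gets weight 2^-length.
    # Fraction avoids float underflow for large exponents.
    return Fraction(1, 2 ** length_bits)

physics_sim_bits = 100       # hypothetical: compact physical laws
location_bits = 10_000       # hypothetical: huge "where/when is the camera" index
direct_model_bits = 5_000    # hypothetical: non-reductionist predictive model

two_part = solomonoff_weight(physics_sim_bits + location_bits)
phenomenological = solomonoff_weight(direct_model_bits)

# With these made-up lengths, most posterior mass does NOT go to the
# "physics sim + location" program.
print(phenomenological > two_part)   # True
```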

I believe that Hutter et al. were rightfully careful not to expect anything specific, i.e. not to expect it to not kill him, not to expect it to kill him, etc.

You would think that if Hutter thought there was a significant chance that AIXI would kill him, he would point that out prominently, so that people would prioritize working on this problem or at least keep it in mind as they try to build AIXI approximations. But instead he immediately encourages people to use AIXI as a model for building AIs (in A Monte Carlo AIXI Approximation, for example) without mentioning any potential dangers.

Those are questions to be, at last, formally approached.

Before you formally approach a problem (by which I assume you mean try to formally prove it one way or another), you have to think that the problem is important enough. How can we decide that, except by using intuition and heuristic/informal arguments? And in this case it seems likely that a proof would be too hard to produce (AIXI is uncomputable, after all), so intuition and heuristic/informal arguments may be the only things we're left with.