private_messaging comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong

14 Post author: Stuart_Armstrong 30 April 2012 01:53PM


Comments (45)


Comment author: private_messaging 04 May 2012 06:51:18AM -2 points

> build any computational system which generates a range of actions, predicts the consequences of those actions relative to some ontology and world-model, and then selects among probable consequences using criterion X.

Nothing mysterious here: this naive approach has an incredibly low payoff per unit of computation, and even if you start with such a system and get it smart enough to make improvements, the first thing it will improve is its own architecture.

If I gave you 10^40 flops, which could probably support a 'superintelligent' mind, your naive approach would still be dumber than a housecat on many tasks. For some world evolutions and utility functions, you can compute the inverse of 'simulate and choose' far better (think towering exponents better) than by brute-forcing 'try different actions'. In general you can't. Some functions are much easier to invert than others.
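A toy sketch of the contrast the commenter is pointing at (this example is mine, not from the comment): if the world model happens to have exploitable structure, an agent can solve for the goal-reaching action directly instead of simulating every candidate action. Here the "world evolution" is a hypothetical affine map modulo N, so the inverse is one modular-arithmetic step, while the naive planner scans the whole action space.

```python
# Toy illustration: inverting a structured world model vs. brute-force
# 'simulate and choose'. The world model here is hypothetical, chosen
# only because its inverse has a closed form.
N = 10**6 + 3  # size of the toy action/state space (a prime)

def evolve(action: int) -> int:
    """Toy world model: maps an action to its consequence."""
    return (3 * action + 7) % N

def brute_force_plan(goal: int) -> int:
    """'Simulate and choose': try every action, keep the one that works.
    Cost grows linearly with the size of the action space."""
    for a in range(N):
        if evolve(a) == goal:
            return a
    raise ValueError("no action reaches the goal")

def inverse_plan(goal: int) -> int:
    """Exploit structure: solve 3*a + 7 == goal (mod N) in O(1)-ish time
    using the modular inverse of 3."""
    return ((goal - 7) * pow(3, -1, N)) % N

goal = 123456
assert brute_force_plan(goal) == inverse_plan(goal)
```

Both planners return the same action, but only because this particular `evolve` is easy to invert; for a generic function no such shortcut exists, which is the "in general you can't" half of the claim.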