timtyler comments on Dreams of AIXI - Less Wrong

-1 Post author: jacob_cannell 30 August 2010 10:15PM




Comment author: rhollerith_dot_com 31 August 2010 08:52:45AM 3 points

The only way to fully know what a program will do in general given some configuration of its memory is to simulate the whole thing - which is equivalent to making a copy of it.

And the probability that a sufficiently intelligent agent will ever need to fully know what a program will do is IMHO negligible. If the purpose of the program is to play chess, for example, the agent probably only cares that the program does not persist in making an illegal move and that it gets as many wins and draws as possible. Even if the agent cares about more than just that, the agent cares only about a small, finite list of properties.

If the purpose of the program is to keep track of bank balances, the agent again cares only whether the program has a small, finite list of properties: e.g., whether it disallows unauthorized transactions, whether it ensures that every transaction leaves an audit trail, and whether the bank balances and accounts obey "the law of the conservation of money".
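To make the point concrete, such a "small, finite list of properties" can be written down as a handful of checkable predicates. This is a minimal illustrative sketch; the ledger representation and all names here are invented for the example, not taken from any real banking system.

```python
# Hypothetical sketch: the finite list of properties an agent cares about,
# expressed as predicates over a toy transaction log.

def conserves_money(transactions):
    """Every transfer debits one account by exactly what it credits another."""
    return all(t["debit"] == t["credit"] for t in transactions)

def leaves_audit_trail(transactions):
    """Every transaction records who authorized it and when."""
    return all("authorized_by" in t and "timestamp" in t for t in transactions)

def all_authorized(transactions, authorized_users):
    """No transaction was initiated by an unknown party."""
    return all(t["authorized_by"] in authorized_users for t in transactions)

log = [
    {"debit": 100, "credit": 100, "authorized_by": "alice", "timestamp": 1},
    {"debit": 50, "credit": 50, "authorized_by": "bob", "timestamp": 2},
]

print(conserves_money(log))                   # True
print(leaves_audit_trail(log))                # True
print(all_authorized(log, {"alice", "bob"}))  # True
```

Here the predicates are checked against a log at runtime, but the same properties could equally be established once and for all by reasoning about the program's source, which is the point of the paragraphs that follow.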

It is emphatically not true that the only way to know whether a program has those properties is to run or simulate the program.

Could it be that you are interpreting Rice's theorem too broadly? Rice's theorem says that for any nontrivial semantic property, no algorithm can classify every program correctly as to whether it has that property. But programmers simply pick programs that can be classified correctly, and in practice this always proves possible.

In other words, if the programmer wants his program to have properties X, Y, and Z, he simply picks from the class of programs that can be classified correctly as to whether they have properties X, Y, and Z. This is straightforward, and not something an experienced programmer even has to think about consciously, unless the "programmer" (who in that case is really a theory-of-computing researcher) is deliberately looking for a set of properties for which no correctly classifiable program exists.
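The standard way to "pick from the class of programs that can be classified correctly" is to restrict the programming language itself. As an illustrative sketch (the mini-language and its representation are invented for this example), here is a tiny interpreter in which the only loop form repeats its body a fixed number of times, so termination holds for every program in the class by construction, with no simulation needed to establish it:

```python
# Illustrative sketch: a "bounded-loop" language in which every program
# provably halts. Rice's theorem is side-stepped by restricting the class
# of programs rather than by trying to classify all programs.

def run(program, env=None):
    """Programs are lists of statement tuples; 'repeat' takes a constant
    iteration count, so every program terminates."""
    env = dict(env or {})
    for stmt in program:
        op = stmt[0]
        if op == "set":            # ("set", var, value)
            _, var, value = stmt
            env[var] = value
        elif op == "add":          # ("add", var, amount)
            _, var, amount = stmt
            env[var] += amount
        elif op == "repeat":       # ("repeat", n, body) -- n is a constant
            _, n, body = stmt
            for _ in range(n):
                env = run(body, env)
    return env

# Termination of this program needs no debugging session to verify:
# it follows for the whole language at once.
result = run([("set", "x", 0), ("repeat", 5, [("add", "x", 2)])])
print(result["x"])  # 10
```

This is the same move made at scale by total languages and by proof assistants: the undecidable cases Rice's theorem guarantees still exist, but they lie outside the class the programmer ever writes in.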

Now it is true that human programmers spend a lot of time testing their programs and "simulating" them in debuggers, but there is no reason that all the world's programs could not be delivered without doing any of that: those techniques are simply not necessary to delivering code that is assured to have the properties desired by our civilization.

For example, if there were enough programmers with the necessary skills, every program could be delivered with a mathematical proof that it has the properties that it was intended to have, and this would completely eliminate the need for testing or debugging. (If the proof and the program are developed at the same time, the "search of the space of possible programs" naturally avoids the regions where one might run into the limitation described in Rice's theorem.)
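A small taste of "developing the proof with the program", in the tradition of Floyd and Hoare mentioned below: the loop invariant that would appear in a correctness proof is written down alongside the code. In this hedged sketch the invariant is merely checked with runtime assertions for illustration; in a genuine correctness proof it would be discharged statically, without ever running the program.

```python
# Sketch of proof-carrying development: integer square root by bisection,
# with the Floyd/Hoare-style loop invariant stated next to the code.

def int_sqrt(n):
    """Return the largest r with r*r <= n."""
    assert n >= 0                      # precondition
    lo, hi = 0, n + 1
    # Invariant: lo*lo <= n < hi*hi
    while hi - lo > 1:
        assert lo * lo <= n < hi * hi  # runtime check of the invariant
        mid = (lo + hi) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid
    # On exit hi == lo + 1, so the invariant yields lo*lo <= n < (lo+1)**2,
    # which is exactly the postcondition.
    return lo

print(int_sqrt(10))  # 3
print(int_sqrt(16))  # 4
```

The point is that the invariant, not the test runs, is what justifies the code: once the invariant is proved to hold initially and to be preserved by the loop body, correctness follows for all inputs at once.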

There are in fact not enough programmers with the necessary skills to deliver such "correctness proofs" for all the programs that the world's programmers currently deliver, but superintelligences will not suffer from that limitation. IMHO they will almost never resort to testing and debugging the programs they create. They will instead use more efficient techniques.

And if a superintelligence -- especially one that can improve its own source code -- happens on a program (in source code form or in executable form), it does not have to run, execute or simulate the program to find out what it needs to find out about it.

Virtual machines, interpreters and the idea of simulation or program execution are important parts of current technology (and consequently of current intellectual discourse) only because human civilization does not yet have the intellectual resources to wield more sophisticated techniques. To reach this conclusion, it was sufficient for me to study the line of research called "programming methodology" or axiomatic semantics, which began in the 1960s with John McCarthy, R.W. Floyd, C.A.R. Hoare and Dijkstra.

Note also that what is now called discrete-event simulation, and what in the early decades of computing was called simply "simulation", has shrunk in importance over the decades as humankind has learned more sophisticated and more productive ways of using computers (e.g., statistical machine learning, which does not involve simulating anything).