You're building a box to contain a strong AI out of type checking?
No, where did I say that? A VM layer with strong type checking is just one layer.
I suggest that unless your types include "strings that can be displayed to a human without causing him to let the AI out of the box", this is unlikely to succeed.
I don't want to debate this here because (a) I haven't the time, and (b) prior debates I've had on LW on the subject have been 100% unproductive, but in short: there hasn't been a single convincing argument against the feasibility of boxing, and there is nothing of value to be learnt from the AI box "experiments". So you don't want to give a single isolated human a direct line to a UFAI running with no operational oversight. Well, duh. It's a total strawman.
Won't you run into Halting Problem issues?
No.
Incidentally, can you quantify this and say how your machine is less powerful than a Turing-complete one but more powerful than regexes?
I did: http://en.wikipedia.org/wiki/Total_functional_programming
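To make the distinction concrete, here is a minimal Python sketch of the discipline that link describes (illustrative only; a real total language enforces this at compile time, and the function names here are my own). Every recursive call must be on a structurally smaller argument, so every program provably terminates: strictly weaker than a Turing machine, but far more expressive than regular expressions.

```python
def total_sum(xs):
    """Structural recursion on a list: each call consumes one
    element, so the function always terminates on finite input."""
    if not xs:
        return 0
    head, *tail = xs
    return head + total_sum(tail)

# By contrast, a definition like `def loop(n): return loop(n)` has no
# shrinking argument, so a totality checker would reject it outright.

print(total_sum([1, 2, 3, 4]))  # -> 10
```

The price of the guarantee is that some computable functions become inexpressible, which is exactly why the machine is less powerful than a Turing-complete one.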
there is nothing of value to be learnt from the AI box "experiments".
I would disagree: even if the experiments do not deter us from running boxed AI, they might allow us to know what arguments to prepare for.
Do we know whether anyone has let the AI out of the box in experiments without the AI using simulation warfare? Because AFAICT there is a considerable amount of processing power & information required in this case, and limiting these factors while precommitting not to let the AI out could minimise the risk.
How are you saving the world? Please, let us know!
Whether it is solving the problem of death or teaching rationality, one of the correlated phenomena of being less wrong is making things better. Given the value many of us place on altruism, this extends beyond just ourselves and into that question of, “How can I make The Rest better?” The rest of my community. The rest of my country. The rest of my species. The rest of my world. To word it in a less other-optimizing way: How can I save the world?
So, tell us how you are saving the world. Not how you want to save the world. Not how you plan to. How you are, actively, saving the world. It doesn’t have to be “I invented a friendly AI,” or “I reformed a nation’s gender politics” or “I perfected a cryonics reviving process.” It can be a simple goal (“I taught a child how to recognize when they use ad hominem” or “I stopped using as much water to shower”) or a simple action as part of a larger plan (such as “I helped with a breakthrough on reducing gas emissions in cars by five percent”).
If we accept this challenge of saving the world, then let us be open and honest with our progress. Let us put our successes on display and our shortcomings as well, so that both can be recognized, recommended, and, if need be, repaired.
If you are not doing anything to save the world, not even something as simple as “learning about global risks” or “encouraging others to research a topic before deciding on it”, then find something. Find a goal and work for it. Find an act that needs doing and do it.
Then tell us about it.