jacob_cannell comments on Leaving LessWrong for a more rational life - Less Wrong Discussion
I don't think anyone at MIRI arrived at worries like 'AI might be able to deceive their programmers' or 'AI might be able to design powerful pathogens' by staring at the equation for AIXI or AIXItl. AIXI is a useful idea because it's well-specified enough to let us have conversations that are more than just 'here are my vague intuitions vs. your vague-intuitions'; it's math that isn't quite the right math to directly answer our questions, but at least gets us outside of our own heads, in much the same way that an empirical study can be useful even if it can't directly answer our questions.
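For concreteness, the action-selection rule that makes AIXI 'well-specified' can be written in one line. What follows is a hedged paraphrase of Hutter's definition rather than an exact quotation: $U$ is a universal Turing machine, $q$ ranges over environment programs, $\ell(q)$ is the length of $q$, and $m$ is the horizon.

```latex
% AIXI picks the action maximizing expected total reward out to horizon m,
% averaging over all environment programs q consistent with the interaction
% history, weighted by the Solomonoff-style prior 2^{-l(q)}.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_k + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Everything in that expression is formally pinned down, which is what lets a conversation about it be more than dueling intuitions; the catch is that the mixture over all programs is incomputable, so AIXI is a reference point rather than an algorithm (hence computable variants like AIXItl).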
Investigating mathematical and scientific problems that are adjacent to the philosophical problems we care about is a good idea when we don't yet understand the philosophical problem well enough to directly formalize or test it, because it serves as a point of contact with a domain that isn't just 'more vague human intuitions'. Historically this has often been a good way to make intellectual progress, though it's important to keep in mind just how limited our results are.
AIXI is also useful because the problems we couldn't solve even if we (impossibly) had recourse to AIXI often overlap with the problems where our theoretical understanding of intelligence is especially lacking, and where we may therefore want to concentrate our early research efforts.
The idea that AI will have various 'superpowers' comes more from:
(a) the thought that humans often vary a lot in how much they exhibit a given power (without appearing to vary all that much in hardware);
(b) the thought that human brains have known hardware limitations, where existing machines (and a fortiori machines 50 or 100 years down the line) can surpass humans by many orders of magnitude; and
(c) the thought that humans have many unnecessary software limitations, including cases where machines currently outperform humans. There's also no special reason to expect evolution's first stab at technology-capable intelligence to have stumbled on all the best possible software ideas.
A more common intuition pump is to simply note that limitations in human brains suggest speed superintelligence is possible, and it's relatively easy to imagine speed superintelligence allowing one to perform extraordinary feats without imagining other, less well-understood forms of cognitive achievement. Rates of cultural and technological progress in human societies are a better (though still very imperfect) source of data than AIXI about how much improvement intelligence makes possible.
This should be possible to some extent, especially when it comes to progress in mathematics. We should also distinguish software experiments from physical experiments, since it's a lot harder to keep an AI from performing the former, and the former are much easier to speed up in proportion to speed-ups in the experimenter's ability to analyze results.
I don't think there's any specific consensus view about how much progress requires waiting for results from slow experiments. I frequently hear Luke raise the possibility that slow natural processes could limit rates of self-improvement in AI, but I don't know whether he considers that a major consideration or a minor one.
This is actually completely untrue, and is an example of a typical misconception about programming, which is far closer to engineering than math. Every single time you compile and run a program, you are physically testing a theory, in a way exactly equivalent to building and testing a physical machine.
If you speed up an AI, whether by speeding up its mental algorithms or by giving it more hardware, you actually slow down the subjective speed of the world and of all other software systems in exact proportion. This has enormous consequences, some of which I explored here and here. Human brains operate at 1000 Hz or less, which suggests that a near-optimal (in terms of raw speed) human-level AGI could run at a time dilation of around 1,000,000x. However, that would mean the computers the AGI has access to are subjectively slower by the same factor of a million: if it's compiling code for 10 GHz CPUs, those subjectively run at 10 kHz.
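To make that arithmetic concrete, here is a minimal sketch. The numbers (the ~1000 Hz brain figure and the millionfold speedup) are the assumptions from the paragraph above, and the function is purely illustrative:

```python
# Minimal sketch of the subjective-time-dilation arithmetic above.
# The constants are the comment's illustrative assumptions, not measurements.

SPEEDUP = 1_000_000          # hypothetical millionfold subjective speedup
                             # (brain ~1 kHz -> AGI substrate ~1 GHz)

def subjective_hz(objective_hz: float, speedup: float = SPEEDUP) -> float:
    """Clock rate of an external device as experienced by the sped-up mind."""
    return objective_hz / speedup

cpu_hz = 10e9                # a 10 GHz CPU
print(f"{subjective_hz(cpu_hz):,.0f} Hz")  # 10,000 Hz -> subjectively 10 kHz
```

The point of the division is that the speedup buys the AGI nothing in its interactions with external hardware: every device it waits on slows down, subjectively, by exactly the factor the AGI sped up.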