darius comments on Fast Minds and Slow Computers - Less Wrong
Yes, these are some good points, loqi.
Much would depend on how far parallel compilation has advanced by the time such an AGI design becomes feasible.
Right now compilers are barely multi-threaded, so they have a very long way to go before reaching their maximum possible speed.
So the question really becomes: what are the limits of compilation speed? Say you had the fastest possible C++ compiler running on dozens of GPUs, for example.
I'm not a compilation expert, but from what I remember many of the steps in compilation and linking are inherently serial, and much would need to be rethought to parallelize them. That would be a great deal of work.
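To make the serial-vs-parallel split concrete, here's a rough sketch of the parallelism build systems already exploit today: translation units compile independently, but the link step serializes everything at the end. (The file names are hypothetical, and it assumes a cc on the PATH; this is an illustration, not a real build system.)

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SOURCES = ["a.c", "b.c", "c.c"]  # hypothetical translation units

    def compile_unit(src):
        # Compile one translation unit to an object file.
        obj = src[:-2] + ".o"
        subprocess.run(["cc", "-c", src, "-o", obj], check=True)
        return obj

    with ThreadPoolExecutor() as pool:
        # Embarrassingly parallel phase: one independent job per file.
        objects = list(pool.map(compile_unit, SOURCES))

    # Serial phase: the linker has to see every object file at once.
    subprocess.run(["cc", *objects, "-o", "program"], check=True)

The parallel phase scales with the number of files, which is why adding cores helps big builds; the link (and any whole-program optimization) is the part that would need rethinking.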
There would still be minimum times just to load data to and from storage and transfer it around the network if the program is distributed.
And depending on what you are working on, there is a minimum debug/test cycle. For most complex real-world engineering systems this is always going to be the limiting factor in the end.
The slowest phase in a nonoptimizing compiler is lexical scanning. (An optimizer can usefully absorb arbitrary amounts of effort, but most compiles don't strictly need it.) For most languages, scanning can be done in a few cycles/byte. Scanning with finite automata can also be done in parallel in O(log(n)) time, though I don't know of any compilers that do that.

So a system built for fast turnaround, using methods we know now (like good old Turbo Pascal), ought to be able to compile several lines/second given 1 kcycle/sec. Therefore you still want to recompile only small chunks and make linking cheap; in the limit there were the old 8-bit BASICs that essentially treated each line of the program as a compilation unit. See P. J. Brown's old book, or Chuck Moore's colorForth.
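To make the O(log(n)) claim concrete, here's a toy sketch of the idea (the two-state DFA and all the names are illustrative, not from any real lexer): each input byte maps to a transition function state -> state, and function composition is associative, so the whole input can be combined in a balanced tree of depth O(log(n)) given enough parallel hardware.

    NSTATES = 2  # 0 = in code, 1 = inside a double-quoted string

    def step(ch):
        # Transition function for one character, as a tuple indexed by state.
        if ch == '"':
            return (1, 0)  # a quote toggles between the two states
        return (0, 1)      # every other character leaves the state alone

    def compose(f, g):
        # Apply f's transition first, then g's.
        return tuple(g[f[s]] for s in range(NSTATES))

    def scan(text):
        # Balanced-tree reduction; each level's compositions are independent,
        # so the depth is O(log(n)) on parallel hardware.
        funcs = [step(ch) for ch in text]
        while len(funcs) > 1:
            funcs = [compose(funcs[i], funcs[i + 1]) if i + 1 < len(funcs)
                     else funcs[i]
                     for i in range(0, len(funcs), 2)]
        return funcs[0][0]  # final state, starting from state 0

    print(scan('x = "abc" + y'))  # 0: the quotes balance, we end outside

This toy version only reduces to the final state; a full parallel scan would keep all the intermediate prefix states, which is what a lexer needs to find token boundaries. The same trick works for any finite automaton, since the transition functions over a finite state set form a monoid under composition.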