jacob_cannell comments on Leaving LessWrong for a more rational life - Less Wrong Discussion

33 [deleted] 21 May 2015 07:24PM

Comment author: jacob_cannell 15 June 2015 07:19:17AM 0 points [-]

> We should also distinguish software experiments from physical experiments, since it's a lot harder to keep an AI from performing the former, and the former are much easier to speed up in proportion to speed-ups in the experimenter's ability to analyze results.

This is actually completely untrue, and is an example of a typical misconception about programming - which is far closer to engineering than math. Every single time you compile and run a program, you are physically testing a theory - exactly equivalent to building and testing a physical machine.

If you speed up an AI - by speeding up its mental algorithms or giving it more hardware - you actually slow down the subjective speed of the world and of all other software systems in exact proportion. This has enormous consequences - some of which I explored here and here. Human brains operate at 1,000 Hz or less, which suggests that a near-optimal (in terms of raw speed) human-level AGI could run at a time dilation factor of 1 million. However, that would mean the computers the AGI has access to are subjectively slower by the same factor of 1 million - so if it's compiling code for 10 GHz CPUs, those subjectively run at 10 kHz.
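The time-dilation arithmetic above can be checked in a few lines. This is just a sketch of the comment's own numbers (a ~1,000 Hz brain, a hypothetical million-fold speedup); the variable names are mine, not anything from the post.

```python
# Worked example of the subjective time-dilation arithmetic.
# Assumed figures are taken directly from the comment above.

brain_rate_hz = 1_000        # rough upper bound on human neural firing rate
speedup = 1_000_000          # hypothetical AGI time-dilation factor

# Subjective-equivalent rate of the sped-up AGI: 1 GHz
agi_rate_hz = brain_rate_hz * speedup

# A 10 GHz CPU, as experienced by the million-fold-faster AGI
cpu_clock_hz = 10e9
subjective_cpu_hz = cpu_clock_hz / speedup

print(agi_rate_hz)        # 1000000000 (1 GHz)
print(subjective_cpu_hz)  # 10000.0    (10 kHz, as the comment says)
```

The point of the division is that dilation cuts both ways: every external system, including the AGI's own hardware, slows down subjectively by exactly the factor the AGI sped up.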