buybuydandavis comments on Why AI may not foom - Less Wrong Discussion

23 Post author: John_Maxwell_IV 24 March 2013 08:11AM

Comment author: Manfred 25 March 2013 06:56:13PM  2 points

Right. You can definitely get the same solution to the same problem twice as fast. What a label like "NP-hard" is pointing at is that doubling your hardware doesn't let you solve problems that are twice as complicated in the same amount of time. So your dumb robot can do dumb things twice as fast, but it can't do things twice as smart :P
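A toy sketch of that scaling point. The brute-force algorithm and the budget numbers here are assumptions for illustration, not anything from the comment: if an algorithm needs 2**n steps on an instance of size n, then doubling the step budget buys only one more unit of instance size.

```python
import math

# Assumed toy model: a brute-force algorithm that takes 2**n steps
# on an instance of size n. With a step budget B, the largest
# solvable instance size is floor(log2(B)).
def max_instance_size(budget):
    return int(math.log2(budget))

print(max_instance_size(10**6))      # budget B: size 19
print(max_instance_size(2 * 10**6))  # budget 2B: size 20, just one more
```

Twice the hardware moves you from size 19 to size 20, not to size 38 — which is the sense in which the robot gets faster but not smarter.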

There's one more consideration: if you're approximating and you keep the problem the same, doubling your hardware won't always find you a solution that's twice as good. But I think the returns can reasonably be either sublinear or superlinear, until you get up to really large amounts of computing power.
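One hypothetical way to see those varying returns on approximation quality: random search on a toy objective, where "hardware" is the sample budget. The objective function, budgets, and seed below are all assumptions for illustration.

```python
import random

# Assumed toy objective: a smooth 1-D function maximized at x = 0.3.
def f(x):
    return -(x - 0.3) ** 2

# Best value found by n random samples. A fixed seed means the first
# n draws are a prefix of the first 2n draws, so quality is monotone
# in the budget.
def best_of(n, seed=0):
    rng = random.Random(seed)
    return max(f(rng.uniform(0.0, 1.0)) for _ in range(n))

for n in (100, 200, 400, 800):
    print(n, best_of(n))
```

The best solution keeps improving as the budget doubles, but how fast depends on the problem: for this smooth objective the error shrinks faster than linearly in the sample count, while for other objectives the gains from doubling can be much smaller.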