Kevin comments on Singularity Summit 2010 on Aug. 14-15 in San Francisco - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (22)
Or see That Alien Message. Basically, an AI that is able to make truly efficient use of sensory information could have a chance of solving Cosmology in short order.
http://lesswrong.com/lw/qk/that_alien_message/
I read that article once, and parts of it more than once, but I still fail to see how it's relevant here. It must be important, since two people have already linked to it.
The point is that even with only moderate intelligence, if you speed that intelligence up enough you can potentially get a lot of gains. For example, if you took a moderately smart human (say, an average Less Wrongian) and made them able to think a hundred times as fast, they'd be pretty damn productive, even if their overall creativity were not that much higher.

Now, we don't know the minimal processing power it takes to run an intelligence. Imagine it turned out that you could simulate a roughly human-level intelligence in real time on an old 486, and that the main obstacle was just figuring out the algorithms. That would mean a cheap commercial machine today could run that AI around a thousand times as fast as a human. Now, you may object that you find it implausible that an AI could run in real time on a 486. That's OK. Do you think it's plausible that it could run on a machine today, if we had the algorithms? OK. Then imagine what happens if we find those algorithms 20 years from now: the same end result. Unless you believe that we will coincidentally discover how to make general AI at about the same time we have precisely the processing power needed to run it, AIs will likely be quite fast little buggers.
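The speedup argument above can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch; the 1000x factor is the comment's illustrative number, and the function name is my own:

```python
def subjective_hours(real_hours: float, speedup: float) -> float:
    """Hours of subjective thinking time experienced by a mind
    running at `speedup` times human speed, over `real_hours`
    of wall-clock time."""
    return real_hours * speedup

# A human-level mind running 1000x faster gets 24,000 subjective
# hours of thought per real day -- about 1,000 subjective days.
per_day = subjective_hours(24, 1000)
print(per_day)        # 24000.0
print(per_day / 24)   # 1000.0 subjective days per real day
```

Even a modest 100x speedup turns one real year into a subjective century of work, which is the sense in which "moderate intelligence, sped up enough" yields large gains.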
That's a misconception. We're not trying to simulate human or human-like brains; IMO, NNs and the like are dead ends. The AI project I'm currently working on will (theoretically) be able to run on any machine. The thing is, on a super-fast machine it can spend extra time analyzing problems, while on a slow one it will probably have to spend most of its time figuring out how to solve the problem without wasting so much power. So yes, there is a definite advantage to speed, but the AI will always be as efficient as possible given the power it has. Measuring its intelligence by how well it does compared to a human isn't practical; by that standard, a calculator could be argued to be thousands of times faster than a human.
That's a response that relies on specific models of AI. If one can construct any AI that functionally resembles a human mind, then speed of this sort will matter.