TRIZ-Ingenieur comments on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities - Less Wrong
I've worked on the D-Wave machine, in the sense that I've run algorithms on it; I haven't contributed to the design of the hardware. I have no idea whether that machine will eventually be dramatically faster than conventional hardware. It's an open question. But if it is, the impact would be enormous, because many machine learning algorithms map directly onto D-Wave hardware. It seems like a perfect fit for the sort of work machine learning researchers are doing at the moment.
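To make the "directly mapped" claim concrete: a D-Wave annealer accepts problems in QUBO form, i.e. minimizing x^T Q x over binary vectors x, and many ML subproblems (feature selection, clustering, assignment constraints) can be rewritten that way. A minimal sketch, brute-forcing a toy QUBO classically just to show the form a problem must take (the matrix and the one-hot example are my own illustration, not a D-Wave API):

```python
import itertools

def solve_qubo_brute_force(Q):
    """Exhaustively minimize x^T Q x over binary vectors x.

    An annealer accepts problems in exactly this QUBO form; here we
    brute-force a tiny instance classically for illustration.
    """
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Toy example: "pick exactly one of three items" (a one-hot constraint,
# a building block of clustering/assignment formulations). The penalty
# (x0 + x1 + x2 - 1)^2 expands to QUBO coefficients -1 on the diagonal
# and +2 off-diagonal (dropping the constant term).
Q = [[-1, 2, 2],
     [0, -1, 2],
     [0, 0, -1]]

x, e = solve_qubo_brute_force(Q)
print(x, e)  # a one-hot vector with energy -1
```

Any vector with exactly one bit set achieves the minimum energy of -1, which is what the one-hot penalty is designed to enforce.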
As for other kinds of quantum hardware, their feasibility remains to be demonstrated. I think we can say with fair certainty that nothing like a 512-qubit fully entangled quantum computer (what you'd need to, say, crack the basic RSA algorithm) will exist within the next 20 years at least; personally I'd put my money on more than 50 years out. The problems just seem too hard, progress has stalled, and every time someone comes up with a way to attack them, it just produces a host of new problems. For instance, topological quantum computers were hot a few years ago because people thought they would be immune to some types of decoherence. As it turned out, though, they just introduce sensitivity to new sources of decoherence (thermal fluctuations). When you do the math, you haven't actually gained much from the topological framework, and since you can simulate a topological quantum computer on an ordinary one, a TQC should really be considered just another quantum error correction scheme, of which we already know many.
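The simulation point cuts both ways: a conventional machine can simulate any quantum computer, but the statevector it must track has 2^n complex amplitudes, which is also why a genuine 512-qubit machine could not be shortcut classically. A minimal NumPy sketch (my own toy simulator, not any particular library's API) that puts 10 qubits into uniform superposition and shows the amplitude count:

```python
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a 2x2 single-qubit gate to qubit `target` of an n-qubit statevector."""
    # Reshape the 2**n amplitudes so the target qubit gets its own axis.
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

n = 10
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in |00...0>
for q in range(n):
    state = apply_gate(state, H, q, n)

# The state is now a uniform superposition over all 2**n basis states.
print(state.size)        # 1024 amplitudes for just 10 qubits
print(abs(state[0])**2)  # each outcome has probability 1/1024
```

At 512 qubits the same representation would need 2^512 amplitudes, vastly more than any classical memory, so simulability is a statement about small systems and structure, not a practical substitute.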
All indications seem to be that by 2064 we're likely to have a human-level AI. So I doubt that quantum computing will have any effect on the initial development of AI (or at least the development of a seed AI), though it could have a huge effect on AI's subsequent progression.
Our human cognition is mainly based on pattern recognition (cf. Ray Kurzweil, "How to Create a Mind"). Information stored in the neural structures of our brain sometimes waits for decades until a trigger stimulus makes a pattern recognizer fire. Huge numbers of patterns can be stored while most pattern recognizers sit dormant, consuming very little energy. Quantum computing, with decoherence times on the order of seconds, is totally unsuitable for the combined task of pattern analysis and long-term pattern memory holding millions of patterns. IBM's newest SyNAPSE chip, with 5.4 billion transistors on a 3.5 cm² die and only 70 mW power consumption in operation, is far better suited to push technological development towards AI.
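The dormant-until-triggered idea can be sketched in a few lines. This is my own toy model of a Kurzweil-style pattern recognizer, not IBM's SyNAPSE architecture: each recognizer stores one pattern, does nothing until a stimulus arrives, and fires only when the overlap with its stored pattern crosses a threshold.

```python
class PatternRecognizer:
    """Toy model of a dormant pattern recognizer: it stores a pattern
    and fires only when an input stimulus overlaps it enough.
    (Illustrative sketch only; not the SyNAPSE/TrueNorth hardware model.)
    """
    def __init__(self, pattern, threshold=0.8):
        self.pattern = frozenset(pattern)
        self.threshold = threshold

    def stimulate(self, stimulus):
        # Fraction of the stored pattern present in the stimulus.
        overlap = len(self.pattern & set(stimulus)) / len(self.pattern)
        return overlap >= self.threshold

# Millions of these could sit idle, costing nothing until queried;
# only the recognizers matching the stimulus "fire".
memory = [PatternRecognizer({"four", "legs", "tail", "bark"}),
          PatternRecognizer({"four", "legs", "tail", "meow"})]

stimulus = {"four", "legs", "tail", "bark", "collar"}
fired = [i for i, r in enumerate(memory) if r.stimulate(stimulus)]
print(fired)  # [0] -- only the "bark" recognizer fires
```

The energy argument in the paragraph corresponds to the fact that a stored, inactive recognizer costs nothing here; computation happens only on a trigger, which is exactly what long decoherence-free storage in a quantum register cannot provide.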