timtyler comments on Slowing Moore's Law: Why You Might Want To and How You Would Do It - Less Wrong
Brain emulations are a joke. Intelligence augmentation seems much more significant - though it is not really much of an alternative to machine intelligence.
Why would you think they're a joke? We seem to be on a clear path to achieve it in the near future.
As a route to machine intelligence, they don't make sense: they will become viable too late, and so they will be beaten.
How do you know that?
Multiple considerations are involved. One of them is to do with bioinspiration. To quote from my Against Whole Brain Emulation essay:
The existence of non-biomimetic technology does not prove that biomimetics are inherently impractical.
There are plenty of recent examples of successful biomimetics:
Biomimetic solar: http://www.youtube.com/watch?v=sBpusZSzpyI
Anisotropic dry adhesives: http://bdml.stanford.edu/twiki/bin/view/Rise/StickyBot
Self-cleaning paints: http://www.stocorp.com/blog/?tag=lotusan
Genetic algorithms: http://gacs.sourceforge.net/
The reason we didn't have much historical success with biomimetics is that biological systems are far too complex to understand with a cursory look. We need modern bioinformatics, imaging, and molecular biology techniques to begin understanding how natural systems work, and to be able to manipulate things on a small enough scale to replicate them.
It's just now becoming possible. Engineers didn't look to biology before because they knew little about it and lacked the tools to manipulate molecular systems. Bioengineering itself is a very new field, and a good portion of the academic bioengineering departments that exist now are less than 5 years old! Bioengineering today is in a situation similar to that of physics in the 19th century.
I looked at your essay, and I don't see any evidence showing that WBE is infeasible, or that it will take longer to develop than de novo AI. I would argue there's no way to know how long either will take, because we don't even really know what the obstacles are. WBE could be as simple as building a sufficiently large network out of neuron models like the ones we already have, or we could be missing some important details that make it far more difficult than that. It's clear that you don't like WBE, and you have some interesting reasons why we might not want to use it.
That seems as though it is basically my argument. Biomimetic approaches are challenging and lag behind engineering-based ones by many decades.
I don't think WBE is infeasible - but I do think there's evidence that it will take longer. We already have pretty sophisticated engineered machine intelligence - while we can't yet create a WBE of a flatworm. Engineered machine intelligence is widely used in industry; WBE does nothing and doesn't work. Engineered machine intelligence is in the lead, and it is much better funded.
If one is simpler than the other, absolute timescales matter little - but IMO, we do have some idea about timescales.
Polls of "expert" opinions on when we will develop a technology are not reliable predictors of when we will actually develop it. The experts' opinions could all be skewed in the same direction by missing the same piece of vital information.
For example, they could all be unaware of a particular hurdle that will be difficult to solve, or of an upcoming discovery that makes it possible to bypass problems they assumed to be difficult.
This is an important generalization, but there are also many counterexamples in our use of biotech in agriculture, medicine, chemical production, etc. We can't design a custom cell, but Craig Venter can create a new 'minimal' genome from raw feedstuffs by copying from nature, and then add further enhancements to it. We produce alcohol using living organisms rather than a more efficient chemical process, and so forth. It looks like humans will be able to radically enhance human intelligence genetically through statistical study of human variation rather than mechanistic understanding of different pathways.
Creating an emulation involves a lot of further work, but one might put it in a reference class with members like the extensive work needed to get DNA synthesis, sequencing, and other biotechnologies to the point of producing Craig Venter's 'minimal genome' cells.
Sure - but again, it looks as though that will mostly be relatively insignificant and happen too late. We should still do it. It won't prevent a transition to engineered machine intelligence, though it might smooth the transition a little.
As I argue in my Against Whole Brain Emulation essay, the idea is more wishful thinking and marketing than anything else.
Whole brain emulation as a P.R. exercise is a pretty stomach-churning idea from my perspective - but that does seem to be what is happening.
Possibly biotechnology will result in nanotechnological computing substrates. However, that seems to be a bit different from "whole brain emulation".
People like Kurzweil (who doesn't think that WBE will come first) may talk about it in the context of "we will merge with the machines, they won't be an alien outgroup" as a P.R. exercise to make AI less scary. Some people also talk about whole brain emulation as an easy-to-explain loose upper bound on AI difficulty. But people like Robin Hanson who argue that WBE will come first do not give any indications of being engaged in PR, aside from their disagreement with you on the difficulty of theoretical advances in AI and so forth.