Comment author:Morendil
02 January 2010 06:00:06PM
3 points
I expect that Brain-Computer Interfaces will make their way into consumer devices by the next decade, with disruptive consequences, once people become able to offload some auxiliary cognitive functions into these devices.
Call it 75% - I would be more than mildly surprised if it hadn't happened by 2020.
For what I have in mind, what counts as BCI is the ability to interact with a smartphone-like device in an inconspicuous manner, without using your hands.
My reasoning is similar to Michael Vassar's AR prediction, and based on the iPhone's success. That success doesn't seem owed to any particular technological innovation; rather, Apple made usable things that were previously feasible only in the technical sense. A mobile device for searching the Web, finding your GPS position and compass orientation, and communicating with others was technically feasible years ago. Making these features only slightly less awkward than before has revealed hidden demand for unsuspected uses, often combining old features in unexpected ways.
However, in many ways these interfaces are still primitive and awkward. "Sixth Sense" type interfaces are interesting, but still strike me as overly intrusive on others' personal space.
It would make sense to me to be able, say, to subvocalize a command such as "Show me the way to metro station X", then have my smartphone gently "tug" me in the right direction as I turn left and right, using a combination of compass and vibrations. This is only one scenario that strikes me as already easy to implement, requiring only some slightly greater integration of functionality.
I expect such things to be disruptive, because the more transparent the integration between our native cognitive abilities and those provided by versatile external devices connected to the global network, the more we will effectively turn into "augmented humans".
When we merely have to think of a computation to have it performed externally and receive the result (visually or otherwise), we will be effectively smarter than we are now with calculators (though some would say we can already achieve essentially the same results).
I am not predicting with 75% probability that such augmentation will be pervasive by 2020, only that by then some newfangled gadget will have started to reveal hidden consumer demand for this kind of augmentation.
ETA: I don't mind this comment being downvoted, even as shorthand for "I disagree", but I'd be genuinely curious to know what flaws you're seeing in my thinking, or what facts you're aware of that make my degree of confidence seem way off.
Comment author:spasinsky
03 January 2010 12:17:26AM
-1 points
Given the feasibility that currently exists for the gadgets you envision... and Apple's uncanny ability to bring those ideas to market... I say 2015 is a 75% target for the iThought side-processor device. :)