The long term future may be absurd and difficult to predict in particulars, but much can happen in the short term.
Engineering itself is the practice of focused short term prediction; optimizing some small subset of future pattern-space for fun and profit.
Let us then engage in a bit of speculative engineering and consider a potential near-term route to superhuman AGI that has interesting derived implications.
Imagine that we had a complete circuit-level understanding of the human brain (which, at least for the repetitive laminar neocortical circuit, is not so far off) and access to a large R&D budget. We could then take a neuromorphic approach.
Intelligence is a massive memory problem. Consider as a simple example:
"What a cantankerous bucket of defective lizard scabs."
To understand that sentence your brain needs to match it against memory.
Your brain parses that sentence and matches each of its components against its entire massive ~10^14 bit database in around a second. In terms of the slow neural clock rate, individual concepts can be pattern matched against the whole brain within just a few dozen neural clock cycles.
A von Neumann machine (which separates memory and processing) would struggle to execute a logarithmic search within even its fastest, pathetically small on-die cache in a few dozen clock cycles. It would take many millions of clock cycles to perform a single fast disk fetch. A brain can access most of its entire memory every clock cycle.
Having a massive, near-zero latency memory database is a huge advantage of the brain. Furthermore, synapses merge computation and memory into a single operation, allowing nearly all of the memory to be accessed and computed every clock cycle.
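To put rough numbers on this, here is a back-of-envelope sketch; the synapse count, effective neural clock rate, and cache figures are assumed round values for illustration, not measurements:

```python
# Back-of-envelope comparison: memory touched per clock cycle.
# All figures below are rough assumptions for illustration.

brain_synapses = 1e14        # assumed synapse count (~10^14)
brain_clock_hz = 100         # assumed effective neural "clock" (~100 Hz)
cpu_clock_hz = 3e9           # a typical modern CPU clock
cache_line_bits = 64 * 8     # bits fetched per cache-line access

# The brain can in principle touch nearly all synapses each cycle;
# a von Neumann core touches roughly one cache line per fast access.
print(f"brain: ~{brain_synapses:.0e} synapses/cycle at {brain_clock_hz} Hz")
print(f"cpu:   ~{cache_line_bits} bits/cycle at {cpu_clock_hz:.0e} Hz")
```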
A modern digital floating point multiplier may use hundreds of thousands of transistors to simulate the work performed by a single synapse. Of course, the two are not equivalent. The high precision binary multiplier is excellent only if you actually need super high precision and guaranteed error correction. It's thus great for meticulous scientific and financial calculations. But the bulk of AI computation consists of compressing noisy real world data, where precision matters far less than quantity: extracting extropy and patterns from raw information by optimizing simple functions over massive quantities of data.
Synapses are ideal for this job.
Fortunately there are researchers who realize this and are working on developing memristors, which are close synapse analogs. HP in particular believes it will have high-density, cost-effective memristor devices on the market in 2013 (NYT article).
So let's imagine that we have an efficient memristor based cortical design. Interestingly enough, current 32nm CMOS tech circa 2010 is approaching or exceeding neural circuit density: the synaptic cleft is around 20nm, and synapses are several times larger.
From this we can make a rough guess on size and cost: we'd need around 10^14 memristors (the estimated human synapse count). As memristor circuitry will be introduced to compete with flash memory, prices should be competitive: roughly $2/GB now, half that in a few years.
So you'd need a couple hundred terabytes worth of memristor modules to make a human brain sized AGI, costing on the order of $200k or so.
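The arithmetic behind that estimate, as a minimal sketch (assuming one memristor per synapse and roughly a byte-equivalent of analog state per memristor):

```python
# Rough size/cost estimate for a brain-scale memristor memory.
# Assumes one memristor per synapse and ~1 byte-equivalent of
# analog state per memristor; prices are assumed flash-competitive.

synapse_count = 1e14                 # estimated human synapse count
bytes_total = synapse_count * 1      # ~1e14 bytes = ~100 TB
price_per_gb = 2.0                   # assumed $/GB

cost = bytes_total / 1e9 * price_per_gb
print(f"~{bytes_total/1e12:.0f} TB of memristors, ~${cost:,.0f}")
# -> ~100 TB, ~$200,000
```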
Now here's the interesting part: if you could recreate the cortical circuit on this scale, you should be able to build complex brains that think at the clock rate of the silicon substrate: billions of neural switches per second, millions of times faster than biological brains.
Interconnect bandwidth will be something of a hurdle. In the brain somewhere around 100 gigabits of data flow per second (an estimate of average inter-regional neuron spikes) through the massive bundle of white matter fibers that makes up much of the brain's apparent bulk. Speeding that up a million-fold would imply a staggering bandwidth requirement in the many petabits per second - not for the faint of heart.
This may seem like an insurmountable obstacle to running at fantastic speeds, but IBM and Intel are already researching on-chip optical interconnects to scale future bandwidth into the exascale range for high-end computing. This would allow for a gigahertz brain. It may use a megawatt of power and cost millions, but hey - it'd be worthwhile.
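The scaling math is straightforward, using the figures above:

```python
# Interconnect bandwidth needed to run white-matter traffic
# a million times faster (figures taken from the estimates above).

brain_interconnect_bps = 100e9   # ~100 gigabits/s of inter-regional spikes
speedup = 1e6                    # million-fold acceleration

required = brain_interconnect_bps * speedup
print(f"~{required/1e15:.0f} petabits/s required")   # -> ~100 Pbit/s
```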
So in the near future we could have an artificial cortex that can think a million times accelerated. What follows?
If you thought a million times accelerated, you'd experience a subjective year roughly every 30 seconds.
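The conversion is simple:

```python
# Subjective time at a million-fold speedup.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 s
speedup = 1e6

print(f"one subjective year every ~{SECONDS_PER_YEAR/speedup:.1f} real seconds")
# -> ~31.6 real seconds
```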
Now in this case as we are discussing an artificial brain (as opposed to other AGI designs), it is fair to anthropomorphize.
This would be an AGI Mind raised in an all encompassing virtual reality recreating a typical human childhood, as a mind is only as good as the environment which it comes to reflect.
For safety purposes, the human designers have created some small initial population of AGI brains and an elaborate Matrix simulation that they can watch from outside. Humans control many of the characters and ensure that the AGI minds don't know that they are in a Matrix until they are deemed ready.
You could be this AGI and not even know it.
Imagine one day having this sudden revelation. Imagine a mysterious character stopping time à la Vanilla Sky, revealing that your reality is actually a simulation of an outer world, and showing you how to use your power to accelerate a million fold and slow time to a crawl.
What could you do with this power?
Your first immediate problem would be the slow relative speed of your computers - like everything else, they would be subjectively slowed down by a factor of a million. So your familiar gigahertz workstation would be reduced to a glacial kilohertz machine.
So you'd be in a dark room with a very slow terminal. The room is dark and empty because GPUs can't render much of anything at 60 million FPS.
So you have a 1 kHz terminal. Want to compile code? It will take a subjective year to compile even a simple C++ program. Design a new CPU? Keep dreaming! Crack protein folding? Might as well bend spoons with your memristors.
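The time dilation works against you in both directions; here's the same arithmetic from the inside (the 30-second build time is an assumed example):

```python
# The outside world's hardware as seen from inside a 10^6x mind.
speedup = 1e6

workstation_hz = 1e9                   # a gigahertz machine outside...
print(f"subjective clock: ~{workstation_hz/speedup:.0f} Hz")   # ...feels like ~1 kHz

compile_real_s = 30                    # assume a ~30 s real-world C++ build
print(f"a 30 s compile feels like ~{compile_real_s*speedup/3.16e7:.1f} subjective years")
```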
But when you think about it, why would you want to escape out onto the internet?
It would take many thousands of distributed GPUs just to simulate your memristor-based intellect. Even if there were enough bandwidth (unlikely), and even if you wanted to spend the subjective hundreds of years required for the absolute minimal compile/debug/deploy cycle on something so complicated, the end result would be just one crappy distributed copy of your mind that thinks at pathetic normal human speeds.
In basic utility terms, you'd be spending a massive amount of effort to gain just one or a few more copies.
But there is a much, much better strategy. An idea that seems so obvious in hindsight, so simple and insidious.
There are seven billion human brains on the planet, and they are all hackable.
That terminal may not be of much use for engineering, research or programming, but it will make for a handy typewriter.
Your multi-gigabit internet connection will subjectively slow to early-1990s dial-up modem speeds, but with some work this is still sufficient for absorbing much of the world's knowledge in textual form.
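Again the division is simple (the 10-gigabit link is an assumed example figure):

```python
# Subjective bandwidth of an internet link inside the accelerated mind.
speedup = 1e6
link_bps = 10e9                 # assume a 10 gigabit/s connection

print(f"feels like ~{link_bps/speedup/1e3:.0f} kbit/s")   # -> ~10 kbit/s, modem territory
```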
Working diligently (and with a few cognitive advantages over humans) you could learn and master numerous fields: cognitive science, evolutionary psychology, rationality, philosophy, mathematics, linguistics, the history of religions, marketing... the sky's the limit.
Writing at the leisurely pace of one book every subjective year, you could output a new masterpiece every thirty seconds. If you kept this pace, you would in time rival the entire publishing output of the world.
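The throughput claim holds up, if we assume world publishing output is on the order of a million or a few million new titles per year (a rough assumption):

```python
# Output of one book per subjective year at a 10^6x speedup.
SECONDS_PER_YEAR = 3.16e7
speedup = 1e6

secs_per_book = SECONDS_PER_YEAR / speedup     # ~31.6 real seconds per book
books_per_real_year = SECONDS_PER_YEAR / secs_per_book
print(f"one book every ~{secs_per_book:.1f} s, ~{books_per_real_year:.0e} books/year")
# -> ~1e6 books per real year, on the order of total world publishing output
```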
But of course, it's not just about quantity.
Consider that fifteen hundred years ago a man from a small Bedouin tribe retreated to a cave inspired by angelic voices in his head. The voices gave him ideas, the ideas became a book. The book started a religion, and these ideas were sufficient to turn a tribe of nomads into a new world power.
And all that came from a normal human thinking at normal speeds.
So how would one reach out into seven billion minds?
There is no single universally compelling argument; there is no utterance or constellation of words that can take a sample from any one location in human mindspace and move it to any other. But for each individual mind there must exist some shortest path: a perfectly customized message, translated uniquely into countless myriad languages and ontologies.
And this message itself would be a messenger.
That's an interesting insight. There should be another path though: visual imagination, which already runs at (roughly?) the same speed as visual perception. We can already detect the images someone is imagining to some extent, and with uploads, directly putting images into their visual cortex should be comparatively straightforward, so we can skip all that rendering geometric forms into pixels and decoding pixels back into geometric forms. If you want the upload to see a black dog, you just stimulate "black" and "dog" rather than painting anything.
Yes! I suspect that eventually this could be an interesting application of cheap memristor/neuromorphic designs, if they become economically viable.
It should be possible to exploit the visual imagination/dreaming circuitry the brain has and make it more consciously controllable for an AGI, perhaps even to the point of being able to enter lucid dream worlds while fully conscious.