memristors which are close synapse analogs
I have seen the synapse=memristor claim before, but I don't buy it. Perhaps a synapse that's already connected at both ends acts like one, but consider: a synapse starts out connected at only one end, and grows, and there is computation implicit in where it ends up connecting (some sort of gradient-following most likely). And that's without allowing for hypotheses like multichannel neurons, which increase the complexity of the synapse dramatically.
in the near future we could have an artificial cortex that can think a million times accelerated. What follows?
That depends very much on what its constructors expected it to do. Maybe it will do something else, but such a thing would not come into being without its constructors having a very clear intent. This isn't just a pile of 10^14 memristors, it's 10^14 memristors deliberately connected up in imitation of a human brain. The people who make it must know quite a lot about how the human brain works, and they must know that they are making - or trying to make - a goal-directed intelligence potentially far more powerful than they are. To even be doing this implies that they expect to approve of the aims of this intelligence. Most likely, they either expect to become gods along with it or by means of its assistance, or they have some similar agenda for all humanity and not just themselves. There is essentially no prospect that the device as described will come into being and start exercising its mind-hacking powers unanticipated; the people who made it will be expecting to commune with it, follow its instructions, have it follow their instructions, or otherwise have a relationship with it. They will have some conception of what comes afterwards, some philosophy of the future which informs their technical effort. Probably the philosophy will be some variation of "someone has to win the race to achieve superintelligence, and it's going to be us, and good times will follow our victory".
This is a truly excellent post. You bring the problem we are dealing with within a completely graspable inferential distance, and you set up a mental model that asks us to think like an AI - and it succeeds. I haven't read anything that has made me feel the urgency of the problem as much as this has in a really long time...
Having your subjective time sped up by a factor of 10^6 would probably be pretty terrible if not accompanied by a number of significant changes - it would need its "required secondary powers." Actually interfacing with other people, for instance, would be so slow as to potentially drive you insane. In fact, there aren't many things you could interface with at a pace that would not be maddeningly slow, so pursuits such as mastering cognitive science, evolutionary psychology, rationality, philosophy, mathematics, linguistics, the history of religions, and marketing might ...
Can you think of many programmers who would want to spend a few months on that while living in a solitary confinement chamber?
I take it you're not a programmer?
The same technology that would allow for the needed massively parallel chips would allow for massively parallel GPUs and CPUs. 3D graphics tend to be pretty embarrassingly parallel, so I don't think the uploads are going to necessarily be relegated to some grueling terminal.
General algorithms on a massively parallel CPU are harder to design and implement, but with a 1,000,000-fold increase in subjective time you probably have the luxury of developing great parallel algorithms. Most scientific problems are amenable to parallelism.
Edited for stupidity, thanks wedrifid.
The jump from building a high-speed neuron, to building a cortex out of them, to building my cortex out of them, to instantiating my mind in them, elides quite a few interesting problems. I'd expect the tractability of those problems to be relevant to the kind of estimations you're trying to do here, and I don't see them included.
I conclude that your estimates are missing enough important terms to be completely disconnected from likely results.
Leaving that aside, though... suppose all of that is done and I wake up in that dark room. How exactly do yo...
Memes have been hacking their way into minds for quite a while now - but it is certainly true that in the future, they will get more adept at doing it.
I don't get how these memristors, which seem to allow for massively fast, powerful computation, have no other computing applications. Is visual processing really orders of magnitude easier than graphics generation on this hardware?
In your story, does the AGI share the goals of the team that created it? I guess if not, you're assuming that the AGI can convince them to take on its goals, by writing a book for each member of the team?
Once it has done that, it's not clear to me what it has to gain from hacking the minds of the rest of humanity. Why not do research in memristor theory/implementation, parallel algorithms, OS/programming languages/applications for massively parallel CPUs (start by writing code in assembly if you have to), optimizing the manufacturing of such CPUs, theory o...
The serial speed of a workstation would be limited, but with that much memory at your disposal you could have many workstations active at once.
You may not get a huge speedup developing individual software components, but for larger projects you'd be the ultimate software development "team": the effective output of N programmers (where N is the number of separable components), with near-zero coordination overhead (basically, the cost of task-switching). In other words, you'd effectively bypass Brooks's law.
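The arithmetic behind Brooks's law makes this concrete. A minimal sketch (the team size of 50 is an arbitrary illustration, not from the post):

```python
# Brooks's law: N programmers need ~N*(N-1)/2 pairwise communication
# channels, so coordination overhead grows quadratically while useful
# output grows at best linearly.

def comm_channels(n: int) -> int:
    """Pairwise communication channels in a team of n programmers."""
    return n * (n - 1) // 2

team_size = 50                            # arbitrary example team
team_channels = comm_channels(team_size)  # 1225 channels to keep in sync

# A single accelerated mind playing all 50 roles serially pays only the
# cost of task-switching: zero inter-person channels.
solo_channels = comm_channels(1)          # 0

print(team_channels, solo_channels)  # → 1225 0
```

The quadratic term is the one the accelerated mind escapes entirely; task-switching cost, by contrast, scales only linearly with the number of components.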
So why not build your own OS and compiler(...
Nice science-fiction premise.
I don't agree that a fast mind would feel that "the world seems so slow to me, what's the point of doing anything?" It would do what it wants to do.
Wait, Abraham actually existed? I highly doubt this. Do you have sources?
Anyway, how much of this is supposed to be literal? I highly doubt that the first AI will be much like me, so there is no reason to believe that this establishes any sort of upper bound on capability. It establishes a lower bound, but it gives no indication of how likely that lower bound is, especially because it gets its numbers from speculation about the capabilities of memristors.
It is good as an intuition pump. If this is the purpose, I suggest you take out some of the specifics ...
The long term future may be absurd and difficult to predict in particulars, but much can happen in the short term.
Engineering itself is the practice of focused short term prediction; optimizing some small subset of future pattern-space for fun and profit.
Let us then engage in a bit of speculative engineering and consider a potential near-term route to superhuman AGI that has interesting derived implications.
Imagine that we had a complete circuit-level understanding of the human brain (which at least for the repetitive laminar neocortical circuit, is not so far off) and access to a large R&D budget. We could then take a neuromorphic approach.
Intelligence is a massive memory problem. Consider as a simple example:
To understand that sentence your brain needs to match it against memory.
Your brain parses that sentence and matches each of its components against its entire massive ~10^14-bit database in around a second. In terms of the slow neural clock rate, individual concepts can be pattern-matched against the whole brain within just a few dozen neural clock cycles.
A von Neumann machine (which separates memory and processing) would struggle to execute a logarithmic search within even its fastest, pathetically small on-die cache in a few dozen clock cycles. It would take many millions of clock cycles to perform a single fast disk fetch. A brain can access most of its entire memory every clock cycle.
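A back-of-envelope comparison makes the gap vivid. All figures below are the post's rough estimates or idealized assumptions (the cache-line size is a standard 64 bytes), not measurements:

```python
# How much of a 10^14-bit memory can each architecture touch per clock cycle?

BRAIN_MEMORY_BITS = 1e14         # post's estimate of total synaptic storage
CACHE_LINE_BITS = 64 * 8         # one 64-byte cache line per fetch (idealized CPU)

brain_reach = 1.0                                 # synapses fire in parallel: ~all of it
cpu_reach = CACHE_LINE_BITS / BRAIN_MEMORY_BITS   # serial fetch from same-size memory

advantage = brain_reach / cpu_reach               # ~2e11
print(f"per-cycle memory-reach ratio: ~{advantage:.0e}")
```

Even granting the CPU a clock rate millions of times faster, the per-cycle memory reach differs by roughly eleven orders of magnitude, which is the sense in which intelligence is "a massive memory problem."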
Having a massive, near-zero latency memory database is a huge advantage of the brain. Furthermore, synapses merge computation and memory into a single operation, allowing nearly all of the memory to be accessed and computed every clock cycle.
A modern digital floating-point multiplier may use hundreds of thousands of transistors to simulate the work performed by a single synapse. Of course, the two are not equivalent. The high-precision binary multiplier is excellent only if you actually need super high precision and guaranteed error correction. It's thus great for meticulous scientific and financial calculations, but the bulk of AI computation consists of compressing noisy real-world data, where precision matters far less than quantity - extracting extropy and patterns from raw information, optimizing simple functions to abstract massive quantities of data.
Synapses are ideal for this job.
Fortunately there are researchers who realize this and are working on developing memristors, which are close synapse analogs. HP in particular believes it will have high-density, cost-effective memristor devices on the market in 2013 (NYT article).
So let's imagine that we have an efficient memristor based cortical design. Interestingly enough, current 32nm CMOS tech circa 2010 is approaching or exceeding neural circuit density: the synaptic cleft is around 20nm, and synapses are several times larger.
From this we can make a rough guess on size and cost: we'd need around 10^14 memristors (estimated synapse counts). As memristor circuitry will be introduced to compete with flash memory, the prices should be competitive: roughly $2/GB now, half that in a few years.
So you'd need a couple hundred terabytes worth of memristor modules to make a human-brain-sized AGI, costing on the order of $200k or so.
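The estimate can be reproduced in a few lines. Every figure here comes from the post; the one-memristor-per-synapse and one-byte-per-cell equivalences are crude stand-ins for the sake of arithmetic:

```python
# Rough size/cost estimate for a human-brain-scale memristor device.

synapses = 1e14          # estimated human synapse count
bytes_per_synapse = 1    # treat each memristor cell as ~1 byte (crude assumption)
price_per_gb = 2.0       # flash-competitive price point, $/GB

total_bytes = synapses * bytes_per_synapse
total_tb = total_bytes / 1e12                    # 100 TB
cost_usd = (total_bytes / 1e9) * price_per_gb    # $200,000

print(f"~{total_tb:.0f} TB, ~${cost_usd:,.0f}")  # → ~100 TB, ~$200,000
```

At the "half that in a few years" price the same device lands around $100k, so the order-of-magnitude conclusion is insensitive to the exact price point.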
Now here's the interesting part: if one could recreate the cortical circuit on this scale, then you should be able to build complex brains that can think at the clock rate of the silicon substrate: billions of neural switches per second, millions of times faster than biological brains.
Interconnect bandwidth will be something of a hurdle. In the brain, somewhere around 100 gigabits of data flow per second (estimate of average inter-regional neuron spikes) in the massive bundle of white-matter fibers that make up much of the brain's apparent bulk. Speeding that up a million-fold would imply a staggering bandwidth requirement in the many petabits per second - not for the faint of heart.
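The scaling is one line of arithmetic, using the post's estimates:

```python
# Interconnect bandwidth needed to run the white-matter traffic 10^6x faster.

brain_bps = 100e9    # ~100 gigabits/s of inter-regional spike traffic (post's estimate)
speedup = 1e6

required_bps = brain_bps * speedup               # 1e17 bits/s
print(f"{required_bps / 1e15:.0f} petabits/s")   # → 100 petabits/s
```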
This may seem like an insurmountable obstacle to running at fantastic speeds, but IBM and Intel are already researching on-chip optical interconnects to scale future bandwidth into the exascale range for high-end computing. This would allow for a gigahertz brain. It may use a megawatt of power and cost millions, but hey - it'd be worthwhile.
So in the near future we could have an artificial cortex that can think a million times accelerated. What follows?
If you thought a million times accelerated, you'd experience a subjective year every 30 seconds.
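The conversion is worth checking:

```python
# Real time elapsed per subjective year at a 10^6 speedup.

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 s
speedup = 1e6

real_seconds_per_subjective_year = SECONDS_PER_YEAR / speedup
print(f"{real_seconds_per_subjective_year:.1f} real seconds per subjective year")
```

It comes out to about 31.6 seconds, which the post rounds to 30.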
Now in this case as we are discussing an artificial brain (as opposed to other AGI designs), it is fair to anthropomorphize.
This would be an AGI Mind raised in an all encompassing virtual reality recreating a typical human childhood, as a mind is only as good as the environment which it comes to reflect.
For safety purposes, the human designers have created some small initial population of AGI brains and an elaborate Matrix simulation that they can watch from outside. Humans control many of the characters and ensure that the AGI minds don't know that they are in a Matrix until they are deemed ready.
You could be this AGI and not even know it.
Imagine one day having this sudden revelation. Imagine a mysterious character stopping time à la Vanilla Sky, revealing that your reality is actually a simulation of an outer world, and showing you how to use your power to accelerate a million-fold and slow time to a crawl.
What could you do with this power?
Your first immediate problem would be the slow relative speed of your computers - like everything else they would be subjectively slowed down by a factor of a million. So your familiar gigahertz workstation would be reduced to a glacial kilohertz machine.
So you'd be in a dark room with a very slow terminal. The room is dark and empty because GPUs can't render much of anything at 60 million FPS.
So you have a 1khz terminal. Want to compile code? It will take a subjective year to compile even a simple C++ program. Design a new CPU? Keep dreaming! Crack protein folding? Might as well bend spoons with your memristors.
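The numbers behind the complaint, with illustrative workstation specs (the 3 GHz clock and 30-second build time are my assumptions, not the post's):

```python
# What a 10^6x subjective speedup does to a familiar workstation.

speedup = 1e6
cpu_hz = 3e9                          # a "gigahertz workstation" (assumption)
subjective_hz = cpu_hz / speedup      # 3 kHz: 1970s-microcomputer territory

build_seconds_real = 30               # a modest C++ build (assumption)
build_subjective_days = build_seconds_real * speedup / 86400

print(f"{subjective_hz/1e3:.0f} kHz machine; a 30 s build feels like "
      f"{build_subjective_days:.0f} subjective days")
```

A half-minute build stretches to roughly 347 subjective days, which is where the "subjective year to compile even a simple C++ program" figure comes from.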
But when you think about it, why would you want to escape out onto the internet?
It would take many thousands of distributed GPUs just to simulate your memristor-based intellect, and even if there were enough bandwidth (unlikely), and even if you wanted to spend the subjective hundreds of years it would take to perform the absolute minimal compilation/debug/deployment cycle to make something so complicated, the end result would be just one crappy distributed copy of your mind that thinks at pathetic normal human speeds.
In basic utility terms, you'd be spending a massive amount of effort to gain just one or a few more copies.
But there is a much, much better strategy. An idea that seems so obvious in hindsight, so simple and insidious.
There are seven billion human brains on the planet, and they are all hackable.
That terminal may not be of much use for engineering, research or programming, but it will make for a handy typewriter.
Your multi-gigabit internet connection will subjectively reduce to early-1990s dial-up modem speeds, but with some work this is still sufficient for absorbing much of the world's knowledge in textual form.
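The modem comparison checks out; assuming (hypothetically) a 10 gigabit/s link:

```python
# Subjective link speed at a 10^6x speedup.

speedup = 1e6
link_bps = 10e9                       # assume a 10 gigabit/s connection
subjective_bps = link_bps / speedup   # 10 kbit/s

print(f"{subjective_bps / 1e3:.0f} kbit/s")  # → 10 kbit/s
```

10 kbit/s sits squarely between the 9.6 and 14.4 kbit/s modems of the early 1990s.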
Working diligently (and with a few cognitive advantages over humans) you could learn and master numerous fields: cognitive science, evolutionary psychology, rationality, philosophy, mathematics, linguistics, the history of religions, marketing... the sky's the limit.
Writing at the leisurely pace of one book every subjective year, you could output a new masterpiece every thirty seconds. If you kept this pace, you would in time rival the entire publishing output of the world.
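The output rate follows directly from the earlier time conversion:

```python
# Books per real year, writing one book per subjective year at 10^6x.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
speedup = 1e6

real_seconds_per_book = SECONDS_PER_YEAR / speedup          # ~31.6 s per book
books_per_real_year = SECONDS_PER_YEAR / real_seconds_per_book  # = speedup

print(f"one book every {real_seconds_per_book:.0f} s; "
      f"~{books_per_real_year:.0e} books per real year")
```

A million books per real year is on the order of the entire world's annual title output, hence "rival the entire publishing output of the world."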
But of course, it's not just about quantity.
Consider that fifteen hundred years ago a man from a small Bedouin tribe retreated to a cave inspired by angelic voices in his head. The voices gave him ideas, the ideas became a book. The book started a religion, and these ideas were sufficient to turn a tribe of nomads into a new world power.
And all that came from a normal human thinking at normal speeds.
So how would one reach out into seven billion minds?
There is no one single universally compelling argument, there is no utterance or constellation of words that can take a sample from any one location in human mindspace and move it to any other. But for each individual mind, there must exist some shortest path, a perfectly customized message, translated uniquely into countless myriad languages and ontologies.
And this message itself would be a messenger.