How close are we to a singularity? Well, computers started being able to think faster than us in the 1990s (neurons have a processing speed of about 200 Hz), and are now many orders of magnitude faster.
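As a rough sketch of that gap (the ~200 Hz figure is the neuron speed cited above; the ~3 GHz clock rate is my assumption for a typical modern CPU core):

```python
import math

NEURON_FIRING_RATE_HZ = 200   # neuron "processing speed" cited above
CPU_CLOCK_HZ = 3e9            # assumption: a typical ~3 GHz modern CPU core

ratio = CPU_CLOCK_HZ / NEURON_FIRING_RATE_HZ
print(f"speed ratio: {ratio:.1e} (~{math.log10(ratio):.0f} orders of magnitude)")
# speed ratio: 1.5e+07 (~7 orders of magnitude)
```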

"But wait!" you reasonably object, "that's only one aspect of computer power! What about hard drive storage, and how does working memory compare to RAM?" 

I'm not sure how much memory the human brain can hold. Scientific American says it can hold 2.5 petabytes, and this figure seems to be the most heavily cited among pop-science articles, but they don't really explain their numbers. AI Impacts, which seems much more mathematically rigorous in its articles and usually shows its work, claims that "Most computational neuroscientists tend to estimate human storage capacity somewhere between 10 terabytes and 100 terabytes" (roughly 1/100th to 1/10th of a petabyte). Humans are very unlikely to have the most efficient possible algorithm, of course, and it's possible that 1 terabyte is more than enough for a perfectly optimized general intelligence. But still, let's assume for the sake of argument that the highest estimate of 2.5 petabytes is correct and is the absolute minimum needed for a human-level or higher intelligence. How does that compare to the current state of computers?

The internet has been estimated to contain hundreds of exabytes of data... in 2007! A 2009 article said that Google alone had about an exabyte of data. In 2013 Randall Munroe (of xkcd) estimated that Google had 10 exabytes. Estimates for the whole internet in 2020 range into the double digits of zettabytes. Each exabyte is a thousand petabytes, and each zettabyte is a thousand exabytes. Almost all of this storage is already in use for other purposes, but if any big company knew how to program a superintelligence, it's easy to see that they'd be able to build a "mere" 2.5-petabyte server building. Some botnets also probably contain this amount of hardware already. Memory would need to be distributed in some way, but there are already methods for that, used for things like archives and YouTube. Computers clearly have enough physical memory for a superhuman intelligence.
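A quick back-of-envelope comparison (taking 40 ZB as an illustrative point within the "double digits of zettabytes" range; the other figures are the ones cited above):

```python
PB = 10**15          # petabyte, in bytes
EB = 1000 * PB       # exabyte  = 1,000 PB
ZB = 1000 * EB       # zettabyte = 1,000 EB

brain_storage = 2.5 * PB    # high-end "2.5 petabyte" brain estimate from above
google_2013   = 10 * EB     # Munroe's 2013 estimate for Google
internet_2020 = 40 * ZB     # assumed point in the "double digits of zettabytes" range

print(f"brain-equivalents in Google (2013): {google_2013 / brain_storage:,.0f}")
print(f"brain-equivalents on the internet (2020): {internet_2020 / brain_storage:,.0f}")
# brain-equivalents in Google (2013): 4,000
# brain-equivalents on the internet (2020): 16,000,000
```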

What about RAM? This is so different from human working memory that a direct comparison can't really be made. Humans can analyze a very complicated ongoing event based on vision and sound in real time, but (usually) can't multiply two floating point numbers in their mind. One thing that can be used for comparison, though, is self-driving cars.

There are already hundreds of self-driving cars and taxis on the road with a good enough safety record to keep operating in Arizona. There's an occasional collision, such as the high-profile fatal collision with a pedestrian that was publicized to the point where Uber stopped testing autonomous cars for a while, and which seems to be mentioned every time there's an article criticizing the safety of self-driving cars. That was tragic, of course, but we need to be fair in our comparison. If a human driver hit someone who was crossing the street outside of a crosswalk at night, it wouldn't make the news and the driver might not even be considered at fault. There have been other incidents where autonomous cars were caught breaking the law, but compared to the rate at which human drivers break the law, it's not so bad. In fact, some reports say they are already safer than humans. Based on this, I'd say that computer RAM, given a good algorithm, can function at least as well as human working memory at the spatial tasks humans are good at, and it is far better than working memory at anything involving computation.

Here's a summary of the situation right now. Clock speed is orders of magnitude better than humans'. Storage is at least as large as, and probably a couple of orders of magnitude larger than, humans' (and would gain another 5-6 orders of magnitude if an AI took over the internet). RAM is about as good at some things and far better at others than humans' equivalent. If connected to the internet, an AI would have access to far more information than a human does, especially once it figures out how to hack cameras. Computer power is still improving each year. The only reason the singularity hasn't happened yet is that humans don't understand intelligence well enough to create even a human-level one. In a physics metaphor, a 10,000 kg pile of enriched uranium can fission at any time, but it won't until the first free neutron appears and starts the reaction.


Recently there was a post called Fun with +12 OOMs of Compute, where people imagine what could happen if computation power, memory, and so on were suddenly multiplied by 10^12. There was a poll near the end asking people to estimate the chance of the singularity happening by the end of the year in that scenario, based on their own mental model of reality without considering things like peer beliefs. Most people put it around 95%, believing that computation power is the main barrier preventing hyper-exponential growth. The problem with their models, as I see it, is that we already have +several OOMs of computation more than needed. If computer power were the only thing standing between us and the singularity, then we would have had enough computer power... a decade ago.

I know that realizing the deadline for AI alignment is "now" can be stressful, but there is a bit of good news here. Since "just having lots of compute" wasn't enough, people are very unlikely to succeed unless they really know what they're doing. We are probably safe from people trying brute-force methods or mostly random evolutionary search.

Comments:

[anonymous]:

Here's why you are wrong:

a. Just recognizing objects with something like human levels of accuracy takes hundreds of trillions of operations per second! We call it "TOPS", and just keeping up with a few full-resolution cameras, with networks that are not as robust as humans, costs about 300-400 TOPS, or 'all she's got' from a current-gen accelerator board. This is like an inferior version of a human visual cortex, with our main issue being lack of generality and all sorts of terrible edge cases [that can misidentify objects, leading to a crash].

Hundreds of teraflops in a single chip weren't available until a few years ago, when several companies [Tesla, Nvidia, Waymo] developed NN accelerators.

b. You don't understand the computer-architecture consequences when we say a brain has 2.5 petabytes. This is not the same as having 2.5 petabytes of data where only a tiny fraction is being accessed at any time. At any given moment, any of the neurons in the brain might get hit with a spike train and have to give outputs, taking into account the connection weights and connectivity graph. The graph (what connects to what) and the strength and type of each connection are all information, and this is where the 2.5 petabyte estimate is coming from: the number of connections (86 billion times 1,000) and how many bits of information you estimate each connection holds.

(2.5 petabytes) / (1,000 * 86 billion) = 29 bytes; apparently that is all Scientific American thinks a synapse holds. Say 8 bits of resolution for the weight, some "in progress" state variables (variables that are changed over time and used to update the weights for learning), and enough bytes to uniquely specify that synapse's relative position in a graph with 86 trillion entries.
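A quick check of that per-synapse figure, with an illustrative (assumed) split of the 29 bytes:

```python
import math

SYNAPSES = 86e9 * 1000    # ~86 billion neurons x ~1,000 synapses each = 86 trillion
BRAIN_BYTES = 2.5e15      # the 2.5-petabyte estimate

bytes_per_synapse = BRAIN_BYTES / SYNAPSES
print(f"bytes per synapse: {bytes_per_synapse:.1f}")   # ~29.1

# Illustrative breakdown (assumed split, not Scientific American's):
weight_bytes = 1                          # 8 bits of weight resolution
index_bytes = math.log2(SYNAPSES) / 8     # ~46 bits to uniquely address one of 86 trillion synapses
state_bytes = bytes_per_synapse - weight_bytes - index_bytes
print(f"index: ~{index_bytes:.1f} bytes, leaving ~{state_bytes:.1f} bytes for learning-state variables")
```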

Anyway, your computer must be able to compute on all 2.5 petabytes at all times. Every timestep, even neurons that are not going to fire are checking to see if they should, and they must access the synaptic weights to do this and update in-progress state variables. The brain isn't synchronous, but 1,000 timesteps per synapse per second of realtime is a rough estimate of what you need.

This is architecturally sort of like having all 2.5 petabytes in RAM, at a minimum, with a very, very fast bus connecting you to the chip; but if you really get serious, you need massive caches, or you need to just build the circuitry that evaluates the neural network directly on top of the storage medium that holds the weights (several startups are doing this).

Let's make a more concrete estimate. We have 2.5 petabytes of values we need to access 1,000 times a second.

Therefore, our bandwidth requirement is 2.5 petabytes * 1,000 = 2.5 exabytes/second. Each Nvidia A100 has 2 terabytes/second of bandwidth. Therefore, for 1 brain-equivalent, we need 1,250,000 A100 AI accelerators, all released May 14, 2020. The world's largest supercomputer uses 158,976 nodes, so this would be about 10x larger.

Also, the amount of traffic between A100 nodes probably exceeds available interconnects, but I'm not sure about that assertion.

Each A100 is $199,000 MSRP. But there is probably a chip shortage, so you would need to wait a few years for your order to be filled.

So you need $248.75 billion in A100 accelerators to get to "1 brain's worth" of capacity. And current AI algorithms are much less efficient than humans', so...
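Putting the arithmetic above in one place (all figures are the ones already quoted: 2 TB/s per A100, 158,976 supercomputer nodes, $199,000 MSRP):

```python
BRAIN_BYTES = 2.5e15           # 2.5 petabytes of synaptic state
TIMESTEPS_PER_SEC = 1000       # every synapse touched ~1,000 times per second
A100_BANDWIDTH = 2e12          # ~2 terabytes/second per Nvidia A100
SUPERCOMPUTER_NODES = 158_976  # node count of the world's largest supercomputer
A100_MSRP = 199_000            # the MSRP figure used above

bandwidth_needed = BRAIN_BYTES * TIMESTEPS_PER_SEC    # bytes/second
a100s_needed = bandwidth_needed / A100_BANDWIDTH

print(f"bandwidth needed: {bandwidth_needed:.2e} bytes/s")             # 2.50e+18 (2.5 EB/s)
print(f"A100s needed: {a100s_needed:,.0f}")                            # 1,250,000
print(f"vs. largest supercomputer: {a100s_needed / SUPERCOMPUTER_NODES:.1f}x its node count")
print(f"total at MSRP: ${a100s_needed * A100_MSRP / 1e9:.2f} billion") # ~$248.75 billion
```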

Please note, I fully believe TAI is possible, but with numbers like these it starts to become apparent why we don't have it yet. This also neatly explains the "AI winter": it happened because, with computers of that era, meaningful progress turned out not to be possible.

Also note that the ~$200k price tag is Nvidia's MSRP, which factors in their need to pay back all the money they spent on R&D. They 'only' spent a few billion, and maybe you could cut a deal. Each A100 likely only costs $1,000 or less in real component costs.

[a second commenter]:

As I understand it, his point is not that we have enough CPU and RAM to simulate a human brain. We do not. His point seems to be that the observable memory capacity of the human brain is on the order of terabytes to petabytes. He doesn't go too deep into the compute part, but the analogy with self-driving cars seems suitable. After all, quite a big part of the brain is devoted to image processing and object detection. I think it is not inconceivable that there are better algorithms for the intelligence part than what the brain has to make do with.

[anonymous]:

He's specifically talking about building a computer no more efficient, algorithm-wise, than a brain, and saying we have enough compute to do this.

He is incorrect because he is not factoring in the compute architecture. The reason you should consider my judgement is that I am a working computer engineer and I have personally designed systems (for smaller-scale tasks; I am not the architect for the self-driving team I work for now).

Of course more efficient algorithms exist, but by definition they take time and effort to find. And we do not know how much more efficient a system we can build and still have sentience.