FAWS comments on Fast Minds and Slow Computers - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I work mainly in graphics/GPU programming. I had a longer explanation of why rendering at 60 million FPS would be difficult, but I cut it for brevity.
Let's say you had ten trillion billion GPU chips. So what?
Both the neuromorphic brain and the GPU run on the same substrate and operate at the same clock speeds, roughly gigahertz. But the brain's native circuits step at only around 100 to 1,000 Hz, so a neuromorphic implementation doing one circuit step per clock cycle runs about a million times faster than biology, and experiences roughly one subjective second every 100 to 1,000 clock cycles.
That means the GPU would need to compute every pixel value in just a few dozen clock cycles to produce 60 frames per subjective second: with about 1,000 cycles per subjective second, that is roughly 17 cycles per frame.
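The arithmetic above can be sketched in a few lines. The clock rate and the brain's native circuit-step rate are rough assumptions from the comment, not measurements:

```python
# Back-of-envelope sketch of the timing argument. All constants are
# assumed round numbers, not measured values.

GPU_CLOCK_HZ = 1e9      # assumed ~gigahertz hardware clock
BRAIN_STEP_HZ = 1e3     # assumed native biological circuit-step rate (~100-1,000 Hz)
SUBJECTIVE_FPS = 60     # frames per *subjective* second

# A neuromorphic brain doing one circuit step per clock cycle runs this
# many times faster than biology:
speedup = GPU_CLOCK_HZ / BRAIN_STEP_HZ

# One subjective second therefore passes in this many real clock cycles:
cycles_per_subjective_second = GPU_CLOCK_HZ / speedup

# To show 60 frames per subjective second, the renderer's budget is only:
cycles_per_frame = cycles_per_subjective_second / SUBJECTIVE_FPS

print(f"speedup over biology: {speedup:,.0f}x")
print(f"cycles per subjective second: {cycles_per_subjective_second:,.0f}")
print(f"cycle budget per frame: {cycles_per_frame:.1f}")
```

With these assumptions the frame budget comes out to about 17 clock cycles, which is the "few dozen" figure in the comment.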
So the problem is not one that can be solved by even infinite parallelization of current GPUs. You can only parallelize to the point where you have one little arithmetic unit working on each pixel; beyond that, everything breaks down.
There is no rendering pipeline I'm aware of that could possibly execute in just a few dozen or hundred clock cycles. I think the current minimum is on the order of a million clock cycles or so (about 1,000 FPS at a gigahertz clock).
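Putting the comment's two numbers side by side shows the size of the gap. The million-cycle minimum latency is the comment's rough estimate, not a benchmark:

```python
# How far today's assumed best-case pipeline latency falls short of what a
# million-fold sped-up mind would require. Constants are rough assumptions.

GPU_CLOCK_HZ = 1e9           # assumed ~gigahertz clock
PIPELINE_MIN_CYCLES = 1e6    # assumed minimum frame latency of current pipelines
SPEEDUP = 1_000_000          # assumed subjective speedup over biology
SUBJECTIVE_FPS = 60

# Best wall-clock frame rate a current pipeline could manage:
wall_clock_fps = GPU_CLOCK_HZ / PIPELINE_MIN_CYCLES

# Wall-clock frame rate needed to deliver 60 subjective FPS:
required_fps = SUBJECTIVE_FPS * SPEEDUP

shortfall = required_fps / wall_clock_fps

print(f"best case today: {wall_clock_fps:,.0f} FPS")
print(f"required: {required_fps:,} FPS")
print(f"shortfall: {shortfall:,.0f}x")
```

Under these assumptions the renderer is short by a factor of about 60,000, which is why more parallelism alone cannot close the gap; only a drastically shorter critical path per frame could.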
So in the farther future, once GPUs are parallel to the point of one thread per pixel, you may see them optimizing for the minimal clock-cycle path, but that is still a ways away, perhaps a decade.
The brain is at the end of a long natural path that our machines will follow because of the same physical constraints. You go massively parallel to get the most energy-efficient computational use of your memory, and then you must optimize for extremely short computational circuits and massively high fan-in / fan-out.
You could think about programs, but you could not actually load, compile, debug, or run them any faster than a human, because those tools still run in wall-clock time.
Thus I find it highly likely that you would focus your energies on low-computation endeavors that could run at your native speed.
That's an interesting insight. There should be another path, though: visual imagination, which already runs at (roughly?) the same speed as visual perception. We can already detect, to some extent, the images someone is imagining, and with uploads, putting images directly into the visual cortex should be comparatively straightforward. That would let us skip all the rendering of geometric forms into pixels and decoding of pixels back into geometric forms. If you want the upload to see a black dog, you just stimulate "black" and "dog" rather than painting anything.
Yes! I suspect that eventually this could be an interesting application of cheap memristor/neuromorphic designs, if they become economically viable.
It should be possible to exploit the visual imagination/dreaming circuitry the brain has and make it more consciously controllable for an AGI, perhaps even to the point of being able to enter lucid dream worlds while fully conscious.