Paul Christiano recently suggested that we can use neuroimaging to form a complete mathematical characterization of a human brain, which a sufficiently powerful superintelligence could then reconstruct into a working mind. He argues that the neuroimaging part is already possible today, or close to being possible.
In fact, this project may be possible using existing resources. The complexity of the human brain is not as unapproachable as it may at first appear: though it may contain 10^14 synapses, each described by many parameters, it can be specified much more compactly. A newborn's brain can be specified by about 10^9 bits of genetic information, together with a recipe for a physical simulation of development. The human brain appears to form new long-term memories at a rate of 1-2 bits per second, suggesting that it may be possible to specify an adult brain using 10^9 additional bits of experiential information. This suggests that it may require only about 10^10 bits of information to specify a human brain, which is at the limits of what can be reasonably collected by existing technology for functional neuroimaging.
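For readers who want to check these figures, here is a quick back-of-envelope sketch of the arithmetic. The 16 waking hours per day and the 30-year span are my own illustrative assumptions; only the 1-2 bits per second rate and the 10^9 genetic bits come from the estimate above.

```python
# Back-of-envelope check of the bit-count estimate above.
# The waking hours/day and year span are illustrative assumptions;
# only the memory rate and genetic bit count come from the estimate.

GENETIC_BITS = 1e9            # specification of a newborn brain
MEMORY_RATE_BPS = 2.0         # long-term memory formation, bits/sec (upper end)
WAKING_HOURS_PER_DAY = 16     # assumed
YEARS = 30                    # assumed span of adult experience

waking_seconds = YEARS * 365.25 * WAKING_HOURS_PER_DAY * 3600
experiential_bits = MEMORY_RATE_BPS * waking_seconds

total_bits = GENETIC_BITS + experiential_bits
print(f"experiential bits: {experiential_bits:.2e}")   # ~1.3e9
print(f"total bits:        {total_bits:.2e}")          # order 10^9-10^10
print(f"as storage:        {total_bits / 8 / 1e9:.2f} GB")
```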
Paul was using this idea as part of an FAI design proposal, but I'm highlighting it here since it seems to have independent value as an alternative or supplement to cryonics. That is, instead of (or in addition to) trying to get your body to be frozen and then preserved in liquid nitrogen after you die, you periodically take neuroimaging scans of your brain and save them to multiple backup locations (10^10 bits is only about a gigabyte), in the hope that a friendly AI or posthuman will eventually use the scans to reconstruct your mind.
Are there any neuroimaging experts around who can tell us how feasible this really is, and how much such a scan might cost, now or in the near future?
ETA: Given the presence of thermal noise and the fact that a set of neuroimaging data may contain redundant or irrelevant information, 10^10 bits ought to be regarded as just a rough lower bound on how much data needs to be collected and stored. Thanks to commenters who pointed this out.
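To see why the raw data collected would be far larger than 10^10 bits, compare an assumed fMRI data rate against the 1-2 bits per second of retained information. The scanner parameters below are made-up round numbers for illustration only, not the specs of any real machine:

```python
# Illustrative comparison of raw fMRI data rate vs. the 1-2 bits/sec
# of long-term memory formation cited above. All scanner parameters
# are assumed round numbers.

VOXELS = 100_000          # assumed whole-brain voxel count
BITS_PER_VOXEL = 16       # assumed quantization
TR_SECONDS = 2.0          # assumed time between volumes

raw_bits_per_sec = VOXELS * BITS_PER_VOXEL / TR_SECONDS
useful_bits_per_sec = 2.0  # upper end of the memory-formation rate

print(f"raw data rate:     {raw_bits_per_sec:.0e} bits/sec")   # ~8e5
print(f"redundancy factor: ~{raw_bits_per_sec / useful_bits_per_sec:.0e}")
```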
I've been thinking about this more, and I recall a study that has already been done which seems to capture much of the important data, and which even has a built-in quality-assurance process providing some measure of assurance that the measurements contain meaningful content: reconstruction of observed images from brain scans. Once you see the first example, it suggests a whole range of iteratively more complicated research projects. One might attempt reconstruction of an audio channel, reconstruction of audio+video, conducting a structured interview and reconstructing the audio and text transcripts of the subject's vocal production, and perhaps eventually doing video or audio chat of some sort, with an attempt to reconstruct both sides of the conversation from two separate but synchronized brain scans. Neat :-)
ETA: The conversational reconstruction attempt seems like it would have specific cryonics implications, in terms of gaining data on the particular social dynamics whose viscerally experienced realities matter enormously to people when the subject is discussed in near mode.
Do you know how that "reconstruction" works? They are not just displaying brain data with a little post-processing added. If you play the video, you'll see completely spurious text floating around in some of the reconstructions. That's because the reconstructed video is a weighted sum over a few hundred videos taken from YouTube. They have a computational model of the mapping from visual input to neural activity, but when they invert the mapping, they assume that the input was some linear combination of those YouTube videos. The reconstruction i…
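To make that concrete, here is a minimal sketch of this kind of inversion, with a stand-in random linear encoding model in place of the study's learned one. The feature dimensions, correlation scoring, and top-k averaging are my own illustrative simplifications, not their actual pipeline; what the sketch shares with the described method is the key property that the output is a weighted sum of prior clips, so anything present in those clips (including stray text) can leak into the "reconstruction."

```python
import numpy as np

# Sketch of reconstruction as a weighted sum over prior clips.
# `encoding_model` stands in for a learned mapping from video
# features to predicted neural activity; here it is a fixed
# random linear map, for illustration only.

rng = np.random.default_rng(0)
N_CLIPS, FRAME_DIM, VOXEL_DIM = 300, 64, 32

prior_clips = rng.random((N_CLIPS, FRAME_DIM))   # candidate clips as feature vectors
W = rng.standard_normal((VOXEL_DIM, FRAME_DIM))  # stand-in encoding model weights

def encoding_model(frame_features):
    """Predict voxel responses from video features (stand-in linear model)."""
    return W @ frame_features

def reconstruct(measured_response, clips, top_k=30):
    """Weighted sum of the prior clips whose predicted responses best
    match the measurement -- so the output can only contain content
    that already exists somewhere in the prior clip set."""
    predicted = np.array([encoding_model(c) for c in clips])
    # Score each clip by correlating its predicted response with the measurement.
    scores = np.array([np.corrcoef(p, measured_response)[0, 1] for p in predicted])
    best = np.argsort(scores)[-top_k:]
    weights = scores[best] / scores[best].sum()
    return weights @ clips[best]

# Example: a "measured" response generated from a hidden true frame.
true_frame = rng.random(FRAME_DIM)
measured = encoding_model(true_frame) + 0.1 * rng.standard_normal(VOXEL_DIM)
estimate = reconstruct(measured, prior_clips)
print("correlation with true frame:", np.corrcoef(estimate, true_frame)[0, 1])
```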