
gwern comments on Neuroimaging as alternative/supplement to cryonics? - Less Wrong Discussion

Post author: Wei_Dai | 12 May 2012 11:26PM | 17 points


Comments (68)


Comment author: Kaj_Sotala 13 May 2012 01:49:36PM 14 points

After a few decades of video, there would have to be more than enough data to do the reconstruction.

Sandberg & Bostrom are skeptical. Page 109:

Again, it is sometimes suggested that recording enough of the sensory experiences and actions would be enough to produce brain emulation. This is unlikely to work simply because of the discrepancy in the number of degrees of freedom between the brain (at least 10^14 synaptic strengths distributed in a 10^22 element connectivity matrix) and the number of bits recorded across a lifetime (less than 2 * 10^14 bits (Gemmell, Bell et al., 2006)).
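The gap the quote points at can be made concrete with back-of-envelope arithmetic, taking the quoted figures at face value (~10^14 synapses in a ~10^22-entry connectivity matrix, under 2×10^14 recorded bits). A sketch:

```python
import math

# Figures taken from the quoted passage; everything below is a rough
# back-of-envelope check, not a claim about actual neuroanatomy.
synapses = 1e14          # nonzero entries in the connectivity matrix
matrix_entries = 1e22    # possible entries in the connectivity matrix
recorded_bits = 2e14     # upper bound on a lifetime of recorded experience

# Bits needed merely to specify WHICH entries are nonzero:
# log2 C(n, k) ~= k * (log2(n/k) + log2(e))  for k << n.
connectivity_bits = synapses * (math.log2(matrix_entries / synapses)
                                + math.log2(math.e))

# connectivity_bits comes out around 2.8e15 -- more than ten times the
# recorded lifetime, before counting any synaptic strengths at all.
```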

Also, Sandberg in 2010:

I would really like to develop a good argument about when reconstructing a mind from its inputs and outputs is possible. Being a slice-and-dice favoring WBE thinker, I am suspicious of the feasibility. But am I wrong?

It is not too hard to construct "minds" that cannot be reconstructed easily from outputs. Consider a cryptographically secure pseudorandom number generator: watching the first k bits will not allow you to predict the k+1 bit with more than 50% probability, until you have run through the complete statespace (requires up to ~2^(number of state bits) output bits). This "mind" is not reconstructible from its output in any useful way.
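Sandberg's PRNG point can be illustrated with a toy construction (my own, not from the quote): a generator whose 16-bit "mind state" is hashed to produce output bits. Watching the output tells you nothing useful about the state; the only generic recovery method is exhaustive search over the whole statespace, which scales as 2^(number of state bits).

```python
import hashlib

def prng_bits(state: int, nbits: int) -> str:
    # Toy CSPRNG (illustrative only): SHA-256 of the secret state plus a
    # counter, concatenated until we have nbits of output.
    out, counter = "", 0
    while len(out) < nbits:
        digest = hashlib.sha256(f"{state}:{counter}".encode()).digest()
        out += "".join(format(b, "08b") for b in digest)
        counter += 1
    return out[:nbits]

secret_state = 31337                      # 16-bit hidden "mind state"
observed = prng_bits(secret_state, 64)    # the behaviour we get to watch

# Reconstruction by brute force: try every possible state. Feasible at
# 2^16 candidates; hopeless at realistic state sizes.
recovered = next(s for s in range(2 ** 16)
                 if prng_bits(s, 64) == observed)
```

For a 16-bit state this search finishes instantly, but the work doubles with every added state bit, which is exactly why outputs alone don't usefully reconstruct such a "mind".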

However, this cryptographic analogy also suggests that some cryptographic approaches might be relevant. Browsing a paper like Cryptanalytic Attacks on Pseudorandom Number Generators by Kelsey, Schneier, Wagner and Hall (PDF) shows a few possibilities: input-based attacks would involve sending various inputs to the mind, and cryptoanalyzing the outputs. State compromise extension attacks make use of partially known states (maybe we have some partial brainscans). But it also describes ways the attacks might be made harder, and many of these seem to apply to minds: outputs are hashed (there are nontrivial transformations between the mindstate and the observable behavior), inputs are combined with a timestamp (there might be timekeeping or awareness that makes the same experience experienced twice feel different), occasionally generate new starting state (brain states might change due to random factors such as neuron misfiring, metabolism or death, sleep and comas, local brain temperature, head motion, cell growth etc). While the analogy is limited (PRNGs are very discrete systems where the update rule is simple rather than messy, more or less continuous systems with complex update rules - much harder to neatly cryptoanalyze) I think these caveats do carry over.
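The "state compromise extension" idea above can be sketched with the same kind of toy generator (again my own illustration, not Sandberg's): if a "partial brainscan" leaks half of a 16-bit hidden state, the search space collapses from 2^16 candidates to 2^8.

```python
import hashlib

def output_bits(state: int, nbits: int = 64) -> str:
    # Toy stand-in for the mapping from hidden state to observable output.
    digest = hashlib.sha256(str(state).encode()).digest()
    return "".join(format(b, "08b") for b in digest)[:nbits]

secret = 0xBEEF                    # full 16-bit hidden state
observed = output_bits(secret)     # observable behaviour

# State-compromise extension: the "partial scan" gives us the top 8 bits,
# so only the 256 possible low bytes remain to be searched.
known_high = secret & 0xFF00
candidates = [known_high | low for low in range(256)]
recovered = next(s for s in candidates if output_bits(s) == observed)
```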

But this is not a conclusive argument. Some minds are likely nonreconstructible (imagine the "mind" that just stores a list of its future actions is another example: it can be reconstructed up until the point where the data runs out, and then becomes completely opaque), but other minds are likely trivially reconstructible (like the "mind" that just outputs 1 at every opportunity). A better kind of argument is to what extent our behavioural output constrains possible brain states. I think the answer is hidden in the theory of figuring out hidden Markov models.
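Sandberg's closing suggestion — that the question is how far behavioural output constrains hidden brain states, via hidden Markov models — can be illustrated with a deliberately tiny HMM (states, probabilities, and labels are all made up for illustration). The forward algorithm gives the posterior over the hidden state given the observations: more output narrows the hidden state, but in general never pins it down completely.

```python
# Tiny two-state HMM; all numbers are invented for illustration.
states = ["calm", "agitated"]
start = {"calm": 0.5, "agitated": 0.5}
trans = {"calm":     {"calm": 0.9, "agitated": 0.1},
         "agitated": {"calm": 0.3, "agitated": 0.7}}
emit = {"calm":     {"quiet": 0.8, "loud": 0.2},
        "agitated": {"quiet": 0.3, "loud": 0.7}}

def posterior(observations):
    # Forward algorithm: P(current hidden state | observations so far).
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    total = sum(alpha.values())
    return {s: alpha[s] / total for s in states}

post = posterior(["loud", "loud", "loud"])
# Repeated "loud" observations make "agitated" much more probable
# (~0.84 here), yet the hidden state is still not fully determined --
# the output constrains the state without reconstructing it.
```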

Comment author: gwern 13 May 2012 05:20:59PM 7 points

Comment author: Kaj_Sotala 13 May 2012 06:39:58PM 2 points

Ah, that's the post I was primarily looking for, but couldn't find for some reason.

Comment author: Lightwave 14 May 2012 07:55:29AM 0 points

Oh well, the last resort would be to hope that the future AI will just recreate most possible past humans.