It's interesting to me that so much of the commenting and voting reveals a model of "neural self" that is very, very attentive to micro-physical details without paying attention to the macro structure of thought patterns, memories, goals, etc. I think that nanoscale structures are quite likely to have something to do with the thing I'd be interested in transmitting into the future, to be reconstructed as a participant in whatever happens there, but those nanostructures aren't essential to my model of who or what I am. What matters to me survives temporary inebriation due to alcohol and is fully recovered after I sober up. What matters to me survives going to sleep and awakening with significant restructuring of my memories.
Suppose I died by violence in the presence of cryo-positive medical personnel, for example, so that some of my "lower brain" was obliterated but the rest was reasonably well preserved. I think I'd be OK substituting in generic components that had the right "gross" parameters, inferred from other contexts and meshed with what was known at high resolution. If I don't walk with exactly the same neurons firing exactly the same way, because details about my motor cortex were lost... I mean, I'd prefer not, but it's not that big a deal in the big picture. My way of walking is part of me, but it isn't essential to who I am.
I've had F2F conversations on a similar theme with LW people in the past and was surprised by their feelings here (in those cases, the best intuition pump I've found used a discussion of the possibility and value of reconstructing a plausible Tutankhamen from physical records, in the complete absence of detailed brain data -- I'm generally in the minority in thinking this is possible and probably worth eventually doing, despite obvious limitations). I see the same detail-oriented mindset (which I predict would vote against King Tut) reflected in the comments and posts.
Having consistently different attitudes towards what's plausible or worthwhile is a sign of an educational opportunity. In my experience, the best way to approach such opportunities is to assume the defect is in myself, so... What am I missing?
The ability to identify important people in our life is vital to our identity. What level of fine detail would be required to preserve this? It would be disconcerting to be reconstructed but to lack a memory of what your mother looked like.
I think we are in agreement that recognition of important faces is pretty important... but I can imagine losing just that in a stroke and having to re-learn it, and still being me. It would be annoying, and I can imagine that too much change of that sort might cause what's left of me to be incapable of, or uninterested in, recognizing and cherishing things that are important to me. In that case, I think a copy of my current self and the future inhabitant of a body historically connected to this one might agree we weren't really instances of the same person, because they'd diverged too much.
However, it seems like you might be able to do this sort of brain scan using input data containing the things you cherish, so that many raw facts about the valence and elements of your care for aspects of what you care about would be recorded. Your tenderness or inattention or lust or disgust for many specific parts of the things you currently care about would be recorded. How your eyes saccade around each image would probably be reconstructable, so it would capture not just that you recognize your mother, but how you recognize your mother.
Starting with a well-preserved brain, I can imagine nanoscale structures being "bottom-up" reconstructed in functional form, but when it comes time to do "integration testing" on the resulting brain model, I could imagine the initial draft having seizures, because certain long-distance ratios being a bit different might have set up new resonance possibilities or something. I'm not defending this precise neurological claim (I have no high-precision model of how seizures work); I'm just trying to provide a simple example from the space of possible bugs to show that the space is probably non-empty. A perfect representation of a frozen brain run on simulated physics would be just as non-functional in the simulation as the frozen brain is in reality. You'd need to adjust the brain model and/or the simulation process (maybe both) to get something that works at all.
Testing for seizure-proneness and adaptively adjusting things to fix it seems like a very basic and obvious part of quality assurance on reconstructions. Testing for "memories of cherishing X" seems much harder to make part of basic QA, absent external data about this fact captured at the rough level of abstraction where it was obvious (i.e., by measuring actual responses to actual stimuli). If this were one of the first reconstructions of a person, then this extra layer of data for "emotional recognition level QA" is something I can imagine appreciating a lot, as an engineer.
When I imagine reconstructing an acceptable King Tut, I'm thinking of this as happening deep into the development of revival processes, where so much information from detail-informed reconstruction of other people's brains is known that basic details of functional neurology aren't even science anymore. They're just engineering, or maybe just application of existing tools by far-future script kiddies. If the tools are solid enough and they hook into the far-future Everythingpedia, then maybe all you really need is a description of who the person was that cuts human personhood at the joints.
The Big Five, for example, is an attempt to describe "personality" that was state of the art for the 1940s and is only very slowly filtering into mainstream cultural awareness even now. What I'm thinking of would be like the "big million" and perhaps be state of the art for the 2140s. In that sort of context, enough QA data might be all that is really required, and (plus or minus) I can imagine enough QA data being available from archaeological artifacts. A reconstructed Tut really might recognize his aunt because there is surviving data on her. Given genomics data, a statue of a relative, and lots of computer time, perhaps he would also be able to recognize his mother as she once was.
In some sense there is a really extreme inside-view/outside-view issue here. Taking the extreme outside view, with background knowledge about the entire Earth at this moment counting as an "accessible codebook" for compression, it takes fewer than 50 yes/no questions to identify any one of us. Taking the extreme inside view, with the precise configuration of every molecule considered a completely de novo surprise, it probably takes something like 10^50 yes/no questions to precisely describe any one of us. Maybe the issue is that I tend to imagine reconstruction happening in an environment richer in know-how, data, and resources, and so I tend to think fewer bits are required than other people do?
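To make the outside-view number concrete, a quick back-of-envelope sketch (the population figure is an assumption, and the inside-view 10^50 figure is taken from the comment above, not computed):

```python
import math

population = 8e9   # rough current world population (assumption)
print(f"outside view: {math.log2(population):.1f} bits")  # ~33, well under 50 questions

# The inside view treats every molecular detail as a de novo surprise;
# the ~10^50-question figure above is that other extreme, not computed here.
```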
Out of interest: how much more faithful do you think your reconstructed Tutankhamen would be than a talented and well-informed actor told to play Tutankhamen?
The physicality of the reconstruction alone would require impossibly good casting. A leg that's actually injured, a body with epigenetics consistent with the inferred diets of ancient Egyptian monarchs, food preferences consistent with dental wear marks... and so on. That leaves aside finding someone with the right physicality who can even act, and learn Egyptian, and so on and so forth. Which still leaves aside that there would be a lot in the actor that was not in Tut. Jim Carrey was not Andy Kaufman.
I don't buy the line of reasoning. It sounds to me like claiming, "I took the output of this random 100-state Turing machine and deleted 9,999 of every 10,000 bits, but a sufficiently advanced mind ought to be able to figure out the missing pieces." Maybe it is possible, but my prior is very low.
Current neuroimaging is nowhere near single-neuron fidelity. In other words, there is no such scan, nor is there reason to expect one anytime soon.
Wei_Dai wasn't saying there was. He was supposing that conventional brain scans produce data which is entangled with your particular brain; and that it's possible a sufficient number of such scans could enable a future superintelligence to reconstruct you to a sufficient level of fidelity. If FAI is coming but I buy the farm first, I would prefer lots of MRIs + a frozen brain to just a frozen brain; and I would prefer lots of MRIs to nothing at all.
edit: I do, of course, think the entangling is weak enough that 10GB of head MRIs << 10GB of mind state.
Sure, but these things cost money and we have finite resources. A dedicated fMRI machine would cost somewhere between 2 and 6 times SIAI's annual expenditures.
That's a separate argument from the one over whether such a scan is possible with present technology. I do agree that an fMRI machine shouldn't be a budget priority for SI; I also do not consider it worth my money to get frequent MRIs (although I have saved the one I got for other reasons last year). If I were sufficiently wealthy, though, I'd buy frequent brain scans before I'd buy, say, a Ferrari (and I do really enjoy fast, flashy cars).
Such a scan isn't possible with present technology. What is possible are scans and other methods of recording information that could conceivably be relevant to an attempt to reconstruct your mind at a future time. If the argument is simply that brain scans "couldn't hurt"... well, 'duh'. But making a diary of your frequent thoughts or videotaping yourself as you go about your day couldn't hurt either. Knowing the regional blood-flow patterns in your brain in response to narrow and limited stimuli is not in a significantly different category.
The question is whether the cost and time involved in these endeavors is better than plausible counterfactual spending. My point isn't just that SI shouldn't purchase a machine; it's that giving to existential risk research, or ensuring the financial stability of your cryonics organization, probably has a better return for your own long-term survival than getting frequent fMRI scans does. The point that spending money on brain scans has a better return than an expenditure that probably lowers your long-term survival rate (due to car accidents) is not a strong argument.
Current-day neuroimaging techniques seem to be inadequate for a full reconstruction, but having such scans in addition to a cryopreserved brain couldn't hurt.
I think that if you're getting cryo, you should go the full way, documenting as many details of your life as possible. The more information there is about you, the more likely it is that you can be accurately recovered. If you can get brain scans of yourself, all the better.
How much loss is acceptable in the reconstruction?
I'd imagine the reconstructed minds would be happier with their own fidelity than the deconstructed minds. And that the reconstructor might trade off some fidelity for utility towards whatever purpose they had in doing the reconstruction.
I see http://lesswrong.com/lw/b93/brain_preservation/ and http://lesswrong.com/lw/bg0/cryonics_without_freezers_resurrection/ -- are there other good discussions?
The gap between creating a working mind and producing an exact reconstruction seems large.
A newborn’s brain can be specified by about 10^9 bits of genetic information
While the brain of a newborn baby can be generated by 10^9 bits of genetic information, it's not true that this is enough to suitably specify a particular newborn's brain. This is because of the large impact that conditions in the womb have on brain development (e.g. drugs and alcohol) and the limited extent to which brain structure is inherited.
However, it's quite likely that only specific sections of the genome contribute to brain development, meaning that your lower bound for how much information it takes to generate a newborn's brain is (*probably!) much lower than 10^9 bits. Though this still won't be enough information to specify a particular newborn's brain, just enough to considerably narrow down the region of brain-structure-space that the newborn's brain can occupy.
*Don't take my word for this – I don't know nearly enough to substantiate that claim.
Also, on a tentative note, it might be worth comparing scans of a brain before and after it's been cryogenically preserved, to see whether it's possible to tell the difference (and subsequently whether the data from the pre-freezing brain can be approximated from the post-freezing brain data).
it's not true that this is enough to suitably specify a particular newborn's brain. This is because of the large impact that conditions in the womb have on brain development
The amount of information from a womb to the brain is about zero.
You are welcome to "bother" anytime.
I eat a lot -- half a kilogram to a kilogram per day. What is the amount of information I've got this way? Very little, if any. Even drinking a lot of alcohol -- which I don't -- and which would destroy my liver, would mean a very minuscule data transfer.
Brain-stimulating drugs one might take, even ones that bring significantly higher intelligence, are not a big data flow.
It just isn't. Biologists should respect what "information" and "data stream" and so on actually mean.
I agree, yet none of that changes the fact that conditions in the womb have a large impact on brain development. Hence information about the conditions in the womb is required to generate a specific newborn's brain. Sure, when an adult takes a stimulating drug there's not a large data flow, but when the brain is actually forming, drugs can fundamentally alter its final structure.
Those biologists are enthusiastic about it, I know. But they simply don't understand what information means, IMHO.
A few years ago, some of them frequently claimed that there is a ridiculously big number of bytes stored in human memory -- something much greater than fits inside the Bekenstein bound for the planet, let alone the head.
In my humble opinion, and with all the respect, they don't know what they are talking about.
I'm confused. Are you saying that they are wrong when they say that womb environment impacts intelligence and sexual preference? Is it possible that there's an issue of definitions going on here about what is meant by "about zero"?
A few years ago, some of them frequently claimed that there is a ridiculously big number of bytes stored in human memory -- something much greater than fits inside the Bekenstein bound for the planet, let alone the head.
Do you have a citation for this? I'd be curious to see that.
There's one estimate in the Whole Brain Emulation Roadmap, from (Wang, Liu et al., 2003), putting the brain's memory capacity at 10^8432 bits.
Sandberg & Bostrom sardonically note in a footnote that 'This information density is far larger than the Bekenstein black hole entropy bound on the information content in material systems (Bekenstein, 1981).'
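For scale, here is a rough Bekenstein-bound calculation for a head-sized system; the radius and mass are ballpark assumptions:

```python
import math

hbar = 1.0545718e-34   # J*s
c = 2.99792458e8       # m/s

R = 0.1                # m, rough head radius (assumption)
M = 5.0                # kg, rough head mass (assumption)
E = M * c**2           # rest-mass energy, J

# Bekenstein bound on the information content of a sphere of radius R, energy E:
bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"Bekenstein bound: ~10^{math.log10(bits):.0f} bits")  # on the order of 10^43
```

Roughly 10^43 bits for a head, against the quoted 10^8432: the footnote's sarcasm seems well earned.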
I'm not surprised that such estimates exist. What I'm more doubtful of is the claim that such bounds were "frequently claimed".
Can't find it now, I am sorry. But I remember the number 2^8000 or thereabouts bytes, mentioned a few years ago as an estimate by some scientist -- a neurologist. Now it is impossible to find, since Google can't search for a "2^8* ... bytes ... brains" type of string, or regular expressions, or anything like that.
Are you saying that they are wrong when they say that womb environment impacts intelligence and sexual preference
I am saying that there is at most a very tiny amount of information flow, even if the womb can make you smarter or dumber. If a lightning strike makes somebody 10 IQ points smarter -- which I can see as a possibility -- the amount of information conveyed by the lightning is about zero.
The main issue I am skeptical of is the statistics rather than the neuroscience. Just because the brain can be stored in 10^10 bits does not imply that measuring O(10^10) bits at random will give you what you want. But perhaps Paul has a reason to believe this beyond e.g. the intuition from the fact that random projections work for compressed sensing (which seems qualitatively different to me, since recovering L^2 distances is a much less structured problem than recovering brains, so we have more reason to believe in that scenario that random bits are approximately as good as carefully chosen bits).
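For readers who haven't seen the random-projection fact being referenced, a minimal numpy demonstration (the sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 10_000, 1_000   # 50 points in 10^4 dims, measured with 10^3 random projections

X = rng.normal(size=(n, d))
P = rng.normal(size=(d, k)) / np.sqrt(k)   # random measurement matrix
Y = X @ P

def pairwise(A):
    # Euclidean distance matrix via dot products (memory-friendly).
    sq = (A ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
    return np.sqrt(np.maximum(d2, 0))

i, j = np.triu_indices(n, 1)
ratio = pairwise(Y)[i, j] / pairwise(X)[i, j]
print(f"projected/original distance ratios: mean {ratio.mean():.3f}, std {ratio.std():.3f}")
# ratios cluster tightly around 1: random bits suffice to recover L^2 structure
```

Whether anything analogous holds when the target is a brain rather than a distance matrix is exactly the open question here.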
I think I agree with you, but it might be misleading to talk about brain images as random bits.
Peter Passaro would be a good person to ask. I think he did doctorate work on using neuroimaging and statistical analysis to work toward WBE without needing to slice-and-scan.
OK, so I'm presuming that an extremely fine-grained scan stored with some naive compression is massively more than 10^14 synapse-bits. In order to store all that now at the information-theoretic minimum, don't we need some kind of incredibly awesome compression algorithm NOW that we simply don't have?
No, I think the idea is to do coarse-grained scans, which the superintelligence will have to heavily process in order to infer the original brain structure. (Yeah, it's not clear this is possible even with a whole universe worth of computing power and whatever algorithmic breakthroughs a superintelligence might come up with.)
you periodically take neuroimaging scans of your brain and save them to multiple backup locations (10^10 bits is only about 1 gigabyte)
I think I understand, but I'm lost as to why that 10^10 is showing up here. Wouldn't it be whatever the scan happens to be, rather than a reference to the compressed size of a human's unique experiences? We might plausibly have a 10^18-bit scan that is detailed in the wrong ways (like carrying 1024 bits per voxel of color-channel info :p).
eta: In case it's not clear, I can't actually help you answer the question of just how useful a scan is.
It's also not clear that one could tell whether it failed. That is, OK, it processes the scans, interpolates over the gaps, and a person pops out the other end who believes himself to remember being me. Yay? Maybe.
Then again, it's not clear to me that I ought to care about the difference.
It's also not clear that one could tell whether it failed.
If the superintelligence does the same kind of coarse-grained scan on living humans and successfully copies/recreates them from that information alone, there would be every reason to think the process would work just as well with dead humans like you, right?
Then again, it's not clear to me that I ought to care about the difference.
Well, if you care about living, rather than about somebody similar to you who wrongly believes himself to be you, you definitely should care about the difference.
I care about living (usually), but it's not clear to me that what I care about when I care about living is absent in the "failed" scenario.
As far as I can tell, "being me" just isn't all that precisely defined in the first place; it describes a wide range of possible conditions. Which seems to allow for the possibility of two entities A and B existing at some future time such that A and B are different, but both A and B satisfy the condition of being me.
I agree, though, that if A is the result of my body traveling through time in the conventional manner, and B is the result of some other process, and A and B are different, it is conventional to say that A is really me and B is not. It's just that this strikes me as a socially constructed truth more than an empirically observed one.
I also agree that the test you describe is compelling evidence that the copy/recreation process is as reliable a self-preserver as anything could be.
It should be possible to check for corruption in the process by having the AGI not use some known information in the reconstruction, then asking the reconstruct to answer questions with known answers.
(For example, the AGI could withhold the (known, from records) birthdate of the person during reconstruction; afterwards, if the reconstruct doesn't remember their correct birthdate, that would be strong evidence that the process had failed. Given a sufficiently large number of these tests, the superintelligence could verify the fidelity of the reconstruction with reasonable accuracy.)
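A toy sketch of that verification protocol; `reconstruct` and the `.answer` method are hypothetical stand-ins for the AGI's side of the process:

```python
import random

def holdout_qa(records, reconstruct, n_holdout=10, pass_fraction=0.9):
    """Withhold some known facts during reconstruction, then quiz the result.

    `records` maps questions (e.g. "birthdate?") to known answers;
    `reconstruct` is the hypothetical AGI step, returning an object
    with a (hypothetical) .answer method.
    """
    held_out = dict(random.sample(list(records.items()), n_holdout))
    visible = {q: a for q, a in records.items() if q not in held_out}

    person = reconstruct(visible)                 # built without the held-out facts
    correct = sum(person.answer(q) == a for q, a in held_out.items())
    return correct / n_holdout >= pass_fraction   # e.g. 9 of 10 right to pass
```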
Huh. I had never explicitly noticed that one could acquire sideloading data in the present. Nor have I seen an information-theoretic analysis of brain contents that was so plausible and so low. This seems like it might be important, but I'll have to ponder the implications for a while. Thank you for posting this :-)
I've been thinking about this more, and I recall a study that's already been done which seems to capture most of the important data, and which even has a built-in quality assurance process providing a measure of assurance that the measurements contain meaningful content: reconstruction of observed images from brain scanning. Once you see the first example, it suggests a whole range of iteratively more complicated research projects. One might attempt reconstruction of an audio channel, reconstruction of audio+video, doing a structured interview and reconstructing the audio and text transcripts of the subject's vocal production, and perhaps eventually doing video or audio chat of some sort, with an attempt to reconstruct both sides of the conversation from two separate but synchronized brain scans. Neat :-)
ETA: The conversational reconstruction attempt seems like it would have specific cryonics implications, in terms of gaining data on specific social dynamics whose visceral experienced realities are really really important to people when the subject is discussed in near mode.
Do you know how that "reconstruction" works? They are not just displaying brain data with a little post-processing added. If you play the video, you'll see completely spurious text floating around in some of the reconstructions. That's because the reconstructed video is a weighted sum over a few hundred videos taken from YouTube. They have a computational model of the mapping from visual input to neural activity, but when they invert the mapping, they assume that the input was some linear combination of those YouTube videos. The reconstruction is just about determining the weights. So maybe in reality you just see a caravan of elephants walking in front of you, but your "reconstructed" visual experience also has text from a music video flickering past, because that music video supplies one of the basis vectors and was assigned a high similarity by the model.
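A minimal sketch of the weighted-sum idea as described above; the dimensions and the ridge penalty are invented for illustration, and the real study's forward model is far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_clips, n_voxels = 300, 2_000

# Predicted brain response to each dictionary clip, from the forward model.
clip_responses = rng.normal(size=(n_clips, n_voxels))
# Observed brain response to the unknown stimulus.
observed = rng.normal(size=n_voxels)

# Ridge regression for mixing weights: the "reconstruction" is then just the
# weighted sum of dictionary clips, so any high-weight clip leaks its own
# content (e.g. floating text) into the output.
lam = 10.0
G = clip_responses @ clip_responses.T + lam * np.eye(n_clips)
weights = np.linalg.solve(G, clip_responses @ observed)

print("top-weighted dictionary clips:", np.argsort(-weights)[:5])
```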
If you click through to the discussion on YouTube, there are commenters freaking out and speculating that the words must be subconscious thoughts of the people who had their brains scanned. So we're getting several lessons at once from this exercise: if people are reconstructed from a lossy backup, there may be spurious insertions as well as lost data; and the non-technical public will interpret artefacts as real, in a creative way which also attributes much more power to a technology than it possesses.
I didn't "know" the details of the reconstruction, but I suspected it was relatively simple, and you've confirmed that. Also, I agree denotationally with everything you said about inevitable bugs and a public that leaves something to be desired. Nonetheless, it is neat anyway, because Sturgeon's law (90% of everything is crap) is roughly right, and this is non-crappy enough that it deserves some appreciation :-)
Also, if someone were going to non-destructively collect data from various sources to attempt a sideload by constraining on observable frozen anatomy, recordable functional outcomes, etc., then this general kind of raw data might help constrain the final model, or speed up the annealing, by completely ruling out certain sorts of overall neural facts, like what things will trigger vivid recognition or not (and with what sort of emotional resonances).
I added this note to the post:
Given the presence of thermal noise and the fact that a set of neuroimaging data may contain redundant or irrelevant information, 10^10 bits ought to be regarded as just a rough lower bound on how much data needs to be collected and stored. Thanks to commenters who pointed this out.
If we're relying on a future superintelligence to reconstruct our brains, why not make it a little harder?
There's no reason you couldn't buy a wearable camera that recorded your inputs and outputs, and back everything up to hard disks in HD. Much cheaper to store than a frozen brain. After a few decades of video, there would have to be more than enough data to do the reconstruction. Then, when you die, you just stick the big stack-o-harddrives into a vault and wait for the future AI overlord to find them, scan them, and put them back together into a person again. Boom. Immortality on the cheap.
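Rough numbers for that stack of hard drives (the bitrate and recording hours are assumptions):

```python
years = 30
hours_per_day = 16   # awake-and-recording time (assumption)
bitrate = 5e6        # ~5 Mbit/s for decent HD video (assumption)

seconds = years * 365 * hours_per_day * 3600
total_bits = seconds * bitrate
print(f"{total_bits:.1e} bits, ~{total_bits / 8 / 1e12:.0f} TB")
# ~3e15 bits, a few hundred TB -- cheap to store, but compare the
# degrees-of-freedom objection quoted below.
```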
After a few decades of video, there would have to be more than enough data to do the reconstruction.
Sandberg & Bostrom are skeptical. Page 109:
Again, it is sometimes suggested that recording enough of the sensory experiences and actions would be enough to produce brain emulation. This is unlikely to work simply because of the discrepancy in the number of degrees of freedom between the brain (at least 10^14 synaptic strengths distributed in a 10^22 element connectivity matrix) and the number of bits recorded across a lifetime (less than 2 * 10^14 bits (Gemmell, Bell et al., 2006)).
Also, Sandberg in 2010:
I would really like to develop a good argument about when reconstructing a mind from its inputs and outputs is possible. Being a slice-and-dice favoring WBE thinker, I am suspicious of the feasibility. But am I wrong?
It is not too hard to construct "minds" that cannot be reconstructed easily from outputs. Consider a cryptographically secure pseudorandom number generator: watching the first k bits will not allow you to predict the k+1 bit with more than 50% probability, until you have run through the complete statespace (requires up to ~2^(number of state bits) output bits). This "mind" is not reconstructible from its output in any useful way.
However, this cryptographic analogy also suggests that some cryptographic approaches might be relevant. Browsing a paper like Cryptanalytic Attacks on Pseudorandom Number Generators by Kelsey, Schneier, Wagner and Hall (PDF) shows a few possibilities: input-based attacks would involve sending various inputs to the mind, and cryptoanalyzing the outputs. State compromise extension attacks make use of partially known states (maybe we have some partial brainscans). But it also describes ways the attacks might be made harder, and many of these seem to apply to minds: outputs are hashed (there are nontrivial transformations between the mindstate and the observable behavior), inputs are combined with a timestamp (there might be timekeeping or awareness that makes the same experience experienced twice feel different), occasionally generate new starting state (brain states might change due to random factors such as neuron misfiring, metabolism or death, sleep and comas, local brain temperature, head motion, cell growth etc). While the analogy is limited (PRNGs are very discrete systems where the update rule is simple rather than messy, more or less continuous systems with complex update rules - much harder to neatly cryptoanalyze) I think these caveats do carry over.
But this is not a conclusive argument. Some minds are likely nonreconstructible (imagine the "mind" that just stores a list of its future actions is another example: it can be reconstructed up until the point where the data runs out, and then becomes completely opaque), but other minds are likely trivially reconstructible (like the "mind" that just outputs 1 at every opportunity). A better kind of argument is to what extent our behavioural output constrains possible brain states. I think the answer is hidden in the theory of figuring out hidden Markov models.
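A toy instance of the non-reconstructible "mind" Sandberg describes, using a hash for both output and state update (a sketch, not a vetted cryptographic construction):

```python
import hashlib

class OpaqueMind:
    """Output bits look random; the hidden state never appears in them."""
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def step(self) -> int:
        # Output is one bit of a hash of the state; the state advances via a
        # *different* hash, so watching outputs reveals (almost) nothing.
        out = hashlib.sha256(b"out" + self.state).digest()[0] & 1
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out

mind = OpaqueMind(b"secret initial condition")
print([mind.step() for _ in range(16)])   # no feasible way back to the seed
```

The opposite extreme, the "mind" that always outputs 1, is trivially reconstructible; real brains presumably sit somewhere in between, which is the quoted argument's point.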
Well, if I'm doing morningstar rhetoric, I'd best get my game face on.
First, in the paper below, their estimate of the information density of the brain is somewhat wrong. What you actually need is the number of neurons in the brain (10^11), squared, times two bytes for half-precision floating-point storage of the strength of the synaptic connection, plus another two bytes to specify the class of neuron, times two as a fudge factor for white matter doing whatever it is that white matter does. That all works out to 6.4 * 10^23 bits.
Now that we've actually set up the problem, let's see if we can find a way it might still be possible. First, let's do the obvious thing and forget about the brain stem. Provided you've got enough other human brain templates on ice, that can probably be specified in a negligible number of terabytes, to close enough accuracy that the cerebral cortex won't miss it. What we really care about are the 2 * 10^10 neurons in the cerebral cortex, which brings our overall data usage down to about 10^22 bits. Not a huge improvement, I grant you, but we're working. Second, remember that our initial estimate was for the amount of RAM needed, not the entropy. We're storing slots in a two-dimensional array for each neuron to connect to every other neuron, which will never happen. Assuming 5000 synapses per neuron, all but 5000 / (2 * 10^10) of our dataset is going to be zeroes. Let's apply run-length encoding for zeroes only, and we should see a reduction by a factor of a hundred thousand, conservatively. That brings it down to 10^17 bits, or 11 petabytes.
Now let's consider that the vast majority of connectomes will never occur inside a human brain. If you generate a random connectome from radio noise and simulate it as an upload, even within the constraints already specified, the result will not be able to walk, talk, reason, or breathe. This doesn't happen to neurologically healthy adults, so we can deduce that human upbringings and neural topology tend to guide us into a relatively narrow section of connectome space. In fact, I suspect there's a good chance that uploading this way would be a process of starting with a generic human template and tweaking it. Either way, if you took a thousand of those randomly generated minds, I would be very surprised if any of them was anything resembling a human being, so we can probably shave another three orders of magnitude off the number of bits required. That's 10^15 bits of data, or 113 terabytes. Not too shabby.
Based on this, and assuming that nights are a wash and we get no data, in order to specify all those bits in ten years we would need to capture something like 6.4 megabits per second of entropy, or a little less than a megabyte per second. This seems a little high. However, there are other tricks you could use to boost the gain. For example, if you have a large enough database of human brain images, you could meaningfully fill in gaps using statistics: if, of the ten thousand people with these eight specific synaptic connections, 99% also have a particular ninth one, it'd be foolish not to include it.
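Laying that chain of estimates out explicitly (every factor here is the comment's own assumption, not an established number):

```python
dense_bits = 1e22              # cortex-only dense connection matrix, from above
rle_bits = dense_bits / 1e5    # run-length encode the zeroes
human_bits = rle_bits / 1e3    # restrict to human-plausible connectomes

seconds = 10 * 365 * 12 * 3600   # ten years of 12-hour days, nights a wash
rate = human_bits / seconds
print(f"{human_bits:.0e} bits; capture rate ~{rate / 1e6:.1f} Mbit/s "
      f"(~{rate / 8 / 1e6:.2f} MB/s)")
# 1e+15 bits; ~6.3 Mbit/s, a little under a megabyte per second
```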
In short, it seems somewhat infeasible, but not strictly impossible. You could augment by monitoring spinal activity, implanting electrodes under your scalp to directly record data on brain region activation, and by the future use of statistical analytics.
Now, actually deducing the states of the brain based on its output, as you said, might be difficult or impossible enough to put an end to the whole game before it starts. Still, it might actually work.
What you actually need is the number of neurons in the brain (10^11), squared
But the vast majority of neuron pairs are not connected at all, which suggests storing a list of connections instead of the full table of pairs you propose. If every neuron can be specified in 1 KB (location, all connections), we're talking ~100 TB, about $10,000 in hard disks, or less in e.g. tape media.
Of course, actually getting all this data is expensive, and you'd probably want a higher level of data security than "write it to a consumer hard drive and store that in a basement".
1 KB seems very optimistic. Uniquely identifying each neuron would require the log of the number of neurons in the brain, or about 36 bits. Figuring five thousand connections per neuron, that's 36 * 5000 bits to store which synapse goes where, or (64 + 36) * 5000 bits to store which synapse goes where plus the signal intensity and metadata. In short, it'd actually be more like 500 kilobits (roughly 60 KB) per neuron, or on the order of 6,000 TB.
Granted that's before compression, but still.
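The two per-neuron estimates side by side, taking the thread's assumptions at face value:

```python
import math

neurons = 1e11
connections = 5000
id_bits = math.ceil(math.log2(neurons))   # ~37 bits to name one neuron

estimates = {
    "1 KB/neuron (parent comment)": 1000 * 8,
    "per-synapse id + metadata": (64 + id_bits) * connections,
}
for label, bits_per_neuron in estimates.items():
    total_tb = bits_per_neuron * neurons / 8 / 1e12
    print(f"{label}: {bits_per_neuron:,} bits/neuron -> ~{total_tb:,.0f} TB")
# ~100 TB on the optimistic figure, several thousand TB on the detailed one
```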
We need to use complexity theory to try to see how powerful the superintelligence has to be. It may turn out that it would need to be substantially bigger than a Jupiter brain (cubed? squared? to the 100th power?) to construct anything relevant to self-preservation from such a coarse data set. As an intuition pump, and I've said this before: a computing system that is to mankind as mankind is to one amoeba should be expected to predict the weather for about twice as long as mankind can -- and that's given perfect information; given limited information the gain may be quite small and entirely unimportant. To go from coarse brain scans to a detailed brain state, one has to somehow run the simulation in reverse (or worse yet, brute-force the state), which I think explodes even worse than the forward butterfly effect. I'd say we don't know enough about the brain to tell what it takes to do this kind of thing, but what we do know about chaotic nonlinear systems in general does not inspire optimism.
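The forward butterfly effect is easy to see in any chaotic map; a standard logistic-map demonstration (purely illustrative, not a brain model):

```python
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.400000, 0.400001   # initial conditions differing by 1e-6
for step in range(51):
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.2e}")
    a, b = logistic(a), logistic(b)
# the gap roughly doubles per step; by ~step 30 the trajectories are unrelated,
# and inverting such dynamics from coarse data is harder still
```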
The jump from "it might be possible in principle" to "it would therefore be doable by a superintelligence" only works if you think theological thoughts about AI.
It may turn out that it would need to be substantially bigger than a Jupiter brain (cubed? squared? to the 100th power?) to construct anything relevant to self-preservation from such a coarse data set.
Yeah, I'm aware of this and said as much in a previous comment. I appreciate you giving a more detailed explanation, but wish you hadn't also included a sentence implying that I "think theological thoughts about AI".
As you say, this needs to be investigated more using complexity theory, but my guess is that we won't reach any definitive conclusions due to the difficulty of the problem. (For example we can't even prove the hardness of inverting cryptographic hash functions specifically designed to be secure, so how can we expect to prove the hardness of doing this kind of brain reconstruction?) What we should do in that case isn't clear, but it seems worth taking a chance if the cost of (sufficiently detailed) neuroimaging is low enough. What do you think?
I just thought of a good analogy: if a hash combines random gibberish you don't need recovered (thermal fluctuations amplified by neurons) with the data you do want to recover (personality), and the random gibberish is larger than the hash, you'll probably not be able to recover the useful data. That leaves open the question of how much random noise gets hashed into the result.
Nah, I don't think you think theological thoughts, but I do admit I have that opinion of some other people. Theological thoughts are strangely attractive to minds :/ . That's how we got religion in the first place. Can't be too careful about recreating religion.
With hash functions, there seems to be a very strong disparity: the definition of a hash can fit on one page, yet all our might cannot invert it. And that's with us trying to use fewer iterations.
Here we are speaking of a hash whose definition is trillions of times larger, and which mixes in random thermal noise. Complexity theory is not the only tool; there are also Lyapunov-exponent considerations. It may well be that too big a range of minds is compatible with the dataset, even though the dataset has enough data, if the thermal noise got hashed in. Then there can be future ethical considerations against any brute-force process that creates minds only to destroy them.
On whether it is worth a try: for me the strategic considerations apply -- if I deem something like this worth a try, I will end up buying snake oil. Also, the money, for most of us, can be spent on living a better life now.
(A big reason I took part in a study on people at high risk for schizophrenia is 'cuz I got paid to get a few hours of fMRI. There might be similar opportunities available to be found on aggregator websites for medical studies.)
fMRI is an extremely coarse-grained scan and is unlikely to substantially assist in reconstruction. fMRI works by detecting changes in blood flow indicating that specific areas of the brain are using more energy for the task in question. In practice fMRI is so noisy that to get good data one generally needs to average scans of many different people together; otherwise the noise almost completely overrides the signal. The smallest region an fMRI can scan (a voxel) generally contains millions of distinct neurons, and the best temporal resolution is around half a second, which is a massive length of time for purposes of thought and behavior. Overall, the data this gives for reconstruction is extremely marginal when limited to only a handful of scans.
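For a sense of the scale mismatch (the voxel size and neuron density are textbook ballparks, not measurements):

```python
voxel_mm = 3.0          # typical fMRI voxel edge length, mm (assumption)
neuron_density = 1e5    # rough cortical neuron count per mm^3 (assumption)

print(f"~{voxel_mm**3 * neuron_density:.0e} neurons per voxel")   # ~3e6: millions, as stated
```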
Nothing, I haven't even bothered getting UCSF to send it to me. I haven't yet looked into what I should do once I get around to doing that.
Academia is not known to attract the most organized people; if you're ever going to request it, you should probably do it as soon as possible. (And if you weren't, then haven't you wasted your time?)
The probabilistic existence of the data makes it easier for hypothetical superintelligences to resurrect me, whether or not I have access to the data. I am not going to get cryonics.
...And the probability is much higher if you have a personal copy on your computer, backed up like everything else.
What is this, 'Newsome's Wager'? "There exists a non-infinitesimal chance that your brain pattern has been preserved by various processes..."
...And the probability is much higher if you have a personal copy on your computer, backed up like everything else.
I doubt that it's more than an order of magnitude difference. Also see my reply to Wei Dai elsewhere in this thread.
I don't understand this. There's probabilistic existence of the data even if you didn't take part in the study (some aliens could always have abducted you and took scans while you were asleep one day). If you care enough to want to increase the probability, why not increase it further by doing a small amount of extra work? (In case it's not clear, the increase comes from having backups of the data around if UCSF deletes or destroys their copy.)
Laziness caused by ugh fields caused by guilt caused by dropping out of their study immediately after getting the fMRI, which explicitly wasn't against the rules but still feels immoral. I also think the probability is low that UCSF and all the people they send the data to will all delete the data. I also don't emotionally care about living forever or being resurrected or whatever, I just have negative motivations stemming from the potential immorality of not trying to save information about my mind that is for whatever reason useful, e.g. useful for future gods to judge me by. Also I think there are already gods hanging around that already have access to information about my brain-mind. Also I think that even if there weren't such gods already then alien gods would still be able to collect the information as they rushed towards Earth and bring it back to any future Earth-born gods. Also I'm skeptical that an fMRI makes a big difference, i.e. I'm skeptical of Paul's approach, especially when compared to options like eating the internet and looking at my writings, including my descriptions of my memories and cognitive style and so on. Basically the scenario where my additional effort makes a difference strikes me as a really unlikely scenario so it's hard for me to care about it, especially when there are way more important things for me to care about. (ETA: I also tentatively believe in a Leibnizian/Thomistic God, Who would know everything about me already. This is mostly disjunctive with thinking that there are already gods on Earth.)
I find this a much saner approach than cryonics. And it's uploading, in essence: the first step is this imaging, the second is playing it back inside a computer.
Paul Christiano recently suggested that we can use neuroimaging to form a complete mathematical characterization of a human brain, which a sufficiently powerful superintelligence would be able to reconstruct into a working mind, and the neuroimaging part is already possible today, or close to being possible.
Paul was using this idea as part of an FAI design proposal, but I'm highlighting it here since it seems to have independent value as an alternative or supplement to cryonics. That is, instead of (or in addition to) trying to get your body to be frozen and then preserved in liquid nitrogen after you die, you periodically take neuroimaging scans of your brain and save them to multiple backup locations (10^10 bits is only about 1 gigabyte), in the hope that a friendly AI or posthuman will eventually use the scans to reconstruct your mind.
Are there any neuroimaging experts around who can tell us how feasible this really is, and how much such a scan might cost, now or in the near future?
ETA: Given the presence of thermal noise and the fact that a set of neuroimaging data may contain redundant or irrelevant information, 10^10 bits ought to be regarded as just a rough lower bound on how much data needs to be collected and stored. Thanks to commenters who pointed this out.