Could a future superintelligence bring back the already dead? This discussion came up a while back (see also the somewhat related discussion); I'd like to resurrect the topic because ... it's potentially quite important.
Algorithmic resurrection is a possibility if we accept the same computational patternist view of identity that suggests cryonics and uploading will work. I see this as the only consistent view of my observations, but if you don't buy this argument/belief set then the rest may not be relevant.
The general implementation idea is to run a forward simulation over some portion of earth's history, constrained to enforce compliance with all recovered historical evidence. The historical evidence would consist mainly of all the scanned brains and the future internet.
The thesis is that to the extent that you can retrace historical reality complete with simulated historical people and their thoughts, memories, and emotions, to this same extent you actually recreate/resurrect the historical people.
So the questions are: is it feasible? is it desirable/ethical/utility-efficient? And finally, why may this matter?
Simulation Feasibility
A few decades ago Pong was a technical achievement; now we have Avatar. The trajectory suggests we are on track for photorealistic simulations fairly soon (decades). Offline graphics for film are arguably already photoreal, real-time rendering is close behind, and the biggest remaining problem is the uncanny valley, which is really just the AI problem by another name. Once we solve that (which we are assuming here), the Matrix follows. Superintelligences could help.
There are some general theorems in computer graphics that suggest that simulating an observer optimized world requires resources only in proportion to the observational power of the observers. Video game and film renderers in fact already rely heavily on this strategy.
Criticism from Chaos: We can't even simulate the weather more than a few weeks in advance.
Response: Simulating the exact future state of specific chaotic systems may be hard, but simulating chaotic systems in general is not. In this case we are not simulating the future state, but the past. We already know something of the past state of the system, to some level of detail, and we can simulate the likely (or multiple likely) paths within this configuration space, filling in detail.
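The "fill in likely paths consistent with evidence" idea can be sketched concretely. Below is a toy, hypothetical example of my own: instead of reversing a chaotic system exactly, we guess candidate past states, run them *forward*, and keep only those consistent with a coarse observation of the present. The logistic map stands in for a chaotic system.

```python
import random

def step(x: float) -> float:
    """One iteration of the chaotic logistic map (r = 3.9)."""
    return 3.9 * x * (1.0 - x)

def simulate_forward(x0: float, n: int) -> float:
    for _ in range(n):
        x0 = step(x0)
    return x0

def consistent(x_final: float, evidence: float, tol: float) -> bool:
    """Does a forward-simulated history match the recovered evidence?"""
    return abs(x_final - evidence) < tol

random.seed(0)
true_past = 0.123456
evidence = simulate_forward(true_past, 10)   # what we can observe "now"

# Rejection sampling over candidate pasts, constrained by the evidence.
candidates = [random.random() for _ in range(100_000)]
recovered = [x for x in candidates
             if consistent(simulate_forward(x, 10), evidence, 1e-2)]

# Typically many candidate histories match; they need not equal true_past,
# which mirrors the point about multiple likely paths in configuration space.
print(len(recovered) > 0)
```

Note the asymmetry: forward simulation plus constraint-checking is cheap per candidate, while exact reversal of the dynamics is the expensive brute-force route.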
Physical Reversibility Criticism: The AI would have to rewind time; it would have to know the exact state of every atom on earth and every photon that has left it.
Response: Yes, the most straightforward brute-force way to infer the past state of earth would be to reverse all physical interactions, and that would require ridiculously impractical amounts of information and computation. But the best algorithm for a given problem is usually not brute force. The data specifying a human mind is infinitesimal in comparison, and even a random guessing algorithm would probably require fewer resources than fully reversing history.
Constrained simulation converges much faster to perfectly accurate recovery, but by no means is full perfect recovery even required for (partial) success. The patternist view of identity is fluid and continuous.
If resurrecting a specific historical person is better than creating a hypothetical person, creating a somewhat historical person is also better, and the closer the better.
Simulation Ethics
Humans appear to value other humans, but each human values some more than others. In general, humans roughly value themselves the most, then kin and family, followed by past contacts, tribal affiliations, and the vaguely similar.
We can generalize this as a valuation in person-space which peaks at the self identity-pattern and then declines in some complex fashion as we move away to more distant locales and less related people.
If we extrapolate this to a future where humans have the power to create new humans and/or recreate past humans, we can infer that the distribution of created people may follow the self-centered valuation distribution.
Thus recreating specific ancestors or close relations is better than recreating vaguely historical people which is better than creating non-specific people in general.
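The valuation-over-person-space picture can be made concrete with a toy model (my own, hypothetical: the post only says the valuation "declines in some complex fashion", so the exponential decay and the distance values here are invented stand-ins).

```python
import math

def valuation(d: float, scale: float = 1.0) -> float:
    """Toy valuation peaking at the self (d = 0) and decaying with
    identity-distance d in person-space."""
    return math.exp(-d / scale)

# Hypothetical identity-distances from the self.
self_, close_kin, historical_stranger, generic_person = 0.0, 0.5, 2.0, 5.0

values = [valuation(d) for d in (self_, close_kin, historical_stranger, generic_person)]
print(values == sorted(values, reverse=True))  # True: closer in person-space -> valued more
```

Any monotonically decreasing function would do; the only structural claim used in the argument is the ordering.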
Suffering Criticism: An ancestral simulation would recreate a huge amount of suffering.
Response: Humans suffer and live in a world that seems to suffer greatly, and yet very few humans prefer non-existence over their suffering. Evolution culls existential pessimists.
Recreating a past human will recreate their suffering, but it could also grant them an afterlife filled with tremendous joy. The relatively small finite suffering may not add up to much in this consideration. The initial suffering could even enhance, by contrast, the subsequent elevation to a joyful state, but this is speculative.
The utilitarian calculus seems to be: create non-suffering generic people whom we value somewhat less, or recreate initially suffering specific historical people whom we value more. In some cases (such as lost loved ones), the moral calculus weighs heavily in favor of recreating specific people. Many other historical people may be brought along for the ride.
Closed Loops
The vast majority of the hundred billion something humans who have ever lived share the singular misfortune of simply being born too early in earth's history to be saved by cryonics and uploading.
Recreating history up to 2012 would require one hundred billion virtual brains. Simulating history into the phase when uploading and virtual brains become common could vastly increase the simulation costs.
The simulations have the property that they become more accurate as time progresses. If a person is cryonically preserved and then scanned and uploaded, this provides exact information: the simulation will converge to perfect accuracy at that particular moment in time. In addition, the cryonic brain will be unconscious and inactive for a stretch.
Thus the moment of biological death, even if the person is cryonically preserved, could be an opportune time to recycle simulation resources, as there is no loss of unique information (threads converged).
How would such a scenario affect the Simulation Argument? It would seem to shift probabilities such that more (most?) observer-moments are in pre-uploading histories, rather than in posthuman timelines. I find this disquieting for some reason, even though I don't suspect it will affect my observational experience.
Sure. Specifying my position more precisely will take a fair number of words, but OK, here goes.
There are three entities under discussion here:
A = Dave at T1, sitting down in the copier.
B = Dave at T2, standing up from the copier.
C = Copy-of-Dave at T2, standing up from the copier.
...and the question at hand is which of these entities, if any, is me. (Yes? Or is that a different question than the one you are interested in?)
Well, OK. Let's start with A... why do I believe A is me?
Well, I don't, really. I mean, I have never sat down at an identity-copying machine.
But I'm positing that A is me in this thought experiment, and asking what follows from that.
Now, consider B... why do I believe B is me?
Well, in part because I expect B and A to be very similar, even if not quite identical.
But is that a fair assumption in this thought experiment?
It might be that the experience of knowing C exists would cause profound alterations in my psyche, such that B believes (based on his memories of being A) that A was a very different person, and A would agree if he were somehow granted knowledge of what it was like to be B. I'm told having a child sometimes creates these kinds of profound changes in self-image, and it would not surprise me too much if having a duplicate sometimes did the same thing.
More mundanely, it might be that the experience of being scanned for copy causes alterations in my mind, brain, or body such that B isn't me even if A is.
Heck, it's possible that I'm not really the same person I was before my stroke... there are certainly differences. It's even more possible that I'm not really the person I was at age 2... I have less in common with that entity than I do with you.
Thinking about it, it seems that there's a complex cluster of features that I treat as evidence of identity being preserved from one moment to another, none of which is either sufficient or necessary in isolation. Sharing memories is one such feature. Being in the same location is another. Having the same macroscopic physical composition (e.g. DNA) is a third. Having the same personality is a fourth. (Many of these are themselves complex clusters of neither-necessary-nor-sufficient features.)
For convenience, I will label the comparison operation that relies on that set of features to judge similarity F(x,y). That is, what F(A,B) denotes is comparing A and B, determining how closely they match along the various referenced dimensions, weighting the results based on how important that dimension is and degree of match, comparing those weighted results to various thresholds, and ultimately coming out at the other end with a "family resemblance" judgment: A and B are either hashed into the same bucket, or they aren't.
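As a toy sketch of what F might look like (the feature names, weights, and threshold here are invented stand-ins; the comment deliberately leaves F unspecified, and working it out is flagged below as an open cognitive-science problem):

```python
# Hypothetical weights for the neither-necessary-nor-sufficient features.
FEATURE_WEIGHTS = {
    "shared_memories": 0.4,
    "location": 0.1,
    "physical_composition": 0.2,   # e.g. DNA
    "personality": 0.3,
}

def F(x: dict, y: dict, threshold: float = 0.7) -> bool:
    """Family-resemblance judgment: weight the per-dimension degree of
    match, compare the total to a threshold, and hash the pair into the
    same bucket (True) or different buckets (False)."""
    score = sum(w * (1.0 - abs(x[f] - y[f])) for f, w in FEATURE_WEIGHTS.items())
    return score >= threshold

# Feature values in [0, 1] for each entity (invented for illustration).
A = {"shared_memories": 1.0, "location": 1.0, "physical_composition": 1.0, "personality": 1.0}
B = {"shared_memories": 0.95, "location": 1.0, "physical_composition": 1.0, "personality": 0.9}
C_bad_copy = {"shared_memories": 0.2, "location": 0.0, "physical_composition": 0.5, "personality": 0.3}

print(F(A, B))          # True: same bucket, so "B is me"
print(F(A, C_bad_copy)) # False: a poor copy falls outside the bucket
```

The structural point is just that F is a weighted, thresholded comparison yielding a binary bucketing, not a single necessary-and-sufficient criterion.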
So, OK. B gets up from the machine, and I expect that while B may be quite different from A, F(B,A) will still sort them both into the same bucket. On that basis, I conclude that B is me, and I therefore expect that I will get up from the machine.
If instead I assume that F(B,A) sorts them into different buckets, then the possibility that I don't get up from that machine starts to seem reasonable... B gets up, but B isn't me.
I just don't expect that to happen, because I have lots of experiences of sitting down and getting up from chairs.
But of course those experiences aren't probative. Sure, my memories of the person who sat down at my desk this morning match my sense of who I am right now, but that doesn't preclude the possibility that those memories are different from what they were before I sat down, and I just don't remember how I was then. Heck, I might be a Boltzmann brain.
I can't disprove any of those ideas, but neither is there any evidence supporting them; there's no reason for those hypotheses to be promoted for consideration in the first place. Ultimately, I believe that I'm the same person I was this morning because it's simplest to assume so; and I believe that if I wake up tomorrow I'll be the same person then as well for the same reason. If someone wants me to seriously consider the possibility that these assumptions are false, it's up to them to provide evidence of it.
Now let's consider C.
Up to a point, C is basically in the same case as B: C gets up from the machine, and I expect that while C may be quite different from A, F(C,A) will still sort them both into the same bucket. As with B, on that basis I expect that I will get up from the machine (a second time).
If instead I assume that F(C,A) sorts them into different buckets, the possibility that I don't get up from that machine a second time starts to seem reasonable... C gets up, but C isn't me.
So, sure. If the duplication process is poor enough that evaluating the key cluster of properties for C gives radically different results than for A, then I conclude that A and C aren't the same person. If A is me, then I sit down at the machine but I don't get up from it.
And, yes, my expectations about the reliability of the duplication process govern things like how I split my wealth, etc.
None of this strikes me as particularly confusing or controversial, though working out exactly what F() comprises is an interesting cognitive science problem.
Oh, and just to be clear, since you brought up quantum-identity: quantum-identity is irrelevant here. If it turns out that my quantum identity has not been preserved over the last 42 years of my existence, that doesn't noticeably alter my confidence that I've been me during that time.
I'm a bit embarrassed to have made you write all that out in long form. Because it doesn't really answer my question: all the complexity is hidden in the F function, which we don't know.
You suggest F is to be empirically derived by (in the future) observing other people in the same situations. That's a good strategy for dealing with other people, but should I update towards having the same F as everyone else? As Eliezer said, I'm not perfectly convinced, and I don't feel perfectly safe, because I don't understand the problem that is purportedly being solved, even though I seem to understand the solution.