One of the things that I thought about while reading Scott's latest post was that "personal identity is definitely a thing where the tails come apart, too". See also those endless debates over "would a destructive upload of me be me or a copy".
So with happiness, subjective well-being and the amount of positive emotion that you experience are normally correlated, and some people's brains might end up learning "happiness is subjective well-being" while others end up learning "happiness is positive emotions". With personal identity, a bunch of things like "the survival of your physical body", "psychological continuity over time", and "your mind today being similar to your mind yesterday" all tend to happen together, and then different people's brains tend to pick one of those criteria more strongly than the others and treat it as the primary criterion for whether personal identity survives.
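A toy simulation makes "the tails come apart" concrete: two traits can be strongly correlated while their extremes are held by different individuals. This is just an illustrative sketch, assuming a shared underlying factor plus independent noise; all names and numbers here are made up.

```python
import random

# Toy model: "subjective well-being" and "positive emotions" share a common
# underlying factor, plus independent noise. The correlation here is ~0.9.
random.seed(0)
population = []
for _ in range(100_000):
    core = random.gauss(0, 1)                 # shared underlying factor
    well_being = core + random.gauss(0, 0.3)  # "subjective well-being"
    emotions = core + random.gauss(0, 0.3)    # "positive emotions"
    population.append((well_being, emotions))

# Despite the strong correlation, the single highest scorer on one trait is
# almost never the single highest scorer on the other: the tails come apart.
top_well_being = max(range(len(population)), key=lambda i: population[i][0])
top_emotions = max(range(len(population)), key=lambda i: population[i][1])
print(top_well_being == top_emotions)  # almost always False
```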
Right now I can remember the faces of some people I saw at the gym yesterday. Can that memory be reconstructed from my online footprint and from memories of my friends and family? I don't think so. Or are you saying this memory is unimportant to my identity and can be replaced with another similar one? In that case I'd like to understand better what's considered important enough or similar enough.
It seems unlikely that the memory can be reconstructed (although it could be recreated, if the AI happens to know who-all was at the gym that day). Your perspective makes sense to me but for my part I don't think that kind of detail is important to me; it's okay if I wake up missing a lot of minor episodic memories like that.
Interesting point about the public record not revealing private thoughts and feelings. They might be essential, or inconsequential, or somewhere in between. The human mind might be like a hologram: you can break off chunks and lose some fidelity of the image, but the whole of it will still be there, potentially restorable, as long as the remaining resolution is not too low, once you let the resurrected brain fill in the gaps.
Personally, I do not think that there is a universal criterion for the success of "Reconstructive Psychosurgery"; it is up to each person to decide what traits and features are essential to preserve post-op. Some people would require near-perfect preservation, while others, like myself, would settle for the basic outline.
From a multiverse perspective, it might be alright even if you resurrect a version significantly different from the original; see the essay by avturchin. Yudkowsky also discussed it somewhere on Facebook, but I don't know how to find it.
The useful question is about the value of the data that can be collected about people, not so much its usefulness for achieving a particular task, because it may no longer make sense to perform that task once it becomes possible to do so (as in the giant cheesecake fallacy). A system that can reconstruct people from data can do many other things that may be more valuable than reconstructing people, even from the point of view of the people who could've been reconstructed. It's a question of what should be done with the resources and capabilities of that system.
The value of the data lies in its contribution to the value of the best things that can be made, and those best things don't necessarily improve with the availability of that data, because they are not necessarily reconstructed people. I'm not sure there is any difference in value, knowable on a human level, between what can be done with the world given knowledge about particular people who used to live in the past (whether through indirect data or cryopreserved brains) and what can be done without that knowledge. I guess it can't hurt, if it doesn't consume resources that could otherwise find a meaningful purpose.
Previously on AI Reading Group Thoughts
In a highly unofficial meeting of the AI Reading Group (most of the participants happened to be in a room and we started talking about AI; there was no formal meeting time, no reading homework, and no dessert), we meandered around to discussing rescue simulations and the related idea of reconstructing people based on their digital (or physical) footprint plus memories of their friends and family.
We had wildly differing intuitions about reconstructive personhood. Scott thinks it should work fine to just use third-party reports and your Livejournal, and that performing an appropriately superintelligent process on this data should get close enough that he doesn't expect to have a problem with the results (and thinks that "close enough" is a meaningful concept, in the same way that today!Scott is very similar to yesterday!Scott). He's inclined to collapse the question of whether reconstructed people are "really" whoever they're supposed to be into the question of whether they're more or less similar (a distinction that applies just as well intra-lifetime). And he seems to think that humans are made significantly of low-granularity parts like "introversion", and that weird hidden thought processes might turn out to fall out of all the other constraints on the problem, the same way an alien engineer trying to build a car based on watching videos might put an internal combustion engine under the hood even if none of the videos popped the hood.
Kelsey thinks the connectome has got to be enough to work with - it might get "you during a weird dream" or "you during a wacky drug trip", producing temporarily irregular qualia as poorly chosen electrical impulses and chemicals fill in the unknowns about the non-connectome features of your brain, but weird dreams and wacky drug trips are a recoverable state, at least compared to "dead". We might not even need the whole connectome, because we do seem to find people with various kinds of brain damage to be "still themselves".
We had differing intuitions about how much people ever actually have overlapping qualia states in their lifetimes. Kelsey thinks the time she got hydrogen peroxide in her eye she was probably having pretty similar experiences to other mindslices in extreme eye pain, but of course when the pain receded she resumed being Kelsey and didn't proceed to be a different person who remembered being subject to unanaesthetized ophthalmic surgery; there were underlying tendencies in her architecture that restored her traits, if not exact state, once the extreme stimulus was gone. And states more interesting than "extreme eye pain" are probably less likely to be shared between people, given their greater number of details.
I think whether or not reconstruction works depends on contingent facts about human mindspace, which a superintelligence can probably figure out (and apply the answer, if the answer is "yes, it works fine") but which might be really hard to pin down with only conventional brains on the question.
It might be that each bit of information about someone, even noisy information, rules out huge swathes of ways a human could be, and that humans can't be so many different ways that this leaves you meaningfully uncertain about which one you're trying to grab. Maybe we're bad random number generators, or good random number generators but proceeding deterministically from a small number of seeds (not like "five", but maybe like "four hundred thousand") with a small range of values each, interacting in ways that are pretty easy to identify and understand with enough processing power and context information about humans in the general case. As a simplified example, someone in my Discord says that in generation 2 of the Pokémon games, whether a Pokémon is shiny can be derived from information about its speed stat. One can imagine a given monster keeping the same stats whether or not it's shiny, but in fact, within the constraints of the game, some of those combinations are contradictions. Humans aren't trying to fit onto Nintendo cartridges, but we're trying to fit onto hacky wetware; maybe we contain similar weird connections between seemingly unrelated features of the mind, and nobody is both (*rolls dice*) exactly six feet tall and (*spins wheel*) inclined to use topic-comment sentence patterns more than 15% of the time.
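To make the Pokémon example concrete, here's a minimal sketch of the Gen 2 rule, assuming the standard DV mechanics (each of Attack, Defense, Speed, and Special has a hidden 0-15 "DV" that feeds into the visible stats); the function name is mine, not anything from the games' code.

```python
def is_shiny_gen2(attack_dv: int, defense_dv: int, speed_dv: int,
                  special_dv: int) -> bool:
    """In Gold/Silver/Crystal, a Pokémon is shiny exactly when its Defense,
    Speed, and Special DVs are all 10 and its Attack DV is one of eight
    specific values."""
    return (
        defense_dv == 10
        and speed_dv == 10
        and special_dv == 10
        and attack_dv in {2, 3, 6, 7, 10, 11, 14, 15}
    )

# Because the Speed DV feeds into the visible speed stat, observing that stat
# (plus level and stat experience) pins the Speed DV down; any value other
# than 10 rules out shininess entirely. One seemingly unrelated observable
# eliminates most of the possibility space - the kind of weird
# cross-constraint the paragraph above is gesturing at.
```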
It might also be that humans have enough independently varying parts that small bits of noisy information don't tell you much about other features of the person, and major parts of identity just aren't revealed in typical public records or private recollections, however smart the inspecting intelligence is. This seems especially likely to be true of certain kinds of thoughts - memories of dreams that you never discuss, private opinions that you never find a reason to bring up, or even things you did discuss but that only substantially affected your private relationship with someone else who is also dead and can't contribute information to the project of resurrecting you. Are those kinds of things numerous? Are they important? It might depend on the target resurrectee; it might depend on human architecture in general.