Giulio Prisco made a blog post giving permission to use the data in his Gmail account to reconstruct an uploaded copy of himself.


To whom it may concern:

I am writing this in 2010. My Gmail account has more than 5GB of data, which contain some information about me and also some information about the persons I have exchanged email with, including some personal and private information.

I am assuming that in 2060 (50 years from now), my Gmail account will have hundreds or thousands of TB of data, which will contain a lot of information about me and the persons I exchanged email with, including a lot of personal and private information. I am also assuming that, in 2060:

1) The data in the accounts of all Gmail users since 2004 is available.
2) AI-based mindware technology able to reconstruct individual mindfiles by analyzing the information in their aggregate Gmail accounts and other available information, with sufficient accuracy for mind uploading via detailed personality reconstruction, is available.
3) The technology to crack Gmail passwords is available, but illegal without the consent of the account owners (or their heirs).
4) Many of today's Gmail users, including myself, are already dead and cannot give permission to use the data in their accounts.

If all assumptions above are correct, I hereby give permission to Google and/or other parties to read all data in my Gmail account and use them together with other available information to reconstruct my mindfile with sufficient accuracy for mind uploading via detailed personality reconstruction, and express my wish that they do so.

Signed by Giulio Prisco on September 28, 2010, and witnessed by readers.

NOTE: The accuracy of the process outlined above increases with the number of persons who give their permission to do the same. You can give your permission in comments, Twitter or other public spaces.

Ben Goertzel copied the post and gave the same permission on his own blog. I made some substantial changes, such as adding a caveat to exclude the possibility of torture worlds (unlikely, I know, but it can't hurt), and likewise gave permission on my blog. Anders Sandberg comments on the idea.

23 comments

Recreating a person from haphazard data would almost certainly be impossible without an FAI, and with an FAI you don't need to make wishes; it knows better than you whether you should've wished for something or not.

This is probably true, though there's still the possibility of e.g. the FAI having low confidence about what I'd have wished, and the values of the population being such that it chooses to err on the side of not making copies when there's uncertainty. Not a scenario I'd assign a terribly high probability to, but then explicitly giving permission also only took a few minutes.

Your permission doesn't answer the relevant question of whether it should reconstruct you; it only tells it that you think it should (and, of course, it really shouldn't, there are better alternatives).

> of course, it really shouldn't, there are better alternatives

I share this intuition as well and sometimes bring it up during discussions with SIAI people about cryonics. Can you explain your reasoning further? My arguments were like "All the resources an FAI would need to upload cryonics patients would be enough for years and years and years of simulated fun theory agents or whatever an FAI would use computronium for."

In general I guess I just assume that post-Singularity computronium (assuming the FAI doesn't just hack out of any matrices it can, and setting aside acausal trade) would be used for things we can't really anticipate; probably not lots of happy brain emulations. But others think this 'identity' thing is really important to humanity and that we're likely to hold onto it even through volition extrapolation. In response I'm often reminded of Wei's 'Complexity of Value != Complexity of Outcome'.

What are your thoughts on the matter?

I antipredict that "people get reanimated", but it doesn't follow that preserving people's minds using cryonics is morally irrelevant, or less relevant than the corresponding chance of saving a human life. By preserving your mind, you give the future additional information that it can use to produce more value, even if not in the status quo manner (by reanimation).

Agreed, and thanks for sharing.

I don't want to attain immortality through my emails and blogs; I want to attain immortality through not dying!

(Am I willing to settle for second best? You bet I am.)

I don't see this overcoming the information loss involved in running the system backwards.

Let's say I write a piece based upon my own life, Harlan Ellison style. You have the finished product, but there are a billion different ways that product could have been arrived at. You can't determine which night of the week I stayed up agonizing over it, how much whiskey went into its production, or how long it sat around in my head before coming out on paper.

And that's only the first difficulty.

Furthermore, as Ellison once quipped, he doesn't write about positive memories because they're none of our damned business. Even assuming that his stories (or my stories) are 100% factually accurate, there are going to be huge gaps where a million different things could fit.

Google's library on us could narrow down the possible mind-space significantly, but not to a T. The construct would end up being very similar to me, but I doubt it would actually be me. A very large number of people could have lived the life evidenced by my Google archive.

There is a discount factor: what you are today is most relevant for the quality of what you can do today, and less so for the quality of what you can do in 50 years.

I'm sort of dubious that this is even possible, and even if it were, it wouldn't be me since it wouldn't share continuity with me. It would likely, at best, just be an AI taught to pretend to be me.

What is 'continuity'? Why is continuity important to you? What changes in anticipated experience would you expect if continuity was important to identity versus not important to identity, and why do you have a preference for one over the other?

Continuity of consciousness, and it's important because without it, the me that results from the uploading isn't this me, just an instance of me, and those are not the same thing. The fact that there is a copy of me going around does not change the fact that this instance of me is dead.

Whenever you enter deep sleep you lose continuity of consciousness. Whence your intuition that continuity is important? Are you not impressed by timeless physics, or by Tegmark's multiverses? In a spatially infinite universe with particles being indistinguishable and whole Hubble volumes also being indistinguishable (the standard cosmological position), in what sense are different 'you's actually different people, even if there is no causal connection between them?

> The fact that there is a copy of me going around does not change the fact that this instance of me is dead.

But does it even matter? If it looks like a you, thinks like a you, cares about all the same things you do, then your utility function should probably just consider it a you.

If you believe in Tegmark's multiverse, what's the point of uploading at all? You already inhabit an infinity of universes, all perfectly optimized for your happiness.

Personally I'm very inclined toward Tegmark's position and I have no idea how to answer the above question.

Infinity, yes, but the relative sizes of infinity matter. There's also an infinity of universes of infinite negative utility. Uploading yourself is increasing the relative measure of 'good' universes.

This is especially true if you think of 'measure' or 'existence' being assigned to computations via a universal prior of some kind as proposed by Schmidhuber and almost everyone else (and not a uniform prior as Tegmark tended towards for some reason). You want as large a swath of good utility in the 'simple' universes as possible, since those universes have the most measure and thus 'count' more according to what we might naively expect our utility functions to be.
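To make that weighting concrete (a sketch of the standard construction, not something spelled out in this thread): a Solomonoff-style universal prior gives a computation or universe $x$ a measure of roughly

$$m(x) \approx 2^{-K(x)},$$

where $K(x)$ is the length in bits of the shortest program that outputs $x$. Universes with short descriptions therefore carry exponentially more measure than complicated ones, which is why good outcomes located in 'simple' universes count for more.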

Uploading in a simple universe would thus be worth significantly more utility than the infinity of universes all optimized for your happiness.

That said, it's likely that our intuitions about this are all really confused: UDT is the current approach to reasoning about these issues, and I'm not fit to explain the intuitions or implications of UDT. Wei? Nesov? Anyone like to point out how all of this works, and how anthropics gets dissolved in the meantime?

How do you do that quote thing, anyway?

In any case, you don't lose continuity of consciousness when asleep; your brain keeps ticking away in the background as it does its maintenance routines. Never heard of timeless physics or Tegmark's multiverses; looking at the links, though, I don't really see the relevance of them. I do in fact believe in the many-worlds theory of quantum physics; that there is a nigh-infinite number of copies of me does not mean that this particular instance of me is unimportant to me (and, of course, a good chunk of those other mes likely feel the same).

If someone built a teleporter that created a copy of me, said copy of me would be an instance of the class of persons who refer to themselves as "nick012000" on the internet; however, creating another instance of said class does not mean that it is the same as another instance of said class. To use a programming metaphor, it'd be like saying that "Nick012000 nick1 = new Nick012000(); Nick012000 nick2 = (Nick012000) nick1.clone();" produces one variable. It doesn't; it produces two.

To continue the programming metaphor, I also wouldn't join a hivemind, since that would turn that particular instance of the Nick012000 class into just a data field in that instance of the Hivemind class, but I would be okay with creating a hivemind with multiple blank bodies with my mind written onto them, since that would just be like running "Nick012000 nick1 = new Nick012000(); Nick012000 nick2 = nick1;", with both variable names referring to the same object.
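To spell the metaphor out in runnable form (a minimal sketch; the Nick012000 class is just the placeholder name from the comment above, and the field is invented for illustration):

```java
// Cloning yields two distinct objects; plain assignment yields two
// references to one object. Names here are purely illustrative.
public class IdentityDemo {

    static class Nick012000 implements Cloneable {
        String memories = "archived email";

        @Override
        public Nick012000 clone() {
            try {
                return (Nick012000) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e); // unreachable: we implement Cloneable
            }
        }
    }

    public static void main(String[] args) {
        Nick012000 nick1 = new Nick012000();
        Nick012000 nick2 = nick1.clone(); // a copy: a second, separate instance
        Nick012000 nick3 = nick1;         // an alias: the very same instance

        System.out.println(nick1 == nick2); // false -- two objects now exist
        System.out.println(nick1 == nick3); // true  -- still only one object
    }
}
```

On this reading, the teleporter/upload case is the clone() line, and the blank-bodies-running-one-mind case is the plain assignment.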

And, yes, it would matter to my utility function, since my utility function gives a strong positive weight both to the continued existence of the class of persons who refer to themselves as "nick012000" on the Internet and to the particular instance of said class that is evaluating and executing said utility function.

> How do you do that quote thing, anyway?

I refuse to tell you! Just kidding. You preface a line or block of text with the '>' symbol followed by a space. You can click the little green help button on the bottom right of the comment window to see other kinds of formatting (it should really be called something else, I know).
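For example, typing "> like this" at the start of a line in the comment box renders the text as an indented quotation (the snippet here is just an illustration).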

> Never heard of timeless physics or Tegmark's multiverses; looking at the links, though, I don't really see the relevance of them.

I highly recommend reading Tegmark's popular science paper on multiverses, it's an excellent example of clear and concise science writing.

I think I understand your position better now, thanks for clarifying.

> I refuse to tell you! Just kidding. You preface a line or block of text with the '>' symbol followed by a space. You can click the little green help button on the bottom right of the comment window to see other kinds of formatting (it should really be called something else, I know).

Thanks.

> I highly recommend reading Tegmark's popular science paper on multiverses, it's an excellent example of clear and concise science writing.

I'll probably do so, once I have the time. I'm procrastinating from doing university stuff at the moment.

> I think I understand your position better now, thanks for clarifying.

No worries! I think I might have edited it after you posted, though.


I agree--the whole idea of writing oneself into the future seems extremely implausible, especially using something like email.

Much more plausible is the notion that you could modify a "standard median mind" to fit someone's writing. But I suspect that most of the workings of such a creation would come from the standard model rather than from the writings, and also that this is not what people have in mind as far as "writing oneself into the future" goes.


I agree. I don't see how even an FAI could reproduce a model of your brain that is significantly more accurate than a slightly modified standard median mind. Heck, even if an FAI had some parts of your brain preserved and some of your writings (e.g. email) I'm not sure it could reproduce the rest of you with accuracy.

I think this is one of those domains where structural uncertainty plays a large part. If you're talking about a Bayesian superintelligence operating at the physical limits of computation... I'd feel rather uneasy making speculations as to what limits it could possibly have. In a Tegmark ensemble universe, you get possibilities like 'hacking out of the matrix' or acausal trade or similar AGI meta-golden-rule cooperative optimization, and that's some seriously powerful stuff.


What do you mean by "continuity", and why is it important to identity?