Cryonics appears to be the best hope for continuing a person's existence beyond physical death until other technologies provide better solutions.  But despite its best-in-class status, cryonics has several serious downsides.

First and foremost, cryonics is expensive: priced well beyond what even a third of humanity can afford.  Economies of scale may eventually bring the cost down, but in the meantime billions of people will die without the benefit of cryonics, and even when the cost bottoms out, it will likely still be too expensive for people living at subsistence levels.  Second, many people consider cryonics immoral or at least socially unacceptable, so even those who accept the idea of cryonics and want to pursue it are usually pressured out of signing up.  Combined, these two forces shrink the pool of people who will actually sign up for cryonics to a small fraction of a percent of the human population.

Given that cryonics is effectively not an option for almost everyone on the planet, if we're serious about preserving lives into the future then we have to consider other options, especially ones that are morally and socially acceptable to most of humanity.  Pushed by my own need to find an alternative to cryonics, I began trying to think of ways I could be restored after physical death.

If I am unable to preserve the physical components that currently make me up, the next best thing I can do is record, in as much detail as possible, how those components function.  Since we don't yet have the brain emulation technology that would make cryonics irrelevant for the still living, I need a lower-tech way of making a record of myself.  And of all the ways I might try to record myself, none seems to better balance robustness, cost, and detail than writing.

Writing myself into the future—now we're on to something.

At first this plan didn't feel like such a winner, though:  How can I continue myself just through writing?  Even if I write down everything I can about myself—memories, medical history, everything—how can that really be all that's needed to restore me (or even most of me)?  But when we begin to break down what writing everything we can about ourselves really gives us, writing ourselves into the future begins to make more sense.

For most of humanity, what makes you who you are is largely the same from person to person.  Since percentages would make it seem that I have too precise an idea of how much, let's put it like this:  up to your eyebrows, all humans (except those with extreme abnormalities) are essentially the same.  Because we share the same evolutionary past as all of our conspecifics, the biology and psychology of our brains are statistically the same.  We each have our quirks of genetics and development, but even those are statistically similar among people who share them.  Thus with just a few bits of data we can already record most of what makes you who you are.

Most people find this idea unsettling when they first encounter it and have an urge to look away or disagree.  "How can I, the very unique me, be almost completely the same as everyone else?"  Since this is Less Wrong and not a more general forum, though, I'll assume you're still with me at this point.  If not, I recommend reading some of the introductory sequences on the site.

So if we begin with a human template and add in a few modifiers for particular genetic and developmental quirks, we get a sort of blank human that takes us most of the way to restoring you after physical death.  To complete the restoration, we need to inject the stuff that sets you apart even from your fellow humans who share your statistically regular quirks:  your memories.  If the record of your memories is good enough, this should effectively create a person who is so much like you as to be indistinguishable from the original, i.e. restore you.
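
To make the decomposition concrete, here's a toy sketch (purely illustrative; every field name below is hypothetical, not a serious schema):

```python
from dataclasses import dataclass, field

# Toy model of the restoration recipe described above: a shared human
# template, a short list of quirk modifiers, and a per-person memory
# record.  Purely illustrative.
@dataclass
class RestorationRecord:
    template: str = "standard human baseline"    # shared by everyone
    quirks: list = field(default_factory=list)   # genetic/developmental modifiers
    memories: list = field(default_factory=list) # the uniquely identifying part

me = RestorationRecord(
    quirks=["left-handed", "myopic"],
    memories=["my first day of school was...", "I met my best friend when..."],
)
# Nearly all of the specification is shared; only the memories (and a
# few quirks) have to be recorded per person.
```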

But, you may ask, is this restoration of you from writing really still you in the same way that the you restored from cryonics is you?  Maybe.  To me, it is.  Despite what subjective experience feels like, there doesn't seem to be anything in the brain that makes you who you are besides the general process of your brain and its memories.  Transferring yourself from your current brain to another brain or a brain emulation via writing doesn't seem that much different from transferring yourself via neuron replacement or some other technique except that writing introduces a lossy compression step, necessitated only by a lack of access to better technology.  Writing yourself into the future isn't the best solution, but it does seem to be an effective stopgap to death.


If you're still with me, we have a few nagging questions to answer.  Consider this an FAQ for writing yourself into the future.

How good a record is good enough?  In truth, I don't think we even know enough to get the order of magnitude right.  The best I can offer is that you need to record as much as you are willing to.  The more you record, the more there will be to work with, and the less chance there will be of insufficient data.  It may turn out that you simply can't record enough to create a good restoration of a person from writing, but this is little different from the risk in cryonics of not being well preserved enough to restore despite best efforts.  If you're willing to take the risk that cryonics won't work as well as you hope, you should be willing to accept that writing yourself into the future might not work as well as you hope.

How is writing yourself into the future more socially acceptable than cryonics?  Basically, because people already do this all the time, although not with an eye toward their eventual restoration.  People regularly keep journals, write blogs, write autobiographies, and pass on stories of their lives, even if only orally.  You can write a record of yourself, fully intending for it to be used to restore you at some future time, without ever having to do anything that is morally or socially unacceptable to other people (at least, in most societies), other than perhaps specifying in your record that you want it to be used to restore you after you die.

How is writing yourself into the future more accessible to the poor?  If a person is literate and has access to some form of durable writing material and reliable storage, they can write themselves into the future.  Of course, many people are not literate, but the cost of teaching literacy is far lower than the cost of cryonics, and literacy has benefits beyond writing yourself into the future, so increasing literacy is an easy sell even to people who are opposed to the idea of life extension.

Will the restoration really be me?  Let me address this in another way.  You, like everything else, are a part of the universe.  Unlike what we believe to be true of most of the stuff in the universe, though, the stuff that makes up what we call you is aware of its existence.  As best we can tell, you are aware of your existence because you have a way of recalling previous events from it.  If we take away the store and recall of experience, we're left with some stuff that can do essentially everything it could when it had memory, but will have no concept of existing outside the current moment.  Put the store and recall back in, though, and suddenly what we would recognize as self-awareness returns.

Other questions?  Post them and I'll try to address them.  I have a feeling that there will be some strong disagreement from people who disagree with me about what self-awareness means and how the brain works, and I'll try to explain my position as best I can to them, but I'm also interested in any other questions that people might have since there are likely many issues I haven't even considered yet.

147 comments

Writing is extremely low-bandwidth. If I recall correctly, Shannon did some experimentation and found that English carries no more than about a bit of information per letter, and I've seen other estimates that put it below a bit. (In comparison, depending on language and encoding, a character can take up to 32 bits to store uncompressed. Even ASCII requires 8 bits/1 byte per character.) And given the difficulty of producing a megabyte of personal information, and the vast space of potential selves...
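
(To put a number on that: below is a sketch of a naive unigram entropy estimate. It ignores all context between letters, so it gives only a loose upper bound on the per-letter rate Shannon measured; the sample text is arbitrary.)

```python
import math
from collections import Counter

def unigram_entropy_bits(text: str) -> float:
    """Naive per-character entropy estimate (bits/char).

    Ignores inter-letter context entirely, so it overestimates the
    true rate; Shannon's prediction experiments put English closer
    to ~1 bit per letter.
    """
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 100
print(f"{unigram_entropy_bits(sample):.2f} bits/char, vs 8 bits/char for ASCII")
```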

If we're going to try to preserve ourselves through recorded information, wouldn't it make much more sense to instead spend a few hundred/thousand dollars on lifelogging? If you really do record your waking hours, then preservation of your writings is automatically included - as well as all the other stuff. Plus, this solves the issue of mundane experiences.
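
(A back-of-the-envelope sketch of the storage side; every bitrate below is an assumption chosen for illustration, not a measured figure.)

```python
# Rough storage estimate for lifelogging; all bitrates are assumptions.
HOURS_AWAKE_PER_DAY = 16
SECONDS_PER_YEAR = HOURS_AWAKE_PER_DAY * 3600 * 365

audio_bps = 32_000       # ~32 kbps speech-quality audio (assumed)
video_bps = 1_000_000    # ~1 Mbps modest-quality video (assumed)

audio_gb_per_year = audio_bps / 8 * SECONDS_PER_YEAR / 1e9
video_gb_per_year = video_bps / 8 * SECONDS_PER_YEAR / 1e9

print(f"audio: ~{audio_gb_per_year:.0f} GB/year")   # ~84 GB/year
print(f"video: ~{video_gb_per_year:.0f} GB/year")   # ~2600 GB/year
```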

EDIT: I've put up some notes at https://www.gwern.net/Differences

3groupuscule14y
Writing might be inferior to lifelogging as a way of preserving yourself, but it might actually be better than lifelogging as a way of having a specific type of impact on the future. Since neither form of reconstruction is going to provide the same type of experiential immortality as cryonics potentially would, why not attempt to reincarnate your ideal self? (As far as general anthropological data goes, there's going to be plenty of footage of average schmucks doing random stuff.)
2gwern12y
Curiously, there seem to be people interested in actually doing this sort of thing; see "The Terasem Mind Uploading Experiment". Not sure how seriously to take them, but my own readings have been inclining me to the point of view that personal identities are just not that information-rich. Also relevant is another paper in that issue, on very large scale use of personality questionnaires: http://www.worldscinet.com/ijmc/04/0401/S1793843012400082.html
3Mati_Roy4y
first link is broken, but available on the Wayback Machine: The Terasem Mind Uploading Experiment second link is broken; was it linking to this article?: How Accurate Are Personality Tests?
1CarlShulman12y
Could you say more about said research? My sense is that people can be flexible about preserving a tiny portion of their unique information, e.g. that many people would be very happy to forget most of their daily experiences from their lives so far (replacing them with brief text files, records of major relationships and their emotional intensities, that sort of thing) in exchange for greatly extended life in the same body. But the "mindfile" backup method inevitably involves a chance for the original to diverge and evoke intuitions that "the thing over there, which I perceive as separate" doesn't provide continuity.

Could you say more about said research?

It's basically as above. Traits like IQ offer remarkable predictive power; Big Five on top of IQ allows more prediction, and the second paper's Small 100 seems like it'd add nontrivial data if anyone runs it on a suitable database to establish what each trait does. Much of the remaining variation can be traced to the environment, which obviously doesn't help in establishing that human personalities are extremely rich & complex.

Long-term memory is much smaller than most would guess when talking about 'galaxies of galaxies of neurons', and autobiographical memory is famously malleable and more symbolic than sensory. (Like dreams: they seem lifelike, detailed, and amazing computational feats, but if you try to actually test the detail, like reading a book in your dream, you'll usually fail.) Skills don't involve very much personality, either, since there are so many ways to be a bad amateur and so few to be an expert (consider how few items it takes to make a decent expert system - not millions and billions!), and they are measurable anyway. "The mind is flat", one might say. Once you get past the (very difficult) tasks of perceiving and modeling the

... (read more)
2jimrandomh14y
Unfortunately, lifelogging is illegal in my home state, and in many other places. Specifically, it is illegal to record audio here without informing all the parties being recorded, which is prohibitively impractical when you want to record 24/7. (There is no similar restriction for video, but video is likely to be less useful for reconstruction purposes than audio.)
7gwern14y
That's unfortunate. I guess you would want a discreet camera until the laws become more sensible.
2thomblake14y
That's terrible - it's clearly a rights violation to disallow recording in public. Based on this guide it looks like only a few states require consent of all parties, and Vermont is the only one with basically no restrictions on recording. Of course, having a camera/recorder in plain view tends to entail that consent is assumed, so maybe lifelogging sans the hidden camera is in order.
2Kaj_Sotala14y
What's the situation with commercially available lifelogging software/hardware? Can I just put in some money to get a recorder and start using that, or does it still involve a lot of customization to get something that might or might not work very well for the purpose?
3gwern13y
I've started a discussion on the topic here: http://lesswrong.com/r/discussion/lw/2vv/lifelogging_the_recording_device/
2gwern14y
I'm afraid I couldn't really say. I have seen the specs of enough small digital cameras and surveillance devices to know that decent quality 8 or 16 hour products using Flash are perfectly possible (and hard drive space is now so cheap as to not be worth discussing). But I have yet to hear of anything that strikes me as ideal. Perhaps some other commentators know more.
-1jaimeastorga200012y
But that does not have the advantages over cryonics that gworley uses to argue for writing. Cryonics at its cheapest costs $1250 for a lifetime CI membership plus the recurring life insurance payments. An initial investment in lifelogging combined with periodically maintaining and/or replacing equipment ought to be comparable (although you could count on technological advancement to bring these costs down as time goes on). And I don't think lifelogging is significantly more socially acceptable than cryonics.
1gwern12y
The membership is not even the costliest part. The insurance costs you a hard drive a month or more, and per Kryder's law and general consumer electronics, the disparity gets worse every year as the cost of lifelogging drops like a stone. Alcor runs at an annual loss and by definition is underpricing its services; I suspect CI is too. Inflation is a major issue which will push up prices much higher than they currently are, which is what the current wrangling over 'grandfathering' is about - people have bought far too little insurance. tl;dr: cryonics is more expensive than lifelogging. Cryonics will only get more expensive; lifelogging will only get cheaper. You do the math.
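
(Doing that math with illustrative numbers; every figure below is an assumption for the sketch, not actual Alcor/CI or retail pricing.)

```python
# Rough monthly-cost comparison; all figures are assumptions.
insurance_per_month = 80.0    # "a hard drive a month", per the comment above
drive_price_per_tb = 80.0     # ~1 TB consumer drive, circa 2012 (assumed)
lifelog_tb_per_month = 0.25   # ~2.6 TB/year of modest-bitrate video

lifelog_storage_per_month = lifelog_tb_per_month * drive_price_per_tb
print(f"cryonics insurance:  ${insurance_per_month:.0f}/month")
print(f"lifelogging storage: ${lifelog_storage_per_month:.0f}/month")  # ~$20/month
```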

Dear Diary,

My doctor told me today that of all the Elven Jedi he has ever treated I have by far the largest penis...

(Words we write down are only very loosely correlated with who we are.)

(Words we write down are only very loosely correlated with who we are.)

No, it says something about you that you made that joke and Morendil made the other. (this is also a response to nickernst's comment about not knowing himself)

0Blueberry14y
Can you elaborate on what you think it means about them?
6Douglas_Knight14y
No, I cannot, just like nickernst cannot do a core dump. But a lot of information leaks out that might be exploitable by a superintelligence. Cryonics has the advantage of not requiring a superintelligence, probably only nanotech.
4wedrifid14y
Well, for a start, that we're funny guys. ;) But it also hints at my general cognitive processes. When I encounter a concept I understand it by exploring the extremes, the boundary cases and exceptions (even in my own thoughts). The exploits. That I chose to make a joke rather than a criticism in this context is also fairly indicative of my personality, although the inference there is somewhat more difficult. It would require looking at my social responses to various other situations.

Also, using the terms "Elven Jedi" is a pretty clear indicator of you being more affiliated than average with the scifi/fantasy geekdom. Choosing to include "sexual" content such as the size of one's penis also says something - there are lots of people who'd be too prudish to go even that far. Some weak inferences could probably be also drawn from the fact that you used the expression "Dear Diary".

None of this would be enough to justify any firm conclusions by itself, of course. But combined with a large enough number of other weak pieces of evidence...

4Alicorn14y
I will never write an epistolary work of fiction again!
7gwern13y
I'm sorry, the 'I will never X again!' snowclone just tagged you as a white 21st century American in the top decile of intelligence who has spent a great deal of time reading webcomics in the style of or affiliated to Dinosaur Comics, with all that implies.
2jimrandomh13y
What's wrong with being a white, 21st century American in the upper decile of intelligence who has read Dinosaur Comics?

Alicorn's disavowal is due to fear of something learning about her through a series of weak inferences; however, ironically, her disavowal/vow to avoid revealing data useful for weak inference is itself grist for the weak inference mill. (And hopefully my example inferences show that the weak inferences may not be all that weak.)

That something learning about her might be entirely neutral. On the other hand, it might be an unFriendly AI programmed by Black Panthers who were picked on as kids by nerds and are irritated by Ryan North's longboard stylings.

1kodos9614y
This made me literally LOL. Uncontrollably. For about a minute. My coworkers are looking at me funny now.

Unfortunately people who can't afford cryonics are unlikely to have the time or resources to create meticulous records of themselves. When you consider the opportunity cost of creating such records, the actual materials needed, and the cost of preserving them reliably for at least several decades, it isn't obvious that this is much cheaper than cryonics.

There's also the problem that most people don't consider 'make a perfect copy of me' and 'bring me back to life' to be equivalent operations, and the ones who do are almost all Western intellectual types who could easily afford cryonics if they actually wanted to. The world’s poor almost all see their personal identity as tied to their physical body, so this kind of approach would seem pointless to them.

3Gordon Seidoh Worley14y
I agree with you here in that almost no one, especially the world's poor, will consider this a valid means of coming back to life. But, then, that's sort of the point. Depending on how you present it, you can potentially get people to keep these kinds of writings even if they don't believe the writings will extend their lives in any meaningful way, so that they won't be completely lost just because they didn't believe it was possible to come back from a biological death. And it lets those who do believe it will let them come back to life pursue their interest without hitting against social backlash.
0ewbrownv14y
What are you going to tell an illiterate subsistence farmer in Bangladesh that will convince him to put an hour a week into recording his life instead of feeding his family? I think you greatly underestimate the difficulty of implementing a scheme like this, and overestimate the chance that the effort will actually accomplish anything. If you really want to save lives in the Third World you'd have a bigger impact donating to traditional charity efforts.
0ocr-fork14y
Writing isn't feasible, but lifelogging might be. (see gwern's thread). The government could hand out wearable cameras that double as driving licenses, credit cards, etc. If anyone objects, all they have to do is rip out the right wires.
1DSimon14y
I object a great deal! Once we're all carrying around wearable cameras, the political possibility of making it illegal to rip out the wires would seem much less extreme than a proposal today to introduce both the cameras and the anti-tampering laws. Introducing these cameras would be greasing a slippery slope. I'd rather keep the future probability for total Orwellian surveillance low, thanks.
4gwern14y
Have you read David Brin's The Transparent Society? Surveillance societies are already here (look at London and its million-plus cameras), and purely on the side of the authorities. Personal cameras at least may help even the scales.

I find most of the public debates on these issues rather myopic, in that they focus on the issue of surveillance by governments as the main problem. What I find to be a much more depressing prospect, however, are the consequences of a low-privacy society that may well come to pass through purely private institutions and transactions.

Even with the most non-intrusive and fair government imaginable, if lots of information about your life is easily available online, it means that a single stupid mistake in life that would earlier have only mild consequences can ruin your reputation forever and render you permanently unemployable and shunned socially. Instead of fading memories and ever more remote records about your past mistakes, they will forever be thrown right into the face of anyone who just types your name into a computer (and not to even mention the future more advanced pattern-matching and cross-referencing search technologies). This of course applies not just to mistakes, but also to any disreputable opinions and interests you might have that happen to be noted online.

Moreover, the social norms may develop to the point where it's expected that you constantly log the details ... (read more)

7Kaj_Sotala14y
I've heard this opinion expressed frequently, but it always seems to kind of contradict itself. If there's lots of information available about everyone, and all kinds of stupid mistakes will easily become permanently recorded... ...then wouldn't that lead to just about everyone's reputation being ruined in the eyes of everyone? But that doesn't make any sense - if almost everyone's going to have some stupid mistakes of theirs caught permanently on file, then all that will happen is that you'll find out you're not the only one who has made stupid mistakes. Big deal. In fact, this to me seems potentially preferable to our current society. Right now, people's past mistakes get lost in the past. As a result, we construct an unrealistic image where most people seem far more perfect than they actually are. Some past mistake coming out might ruin someone's reputation, and people who have made perfectly normal and reasonable mistakes will feel a lot more guilty about it than would be warranted. If the mistakes everyone had made were available, then we wouldn't have these unrealistic unconscious conceptions of how perfect people must be. Society might be far healthier as a result.

Kaj_Sotala:

But that doesn't make any sense - if almost everyone's going to have some stupid mistakes of theirs caught permanently on file, then all that will happen is that you'll find out you're not the only one who has made stupid mistakes.

There are at least three important problems with this view:

  • First, this is only one possible equilibrium. Another possibility is a society where everyone is extremely cautious to the point of paranoia, so that very few people ever commit a faux pas of any sort -- and although most people would like things to be more relaxed, they're faced with a problem of collective action. I don't think this is at all unrealistic -- people living under repression quickly develop the instinct to watch their mouth and behavior obsessively.

  • Second, even under the most optimistic "good" equilibrium, this argument applies only to those behaviors and opinions that are actually widespread. Those whose unconventional opinions and preferences are in a small minority, let alone lone-wolf contrarians, will have to censor themselves 24/7 or suffer very bad consequences.

  • Third, many things people dare say or do only in private are not dangerous because

... (read more)
1[anonymous]14y
I think it can apply even to minority opinions, because the minority opinions add up. Even if only 1% of the population has a given minority opinion, significantly more than 1% of the population is probably going to have at least one minority opinion about something. If people choose to be super-intolerant of 1% opinions, and if 70% of the population has at least one 1% opinion, then it's not 1% of the population that people will have to be super-intolerant of, but 70% of the population. Or if 70% seems too extreme a possibility, try 30%. The point is that the sum total of small minorities adds up to a total that is less small, and this total will determine what happens. Take the extreme case: suppose the total adds up to 100%, so that 100% of the population holds at least one extreme-minority opinion. Can a person afford to ostracize close to 100% of the population (consisting of everybody who has at least one extreme-minority opinion that he does not share)? I think not. Therefore he will have to learn to be much more tolerant of extreme-minority opinions. While that is only the extreme case, and 30% is not 100%, I think the point is made, that the accumulated total of all people who have minority opinions matters, and not merely the total for each minority opinion.
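
(The arithmetic behind this point, under the simplifying assumption that the rare opinions are held independently of one another:)

```python
# Probability that a person holds at least one 1%-rare opinion,
# assuming k independent such opinions exist in the culture.
for k in (10, 50, 120, 300):
    p_at_least_one = 1 - 0.99 ** k
    print(f"k={k:4d}: {p_at_least_one:.0%}")
# k=  10: 10%
# k=  50: 39%
# k= 120: 70%
# k= 300: 95%
```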
1Sticky14y
It seems unlikely that people would think that way. Taking myself as an example, I favor an extensive reworking of the powers, internal organization, and mode of election of the U.S. House of Representatives. I know that I'm the only person in the world who favors my program, because I invented it and haven't yet described it completely. I've described parts of it in online venues, each of which has a rather narrow, specialist audience, so there might possibly be two or three people out there who agree with me on a major portion of it, but certainly no one who agrees on the whole. That makes me an extreme minority. There are plenty of extreme minorities I feel no sympathy for at all. Frankly, I think moon-hoax theorists should be shunned.
3[anonymous]14y
You are not facing the situation I'm describing, because it hasn't happened yet. It is a future speculation that would occur in a sufficiently transparent society. As long as you are unaware of most people's odd opinions, you can afford to shun the tiny minority of odd thinkers whose odd thoughts you are aware of, because in doing so you are only isolating yourself socially from that tiny minority, which is no skin off your nose. However, in a sufficiently transparent society you may, hypothetically, discover that 99% of everyone has at least one opinion which (previously) you were ready to shun a person for. In that hypothetical case, if you continue your policy of shunning those people, you will find yourself socially isolated to a degree that a homeless guy living under a bridge might feel sorry for. In that hypothetical situation, then, you may find yourself with no choice but to relax your standards about whether to shun people with odd opinions. On second thought, in a sense it has happened. I happen to live in that world now, because I happen to think that pretty much everybody has views about as batty as moon-hoax theorists. In reaction to finding myself in this situation, I am not inclined to shun people who espouse moon-hoax-theory-level idiocy, because I would rather have at least one or two friends.
3reaver12114y
You're assuming that because someone has made mistakes themselves they will judge others less harshly. That is not necessarily the case. Besides, most people do indeed make mistakes, but not the same mistakes. If your boss is a teetotaler and you are a careful driver, you are not going to think well of each other if you get drunk and your boss gets into a car accident. Even I have the same problem. I tend to procrastinate, so if a coworker is past his deadline I don't really care. But I dislike sloppy thinking and try to eradicate it in myself, so it really gets on my nerves if someone goes all irrational on me. (Although I seem to be getting better as I get older at accepting that most people don't think like me.)
4[anonymous]14y
Actually, I don't think that's the only or most important factor. People who learn about the skeletons in your closet will compare you, not only to themselves, but to other people. If everyone has skeletons in their closet and everyone knows about them, then your prospective employer Bob (say) will be comparing the skeletons in your closet not only to the skeletons in his own closet, but more importantly to the skeletons in the closets of the other people who are competing with you for the same job. As for people not making equal mistakes, to put it in simple binary terms merely for the purpose of making the point, divide people into "major offenders" and "minor offenders" and suppose major offenders are all equally major and minor all equally minor. If major offenders outnumber minor offenders, then being a major offender is not such a big deal since you're part of the majority. But if minor offenders outnumber major offenders, then only a minority of people will be major offenders and therefore only a minority will have to worry about a transparent society. So either way, the transparent society is not that big a thing to fear for the average person. It's a self-limiting danger. The more probable it is that the average person will be revealed to have Pervert Type A, the greater the fraction of people who will be revealed to be Pervert Type A, and therefore the harder it will be for other people to ostracize them, since to do so would reduce the size of their own social network.
0reaver12114y
Good points. Just read the whole conversation between you and Vladimir_M and I agree it could go both ways.
3NancyLebovitz14y
One part of how that plays out depends on whether there's a group that can enforce "it's different when we do it."
6gwern14y
An overall view of the 20th century would note that one's own government is a major threat to one's life. I don't especially see why one would think this has ceased to be true in the 21st; history has seen many sclerotic regimes pass and be replaced by fresher ones, and a one-way surveillance society would only enhance government power. Why do you think social norms are a greater threat?
7Vladimir_M14y
Well, that's a complex topic that can't possibly be done justice to in a brief comment. But to put it as succinctly as possible, modern governments are already so powerful that given the existing means at their disposal, additional surveillance won't change things much. Your argument can in fact be used to argue against its relevance -- all the sundry 20th century totalitarians had no problem doing what they did without any surveillance technology to speak of. My view, which would take much more space than is available here to support by solid arguments, is that the modern Western system of government will continue sliding gradually along the same path as now, determined by bureaucratic inertia and the opinions fashionable among high-status groups; both these things are fairly predictable, as far as any large-scale predictions about human affairs go. Whether these developments should be counted as good or bad, depends on many difficult, controversial, and/or subjective judgments, but realistically, even though I'm inclined towards the latter view, I think anyone with a little prudence will be able to continue living fairly comfortably under the government's radar for the foreseeable future. Even in the conceivable scenarios that might end up in major instability and uncertain outcomes, I don't think surveillance technology will matter much when it comes to the trouble that awaits us in such cases. On the other hand, I see a very realistic prospect of social norms developing towards a zero-privacy world, where there would be no Orwellian thought police coming after you, but you would be expected to maintain a detailed public log of your life -- theoretically voluntarily, but under the threat of shunning and unemployability in case you refuse it. Already, employers, school admissions bureaucrats, etc. are routinely searching through people's trails left on Facebook and Google. What happens when an even greater portion of one's life will be customarily posted online?
7gwern14y
While we're simply stating our beliefs... I view this as merely a transition period. You say we cannot both maintain our old puritanical public standards and ever increasing public disclosure. I agree. However, the latter is driven by powerful and deep economic & technological & social trends, and the former is a weak creature of habit and tradition which has demonstrated in the 20th century its extreme malleability (just look at homosexuality!). It is a case of a movable object meeting an unstoppable force; the standards will be forced to change. A 10 year old growing up now would not judge harshly an old faux pas online, even if the 30 and 40 year olds currently in charge would and do now judge harshly. Those 30 and 40 year olds' days are numbered.
6Vladimir_M14y
On the whole, I don't think that people are becoming more tolerant of disreputable behaviors and opinions, or that they are likely to become so in the future -- or even that the set of disreputable traits will become significantly smaller, though its composition undoubtedly will change. Every human society has its taboos and strong status markers attached to various behaviors and opinions in a manner that seems whimsical to outsiders; it's just that for the last few decades, the set of behaviors and opinions considered disreputable has changed a lot in Western societies. (The situation is also confused by the fact that, similar to its inconsistent idealization of selectively applied "free-thinking," our culture has developed a strange inconsistent fondness for selectively applied "tolerance" as a virtue in its own right.) Of course, those whose opinions, preferences, and abilities are more in line with the new norms have every reason to be happy, and to them, it will look as if things have become more free and tolerant indeed. Trouble is, this is also why it's usually futile to argue the opposite: even by merely pointing out those things where you are now under greater constraint by social norms than before, you can't avoid the automatic status-lowering association with such things and the resulting derision and/or condemnation. Realistically, the new generations will react to reduced privacy by instinctively increasing conformity, not tolerance. Ultimately, I would speculate that in a world populated by folks who lack the very idea of having a private sphere where you can allow yourself to do or say something that you wouldn't want to be broadcast publicly, the level of tolerance would in fact go way down, since typical people would be brought up with an unrelenting focus on watching their mouth and their behavior, and lack any personal experience of the satisfaction of breaking a norm when no one untrusted is watching.
3[anonymous]14y
It is commonly said that status competition is zero-sum. This seems a more certain invariant than what you just wrote above. If that's the case, then any change in the degree of tolerance will be perfectly matched by a corresponding change in the degree of conformity - and vice versa. The picture you paint, however, is of the average person becoming more of a pariah, more unemployable, fewer friends, because they are haunted by that one ineradicable disreputable behavior in their past. This picture violates the assumption that status competition is zero-sum - an assumption which I have a stronger confidence in than I do in your claim that we are not going to become "more tolerant". In fact your claim is ambiguous, because there is surely no canonical way to compare different sets of taboo behavior so that the degree of tolerance of different cultures can be compared. It is a similar problem to the problem of adjusting for inflation with price indexes. I have more confidence in our ability to measure, and compare, the fraction of the population relegated to low status (eg unemployability), than I do in our ability to measure, and compare, the magnitudes of the sets of taboos of different societies.
3NancyLebovitz14y
If there are a lot of pariahs in a connected world, then they will form their own subcultures.
2Vladimir_M14y
Constant: Maybe I failed to make my point clearly, but that is not what I had in mind. The picture I paint is of the average person becoming far more cautious and conformist, and of a society where various contrarians and others with unconventional opinions and preferences have no outlet at all for speaking their mind or indulging their preferences. Average folks would presumably remain functioning normally (within whatever the definition of normality will be), only in a constant and unceasing state of far greater caution, hiding any dangerous thoughts they might have at all times and places. The number of people who actually ruin their lives by making a mistake that will haunt them forever won't necessarily be that high; the unceasing suffocating control of everyone's life will be the main problem. What the society might end up looking like after everyone has grown up in a no-privacy world, we can only speculate. It would certainly not involve anything similar to the relations between people we know nowadays. (For example, you speak of friends -- but at least for me, a key part of the definition of a close friend vs. friend vs. mere acquaintance is the level of confidentiality I can practice with the person in question. I'm not sure if the concept can exist in any meaningful form in a world without privacy.) That's a very good analogy! But note that none of my claims depend on any exact comparison of levels of tolerance. Ultimately, the important question is whether, in a future Brinesque transparent society, there would exist taboo opinions and preferences whose inevitable suppression would be undesirable by some reasonable criteria. I believe the answer is yes, and that it is unreasonably optimistic to believe that such a society would become so tolerant and libertarian that nothing would get suppressed except things that rightfully should be, like violent crime. (And ultimately, I believe that such unwarranted optimism typically has its roots in the same bia
0[anonymous]14y
Thanks - I have nothing specifically in reply. Just to be clear about where I'm coming from, while I am not convinced that the future will unfold as you describe, neither am I convinced that it will not. So, I agree with you that popular failure to devote any attention to the scenario is myopic.
1NancyLebovitz14y
The interesting thing is that it isn't just going to be reasonable individual choices. I assume there will be serious social pressure to take some faux pas seriously and ignore others.
3NancyLebovitz14y
I'll throw some complexity in-- those social standards change, sometimes as a result of deliberate action, sometimes as a matter of random factors. The most notable recent example is prejudice against homosexuality getting considerably toned down. I agree that there's a chance that just not having a public record of oneself might be considered suspicious. I'm hoping that the loss of privacy will lead to a more accurate understanding of what people are really like, and more reasonable standards, but I'm not counting on it.
0Unknowns14y
Once it becomes sufficiently obvious that everyone frequently does or says "not very respectable" things, people will begin to just laugh when someone brings them up as a criticism. It will no longer be possible to pretend that such things apply only to the people you criticize.
4Vladimir_M14y
That is only one possible equilibrium. The other one is that as the sphere of privacy shrinks, people become more and more careful and conformist, until ultimately, everyone is behaving with extreme caution. In this equilibrium, people are locked in a problem of collective action -- nobody dares to say or do what's on his mind, even though most people would like to. Moreover, even in the "good" equilibrium, the impossibility of hypocrisy protects only those behaviors and opinions that are actually characteristic of a majority. If your opinions and preferences are in a small minority, there is nothing at all to stop you from suffering condemnation, shunning, low status, and perhaps even outright persecution from the overwhelming majority.
3ewbrownv14y
I found it stunningly naive. So far the actual response of governments to citizen surveillance has been to make it illegal whenever it becomes inconvenient, and of course government systems are always fenced in with 'protections' to prevent private individuals from 'misusing' the data they collect. In an actual surveillance state the agency with control of the surveillance system would have the ability to imprison anyone at any time while being nearly immune to retaliation, a situation that ensures it will quickly mutate into an oppressive autocracy no matter what it started out as.
1NancyLebovitz14y
This has the cheering implication that surveillance by citizens makes a difference when it does happen, and it's important to push to make sure it's legal.
0Unknowns14y
Here's one vote for total Orwellian surveillance.
0[anonymous]14y
DSimon: Sadly, that horse has long left the barn -- and in any case, it seems to me that privacy is even in principle incompatible with highly developed digital technology. What I find to be a much more realistic danger than the prospect of Orwellian government are the social and market implications of a low-privacy world. If a lot of information about your life is easily accessible online, this means that embarrassing mistakes that would cause only mild consequences in the past can now render you permanently unemployable, and perhaps even socially ostracized. In such a world, once you do anything disreputable, it's bound to haunt you forever, throwing itself into the face of anyone who just types your name into a computer (and not to even mention the future technologies for other sorts of pattern-matching and cross-referencing search). To make things even worse, in a society where you're expected to place a detailed log of your private life online by social convention -- and it seems like we are going towards this, if the "social networking" websites are any indication -- refusal to do so will send off a thunderous signal of weirdness and suspiciousness. Thus, the combination of technology and social trends can result in a suffocatingly controlling society even with the most libertarian government imaginable.
17[anonymous]14y

I don't have direct access to a large percent of my memories. Many cannot be put into words, and I don't just mean music and imagery. The knot between these memories is utterly complex. In self-reflection, I am dishonest with myself, and I don't feel like it is so. My mother has the idea that poetry is a means of most honestly recording some of the difficult-to-explain thoughts, but I think that the scope, inexpressibility and interconnection of the memories makes this infeasible.

3Kaj_Sotala14y
Presumably those memories affect your behavior somehow, though. A superintelligence might be able to re-create, if not the same memories, then functionally equivalent ones. Whether it was capable of doing so depends on how much information of your behavior is retained. On the other hand, if those memories don't affect your behavior, then that implies that they're not essential for rebuilding something we could call "you".

How good a record is good enough? In truth, I don't think we even know enough to get the order of magnitude right. The best I can offer is that you need to record as much as you are willing to.

But this estimate is essential. By deciding to pursue this course of action, you in effect state that you estimate sufficient probability of it being enough to justify the additional effort. You can't say "I don't know" and act on this knowledge at the same time.

1Gordon Seidoh Worley14y
Perhaps I made a mistake in using the LW taboo words "I don't know". Really, how much is probably a function of how fine-grained you want the restoration from writing to be. Since I think it's reasonable to assume decreasing marginal utility from additional writing, I think a good estimate is that something like the first 10 pages of an autobiography are worth about the same as the following 100 pages (assuming a uniform distribution of information, so not the first 10 pages of a typical autobiography that might go in chronological order). The more you write the better the restoration will be. How good that restoration will actually turn out to be compared to, say, a cryonic restoration, is hard to know because we don't actually know how that will turn out either for sure, but obviously I think it will turn out to be pretty good.
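
(Incidentally, Gordon's "first 10 pages ≈ the following 100 pages" estimate is exactly what a logarithmic value-of-information model predicts; a toy sketch:)

```python
import math

# Toy logarithmic model of restoration value: each tenfold increase
# in the amount written adds roughly the same value.
def value(pages: float) -> float:
    return math.log10(pages)

first_10 = value(10) - value(1)     # pages 1-10
next_100 = value(110) - value(10)   # pages 11-110
print(f"pages 1-10:   {first_10:.2f}")   # 1.00
print(f"pages 11-110: {next_100:.2f}")   # 1.04
```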
1Douglas_Knight14y
In other words, if we don't know the necessary order of magnitude, we should only bother if we can increase our output by orders of magnitude.

This post addresses the subject of the appropriate human data compression format. Though an interesting idea, I think that the proposed method is too low in resolution. You acknowledge the lossiness, but I think it's just going to be too much.

Although the method you advocate is probably the best thing short of cryonics, I doubt there is any satisfactory compression method that can make anyone that's more similar to you than a best friend or a family member who gets stuck with your memories. It's better to have too much data than too little.

Because we share the same evolutionary past as all of our conspecifics, the biology and psychology of our brains is statistically the same. We each have our quirks of genetics and development, but even those are statistically similar among people who share our quirks. Thus with just a few bits of data we can already record most of what makes you who you are.

I'm not confident in this part. Although a large percentage of human biology and psychology are identical, the devil is in the details. From a statistical perspective, aren't humans and chimps practically identical also? Percentage similarity of traits is probably the wrong metric, since sm... (read more)

3apophenia14y
I think you're equating the relative risks here. If I wrote about myself for an hour a day for the rest of my life, I would rate the chances very low that I could be reconstructed, in contrast to cryonics. Your chances of being reconstructed increase with the amount of information present. One of the arguments for the safety of vitrifying the brain is that because the brain has a lot of redundancy (structure), we might be able to reconstruct damage. I worked for a while in cryptography, where we try to recover original data wholly or partially from encrypted data. Based on those experiences and talking with Peter de Blanc, I looked into this reconstruction problem for a couple of hours at one point. Off the top of my head, here are some tips which might make it easier for me to reconstruct your body, brain and memories if I'm alive:

  • Record lots of data. Speech is better than writing. Include at least one photo. Videoblogging should record at a rate at least ten to a hundred times faster than writing, and if storage stays as cheap as it is now, it'll survive.

  • Freeze a DNA sample (cheap, but riskier) or record one (expensive because you have to scan it, but less likely to be destroyed). This should allow one to reconstruct a physical twin at minimum.

  • I'm going to stick my neck out here and say don't censor yourself if you're recording audio. If I want to reconstruct your brain, the first step is probably to reverse-engineer your thoughts, so the more freely you talk the easier it is to deduce how you arrive at what you say. For example, I would learn more about someone if I watched them solving a cryptogram for ten seconds than if they just gave me the answer. I have no idea if free association is equally as likely to work as essays.

  • In general, the closer to the source the better in reverse-engineering: I've heard an estimate I buy that a minute of high-quality video could replace DNA in a pinch, but I would still pay to freeze it, myself.

A friend of mine made a design for a $50 EEG c
2Gordon Seidoh Worley14y
Even if that's the best we can do, it's much better than the nothing that will befall those who would otherwise have been totally lost because they didn't sign up for cryonics.
4GreenRoot14y
I'm curious to know why you make this judgment. I imagine future people choosing between making a new person and making an as-similar-as-a-relative copy of a preserved person. In both cases, one additional person gets to exist. In both cases, that person is not somebody who has ever existed before. In neither case does a future person get to revive a loved one, because the result will only be somebody similar to that loved one. Reviving the preserved person is better for the preserved person, I guess, but making a new person is better for the new person. Once you've lost continuity of identity, you've lost any reason why basing new people on recordings is better than making new people the old fashioned way. Put another way, the nothing that will befall the totally lost feels exactly as bad to me as the nothing that will befall the future unborn whom they displace. I know that ethical reasoning about potentially-existing people is hard, so I'm not too clear on this, so I'd like to know why you feel the way you do.

How many hours do you estimate you'll be putting into your autobiography for the resulting record to be "good enough"?

Next question, what is your hourly pay rate?

3Gordon Seidoh Worley14y
I see where this is going, so I'll go ahead and let you run an economic analysis on me. But, keep in mind that cost is not the only factor, only the main one for most of the world's population. For me it has far more to do with the social costs I would have to pay to sign up for cryonics. That said, I estimate I'll be putting about 1 hour a week into writing myself into the future. I am currently paid at a rate of approximately $18 an hour. I'm not sure what my lifetime average pay rate will be, but let's go ahead and estimate it at $60 per hour in 2010 USD (I have two M.S. degrees, one in computer science and one in mathematics, and I'm willing to do work with questionable ethical outcomes, like "defense" contracting).
0wedrifid14y
The figures given put ten years of writing at one hour per week ($60 × 52 weeks × 10 years ≈ $31,000) at roughly the cost of a cryonic preservation.

How good a record is good enough?

No record in English (and I'm using English as a shorthand for any human language) can ever be good enough. English is not a technology for transmitting information.

English is a compression format, and a very lossy and somewhat inaccurate compression format at that. But it has a stupendously high compression rate and compression algorithms with reasonable running speeds on specially adapted hardware (i.e. brains), so for the particular purposes of human communication English is a pretty decent option.

I own a t-shirt with... (read more)

4sfb14y
How is your description of English as a compression format different from the idea of the detached lever, where one puts the characters a p p l e into a computer and hopes it will have crunchy, juicy properties? I believe I speak Modern English, and could probably look for wavy hands on sticks penicillin mold or coil wires around magnets, but how does "atoms can be split" help me reproduce a major scientific/engineering discovery? It's not a compressed instruction, it's a teacher's password I can say to other people who know that atoms can be split so we can be comfortably "scientific" in each other's presence. I don't know what it means in terms of equations, machinery, or testable predictions - and, more to the point, I still don't know what it means after reading the t-shirt. I could probably grope about in a corpse and find a heart or a lung, but how do I tell when I have a pancreas instead of a phlogistondix? And which bit of it is the pancreatic duct? And how do I tell if the fluid that comes out of the unknown lump of creature that I have is insulin? Or after injection, how to tell if it's working? The only constrained anticipation I have for 'insulin' is that it helps diabetic people - although I now note that I have no real idea what 'diabetic people' means in medical terms or how, if I were thrust back in time, I would be able to reliably identify them. I suggest that t-shirt is not a compressed guide, it's a memory aid for people who already know the details behind it and who could, if their memory was entirely under their command, manage exactly the same without it.
2WrongBot14y
I agree with JoshuaZ, but would add: English is a compression algorithm, but most of the information required for that algorithm is stored in your brain. Your brain hears the word "apple" and expands it to represent everything that you know about apples. If your brain can't expand "pancreas" as far, that is a characteristic of your brain and not the word. As is true of software compression algorithms, the purpose of your brain's compression algorithm is to allow you to shrink the size of your knowledge and messages, at the cost of computing time and accuracy.
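
(A concrete analogue of this point, using shared-dictionary compression; the shared byte string below is a stand-in for everything both brains already know.)

```python
import zlib

# "apple" is short only because speaker and listener share a huge
# dictionary; here a shared byte string stands in for that knowledge.
shared = b"apples are crunchy, juicy, sweet fruit that grow on trees"
message = b"apples are crunchy, juicy, sweet"

alone = zlib.compress(message)
co = zlib.compressobj(zdict=shared)
with_dict = co.compress(message) + co.flush()

# The message barely compresses on its own, but shrinks dramatically
# once both sides share the dictionary.
print(len(message), len(alone), len(with_dict))
```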
0JoshuaZ14y
But these terms don't exist in complete isolation. Say for example I'm sent back to 1850. Then I don't know what the different parts of a pancreas look like, but doctors will know. So I can bootstrap my knowledge based on that (and presumably they know what a diabetic is and how to recognize them). Some of these (like using quartz crystals to make clocks) are difficult due to infrastructural problems, but most of them have large amounts of associated ideas that connect to the terms. By analogy with the issue being discussed, the terms being used don't function completely as detached levers, since when we have a written record of you saying "I like to eat apples but not oranges" we have a specific idea of what "apple" means.
0sfb14y
Are you saying that once you have a written record of me mentioning apples, then you can talk to me about 'apples' with no explanation, but before that you would have to talk to me about 'apples (which are ...)' with an explanation?
0JoshuaZ14y
Hmm, ok. That can't be right when phrased that way. So something is wrong with my notions. It may be that the point about time-travel holds but generalizing it to the lever issue fails.
0NancyLebovitz14y
I believe that language is for communicating the shared part of experience, or sometimes for creating the illusion of shared experience. Whatever is unique about a person's experience is going to get lost if you try to communicate it through language. Ok, that's maybe a little too harsh-sounding. I think some people are relatively similar to each other, so that language can resonate fairly well between them. Still, I believe in tacit knowledge. And even if a skillful person can find words for some of it-- "turn the bike wheel towards the direction you're falling" is sound advice, but how would you convey exactly what it's like to be you riding a bike on a particular day, or what it's like to know how to ride a bike before you have words for it?
7[anonymous]14y

A few thoughts:

-This would require both an enormous amount of time spent meticulously documenting your experiences (most of which would be mundane), and incredible writing skill to be able to capture various nuances of emotion. The number of people who are able to satisfy both these conditions may be less than the number who will sign up for cryonics.

-It's not clear to me that there's any consistent way to translate the written expression of a memory (particularly an emotional memory) into a mental state, partially because...

-I'm not sure writing is a fine... (read more)

One way to help test the feasibility of this plan is to both write prolifically and undergo cryonic preservation.

8thomblake14y
Not a very useful test though; it's generally assumed that once we know whether cryonics works, we won't really need it anymore.

On the one hand, I do expect a society after a positive Singularity to be interested in, say, reconstructing Feynman from the evidence he left, and of course the result would be indistinguishable from the original recipe to anyone who knew him or knew his writings, etc. It goes without saying that I expect this to be awesome, and look forward to talking with reconstructed historical figures as if they were the originals.

However, I do suspect that there's a deep structure to an individual human's experience and thinking which might be essential to the cont...

Vladimir_Nesov:
I agree, provided "the future myself" is understood as a particular concept describing the structure of the future, and not a magical carrier of subjective experience. The terminology of continuation of subjective experience can be decoded through this concept whenever its instance is found in the environment, but the terminological connection starts to break down when it's not, for example when there are multiple copies. Such cases reveal the problems with subjective experience ontology and its limited applicability. It's really interesting to read an argument that uses subjective experience terminology through this lens. For example, take this phrase: This translates thusly: "The most probable reconstruction may not have the property of having the structure of 'original person'. And I have a much stronger preference for the future containing 'future myself' than for the future containing 'someone very like me but still significantly different'."
orthonormal:
I agree with your expansion of the concept.

Martine Rothblatt has written a lot about this idea, using the term 'mindfiles.'

You should probably give a number for the cost of cryo.

As far as I know, $9,000 is the cheapest possibility, which is cheaper than many cars, and there are a lot of those in the world.

Kevin:
Where can you get $9,000 cryonics?
Roko:
CI does full body for $30,000, but for a young person the actual payments into a life insurance policy would total only about $9,000, thanks to compound interest. Somebody who is already 75 years old wouldn't get that benefit, but we're talking about the cheapest possibility, which would be for a young person.
CronoDAS:
Is that $9000 figure for a term life insurance policy that becomes worthless after a number of years have passed, or for "whole life insurance" that pays off when you die, regardless of when that happens to be?
Roko:
I don't know, but I suspect the latter, since 9k compounded for 50 years ~ 30k.
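Roko's figure is easy to check. The $9,000 payment, $30,000 target, and 50-year horizon come from the thread; solving for the implied growth rate is my own back-of-envelope step:

```python
# Sanity check of "9k compounded for 50 years ~ 30k":
# solve for the annual rate that turns $9,000 into $30,000 over 50 years.
principal, target, years = 9_000, 30_000, 50
rate = (target / principal) ** (1 / years) - 1
print(f"implied annual rate: {rate:.2%}")                 # about 2.4%
print(f"check: ${principal * (1 + rate) ** years:,.0f}")  # $30,000
```

An implied rate of roughly 2.4% per year is modest as long-run growth assumptions go, so the arithmetic is at least plausible.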
Gordon Seidoh Worley:
Note, though, that you're talking about costs for people living in the First World. If you live in Sudan, for example, I doubt you can get access to cryonics short of paying for it all upfront in full: after all, who would want to insure someone's life when they live in such a deadly country.
Roko:
You could put the money in a bank account, but $9000 is probably more money than the average Sudanese person earns in a year, so it's a moot point.

Part of the reason why I make available records of e.g. the books I own, the music I listen to and the board games I've played (though this last list is horribly incomplete) is to make it possible for someone to reconstruct me in the future. There's a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive with many years' worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction. A lot would be missing, of course, but it's still better than noth...

ocr-fork:
That's orders of magnitude less than the information content of your brain. The reconstructed version would be like an identical twin leading his own life who coincidentally reenacts your IRC chats and reads your books.
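The "orders of magnitude" claim is easy to make concrete. Every figure below is a rough, commonly cited estimate, not a measurement:

```python
# Back-of-envelope comparison; every number here is a rough assumption.
synapses = 1e14          # often-quoted order of magnitude for a human brain
bits_per_synapse = 5     # assume a few bits of state per synapse
brain_bits = synapses * bits_per_synapse

record_bytes = 10e9      # assume ~10 GB of logs, posts, and book lists
record_bits = record_bytes * 8

print(f"brain:  ~{brain_bits:.0e} bits")             # ~5e+14
print(f"record: ~{record_bits:.0e} bits")            # ~8e+10
print(f"gap:    ~{brain_bits / record_bits:.0e}x")   # ~6e+03: three to four orders of magnitude
```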
Kaj_Sotala:
Sure. What about it?
ocr-fork:
Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.
Kaj_Sotala:
If I had surviving friends, then optimally the process would also extract their memories for the purpose. If we have the technology to reconstruct people like that, then surely we also have the technology to read memories off someone's brain, though it might require their permission, which might not be given. If they did give their permission, though, they wouldn't be able to tell the difference, since all their memories of me were used in building that copy.
[anonymous]:

You don't want a written diary, you want a highly efficient miniature camera that's always on. And maybe an option to annotate it in real time.

gwern:
As I suggest, lifelogs.

we need to inject the stuff that sets you uniquely apart even from your fellow humans who share your statistically regular quirks: your memories. If the record of your memories is good enough, this should effectively create a person who is so much like you as to be indistinguishable from the original, i.e. restore you.

I put a lot less importance on memory than you do. For instance, if I suffered amnesia and was not conscious of any of my previous experiences, I would still be me. In fact, given the choice between (A) someone who had a completely diffe...

I don't think that writing yourself into the future would work very well, but I've got another idea for a cheap cryonics-substitute: get your brain frozen in plain old ice. By the time we get whole brain emulation, a brain frozen in ice may contain enough information to replicate on a computer, even if it cannot be biologically revived like a cryogenically frozen brain.

Permafrost burial has been explored, but is generally considered an inferior option. If I were going for a cheap cryonics substitute, I'd try plastination. A lab can do a head for a couple thousand bucks, it preserves enough microstructure for a scanning electron microscope, and there are no worries about staying cool.

Chroma:
A couple of things: The cost of cryonics is more than just the liquid nitrogen. You need to mobilize a team to properly preserve the brain, then keep it in a refrigeration unit indefinitely. And if you keep tissue at temperatures slightly below 0ºC, it's not really frozen. Tiny pockets of concentrated ions will lower the freezing temperature of water in those areas, keeping portions of the tissue liquid. I think the effect is similar to salting roads in the wintertime. Anyway, the tissue degrades over time scales we care about.
Roko:
Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Repeat 5000 times until it sticks... Seriously, the cost of reliably getting people to bury you in ice somewhere would be more than the cheapest cryo.
* It has to be ice that never melts.
* You have to find a group of people who are willing to "move a body" for you without freaking out. They will probably also have to trek somewhere remote and then break the law. They have to not "chicken out" at the end.
* It has to not be found out by the authorities and exhumed.
* It has to be found again at the other end (perhaps when your associates are themselves dead).
* It has to be super-cold ice (south pole, perhaps?) and even then bacteria would be a problem.
EDIT: apparently cryonics societies will do the work for you, but it'll still cost $5000+. Why not get the real deal for only a little more?

I find it deeply weird that nobody has pointed out that the information describing you, written as prose, is not conscious. This is a major drawback. The OP mentioned it, and dared people to take him/her up on it, and nobody did.

I attribute this to a majority of people on LW taking Dennett's position on consciousness, which is basically to try to pretend that it doesn't exist, and that being a materialist means believing that there is no "qualia problem".

I don't follow. The OP didn't claim that just having the written information would be enough. They were saying that the information could be used to build a copy of you. The prose might not be conscious, but the copy would be.

PhilGoetz:
Oops, you're right.
ocr-fork:
Is a vitrified brain conscious?
cousin_it:
No idea. We haven't yet revived any vitrified brains and asked them whether they experience personal continuity with their pre-vitrification selves. The answer could turn out either way.
ocr-fork:
They remember being themselves, so they'd say "yes." I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious I meant "why do you think a vitrified brain is conscious if a book isn't."
cousin_it:
You don't know that until you've actually done the experiment. Some parts of memory may be "passive" - encoded in the configuration of neurons and synapses - while other parts may be "active", dynamically encoded in the electrical stuff and requiring constant maintenance by a living brain. To take an example we understand well, turning a computer off and on again loses all sorts of information, including its "thread of consciousness". EDIT: I just looked it up and it seems this comment has a high chance of being wrong. People have been known to come to life after having a (mostly) flat EEG for hours, e.g. during deep anaesthesia. Sorry.
[anonymous]:
A joke, probably? As if the above experiment had any connection to so confused a hypothesis.

With cryonics at $9000, you have to ask which method gets you the most utility per unit of effort. $9000 equates to about 200-600 hours of work for most people reading this, but if the writing takes an hour a day for the rest of your life, that's 10,000+ hours.

Of course the best protection would be to do both.
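The hours in that comparison reproduce easily. The $9,000 cost and the "hour a day" rate come from the thread; the wage bracket and 30-year remaining lifespan below are my assumptions:

```python
# Reproducing the effort comparison; the 30-year horizon is an assumption.
cryo_cost = 9_000
for wage in (15, 45):  # $/hour bracket implied by "200-600 hours"
    print(f"at ${wage}/hr, cryonics costs {cryo_cost / wage:.0f} hours of work")

years = 30
writing_hours = 1 * 365 * years  # one hour a day
print(f"an hour of writing a day for {years} years: {writing_hours:,} hours")
```

That yields 200-600 hours against roughly 11,000, matching the comment's figures.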

Nisan:
Is that really how much it costs? Can you give me a link to a reference? ETA: Ah, Roko's reasoning is here

This post is in dire need of a reference to Hofstadter's I am a Strange Loop.

Also maybe Halperin's The First Immortal which explicitly considers the possibility raised here.

Also maybe Lion Kimbro.

gwern:
While we're going with fictional examples, the John Keats cybrid in Dan Simmons's Hyperion and Fall of Hyperion is pretty much exactly this suggestion.
chronophasiac:
Spoiler warning for a Greg Egan short story... Steve Fever is this suggestion, exactly. It is a fairly disturbing account of an unFriendly AI attempting to resurrect a dead man using this method. Recommended.
[anonymous]:
This is also a primary plot point of the Battlestar Galactica prequel Caprica. It also comes up in Charles Stross's Accelerando, when some evil AIs decide to do this to most of humanity based on our historical records. The full text of the novel is at http://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando.html; search for "Frequently Asked Questions" to find the relevant section. There is also another similarly interesting plot thread in the story which can be summed up by this excerpt:
anon895:
It might be time to take this thread to TV Tropes.
Gordon Seidoh Worley:
I have no doubt that this sort of thing has been occasionally explored in fiction. That said, there's a big difference between considering an idea in fiction and considering acting on an idea in real life.
JoshuaZ:
And in Alastair Reynolds' Revelation Space universe there are two major types of simulations of people, the alphas are a very accurate model from a fast, destructive scan of the brain, while the betas are essentially this.

This doesn't seem particularly useful to me. Even if the written copy could be identical to me in every way, I would place a much lower value on the creation of such a copy than on the extension of my current life. You're right that this might be slightly preferable to death, but I certainly wouldn't position it as a real alternative even to cryonics.

ata:
What do you mean by "identical to me in every way"? Does that mean that it actually contains all the same information as your brain, or something less exact or complete than that?
Democritus:
I am referring to a copy that contains exactly the same information as the current "me".

We are going to have to rely on simulations of the deceased for the foreseeable future. Individuals who have not left extensive records will make for relatively lower-quality simulations.

Hopefully at some point a sufficiently advanced simulation will exist which can interpolate the remainder of humanity, but even then we are left looking for a reason to do so.

Would you also write about the large percentage of your time spent writing?

Evidently. :)

This brings up a related point. How do you write your skills into the future? You can't just write "As of 2010 I was an excellent piano player".

But wait - maybe you can. If you're assuming a reconstruction technology which can uncompress verbal descriptions of behaviours into the much more complicated expression of such behaviours in terms of the neural substrate, then quite possibly this technology will also have massive general knowledge about human skills allowing it to uncompress such a statement into its equivalent in neural and muscular organization.

But then, what a temptation! As of 2010 I am not, in fact, able to play the piano, but if this record for the future can also serve as my letter to Santa, why not? It's not as if any of it is readily verifiable. I could say I like the taste of lemon when actually I hate it.

This line of thought isn't to ridicule the idea of writing yourself into the future - just to bring out some consequences the OP may not have thought about.

Gordon Seidoh Worley:
Of course this is a possibility. Even with cryonics, presumably if we have the technology to restore you then we'll have the technology to restore you with whatever modifications you'd like. The person you write into the future will be like you only insofar as you make them like you. If you choose to write someone like yourself but who is an excellent piano player into the future, so be it.
JamesAndrix:
Piano playing is easy to record.

I've had a similar idea for a while. It involves reconstructing people from the memories of those who knew them, like Kaj_Sotala describes. So, for instance, my maternal grandfather died a few years ago. But if a bunch of us who knew him lived into a future where our memories of him could be scanned and analyzed, a copy of him could be built that we would find indistinguishable from the original as far as those of us who remember him are concerned. I thought it might be useful from a comfort standpoint.

How much is gained by writing about yourself? Aside from the personal-development benefits, less than by video-logging, though likely different information than video-logging captures, possibly including things buried in your childhood. Logging biometrics like heart rate could yield more total benefit than video-logging, but again different information. A combination could prove more useful than any one source on its own, because comparing the sources against each other lets you infer more than each contains alone.

The same goes for cryonics. At a guess I'd say pr...

This is an idea Paul Almond suggested a while ago in his article Indirect Mind Uploading.

Also, he has quite a few new AI related articles on his website http://www.paul-almond.com. I haven't read any of them yet, so I can't comment.

I think this is missing out on a lot of other higher-bandwidth sources of information about us. Part of the problem is a focus on output, as if creating a pale imitation of the process of scanning a brain. But you could also reconstruct much of a brain by looking at the things that went into it: DNA, the environment. Tack on the records that are already automatically made of its choices, and you isolate a very small part of the potential mind space.

Any person with my DNA, my grades, my bookshelf, my pantry, my bank statements and my web VIEWING history would...

NancyLebovitz:
There may be a difference of temperament here-- to my mind, a lot of what's most distinctive about being me is the feeling I chase when I'm being creative-- the sense of rightness that I use to adjust what I'm doing. It's conceivable that a new person who was producing calligraphy and webposts similar to mine would be trying to make things in consonance with that feeling, but it's not obvious that they would be.
[anonymous]:

How good a record is good enough? In truth, I don't think we even know enough to get the order of magnitude right. The best I can offer is that you need to record as much as you are willing to. The more you record, the more there will be to work with, and the less chance there will be of insufficient data. It may turn out that you simply can't record enough to create a good restoration of a person from writing, but this is little different from the risk in cryonics of not being well preserved enough to restore despite best efforts. If you're willing ...

I'm very curious as to your theory of what happens if you do both. That is, suppose you're cryogenically frozen and then revived, while someone also makes a top-notch copy of you based on the recorded memories you left behind. It seems rather obvious that you can't have double-you, so what happens?

This hypothetical suggests to me that one or both are doomed - and if it's just one, I'd think it's this method you've suggested that wouldn't work. But I really haven't thought too hard on this issue, so I'm curious as to what others think the solution/outcome is.

Kaj_Sotala:
Why not?
Psychohistorian:
There's no mechanism linking the two entities, so it seems necessary that each entity has a distinct first-person experience. Whoever "you" are, then, you can't experience being both entities. I think that's the cleanest way to express what I mean, and thank you for calling me on using "obvious." Another way of thinking about this: Suppose someone offers to make 1 million essentially perfect copies of you and subject them to the best life they can engineer for them, which you get to confirm fits perfectly with your own values. The catch: prior to the copying, they'll paint a 1 on your forehead, which will not be copied. They'll then find "you" and subject the "original" to endless torture. I, for one, would not hesitate to reject this offer for largely self-interested reasons. I can understand an altruist taking it, though that makes the fact that the million people are copies rather irrelevant. If I understand the stance of many people here (RH, for example), they'd take the deal out of self-interest (at least for some number of copies, which could be greater than a million), because they don't distinguish between copies. This seems like severely flawed reasoning, though too complex to properly address in a sub-sub-comment. I'd like to know if this is a straw man.
Kaj_Sotala:
In general, I find that continuity of consciousness is an illusion that's hard-wired into us for self-preservational purposes. We can explain the mind without needing to define some sort of entity that remains the same from birth to death, and any attempted definition for such an entity gets more and more convoluted as you try to consistently answer questions like "if you lose all your memories, is it still you", "if you get disassembled and then rebuilt, is that still you" and "how can you at 5 be the same person as you at 50". It's a bit like believing in a soul. Still, the concept of a 'you' has various uses, e.g. legal and social ones, and it's still a relatively well-defined concept for as long as you don't try to consider various weird cases. Once we have a world where people can be copied, however, the folk-psychological concept of "you" pretty much becomes incoherent and arbitrary. Which still doesn't force you to completely abandon the concept, of course - you can arbitrarily define it however you wish. As for your thought experiment, there are at least two interpretations that make sense to me. One is that since every copy will have experiences and memories identical to being me and there are a million of them, then there's a 1/1,000,000 chance for me to "become" any particular one of the copies. Correspondingly, there's a 1/1,000,000 chance that I'll be tortured. The other interpretation is that there is a 100% chance that I will "become" each of the copies, so a 100% chance that I'll become the one that is eternally tortured and a 100% chance that I'll also become the 999,999 others. Alternatively, you could also say that there's a 100% chance that I'll remain the one who had "1" painted on his forehead. Or that I'll become all of the copies whose number happens to be a prime. Or whatever. Identity is arbitrary in such a scenario, so which one is "correct" depends pretty much only on your taste.
orthonormal:
Your question is a more complicated version of "what happens if I'm non-destructively copied", and the answer to that one is that both of them are you, and so before the copying is done you should assign equal probability to "ending up as" the original or as the copy. (It should work the same as Everett branching.) In this case, I don't fully expect the "reconstructed from writings" self to be as connected to my current subjective experience as a cryopreserved self would be. But the mere fact of there being "two selves" doesn't present an inherent problem.
Vladimir_Nesov:
It's not a given that building this kind of probabilistic model is helpful. (Forgetful driver and beauty again.)
Psychohistorian:
If I understand the physics and the link even a little bit correctly, those copies would have to be identical to an arbitrarily high degree of specification. That identicalness would end soon (I'd imagine something like nanoseconds) after the new brain was generated (and I think it's extremely charitable to posit that such a replication is meaningfully possible); it seems like even variations in local gravity would break the identity. Certainly, within a few seconds, processing necessarily different sensory data (as both copies can't be observing from the exact same location) would make the two different. What happens to double-me at that point, or is that somehow not material?
orthonormal:
Well, ISTM that only the gross structure (the cells, the strength of their connections, and the state of firing) is really essential to the relevant pattern. Advanced nanotechnology is theoretically more than capable of recording such data and constructing a copy, to within the accuracy of body-temperature thermal noise. (So if you really wanted to be careful, you'd put the brain in suspended animation at low temperature, copy it there, and warm both copies back up to normal; but I don't think that would be necessary in practice.) Yup, the copies diverge. Just as there are different quantum versions of me branching as life goes along (see here for a relevant parable), my experience would branch there, with two people who once were "me". When I observe a quantum random coinflip, half of future mes are in worlds where they observe heads and half are in worlds where they observe tails; they quickly become different people from each other, both of them remembering having been me-before-the-flip, and so it's quite coherent for me to say before the flip that I expect to see heads with 1/2 probability and tails with 1/2 probability. The duplication experiment is no different, except that this time my branched copies have the chance to play chess against each other afterwards. I expect 1/2 probability of finding myself to be the one who remained in the scanning room (and who gets to play White), and 1/2 chance of finding myself to be the one who wakes in the construction room (and who gets to play Black).
Psychohistorian:
This is somewhat redundant with my previous response, but suppose we have some superficial way to distinguish - i.e. you're marked with something that doesn't get copied. Why would you not expect to continue to have the experience associated with the physical object that is your brain, i.e. not wake up as the copy? It's also interesting that this assumes it's meaningfully possible to replicate a brain, which is an unanswered empirical question. Even granted that the world is perfectly materialistic, it does not seem to follow that one can make a copy of a brain so perfect that one's experience could jump from one to the other, so to speak. Sort of like Heisenberg's uncertainty principle, but for brain replication. ...unless you're referring to the situation where you wake up after an individual has been copied. In that case, it does seem like the odds you're the original are 50/50. But if you're the original going to the copying-lab, it seems like you should be virtually guaranteed to wake up in your own body, which will be confirmable if you give it some identifying mark beforehand (or ensure that it's directed to a red room and the copy to a blue one, or whatever).
orthonormal:
OK, so we do disagree on this fundamental level. I apologize for the following infodump, especially when it's late at night for me... I assign high probability to the patternist theory of consciousness: the thesis that the subjective thread of consciousness is not maintained by material continuity or a metaphysical soul, but by the patterned relations between the different complicated brain-states (or mind-moments, if we want to be less brain-chauvinistic). That is, you can identify the configuration that is my-brain-right-now (A1), and the configuration that is my-brain-a-millisecond-later (A2), and they're connected in a way similar to the way that successive states of Conway's Game of Life are connected. (Of course, there are multiple ways A1 could go, so throw in A2', A2'', etc, but only a small subset of possible brain-configurations have a nonnegligible connection of this sort to A1.) Anyway, my estimate for "what I'll experience next after A1" should just be a matter of counting all the A2s and variants in the multiverse, and comparing the measures of each. This sounds weird to our evolved intuitions, but it appears to be the simplest theory of subjective experience which doesn't involve extra metaphysical entities or new, heretofore unobserved, laws of physics. As noted in the link above, the notion of "material continuity" is a practical aggregate consequence which doesn't cut to the way the universe actually works. Reality is made of configurations, not objects, and it would be unnatural to introduce a basic property for a substructure of a configuration (like A2) which wouldn't hold for an identical substructure placed elsewhere in space and time. (Trivial properties like "location" obviously excepted, and completely historical-social properties like "the first nanotube of this length ever constructed" should be in a different category as well.) The patternist theory of consciousness, incidentally, is basically assumed in the OP and in a good deal of t
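orthonormal's Game of Life analogy can be made concrete: in Life, the successor of a configuration is fixed by the pattern alone, so an identical pattern placed elsewhere in the grid has the identical (shifted) successor. A minimal sketch, with the glider chosen arbitrarily as the example pattern:

```python
# Conway's Game of Life: the successor A2 is determined entirely by the
# pattern of A1, not by which particular cells ("matter") instantiate it.
from collections import Counter

def step(live):
    """live: set of (x, y) live cells; returns the next generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
shifted = {(x + 10, y + 10) for (x, y) in glider}  # same pattern, elsewhere

# The shifted copy's successor is exactly the shifted successor:
assert step(shifted) == {(x + 10, y + 10) for (x, y) in step(glider)}
print("successors depend only on the pattern, not on its location")
```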
Psychohistorian:
I follow this general theory and mostly agree with it, though I admit it isn't fully adapted into my thoughts on consciousness generally. What I don't see, exactly, is how "good enough" copies could work. (I also don't see how identical copies could work, but that's a practical issue, not a conceptual one.) Recreating someone who's significantly more like me than most seems rather categorically different from freezing and later reactivating my brain, particularly since people who are significantly more like me than most probably already exist to some degree. At what degree does similarity cross some relevance threshold, if ever? Or have I misconstrued the issue?
orthonormal:
That's precisely the issue at the heart of the current discussion, as I see it. And it's on that issue that I'm uncertain. A copy of the cellular structure and activity of my brain is definitely good enough to carry on my conscious experience. Is a best-guess reconstruction of that structure from my written records good enough? I strongly suspect not, but it's always dicey to say what a superintelligence couldn't figure out from limited evidence.

Cryopreservation works pretty well for embryos, eggs and sperm. Or, if you are feeling optimistic, you could sequence your DNA - and store that. That is not preserving everything - but it should be enough to make some identical twins - which is pretty close for most purposes.
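For scale, the raw information content of a genome is modest by storage standards; a rough calculation (the figures are textbook approximations):

```python
# Approximate size of a raw human genome on disk.
base_pairs = 3.2e9     # haploid human genome, approximate
bits_per_base = 2      # four bases (A, C, G, T) -> 2 bits each
megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"~{megabytes:.0f} MB uncompressed")  # ~800 MB, far less after compression
```

Of course, as the comment itself concedes, those ~800 MB specify an identical twin, not the person: none of the memories discussed above are in them.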