These are the kinds of sci-fi-ish posts that keep me from wanting to become very involved with this site :P
1) RE your collection problem:
Collect what? There is no empirically-validated working model for consciousness; you do not know what you need to collect. (Is a connectome enough? A catalogue of genes expressed in every neuron? Epigenetic regulators of those genes? Post-translational modifications of the gene products? Positional information of where each protein is within a cell, or just the expression level? If you think any of these are dispensable, is there an argument as to why they should be dispensable?)
You don't even know if you should just be collecting a brain. (Gut microbes have massive influence on personality, intellect, and vulnerability to mental illness. The state of the digestive system has giant effects on the state of the CNS. Do you plan to collect your endocrine system too? If you think any of these are dispensable, is there an argument as to why they should be dispensable?)
2) RE your creation problem: In what sense would a simulation of your physiology constitute immortality? We now have exquisitely sophisticated models of weather patterns, but they never ACTUALLY rain. We can describe some interesting aspects of weather mathematically, and make some predictions based upon our simulations, but this does not create weather in any meaningful sense. Similarly, we might one day have a near-complete model of a lung, down to subcellular detail. It won't actually respire. No real oxygen will be exchanged; some numbers will just be crunched, and we can check if the outputs are similar to what real lungs do. Perfectly simulating an organ is VERY different from actually producing what that organ produces. Consciousness is produced by brains. What is the reason to think that simulating a brain will produce actual consciousness, or that simulating a lung will produce actual respiration? At best this is useful for making predictions about the output, no? If you think that mathematical modeling of consciousness IS sufficient to produce consciousness, what is that argument?
3) RE both collection and creation problems: Pretend that your "collection problem" were well defined, and solved. What is the argument that you'd be able to create or simulate a brain or mind from that static image? Do you think you could form a useful simulation of a time-varying process in the stock market from a static image of trades at a given instant? Do you think from that set of trades you could derive complex time-varying concepts like "trade war" or "inflation", or from a set of synaptic connections you could derive complex time-varying processes like "curiosity" or "sense of humor"? Maybe that's possible, but it's not AT ALL clear to me. Why do you think so?
I find discussions like this not worthwhile (if you're not going to get into the actual nitty-gritty specifics, IMO it's better not to get into stuff like this at all; you'll just confuse yourself and other people). But users on this site seem to really like this kind of stuff, so maybe LW just isn't for me :P
I'll start with the positive: I understand that there's a certain "sci-fi bullshit" feel to my original post. I expect that many people will be turned off by the tone of it, and I appreciate the feedback in that regard. If my post comes across as too cosmic and thus causes people to not pay attention to it or dismiss it out of hand, I need to work on that.
But, I really get the sense that you did not actually read the post and simply skimmed it. The three major points that you made were all either thoroughly addressed preemptively in my post, or thoroughly inapplicable.
RE your collection problem: Collect what? There is no empirically-validated working model for consciousness, you do not know what you need to collect.
I know that there is no empirically proven model for consciousness, which is why I explicitly said "the brain" is meant as a hypothesis, not the answer. Implicit in solving the Collection Problem is correctly creating a valid model.
In what sense would a simulation of your physiology constitute immortality? etc. etc. etc.
I was quite specific about wanting to avoid the "Is simulated consciousness the same as real consciousness debate". I acknowledged it as a potential solution, and further acknowledged it may not be the correct solution, then went on to explain why it didn't matter. Let's say you are correct regarding your criticism of simulation as a valid means of reproducing consciousness. It doesn't change the nature of the Creation Problem. The problem still exists, and an answer still exists.
What is the argument that you'd be able to create... a brain or mind from that static image?
That is, quite literally, a rephrasing of the Creation Problem ("Once we have that information, how do we create a physical representation of it?") I don't have an answer to that question. If I did, it wouldn't be a "problem".
The vast majority of all ethical and logistical problems revolve around a single inconvenient fact: human beings die unwillingly.
this is so wrong I pretty much stopped reading. The root cause is disagreement among the living over how to use resources. Having more living beings does not help with any ethical or logistical problem.
"Having more living beings" doesn't help because it confuses preventing death with creating new living beings.
Again, I'll start with the good. Anything stylistically that causes people to immediately disregard my writing without actually reading it should be avoided at all costs. So thank you for pointing this out to me. Next time I will not lead with an incidental sweeping statement that has the potential to derail someone's train of thought. And I'm not being a smartass here, I do appreciate you telling me why you stopped reading, rather than simply downvoting.
But that said, if you had literally just read the very next sentence you would have seen that we are actually on the same page here. Of course most ethical dilemmas are centered around resources. But if no one died due to starvation, we wouldn't be having any disagreement as to whether or not it's okay to steal a loaf of bread to save one's starving family.
But if no one died due to starvation, we wouldn't be having any disagreement as to whether or not it's okay to steal a loaf of bread to save one's starving family.
You are still wrong. Imagine that starvation leaves you very weak, in constant pain, but alive. Is it OK to steal a loaf of bread to feed your children? Let me also remind you that in Christian Hell no one dies :-/
The human existence does not revolve around death.
Sure, I'll concede that in the edge case that we figure out how to prevent death, and have not, in the process, figured out how to eliminate hunger pains (or if Christian Hell exists), then you're 100% right.
Either way though, stylistically that was a poor choice of opening sentences on my part. It doesn't add anything to the piece, and it's too easy to dispute it, thus distracting from the overall point.
If nobody could die, we'd STILL be having arguments about when it's OK to steal a loaf of bread. Whether to prevent starvation, or just to prevent hunger pains, or to increase strength by a bit, the question of "need" cannot be easily compared across entities.
The root cause is disagreement among the living over how to use resources.
If humans didn't need resources to live, why would there be any disagreement?
edit: Didn't realize I posted this. I folded this point into my above post.
Even if you, personally, happen to die, you've still got a copy of yourself in backup that some future generation will hopefully be able to reconstruct.
Is there a consensus on the whole brain backup identity issue?
I can't say that trying to come up with intuition pumps about life extension has made me less confused about consciousness, but it does seem fairly obvious to me that if I'm backing up my brain, I'm just creating a second version who shares my values and capacities, not actually extending the life of version A. Being able to have both versions alive at the same time seems a clear indicator that they're not the same, and that when source A dies, copy B just goes on with their life and doesn't suddenly become A.
Unfortunately, I'm not sure the same argument doesn't apply to one brain at different points in time, too. If you atomize my brain now and put it back together later, am I still A or is A dead? What about coma, sleep, or any other interruption of consciousness?
It's all kind of a blur to me.
The idea of a persistent personal identity has no physical basis. I am not questioning consciousness; I am only saying that the mental construct that there is ownership of some particular sequence of conscious feelings over time is inconsistent with reality (as I would argue all the teleporter-type thought experiments show). So in my view, all that matters is how much a certain entity X decides (or instinctually feels) it should care about some similar-seeming later entity Y.
Is there a consensus on the whole brain backup identity issue?
No, and thank you for pointing out the potential for confusion in this post. I have edited some key wording: "results in the continuation of the perception of consciousness." has now been changed to "results in a perception of consciousness functionally indistinguishable to an outside observer," which much more closely reflects my intent.
So in other words, if John Doe went into a locked room, created a copy of himself, incinerated the original version, disposed of all the ashes, and then walked out of the room, the copy would be indistinguishable from the original John Doe from your perspective as an outside observer.
How John Doe himself perceives that interaction is an extremely difficult question to answer (or even to really formulate scientifically).
How John Doe himself perceives that interaction is an extremely difficult question to answer (or even to really formulate scientifically).
But that does not make it any less relevant a question.
from your perspective as an outside observer
"Outside observers" can be very different. You probably need to define that observer a bit better.
Is there a consensus on the whole brain backup identity issue?
NO.
There are many like me who see what the OP advocates as a gigantic holocaust. "Murder the entire population of the world and replace them with artificial copies" is a terrifying outcome.
Creation and Collection might also be interleaved. You might incrementally create a copy and sync it with your brain.
Actually I have already created four copies with 50% fidelity of hardware and I hope to approach >30% of software fidelity over the next decades for a total of 40% fidelity. These inexact copies jointly approximate my brain to >85% (assuming software fidelity is also independent) which is deeply satisfactory for me. I don't think technology will reach these levels anytime soon (i.e. during my lifetime).
Upvoted for cuteness.
However, my understanding is that technology has already reached the level of making copies with ~100% of hardware fidelity.
We know how to make ~100% copies of software, sure, but hardware? I don't think we can do single-material solid copies with accuracy much better than µm resolution.
We can 'copy' (clone) a lot of life-forms. So you might mean that kind of hardware copy. I don't know the mutation rate of animal cloning and it is probably good enough to call it ~100% on the DNA-level. But the resulting phenotype often contains errors that make it questionable to call the result a 100% copy.
I find it daunting to imagine the battery of tests you'd need to perform to get an accurate picture of the brain's internal state--even simple models have insanely high degrees of freedom. Would the original even remain operational after such a procedure?
I agree. And it's entirely possible that the early stages of this technology would be destructive reads. I tried not to delve too much into the specific mechanisms of each scenario in my post, because as one commenter already pointed out, it already has a bit of a "sci-fi" vibe to it. I think talking about all the different ways we might be able to scan a brain would push it right into that territory.
Any method that can physically create something as complex as a human brain at-will can almost certainly be adapted to create other things.
And, any method that can physically create something as complex as a human brain at random can almost certainly be adapted to create other things. In evolution we have a randomly created human brain. This could happen again. In an infinite universe all possibilities occur. In a large universe many possibilities occur. Among the possibilities is a prior or subsequent or even simultaneous brain just like mine.
In an infinite universe all possibilities occur. In a large universe many possibilities occur. Among the possibilities is a prior or subsequent or even simultaneous brain just like mine.
Depending on your version of the MWI, that's actually not quite accurate.
Consider the "man at the cliff" statistical thought experiment: A man is standing at the edge of a cliff, blindfolded. At any given time, he has a 10% chance of taking a step forward (and thus falling off the cliff), and a 90% chance of taking a step backward. If that man takes an infinite number of steps, what is the probability that at some point he falls off the cliff? One may be tempted to answer: "In an infinite number of steps, all possibilities occur and thus the probability must be 1," but that is incorrect. The actual probability that the man falls off the cliff at some point is about 11.1% (exactly 1/9).
You have to be careful when tangling with infinity because there are different degrees of "infinity". A "man on the cliff" taking an infinite number of steps is a constrained infinity. Depending on your interpretation of many-worlds, it's likely that you're dealing with a similarly constrained infinity. So "infinite universes" does not imply a guarantee that possibility X will occur.
That doesn't mean X WON'T happen. It just means it's not guaranteed.
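(For anyone who wants to check the 1/9 figure empirically rather than by solving the recurrence, here's a minimal Monte Carlo sketch. The walk length and trial count are arbitrary choices of mine; truncating at a finite number of steps slightly underestimates the true probability, but the effect is negligible here because walks that survive long drift rapidly away from the edge.)

```python
import random

def falls_off_cliff(p_forward=0.1, max_steps=1000):
    """Simulate one blindfolded walk starting one step from the edge.

    Position 0 means the man has stepped off the cliff.
    Returns True if he falls within max_steps."""
    position = 1
    for _ in range(max_steps):
        if random.random() < p_forward:
            position -= 1  # step toward the cliff
        else:
            position += 1  # step away from the cliff
        if position == 0:
            return True
    return False

random.seed(0)
trials = 20_000
estimate = sum(falls_off_cliff() for _ in range(trials)) / trials
print(estimate)  # hovers near 1/9, nowhere near 1
```

Despite an "infinite" (here, long) walk, the estimate settles near 0.111 rather than 1, because the 90% backward drift carries most walks away from the edge forever.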
So "infinite universes" does not imply a guarantee that possibility X will occur.
I think MWI does guarantee that in some worlds the man will fall off the cliff.
That's very true. In that regard the Man on the Cliff probably wasn't the best example. It was meant to show that an infinite number of iterations does not guarantee an infinite possibility set, or that all possibilities will be realized. So in this context, the MWI doesn't guarantee that any scenario you can envision is happening in an alternate world.
I wouldn't call what evolution does "random". It's a very weak optimisation process, but it is an optimisation process.
This is hopelessly broad. It isn't at all clear that such collection is possible, that there's sufficient data for recreation or that any of what is considered is computationally tractable.
Edit: Removed intro because it adds no value to the post. Left in for posterity. The vast majority of all ethical and logistical problems revolve around a single inconvenient fact: human beings die unwillingly. "Should we sacrifice one person to save ten?" or "Is it ethical to steal a loaf of bread to feed your starving family?" become irrelevant questions if no one has to die unless they want to. Similarly, almost all altruistic goals have, at their core, the goal of stopping death in some way, shape, or form.
The question, "How can we permanently prevent death?" is of paramount importance, and not just to Rationalists. So, it should be a surprise to no one that mystics, crackpots, spiritualists and pseudo-scientists of all walks of life have co-opted this quest as their own. The loftiness of the goal, combined with the cosmic implications of its success, combined with the sheer number of irrational people also seeking to achieve the same goal may make it tempting to apply the non-central fallacy and say, "I'm not interested in stopping death; that's something crazy people do."
But it's a fallacy for a reason: there is a rational way to approach the problem. Let's start with a pair of general statements:
The Collection Problem
This problem is most pressing, because once we solve it, it buys us time. Once that data is stored securely, you've dramatically extended your effective timeline. Even if you, personally, happen to die, you've still got a copy of yourself in backup that some future generation will hopefully be able to reconstruct. But, more importantly, this also applies to all of humanity. Once the Collection Problem is solved, everyone can be backed up. As long as you can stay alive until the problem is solved, (especially if you live in a first-world country), you have probably got a pretty good shot at living forever.
The Collection Problem brings to mind a number of non-trivial sub-problems, such as logistics, data storage, and security. But these are fairly trivial *in comparison* to the monumental task of scanning a brain (assuming the brain alone is the seat of consciousness) with sufficient fidelity. I don't mean to blithely dismiss the difficulties of these problems, but they are problems that humanity is already solving: logistics, data storage, and security are all billion-dollar industries.
The Creation Problem
Once the Collection Problem is solved, you have another problem which is how to take that data and do something useful with it. There's a pretty big gap between an architect drawing up a plan for a building and actually creating that building. But, once this problem is resolved, it's very likely that its solution will also make life itself much, much more convenient. Any method that can physically create something as complex as a human brain at-will can almost certainly be adopted to create other things. Food. Clean water. Shelter. etc. Those likely benefits, of course, are orthogonal, but they are a nice cherry on top.
One of the potential solutions to the Creation Problem involves simulations. I won't go into a ton of detail there because that's a pretty significant discussion unto itself, whether life in a simulation is as valid or fulfilling as life in the "real world". For the purposes of this thought exercise though, it is fairly irrelevant. If you consider a simulation to be an acceptable solution, great. If you don't, that's fine too, it just means the Creation Problem will take longer to solve. Either way, it's likely you're going to be in cold storage for quite some time before the problem does get solved.
What about the rest of us?
All this theory is fine and good. But what if you get hit by a bus tomorrow and don't live to see the resolution of the Collection Problem? What about all of us who have lost loved ones in the past? This is where this exercise dovetails with traditional ethics. Given this system, it's easy enough to argue that we have a responsibility to try to ensure that as many human beings as possible survive until the Collection Problem is resolved.
However, for those of us unlucky enough to die before that, there's one final get-out-of-jail-free card: The Recreation Problem. This problem may be thoroughly intractable. And to be sure, it is probably the most difficult problem of them all. In extremely simple (and emotionally charged) terms: "How can we bring back the dead?" Or, if you prefer to dress it up in the literary genre of science: "How can we recreate a system that occurred in the past with Y% fidelity using only knowledge of the present system?"
This may be so improbable as to be effectively impossible. But it's not actually impossible. There's no need for perfect physical fidelity (which is all-but-proven to be impossible). We only need to achieve Y% fidelity, whatever Y% may be. Conceptually, we do this all the time. A ballistics expert can track the trajectory of a bullet with no prior knowledge of that trajectory. A two-way function can be iterated in reverse for as many steps as you have computing power. Etc.
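(To make the "two-way function" point concrete: any invertible update rule can be run backwards step for step, recovering an earlier state exactly from a later one. The particular function below is a toy modular map I picked for illustration; nothing in the post specifies an actual mechanism.)

```python
def step(x, a=3, m=2**16):
    """One forward step of a toy invertible update rule."""
    return (a * x + 1) % m

def unstep(y, a=3, m=2**16):
    """Exact inverse of step, via the modular inverse of a.

    This exists because gcd(a, m) == 1."""
    a_inv = pow(a, -1, m)
    return (a_inv * (y - 1)) % m

state = 12345
later = state
for _ in range(1000):          # run the system forward 1000 steps
    later = step(later)

recovered = later
for _ in range(1000):          # iterate the inverse the same number of steps
    recovered = unstep(recovered)

print(recovered == state)      # the original state is recovered exactly
```

The catch, of course, is that real physical systems are noisy and lossy, so in practice you could only hope for the Y% fidelity the post describes, not the exact recovery the toy example achieves.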
A complex system can be recreated. Is there an upper limit to how far in the past a system can be before it is infeasible to recreate it? Quite possibly. Let's say that upper limit is Z seconds (incidentally, the Collection Problem is actually just a special case of the Recreation Problem where Z is approximately equal to zero). The fact that Z is unknown means you can't simply abandon all your ethical pursuits and say, "It doesn't matter, we're all going to be resurrected anyway!" Z may in fact be equal to approximately zero.
The importance of others.
It is most likely that you, individually, will not be able to solve all three problems on your own. Which means that if you truly desire to live forever, you have to rely on other people to a certain extent. But, it does give one a certain amount of peace when contemplating the horror of death: if every human being commits themselves to solving these three problems, it does not matter if you, personally, fail. All of humanity would have to fail.
Whether that thought actually gives any comfort depends largely on your estimation of humanity and the difficulty of these problems. But regardless of whether you derive any comfort from that, it doesn't diminish the importance of the contributions of others.
The moral of this story...
As a rationalist, you should take a few things away from this.
Post Script:
Note: this was added on as an edit due to feedback in the comments.
The original intent of this article was to explain that there's a rational, scientific way to approach the logistical problem of "living forever".