On desiring subjective states (post 3 of 3)
Carol puts her left hand in a bucket of hot water, and lets it acclimate for a few minutes. Meanwhile her right hand is acclimating to a bucket of ice water. Then she plunges both hands into a bucket of lukewarm water. The lukewarm water feels very different to her two hands. To the left hand, it feels very chilly. To the right hand, it feels very hot. When asked to tell the temperature of the lukewarm water without looking at the thermocouple readout, she doesn't know. Asked to guess, she's off by a considerable margin.

Next Carol flips the thermocouple readout to face her (as shown), and practices. Using different lukewarm water temperatures of 10-35 C, she gets a feel for how hot-adapted and cold-adapted hands respond to the various middling temperatures. Now she makes a guess - starting with a random hand, then moving the other one and revising the guess if necessary - each time before looking at the thermocouple. What will happen? I haven't done the experiment, but human performance on similar perceptual learning tasks suggests that she will get quite good at it.
We bring Carol a bucket of 20 C water (without telling her the temperature) and let her adapt her hands first as usual. "What do you think the temperature is?" we ask. She moves her cold hand first. "Feels like about 20," she says. Hot hand follows. "Yup, feels like 20."
"Wait," we ask. "You said feels-like-20 for both hands. Does this mean the bucket no longer feels different to your two different hands, like it did when you started?"
"No!" she replies. "Are you crazy? It still feels very different subjectively; I've just learned to see past that to identify the actual temperature."
In addition to reports on the external world, we perceive some internal states that typically (but not invariably) can serve as signals about our environment. Let's tentatively call these states Subjectively Identified Aspects of Perception (SIAPs). Even though these states aren't strictly necessary to know what's going on in the environment - Carol's example shows that the sensation felt by one hand isn't necessary to know that the water is 20 C, because the other hand knows this via a different sensation - they still matter to us. As Eliezer notes:
If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I'd have to say no. I can't think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
Subjectivity matters. (I am not implying that Eliezer would agree with anything else I say about subjectivity.)
Why would evolution build beings that sense their internal states? Why not just have the organism know the objective facts of survival and reproduction, and be done with it? One thought is that it is just easier to build a brain that does both, rather than one that focuses relentlessly on objective facts. But another is that this separation of sense-data into "subjective" and "objective" might help us learn to overcome certain sorts of perceptual illusion - as Carol does, above. And yet another is that some internal states might be extremely good indicators and promoters of survival or reproduction - like pain, or feelings of erotic love. This last hypothesis could explain why we value some subjective aspects so much, too.
Different SIAPs can lead to the same intelligent behavioral performance, such as identifying 20 degree C water. But that doesn't mean Carol has to value the two routes to successful temperature-telling equally. And, if someone proposed to give her radically different, previously unknown, subjectively identifiable aspects of experience, as new routes to the kinds of knowledge she gets from perception, she might reasonably balk. Especially if this were to apply to all the senses. And if the subjectively identifiable aspects of desire and emotion (SIADs, SIAEs) were also to be replaced, she might reasonably balk much harder. She might reasonably doubt that the survivor of this process would be her, or even human, in any sense meaningful to her.
Would it be possible to have an intelligent being whose cognition of the world is mediated by no SIAPs? I suspect not, if that being is well-designed. See above on "why would evolution build beings that sense internal states."
If you've read all 3 posts, you've probably gotten the point of the Gasoline Gal story by now. But let me go through some of the mappings from source to target in that analogy. A car that, when you take it on a tour, accelerates well, handles nicely, makes the right amount of noise, and so on - one that passes the touring test (groan) - is like a being that can identify objective facts in its environment. An internal combustion engine is like Carol's subjective cold-sensation in her left hand - one way among others to bring about the externally-observable behavior. (By "externally observable" I mean "without looking under the hood".) In Carol's case, that behavior is identifying 20 C water. In the engine's case, it's the acceleration of the car. Note that in neither case is this internal factor causally inert. If you take it away and don't replace it with anything, or even if you replace it with something that doesn't fit, the useful external behavior will be severely impaired. The mere fact that you can, with a lot of other re-working, replace an internal combustion engine with a fuel cell, does not even begin to show that the engine does nothing.
And Gasoline Gal's passion for internal combustion engines is like my - and I dare say most people's - attachment to the subjective internal aspects of perception and emotion that we know and love. The words and concepts we use for these things - pain, passion, elation, for some easier examples - refer to the actual processes in human beings that drive the related behavior. (Regarding which, neurology has more to learn.) As I mentioned in my last post, a desire can form with a particular referent based on early experience, and remain focused on that event-type permanently. If one constructs radically different processes that achieve similar external results, analogous to the fuel cell driven car, one gets radically different subjectivity - which we can only denote by pointing simultaneously to both the "under the hood" construction of these new beings, and the behavior associated with their SIAPs, together.
Needless to say, this complicates uploading.
One more thing: are SIAPs qualia? A substantial minority of philosophers, or maybe a plurality, uses "qualia" in a sufficiently similar way that I could probably use that word here. But another substantial minority loads it with additional baggage. And that leads to pointless misunderstandings, pigeonholing, and straw men. Hence, "SIAPs". But feel free to use "qualia" in the comments if you're more comfortable with that term, bearing my caveats in mind.
The Restoration of William: the skeleton of a short story about resurrection and identity
Bill died. He never liked having dumps done. Each year he would make excuses, put it off. "Next year," he would say. Only after Bill's death do people realise just how long this has been going on: thirty years. They will have to restore Bill from a 30-year-old tape. Is "restore" even the right word? How about "roll-back"?
Worse still, there was a big change in Bill's life 25 years ago, when he had a mid-life crisis. He joined a personal growth cult, dropped old friends, made new ones. Some of his new friends can remember encounters with the old Bill of 30 years ago. They didn't like him and avoided him. There was a lot of friction when he joined the personal growth cult 5 years later. Some members wanted to blackball him. "You cannot teach an old dog new tricks" might be true, but the personal growth cult could hardly admit it.
Those who dreaded Bill's return had a week of respite when it seemed that Bill's tape had been lost. Lost? Bill really dead and gone forever? That was unthinkable. Losing someone's only backup tape would be a huge scandal. Who would stake their life on a careless archiving company?
After an increasingly panicky search it was found. Found! And still readable, after all those years, with a bit of manual fixing of uncorrectable errors.
Restored Bill woke to find 30 years had gone by. When we think back to what we were like 30 years ago, we do so as a process of diffs. What changed last year. What changed the year before that. What changed between two and three years ago. So when we think back to what changed between year 29 and year 30 and find we cannot remember, what are we to do? No doubt there was a whole year's worth of changes, but not knowing what they were, we are seduced by the lazy assumption that they didn't amount to much. Restored Bill did not have the option of making lazy assumptions. He had 30 years of change dumped on him. The genuine article, the whole kit and caboodle, with little relation to the convenient fictions that human memory embroiders over 30 years of telling, forgetting, patching and re-telling.
People who remembered disliking Bill 30 years ago were nevertheless sympathetic to the bewildered and pathetic figure, uncertain who and when he was. Phoning close friends to continue yesterday's conversation, only to have them deny ever having known him, was distressing. It wasn't that people denied knowing him in retaliation for a falling-out 25 years previously. It was worse than that. How many of your old friends from 30 years ago have you completely forgotten about? You'll soon find that you cannot remember anyone whom you have completely forgotten about. The difference between tautology and fact is about a dozen dear old friends.
Restored Bill was struggling to cope with a huge disruption to the natural order of things. Was he acting out of character? Some of deceased Bill's new friends and some of his old friends tried the trick of getting a temporary hologram made from their own 30-year-old dump tapes so that they could ask it about Restored Bill. As usual this was a distressing experience, as the hologram of one's old self turns out to be incompatible with one's own self-image and personal narrative. People seeking an explanation for why Restored Bill was different from how they remembered him found instead a question: why were they so different from how they remembered themselves?
One reason was that "hologram" is a rather nasty euphemism, coined to disguise the harsh reality of the law that says "There can be only one." A "hologram" is actually a freshly downloaded flesh-and-blood person who must be euthanised after the consultation to ensure that there is only ever one copy of a person. The "hologram" is the origin of two genres of fiction. In the hologram-horror, one is invited to share the chill of waking up and realising that one is only temporary, with but an hour to live. In the hologram-thriller, a copy of you has escaped and must be hunted down and killed before he can infiltrate society and impersonate you. There can be only one. If he succeeds you will die in his place, but he knows all about you; he is you!
So the hologram hasn't revolutionised the study of history in the way that you might at first imagine. A history student might try asking a hologram about the past, but pretty soon the hologram realises his predicament and lapses into sullen despair.
No such problem for Restored Bill. Previous Bill was dead and Restored Bill was the one. It all worked out right in the end. Restored Bill learned to rub along with most of deceased Bill's social circle, and the "clerical error" that had actually restored Fred-minus30 never came to light. Current Fred never learned against whom his deep loathing of Restored Bill was truly directed.
Patternist friendly AI risk
It seems to me that most AI researchers on this site are patternists, in the sense of believing that the anti-zombie principle necessarily implies:
1. That it will eventually become possible *in practice* to create uploads or sims that are close enough to our physical instantiations that their utility to us would be interchangeable with that of our physical instantiations.
2. That we know (or will know) enough about the brain to know when this threshold is reached.
But, like any rationalists extrapolating from unknown unknowns... or heck, extrapolating from anything... we must admit that one or both of the above statements could be wrong without also making friendly AI impossible. What would be the consequences of such error?
I submit that one such consequence could be an FAI that is wrong on these issues in the same way we are. Not only would we fail to check for this failure mode, but the outcome would look exactly like what we expect the right answer to look like, precisely because we are making the same error.
If simulation/uploading really does preserve what we value about our lives, then the safest course of action is to encourage as many people as possible to upload. It would also imply that efforts to solve the problem of mortality by physical means will at best be given an even lower priority than they receive now, or at worst cease altogether, because they would seem to be a waste of resources.
Result: people continue to die and nobody, including the AI, notices; except now they have no hope of reprieve, because everyone thinks the problem is already solved.
Pessimistic Result: uploads are so widespread that humanity quietly goes extinct, cheering themselves onward the whole time.
Really Pessimistic Result: what replaces humanity are zombies, not in the qualia sense but in the real sense that there is some relevant chemical/physical process that is not being simulated because we didn't realize it was relevant or hadn't noticed it in the first place.
Possible Safeguards:
* Insist on quantum level accuracy (yeah right)
* Take seriously the general scenario of your FAI going wrong because you are wrong in the same way and fail to notice the problem.
* Be as cautious about destructive uploads as you would be about, say, molecular nanotech.
* Make sure your knowledge of neuroscience is at least as good as your knowledge of computer science and decision theory before you advocate digital immortality as anything more than an intriguing idea that might not turn out to be impossible.
[link] Book review: Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy
I review William Hirstein's book Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy, in which he proposes a way of connecting the brains of two different people together so that when person A has a conscious experience, person B may also have the same experience. In particular, I compare it to my and Harri Valpola's earlier paper Coalescing Minds, in which we argued that it would be possible to join the brains of two people together in such a way that they'd become a single mind.
Fortunately, it turns out that the book and the paper are actually rather nicely complementary. To briefly summarize the main differences, we intentionally skimmed over many neuroscientific details in order to establish mindmelding as a possible future trend, while Hirstein extensively covers the neuroscience but is mostly interested in mindmelding as a thought experiment. We seek to predict a possible future trend, while Hirstein seeks to argue a philosophical position: Hirstein focuses on philosophical implications while we focus on societal implications. Hirstein talks extensively about the possibility of one person perceiving another’s mental states while both remaining distinct individuals, while we mainly discuss the possibility of two distinct individuals coalescing together into one.
I expect that LW readers might be particularly interested in some of the possible implications of Hirstein's argument, which he himself didn't discuss in the book, but which I speculated on in the review:
Most obviously, if another person’s conscious states could be recorded and replayed, it would open the doors for using this as entertainment. Were it the case that you couldn’t just record and replay anyone’s conscious experience, but learning to correctly interpret the data from another brain would require time and practice, then individual method actors capable of immersing themselves in a wide variety of emotional states might become the new movie stars. Once your brain learned to interpret their conscious states, you could follow them in a wide variety of movie-equivalents, with new actors being hampered by the fact that learning to interpret the conscious states of someone who had only appeared in one or two productions wouldn’t be worth the effort. If mind uploading was available, this might give considerable power to a copy clan consisting of copies of the same actor, each participating in different productions but each having a similar enough brain that learning to interpret one’s conscious states would be enough to give access to the conscious states of all the others.
The ability to perceive various drug- or meditation-induced states of altered consciousness while still having one’s executive processes unhindered and functional would probably be fascinating for consciousness researchers and the general public alike. At the same time, the ability for anyone to experience happiness or pleasure by just replaying another person’s experience of it might finally bring wireheading within easy reach, with all the dangers associated with that.
A Hirstein-style mind meld might possibly also be used as an uploading technique. Some upload proposals suggest compiling a rich database of information about a specific person, and then later using that information to construct a virtual mind whose behavior would be consistent with the information about that person. While creating such a mind based on just behavioral data makes questionable the extent to which the new person would really be a copy of the original, the skeptical argument loses some of its force if we can also include in the data a recording of all the original’s conscious states during various points in their life. If we are able to use the data to construct a mind that would react to the same sensory inputs with the same conscious states as the original did, whose executive processes would manipulate those states in the same ways as the original, and who would take the same actions as the original did, would that mind then not essentially be the same mind as the original mind?
Hirstein’s argumentation is also relevant for our speculations concerning the evolution of mind coalescences. We spoke abstractly about the “preferences” of a mind, suggesting that it might be possible for one mind to extract the knowledge from another mind without inheriting its preferences, and noting that conflicting preferences would be one reason for two minds to avoid coalescing together. However, we did not say much about where in the brain preferences are produced, and what would actually be required for e.g. one mind to extract another’s knowledge without also acquiring its preferences. As the above discussion hopefully shows, some of our preferences are implicit in our automatic habits (the things that we show we value with our daily routines), some in the preprocessing of sensory data that our brains carry out (the things and ideas that are “painted with” positive associations or feelings), and some in the configuration of our executive processes (the actions we actually end up doing in response to novel or conflicting situations). (See also.) This kind of breakdown seems like very promising material for some neuroscience-aware philosopher to tackle in an attempt to figure out just what exactly preferences are; maybe someone has already done so.
Bad news for uploading
Recently, the Blue Brain Project published a paper arguing that neurons don't form synapses at locations determined by learning, but just wherever they happen to bump into each other. See video and article here.
For those hoping to upload their brains by mapping out and virtually duplicating all the synapses, this means that approach won't work. The synapse locations do not differ from human to human in any useful way; learning must instead be encoded in some modulation of each synapse's function.
IJMC Mind Uploading Special Issue published
The International Journal of Machine Consciousness recently published its special issue on mind uploading. The papers are paywalled, but Ben Goertzel, the editor of the issue, has put together a page that links to the authors' preprints of the papers. Preprint versions are available for most of the papers.
Below is a copy of the preprint page as it was at the time that this post was made. Note though that I'll be away for a couple of days, and thus be unable to update this page if new links get added.
In June 2012 the International Journal of Machine Consciousness (edited by Antonio Chella) published a Special Issue on Mind Uploading, edited by Ben Goertzel and Matthew Ikle’.
This page gathers links to informal, “preprint” versions of the papers in that Special Issue, hosted on the paper authors’ websites. These preprint versions are not guaranteed to be identical to the final published versions, but the content should be essentially the same. The list below contains the whole table of contents of the Special Issue; at the moment links to preprints are still being added to the list items as authors post them on their sites.
- BEN GOERTZEL and MATTHEW IKLE’
- RANDAL A. KOENE
- SIM BAMFORD
- RANDAL A. KOENE
- AVAILABLE TOOLS FOR WHOLE BRAIN EMULATION, DIANA DECA
- KENNETH J. HAYWORTH
- NON-DESTRUCTIVE WHOLE-BRAIN MONITORING USING NANOROBOTS: NEURAL ELECTRICAL DATA RATE REQUIREMENTS, NUNO R. B. MARTINS, WOLFRAM ERLHAGEN and ROBERT A. FREITAS, JR.
- MARTINE ROTHBLATT
- WHOLE-PERSONALITY EMULATION, WILLIAM SIMS BAINBRIDGE
- BEN GOERTZEL
- MICHAEL HAUSKELLER
- BRANDON OTO
- TRANS-HUMAN COGNITIVE ENHANCEMENT, PHENOMENAL CONSCIOUSNESS AND THE EXTENDED MIND, TADEUSZ WIESLAW ZAWIDZKI
- PATRICK D. HOPKINS
- DIGITAL IMMORTALITY: SELF OR 0010110?, LIZ STILLWAGGON SWAN and JOSHUA HOWARD
- YOONSUCK CHOE, JAEROCK KWON and JI RYANG CHUNG
- KAJ SOTALA
- KAJ SOTALA and HARRI VALPOLA
Why I Moved from AI to Neuroscience, or: Uploading Worms
This post is shameless self-promotion, but I'm told that's probably okay in the Discussion section. For context, as some of you are aware, I'm aiming to model C. elegans based on systematic high-throughput experiments - that is, to upload a worm. I'm still working on course requirements and lab training at Harvard's Biophysics Ph.D. program, but this remains the plan for my thesis.
Last semester I gave this lecture to Marvin Minsky's AI class, because Marvin professes disdain for everything neuroscience, and I wanted to give his students—and him—a fair perspective on how basic neuroscience might be changing for the better, and why it seems a particularly exciting field to be in right about now. The lecture is about 22 minutes long, followed by over an hour of questions and answers, which cover a lot of the memespace that surrounds this concept. Afterward, several students reported to me that their understanding of neuroscience was transformed.
I only just now got to encoding and uploading this recording; I believe that many of the topics covered could be of interest to the LW community (especially those with a background in AI and an interest in brains), perhaps worthy of discussion, and I hope you agree.
Are multiple uploads equivalent to extra life?
Suppose I have a choice between the following:
A) One simulation of me is run for 100 years, before being deleted.
B) Two identical simulations of me are run for 100 years, before being deleted.
Is the second choice preferable to the first? Should I be willing to pay more to have multiple copies of me simulated, even if those copies will have the exact same experiences?
Forgive me if this question has been answered before. I have Googled to no avail.
"Ray Kurzweil and Uploading: Just Say No!", Nick Agar
A new paper has gone up in the November 2011 JET: "Ray Kurzweil and Uploading: Just Say No!" (videos) by Nick Agar (Wikipedia); abstract:
There is a debate about the possibility of mind-uploading – a process that purportedly transfers human minds and therefore human identities into computers. This paper bypasses the debate about the metaphysics of mind-uploading to address the rationality of submitting yourself to it. I argue that an ineliminable risk that mind-uploading will fail makes it prudentially irrational for humans to undergo it.
The argument is a variant of Pascal's wager, which he calls Searle's wager. As far as I can tell, the paper contains mostly ideas he has already written about in his book; from Michael Hauskeller's review of Agar's Humanity's End: Why We Should Reject Radical Enhancement:
Starting with Kurzweil, he gives a detailed account of the latter’s “Law of Accelerating Returns” and the ensuing techno-optimism, which leads Kurzweil to believe that we will eventually be able to get rid of our messy bodies and gain virtual immortality by uploading ourselves into a computer. The whole idea is ludicrous, of course, but Agar takes it quite seriously and tries hard to convince us that “it may take longer than Kurzweil thinks for us to know enough about the human brain to successfully upload it” (45) – as if this lack of knowledge was the main obstacle to mind-uploading. Agar’s principal objection, however, is that it will always be irrational for us to upload our minds onto computers, because we will never be able to completely rule out the possibility that, instead of continuing to live, we will simply die and be replaced by something that may be conscious or unconscious, but in any case is not identical with us. While this is certainly a reasonable objection, the way Agar presents it is rather odd. He takes Pascal’s ‘Wager’ (which was designed to convince us that believing in God is always the rational thing to do, because by doing so we have little to lose and a lot to win) and refashions it so that it appears irrational to upload one’s mind, because the procedure might end in death, whereas refusing to upload will keep us alive and is hence always a safe bet. The latter conclusion does not work, of course, since the whole point of mind-uploading is to escape death (which is unavoidable as long as we are stuck with our mortal, organic bodies). Agar argues, however, that by the time we are able to upload minds to computers, other life extension technologies will be available, so that uploading will no longer be an attractive option. This seems to be a curiously techno-optimistic view to take.
John Danaher (User:JohnD) examines the wager, as expressed in the book, further in 2 blog posts:
- "Should we Upload Our Minds? Agar on Searle's Wager (Part One)"
- "Should we Upload Our Minds? Agar on Searle's Wager (Part Two)"
After laying out what seems to be Agar's argument, Danaher constructs the game-theoretic tree and continues the criticism above:
The initial force of the Searlian Wager derives from recognising the possibility that Weak AI is true. For if Weak AI is true, the act of uploading would effectively amount to an act of self-destruction. But recognising the possibility that Weak AI is true is not enough to support the argument. Expected utility calculations can often have strange and counterintuitive results. To know what we should really do, we have to know whether the following inequality really holds (numbering follows part one):
- (6) Eu(~U) > Eu(U)
But there’s a problem: we have no figures to plug into the relevant equations, and even if we did come up with figures, people would probably dispute them (“You’re underestimating the benefits of uploading”, “You’re underestimating the costs of uploading”, etc.). So what can we do? Agar employs an interesting strategy. He reckons that if he can show that the following two propositions hold true, he can defend (6):
- (8) Death (outcome c) is much worse for those considering uploading than living (outcome b or d).
- (9) Uploading and surviving (a) is not much better, and possibly worse, than not uploading and living (b or d).
...2. A Fate Worse than Death?
On the face of it, (8) seems to be obviously false. There would appear to be contexts in which the risk of self-destruction does not outweigh the potential benefit (however improbable) of continued existence. Such a context is often exploited by the purveyors of cryonics. It looks something like this:
You have recently been diagnosed with a terminal illness. The doctors say you’ve got six months to live, tops. They tell you to go home, get your house in order, and prepare to die. But you’re having none of it. You recently read some adverts for a cryonics company in California. For a fee, they will freeze your disease-ridden body (or just the brain!) to a cool -196 C and keep it in storage with instructions that it only be thawed out at such a time when a cure for your illness has been found. What a great idea, you think to yourself. Since you’re going to die anyway, why not take the chance (make the bet) that they’ll be able to resuscitate and cure you in the future? After all, you’ve got nothing to lose.
This is a persuasive argument. Agar concedes as much. But he thinks the wager facing our potential uploader is going to be crucially different from that facing the cryonics patient. The uploader will not face the choice between certain death, on the one hand, and possible death/possible survival, on the other. No; the uploader will face the choice between continued biological existence with biological enhancements, on the one hand, and possible death/possible survival (with electronic enhancements), on the other.
The reason has to do with the kinds of technological wonders we can expect to have developed by the time we figure out how to upload our minds. Agar reckons we can expect such wonders to allow for the indefinite continuance of biological existence. To support his point, he appeals to the ideas of Aubrey de Grey. De Grey thinks that, given appropriate funding, medical technologies could soon help us to achieve longevity escape velocity (LEV): the point at which new anti-aging therapies consistently add years to our life expectancies faster than age consumes them.
If we do achieve LEV, and we do so before we achieve uploadability, then premise (8) would seem defensible. Note that this argument does not actually require LEV to be highly probable. It only requires it to be relatively more probable than the combination of uploadability and Strong AI.
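The wager structure above can be sketched numerically. This is a minimal illustration of inequality (6), not Agar's or Danaher's actual analysis; every probability and utility below is an invented placeholder, since (as Danaher notes) there are no agreed figures to plug in.

```python
# Hypothetical payoff table for the "Searlian wager", using the
# outcome lettering from the post:
#   a: upload and Strong AI is true  -> survive as an upload
#   c: upload and Weak AI is true    -> death
#   b/d: don't upload                -> continued biological life (with LEV)

p_strong_ai = 0.5          # assumed probability that Strong AI is true

u_upload_survive = 100.0   # utility of outcome (a) - placeholder
u_death = 0.0              # utility of outcome (c) - placeholder
u_bio_life = 90.0          # utility of (b)/(d); premise (9) says this is
                           # not much worse than (a), so it is set close to it

def eu_upload(p_strong):
    """Expected utility of uploading, Eu(U)."""
    return p_strong * u_upload_survive + (1 - p_strong) * u_death

def eu_not_upload():
    """Expected utility of not uploading, Eu(~U)."""
    return u_bio_life

# Premise (6), Eu(~U) > Eu(U), holds under these assumptions: 90 > 50
print(eu_not_upload() > eu_upload(p_strong_ai))
```

Note how the argument's shape emerges: as long as death (c) is far worse than biological life (premise 8) and uploaded survival (a) is not much better than it (premise 9), not uploading wins even for fairly high probabilities of Strong AI.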
...3. Don’t you want Wikipedia on the Brain?
Premise (9) is a little trickier. It proposes that the benefits of continued biological existence are not much worse (and possibly better) than the benefits of Kurzweil-ian uploading. How can this be defended? Agar provides us with two reasons.
The first relates to the disconnect between our subjective perception of value and the objective reality. Agar points to findings in experimental economics that suggest we have a non-linear appreciation of value. I’ll just quote him directly since he explains the point pretty well:
For most of us, a prize of $100,000,000 is not 100 times better than one of $1,000,000. We would not trade a ticket in a lottery offering a one-in-ten chance of winning $1,000,000 for one that offers a one-in-a-thousand chance of winning $100,000,000, even when informed that both tickets yield an expected return of $100,000....We have no difficulty in recognizing the bigger prize as better than the smaller one. But we don’t prefer it to the extent that it’s objectively...The conversion of objective monetary values into subjective benefits reveals the one-in-ten chance at $1,000,000 to be significantly better than the one-in-a-thousand chance at $100,000,000 (pp. 68-69).
How do these quirks of subjective value affect the wager argument? Well, the idea is that continued biological existence with LEV is akin to the one-in-ten chance of $1,000,000, while uploading is akin to the one-in-a-thousand chance of $100,000,000: people are going to prefer the former to the latter, even if the latter might yield the same (or even a higher) payoff.
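The arithmetic behind Agar's lottery example can be made concrete. The sketch below is illustrative, not Agar's own model: it assumes a concave subjective-value function (here a square root, a standard stand-in for diminishing marginal utility) and shows that two lotteries with identical expected monetary value can differ sharply in expected subjective value.

```python
import math

def expected_value(p, prize):
    """Expected monetary value of a lottery: probability times prize."""
    return p * prize

def expected_utility(p, prize, u=math.sqrt):
    """Expected subjective value under a concave utility function u.
    sqrt is an assumed illustration, not a claim about Agar's model."""
    return p * u(prize)

safe  = (0.10, 1_000_000)      # one-in-ten chance of $1,000,000
risky = (0.001, 100_000_000)   # one-in-a-thousand chance of $100,000,000

# Both tickets have the same expected monetary return of $100,000...
print(expected_value(*safe), expected_value(*risky))    # 100000.0 100000.0

# ...but under diminishing marginal utility the safer ticket dominates.
print(expected_utility(*safe))   # 0.10  * sqrt(1e6) = 100.0
print(expected_utility(*risky))  # 0.001 * sqrt(1e8) = 10.0
```

Any strictly concave utility function yields the same qualitative result, which is why risk aversion over large prizes falls out of diminishing marginal utility rather than depending on the particular curve chosen.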
I have two concerns about this. First, my original formulation of the wager argument relied on the straightforward expected-utility-maximisation principle of rational choice. By appealing to the risks associated with the respective wagers, Agar would seem to be incorporating some element of risk aversion into his preferred rationality principle. This would force a revision of the original argument (premise 5 in particular), albeit one that works in Agar's favour. Second, the use of subjective valuations might affect our interpretation of the argument. In particular, it raises the question: is Agar saying that this is how people will in fact react to the uploading decision, or that this is how they should react?
One point is worth noting: the asymmetry between uploading and cryonics is deliberate. There is nothing intrinsic to cryonics that distinguishes it from Searle's wager with 'destructive uploading', because one can always commit suicide and then be cryopreserved (symmetrical with committing suicide and then being destructively scanned, or committing suicide by being destructively scanned). The asymmetry exists as a matter of policy: the cryonics organizations refuse to take suicides.
Overall, I agree with the two quoted people: there is a small intrinsic philosophical risk to uploading, as well as the obvious practical risk that it won't work, and this means uploading does not strictly dominate life-extension or other actions. But this is not a controversial point, and it has already been embraced in practice by cryonicists in their analogous way (and we can expect any uploading to be either non-destructive or post-mortem). To the extent that Agar thinks this is a large or overwhelming disadvantage for uploading ("It is unlikely to be rational to make an electronic copy of yourself and destroy your original biological brain and body."), he is incorrect.
Will the ems save us from the robots?
At the FHI, we are currently working on a project around whole brain emulations (WBE), or uploads. One important question is whether getting to whole brain emulations first would make subsequent AGI creation
- more or less likely to happen,
- more or less likely to be survivable.
If you have any opinions or ideas on this, please submit them here. No need to present an organised overall argument; we'll be doing that. What would help most are unusual suggestions we might not have thought of for how WBE would affect AGI.
EDIT: Many thanks to everyone who suggested ideas here, they've been taken under consideration.
New Q&A by Nick Bostrom
Underground Q&A session with Nick Bostrom (http://www.nickbostrom.com) on existential risks and artificial intelligence with the Oxford Transhumanists (recorded 10 October 2011).
[paper draft] Coalescing minds: brain uploading-related group mind scenarios
http://www.xuenay.net/Papers/CoalescingMinds.pdf
Abstract: We present a hypothetical process of mind coalescence, where artificial connections are created between two brains. This might simply allow for an improved form of communication. At the other extreme, it might merge two minds into one in a process that can be thought of as a reverse split-brain operation. We propose that one way mind coalescence might happen is via an exocortex, a prosthetic extension of the biological brain which integrates with the brain as seamlessly as parts of the biological brain integrate with each other. An exocortex may also prove to be the easiest route for mind uploading, as a person’s personality gradually moves away from the aging biological brain and onto the exocortex. Memories might also be copied and shared even without minds being permanently merged. Over time, the borders of personal identity may become loose or even unnecessary.
Like my other draft, this is for the special issue on mind uploading in the International Journal of Machine Consciousness. The deadline is Oct 1st, so any comments will have to be quick for me to take them into account.
This one is co-authored with Harri Valpola.
EDIT: Improved paper on the basis of feedback; see this comment for the changelog.
Permission for mind uploading via online files
Giulio Prisco made a blog post giving permission to use the data in his Gmail account to reconstruct an uploaded copy of him.
To whom it may concern:
I am writing this in 2010. My Gmail account has more than 5GB of data, which contain some information about me and also some information about the persons I have exchanged email with, including some personal and private information.
I am assuming that in 2060 (50 years from now), my Gmail account will have hundreds or thousands of TB of data, which will contain a lot of information about me and the persons I exchanged email with, including a lot of personal and private information. I am also assuming that, in 2060:
1) The data in the accounts of all Gmail users since 2004 is available.
2) AI-based mindware technology able to reconstruct individual mindfiles by analyzing the information in their aggregate Gmail accounts and other available information, with sufficient accuracy for mind uploading via detailed personality reconstruction, is available.
3) The technology to crack Gmail passwords is available, but illegal without the consent of the account owners (or their heirs).
4) Many of today's Gmail users, including myself, are already dead and cannot give permission to use the data in their accounts.
If all assumptions above are correct, I hereby give permission to Google and/or other parties to read all data in my Gmail account and use them together with other available information to reconstruct my mindfile with sufficient accuracy for mind uploading via detailed personality reconstruction, and express my wish that they do so.
Signed by Giulio Prisco on September 28, 2010, and witnessed by readers.
NOTE: The accuracy of the process outlined above increases with the number of persons who give their permission to do the same. You can give your permission in comments, Twitter or other public spaces.
Ben Goertzel copied the post and gave the same permission on his own blog. I made some substantial changes, such as adding a caveat to exclude the possibility of torture worlds (unlikely, I know, but it can't hurt), and likewise gave permission on my blog. Anders Sandberg comments on the idea.