Enhancing yourself is great. I would gladly plug in extra memory and better indexing algorithms to my brain. Throw in rocket boosters and indestructible bones and I'll empty my wallet. This I get and support. But what I have never understood is when people talk about uploading or transferring their consciousness. I wouldn't mind creating copies of myself, virtual or otherwise, but it wouldn't be me me. For some reason I have a very strong fear of continuity errors. Maybe you could fool me by replacing my fleshy brain part by part with mechanical hardware and then slowly outsourcing different functionalities to a cloud-based solution until I no longer have any physical presence. But I fear this will just lead to a day when I have a sudden realisation that I am not actually me me, and the following existential crisis will lead to unexpected outcomes.
This fear of continuity breaks is also why I would probably stay clear of any teleporters and the like in the future.
In case you haven't read it: https://existentialcomics.com/comic/1
But overall I agree, this "feeling" is partially the reason why I'm a fan of the "insert slightly-invasive mechanical components + outsource to an external device" strategy. As in, I do believe it's the most practical, since it seems to be roughly doable with non-singularity levels of technology, but it's also the one where continuity errors can't easily happen.
Minor nitpick, but in section IV I think it's unlikely that hemispherectomy and brain shrinkage can stack linearly while preserving a human-like consciousness, because successful hemispherectomy requires the preserved half to rewire some structures to take over the functions normally carried out by the removed half (source), whereas halving the amount of tissue greatly reduces the resources available for that. The situation reminds me of neural net compression: we can prune or use quantization, compressing the net by some factor, but the techniques don't stack perfectly because they eliminate some of the same redundancies.
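To make the overlap concrete, here's a toy numpy sketch (made-up numbers, not a real network): on heavy-tailed weights, pruning zeroes out the small-magnitude values, and a uniform quantizer's zero bin destroys mostly those same values, so the combined error ends up close to the worse of the two rather than their sum.

```python
import numpy as np

rng = np.random.default_rng(0)
# Heavy-tailed toy "weights": many small values, a few large ones.
W = rng.laplace(0.0, 1.0, (256, 256))

def prune(w, frac=0.5):
    # Zero out the smallest-magnitude fraction of the weights.
    thresh = np.quantile(np.abs(w), frac)
    return np.where(np.abs(w) >= thresh, w, 0.0)

def quantize(w, bits=4):
    # Uniform quantizer; small weights fall into the zero bin.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def rel_err(w):
    return np.linalg.norm(W - w) / np.linalg.norm(W)

print("prune only:       ", rel_err(prune(W)))
print("quantize only:    ", rel_err(quantize(W)))
print("prune + quantize: ", rel_err(quantize(prune(W))))
# The combined error is well below the sum of the individual errors:
# both techniques mostly destroy the same small-magnitude weights.
```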
Slightly more relevant is the evolutionary argument that any easy change to the brain that decreases its power consumption or volume must give up something very evolutionarily valuable, since brains use a huge amount of energy and increase deaths in childbirth. That is, the architecture of meat brains isn't too inefficient. While this doesn't refute the idea of transferring consciousness gradually, it makes me skeptical that we can do so economically with general-purpose hardware.
How to actually switch to an artificial body – Gradual remapping
Note: Please feel free to message me about inconsistencies, references you think should be added (or claims you think the current references don't back, or where there's evidence to the contrary). But please note, this is *obviously* not meant to be a definitive essay on the concept. In other words: "epistemic status: low-to-mid confidence, but I doubt you can go much beyond mid confidence on these types of subjects".
One of the oldest transhumanist tropes is the idea of transferring your brain into a computer, thus achieving immortality and beyond-human abilities… etc, etc.
I am one of the people who believe this is actually possible, as in, not in the “Oh, in the year 3000 people will be able to do this” sort of way but in the “Barring tremendous progress in accurate delivery of Okazaki factors, thymus transplants and various mTOR inhibitors and drugs that are marketed as something other than an mTOR inhibitor but are actually probably just a confounded mTOR inhibitor, mind-uploading will become my passion project once I hit my 30s” sort of way.
I Why popular portrayals of the idea are impractical
Considering this, it sort of distresses me how popular culture portrays this. When I say popular culture here I really mean “every single thing I’ve ever read or seen on the subject”… though chances are I might not be looking in the correct place?
To give a few examples:
a) Wikipedia’s page on the subject: https://en.wikipedia.org/wiki/Mind_uploading
b) Wikipedia’s page on transhumanism in general: https://en.wikipedia.org/wiki/Transhumanism
c) One of the most popular doomed-ventures/scams around this area, the 2045 Initiative:
http://2045.com/
d) One of the more realistic doomed-ventures/scams around this area, Nectome: https://nectome.com/the-case-for-glutaraldehyde-structural-encoding-and-preservation-of-long-term-memories/
e) That one episode of Black Mirror that reminded everyone The Go-Go’s were a thing and might have just been an inside joke about making a bunch of neckbeards watch a half-hour lesbian soap opera: https://www.youtube.com/watch?v=8dnn31TBoM4
Basically, the ideas presented are as follows:
When I say “science fiction” I mean “Yes, this is theoretically not impossible as far as we understand the world, but it’s in the realm of Dyson spheres, gray goo and digital picobots; we are a few hundred generations of tooling away from even figuring out if we can do this and what the implications would be”.
When I say “magic” I mean “This pretty literally breaks the laws of physics/reality as agreed upon via strong experimental evidence”.
II The actual problem
I believe the reason most of the ideas here sound so far-fetched, at least to me, is not that the problem is fundamentally that hard, but that they refuse to tackle the actual question being begged here:
“What exactly are we trying to transfer and/or preserve?”
The short TL;DR is: Consciousness.
The longer TL;DR might be something like:
The full picture is something like:
I’m not exactly sure what I want to transfer and/or preserve, but ideally, I’d like it to be similar to whatever is happening right now. Much the same way US judges can’t define “porn” but can know it when they see it, we can’t define “consciousness” but can know it when we are experiencing it (or… being it… rather?).
III A brief detour into non-dualism
Now, in order to even accept the idea of being able to transfer “our brain” or “our consciousness” or “our feeling of self” or “our contiguous feeling of being a self-aware agentic entity”, we must be pretty sold on the idea of non-dualism. Basically, accept the fact that our feeling of “being us” is simply the interaction of a bunch of processes going on in the brain and in the body.
I think there’s an “infantile” way to think about non-dualism, as perfectly illustrated in the Sam Harris book Free Will.
The problem with this infantile way is that it basically boils down to “the self is nothing” or “the self is everything”. I don’t think this view really holds up, at least from a transhumanist perspective, partially because it seems unable to answer the fundamental question “Why do you then even care if you live or die?” or even “Assuming nobody would be saddened by it, what’s the argument against just killing yourself right now?”.
I would claim some backing for this position by looking at Buddhism, namely the fact that once it ended up defining “enlightenment” as “kinda like being dead”, it had to cram in the idea of “Oh, but you can’t just kill yourself, there’s reincarnation you see; achieving enlightenment is the only way to truly die”.
A much better non-dualistic way of thinking about this is, in my opinion, closer to the kind of stuff Daniel Dennett talks about in his books: From Bacteria to Bach and Back, The Mind’s I and Consciousness Explained. I truly recommend all of them, especially if you have a “typical” sciency mentality… Daniel Dennett is basically unique among modern philosophers of mind in that he seems to have a deep understanding of computers, machine learning, neuroscience, biology and the scientific method, so you don’t end up reading him and feeling like his main hypothesis is based on him misunderstanding a simple phenomenon or lacking information that’s “common knowledge” in a specific research niche.
I can’t really do justice to his views in a few paragraphs, which is why I’m really trying to push people to read him, but it boils down to something like:
Basically, the non-naive view of non-dualism accepts the fact that there’s nothing “special” or “magical” about human brains, but also that the way we think of “self” is relatively OK, and that the feeling of “being ourselves” or “being conscious” is about as real as anything else in the universe.
IV What we want to preserve might not be so large
So, what we want to preserve could basically be boiled down to “the feeling of being an agentic self that experiences the world, has memory of its past activity, is able to create new memories, feels kinda-human, and feels and to some extent is similar to other humans”.
But, what I’m sure most people would agree we don’t really need/want to preserve are things like:
Indeed, I think once you boil it down to some thought experiments, people are able to give up even more ground.
Let me lay out some ideas/examples to back this up:
Some references discussing the removal/damage/role of the auditory and visual cortices, which I find interesting and largely supportive of this view:
So, now ask yourself:
Would a hypothetical paraplegic (1) patient, with more than half of his brain mass removed (4), another half of his brain mass “removed from consciousness” (3), and potentially even more brain mass removed from the cerebral cortices (2)… arguably still be “conscious”, still be a “self”? A non-ideal way to live… but still a life, and not so bad if the advances of science could bring you back to full speed in a few dozen years.
Now, this is obviously a pretty big “if”, but under this optimistic model a conscious human-like brain could simply be:
Please assume this is an over-simplified model of the human brain, an overly optimistic example, and not fully factually correct… that’s what I’m assuming.
My main point here isn’t to describe creating a human brain; it’s to say: it seems intuitive that we don’t need all of it to be “conscious”, to “experience being a self”.
V Gradual transfer
So, assuming that:
How could we realistically try to transfer “I” into something that is not our body?
I think the answer boils down to “gradually”. Gradual transfer would essentially be step-by-step transfer of brain functionality onto peripheral devices, giving said devices more and more computing power and responsibility, allowing them to communicate with each other, and hoping that over time you can basically transfer the sensation of “I” to said devices.
Let me give a few examples of where we currently are in terms of such peripheral devices; not designs that exist on paper, but devices that exist today, that are being used by humans, and that can be bought by you (in some cases only with a doctor’s approval though):
a) Using the somatic nerves in part of your tongue to see. Now, I’m not sure if any “sighted” person has tried a device like BrainPort, but judging by the testimonies of people who were born with sight and later lost it, it seems to work pretty much like seeing, just with a very low resolution.
b) BCI-controlled robotic arms… this requires a pretty “safe” direct implant on the surface of the motor cortex. And in case you were wondering, it’s also been recently done, to some extent, with noninvasive sensors (sitting on top of your skin). Note, this includes “feeling” the actual arm.
c) Using computers for language processing and mathematics. This is something we all do, to some extent. Ever used a speed-reader? Congrats, you just out-sourced/modified a big part of how your brain reads language. Ever had some complex arithmetic to compute and pulled out a Python shell or a phone? Yep, outsourcing your ability to do math.
Well, I’d argue that this is basically the area we should be focused on if we care about consciousness/self transfer.
For example, think about robotic arms controlled via RL training, where you can give even more complex tasks like “pick up that object”. Given the previously shown robotic arm, I see no reason we couldn’t control one where we just have to “think” about the movement and then have the arm do it (e.g. via electrodes connected to the prefrontal cortex).
If you look at various DIY hacks using noninvasive BCIs, I think this enters the realm of the relatively trivial, if you spend enough time training your brain and can afford a few hundred thousand for a robotic arm to play around with. Here, are, four, examples.
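To make the “think about the movement and have the arm do it” step concrete, here is a minimal sketch of the decoding pipeline such hacks typically use, with synthetic data standing in for real recordings and hypothetical command names; this is a sketch of the general technique, not any specific device’s API:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)

# Synthetic stand-in for training data: 200 recorded trials of 16
# band-power features, each labeled with the movement the user imagined.
COMMANDS = ["rest", "reach", "grasp", "release"]
n_trials, n_features = 200, 16
X = rng.normal(0, 1, (n_trials, n_features))
y = rng.integers(0, len(COMMANDS), n_trials)
X += 2.0 * np.eye(len(COMMANDS), n_features)[y]  # fake class-dependent signal

# Simple linear classifiers like LDA are the workhorse of noninvasive BCI decoding.
decoder = LinearDiscriminantAnalysis().fit(X, y)

def on_new_window(features: np.ndarray) -> None:
    """Decode one window of features and issue the matching arm command."""
    cmd = COMMANDS[decoder.predict(features[None, :])[0]]
    print(f"arm <- {cmd}")  # stand-in for the actual robotic-arm call

on_new_window(X[0])  # in a real setup this fires on each new signal window
```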
I think a similar thing could arguably be feasible for arithmetic, where we basically attach the calculator to the prefrontal cortex, or to our temporal lobe. Or attaching a page-scanning + summary-making device (say, using a simple camera and a fancy attention network) close to one of our language processing centers and teaching ourselves to “read a book diagonally” using it, then deciding if we want to go more in depth and continue reading it normally (or using a second speed-reader peripheral).
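The “fancy attention network” half of that peripheral is arguably the easy part, since off-the-shelf summarization models already exist. A sketch of the scan → summarize step, assuming the Hugging Face transformers library (the camera/OCR input and the neural interface are the parts that don’t exist yet):

```python
# Software half of a hypothetical "page scanning + summary" peripheral.
# Assumes the Hugging Face `transformers` library; downloads a model on first use.
from transformers import pipeline

summarizer = pipeline("summarization")

def skim(page_text: str) -> str:
    """Collapse one page of text into a 'diagonal reading' gist."""
    return summarizer(page_text, max_length=60, min_length=10)[0]["summary_text"]

# In the imagined device, `page` would come from the camera + OCR,
# and the gist would be piped toward a language processing center.
page = ("One of the oldest transhumanist tropes is the idea of transferring "
        "your brain into a computer, thus achieving immortality and "
        "beyond-human abilities. Popular portrayals, however, tend to rely "
        "on scanning technology that is nowhere near existing.")
print(skim(page))
```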
Ok, in saying “attach some electrodes to the frontal lobe” I might seem to be doing a lot of hand-waving, basically invoking the good old “once science progresses enough this will be trivial”. But I will point you back to examples a) and b), the electrode-controlled arm and the tongue-perceived camera. These seem basically just as “hand-wavy”: the locations they were placed in were not selected by optimizing for specific neural pathways, they were selected for ease of access. The brain literally “learns” how to interpret the data coming to and from them and maps it onto the correct locations.
The science behind how that happens is basically hand-wavy, since we don’t really understand it on a deep level, but that’s not bad, that’s amazing. We can skip one of the hardest steps of making a brain, understanding how it works on a deep level, because the brain itself is able to adapt; it’s able to map functionality onto different parts of itself, onto the nervous system, and onto external devices. This has been known for quite a while, in that the brain can map and re-map different functions onto different parts of itself. The real question is how much we can learn to map onto more complex peripheral devices.
What seems to be lacking in current devices is a thing that we closely associate with consciousness, namely introspection and memory access & storage. Memory access/storage hasn’t been tested in humans, partially due to physical limitations (as in, the parts related to memory are literally deep inside our brain, and reaching them with electrodes is very dangerous). But, in principle, we can reconstruct images from a brain fMRI.
fMRI resolution is a topic I’m not qualified to speak on; after digging into the subject for a previous project, it seems like it’s an area with plenty of *s.
Suffice to say, the best resolutions you will find outside of labs might be something like 500μm at a frequency of 10Hz, to be generous. Based on what I’ve found (reference 1, reference 2), it seems like this could improve, but there’s a “hard limit” at a few hundred or a few dozen microns (no frequency information; let’s assume ~100Hz?).
The image reconstruction study used 2000μm voxels (that is, (2000μm)^3 of volume per voxel), and doesn’t specify the frequency (maybe not that relevant?).
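For intuition, the core of such reconstruction studies is often a regularized linear map from voxel activity to image (or image-feature) space, fit on paired scan/stimulus data. A toy sketch with synthetic data, assuming scikit-learn (real studies use far more voxels and usually decode into feature space rather than raw pixels):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 paired samples of "voxel activity" and the
# 8x8 grayscale stimulus that evoked it.
n_samples, n_voxels, n_pixels = 500, 1000, 64
images = rng.random((n_samples, n_pixels))
mixing = rng.normal(0, 1, (n_pixels, n_voxels))  # fake image -> voxel response
voxels = images @ mixing + rng.normal(0, 0.5, (n_samples, n_voxels))

# Fit the decoder on 400 samples, reconstruct the held-out 100.
decoder = Ridge(alpha=10.0).fit(voxels[:400], images[:400])
recon = decoder.predict(voxels[400:])

print("mean absolute pixel error:", np.abs(recon - images[400:]).mean())
```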
With electrode arrays (the technology I’m mainly harping on about here), it seems that practical sizes go down to 10μm per electrode; below that, filtering out the noise is too hard for now.
But, suffice to say, I think that if we can accurately reconstruct an image from an fMRI, it’s within the realm of the possible right now to reconstruct or read a memory with sufficient electrodes. This argument is kinda far-fetched… however, considering people are studying this very thing in vivo and getting results, I’d say it’s not in the realm of science fiction; closer to 10 years, give or take 5.
To recapitulate:
VI Ok, here comes the hand-waving bit
So, assuming you can out-source a bunch of brain functionality to peripheral devices, you’d still have the issues of:
a) Those things not being “conscious” on their own, without the brain.
b) The brain being speckled with electrode arrays (we can have a few implanted relatively safely; once you go up to the 100s, simple things like infections might become an issue, and there are too many points of failure. Implanting through a single hole is doable, but then wire size and getting the electrodes into the correct places become an issue).
So how do you fix this ?
Well, you don’t map many peripherals to the brain at once; you map a few peripherals via electrode arrays implanted in specific places that allow for generic communication (e.g. the thalamus, various areas of the prefrontal cortex, and various areas in the visual and motor cortices). Then:
Take the simple arithmetic peripheral, for example. Instead of adding different discrete functions to it, you could try to map all the “math” functionality of your brain onto it. So suddenly, you’re not giving it numbers anymore; you’re giving it an image of an urn and 5 balls extracted from the urn, 3 blue and 2 black, and it tells you “probability of 2/5 that the next ball is black”.
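As a toy illustration of the interface such a peripheral might expose (the naive frequency estimate and all names here are mine, purely for concreteness):

```python
from collections import Counter

def next_ball_probability(draws, color):
    """Naive frequency estimate: P(next ball is `color`) ~ observed fraction.
    A fancier peripheral might use a Bayesian estimate instead."""
    counts = Counter(draws)
    return counts[color] / len(draws)

draws = ["blue", "blue", "black", "blue", "black"]  # 3 blue, 2 black
print(next_ball_probability(draws, "black"))  # 2/5 = 0.4
```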
Or, take the object recognition peripheral: instead of having it pipe labels or formed images to the brain, you have it classify them and pipe them to the appropriate circuit (e.g. “this is an image of an equation, I shall send it to the math peripheral”, or “this is an image of something I could stumble on, I shall send it to the motion-controlling peripheral”).
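In software terms, this routing step is just a classify-then-dispatch loop; a minimal sketch, with all peripheral names hypothetical:

```python
from typing import Any, Callable, Dict

# Hypothetical peripherals, registered by the category of input they handle.
def math_peripheral(image: Any) -> None:   print("math peripheral got:", image["label"])
def motion_peripheral(image: Any) -> None: print("motion peripheral got:", image["label"])
def default_to_brain(image: Any) -> None:  print("piped to brain:", image["label"])

ROUTES: Dict[str, Callable[[Any], None]] = {
    "equation": math_peripheral,
    "obstacle": motion_peripheral,
}

def classify(image: Any) -> str:
    return image["label"]  # stand-in for the recognition network's output

def route(image: Any) -> None:
    """Classify an incoming image and pipe it to the appropriate circuit."""
    ROUTES.get(classify(image), default_to_brain)(image)

route({"label": "equation"})  # -> math peripheral
route({"label": "obstacle"})  # -> motion-controlling peripheral
route({"label": "sunset"})    # -> no dedicated circuit, falls back to the brain
```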
a) You build peripherals specifically dedicated to understanding the other peripherals. Whenever you have a question about “how does this or that work?” or “why is this or that giving an error?”, you try to phrase it as a mental question for the evaluation peripheral, which then in turn analyzes the others and sends you back an answer.
b) You build a “debugging interface” into the peripherals, which would likely be more computationally complex than the peripherals themselves, but doable. Think of a complex network for tracking, classifying and creating bounding boxes for objects; this is partially “split” into functions, such that you can try to add a secondary debugging algorithm on top, an algorithm that you can query to ask things like “roughly why did you classify this as {Y}?” or “around what point did you decide area {X} did not contain a relevant object?” and get answers like “Because this, this and this rough shape that came up when I applied the convolution operation strongly remind me of a {Y}” or “Because the color in area {X} is pretty homogeneous and I see no shadows”.
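The closest existing primitive for the “roughly why did you classify this as {Y}?” query is gradient-based saliency: ask the network which input pixels its decision was most sensitive to. A minimal PyTorch-style sketch, with a tiny placeholder network standing in for the peripheral’s real classifier:

```python
import torch
import torch.nn as nn

# Placeholder classifier; a real debugging interface would wrap the
# peripheral's actual trained network.
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

def why_did_you_say(image: torch.Tensor, label: int) -> torch.Tensor:
    """Per-pixel saliency: how much each pixel pushed the net toward `label`.
    A crude mechanical answer to 'roughly why did you classify this as {Y}?'."""
    image = image.clone().requires_grad_(True)
    score = net(image.unsqueeze(0))[0, label]
    score.backward()
    return image.grad.abs().sum(dim=0)  # collapse the color channels

saliency = why_did_you_say(torch.rand(3, 32, 32), label=7)
print("most decision-relevant pixel:",
      divmod(saliency.argmax().item(), saliency.shape[1]))
```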
If these seem like pretty complex questions to ask of a neural network, that’s because they are. There’s some research in the area of network analysis (e.g. reference 1, reference 2), but it’s mostly focused on trying to understand the internal equations in order to optimize them, or to spot the values and times during training which lead to errors.
There’s no field of analyzing neural networks in order to interpret them in a human-brain-compatible way, or of building neural networks that are more human-brain-compatible in the way they operate. But conceptually, such interpretations are possible.
See for example deep-dream-style projects, which simply extract information from the first few layers of a network that’s fed noise… you can recognize “dog like” and “cat like” things in the outputs (because the networks were trained on such images), and from those sorts of patterns it’s pretty easy to generalize to “because it reminded me of this specific shape, which I associate with dogs”.
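Mechanically, those projects do gradient ascent on the input itself: start from noise and nudge the pixels to excite a chosen layer, then look at what appears. A bare-bones sketch (again with an untrained placeholder network; with a trained one, the familiar dog/cat-like textures emerge):

```python
import torch
import torch.nn as nn

# Placeholder for the "first few layers" of a trained vision network.
features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

def dream(steps: int = 100, lr: float = 0.1) -> torch.Tensor:
    """Gradient-ascend an image, starting from noise, to maximize activations."""
    img = torch.rand(1, 3, 64, 64, requires_grad=True)
    for _ in range(steps):
        features(img).norm().backward()  # how strongly the layer responds
        with torch.no_grad():
            img += lr * img.grad / (img.grad.norm() + 1e-8)
            img.grad.zero_()
    return img.detach()

hallucination = dream()  # the shapes here are what the net "is reminded of"
```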
Again, this assumes that senses, memory, feeling of location in space, agentic decision making, the ability to read things, the ability to map things into words and pronounce them, and other such functions basically accumulate into the conscious experience that “I” call “I”.
What we’d be pretty much doing here, by out-sourcing functionality to peripherals and letting them handle more and more complex tasks, introspect about themselves, store things in a long-term memory and dictate, record and “think about” the behavior of other peripherals… is more or less gradually letting them do the things that we associate with “I”.
So, provided enough space for computation (and again, these are standard computers you can connect to a power source or put in a car; we can have plenty of computing power here), the feeling of “I” might transfer to “these complex circuits external to my body that do part of my thinking, memorization and sensory perception for me”. And if this is true, we could progress by transferring more and more of the things that we hold to be “important” about “ourselves” onto the peripherals, essentially moving “I” onto the peripherals and leaving the brain as redundant storage for some of the information, a double-checker for the decisions of the peripherals, and storage for information we consider irrelevant (but might decide we care about in the future).
VII In Conclusion
The more I write about this, the more I realize it’s hard to paint a good picture of this view of consciousness transfer. I am not very certain of my knowledge around the subject (and thus of how possible it is), double-checking that knowledge is time-consuming, and trying to make it “make sense” in under 5,000 words is even more difficult.
I think it’s very reasonable to say that this view of gradually giving up brain functions to peripherals, and tweaking said peripherals until we “understand them”, until they “become part of us”, is a much more realistic view, given current and near-future technology, than other proposed ideas for consciousness transfer.
It sounds a bit more strange and convoluted, but that is exactly because it’s achievable. A Dyson sphere is much easier to explain than a plane, but that’s not because building a Dyson sphere is easier than building planes; it’s exactly because building a plane is something we can do, so we can’t just skim over the “irrelevant details”, because we realize they are important. For a Dyson sphere we can just wave and say “Ahm, super-heat-resistant carbon nanotubes and super-fast-absorption solar panels and {insert SF technology}” and we have one. But get 0.x% of the alloy composition for the wings of a Boeing 747 wrong and suddenly it’s not a plane anymore.
Scanning a human brain and uploading it to a computer might seem a bit more “intuitive” than gradually releasing control to external devices and hoping that they “gain consciousness which we consider to be part of ourselves”, but that’s because in the former scenario we use the hand-wavy idea of “scan a human brain”.
But scanning a brain is not currently possible; it’s so far away you can’t even name the chain of technologies required to make it possible.
Furthermore, scanning a brain through {magic}, transferring it to a computer through {magic}, and assuming that said computer then feels like “I” because {magic}, is an improbable achievement. With the gradual transfer method we actually have the amazing option of stopping: if it doesn’t seem to work, if the thing we are transferring to doesn’t feel like “I” or like “consciousness”, then we can tweak it, we can change our approach, or we can give up and not be dead.
Is the idea I presented here possible with current technology (or technology that will be available in 5-10 years, e.g. 4nm transistors)? I don’t know. I do know that it doesn’t seem that far-fetched, and most of the components needed are already there; there’s no need for exponential improvement, linear improvement will do. Plus, the most complex part of the idea, the remapping of function, is left to the most intelligent and well-suited entity in the system: the brain itself.
It is also more plausible in that the re-mapping doesn’t have to be instant (like a scan + transfer); you could well take 30-40 years to remap your brain, adding little bits at a time, so that by the time your body is failing you can just “decouple” from it. You can be “you”, a consciousness that feels contiguous, just in a different artificial body, running on different circuitry. We are basically switching from reading a few exabytes of data in seconds to allowing said exabytes of data to move themselves over dozens of years, while allowing the pruning of irrelevant information (e.g. the ability to control the old body).
Finally, my goal here is not to claim “this is the definitive strategy and the perfect description of it”, but rather, to say that this seems like a better direction to focus attention towards than the quack brain-scan ideas.
It’s still well in the realm of “transhumanist quackery”, but you could actually design experiments and devices to accomplish this, at least to get part of the way there.