Last December, on New Year’s Eve, I got a phone call in the middle of the night. “It’s your grandmother,” my mom said, “she’s in the ICU.” My grandma has been battling Parkinson’s Disease (PD) for more than eight years. Last summer was the first time I had seen her in five years. Throughout our time apart, her powerful voice in our phone calls had painted an illusion of her old self in my head, and I was not prepared for the reality of her physical deterioration. As I sprinted towards the subway station, cold air scraping against my face like blades, I felt my rose-tinted mirage disintegrate with each passing exhalation. 

“Her head is crystal clear,” a nurse told me the next morning, after my grandma’s condition had stabilized. “Sometimes she can't speak because of her tremors, but her mind is always acute.” I imagined how suffocating it must have been for my grandma, a natural conversationalist forced into silence. Currently, there is no cure for PD; in fact, for all neurodegenerative diseases, the gradual decay of neurons translates into physical and psychological symptoms that are hard to address. But what if we could preserve someone’s mind and transfer it into a new body, freeing them from neural damage? If this became possible, it would alleviate the suffering endured by millions. For this reason, I am driven to research whole brain emulation, henceforth referred to as mind uploading, and whether someone’s mind upload would be a valid instantiation of the original person.

According to the Encyclopedia of Sciences and Religions, mind uploading is “the belief that human beings can transmigrate into machine bodies” (Geraci). This definition is vague, but it illustrates what we want to achieve via mind uploading: to preserve a copy of a human mind and download it onto various artificial bodies, like computers and androids. In 2005, IBM and the Swiss Federal Institute of Technology Lausanne launched the Blue Brain Project, an effort to develop “all the biological algorithms, scientific processes and software needed to digitally reconstruct and simulate the brain” (Blue Brain Project), starting with mouse brains. In 2013, the European Union launched the Human Brain Project, a large-scale research effort seeking to emulate the human brain. This project has made significant breakthroughs, including digital reconstructions of certain brain regions that enable simulations of brain activity (Human Brain Project).

In addition to these simulation-based methods, mind uploading also encompasses invasive procedures that directly alter the biological brain; however, my paper focuses on the former type of mind uploading: digital simulations that do not interfere with the continued existence of the original brain. These simulations, still speculative, would perfectly emulate every functionality of the biological brain. In other words, every cognitive task someone is capable of could be replicated by their mind upload.

Before further discussion, I must clarify what “upload” means. I use philosopher Jonah Goldwater’s definition: to upload something means to “multiply instantiate” it (Goldwater 233). In other words, we create multiple copies of something that are physically distinct from but qualitatively equivalent to the original. Goldwater claims that “abstract objects” can be multiply instantiated through uploading while “concrete particulars” cannot (Goldwater 233). Mind uploading is grounded upon the assumption that “human minds are patterns of neurochemical activity taking place in the body/brain” (Geraci), rendering them abstract objects. If these patterns could be perfectly replicated artificially, then minds could be recreated artificially as well. Those who subscribe to functionalism hold this assumption to be true: minds are defined by their functional properties, and a digital simulation can be said to be another instantiation of a mind if it can perform every function of the original (Levin).

When viewing minds through this framework, mind uploading seems like a sound way to cure neurodegenerative diseases. However, I argue that this method would be an invalid solution because minds are concrete particulars, not abstract objects: since the identity of a mind is tied to its physical brain, a mind upload would not be a valid instantiation of the original mind. Mind uploads, at their best, only capture certain cognitive capabilities of a person. Understanding this distinction is crucial because the functionalist view of minds understates the complexity of the human experience. In the following sections, I will (1) use Goldwater’s definition to further outline what kinds of entities can be uploaded, (2) give an overview of the functionalist argument prominent in the field of machine learning, and (3) refute this argument by drawing evidence from psychology and neuroscience, asserting that minds cannot be uploaded.

1. Definition 

What kinds of entities can be uploaded, and are minds among them? I use Goldwater's definition of uploadability, which calls objects that can be uploaded “abstract objects” and ones that cannot “concrete particulars” (Goldwater 233). Abstract objects are defined by the information they contain, meaning that they can exist in multiple locations and through multiple distinct physical manifestations. Concrete particulars, on the other hand, are defined by the physical medium in which their information is contained and can exist only through their original physical form (Goldwater 236). Goldwater uses the example of books versus paintings to illustrate this point. Books are abstract objects and are uploadable, while paintings are concrete particulars and are not. This is because a book is defined by the information it contains, not by any physical copy. Person A’s hardcover copy, person B’s paperback copy, and even person C’s digital copy are all valid instantiations of the book, even if none of them is the original printed copy. By contrast, a painting is defined by its original physical form: someone’s replica of the Mona Lisa is not really the Mona Lisa (Goldwater 237).
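
Programming offers a convenient analogue for this distinction: value equality (same information) versus object identity (same physical instance). A minimal Python illustration, mine rather than Goldwater's:

```python
book_text = list("Call me Ishmael.")   # a text, treated as pure information
hardcover = book_text                  # another name for the very same object
paperback = book_text.copy()           # a new object carrying identical content

print(paperback == hardcover)   # True:  qualitatively equivalent (same information)
print(paperback is hardcover)   # False: physically distinct instantiations

# On Goldwater's view, books are individuated by what '==' compares;
# paintings (and, I will argue, minds) by what 'is' compares.
```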

Thus, for mind uploads to be instantiations of minds, minds must be abstract objects defined by their neural and cognitive information, not by the physical brain containing such information. In the following section, I will briefly outline the functionalist argument that defines minds as abstract objects. In the section following that, I will argue that minds are concrete particulars that cannot be separated from the biological brain. 

2. Minds are Abstract Objects: The Functionalist Argument

Functionalism emerged during the Cognitive Revolution, an intellectual movement of the 1950s in which scholars began to study human minds using computational models (“Information Processing”). As outlined in the Stanford Encyclopedia of Philosophy, functionalists define minds by the functions they perform, not by their physical manifestations (Levin). One of the earliest proponents of this view was Alan Turing, a founding scholar of computer science. In his famous 1950 paper “Computing Machinery and Intelligence,” Turing proposes the Imitation Game, later known as the Turing Test, as a way of measuring whether an artificial simulation of a mind is a mind. According to Turing, a simulation can be said to be a mind when, in response to given prompts, it can “provide answers that would naturally be given by a man” (Turing 435). He measures the extent to which a simulation is actually a mind by how well the simulation can mimic a mind’s behavior. It does not matter that the simulation is physically different or arrives at its results in a vastly different way; a simulated mind is an instantiation of an actual mind if it can perfectly replicate everything that mind can output.
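
The test can be pictured as a simple protocol. Below is a minimal sketch of that protocol, not Turing's own formulation; the canned replies and the deliberately weak judge are placeholders:

```python
import random

# Toy stand-ins with canned answers so the sketch runs end to end.
def machine_reply(prompt):
    return {"What is 2+2?": "4", "Do you dream?": "Sometimes."}.get(prompt, "I am not sure.")

def human_reply(prompt):
    return {"What is 2+2?": "four", "Do you dream?": "Yes, often."}.get(prompt, "Hard to say.")

def naive_judge(prompt, answers):
    return random.randrange(2)  # a weak judge: guesses at random which answer is the machine's

def imitation_game(prompts, judge, rounds=1000):
    """Estimate how often the judge fails to spot the machine.

    Turing's criterion: the machine passes to the extent the judge
    does no better than chance (a score near 0.5 here).
    """
    fooled = 0
    for _ in range(rounds):
        prompt = random.choice(prompts)
        answers = [("machine", machine_reply(prompt)), ("human", human_reply(prompt))]
        random.shuffle(answers)  # conceal which respondent is which
        guess = judge(prompt, [text for _, text in answers])
        if answers[guess][0] != "machine":  # the judge picked the human by mistake
            fooled += 1
    return fooled / rounds

print(imitation_game(["What is 2+2?", "Do you dream?"], naive_judge))
```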

Turing’s view that a mind’s function is its defining quality has since become the bedrock of later functionalism. Most influentially, philosopher and mathematician Hilary Putnam argued that the mental states composing our minds can be understood by their functional roles. For instance, the mental state of thirst is defined by its purpose of driving an organism towards drinking water, not by the internal biological processes or the subjective feeling the organism experiences. Putnam contended that “we would not count an animal as thirsty if [it] did not seem to be directed toward drinking and was not followed by satiation for liquid” (Putnam 56). The subjective feeling and biological processes, if unable to drive an organism towards drinking, would not count as thirst. Inversely, any other stimulus, if able to drive an organism towards drinking, would be thirst. States described by Putnam’s theory are thus defined by the functions they perform, not by the way they physically appear in the world. Anything that one can sit on is a chair. Anything that provides sustenance is food. Anything that perfectly encompasses someone’s mental functions can be said to be that person’s mind.
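
Putnam's picture has a familiar echo in programming: an interface specifies a functional role, and anything implementing the role counts as the type, whatever it is made of. The analogy and all names below are mine, offered only as illustration:

```python
from typing import Protocol

class ThirstRole(Protocol):
    """A state counts as 'thirst' iff it plays this causal role,
    whatever it is physically realized in."""
    def drives_drinking(self) -> bool: ...

class BiologicalThirst:
    def drives_drinking(self) -> bool:
        return True  # low blood volume -> seek water

class SiliconThirst:
    def drives_drinking(self) -> bool:
        return True  # low coolant level -> seek coolant

def counts_as_thirst(state: ThirstRole) -> bool:
    return state.drives_drinking()  # the role, not the realizer, decides

print(counts_as_thirst(BiologicalThirst()), counts_as_thirst(SiliconThirst()))
```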

Twenty-first-century functionalists have already begun thinking about ways to bring such perfect simulations into reality through machine learning. One of the most influential figures is Nick Bostrom. “If brain activity is regarded as a function that is physically computed by brains,” Bostrom writes in support of prior functionalists, “then it should be possible to compute it on a [digital] machine” (Bostrom and Sandberg 7). Bostrom optimistically proposes various ways such a simulation could be realized, including detailed brain scans, high-level image processing, and modeling of neural structures and activities. This optimism captures the belief on which much of the field of AI and machine learning is built: not only is the mind an abstract object, but we can also perfectly replicate it through mind uploading.
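
To make this concrete, computational neuroscience already simulates simplified neuron models, such as the textbook leaky integrate-and-fire neuron sketched below. This is my toy instance of the approach, with illustrative parameters, not anything from Bostrom and Sandberg's roadmap:

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -(V - V_rest) + R*I.

    A minimal instance of 'modeling neural structures and activities';
    an actual emulation would need far richer models, and billions of them.
    """
    v = v_rest
    spike_times = []
    for step, i_ext in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_ext) * dt / tau  # Euler integration
        if v >= v_thresh:          # threshold crossed: emit a spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 2 nA drive for 100 ms yields a regular spike train.
print(simulate_lif(np.full(1000, 2.0)))
```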

3. Minds are Concrete Particulars: Countering the Functionalist Argument

The functionalist argument, although a useful framework, is grounded upon an inaccurate and reductionist view of minds. The human mind is not an abstract object but a concrete particular that is deeply dependent on its physical vessel, the biological brain. Because mind uploads lack two key physical properties of the brain, they cannot be another instantiation of minds. In this section, I will first use the Chinese Room Argument to illustrate why a physically different simulation of a concrete particular is not another instantiation of the original. I will then explain the two key physical properties of minds that render them concrete particulars.

The Chinese Room Argument is a famous thought experiment proposed by philosopher John Searle. Suppose a person who does not speak Chinese is put inside a non-transparent room. Inside, she is given detailed instructions for manipulating and outputting Chinese symbols in response to given prompts. She eventually becomes so good at manipulating the symbols that, even though she does not understand a word of Chinese, the language she outputs becomes indistinguishable from responses produced by an actual Chinese speaker (Searle 419).
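
The rulebook can be pictured as pure symbol lookup. Here is a minimal sketch; the entries are placeholders, and Searle's imagined rulebook would be vastly larger:

```python
# The occupant's rulebook: symbols in, symbols out, with no
# representation of meaning anywhere in the system.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你吃饭了吗？": "吃过了。",      # "Have you eaten?" -> "Yes, I have."
}

def chinese_room(prompt: str) -> str:
    """Return whatever the rulebook dictates; understanding never enters."""
    return RULEBOOK.get(prompt, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))
```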

To someone outside the room, the person inside can pass as someone who speaks Chinese. But can this person be said to understand Chinese? My answer is no. This person merely simulates the ability to speak Chinese; she is not an actual example of what it means to speak the language. This is because the simulation reproduces only the end result (i.e., producing a cohesive Chinese sentence), not the intermediary processes that take place. For instance, when processing language, a Chinese speaker undergoes certain neural activities: their Wernicke’s area, located in the posterior superior temporal lobe, is activated during language comprehension (Weill Institute). This process is lacking for the person in the room.

This thought experiment may be misleading because there is still a real person inside the room. One may object that, even though the person inside is not necessarily processing language, she still undergoes neural activity as she manipulates symbols. But for the person inside, only the part of the brain responsible for symbol manipulation is activated, which differs from the area for language processing (A. James). The exact neural activities experienced by a real Chinese speaker are still lacking for the person who merely simulates speaking Chinese. This discrepancy also applies to purely artificial simulations like large language models. Although ChatGPT may be able to produce sentences that seem indistinguishable from a human's, the physical neural processes present in human brains do not occur in it behind the scenes (Shanahan 2). Because these physical processes happen in the brain of a real Chinese speaker, simulations that lack them are not the same as the real speaker.

To generalize, a simulation of something is not the original if (1) the original's identity is intertwined with its physical properties and (2) the simulation lacks these properties. A digital simulation of a thunderstorm is not really a thunderstorm; a tennis video game is not really the sport. An upload of a mind, no matter how good it gets at replicating the functionality and outputs of the original, is not an actual mind because it lacks key physical properties that define minds. I will now demonstrate how two such physical properties define minds, rendering them concrete particulars that cannot be instantiated via uploading.

3.1 Neuroplasticity 

The first physical property of the brain that defines the mind is neuroplasticity; it shapes both the brain's response to traumatic injury and the mind's functional properties, like learning. Psychologists Jean Askenasy and Joseph Lehmann explain that our living brain changes “with every new experience, every new process of learning, memorizing or mastering new and existing skills” (Askenasy & Lehmann 1). Neuroplasticity is this constant reorganization of synapses in response to learning, experience, and injury.
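
As a loose computational caricature (not a claim about biological mechanism), the simplest model of this synaptic reorganization is Hebbian learning, in which connections between co-active neurons strengthen. A minimal sketch with illustrative parameters:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01, decay=0.001):
    """One step of a toy Hebbian rule: 'neurons that fire together, wire together.'

    weights[i, j] is the synapse from presynaptic neuron j to postsynaptic
    neuron i; the decay term keeps weights from growing without bound.
    """
    weights += lr * np.outer(post, pre)  # strengthen co-active pairs
    weights -= decay * weights           # passive forgetting
    return weights

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 4))
for _ in range(100):  # repeated correlated activity gradually reshapes the wiring
    activity = (rng.random(4) > 0.5).astype(float)
    w = hebbian_update(w, pre=activity, post=activity)
print(w.round(2))
```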

Research in neuroscience has uncovered the spectacular abilities of neuroplasticity. For instance, through studying perinatal brain damage in children, neuroscientist Elissa Newport and her collaborators found that when babies are born with traumatic damage to the brain's left hemisphere, the right hemisphere adapts to take over cognitive functions that are usually carried out by the left side (Newport et al. 3). This shows that our brains are highly resilient and adaptive to changes experienced by our minds. By comparison, a digital simulation is very brittle and non-adaptive to change. A computer science student can tell you that if half of a program's code were deleted, the entire program would cease to run. The remaining half would not learn to take over the functions previously carried out by the deleted half. The mind is thus inseparable from the brain because the latter reflects the conscious experiences of the former, while the former shapes the physical composition of the latter. Only a specific brain can contain a specific mind. A mind upload created digitally lacks this physical property of biological brains that directly shapes the identity of the mind, showing that mind uploads are not instantiations of the actual mind.

In addition, neuroplasticity also impacts the mind’s functional properties. The brains of babies are much more plastic than those of adults: babies have more connections between neurons and can change these connections much more easily. This is especially true in the prefrontal cortex, the brain region responsible for planning and concentration (Gopnik 81). Although this lack of stability may prevent babies from acting insightfully, the malleability of their brains provides them with unique learning advantages. Being able to explore the world constantly and unabashedly enables babies to learn rapidly. “There is a trade-off between the ability to explore creatively, like a child, and the ability to plan and act effectively, like an adult,” writes psychologist Alison Gopnik (Gopnik 81). Indeed, while children are faster learners, adults are better planners. This difference is not because children are “unfinished adults,” but because they are “exquisitely designed by evolution to change and create, to learn and explore” (Gopnik 81). The varying degrees of plasticity in our brains correlate with how different functions of minds are prioritized differently as we traverse the stages of life. A mind upload does not encapsulate our brain’s evolving neuroplasticity and cannot be said to be an instantiation of a mind.
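
Gopnik's trade-off has a well-known computational analogue in reinforcement learning, where an agent balances exploring new options against exploiting known ones. The sketch below, my illustration rather than Gopnik's model, anneals the exploration rate over a 'lifetime', from child-like trying-everything to adult-like acting-on-what-works:

```python
import random

def epsilon_greedy_lifetime(arm_means, steps=10000, eps_start=0.9, eps_end=0.05):
    """Multi-armed bandit with an exploration rate that decays with 'age'.

    High epsilon early (child-like: try everything) anneals toward low
    epsilon late (adult-like: act on accumulated knowledge).
    """
    estimates = [0.0] * len(arm_means)
    counts = [0] * len(arm_means)
    total = 0.0
    for t in range(steps):
        eps = eps_start + (eps_end - eps_start) * t / steps  # linear decay
        if random.random() < eps:
            arm = random.randrange(len(arm_means))                        # explore
        else:
            arm = max(range(len(arm_means)), key=lambda a: estimates[a])  # exploit
        reward = random.gauss(arm_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps

# Average reward ends well above the ~0.5 of choosing arms purely at random.
print(epsilon_greedy_lifetime([0.1, 0.5, 0.9]))
```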

3.2 Sentience

Another physical property that mind uploads lack is sentience, the capacity to experience what philosophers call qualia. I subscribe to neuroscientists Antonio and Hanna Damasio’s biological theory of consciousness, which posits that sentience emerged strictly biologically because it was an evolutionary advantage. Qualia are our subjective experiences of the world, or what it is like to be us (Chalmers 201). There is something unique for each of us that constitutes how it feels to experience the world in our shoes: to see the color red, to eat home-cooked noodles, to feel the sunbeams reflecting upon our skin... Qualia serve not only to make life more vibrant but also to inform us about our homeostasis, the internal, physical, and social conditions needed to reach a state of optimal wellbeing. Qualia can be roughly divided into two categories: pain and pleasure. Pain signals that you are doing something that distances you from homeostasis, while pleasure signals that you are doing something that brings you closer (Etchemendy). Most organisms are biologically incentivized to avoid actions that inflict pain and to repeat actions that invoke pleasure because they are driven to sustain homeostasis (Nesse & Schulkin 1). Qualia thus provide information vital to sustaining survival through deliberate life regulation.
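
This homeostatic picture can be caricatured in a few lines: treat pleasure and pain as the sign of a change in distance from a setpoint. The sketch is my gloss, not the Damasios' model, and the numbers are illustrative:

```python
def hedonic_signal(state, action_effect, setpoint=37.0):
    """Toy homeostat: 'pleasure' if an action moves the state toward the
    setpoint, 'pain' if it moves the state away.

    The setpoint (body temperature here) and magnitudes are illustrative.
    """
    before = abs(state - setpoint)
    after = abs(state + action_effect - setpoint)
    return before - after  # positive = pleasure, negative = pain

print(hedonic_signal(35.0, +1.5))  # warming up while cold: pleasure (+1.5)
print(hedonic_signal(35.0, -1.0))  # cooling down further: pain (-1.0)
```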

Only living organisms can possess sentience because sentience provides introspective knowledge that enables them to keep surviving; sentience would not emerge in artificial systems because they are not alive in the first place. I will address the opposition to this claim in the next paragraph. As a living being, when I touch a hot stove, I experience pain. Through this pain, I realize the consequence of touching the stove: my hand is damaged, which distances me from homeostasis. Upon introspection, I can make a clear association between my action, the quale of pain I experienced, and the consequence it implies. Next time I see a stove, I will not touch it (Etchemendy). On the other hand, an upload of my mind, constructed entirely artificially, would lack sentience. It would not experience pain or pleasure when it interacts with the outer world. Nor would it evolve to gain sentience, because it is not alive and does not need to regulate its survival. Indeed, there is no current evidence suggesting that consciousness can be created inorganically (Damasio & Damasio 279). As a result, it is impossible for us to separate our minds from our physical, biological brains, which makes them concrete particulars that cannot be instantiated via uploading.

One may oppose this view by arguing that it is impossible to truly evaluate the sentience of anyone. Since each person’s subjective experience of the world is inherently private, the only way to validate another being’s sentience is through their outwardly observable behaviors (Thornton). If a mind upload behaves exactly as the original would, why can we not accept that the upload has sentience? This view touches upon one important criterion for evaluating sentience but ignores two others. Philosopher John Etchemendy and computer scientist Fei-Fei Li propose a compelling three-part test that I borrow here. First, we evaluate “behavior evidence,” someone’s outwardly observable actions (e.g., someone claiming to be hungry). Second, there must be “the absence of contravening evidence,” that is, no observable evidence proving the behavior evidence false (e.g., the person claiming to be hungry had just eaten). Third, and most importantly, we must consider assumptions about the subject’s bodily makeup: in order for us to accept that someone is hungry, they must “have a physical body like [ours]” (Etchemendy & Li). Likewise, it is not enough for a mind upload to behave in human-like ways for us to consider it sentient. It must have a physical, biological body, which it lacks by definition.
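
Schematically, the three criteria are conjunctive, and an upload fails the third by definition. A toy rendering, with predicate names of my own choosing:

```python
def credit_sentience(behavioral_evidence: bool,
                     contravening_evidence: bool,
                     biological_body: bool) -> bool:
    """Etchemendy and Li's three-part test, rendered as a conjunction:
    human-like behavior, nothing observed that undercuts it, and a
    physical body like ours."""
    return behavioral_evidence and not contravening_evidence and biological_body

# A perfectly behaving mind upload still fails the third check.
print(credit_sentience(True, False, False))  # False
```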

4. Two Futures

It is crucial that we understand that uploads are not minds and that the functionalist view of mind identity ignores both how neuroplasticity shapes mind properties and how sentience is inherently biological. Making this distinction is necessary because our view of technologies, even speculative ones, in turn impacts how we perceive ourselves and others. From manually powered mechanical androids to today's state-of-the-art neural networks, humans have always used the human mind as a reference (Etchemendy). In turn, the technology we develop informs our perception of ourselves. Accepting mind uploads as minds would reduce our identities to functional properties that can be replicated artificially, stripping us of the complexity of individual subjective experiences. Instead, we should remember the richness of life provided by our biological brains, artifacts sculpted by millions of years of evolution.

Despite my belief that mind uploads are not valid instantiations of minds, this conclusion need not be the end of the story. I propose two possible directions for future research into easing patient suffering, both of which could be highly valuable if realized.

The first is to accept that minds are concrete particulars and direct our resources towards technological and medical breakthroughs. If mind uploading is unviable, we must find ways to repair the existing brain. We could devote more effort to neuroengineering research, advance healthcare practices, and develop new technologies that can repair neural damage and restore motor function. Eventually, we could discover ways to significantly improve current treatments or even find cures for neurodegenerative diseases.

The second future is grounded upon a theoretical breakthrough that proves minds to be abstract objects independent of their physical substrates, disproving my argument. This could be achieved by proving that sentience can be preserved in artificial bodies, or that sentience can be modeled as a function, reducing the physical properties of minds to functional properties. Either way, this future would require epistemological advances in neuroscience and the philosophy of mind.

I have shown that mind uploads are not valid instantiations of the minds they are modeled after because a mind’s identity is tied to its physical brain. I hope this paper has sparked some curiosity about your own mind. There is still a lot we do not understand about our minds (Tompa). We have spent much time trying to understand the world beyond us; examining minds and mind uploads brings us one step closer towards developing a theory of us.


Works Cited

A. James Clark School of Engineering. “The brain makes sense of math and language in different ways.” UMD ECE, University of Maryland, 16 August 2021, https://ece.umd.edu/news/story/the-brain-makes-sense-of-math-and-language-in-different-ways. Accessed 8 June 2024.

Askenasy, Jean, and Joseph Lehmann. “Consciousness, brain, neuroplasticity.” Frontiers in Psychology, vol. 4, 2013. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2013.00412/full.

Blue Brain Project. “FAQ.” Swiss Federal Institute of Technology Lausanne, https://www.epfl.ch/research/domains/bluebrain/frequently_asked_questions/. Accessed 2 June 2024.

Bostrom, Nick, and Anders Sandberg. Whole Brain Emulation: A Roadmap. Oxford University, 2008, https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf. Accessed 14 May 2024.

Chalmers, David. “Facing up to the problem of consciousness.” Journal of Consciousness Studies, vol. 2, no. 3, 1995, pp. 200 - 219. PhilPapers, https://philpapers.org/rec/CHAFUT. Accessed 8 June 2024.

Damasio, Antonio, and Hanna Damasio. “Feelings Are the Source of Consciousness.” Neural Computation, vol. 35, no. 3, 2023, pp. 277-286. https://direct.mit.edu/neco/article/35/3/277/112379/Feelings-Are-the-Source-of-Consciousness. Accessed 14 May 2024.

Etchemendy, John. Philosophy of AI. Gates Computer Science, Stanford University, 24 Jan. 2024. 

Etchemendy, John, and Fei-Fei Li. “No, Today’s AI Isn’t Sentient. Here’s How We Know.” Time, 22 May 2024, https://time.com/collection/time100-voices/6980134/ai-llm-not-sentient/. Accessed 2 June 2024.

Geraci, Robert M. “Mind Uploading.” Encyclopedia of Sciences and Religions, Springer, Dordrecht, 2013, https://doi.org/10.1007/978-1-4020-8265-8_201030. Accessed 2 June 2024.

Goldwater, Jonah. “Uploads, Faxes, and You: Can Personal Identity Be Transmitted?” American Philosophical Quarterly, vol. 58, no. 3, 2021, pp. 233-250. PhilPapers, https://philpapers.org/rec/GOLUFA. Accessed 14 May 2024.

Gopnik, Alison. “How Babies Think.” Scientific American, July 2010, http://alisongopnik.com/Papers_Alison/sciam-Gopnik.pdf. Accessed 8 June 2024.

Human Brain Project. “Pioneering digital Brain research.” https://www.humanbrainproject.eu/en/about-hbp/human-brain-project-ebrains/. Accessed 2 June 2024.

“Information Processing and Psychopathology.” International Encyclopedia of the Social & Behavioral Sciences, edited by Neil J. Smelser and Paul B. Baltes, Elsevier Ltd, 2001, pp. 7456 - 7460. ScienceDirect, https://www.sciencedirect.com/topics/neuroscience/cognitive-revolution. Accessed 8 June 2024.

Levin, Janet. “Functionalism.” Stanford Encyclopedia of Philosophy, 24 August 2004, https://plato.stanford.edu/entries/functionalism/. Accessed 14 May 2024.

Nesse, Randolph M., and Jay Schulkin. “An evolutionary medicine perspective on pain and its disorders.” Philosophical Transactions of the Royal Society B: Biological Sciences, 2019. National Library of Medicine, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6790386/. Accessed 8 June 2024.

Newport, Elissa L., et al. “Language and developmental plasticity after perinatal stroke.” Proceedings of the National Academy of Sciences, vol. 119, no. 42, 2022. https://doi.org/10.1073/pnas.2207293119. Accessed 14 May 2024.

Putnam, Hilary. Mind, Language and Reality: Philosophical Papers. Vol. 2, Cambridge University Press, 1975, https://home.csulb.edu/~cwallis/382/readings/482/putnam.nature.mental.states.pdf.

Searle, John R. “Minds, brains, and programs.” Behavioral and Brain Sciences, vol. 3, no. 3, 1980, pp. 417-424. https://doi.org/10.1017/S0140525X00005756. Accessed 14 May 2024.

Shanahan, Murray. “Talking about Large Language Models.” 2022. arXiv, https://arxiv.org/abs/2212.03551. Accessed 8 June 2024.

Thornton, Stephen P. “Solipsism and the Problem of Other Minds.” Internet Encyclopedia of Philosophy, https://iep.utm.edu/solipsis/. Accessed 9 June 2024.

Tompa, Rachel. “Why is the human brain so difficult to understand? We asked 4 neuroscientists.” Allen Institute, 21 April 2022, https://alleninstitute.org/news/why-is-the-human-brain-so-difficult-to-understand-we-asked-4-neuroscientists/. Accessed 8 June 2024.

Turing, Alan M. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, 1950, pp. 433-460. https://doi.org/10.1093/mind/LIX.236.433. Accessed 14 May 2024.

Weill Institute for Neurosciences. “Speech & Language.” UCSF Memory and Aging Center, University of California, San Francisco, https://memory.ucsf.edu/symptoms/speech-language. Accessed 8 June 2024.

Comments

I think this argument mostly centers on the definition of certain words, and thus does not change my views on whether I should upload my mind if given the choice.

But can this person be said to understand Chinese? My answer is no.

What you have shown here is what you think the word "understands" means. But everyone agrees about the physical situation here - everyone anticipates the same experiences.

This shows that our brains are highly resilient and adaptive to changes experienced by our minds. By comparison, a digital simulation is very brittle and non-adaptive to change.

The substrate of the simulation, i.e. a silicon chip, is brittle (at our current level of tech), but it can still run a simulation of a neuroplastic brain - just program it to simulate the brain chemistry. Then if the simulated brain is damaged, it will be able to adapt.

The bigger point here is that you are implicitly asserting that in order to be "sentient" a mind must have similar properties to a human brain. That's fine, but it is purely a statement about how you like to define the word "sentient".

Only living organisms can possess sentience because sentience provides introspective knowledge that enables them to keep surviving;

"Sentience" has no widely agreed concrete definition, but I think it would be relatively unusual to say it "provides introspective knowledge". Do you agree that any questions about the actual computation, algorithms or knowledge in a brain can be answered by only considering the physical implementation of neurons and synapses?

sentience would not emerge in artificial systems because they are not alive in the first place.

Again, I think this is purely a statement about the definition of the word "alive". Someone who disagrees would not anticipate any different experiences as a consequence of thinking an artificial system is "alive".

It sounds like you're saying that an inadequate copy of your mind wouldn't be you. I think we can all agree on that.

If there's a copy adequate to preserve most of your memories, habits, and beliefs, you could have a conversation with that mind. It would claim to be you, and to be having a continuation of your experience and your life.

Do you suppose you'd change your mind if you and many others had had conversations with copies that swore they were the same person as before uploading?

Because that seems like the very definition of creating an adequate copy.

My high level take is that this essay is confused about what minds are and how computers actually work, and it ends up in weird places because of that. But that's not a very helpful argument to make with the author, so let me respond to two points that the conclusion seems to hinge on.

A mind upload does not encapsulate our brain’s evolving neuroplasticity and cannot be said to be an instantiation of a mind. 

This seems like a failure to imagine what types of emulations we could build to create a mind upload. Why is this not possible, rather than merely a hard engineering problem to solve? As best I can tell, your argument is something like "computer programs are fragile and can't self-heal", but this is also true of our bodies and brains for sufficient levels of damage, and most computer programs are fragile by design because they favor efficiency. Robust computer programs where you could delete half of them and they'd still run are entirely possible to create. It's only a question of where resources are spent.

Likewise, it is not enough for a mind upload to behave in human-like ways for us to consider it sentient. It must have a physical, biological body, which it lacks by definition. 

This is nonsense. Uploads are still physically instantiated, just by different means. Your argument thus must hinge on the "biological body" claim, but you don't prove this point. To do so you'd need to provide an argument that there is something special about our bodies that cannot be successfully reproduced in a computer emulation even in theory.

It's quite reasonable to think current computers are not powerful enough to create a sufficiently detailed emulation to upload people today, but that does not itself preclude the development of future computers that are so capable. So you need an argument for why a computer of sufficient power to emulate a human body, including the brain, and an environment for it to live in is not possible at all, or would be impractical even with many orders of magnitude more compute (e.g. some problems can't be solved, even though it's theoretically possible, because they would require more compute than is physically possible to get out of the universe).


For what it's worth, you do hit on an important issue in mind uploading: minds are physically instantiated things that are embedded in the world, and attempts to upload minds that ignore this aren't going to work. The mind is not even just the brain; it's a system that exists in conjunction with the whole body and the world it finds itself in, such that it can't be entirely separated from them. But this is not necessarily a blocker to uploading minds. It's an engineering problem to be solved (or found to be unsolvable for some specific reasons), not a theoretical problem with uploads.