This is the third in a sequence of posts scrutinizing computational functionalism (CF). In a previous post, I defined a concrete claim that computational functionalists tend to make:

Theoretical CF: A simulation of a human brain on a computer, with physics perfectly simulated down to the atomic level, would cause the conscious experience of that brain.

I contrasted this with “practical CF”, the claim that a suitably low-fidelity simulation of a brain, like one that only captures functional properties, would be conscious. In the last post, I discussed practical CF. In this post, I’ll scrutinize theoretical CF.

To evaluate theoretical CF, I’m going to meet functionalists where they (usually) stand and adopt a materialist position about consciousness. That is to say that I’ll assume all details of a human’s conscious experience are ultimately encoded in the physics of their brain.

Two ways to live in a simulation

First of all, I want to pry apart two distinct meanings of “living in a simulation” that are sometimes conflated.

  1. Living in the matrix: Your brain exists in base reality, but you are hooked up to a bunch of sophisticated virtual reality hardware, such that all of the sensory signals entering your brain create a simulated world for you to live in. Consciousness lives in base reality.
  2. Living in Tron: Your brain is fully virtual. Not only are your surroundings simulated but so are all the details of your brain. Consciousness lives in the simulation.

Many intuitions about the feasibility of living in a simulation come from the matrix scenario. I’ve often heard arguments like “Look at the progress with VR - it won’t be long until we also have inputs for tactile sensations, taste etc. There is no technological barrier stopping us from being hooked up to such hardware and living in a totally simulated world”. I agree, it seems very plausible that we can live in that kind of simulation quite soon.

But this is different to the Tron scenario, which requires consciousness to be instantiated within the simulation. This is a more metaphysically contentious claim. Let’s avoid using arguments for the matrix scenario in support of the Tron scenario. Only the Tron scenario pertains to theoretical CF.

What connects simulation and target?

At its heart, theoretical CF is a claim about a metaphysical similarity between two superficially different physical processes: a human brain, and a computer simulating that brain. To find the essence of this claim, we have to understand what these two systems really have in common.

An intuitive desideratum for such a common property is that it is an intrinsic property of the two systems. One should be able to, in principle, study both systems in isolation to find this common property. So let’s try and work out what this property is. This will be easiest if I flesh out a concrete example scenario.

A concrete setup for simulating your brain

I’m going to scan your brain in the state it is right now as you’re reading this. The scan measures a quantum amplitude for every possible strength of each standard model quantum field at each point in your brain, with a resolution at ~the electroweak scale. This scan is going to serve as an initial state for my simulation.

The simulation will be run on my top-secret cluster hidden in my basement. Compute governance has not caught up with me yet. The cluster consists of a large number of GPUs, hooked up to two compute nodes (CPUs), and some memory storage.

I input the readings from your brain as a big JSON data structure. On the first compute node, I have an executable called physics.exe, compiled from a program I wrote in C. The program takes in the initial conditions and simulates the quantum fields forward in time on the GPUs. The state of the quantum fields at a series of later times is stored in memory.

I also have interpret.exe, for unpacking the computed quantum field information into something I can interpret, on the second compute node. This takes in the simulated quantum field data and shows me a video on my screen of the visual experience you are having.
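To make the division of labour concrete, here is a minimal, purely illustrative sketch of the pipeline in Python. Everything in it (the toy field data, step_fields, decode_visual_frame) is hypothetical stand-in code, not a claim about how the real physics.exe and interpret.exe would work.

```python
import json

def step_fields(fields, dt):
    """Placeholder for the field dynamics physics.exe would compute:
    evolve the quantum-field amplitudes forward by one timestep."""
    return {point: amp * (1.0 - dt) for point, amp in fields.items()}

def decode_visual_frame(fields):
    """Placeholder for interpret.exe's decoding step: turn field data
    into something displayable (here, just one summary number)."""
    return sum(fields.values()) / len(fields)

# The scan: in reality a huge JSON structure of field amplitudes per point
# in the brain; here, a toy stand-in round-tripped through JSON as in the setup.
scan_json = json.dumps({"point_0": 0.3, "point_1": 0.7, "point_2": 0.1})
fields = json.loads(scan_json)

# "physics.exe": simulate the fields forward, storing each later state in memory.
trajectory = []
for _ in range(10):
    fields = step_fields(fields, dt=0.01)
    trajectory.append(fields)

# "interpret.exe": unpack the stored field data into "frames" for the screen.
for frame_fields in trajectory:
    print(decode_visual_frame(frame_fields))
```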

Let’s carefully specify the two physical processes we’re comparing. The first is your brain; that’s easy enough. The second should be wherever “the simulation” is. Since the dynamics of the quantum fields are being simulated by the GPUs, we can take the second physical process to be the operations on those GPUs. We want to find an intrinsic property that these two systems have in common.

In what sense am I simulating your brain?

What connects your brain and my cluster? A natural answer is that the operations of the cluster represent the physical process of your brain. They represent the brain in the sense that they produce data that can be fed into interpret.exe, which in turn sends data to a screen that shows us your visual experience.

But the representative nature of the GPU operations is contingent on context. One piece of context is how the output will be used. The operations represent that process only insofar as interpret.exe is configured to process the simulation’s output in a certain way. What if interpret.exe were configured to take in quantum field information in a different format? Or what if I straight-up lost interpret.exe with no backup? Would the operations still represent that physical process?

If our property of “representation” is contingent on interpret.exe, then it is not an intrinsic property of the GPU operations, in which case it’s not a good candidate shared property. It would be quite unintuitive if the experience created by the cluster were contingent on the details of some other bit of software that could be implemented arbitrarily far away in space and time.

To find the intrinsic common property, we need to strip away all the context that might colour how we make sense of the operations of the GPUs. To do this, we need an impartial third party who can study the GPU operations for us.

Is simulation an intrinsic property?

An alien from a technologically and philosophically advanced civilization comes to town. They have a deep understanding of the laws of physics and the properties of computation, completely understand consciousness, and have brought with them an array of infinitely precise measuring tools.

But the alien is totally ignorant of humans and the technology we’ve built. They have no idea how our computers work. They don’t know the conventions that computers are built upon, like encoding schemes (floating-point, ASCII, endianness, …), protocols (IP, DNS, HTTP), file formats (jpeg, pdf, mp3, …), compression algorithms and hash functions (zip, SHA-256, …), device drivers, graphics conventions (OpenGL, RGB, …), and all the other countless arbitrarily defined abstractions.

The alien’s task

Let’s give this alien access to our GPUs, ask them to study the operations executed by them, and ask what, if any, experience is being created by these operations. If we believe the experience to be truly intrinsic to these operations, we shouldn’t need to explain any of our conventions to them. And we shouldn’t need to give the alien access to the compute nodes, interpret.exe, the monitors, or the tools we used to measure your brain in the first place.

Now let’s imagine we live in a world where theoretical CF is true. The alien knows this and knows that to deduce the conscious experience of the GPU operations, it must first deduce exactly what the GPUs are simulating. The big question is:

Could an alien deduce what the GPUs are simulating?

The alien cracks open the GPUs to study what’s going on inside. The first breakthrough is realising that the information processing is happening at the level of transistor charges. They measure the ‘logical state’ at each timestep as a binary vector, one component per transistor, 1 for charge and 0 for no charge.

Now the alien must work out what this raw data represents. Without knowledge of our conventions for encoding data, they would need to guess which of the countless possible mappings between physical states and computational abstractions correspond to meaningful operations. For instance, are these transistors storing numbers in floating-point or integer format? Is the data big-endian or little-endian?
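To see how underdetermined this is, note that the very same bits decode to wildly different values depending on which convention you assume. A small Python illustration (the byte values are arbitrary):

```python
import struct

# The same four bytes, as read off four groups of eight transistors.
raw = bytes([0x40, 0x49, 0x0f, 0xdb])

print(struct.unpack(">f", raw)[0])  # as a big-endian 32-bit float:    ~3.14159 (pi)
print(struct.unpack("<f", raw)[0])  # as a little-endian 32-bit float: ~ -4.0e16
print(struct.unpack(">i", raw)[0])  # as a big-endian signed integer:  ~ 1.08e9
print(struct.unpack("<i", raw)[0])  # as a little-endian signed int:   ~ -6.2e8
print(raw.decode("latin-1"))        # as Latin-1 text: '@I' followed by junk
```

Nothing in the physics of the transistors privileges one of these readings over the others.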

Then there are higher-level abstractions, abounding with more conventions, like the format of the simulated quantum fields. There are also purely physical conventions (rather than CS conventions): frames of reference, the sign of the electron charge, gauge choices, renormalization schemes, metric signatures, unit choices.

One possibility could be to look for conventions that lead to simulated worlds that obey some sensible constraints: like logical consistency or following the laws of physics. But the problem is that there could be many equally valid interpretations based on different conventions. The alien doesn’t know they’re looking for a simulation of a brain, so they could end up deciding the simulation is of a weather system or a model of galactic collisions instead.

Considering all the layers of convention and interpretation between the physics of a processor and the process it represents, it seems unlikely to me that the alien would be able to describe the simulacrum. The alien is therefore unable to specify the experience being created by the cluster.

Beyond transistors: the true arbitrariness of computation

The situation might be worse than the story above. I was being generous when I imagined that the alien could work out that the action is in the transistors. Stepping back, it’s not obvious that the alien could make such an inference.

Firstly, the alien does not know in advance that this is a computer. They could instead think it’s something natural rather than designed. Secondly, the categories of computer, biology, inanimate objects etc. may not feature in the alien’s ontology. Thirdly, if the alien does work out the thing is a computer, computers on the alien’s planet could be very different.

All of these uncertainties mean the alien may instead choose to study the distribution of heat across the chips, the emitted electromagnetic fields, or any other mad combination of physical properties. In this case, the alien could end up with a completely different interpretation of computation than what we intended.

This gets to the heart of a common line of argument against CF: computation is arbitrary. There is a cluster of thought experiments that viscerally capture this issue. Three I’ve come across are Searle’s Wall, Putnam’s Rock, and Johnson’s Popcorn. They all share a common thread, which I’ll explain.

Searle’s wall, Putnam’s rock, Johnson’s popcorn

Searle famously claimed that he could interpret the wall behind him as implementing any program he could dream of, including a simulation of a brain. Combining this with theoretical CF, it sounds like the wall is having every possible (human) conscious experience.

How can Searle claim that the wall is implementing any program he wants? With the knowledge of the physical state of the wall and the computational state he wants to create, he can always define a map between physical states and computational states such that the wall represents that program. Brian Tomasik gives a stylized version of how this works:

Consider a Turing machine that uses only three non-blank tape squares. We can represent its operation with five numbers: the values of each of the three non-blank tape squares, the machine's internal state, and an index for the position of the head. Any physical process from which we can map onto the appropriate Turing-machine states will implement the Turing machine, according to a weak notion of what "implement" means.

In particular, suppose we consider 5 gas molecules that move around over time. We consider three time slices, corresponding to three configurations of the Turing machine. At each time slice, we define the meaning of each molecule being at its specific location. For instance, if molecule #3 is at position (2.402347, 4.12384, 0.283001) in space, this "means" that the third square of the Turing machine says "0". And likewise for all other molecule positions at each time. The following picture illustrates, with yellow lines defining the mapping from a particular physical state to its "meaning" in terms of a Turing-machine variable.

(Tomasik 2015)

Given some set of Turing machine states (like, say, a simulation of your brain), Searle can always choose some gerrymandered map like the one above that sends the wall’s states to the computation.
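To see how cheap such a map is to construct, here is a toy Python version of Tomasik’s picture. The molecule positions and the Turing-machine configurations are made-up numbers; the point is that the “interpretation” is just a lookup table written down after the fact, with the target computation baked in.

```python
# Three snapshots of five gas molecules (positions are arbitrary made-up numbers).
wall_states = [
    ((2.40, 4.12), (0.28, 1.93), (3.55, 0.07), (1.11, 2.76), (4.90, 3.33)),
    ((2.47, 4.02), (0.31, 1.88), (3.60, 0.12), (1.05, 2.81), (4.86, 3.40)),
    ((2.55, 3.95), (0.35, 1.80), (3.66, 0.18), (0.99, 2.88), (4.81, 3.47)),
]

# Three successive configurations of the Turing machine we *want* the wall to
# "implement": (tape square 1, square 2, square 3, internal state, head position).
target_tm_states = [
    (0, 1, 1, "A", 0),
    (1, 1, 1, "B", 1),
    (1, 0, 1, "A", 2),
]

# The gerrymandered "interpretation": a lookup table from each physical
# snapshot to the TM configuration we want it to mean. Constructing it
# requires already knowing the answer.
interpretation = dict(zip(wall_states, target_tm_states))

# Under this map, the wall "computes" the Turing machine run perfectly.
for snapshot in wall_states:
    print(interpretation[snapshot])
```

Nothing stops us from building an equally “valid” table that maps the same snapshots onto a completely different program.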

If computation is this arbitrary, we have the flexibility to interpret any physical system, be it a wall, a rock, or a bag of popcorn, as implementing any program. And any program means any experience. All objects are experiencing everything everywhere all at once.

This is mental. To fix computational functionalism, a number of authors have put forward ways of constraining the allowed maps between physics and computation, such that only reasonable assignments are allowed. I’ve written about a couple of them in the appendix, along with why I’m not convinced by them. I think this is an unsolved problem. See Percy 2024 for the most up-to-date treatment of this issue.

This whole argument from arbitrariness hinges on my assumption that consciousness is an intrinsic property of a thing. Computational functionalists have the option of biting the bullet and accepting that consciousness is not intrinsic, but rather a property of our description of a system. Could that make sense?

Is phenomenal consciousness a natural kind?

A philosopher will ask me, what do I mean by reality? Am I talking about the physical world of nature, am I talking about a spiritual world, or what? And to that I have a very simple answer. When I talk about the material world, that is actually a philosophical concept. So in the same way, if I say that reality is spiritual, that’s also a philosophical concept. Reality itself is not a concept, reality is: <whacks bell and it slowly rings out> and we won’t give it a name. (Alan Watts)

The current underlying my argument has been:

  • Premise 1: Computation is not a natural kind: it is an abstraction, a concept, a map. It is fuzzy and/or observer-dependent, down to interpretation. There is no objective fact-of-the-matter whether or not a physical system is doing a certain computation.
  • Premise 2: Phenomenal consciousness is a natural kind: There is an objective fact-of-the-matter whether a conscious experience is occurring, and what that experience is. It is not observer-dependent. It is not down to interpretation. It is an intrinsic property of a system. It is the territory rather than a map.
  • Conclusion: Consciousness cannot be computation.

So far in this post I have argued for Premise 1. I like Premise 1 and I think it’s true. But what about Premise 2? I also agree with Premise 2, but I understand that this is a philosophically contentious claim (for example illusionists or eliminative materialists will disagree with it). I consider Premise 2 my biggest crux for CF. Below I’ll explain why I think Premise 2 is true.

Why I think consciousness is a natural kind

You’re having an experience right now. You’re probably having a visual experience of seeing some text on a screen. The presence and quality of this experience is there for you to see.

Imagine two philosophers, René and Daniel, approach you and ask if they can test their competing Phenomenal Experience Detectors™ on you. René places some electrodes on your head and hooks them up to his laptop. The data is analysed, and a description of your current experience is printed on the screen. Then Daniel folds out his own setup: a handy travel fMRI hooked up to a different laptop containing different software.

René and Daniel are both computational functionalists, so their setups both interpret the readings from your brain as the execution of certain computations. But René’s and Daniel’s maps from brain states to computational states are different. This means they come up with different predictions of the experience you’re having.

Could both of them be right? No - from your point of view, at least one of them must be wrong. There is one correct answer, the experience you are having.

But maybe you’re mistaken about your own experience? Maybe you have enough uncertainty about what you’re experiencing that both René and Daniel’s predictions are consistent with the truth. But phenomenal consciousness, by definition, is not something you can be confused about. Any confusion or fuzziness is part of the experience, not an obstruction to it. There is no appearance/reality distinction for phenomenal consciousness.

You could be dreaming or tripping or in the matrix or whatever, so you could be wrong on the level of interpreting your experience. But phenomenal consciousness is not semantic content. It is the pre-theoretical, pre-analysed, raw experience. Take a look at this image.

Does this represent a rabbit or a duck? The answer to this question is up to interpretation. But are you having a raw experience of looking at this image? The answer to this question is not up to interpretation in the same way. You can’t be wrong about the claim “you are having a visual experience”.

While this is a confusing question, all things considered, I lean towards consciousness being an objective property of the world. And since computation is not an objective property of the world, consciousness cannot be computation.

Conclusion

I think theoretical CF, the claim that a perfect atom-level simulation of a brain would reproduce that brain’s consciousness, is sus.

Theoretical CF requires an intrinsic common property between a brain and a computer simulating that brain. But their only connection is that the computer is representing the brain, and representation is not intrinsic. An alien could not deduce the conscious experience of the computer. If consciousness is an intrinsic property, a natural kind, then it can’t be computation.

In the next post, I’ll address computational functionalism more generally, and scrutinize the most common arguments in favor of it.

Appendix: Constraining what counts as a computation

There have been a number of attempts to define a constraint on maps from physical to computational states, in order to make computation objective. I’ll discuss a couple of them here, to illustrate that this is quite a hard problem that (in my opinion) has not yet been solved.

Counterfactuals

When Searle builds his gerrymandered physics->abstraction map under which his wall is executing consciousness.exe, the wall is only guaranteed to correctly execute a single execution path of consciousness.exe. consciousness.exe contains a bunch of if statements (I assume), so there are many other possible paths through the program, many lines of code that Searle’s execution didn’t touch.

Typically when we say that something ran a program, implicit in that statement is the belief that if the inputs had been different, the implementation would have correctly executed a different branch of the program.

Searle’s wall does not have this property. Since consciousness.exe requires inputs, Searle would have to define some physical process in the wall as the input. Say he defines the inputs to be encoded in the pattern of air molecules hitting a certain section of the wall at the start of the execution. He defines the physics->computation map such that the pattern of molecules that actually hit the wall represents the input required to make the execution a legitimate run of consciousness.exe. But what if a different pattern of air molecules happens to hit the wall, representing different inputs?

For the wall to be truly implementing consciousness.exe, the wall must run a different execution path of consciousness.exe triggered by the different input. But because the gerrymandered abstraction was so highly tuned to the previous run, the new motion of molecules in the wall would be mapped to the execution of a nonsense program, not consciousness.exe.

This is the spirit of one of David Chalmers’ attempts to save functionalism. For a thing to implement a conscious program, it's not enough for it to merely transit through a sequence of states matching a particular run of the program. Instead, the system must possess a causal structure that reliably mirrors the full state-transition structure of the program, including transitions that may not occur in a specific run.

This constraint implies that counterfactuals have an effect on conscious experience. Throughout your life, your conscious experience is generated by a particular run through the program that is your mind. There are inevitably some chunks of the mind program that your brain never executes, say, how you would respond to aliens invading or Yann LeCun becoming safety-pilled. Chalmers is saying that the details of those unexecuted chunks have an effect on your conscious experience. Counterfactual branches affect the presence and nature of consciousness.

This new conception of conscious computation comes with a new set of problems (see for example “counterfactuals cannot count”). Here is a little thought experiment that makes it hard for me to go along with this fix. Imagine an experiment involving two robots: Alice-bot running program p and Bob-bot running program p’. Alice-bot and Bob-bot are put in identical environments: identical rooms in which identical events happen, such that they receive identical sensory inputs throughout their lifetimes.

p’ is a modification of p. To make p’, we determine exactly which execution pathway of p Alice-bot is going to execute given her sensory input. From this we can determine which sections of p will never be executed. We then delete all of the lines of code in those other branches. p’ is a pruned version of p that only contains the code that Alice-bot actually executes. This means that throughout the robots’ lifetimes, they take in identical inputs and execute identical operations. The only difference between them is that Alice-bot has a different program to Bob-bot loaded into memory.
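Here is a toy sketch of the pruning, with a hypothetical fragment of p. On the fixed inputs Alice-bot actually receives, p and the pruned p’ execute the same operations and produce the same outputs; they differ only in the code sitting unexecuted in memory.

```python
# A hypothetical fragment of program p: it branches on the input.
def p(percept):
    if percept == "aliens_invading":
        return "panic"            # a branch Alice-bot never executes
    elif percept == "room_is_quiet":
        return "sit_quietly"      # the branch her actual inputs always trigger
    else:
        return "investigate"      # another branch she never executes

# p', the pruned program: the counterfactual branches have been deleted.
def p_prime(percept):
    return "sit_quietly"

# The fixed lifetime of sensory inputs both robots actually receive.
lifetime_inputs = ["room_is_quiet"] * 5

# Identical inputs, identical executed operations, identical outputs...
assert [p(x) for x in lifetime_inputs] == [p_prime(x) for x in lifetime_inputs]
# ...the only difference is the unexecuted code loaded in Alice-bot's memory.
```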

Imagine if p is a conscious program while p’ is not. The counterfactual lines of code we deleted to make p’ were required for consciousness. Alice-bot is conscious and Bob-bot is not. But the only physical difference between Alice-bot and Bob-bot is that Alice-bot has some extra lines of code sitting in her memory, so those extra lines of code in her memory must be the thing that is giving her consciousness. Weird, right?!

Simplicity & naturalness

Another suggestion from Tomasik for constraining the allowed maps from Marr’s level 3 (implementation) to level 2 (algorithm) is to only allow suitably simple or natural maps.

More contorted or data-heavy mapping schemes should have lower weight. For instance, I assume that personal computers typically map from voltage levels to 0s and 1s uniformly in every location. A mapping that gerrymanders the 0 or 1 interpretation of each voltage level individually sneaks the complexity of the algorithm into the interpretation and should be penalized accordingly.

What measure of complexity should we use? There are many possibilities, including raw intuition. Kolmogorov complexity is another common and flexible option. Maybe the complexity of the mapping from physical states to algorithms should be the length of the shortest program in some description language that, when given a complete serialised bitstring description of the physical system, outputs a corresponding serialised description of the algorithmic system, for each time step.

(Tomasik 2015)

Tomasik is imagining that we’re building a “consciousness classifier” that takes in a physical system and outputs what, if any, conscious experience it is having. The first part of the consciousness classifier translates the inputted physics and outputs a computation. Then the second part translates the computation description into a description of the conscious experience. He is saying that our consciousness classifier should have a simple physics->computation program.
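As a rough illustration of the kind of comparison Tomasik has in mind, here is a sketch that uses compressed description length as a crude stand-in for Kolmogorov complexity (which is uncomputable). The voltage readings and both mappings are invented; the point is only that a uniform threshold rule has a short description, while a gerrymandered per-location table smuggles the work into the interpretation and grows with the size of the system.

```python
import json
import zlib

# Made-up voltage readings from 16 "transistors".
voltages = [0.1, 4.9, 5.0, 0.2, 4.8, 0.0, 0.1, 5.0,
            4.9, 0.2, 0.1, 4.8, 5.0, 0.0, 4.9, 0.1]

# Mapping 1: one uniform rule applied at every location.
uniform_rule = "bit[i] = 1 if voltage[i] > 2.5 else 0"

# Mapping 2: a gerrymandered table that assigns each location's bit
# individually, chosen to spell out whatever bitstring we like.
desired_bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
gerrymandered_rule = json.dumps(
    {i: {"voltage": v, "bit": b} for i, (v, b) in enumerate(zip(voltages, desired_bits))}
)

# Crude complexity proxy: length of the compressed description of each mapping.
print(len(zlib.compress(uniform_rule.encode())))       # short
print(len(zlib.compress(gerrymandered_rule.encode())))  # much longer, and grows with system size
```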

But would this constraint allow human minds to exist? Consider what we learned about how the human brain implements a mind in my previous post. The mind seems to be governed by much more than what the neuron doctrine says: ATP waves, mitochondria, glial cells, etc. The simplest map from physics to computation would ignore mitochondria; it would look more like the neuron doctrine. But neuron spiking alone, as we’ve established, probably wouldn’t actually capture the human mind in full detail. This constraint would classify you as either having a less rich experience than you’re actually having, or no experience at all.

Even if the counterfactual or naturalness constraints make sense, it remains pretty unclear if they are able to constrain the allowed abstractions enough to shrink the number of possible experiences of a thing to the one true experience the thing is actually having.

Comments
simon:

Considering all the layers of convention and interpretation between the physics of a processor and the process it represents, it seems unlikely to me that the alien would be able to describe the simulacra. The alien is therefore unable to specify the experience being created by the cluster.

I don't think this follows. Perhaps the same calculation could simulate different real world phenomena, but it doesn't follow that the subjective experiences are different in each case.

If computation is this arbitrary, we have the flexibility to interpret any physical system, be it a wall, a rock, or a bag of popcorn, as implementing any program. And any program means any experience. All objects are experiencing everything everywhere all at once.

Afaik this might be true. We have no way of finding out whether the rock does or does not have conscious experience. The relevant experiences to us are those that are connected to the ability to communicate or interact with the environment, such as the experiences associated with the global workspace in human brains (which seems to control memory/communication); experiences that may be associated with other neural impulses, or with fluid dynamics in the blood vessels or whatever, don't affect anything.

Could both of them be right? No - from your point of view, at least one of them must be wrong. There is one correct answer, the experience you are having.

This also does not follow. Both experiences could happen in the same brain. You - being experience A - may not be aware of experience B - but that does not mean that experience B does not exist.

(edited to merge in other comments which I then deleted)

As with OP, I strongly recommend Aaronson, who explains why waterfalls aren't doing computation in ways that refute the rock example you discuss: https://www.scottaaronson.com/papers/philos.pdf

I think a lot of misunderstandings on this topic are because of a lack of clarity about what exact position is being debated/argued for. I think two relevant positions are

  • (1) it is impossible to say anything meaningful whatsoever about what any system is computing. (You could write this semi-formally as: for any physical system P and any computations C and C', the claims "P performs C" and "P performs C'" are equally plausible.)
  • (2) it is impossible to have a single, formal, universally applicable rule that tells you which computation a physical system is running that does not produce nonsense results. (I call the problem of finding such a rule the "interpretation problem", so (2) is saying that the interpretation problem is impossible)

(1) is a much stronger claim and definitely implies (2), but (2) is already sufficient to rule out the type of realist functionalism that OP is attacking. So OP doesn't have to argue (1). (I'm not sure if they would argue for (1), but they don't have to in order to make their argument.) And Scott Aaronson's essay is (as far as I can see) just arguing against (1) by proposing a criterion according to which a waterfall isn't playing chess (whereas stockfish is). So, you can agree with him and conclude that (1) is false, but this doesn't get you very far.

The waterfall thought experiment also doesn't prove (2). It's very hard to prove (2) because (2) is just saying that a problem cannot be solved, and it's hard to prove that anything can't be done. But the waterfall thought experiment is an argument for (2) by showing that the interpretation problem looks pretty hard. This is how I was using it in my reply to Steven on the first post of this sequence; I didn't say "and therefore, I've proven that no solution to the interpretation problem exists"; I've just pointed out that you start off with infinitely many interpretations and currently no one has figured out how to narrow it down to just one, at least not in a way such that the answers have the properties everyone is looking for.

The argument presented by Aaronson is that, since it would take as much computation to convert the rock/waterfall computation into a usable computation as it would be to just do the usable computation directly, the rock/waterfall isn't really doing the computation.

I find this argument unconvincing, as we are talking about a possible internal property here, and not about the external relation with the rest of the world (which we already agree is useless).

(edit: whoops missed an 'un' in "unconvincing")

You disagree with Aaronson that the location of the complexity is in the interpreter, or you disagree that it matters?

In the first case, I'll defer to him as the expert. But in the second, the complexity is an internal property of the system! (And it's a property in a sense stronger than almost anything we talk about in philosophy; it's not just a property of the world around us, because as Gödel and others showed, complexity is a necessary fact about the nature of mathematics!)

The interpreter, if it would exist, would have complexity. The useless unconnected calculation in the waterfall/rock, which could be but isn't usually interpreted, also has complexity. 

Your/Aaronson's claim is that only the fully connected, sensibly interacting calculation matters.  I agree that this calculation is important - it's the only type we should probably consider from a moral standpoint, for example. And the complexity of that calculation certainly seems to be located in the interpreter, not in the rock/waterfall.

But in order to claim that only the externally connected calculation has conscious experience, we would need to have it be the case that these connections are essential to the internal conscious experience even in the "normal" case - and that to me is a strange claim! I find it more natural to assume that there are many internal experiences, but only some interact with the world in a sensible way.

Perhaps the same calculation could simulate different real world phenomena, but it doesn't follow that the subjective experiences are different in each case.

I see what you mean, I think - I suppose if you're into multiple realizability, perhaps the set of all physical processes that the alien settles on all implement the same experience. But this just depends on how broad this set is. If it contains two brains, one thinking about the Roman empire and one eating a sandwich, we're stuck.

This also does not follow. Both experiences could happen in the same brain. You - being experience A - may not be aware of experience B - but that does not mean that experience B does not exist.

Yea, I did consider this as a counterpoint. I don't have a good answer to it, besides it being unintuitive and violating Occam's razor in some sense.

But this just depends on how broad this set is. If it contains two brains, one thinking about the roman empire and one eating a sandwich, we're stuck.

I suspect that if you do actually follow Aaronson (as linked by Davidmanheim) to extract a unique efficient calculation that interacts with the external world in a sensible way, that unique efficient externally-interacting calculation will end up corresponding to a consistent set of experiences, even if it could still correspond to simulations of different real-world phenomena.

But I also don't think that consistent set of experiences necessarily has to be a single experience! It could be multiple experiences unaware of each other, for example.

It’s interesting that you care about what the alien thinks. Normally people say that the most important property of consciousness is its subjectivity. Like, people tend to say things like “Is there something that it’s like to be that person, experiencing their own consciousness?”, rather than “Is there externally-legible indication that there’s consciousness going on here?”.

Thus, I would say: the simulation contains a conscious entity, to the same extent that I am a conscious entity. Whether aliens can figure out that fact is irrelevant.

I do agree with the narrow point that a simulation of consciousness can be externally illegible, i.e. that you can manifest something that’s conscious to the same extent that I am, in a way where third parties will be unable to figure out whether you’ve done that or not. I think a cleaner example than the ones you mentioned is: a physics simulation that might or might not contain a conscious mind, running under homomorphic encryption with a 100000-bit key, and where all copies of the key have long ago been deleted.

Whether aliens can figure out that fact is irrelevant.

To be clear, would you say that you are disagreeing with "Premise 2" above here?

Premise 2: Phenomenal consciousness is a natural kind: There is an objective fact-of-the-matter whether a conscious experience is occurring, and what that experience is. It is not observer-dependent. It is not down to interpretation. It is an intrinsic property of a system. It is the territory rather than a map.

I don’t think Premise 2 is related to my comment. I think it’s possible to agree with premise 2 (“there is an objective fact-of-the-matter whether a conscious experience is occurring”), but also to say that there are cases where it is impossible-in-practice for aliens to figure out that fact-of-the-matter.

By analogy, I can write down a trillion-digit number N, and there will be an objective fact-of-the-matter about what is the prime factorization of N, but it might take more compute than fits in the observable universe to find out that fact-of-the-matter.

I think, in discarding the simplicity argument, you are underestimating how many zeros are in the ratio of gigabytes needed to specify the brain simulation's initial conditions to gigabytes needed to store the quantum fields as the simulation runs. The data in the brain is vaguely linear in the number of electrons; the RAM needed to simulate the brain is vaguely exponential in the number of electrons. "Simplest explanation of the state of the GPUs by a factor of 100" and "simplest explanation of the state of the GPUs by a factor of 10^(number of stars in the visible universe)" are only quantitatively different, but sometimes quantity has a quality all of its own.

You seem to fundamentally misunderstand computation, in ways similar to Searle. I can't engage deeply, but recommend Scott Aaronson's primer on computational complexity: https://www.scottaaronson.com/papers/philos.pdf

Yeah, maybe I misunderstood the part about the molecules of gas "representing" the states of a Turing machine, but my first reaction is that it's not enough to declare that X represents Y; it must also be true that the functionality of X corresponds to the functionality of Y.

I can say that my sock is a calculator, and that this thread represents the number 2 and this thread represents the number 3, but unless it somehow actually calculates 2+3, such analogy is useless.

Similarly, it is not enough to say that a position of a molecule represents a state of a Turing machine, there must also be a way how the rules of the TM actually constrain the movement of the molecule.

Yeah, something like that. See my response to Euan in the other reply to my post.

Is this the passage you're referring to that means I'm "fundamentally misunderstanding computation"?

suppose we actually wanted to use a waterfall to help us calculate chess moves. [...] I conjecture that, given any chess-playing algorithm A that accesses a “waterfall oracle” W, there is an equally-good chess-playing algorithm A0, with similar time and space requirements, that does not access W. If this conjecture holds, then it gives us a perfectly observer-independent way to formalize our intuition that the “semantics” of waterfalls have nothing to do with chess.

This boils down to the Chalmers response. He isn't arguing that the waterfall couldn't implement a single play-through of a chess game, but that it couldn't robustly play many different chess games. I discuss the Chalmers response in the appendix and why I think it doesn't fix the issue.

Yes, and no, it does not boil down to Chalmers's argument (as Aaronson makes clear in the paragraph before the one you quote, where he cites the Chalmers argument!). The argument from complexity is about the nature and complexity of systems capable of playing chess - which is why I think you need to carefully read the entire piece and think about what it says.

But as a small rejoinder, if we're talking about playing a single game, the entire argument is ridiculous; I can write the entire "algorithm" in a kilobyte of specific instructions. So it's not that an algorithm must be capable of playing multiple counterfactual games to qualify, or that counterfactuals are required for moral weight - it's that the argument hinges on a misunderstanding of how complex different classes of system need to be to do the things they do.

PS. Apologies that the original response comes off as combative - I really think this discussion is important, and wanted to engage to correct an important point, but have very little time to do so at the moment!

If we apply the Scott Aaronson waterfall counterargument to your Alice-bot-and-Bob-bot scenario, I think it would say: The first step was running Alice-bot, to get the execution trace. During this step, the conscious experience of Alice-bot manifests (or whatever). Then the second step is to (let’s say) modify the Bob code such that it does the same execution but has different counterfactual properties. Then the third step is to run the Bob code and ask whether the experience of Alice-bot manifests again.

But there’s a more basic question. Forget about Bob. If I run the Alice-bot code twice, with the same execution trace, do I get twice as much Alice-experience stuff? Maybe you think the answer is “yeah duh”, but I’m not so sure. I think the question is confusing, possibly even meaningless. How do you measure how much Alice-experience has happened? The “thick wires” argument (I believe due to Nick Bostrom, see here, p189ff, or shorter version here) seems relevant. Maybe you’ll say that the thick-wires argument is just another reductio about computational functionalism, but I think we can come up with a closely-analogous “thick neurons” thought experiment that makes whatever theory of consciousness you subscribe to have an equally confusing property.

You can’t be wrong about the claim “you are having a visual experience”.

Have you heard of Cotard's syndrome?

But are you having a raw experience of looking at this image? The answer to this question is not up to interpretation in the same way. You can’t be wrong about the claim “you are having a visual experience”.

Sometimes when I set an alarm, I turn it off and go back to sleep (oops!). Usually I remember what happened, and I have a fairly wide range of mental states in these memories - typically I am aware that it's an alarm, and turn it off more or less understanding what's going on, even if I'm not always making a rational decision. Rarely, I don't understand that it's an alarm at all, and afterwards just remember that in the early morning I was fumbling with some object that made noise. And a similar fraction of the time, I don't remember turning off the alarm at all! I wonder what kind of processes animate me during those times.

Suppose turning off my alarm involved pressing a button labeled 'I am having conscious experience.' I think that whether this would be truth or lie, in those cases I have forgotten, would absolutely be up to interpretation.

If you disagree, and think that there's some single correct criterion for whether I'm conscious or not when the button gets pressed, but you can't tell me what it is and don't have a standard of evidence for how to find it, then I'm not sure how much you actually disagree.

I'm not talking about access consciousness here. I'm not talking about the ability to report. I'm talking about phenomenal consciousness.

Maybe I'm wrong, but I predict you're going to say "there's no difference", or "there's nothing to consciousness besides reporting" or something, which is a position I have sympathy for and is closely related to the view I talk about at the end of the post. But reporting is not what I'm talking about here.

I would argue that for all practical purposes it doesn't matter if computational functionalism is right or wrong. 

  1. Pursuing mind uploading is a good idea regardless of that, as it has benefits not related to perfectly recreating someone in silico (e.g. advancing neuroscience).
  2. If the digital version of RomanS is good enough[1], it will indeed be me, even if the digital version is running on a billiard-ball computer (the internal workings of which are completely different from the workings of the brain).

The second part is the most controversial, but it's actually easy to prove:

  1. Memorize a long sequence of numbers, and write down a hash sum of it.
  2. Ensure no one saw the sequence of numbers except you.
  3. Do an honest mind uploading (no attempts to extract the numbers from your brain, etc.).
  4. Observe how the digital version correctly recalls the numbers, as checked by the hash sum.
  5. According to the experiment's conditions, only you know the numbers. Therefore, the digital version is you. 
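In code, the check in steps 1 and 4 might look something like this (a minimal sketch; SHA-256 and the example digit string are my choices, not specified in the comment):

```python
import hashlib

# Step 1: before uploading, memorize a secret sequence and publish only its hash.
secret = "8159204473920184756302918475"          # known only to the original
published_hash = hashlib.sha256(secret.encode()).hexdigest()

# Step 4: after uploading, the digital version recalls the sequence;
# anyone can check the recollection against the published hash.
recalled = "8159204473920184756302918475"         # what the upload reports remembering
assert hashlib.sha256(recalled.encode()).hexdigest() == published_hash
```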

And if it's you, then it has all the same important properties of you, including "consciousness" (if such a thing exists). 

There are some scenarios where such a setup may fail (e.g. some important property of the mind is somehow generated by one special neuron which must be perfectly recreated). But I can't think of any such scenario that is realistic.

My general position is that 

  1. the concepts of consciousness, qualia, etc. are too loosely defined to be of any use (including use in any reasonable discussion). Just discard them as yet another phlogiston.
  2. thus, the task of "transferring consciousness to a machine" is ill-defined. Instead, mind uploading is about building a digital machine that behaves like you. It doesn't matter what is happening inside, as long as the digital version is passing a sufficiently good battery of behavioral tests.
  3. there is a gradual distinction between you and not-you. E.g. an atoms-level sim may be 99% you, a neurons-level sim - 90% you, a LLM trained on your texts - 80% you. The measure is the percentage of the same answers given to a sufficiently long and diverse questionnaire.
  4. a human mind in its fullness can be recreated in silico even by a LLM (trained on sufficient amounts of the mind inputs and outputs). Perfectly recreating the brain (or even recreating it at all) would be nice, but it is unnecessary for mind uploading. Just build an AI that is sufficiently similar to you in behavior.

This position can be called black-box CF, in addition to your practical and theoretical CF.

  1. ^ As defined by a reasonable set of quality and similarity criteria, beforehand.