I don't know if you are going to address this, but if I were to write a sequence of posts on functionalism, I'd start with the problem that "computation" isn't very well-defined, and hence functionalism isn't very well-defined either. In practice it's often clear enough whether or not a system is computing something, but you're going to have a hard time giving a fully general, rigorous, universally applicable definition of what exactly a physical process has to do to count as computing something (and if so, what precisely it is computing). Similarly, your definition of Practical CF inherits this problem, because it's not at all clear what "capturing the dynamics of the brain on some coarse-grained level of abstraction" means. This problem is usually brushed over, but imo that's where all the difficulty lies.
(Of course, many people think consciousness is inherently fuzzy, in which case associating it with similarly fuzzy concepts isn't a problem. But I'm assuming you're taking a realist point of view here and assuming consciousness is well-defined, since otherwise there's not much of a question to answer. If consciousness is just an abstraction, functionalism becomes vacuously true as a descriptive statement.)
Do you think there are edge cases where I ask “Is such-and-such system running the Miller-Rabin primality test algorithm?”, and the answer is not a clear yes or no, but rather “Well, umm, kinda…”?
(Not rhetorical! I haven’t thought about it much.)
I think there's a practically infinite number of edge cases. For a system to run the algorithm, it would have to perform a sequence of operations on natural numbers. If we simplify this a bit, we could just look at the values of the variables in the program (like a, x, y; I don't actually know the algorithm, I'm just looking at the pseudo-code on Wikipedia). If the algorithm is running, then each variable goes through a particular sequence, so we could just use this as a criterion and say the system runs the algorithm iff one of these particular sequences is instantiated.
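For concreteness, here's a minimal Python sketch (my own illustration, not anything from the comment): a standard Miller-Rabin implementation following the Wikipedia pseudocode, instrumented to record the successive values of the working variables a, x, y.

```python
import random

def miller_rabin_trace(n, k=5, seed=0):
    """Miller-Rabin primality test (following the Wikipedia pseudocode),
    instrumented to record the successive values of the working variables
    a, x, y. Returns (is_probably_prime, trace)."""
    trace = []
    if n < 4:
        return n in (2, 3), trace
    if n % 2 == 0:
        return False, trace
    # Write n - 1 as 2^s * d with d odd.
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    rng = random.Random(seed)
    for _ in range(k):
        a = rng.randrange(2, n - 1)   # random base in [2, n - 2]
        x = pow(a, d, n)              # x = a^d mod n
        trace.append(("a", a))
        trace.append(("x", x))
        for _ in range(s):
            y = pow(x, 2, n)          # y = x^2 mod n
            trace.append(("y", y))
            if y == 1 and x != 1 and x != n - 1:
                return False, trace   # nontrivial square root of 1: composite
            x = y
        if y != 1:
            return False, trace       # a^(n-1) != 1 (mod n): composite
    return True, trace                # survived every round: probably prime

# Example: 221 = 13 * 17, so this should (very likely) report composite.
is_prime, trace = miller_rabin_trace(221, k=2)
print(is_prime, trace)
```

The trace is then the "particular sequence" of values that a system would have to instantiate (under some mapping) to count as running the test on that input.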
Even in this simplified setting, figuring this out requires a mapping of physical states to numbers. If you start by agreeing on a fixed mapping (as with a computer, where we agree that this set of voltages at this location corresponds to the numbers 0-255), then it's possible to verify. But in general you don't know the mapping, which means you have to check whether there exists at least one mapping that represents these sequences. Taken very literally, this is probably always true, since you could have really absurd and discontinuous mappings (if this pebble here has a mass between 0.5g and 0.51g it represents the number 723; if it's between 0.51g and 0.52g it represents 911...) -- in fact, you have infinitely many mappings even after you agree on how the system partitions into objects, which is itself debatable.
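To see how permissive unrestricted mappings are, here's a small sketch (entirely my own illustration; the pebble masses and target numbers are made up) that cooks up a lookup-table mapping from any sequence of distinct physical readings to any target sequence of numbers:

```python
def cook_up_mapping(physical_states, target_numbers):
    """Build a lookup table under which the sequence of physical states
    'represents' the target sequence of numbers, if such a table exists."""
    mapping = {}
    for state, number in zip(physical_states, target_numbers):
        if state in mapping and mapping[state] != number:
            return None  # one physical state would have to encode two numbers
        mapping[state] = number
    return mapping

pebble_masses = [0.503, 0.514, 0.498, 0.507]  # made-up "measurements"
wanted_trace = [723, 911, 5, 64]              # any numbers we like
print(cook_up_mapping(pebble_masses, wanted_trace))
# {0.503: 723, 0.514: 911, 0.498: 5, 0.507: 64}
```

As long as the physical states that need to encode different numbers are distinguishable at all, some such gerrymandered mapping exists.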
So without any assumptions, you start with a completely intractable problem and then have to figure out how to deal with it (... allow only reasonable mappings? but what counts as reasonable? ...), which in practice doesn't seem like something anyone has been able to do. So even if I just show you a bunch of sand trickling through someone's hands, it's already a hard problem to prove that this doesn't represent the Miller-Rabin test algorithm; it probably represents some sequence of numbers in a not-too-absurd way. Some philosophers have just bitten the bullet and concluded that any and all physical systems compute, a position called pancomputationalism. The only actually formal rule I know of for figuring out a mapping comes from IIT, which is famously hated on LW (and also has conclusions most people find absurd, such as that digital computers can't be conscious at all, because the criterion ends up caring more about the hardware than the software). The thing is that most of the particles inside a computer don't actually change all that much depending on the program; it's really only a few specific locations where electrons move around. That's enough if we decide that our mapping only cares about those locations, but not so much if you start with a rule applicable to arbitrary physical systems.
That all said, I think illusionists have the pretty easy out of just saying that computation is frame-dependent, i.e., that the answer to "what is this system computing" depends on the frame of reference, specifically the mapping from physical states to mathematical objects. It's really only a problem you must solve if you both think that (a) consciousness is well-defined, frame-invariant, camp #2 style, etc., and also (b) the consciousness of a system depends on what it computes.
For me the answer is yes. There's some way of interpreting the colors of grains of sand on the beach, as they swirl in the wind, that would perfectly implement the Miller-Rabin primality test algorithm. So is the wind + sand computing the algorithm?
Well, it probably is computing insofar as it's the wind bringing in the actual bits of information, not you searching for a specific pattern instantiation. The test: if the original grains were moved around to form another prime number, would the wind still process them in a similar way and yield the correct answer?
This seems arbitrary to me. I'm bringing in bits of information at multiple layers when I write a computer program to calculate the thing and then read out the result from the screen.
Consider: if the transistors on the computer chip were moved around, would it still process the data in the same way and yield the correct answer?
Yes under some interpretation, but no from my perspective, because the right answer is about the relationship between what I consider computation and how I interpret the results I'm getting.
But the real question for me is - under a computational perspective of consciousness, are there features of this computation that actually correlate with strength of consciousness? Does every interpretation of the computation get equal weight? We could nail down a precise, agreed-upon definition of computation that doesn't have the issues mentioned above, but who knows whether that would be the definition that actually maps onto the territory of consciousness?
I recently came across unsupervised machine translation here. It's not directly applicable, but it opens the possibility that, given enough information about "something", you can pin down what it's encoding in your own language.
So let's say now that we have a computer that simulates a human brain in a manner that we understand. Perhaps there really could be a sense in which it simulates a human brain that is independent of our interpretation of it. I'm having some trouble formulating this precisely.
Right, and per the second part of my comment - insofar as consciousness is a real phenomenon, there's an empirical question of whether whatever frame-invariant definition of computation you're using is the correct one.
FYI for future readers: the OP circles back to this question (what counts as a computation) more in a later post of this sequence, especially its appendix, and there’s some lively discussion happening in the comments section there.
This is intended to be the first in a sequence of posts where I scrutinize the claims of computational functionalism (CF). I used to subscribe to it, but after more reading, I’m pretty confused about whether or not it’s true. All things considered, I would tentatively bet that computational functionalism is wrong. Wrong in the same way Newtonian mechanics is wrong: a very useful framework for making sense of consciousness, but not the end of the story.
Roughly speaking, CF claims that computation is the essence of phenomenal consciousness. A thing is conscious iff it is implementing a particular kind of program, and its experience is fully encoded in that program. A famous corollary of CF is substrate independence: since many different substrates (e.g. a computer or a brain) can run the same program, different substrates can create the same conscious experience.
CF is quite abstract, but we can cash it out into concrete claims about the world. I noticed two distinct flavors[1] of functionalism-y beliefs that are useful to disentangle. Here are two exemplar claims corresponding to the two flavors:

Theoretical CF: A simulation of a human brain that captures its physics down to the atomic level would create the same conscious experience as that brain, regardless of whether such a simulation could ever practically be run.

Practical CF: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, running on Earth at the same speed as real life,[2] would create the same conscious experience as that brain.
In this sequence, I’ll address these two claims individually, and then use the insights from these discussions to assess the more abstract overarching belief of CF.
How are these different?
A perfect atomic-level brain simulation is too expensive to run on a classical computer on Earth at the same speed as real life (even in principle).
The human brain contains ~10^26 atoms. The complexity of precisely simulating an N-body quantum system on a classical computer is O(2^N).[3] Such a simulation would cost 2^(10^26) operations per timestep. Conservatively assuming the simulation needs a temporal precision of 1 second, we need 2^(10^26) FLOPS. A single timestep needs more operations than there are atoms in the observable universe (~10^80), so even a classical computer the size of the observable universe, devoting one operation per atom per second, would still be too slow.
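For concreteness, here's a tiny Python sanity check of these magnitudes (my own sketch; the constants are just the rough estimates above, and the comparison is done in logarithms, since 2^(10^26) can't be represented directly):

```python
import math

ATOMS_IN_BRAIN = 1e26      # rough estimate used above
ATOMS_IN_UNIVERSE = 1e80   # rough estimate used above

# An exact N-body quantum simulation scales as O(2^N), so one timestep needs
# 2^(10^26) operations; compare magnitudes in log10.
log10_ops_per_step = ATOMS_IN_BRAIN * math.log10(2)   # ~3.0e25
print(f"exact simulation: ~10^(10^{math.log10(log10_ops_per_step):.1f}) operations per timestep")
# A universe-sized computer doing one operation per atom per second delivers
# ~10^80 FLOPS; the required exponent (~3e25) dwarfs the available one (80).
print(f"universe-sized computer: ~10^{math.log10(ATOMS_IN_UNIVERSE):.0f} operations per second")
```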
Putting in-principle possibility aside, an atom-level simulation may be astronomically more expensive than what is needed for many useful outputs. Predicting behavior or reproducing cognitive capabilities likely can be achieved with a much more coarse-grained description of the brain, so agents who simulate for these reasons will run simulations relevant to practical CF rather than theoretical CF.
Practical CF is more relevant to what we care about
In my view, there are three main questions for which CF is a crux: AI consciousness, mind uploading, and the simulation hypothesis. I think these questions mostly hinge on practical CF rather than theoretical CF. So when it comes to action-guiding, I’m more interested in the validity of practical CF than theoretical CF.
AI consciousness: For near-future AI systems to be conscious, it must be possible for consciousness to be created by programs simple enough to run on classical Earth-bound clusters. If practical CF is true, that demonstrates that we can create consciousness with such simple programs, so the comparatively simple programs run by AI systems might also create consciousness.
If theoretical CF is true, that doesn't tell us if near-future AI consciousness is possible. AI systems (probably) won’t include simulations of biophysics any time soon, so theoretical CF does not apply to these systems.
Mind uploading: We hope one day to make a suitably precise scan of your brain and use that scan as the initial conditions of some simulation of your brain at some coarse-grained level of abstraction. If we hope for that uploaded mind to create a conscious experience, we need practical CF to be true.
If we only know theoretical CF to be true, then a program might need to simulate biophysics to recreate your consciousness. This would make it impractical to create a conscious mind upload on Earth.
The simulation hypothesis: Advanced civilizations might run simulations that include human brains. The fidelity of the simulation depends on both the available compute and what they want to learn. They might have access to enough compute to run atom-level simulations.
But would they have the incentive to include atoms? If they’re interested in high-level takeaways like human behavior, sociology, or culture, they probably don’t need atoms. They’ll run the coarsest-grained simulation possible while still capturing the dynamics they’re interested in.
Practical CF is closer to the spirit of functionalism
The original vision of functionalism was that there exists some useful level of abstraction of the mind, below behavior but above biology, that explains consciousness. Practical CF requires such a level of abstraction, so it stays close to this vision. Theoretical CF is a departure from it, since it concedes that consciousness requires the dynamics of biology to be present (in a sense).
The arguments in favor of CF are mostly in support of practical CF. For example, Chalmers's fading qualia thought experiment only works in a practical CF setting. When replacing the neurons with silicon chips, theoretical CF alone would mean that each chip would have to simulate all of the molecules in the neuron, which would be intractable if we hope to fit the chip in the brain.[4]
CF is often supported by observing AI progress. We are more and more able to recreate the functions of the human mind on computers. So maybe we will be able to recreate consciousness on digital computers too? This is arguing that realistic classical computers will be able to instantiate consciousness, the practical CF claim. To say something about theoretical CF, we’d instead need to appeal to progress in techniques to run efficient simulations of many-body quantum systems or quantum fields.
CF is also sometimes supported by the success of the computational view of cognition. It has proven useful to model the brain as hardware that runs the software of the mind, via e.g. neuron spiking. On this view, the mind is a program simple enough to be encoded in neuron spiking (possibly plus some extra details, e.g. glial cells). Such a suitably simple abstraction of the brain could then run on a computer to create consciousness - the practical CF claim.
So on the whole, I’m more interested in scrutinizing practical CF than theoretical CF. In the next post, I’ll scrutinize practical CF.
These flavors really fall on a spectrum: one can imagine claims in between the two (e.g. a “somewhat practical CF”).
1 second of simulated time is computed at least every second in base reality.
There could be a number of ways around this. We could use quantum Monte Carlo or density functional theory instead, both with complexity O(N^3), meaning a simulation would need ~10^78 operations per timestep, still roughly the number of atoms in the observable universe. We could also use quantum computers, reducing the complexity to possibly O(N), but this would be a departure from the practical CF claim. Such a simulation on Earth with quantum computers looks possible in principle at first glance, but there could easily be engineering roadblocks that make it impossible in practice.
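(A quick sanity check of that figure, using the same rough atom count as the main text; my own sketch:)

```python
import math
N = 1e26  # rough number of atoms in the brain, as estimated in the main text
print(f"O(N^3) methods: ~10^{3 * math.log10(N):.0f} operations per timestep")  # ~10^78
```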
Could the chips instead interface with, say, a Dyson sphere? The speed of light would get in the way there, since it would take ~minutes to send & receive messages, while neuron firing details are important at << seconds.