I assume that phenomenal consciousness is a sub-component of the mind.
I'm not sure what is meant by this; would you mind explaining?
I mean something along the lines of "if you specify all aspects of the mind (e.g. using a program), you have also specified all aspects of the conscious experience"
Also, the in-post link to the appendix is broken; it's currently linking to a private draft.
Eek, thanks for the heads up, fixed!
MCMC is a simpler example to ensure that we’re on the same page on the general topic of how randomness can be involved in algorithms.
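To make sure we're picturing the same thing, here's a minimal sketch of the kind of seeded MCMC run I have in mind. Purely illustrative: the N(8, 1) target and the specific numbers are assumptions made up for the example, not anything from a real experiment.

```python
import numpy as np

def metropolis_mean(seed, n_steps=100_000):
    """Estimate the mean of a N(8, 1) target with a random-walk Metropolis sampler.

    The PRNG initialized from `seed` is the only source of "randomness";
    different seeds give slightly different estimates of the same quantity.
    """
    rng = np.random.default_rng(seed)
    log_p = lambda x: -0.5 * (x - 8.0) ** 2   # unnormalised log-density of N(8, 1)
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.normal()           # random-walk proposal
        if np.log(rng.uniform()) < log_p(proposal) - log_p(x):
            x = proposal                      # accept
        samples.append(x)
    samples = np.array(samples[n_steps // 10:])   # drop burn-in
    # crude error bar (ignores autocorrelation)
    return samples.mean(), samples.std() / np.sqrt(len(samples))

print(metropolis_mean(seed=1))  # an estimate near 8, with a small error bar
print(metropolis_mean(seed=2))  # a slightly different estimate of the same quantity
```

The point is just that the seed is the only "extra" input: change it and you get a different trajectory but an estimate of the same underlying quantity.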
Thanks for clarifying :)
Are we 100% on the same page about the role of randomness in MCMC? Is everything I said about MCMC super duper obvious from your perspective?
Yes.
...If I run MCMC with a PRNG given random seed 1, it outputs 7.98 ± 0.03. If I use a random seed of 2, then the MCMC spits out a final answer of 8.01 ± 0.03. My question is: does the random seed entering MCMC “have a causal effect on the execution of the
algorithm”?
I'm not saying anything about MCMC. I'm saying random noise is not what I care about, the MCMC example is not capturing what I'm trying to get at when I talk about causal closure.
I don't disagree with anything you've said in this comment, and I'm quite confused about how we're able to talk past each other to this degree.
The most obvious examples are sensory inputs—vision, sounds, etc. I’m not sure why you don’t mention those.
Obviously algorithms are allowed to have inputs, and I agree that the fact that the brain takes in sensory input (and all other kinds of inputs) is not evidence against practical CF. The way I'm defining causal closure is that the algorithm is allowed to take in some narrow band of inputs (narrow relative to, say, the inputs being the dynamics of all the atoms in the atmosphere around the neurons, or whatever). My bad for not making this more expli...
If you believe there exists "a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness" you believe you can define consciousness with a computation.
I'm not arguing against the claim that you could "define consciousness with a computation". I am arguing against the claim that "consciousness is computation". These are distinct claims.
So, most people who take the materialist perspective believe the material world comes from a sort of "computational universe", e.g. Tegmark IV.
Massive claim, nothing...
I don't really understand the point of this thought experiment, because if it wasn't phrased in such a mysterious manner, it wouldn't seem relevant to computational functionalism.
I'm sorry my summary of the thought experiment wasn't precise enough for you. You're welcome to read Chalmers' original paper for more details, which I link to at the top of that section.
I also don't understand a single one of your arguments against computational functionalism
I gave very brief recaps of my arguments from the other posts in the sequence here so I can connect...
Could you recommend any good (up-to-date) reading defending the neuron doctrine?
How would the alien know when they've found the correct encoding scheme?
I'm not sure I understand this. You're saying the alien could look at the initial conditions, since they're much simpler than the quantum fields as the simulation runs? In that case, how could it track down those initial conditions and interpret them?
Ah I see, thanks for clarifying.
Perhaps I should have also given the alien access to infinite compute. I think the alien still wouldn't be able to determine the correct simulation.
And also infinite X if you hit me with another bottleneck of the alien not having enough X in practice.
The thought experiment is intended to be about in-principle rather than practical.
Hmm I guess I gave your original comment too shallow a reading, apologies for that.
So to be clear, are you saying that, if a half-awake version of you looks at a button saying "I am conscious", thinks to themselves "am I conscious? Yes I am!", and presses the button, whether or not that half-awake version was actually correct with that introspection is up to interpretation? In other words, you don't trust the report of your half-awake self?
My instinct is to say something like: if your half-awake self is actually capable of introspecting on their experience...
it does not boil down to Chalmers' argument.
As far as I can tell, Scott's argument does not argue against the possibility that a waterfall could execute a single forward pass of a chess playing algorithm, if you defined a gerrymandered enough map between the waterfall and logical states.
When he defines the waterfall as a potential oracle, implicit in that is that the oracle will respond correctly to different inputs - counterfactuals.
Viewing the waterfall's potential oracleness as an intrinsic property of that system is to view counterfactual waterfalls...
Whether aliens can figure out that fact is irrelevant.
To be clear, would you say that you are disagreeing with "Premise 2" above here?
Premise 2: Phenomenal consciousness is a natural kind: There is an objective fact-of-the-matter whether a conscious experience is occurring, and what that experience is. It is not observer-dependent. It is not down to interpretation. It is an intrinsic property of a system. It is the territory rather than a map.
Is this the passage you're referring to that means I'm "fundamentally misunderstanding computation"?
...suppose we actually wanted to use a waterfall to help us calculate chess moves. [...] I conjecture that, given any chess-playing algorithm A that accesses a “waterfall oracle” W, there is an equally-good chess-playing algorithm A′, with similar time and space requirements, that does not access W. If this conjecture holds, then it gives us a perfectly observer-independent way to formalize our intuition that the “semantics” of waterfalls have nothing to do with...
I'm not talking about access consciousness here. I'm not talking about the ability to report. I'm talking about phenomenal consciousness.
Maybe I'm wrong, but I predict you're going to say "there's no difference", or "there's nothing to consciousness besides reporting" or something, which is a position I have sympathy for and is closely related to the view I talk about at the end of the post. But reporting is not what I'm talking about here.
Perhaps the same calculation could simulate different real world phenomena, but it doesn't follow that the subjective experiences are different in each case.
I see what you mean, I think - I suppose if you're into multiple realizability, perhaps all the physical processes in the set the alien settles on implement the same experience. But this just depends on how broad that set is. If it contains two brains, one thinking about the Roman Empire and one eating a sandwich, we're stuck.
...This also does not follow. Both experiences could happen in the same...
Yea, you might be hitting on at least a big generator of our disagreement. Well spotted
fixed, thanks. Careless exponent juggling
Thanks for the comment Steven.
Your alternative wording of practical CF is indeed basically what I'm arguing against (although we could interpret "the exact same experience" with varying degrees of strictness, and I think the arguments here count against the weaker versions as well as the strongest ones, depending on how strong those arguments are).
I'll explain a bit more why I think practical CF is relevant to CF more generally.
Firstly, functionalists commonly say things like
...Computational functionalism: the mind is the software of the brain...
I guess I shouldn’t put words in other people’s mouths, but I think the fact that years-long trains-of-thought cannot be perfectly predicted in practice because of noise is obvious and uninteresting to everyone, I bet including to the computational functionalists you quoted, even if their wording on that was not crystal clear.
There are things that the brain does systematically and robustly by design, things which would be astronomically unlikely to happen by chance. E.g. the fact that I move my lips to emit grammatical English-language sentences rather tha...
thanks, corrected
If I understand your point correctly, that's what I try to establish here
the speed of propagation of ATP molecules (for example) is sensitive to a web of more physical factors like electromagnetic fields, ion channels, thermal fluctuations, etc. If we ignore all these contingencies, we lose causal closure again. If we include them, our mental software becomes even more complicated.
i.e., the cost becomes high because you need to keep including more and more elements of the dynamics.
The statement I'm arguing against is:
Practical CF: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the conscious experience of that brain.
i.e., the same conscious experience as that brain. I titled this "is the mind a program" rather than "can the mind be approximated by a program".
Whether or not a simulation ca...
Yes, perfect causal closure is technically impossible, so it comes in degrees. My argument is that the degree of causal closure of possible abstractions in the brain is less than one might naively expect.
Are there any measures of approximate simulation that you think are useful here?
I am yet to read this but I expect it will be very relevant! https://arxiv.org/abs/2402.09090
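In the meantime, here's a toy sketch of one way you could operationalize a "degree of causal closure" (my own improvisation, not something from that paper): compare how well the coarse-grained variable predicts its own future using only itself versus using the full microstate. The closer the two are, the more causally closed the abstraction.

```python
import numpy as np

# Toy microstate x = (c, h): c is the coarse-grained variable we keep,
# h is the hidden detail we'd like to ignore. `eps` controls how much
# h leaks into c's dynamics, i.e. how badly closure fails.

def simulate(eps, steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    c, h, rows = 0.0, 0.0, []
    for _ in range(steps):
        c_next = 0.9 * c + eps * h + 0.1 * rng.normal()
        h_next = 0.9 * h + 0.1 * rng.normal()
        rows.append((c, h, c_next))
        c, h = c_next, h_next
    return np.array(rows)

def residual_var(X, y):
    """Variance of y left over after the best linear prediction from X."""
    X = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def closure_score(eps):
    rows = simulate(eps)
    c, h, c_next = rows[:, 0], rows[:, 1], rows[:, 2]
    v_coarse = residual_var(c[:, None], c_next)              # predict from c alone
    v_full = residual_var(np.column_stack([c, h]), c_next)   # predict from full microstate
    return v_full / v_coarse   # ~1.0 = the microstate adds nothing (closed)

for eps in [0.0, 0.05, 0.3]:
    print(eps, round(closure_score(eps), 3))
```

With eps = 0 the score sits near 1 (the hidden detail adds nothing); as eps grows, the coarse description leaks and the score drops.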
Especially if it's something as non-committal as "this mechanism could maybe matter". Does that really invalidate the neuron doctrine?
I agree that each of the "mechanisms that maybe matter" is tenuous by itself; the argument I'm trying to make here is hits-based. There are so many mechanisms that maybe matter that the chance of at least one of them mattering in a relevant way is quite high.
Thanks for the feedback Garrett.
This was intended to be more of a technical report than a blog post, meaning I wanted to keep the discussion reasonably rigorous/thorough. Which always comes with the downside of it being a slog to read, so apologies for that!
I'll write a shortened version if I find the time!
Thanks James!
One failure mode is that the modification makes the model very dumb in all instances.
Yea, good point. Perhaps an extra condition we'd need to include is that the "difficulty of meta-level questions" should be the same before and after the modification - e.g. the distribution over stuff it's good at and stuff it's bad at should be just as complex (not just good at everything or bad at everything) before and after.
Thanks Felix!
This is indeed a cool and surprising result. I think it strengthens the introspection interpretation, but without a requirement to make a judgement of the reliability of some internal signal (right?), it doesn't directly address the question of whether there is a discriminator in there.
Interesting question! I'm afraid I didn't probe the cruxes of those who don't expect hard takeoff. But my guess is that you're right - no hard takeoff ~= the most transformative effects happen before recursive self-improvement
Yea, I think you're hitting on a weird duality between setting and erasing here. I think I agree that setting is more fundamental than erasing. I suppose when talking about energy expenditure of computation, each set bit must be erased in the long run, so they're interchangeable in that sense.
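For reference, the quantitative version of the energy-expenditure point is Landauer's bound: erasing one bit at temperature $T$ dissipates at least $k_B T \ln 2$ (about $3 \times 10^{-21}$ J at room temperature). Whether you book that cost at the moment a bit is set or the moment it's erased, the ledger has to balance over a full set-and-reset cycle, which is the sense in which I mean they're interchangeable.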
Sorry for the delay. As both you and TheMcDouglas have mentioned: yea, this relies on $H(C|X) = 0$. The way I've worded it above is somewhere between misleading and wrong; I've modified it. Thanks for pointing this out!
Fixed, thanks!
Thanks for the comment, this is indeed an important component! I've added a couple of sentences pointing in this direction.
fixed, thanks!
Really cool post!
I can't help but compare it to John Vervaeke's 4 ways of knowing: propositional, procedural, perspectival, and participatory (nice summary).
It's interesting how the two of you have made a categorization that can literally be described with the same words (4 ways of knowing), but they don't map onto each other much at all (maybe practical mastery ~ procedural, but a bit of a stretch). I guess yours is "4 ways of knowing a thing" and his is "4 ways of knowing things", but still!