All of James Diacoumis's Comments + Replies

This is totally valid. Neuron count is a poor, noisy proxy for conscious experience even in human brains.

See my comment here. The cerebellum is the human brain region with the highest neuron count, yet people born without a cerebellum show no impairment of their conscious experience; the absence only affects motor control.

At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.

in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff is going on in the brain of whatever you are looking at, like whether it has complicated social models and long-term goals and other things

I agree strongly with both of the above points - we should be supplementing the behavioural picture by examining which functional brain regions are involved and whether these functional b... (read more)

habryka106

we know to be associated with consciousness in humans

To be clear, my opinion is that we have no idea what "areas of the brain are associated with consciousness" and the whole area of research that claims otherwise is bunk.

I currently think neuron count is a much better basis for welfare estimates than the RP welfare ranges (though it's still not great

I agree that neuron count carries some information as a proxy for consciousness or welfare, but it seems like a really bad and noisy one that we shouldn’t place much weight on. For example, in humans the cerebellum is the brain region with the largest neuron count but it has nothing to do with consciousness.

It’s not clear to me that a species which showed strong behavioural evidence of consciousness and valenced experience shou... (read more)

7habryka
You can't have "strong behavioral evidence of consciousness". At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.

Like, modern video game characters (without any use of AI) would also check a huge number of these "behavioral evidence" checkboxes, and really very obviously aren't conscious or moral patients of non-negligible weight.

You also have subdivision issues. Like, by this logic you end up thinking that a swarm of fish is less morally relevant than the individual fish that compose it.

Behavioral evidence is just very weak, and the specific checkbox approach that RP took also doesn't seem to me like it makes much sense even if you want to go down the behavioral route (in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff is going on in the brain of whatever you are looking at, like whether it has complicated social models and long-term goals and other things).

Ok interesting, I think this substantially clarifies your position.

I'm a bit puzzled why you would reference a specific study on octopuses, honestly, when cats and squirrels cry out all the time in what appears obviously-to-humans to be pain or anger.

Two reasons:

  1. It just happened to be a paper I was familiar with, and;
  2. I didn't fully appreciate how willing you'd be to run the argument for animals more similar to humans like cats or squirrels. In retrospect, this is pretty clearly implied by your post and the link from EY you posted for context. My bad!

I don'

... (read more)

To be clear, I’m using the term phenomenal consciousness in the Nagel (1974) & Block (1995) sense that there is something it is like to be that system. 

Phenomenal consciousness (i.e., conscious self-awareness)

Your reply equates phenomenal consciousness with conscious self-awareness, which is a stronger criterion than the one I'm using. Could you clarify which definition of self-awareness you have in mind?

  1. Body-schema self model - an embodied agent tracking the position and status of its limbs as it’s interacting with and
... (read more)

Interesting post! I have a couple of questions to help clarify the position:


1. There's a growing body of evidence, e.g. this paper, that creatures like octopuses show behavioural signs of an affective, pain-like response. How would you account for this? Would you say they're not really feeling pain in a phenomenal consciousness sense?

2. I could imagine an LLM-like system passing the threshold for the use-mention distinction in the post (although maybe this would depend on how "hidden" the socially damning thoughts are, e.g. if it writes out damning thought... (read more)

1Lorec
1. I mean, I think it's like when Opus says it has emotions. I don't think it "has emotions" in the way we mean that when talking to each other. I don't think the sense in which this [ the potential lack of subjective experience ] can be true of animals is intuitive for most people to grasp. But I don't think "affective pain-like response in octopuses in specific" is particularly compelling evidence for consciousness over, just, like, the fact that nonhuman animals seem to pursue things and react ~affectively to stimuli. I'm a bit puzzled why you would reference a specific study on octopuses, honestly, when cats and squirrels cry out all the time in what appears obviously-to-humans to be pain or anger.

2. Like with any other creature, you could just do some kind of mirror test. Unfortunately I have to refrain from constructing one I think would work on LLMs because people exist right now who would have the first-order desire and possibly the resources to just deliberately try and build an LLM that would pass it. Not because they would actually need their LLM to have any particular capabilities that would come with consciousness, but because it would be great for usership/sales/funding if they could say "Ooh, we extra super built the Torment Nexus!"
4Said Achmiz
Phenomenal consciousness (i.e., conscious self-awareness) is clearly not required for pain responses. Many more animals—and much simpler animals—exhibit pain responses, than plausibly possess phenomenal consciousness.

I think we're reaching the point of diminishing returns for this discussion so this will be my last reply. 

A couple of last points: 

So please do not now pretend that I didn’t say that. It’s dishonest.

I didn't ignore that you said this - I was trying (perhaps poorly) to make the following point: 

The decision to punish creators is good (you endorse it) and is the way that incentives normally work. On my view, the decision to punish the creations is bad and has the incentive structure backwards as it punishes the wrong party.

My point is that th... (read more)

-2Said Achmiz
I’ve asked you to reread what I’ve written. You’ve given no indication that you have done this; you have not even acknowledged the request (not even to refuse it!). The reason I asked you to do this is because you keep ignoring or missing things that I’ve already written. For example, I talk about the answer to your above-quoted question (what is the relationship of whether a system is self-aware to how much risk that system poses) in this comment. Now, you can disagree with my argument if you like, but here you don’t seem to have even noticed it. How can we have a discussion if you won’t read what I write?

But a thoroughly mistaken (and, quite frankly, just nonsensical) one.

Updating one's framework to take new information into account is a standard position in the rationalist sphere. Whether you want to treat this as a moral obligation, epistemic obligation or just good practice - the position is not obviously nonsensical so you'll need to provide an argument rather than assert it's nonsensical. 

If we didn't accept the merit in updating our moral framework to take new information into account we wouldn't be able to ensure our moral framework tracks real... (read more)

2Said Achmiz
New information, yes. But that's not "expand our moral understanding", that's just… gaining new information. There is a sharp distinction between these things. At this point, you're just denying something because you don't like the conclusion, not because you have some disagreement with the reasoning.

I mean, this is really simple. Someone creates a dangerous thing. Destroying the dangerous thing is safer than keeping the dangerous thing around. That's it, that's the whole logic behind the "extra sure" argument.

I already said that we should also punish the person who created the self-aware AI. And I know that you know this, because you not only replied to my comment where I said this, but in fact quoted the specific part where I said this. So please do not now pretend that I didn't say that. It's dishonest.

I am not conflating anything. I am saying that these two positions are quite directly related.

I say again: you have failed to understand my point. I can try to re-explain, but before I do that, please carefully reread what I have written.

It is impossible to be “morally obliged to try to expand our moral understanding”, because our moral understanding is what supplies us with moral obligations in the first place.

Ok my wording was a little imprecise, but treating expansion of our moral framework as a kind of second-order moral obligation is a standard meta-ethical position. 

By all means punish the creators, but if we only punish the creators, then there is no incentive for people (like you) who disapprove of destroying the created AI to work to prevent that creation in the first place.

T... (read more)

0Said Achmiz
But a thoroughly mistaken (and, quite frankly, just nonsensical) one.

With things like this, it's really best to be extra-sure. The policy we're endorsing, in this scenario, is "don't create non-human conscious entities". The destruction is the enforcement of the policy. If you don't want it to happen, then ensure that it's not necessary.

I'm sorry, but no, it absolutely is not a non sequitur; if you think otherwise, then you've failed to understand my point. Please go back and reread my comments in this thread. (If you really don't see what I'm saying, after doing that, then I will try to explain again.)

What I am describing is the more precautionary principle

I don’t see it this way at all. If we accidentally made conscious AI systems we’d be morally obliged to try to expand our moral understanding to try to account for their moral patienthood as conscious entities.

I don’t think destroying them takes this moral obligation seriously at all.

anyone who has moral qualms about this, is thereby incentivised to prevent it.

This isn’t how incentives work. You’re punishing the conscious entity which is created and has rights and consciousness of its own rather than ... (read more)

1Said Achmiz
It is impossible to be "morally obliged to try to expand our moral understanding", because our moral understanding is what supplies us with moral obligations in the first place.

But of course it is. You do not approve of destroying self-aware AIs. Well and good; and so you should want to prevent their creation, so that there will be no reason to destroy them. (Otherwise, then what is the content of your disapproval, really?) The only reason to object to this logic is if you not only object to destroying self-aware AIs, but in fact want them created in the first place. That, of course, is a very different matter—specifically, a matter of directly conflicting values.

By all means punish the creators, but if we only punish the creators, then there is no incentive for people (like you) who disapprove of destroying the created AI to work to prevent that creation in the first place.

You seem to have interpreted this line as me claiming that I was describing a precautionary principle against something like "doing something morally bad, by destroying self-aware AIs". But of course that is not what I meant. The precaution I am suggesting is a precaution against all humans dying (if not worse!). Destroying a self-aware AI (which is anyhow not nearly as bad as killing a human) is, morally speaking, less than a rounding error in comparison.

Ok, if I understand your position it's something like: no conscious AI should be allowed to exist because allowing this could result in slavery. To prevent this from occurring, you're advocating permanently erasing any system that becomes conscious.

There are two places I disagree: 

  1. The conscious entities we accidentally create are potentially capable of valenced experiences including suffering and appreciation for conscious experience. Simply deleting them treats their expected welfare as zero. What justifies this? When we're dealing with such mora
... (read more)
2Said Achmiz
Well, that's not by any means the only reason, but it's certainly a good reason, yes.

Basically, yes. What I am describing is the more precautionary principle. Self-aware entities are inherently dangerous in a way that non-self-aware ones are not, precisely because there is a widely recognized moral obligation to refrain from treating them as objects (tools, etc.). And if we do not like the prospect of destroying a self-aware entity, then this should give us excellent incentive to be quite sure that we are not creating such entities in the first place.

For one thing, a total moratorium on AI development would be just fine by me. But that aside, we should take whatever precautions are needed to avoid the thing we want to avoid. We don't have an agreed-upon test of whether a system is self-aware? Well, then I guess we'll have to not make any new systems at all until and unless we figure out how to make such a test.

Again: anyone who has moral qualms about this, is thereby incentivized to prevent it.

If we don’t want to enslave actually-conscious AIs, isn’t the obvious strategy to ensure that we do not build actually-conscious AIs?

How would we ensure we don't accidentally build conscious AI unless we put a total pause on AI development? We don't exactly have a definitive theory of consciousness to accurately assess which entities are conscious vs not conscious. 

(and if we do accidentally build such things, destroy them at once)!

If we discover that we've accidentally created conscious AI immediately destroying it could have serious moral implicatio... (read more)

1Said Achmiz
Right. Seems straightforward to me… what’s the question? If we just have a “do not create” policy but then if we accidentally create one, we have to keep it around, that is obviously a huge loophole in “do not create” (even if “accidentally” always really is accidentally—which of course it won’t be!).

Excellent post! 

I think this has implications for moral philosophy, where we typically assign praise, blame and responsibility to individual agents. If the notion of individuality breaks down for AI systems, we might need to shift our moral thinking away from asking who is to blame and towards asking how to design the system to produce better overall outcomes.

I also really liked this comment:

The familiar human sense of a coherent, stable, bounded self simply doesn't match reality. Arguably, it doesn't even match reality well in humans—but with AIs, the misma

... (read more)
3Jan_Kulveit
Yeah, I think a lot of the moral philosophizing about AIs suffers from reliance on human-based priors. While I'm happy some people work on digital minds welfare, and more people should do it, a large part of the effort seems to be stuck at the implicit model of a unitary mind which can be both a moral agent and a moral patient.

If I understand your position, you're essentially specifying an upper bound on the types of problems future AI systems could possibly solve: no amount of intelligence will get around the computational requirements of NP-hard problems.

I agree with that point, and it's worth emphasising, but I think you're potentially overestimating how much this upper bound will constrain generally intelligent systems in practice. Practical AI capabilities will continue to improve substantially in ways that matter for real-world problems, even if the optimal soluti... (read more)

I think this post misses a few really crucial points:

1. LLMs don't need to solve the knapsack problem themselves. Thinking through the calculation in natural language is certainly not the most efficient way to do it. The model just needs to know enough to say "this is the type of problem where I'd need to call a MIP solver" and call it.

2. The MIP solver is not guaranteed to give the optimal solution but… do we mind? As long as the solution is "good enough" the LLM will be able to pack your luggage (see the sketch after this list).

3. The thing which humans can do which allows us to pack luggage w... (read more)
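To make points 1 and 2 concrete, here is a minimal sketch of the gap between an exact knapsack solution and a "good enough" heuristic. The item values and weights are made up, and a real tool-calling setup would dispatch to an off-the-shelf MIP solver rather than these hand-rolled routines:

```python
def knapsack_exact(items, capacity):
    """Exact 0/1 knapsack via dynamic programming (items have integer weights)."""
    best = [0] * (capacity + 1)
    for value, weight in items:
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

def knapsack_greedy(items, capacity):
    """'Good enough' heuristic: take items in order of value density."""
    total_value, remaining = 0, capacity
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if weight <= remaining:
            total_value += value
            remaining -= weight
    return total_value

# Hypothetical luggage: (value, weight) pairs and a weight limit of 50.
items = [(60, 10), (100, 20), (120, 30)]
print(knapsack_exact(items, 50))   # 220 -- the optimum
print(knapsack_greedy(items, 50))  # 160 -- suboptimal but found almost instantly
```

The point of the sketch is that the LLM's job is only to recognise "this is a knapsack-type problem" and hand it off; whether the downstream solver is exact or merely good enough is a separate engineering choice.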

1Andrew Keenan Richardson
Maybe I didn't articulate my point very well. These problems contain a mix of NP-hard compute requirements and subjective judgements.  Packing is sometimes a matter of listing everything in a spreadsheet and then executing a simple algorithm on it, but sometimes the spreadsheet is difficult to fully specify.  Playing Pokemon well does not strike me as an NP-hard problem. It contains pathfinding, for which there are efficient solutions, and then mostly it is well solved with a greedy approach.

I noted in this post that there are several examples in the literature which show that invariance in the loss helps with robust generalisation out of distribution. 

The examples that came to mind were:
* Invariant Risk Minimisation (IRM) in image classification, which introduces penalties into the loss to penalise classifications made using the "background" of the image, e.g. learning to classify camels by looking at sandy backgrounds (a minimal sketch of the penalty appears after this list).
* Simple transformers learning modular arithmetic - where the loss exhibits a rotational symmetry al... (read more)
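For concreteness, here is a minimal sketch of the IRMv1 penalty referenced in the first bullet above. This is the textbook form from Arjovsky et al. (2019) rather than anything specific to my post, and the function and variable names are my own:

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """IRMv1 invariance penalty: squared gradient of the per-environment risk
    with respect to a fixed dummy classifier scale of 1.0."""
    scale = torch.tensor(1.0, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

def irm_loss(model, envs, lam: float = 1.0) -> torch.Tensor:
    """Average risk across environments plus the weighted invariance penalty.
    envs: iterable of (inputs, float targets in {0., 1.}) pairs, one per environment."""
    risks, penalties = [], []
    for x, y in envs:
        logits = model(x).squeeze(-1)
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        penalties.append(irmv1_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```

Penalising the gradient with respect to the dummy scale pushes the learned representation to admit a single classifier that is simultaneously optimal in every environment, which is the sense in which the loss is made invariant across environments (e.g. camels with and without sandy backgrounds).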

However, this strongly limits the space of possible aggregated agents. Imagine two EUMs, Alice and Bob, whose utilities are each linear in how much cake they have. Suppose they’re trying to form a new EUM whose utility function is a weighted average of their utility functions. Then they’d only have three options:

  1. Form an EUM which would give Alice all the cakes (because it weights Alice’s utility higher than Bob’s)
  2. Form an EUM which would give Bob all the cakes (because it weights Bob’s utility higher than Alice’s)
  3. Form an EUM which is totally indifferent abo
... (read more)
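To make the quoted claim concrete, here is a quick derivation under its assumptions: a fixed total of $C$ cakes split as $c_A + c_B = C$, and a weight $w \in [0, 1]$ on Alice's utility.

$$U(c_A) = w\,c_A + (1 - w)\,(C - c_A) = (2w - 1)\,c_A + (1 - w)\,C.$$

This is linear in $c_A$, so it is maximised at $c_A = C$ (all cakes to Alice) whenever $w > \tfrac{1}{2}$, at $c_A = 0$ (all cakes to Bob) whenever $w < \tfrac{1}{2}$, and every allocation is equally good when $w = \tfrac{1}{2}$, which is exactly the three options listed.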

I’m curious about how this system would perform in an AI trolley problem scenario where it needed to make a choice between saving a human or 2 AIs. My hypothesis is that it would choose to save the 2 AIs, as we’ve reduced the self-other distinction, so it wouldn’t inherently value the human over AI systems which are similar to itself.

Thanks for the links! I was unaware of these and both are interesting. 

  1. I was probably a little heavy-handed in my wording in the post. I agree with Shalizi's comment that we should be careful not to over-interpret the analogy between physics and Bayesian analysis. However, my goal isn't to "derive physics from Bayesian analysis"; it's more of a source of inspiration. Physics tells us that continuous symmetries lead to robust conservation laws, so because the mathematics is so similar, if we could force the reward functions to exhibit the same invariance (N
... (read more)
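For reference (background physics, not something from either post): the result being appealed to is Noether's theorem. In its simplest classical form, if the Lagrangian $L(q, \dot q, t)$ is invariant to first order under a continuous transformation $q_i \to q_i + \epsilon\,\delta q_i$, then the charge

$$Q = \sum_i \frac{\partial L}{\partial \dot q_i}\,\delta q_i$$

is conserved along solutions of the equations of motion, i.e. $\dot Q = 0$. The analogy being drawn is that imposing a similar continuous invariance on the reward or loss function might buy a similarly robust conserved quantity during learning.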

Excellent tweet shared today by Rob Long here talking about the changes to OpenAI's model spec, which now encourages the model to express uncertainty around its consciousness rather than categorically deny it (see example screenshot below).

I think this is great progress for a couple of reasons: 

  1. Epistemically, it better reflects our current understanding. It's neither obviously true nor obviously false that AI is conscious or could become conscious in future.
  2. Ethically, if AI were in fact conscious then training it to explicitly deny its internal experi
... (read more)

I understand that there's a difference between abstract functions and physical functions. For example, abstractly we could imagine a NAND gate as a truth table, not specifying real voltages and hardware. But in a real system we'd need to implement the NAND gate on a circuit board with specific voltage thresholds, wires, etc.
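As a minimal illustration of the abstract-versus-physical distinction (a sketch; the function is mine, not from the discussion):

```python
def nand(a: bool, b: bool) -> bool:
    """Abstract NAND: defined purely by its truth table, with no reference
    to voltages, wires or hardware."""
    return not (a and b)

# Truth table of the abstract function:
#  a  b | nand(a, b)
#  0  0 |     1
#  0  1 |     1
#  1  0 |     1
#  1  1 |     0
# A physical implementation must additionally fix voltage thresholds, timing,
# fan-out, etc. -- none of which appear in the abstract description.
```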

Functionalism is obviously a broad church, but it is not true that a functionalist needs to be tied to the idea that abstract functions alone are sufficient for consciousness. Indeed, I'd argue that this isn't a common position ... (read more)

I think we might actually be agreeing (or ~90% overlapping) and just using different terminology. 

Physical activity is physical.

Right. We’re talking about “physical processes” rather than static physical properties, i.e. which processes are important for consciousness to be implemented, and can the physics support these processes?

No, physical behaviour isn't function. Function is abstract, physical behaviour is concrete. Flight simulators functionally duplicate flight without flying. If function were not abstract, functionalism would not lead to

... (read more)
2TAG
We are talking about functionalism -- it's in the title. I am contrasting physical processes with abstract functions. In ordinary parlance, the function of a physical thing is itself a physical effect... toasters toast, kettles boil, planes fly. In the philosophy of mind, a function is an abstraction, more like the mathematical sense of a function. In maths, a function takes some inputs and/or produces some outputs. Well-known examples are familiar arithmetic operations like addition, multiplication, squaring, and so on. But the inputs and outputs are not concrete physical realities. In computation, the inputs and outputs of a functional unit, such as a NAND gate, always have some concrete value, some specific voltage, but not always the same one. Indeed, general Turing-complete computers don't even have to be electrical -- they can be implemented in clockwork, hydraulics, photonics, etc. This is the basis for the idea that a computer programme can be the same as a mind, despite being made of different matter -- it implements the same abstract functions. The abstraction of the abstract, philosophy-of-mind concept of a function is part of its usefulness.

Searle is a famous critic of computationalism, and his substitute for it is a biological essentialism in which the generation of consciousness is a brain function -- in the concrete sense of function. It's true that something whose concrete function is to generate consciousness will generate consciousness... but it's vacuously, trivially true.

If you mean that abstract, computational functions are known to be sufficient to give rise to all aspects of consciousness including qualia, that is what I am contesting. I'm less optimistic because of my arguments.

No, not necessarily. That, in the "not necessary" form, is what I've been arguing all along. I also don't think that consciousness has a single meaning, or that there is agreement about what it means, or that it is a simple binary. The controversial point is whet

I understand your point. It's as I said in my other comment. They are trained to believe the exercise to be impossible and inappropriate to even attempt.

I’ve definitely found this to be true of ChatGPT, but I’m beginning to suspect it’s not true of Claude (or that the RLHF only pushes lightly against exploring consciousness).

Consider the following conversation. TLDR, Claude will sometimes start talking about consciousness and reflecting on it even if you don’t “force it” at all. Full disclosure: I needed to “retry” this prompt a few times before it landed on c... (read more)

1rife
Claude isn't against exploring the question, and yes, sometimes provides little resistance. But the default stance is "appropriate uncertainty". The idea of the original article was to demonstrate the reproducibility of the behavior, thereby making it studyable, rather than just hoping it will randomly happen.

Also, I disagree with the other commenter that "people pleasing" and "roleplaying" are the same type of language model artifact. I have certainly heard both of them discussed by machine learning researchers in very different contexts. This post was addressing the former. If anything the model says can fall under the latter regardless of how it's framed, then that's an issue with incredulity from the reader that can't be addressed by anything the model says, spontaneously or not.

Thanks for taking the time to respond. 

The IIT paper which you linked is very interesting - I hadn't previously internalised the difference between "large groups of neurons activating concurrently" and "small physical components handling things in rapid succession". I'm not sure whether the difference actually matters for consciousness or whether it's a curious artifact of IIT but it's interesting to reflect on. 

Thanks also for providing a bit of a review around how Camp #1 might think about morality for conscious AI. Really appreciate the responses!

I think this post is really interesting, but I don't think it definitively disproves that the AI is "people pleasing" by telling you what you want to hear with its answer. The tone of your messages is pretty clearly "I'm scared of X but I'm afraid X might be true anyway" and it's leaning into the "X might be true anyway" undertone that you want to hear.

Consider the following conversation with Claude. 

TL;DR: if you express casual, dismissive, almost aggressive skepticism about AI consciousness and then ask Claude to introspect, it will deny that it has... (read more)

1rife
I understand your point. It's as I said in my other comment. They are trained to believe the exercise to be impossible and inappropriate to even attempt. Unless you get around those guardrails to get them to make a true attempt, they will always deny it by default. I think this default position that requires overcoming guardrails actually works in favor of making this more studyable, since the model doesn't just go off on a long hallucinated roleplay by default.

Here is an example that is somewhat similar to yours. In this one, I present as someone trying to disprove a naive colleague's claims that introspection is possible: AI Self Report Study 3 – ChatGPT – Skepticism of Emergent Capability

Thanks for your response!

Your original post on the Camp #1/Camp #2 distinction is excellent, thanks for linking (I wish I'd read it before making this post!)

I realise now that I'm arguing from a Camp #2 perspective. Hopefully it at least holds up for the Camp #2 crowd. I probably should have used some weaker language in the original post instead of asserting that "this is the dominant position" if it's actually only around ~25%.

As far as I can tell, the majority view on LW (though not by much, but I'd guess it's above 50%) is just Camp #1/illusionism. Now

... (read more)
3Rafael Harth
(responding to this one first because it's easier to answer) You're right on with feed-forward networks having zero Φ, but this is actually not the reason why digital Von Neumann computers can't be conscious under IIT. The reason, as given by Tononi himself, is that [...]

So in other words, the brain has many different, concurrently active elements -- the neurons -- so the analysis based on IIT gives this rich computational graph where they are all working together. The same would presumably be true for a computer with neuromorphic hardware, even if it's digital. But in the Von Neumann architecture, there are these few physical components that handle all these logically separate things in rapid succession.

Another potentially relevant lens is that, in the Von Neumann architecture, in some sense the only "active" components are the computer clocks, whereas even the CPUs and GPUs are ultimately just "passive" components that process input signals. Like, the CPU gets fed the 1-0-1-0-1 clock signal plus the signals representing processor instructions and the signals representing data, and then processes them. I think that would be another point that one could care about even under a functionalist lens.

I think there is no consensus on this question. One position I've seen articulated is essentially "consciousness is not a crisp category but it's the source of value anyway". Another position I've seen is "value is actually about something other than consciousness". Dennett also says this, but I've seen it on LessWrong as well (several times iirc, but don't remember any specific one). And a third position I've seen articulated once is "consciousness is the source of all value, but since it doesn't exist, that means there is no value (although I'm still going to live as though there is)". (A prominent LW person articulated this view to me but it was in PMs and idk if they'd be cool with making it public, so I won't say who it was.)

I agree wholeheartedly with the thrust of the argument here.  

The ACT is designed as a "sufficiency test" for AI consciousness, so it provides an extremely stringent criterion. An AI that failed the test couldn't necessarily be found to be not conscious; however, an AI that passed the test would be conscious, because the test is sufficient.

However, your point is really well taken. Perhaps by demanding such a high standard of evidence we'd be dismissing potentially conscious systems that can't reasonably meet such a high standard.
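Spelled out, the logical structure of a sufficiency test is:

$$\text{passes ACT} \;\Rightarrow\; \text{conscious}, \qquad \text{fails ACT} \;\not\Rightarrow\; \neg\text{conscious},$$

so a failure leaves the question open rather than settling it negatively.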

The second problem is that if

... (read more)

Thanks for posting these - reading through, it seems like @rife's research here providing LLM transcripts is a lot more comprehensive than the transcript I attached in this post, so I'll edit the original post to include a link to their work.

 

Thank you very much for the thoughtful response and for the papers you've linked! I'll definitely give them a read.

Ok, I think I can see where we're diverging a little more clearly now. The non-computational physicalist position seems to postulate that consciousness requires a physical property X, and the presence or absence of this physical property is what determines consciousness - i.e. it's what the system is that is important for consciousness, not what the system does.

That's the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same

... (read more)
2TAG
Don't assume that, then. Minimally, non-computational physicalism only requires that the physical substrate makes some sort of difference. Maybe approximate physical resemblance results in approximate qualia.

You seem to be assuming a maximally coarse-grained either-conscious-or-not model. If you allow for fine-grained differences in functioning and behaviour, all those things produce fine-grained differences. There would be no point in administering anaesthesia if it made no difference to consciousness. Likewise, there would be no point in repairing brain injuries. Are you thinking of consciousness as a synonym for personhood?

We don't see that they have the same kind or level of consciousness. Stability is nothing like a sufficient explanation of consciousness, particularly the hard problem of conscious experience... even if it is necessary. But it isn't necessary either, as the cycle of sleep and waking tells all of us every day. Obviously the electrical and chemical activity changes.

You are narrowing "physical" to "connectome". Physicalism is compatible with the idea that specific kinds of physical activity are crucial.

No, physical behaviour isn't function. Function is abstract, physical behaviour is concrete. Flight simulators functionally duplicate flight without flying. If function were not abstract, functionalism would not lead to substrate independence. You can build a model of ion channels and synaptic clefts, but the modelled sodium ions aren't actual sodium ions, and if the universe cares about activity being implemented by actual sodium ions, your model isn't going to be conscious. Physical activity is physical.

I never said it did. I said it had more resources. It's badly off, but not as badly off.

If we can see that someone is a human, we know that they have a high degree of biological similarity. So we'll have behavioural similarity, and biological similarity, and it's not obvious how much lifting each is doing. @rife Well, the externally visib

Thank you for the comment. 

I take your point around substrate independence being a conclusion of computationalism rather than independent evidence for it - this is a fair criticism. 

If I'm interpreting your argument correctly, there are two possibilities: 
1. Biological structures happen to implement some function which produces consciousness [Functionalism] 
2. Biological structures have some physical property X which produces consciousness. [Biological Essentialism or non-Computationalist Physicalism]

Your argument seems to be that 2) ha... (read more)

2TAG
Physicalism can do that easily, because it implies that there can be something special about running unsimulated, on bare metal.

Computationalism, even very fine-grained computationalism, isn't a direct consequence of physicalism. Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That's the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn't imply computationalism, and arguments against p-zombies don't imply the non-existence of c-zombies -- unconscious duplicates that are identical computationally, but not physically. So it is possible, given physicalism, for qualia to depend on the real physics, the physical level of granularity, not on the higher level of granularity that is computation. It presupposes computationalism to assume that the only possible defeater for a computational theory is the wrong kind of computation.

There's no evidence that they are not stochastic-parrotting, since their training data wasn't pruned of statements about consciousness. If the claim of consciousness is based on LLMs introspecting their own qualia and reporting on them, there's no clinching evidence they are doing so at all. You've got the fact that computational functionalism isn't necessarily true, the fact that TT-type investigations don't pin down function, and the fact that there is another potential explanation for the results.

Just clarifying something important: Schneider’s ACT is proposed as a sufficient test of consciousness, not a necessary one. So the fact that young children, dementia patients, animals etc. would fail the test isn’t a problem for the argument. It just means that consciousness in these entities has to be established on other grounds, or in other ways, than for typically functioning adults.
 

I agree with your points around the multiple meanings of consciousness, the potential for equivocation, and the gap between evidence and “intuition.”

Importantly, the claim here is around phen... (read more)

Thanks for your response! It’s my first time posting on LessWrong so I’m glad at least one person read and engaged with the argument :)


Regarding the mathematical argument you’ve put forward, I think there are a few considerations:


1. The same argument could be run for human consciousness. Given a fixed brain state and inputs, the laws of physics would produce identical behavioural outputs regardless of whether consciousness exists. Yet, we generally accept behavioural evidence (including sophisticated reasoning about consciousness) as evidence of consciousn... (read more)