what matters algorithmically is how they’re connected
I just realised that quote didn't mean what I thought it did. But yes, I do understand this, and Key seems to think the recurrent connections just aren't strong (they are 'diffusely interconnected'). Whether this means they have an intuitive self-model or not, honestly, who knows. Do you have any ideas of how you'd test it? Maybe like Graziano does with attentional control?
(I think we’re in agreement on this?)
Oh yes definitely.
I know nothing about octopus nervous systems and am not currently planning to learn, sorry.
Heheh, that's alright, I wasn't expecting you to; thanks for thinking about it for a moment anyway. I will simply have to learn it myself.
Or no, sorry, I've gone back over the papers and I'm still a bit confused.
Brian Key seems to specifically claim that fish and octopuses cannot feel pain, in reference to the recurrent connections of their pallium (+ the octopus equivalent, which seems to be the supraesophageal complex).
fish also lack a laminated and columnar organization of neural regions that are strongly interconnected by reciprocal feedforward and feedback circuitry [...] Although the medial pallium is weakly homologous to the mammalian amygdala, these structures principally possess feedforward circuits that execute nociceptive defensive behaviours
However, he then also claims:
This conclusion is supported by lesion studies that have shown that neither the medial pallium nor the whole pallium is required for escape behaviours from electric shock stimuli in fish (Portavella et al., 2004). Therefore, given that the pallium is not even involved in nociceptive behaviours, it could not be inferred that it plays a role in pain.
Which seems a little silly to me, because I'm fairly certain humans without a cortex also show nociceptive behaviours?
Which makes me think his claim (in regards to fish consciousness at least) is really just that the feedback circuitry required for the brain to make predictions about its own algorithm (and thus become subjectively aware) just isn't strong enough / is too minimal? He does source a pretty vast amount of information to try and justify this, so much that I haven't meaningfully made a start on it yet; it's pretty overwhelming. Overall I just feel more uncertain.
I've gone back over his paper on octopuses with my increased understanding, and he specifically seems to make reference to a lack of feedback connections between lobes (not just subesophageal lobes). Specifically, he focuses on the fact that the posterior buccal lobe (which is supraesophageal) has 'no second-order sensory fibres (that) subsequently project from the brachial lobe to the inferior frontal system', meaning 'it lacks the ability to feedback prediction errors to these lobes so as to regulate their models'. I honestly don't know if this places doubt on the ability of octopuses to make intuitive self-models or not in your theory, since I suppose it would depend on the information contained in the posterior buccal lobe, and on what recurrent connections exist between the other supraesophageal lobes. Figure 2 has a wiring diagram for a system in the supraesophageal brain, and Figure 7 gives a general overview of the brain circuitry involved in processing noxious stimuli. I would be very interested in trying to understand through what connection the octopus (or even the fish) could plausibly gain information it could use to make predictions about its own algorithm.
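Just so I'm sure I understand what 'feedback prediction errors so as to regulate their models' would even mean at the algorithmic level, the kind of loop I have in mind is something like this toy sketch (purely illustrative; nothing here is meant to be Key's or your actual model, and the variable names are my own invention):

```python
import numpy as np

# Toy predictive-processing loop: a "higher" area keeps a model (here just one
# number) of what a "lower" area's signal should look like. The prediction
# error travels back and regulates (updates) the model. The only point is that
# some return pathway carrying the error is needed; delete that update line
# and the model never improves, no matter how good the feedforward sweep is.

rng = np.random.default_rng(0)

model_estimate = 0.0    # the higher area's current prediction
learning_rate = 0.1

for t in range(200):
    lower_signal = 1.0 + rng.normal(0.0, 0.05)        # noisy bottom-up activity
    prediction_error = lower_signal - model_estimate  # mismatch signal
    # Feedback step: the error is fed back to regulate the model.
    model_estimate += learning_rate * prediction_error

print(round(model_estimate, 2))  # settles near 1.0
```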
I might be totally off on this, but making a prediction about something like a state of attention / cortical seriality would surely require feedback connections from the higher-level output areas where the algorithm is doing its 'thinking' to earlier layers. For one, awareness of a stimulus seems to allow greater control of attention directed towards that stimulus, meaning the idea of awareness must have some sort of top-down influence on the visual cortex, no?
This makes me wonder: is it the higher layers of the cortex that actually generate the predictive model of awareness, or is it the local regions that predict the awareness concept due to feedback from the higher levels? I'm trying to construct some diagram in my head of how the brain models its own algorithm, but I'm a bit confused I think.
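To be concrete about the architectural point I'm gesturing at (again, my own toy sketch, absolutely not something from your series or Key's papers), the intuition is just that an earlier stage can only learn to predict the state of a later 'serial' stage if that state is somehow routed back down to it:

```python
import numpy as np

# Toy comparison: a "higher" stage picks a winner among competing inputs (a
# crude stand-in for a serial attention bottleneck). An earlier stage hosts a
# "self-model" that tries to predict which input won. With a top-down /
# recurrent route the winner's identity is available to the earlier stage;
# in a purely feedforward sweep it isn't, so the self-model can only guess.
# All names here (attention_index, etc.) are made up for illustration.

rng = np.random.default_rng(1)
n_inputs = 4

def one_step(feedback_available: bool) -> bool:
    inputs = rng.random(n_inputs)
    attention_index = int(np.argmax(inputs))   # the higher stage's serial "winner"
    if feedback_available:
        self_model_prediction = attention_index              # fed back down: predictable
    else:
        self_model_prediction = int(rng.integers(n_inputs))  # no access: chance level
    return self_model_prediction == attention_index

for feedback in (True, False):
    accuracy = np.mean([one_step(feedback) for _ in range(2000)])
    print(f"feedback={feedback}: self-model accuracy = {accuracy:.2f}")
```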
You are not obliged to give any in-depth response to this; I've just become interested, especially given the similarities between your model and Key's and yet the serious potential ethical consequences of the differences.
Oh and sorry just to be clear, does this mean you do think that recurrent connections in the cortex are essential for forming intuitive self-models / the algorithm modelling properties of itself?
Thank you for the response! I am embarrassed that I didn't realise that the lack of recurrent connections referenced in the sources was referring to regions outside of their cortex-equivalent; I should've read through more thoroughly :) I am pretty up-to-date in terms of those things.
Can I additionally ask why you think some invertebrates likely have intuitive self-models as well? Would you restrict this possibility to basically just cephalopods and the like (as many do, them being the most intelligent invertebrates), or would you likely extend it to creatures like arthropods as well? (What's your fuzzy estimate that an ant could model itself as having awareness?)
I love this series very much, and I thank you for writing all of this. There's a lot I could ask or say about this, but there's a specific line of thought right now I want to query you on (I might message you more on other topics later if that's okay; this series has sparked a lot of thought in me). Apologies if I use bad or unclear language: I'm very well informed on the philosophy, not quite so much on the hard science.
I am very interested in this concept of awareness (because of my own interest in phenomenal consciousness). In the previous article, you say that:
every vertebrate, and presumably many invertebrates too, are also active agents with predictive learning algorithms in their brain, and hence their predictive learning algorithms are also incentivized to build intuitive self-models
This is a bit random, but it's relevant for my own ideas that I'm developing. Do you think making predictions about your own predictive algorithms is something that requires recurrent processing (as in feedback from higher processing layers onto lower ones)? I'm asking this because it actually seems like fish, for instance, as well as octopuses, have pretty much entirely feed-forward brains, so it seems dubious to me that they could model themselves as having awareness (or even model their own algorithm at all), unless I'm misunderstanding how this works on an algorithmic level.
If I'm right and this means they probably can't model their own algorithm, it would probably imply that everything 'below' reptiles, at the least, lacks the ability to do this as well (and thus lacks the ability to model itself as having awareness).
I do not think this is true.
I don't believe that chatbots are already conscious. Yet I do think we'll be able to tell. Specifically, I think we'll be able to trace back the specific kind of neural processing that generates beliefs and reports about consciousness, and then see which functional properties make this process unique compared to non-conscious processing. Then we can look into chatbots' brains and see if they're doing this processing (i.e. see if they're saying they're conscious because they have the mental states we do, or if they're just mimicking our reports of our own mental states without any of their own).
This post is really interesting!
Do you have any thoughts on why, then, psychosis typically 'kicks in' suddenly in late adolescence / early adulthood? (And why trauma correlates with it and tends to act as that 'kickstarter'?)
Also, any thoughts about delusions? Like, how come schizophrenic people will occasionally not just believe impossible things but, very occasionally, even random things like 'I am Jesus Christ' or 'I am Napoleon'?
Right, of course. So would this imply that organisms that have very simple brains / roles in their environment (for example: not needing to end up with a flexible understanding of the consequences of your actions) would have a very weak incentive too?
And if an intuitive self-model helps with things like flexible planning, then even though it's a creation of the 'blank-slate' cortex, surely some organisms would have a genome that sets up certain hyperparameters that would encourage it, no? It would seem strange for something pretty seriously adaptive to be purely an 'epiphenomenon' (as per language being facilitated by hyperparameters encoded in the genome). But also it's fine if you just don't have an opinion on this haha. (Also: wouldn't some animals not have an incentive to create self-models if creating a self-model would not seriously increase performance in any relevant domain? Like a dog trying to create an in-depth model of the patterns that appear on computer monitors, maybe.)
It does seem like flexible behaviour in some general sense is perfectly possible without awareness (as I'm sure you know), but I understand that awareness would surely help a whole lot.
You might have no opinion on this at all, but would you have any vague guess as to why you can only verbally report items in awareness? (Because even if awareness is a model of serial processing, and verbal report requires that kind of global projection / high state of attention, I've still seen studies showing that stimuli can be globally accessible / globally projected in the brain and yet still not consciously accessible, presumably in your model due to a lack of modelling of that global access.)