stormykat
stormykat has not written any posts yet.

I doubt infants are conscious, and thus they are only indirectly morally important: in the future they will eventually become moral patients. 'patients with severe impairment in minimally conscious states who can no longer form such representations' People who could not form such representations would be complete vegetables; examples of such humans are people born without a cortex, and probably, at least affectively speaking, people with severe akinetic mutism. In the second case, again like the infant, they will eventually regain such representations and so are indirectly morally relevant.
Why would they become moral patients because other people care about them?
Yeah, sorry, I phrased this badly: they would have moral value in the same way a treasured heirloom has moral value. Second-hand.
- The family members do not feel pain about their loved ones. I agree that they suffer, but that is not related to pain stimuli. You can have aversive feelings toward all kinds of things unrelated to nociception. Just think about salty water. You only crave it if you have too little salt, but otherwise it is yuck. Although, maybe, you mean nociception in a non-standard way.
Ahhh I think maybe I know another big reason why people are confused now....
Sorry, I think I may have explained this badly. The point is that the neural network has no actual aversiveness in its model of the world. There's no super meaningful difference here between the neural network and Gilbert; that was never my point. The point is that Gilbert is only sensitive to certain types of input, but he has no awareness of what the input does to him. Gilbert / the neural network only experiences: something happens to my body -> something else happens to my body + I react a certain way. He / the network has no model of / access to why that happens; there is no actual aversiveness...
People with some sort of fictional, extreme pain asymbolia, where they never felt any aversiveness at all, wouldn't be moral patients, no, although they might have value since their family, who are moral patients, still care about them. No such people actually exist irl, though: people with pain asymbolia still want things and still feel aversiveness like everyone else, they just don't suffer from physical pains.
“incentivized to build intuitive self-models” does not necessarily imply “does in fact build intuitive self-models”. As I wrote in §1.4.1, just because a learning algorithm is incentivized to capture some pattern in its input data, doesn’t mean it actually will succeed in doing so.
Right, of course. So would this imply that organisms with very simple brains / roles in their environment (for example: not needing to end up with a flexible understanding of the consequences of your actions) would have a very weak incentive, too?
And if an intuitive self-model helps with things like flexible planning, then even though it's a creation of the 'blank-slate' cortex, surely some organisms would have...
what matters algorithmically is how they’re connected
I just realised that quote didn't mean what I thought it did. But yes, I do understand this, and Key seems to think the recurrent connections just aren't strong (they are 'diffusely interconnected'). Whether this means they have an intuitive self-model or not, honestly, who knows. Do you have any ideas of how you'd test it? Maybe like Graziano does with attentional control?
(I think we’re in agreement on this?)
Oh yes definitely.
I know nothing about octopus nervous systems and am not currently planning to learn, sorry.
Heheh, that's alright, I wasn't expecting you to; thanks for thinking about it for a moment anyway. I will simply have to learn myself.
Or no, sorry, I've gone back over the papers and I'm still a bit confused.
Brian Key seems to specifically claim fish and octopuses cannot feel pain in reference to the recurrent connections of their pallium (+ the octopus equivalent which seems to be the supraesophageal complex).
fish also lack a laminated and columnar organization of neural regions that are strongly interconnected by reciprocal feedforward and feedback circuitry [...] Although the medial pallium is weakly homologous to the mammalian amygdala, these structures principally possess feedforward circuits that execute nociceptive defensive behaviours
However, he then also claims:
This conclusion is supported by lesion studies that have shown that neither the medial pallium nor the whole pallium is ...
Oh, and sorry, just to be clear: does this mean you do think that recurrent connections in the cortex are essential for forming intuitive self-models / the algorithm modelling properties of itself?
Thank you for the response! I am embarrassed that I didn't realise that the lack of recurrent connections referenced in the sources was referring to regions outside of their cortex-equivalent; I should've read through more thoroughly :) I am pretty up-to-date in terms of those things.
Can I additionally ask why you think some invertebrates likely have intuitive self-models as well? Would you restrict this possibility to basically just cephalopods and the like (as many do, these being the most intelligent invertebrates), or would you likely extend it to creatures like arthropods as well? (What's your fuzzy estimate that an ant could model itself as having awareness?)
There is a mechanism in the brain that has access to / represents the physical damage. There is no mechanism in the brain that has access to / represents the aversive response to the physical damage, since there is no meta-representation in first-order systems. Thus not a single part of the nervous system represents aversiveness; it can be found nowhere in the system.
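The first-order vs. meta-representation distinction above can be sketched as a toy model. This is just an illustrative sketch of the conceptual point, not anyone's actual proposal; all class and attribute names here are made up:

```python
# Toy illustration: a first-order system reacts to damage but nothing
# inside it represents the aversive response; a meta-representational
# system additionally models its own reaction.
# All names are hypothetical, invented for this sketch.

class FirstOrderSystem:
    """Maps bodily damage directly to a reaction. No internal state
    stands for 'this is aversive' -- the reaction just happens."""

    def step(self, damage: float) -> str:
        # damage -> reaction, and that's the whole story
        return "withdraw" if damage > 0.5 else "continue"


class MetaRepresentationalSystem:
    """Reacts the same way, but also represents its own response,
    so 'aversiveness' exists somewhere inside the model."""

    def __init__(self) -> None:
        self.self_model: dict[str, str] = {}

    def step(self, damage: float) -> str:
        reaction = "withdraw" if damage > 0.5 else "continue"
        # Meta-representation: the system models its own reaction as
        # an aversive state, rather than merely producing the reaction.
        self.self_model["last_state"] = (
            "aversive" if reaction == "withdraw" else "neutral"
        )
        return reaction


first = FirstOrderSystem()
meta = MetaRepresentationalSystem()

# Both systems behave identically from the outside...
print(first.step(0.9), meta.step(0.9))
# ...but only one contains a representation of its own response.
print(meta.self_model)
```

From the outside the two systems are behaviourally indistinguishable; the difference is only in whether any part of the system represents the response itself, which is the sense in which "aversiveness can be found nowhere" in the first one.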