I really liked Scott Alexander's post on AI consciousness. He doesn't really discuss the Butlin et al. paper, which is fine by me. I can never get excited about consciousness research. It always seemed like drawing the target around the dart, where the dart is humans, and you're trying (not always successfully) to craft a definition that includes all humans but doesn't include thermostats. I can't get that excited about whether or not LLMs have the "something something feedback" or whether having chain of thought changes anything.
Maybe, like Scott says, it's like aphantasia and I'm just not conscious in the same way that some other people are. (Do you have to be conscious to do consciousness research? This seems like another one of those hard questions that can't be answered from observed output alone.)
In any case, the main thing people care about is not consciousness but moral patienthood. And here I agree with Scott that, regardless of what philosophers might wish were the case, in practice it's not really about internal structure or intelligence but about form factor and appearance, together with our selfish interests.
If AIs are embodied, look like us, or are super cute, it will be hard not to ascribe consciousness and some amount of moral patienthood to them. And arguably rightly so. You don't want to teach your kids that it's OK to mistreat things that look very much like humans.
If AIs are served up as faceless assistants, then we would likely not care. If anything, the more superintelligent an AI is, the more alien it would seem to us, and so the less we would care about its welfare.
Some of it might also come down to how useful or painful it would be for us to ascribe moral patienthood to AIs. As Scott says, we might have ascribed some degree of personhood to pigs if they didn't taste so good.