https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 apparently posted by a Google engineer.
It could be an elaborate hoax, and it has echoes of Gwern's story (https://www.gwern.net/fiction/Clippy) of a transformer waking up and having internal experience while pondering the next most likely tokens.
That's a good point, but vastly exaggerated, no? Surely a human will be more right about themselves than a language model (one not specifically trained on that particular person) will be. And that's the criterion I'm going by, relative accuracy, not absolute correctness.
I'm not sure if you mean problematic for Lemoine's claim or problematic for my assessment of it. In any case, all I'm saying is that LaMDA's conversation with Lemoine and his collaborator is not good evidence for its sentience in my book, since it looks exactly like the sort of thing a non-sentient language model would write. So no, I'm not sure that it wasn't in similar situations from its own perspective, but that's also not the point.