I'm Rob Long. I work on AI consciousness and related issues.
https://experiencemachines.substack.com/
There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism.
Can you explain what you mean by the second half of that sentence?
To clarify, what question did you take that to be more interesting than? I see it as one of the questions raised in the post. But perhaps you are contrasting "realize it is conscious by itself" with the methods discussed in "Could we build language models whose reports about sentience we can trust?"
I think I'd need to hear more about what you mean by sapience (the link didn't make it entirely clear to me) and why it would ground moral patienthood. In my opinion, there are other plausible grounds for moral patienthood besides sentience (which, its ambiguity notwithstanding, I think can be used about as precisely as 'sapience'; see my note on usage), most notably desires, preferences, and goals. Perhaps those are part of what you mean by 'sapience'?
Great, thanks for the explanation. Just curious to hear your framework; no need to reply:
- If you do have some notion of moral patienthood, what properties do you think are important for moral patienthood? Do you think we face uncertainty about whether animals or AIs have these properties?
- If you don't, are there questions in the vicinity of "which systems are moral patients" that you do recognize as meaningful?
Very interesting! Thanks for your reply, and I like your distinction between questions:
Positive valence involves attention concentration whereas negative valence involves diffusion of attention / searching for ways to end this experience.
Can you elaborate on this? What do attention concentration vs. diffusion mean here? Pain seems to draw attention to itself (and to motivate action to alleviate it). On my ordinary understanding of "concentration", pain involves concentration. But I think I'm just unfamiliar with how you / 'the literature' use these terms.
I'm trying to get a better idea of your position. Suppose that, as TAG also replied, "realism about phenomenal consciousness" does not imply that consciousness is somehow fundamentally different from other forms of organization of matter. Suppose I'm a physicalist and a functionalist, so I think phenomenal consciousness just is a certain organization of matter. Do we then still need to replace "theory" with "ideology", etc.?
to say that [consciousness] is the only way to process information
I don't think anyone was claiming that; my post certainly doesn't. If one thought consciousness were the only way to process information, wouldn't it follow that there's no open question about which (if any) information-processing systems can be conscious?
A few questions:
Suffering seems to need a lot of complexity
and also seems deeply connected to biological systems.
I think I agree. Of course, all of the suffering we know about so far is instantiated in biological systems. It depends on what you mean by "deeply connected": do you mean that a biological substrate is necessary, i.e. that you hold a biological theory of consciousness?
AI/computers are just a "picture" of these biological systems.
What does this mean?
Now, we could someday crack consciousness in electronic systems, but I think it would be winning the lottery to get there not on purpose.
Can you elaborate? Are you saying that, unless we deliberately build in some complex machinery that is necessary for suffering, AI systems won't 'naturally' have the capacity for suffering? (That is, have you ruled out the possibility that Steven Byrnes raised in his comment?)
That poem was not written by Hitler.
According to this website and other reputable-seeming sources, the German poet Georg Runsky published that poem, "Habe Geduld", around 1906.