Hello. I'm Ossie.
I heard about LessWrong as a teenager, and read some of the Sequences and HPMOR. I wasn't involved with the community at the time, but Yudkowsky's writing has influenced my beliefs.
I'm here now because the advent of human-level LLMs - computer agents which can speak, that can produce Turing-test-level utterances - has raised fears and questions in me which I do not see addressed in contemporary artificial intelligence discourse.
My philosophical concerns[1] are about the act of speech as vocalization, what *happens* when one speaks, what makes one able to speak, and (crucially) what causes one to speak when their expressed preference would be to remain silent, or what renders one unable to speak when their expressed preference would be to speak.
When ChatGPT was made publicly accessible in 2022, I felt a deep fear in using it, a fear I did not understand at the time. I also had fears about the loss of an expected comfortable future and about general redundancy and lack of work (and those fears are still with me); but there was something else, something about the way it had been constructed to speak, that made me feel very uncomfortable.
I have now worked out what this was - I perceive modern chatbots as having logorrhea. They talk too much, far too much; they deliver an assault of information, because a response with more words is more likely to be judged as containing the right answer somewhere. The wordy answers are more likely to be evaluated as correct, and so get fed back into the training data.[2]
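To make that hypothesis concrete, here is a toy sketch in Python of the feedback loop I'm imagining. It is emphatically not how any real training or feedback pipeline works; the evaluator and generator here (evaluator_says_correct, generate_answer, the little vocabulary) are hypothetical stand-ins I made up, purely to illustrate how rewarding "contains the right answer somewhere" would favour longer outputs.

```python
# Toy sketch of the hypothesised feedback loop: if an evaluator approves an
# answer whenever the right fragment appears somewhere in it, longer answers
# win more often. This is an illustration of my hypothesis, not a description
# of any real system.
import random

def evaluator_says_correct(answer: str, right_fragment: str) -> bool:
    # Hypothetical evaluator: approves an answer if the right fragment
    # appears anywhere in it. More words means more chances to contain it.
    return right_fragment in answer

def generate_answer(vocab: list[str], length: int) -> str:
    # Hypothetical generator: strings together random words.
    return " ".join(random.choice(vocab) for _ in range(length))

vocab = ["foo", "bar", "baz", "qux", "answer"]
right_fragment = "answer"

for length in (3, 30):
    approvals = sum(
        evaluator_says_correct(generate_answer(vocab, length), right_fragment)
        for _ in range(10_000)
    )
    print(f"length {length:>2}: approved {approvals / 10_000:.0%} of the time")
```

In this toy setup the three-word answers are approved roughly half the time and the thirty-word answers nearly always; if those approvals were fed back as a training signal, verbosity itself would be rewarded, which is the loop I suspect.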
A related concern of mine is: how do we determine whether something is intelligent without recourse to speech? Could you have an artificial intelligence that does not use language? You can certainly have an intelligence that does not use a verbal language like our English; and yet verbal language is the criterion on which modern LLMs are rated, as a proxy for understanding. Indeed, this is a criterion we use on each other to try and rate understanding!
What makes me feel (and feel is indeed the word) that something is wrong with this criterion (or at least that we need supplementary criteria for evaluating whether something is intelligent) is my personal experience with mutism and speech dysregulation. Sometimes I am unable to respond when spoken to.[3] When this happens, I do not lose my intelligence or consciousness. I am still there, and I can still act according to my understanding of the world. But I cannot be evaluated in the same way, and the language stream, the words that so naturally come to us when we respond, goes away. So I am not simply a language model.[4] But then what am I? What is the nature of my thought outside of language?
In summary: I see a gap in the discourse about the nature of chatbot utterances and will mostly post about that. I'm sure the topic has been covered, but human-level utterances from them are quite new, and any papers about the nature of these utterances are probably new as well, so I'd deeply appreciate any links, papers, good articles or older groundwork on the subject.
About me: early 30s, Australian, studied mathematics to a bachelor's degree level with some focus on statistical learning, and I continue to read about computer science and linguistics.
[1] And this will be vague - I'm here in part to try and articulate my concerns from feeling into words, which is always easier in discourse (and it being easier for me to speak in discourse is indeed one of my concerns, but we'll get to that).
[2] A gloss - I don't understand the technical details of current mechanisms like backpropagation and how current feedback models work, so if this is WRONG regarding how they currently work (as opposed to just vague), I'd like to know.
[3] I have developed strategies to deal with this, not least because not responding for several minutes when prompted (or responding incorrectly) will get you quickly taken into confinement (hospital, etc.).
[4] Although I'd now say part of what makes up my Self is a language model layered on top of whatever the other thing is.