Abstract
Large language models such as ChatGPT are deep learning architectures trained on immense quantities of text. Their capability to produce human-like text is often attributed either to mental capacities or to the modeling of such capacities. This paper argues, to the contrary, that because much of meaning is embedded in common patterns of language use, LLMs can model the statistical contours of these usage patterns. We agree with distributional semantics that the statistical relations of a text corpus reflect meaning, but only part of it. Written words are only one part of language use, although an important one, as writing scaffolds our interactions and mental life. In human language production, preconscious anticipatory processes interact with conscious experience. Human language use both constitutes and makes use of given patterns while constantly rearranging them, in a way we compare to the creation of a collage. LLMs do not model sentience or other mental capacities of humans but the common patterns in public language use, clichés and biases included. They thereby highlight the surprising extent to which human language use gives rise to and is guided by patterns.