I'm a (very old) programmer with a non-technical interest in AI,
so I'm not sure how much I can contribute,
but I do have some thoughts on some of this.
It seems to me that the underlying units of intelligence are concepts.
How do I define a concept? Only in terms of other related concepts:
concepts are the nodes in a connected graph of concepts.
Concepts are ultimately 'grounded' in some pattern of sensory input.
Abstract concepts differ only in that they do not depend directly on sensory input.
It seems to me that gaining 'understanding' is the process of building this graph of related concepts.
We only understand concepts in terms of other concepts, much as a dictionary defines a word in terms of other words, or collections of words.
A concept only has meaning when it references other concepts.
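To make the picture concrete, here is a minimal sketch of that "graph of concepts" idea in Python. The names Concept and ConceptGraph, and the example concepts, are my own invention, purely for illustration; nothing here is meant to describe how any real system actually stores concepts.

```python
# Illustrative only: concepts as nodes, understanding as linking them.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    grounded: bool = False                        # tied directly to sensory input?
    related: set = field(default_factory=set)     # names of connected concepts

class ConceptGraph:
    def __init__(self):
        self.nodes: dict[str, Concept] = {}

    def add(self, name: str, grounded: bool = False) -> Concept:
        self.nodes.setdefault(name, Concept(name, grounded))
        return self.nodes[name]

    def relate(self, a: str, b: str) -> None:
        # "Understanding" a concept means connecting it to other concepts.
        self.add(a).related.add(b)
        self.add(b).related.add(a)

    def meaning(self, name: str) -> set:
        # A concept's "meaning" is just its neighbourhood in the graph.
        return self.nodes[name].related if name in self.nodes else set()

g = ConceptGraph()
g.add("red", grounded=True)        # grounded in sensory input
g.add("apple", grounded=True)
g.relate("fruit", "apple")         # abstract concept, defined via other concepts
g.relate("fruit", "sweetness")
print(g.meaning("fruit"))          # {'apple', 'sweetness'}
```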
When a concept is common to many intelligent systems (human brains or LLMs), it can be represented by a token (or word).
Communities of intelligent systems evolve languages (sets of these tokens) to represent and communicate these concepts.
When communities become separated and their intelligent systems can no longer communicate easily, the tokens in their languages evolve over time and the languages diverge, even though the common concepts often remain the same.
Sections of a community often develop extensions to a language (e.g. jargon, TLAs) to communicate concepts that are often only understood within that section.
My (very basic) understanding of LLMs is that they are pre-trained to predict the next word in a stream of text by identifying patterns in the input data that correlate strongly with the output. It does seem to me that, by passing the input through multiple layers and adjusting the weights on each layer, a neural network could detect the significant underlying connections between concepts within the data, and that higher-level (more abstract) concepts could be detected which are built on lower-level (more grounded) concepts. These conceptual connections could be considered a model of the real world, capable of making intelligent predictions, which could then be selectively pruned and refined using reinforcement learning.
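As a rough illustration of what I mean by "adjusting the weights on each layer to predict the next word", here is a toy next-token training loop in PyTorch. The six-word "corpus", the vocabulary, and the little two-layer network are all made up for the example; real LLMs use transformer architectures and vastly more data, so this only shows the shape of the idea, not how any actual model is built.

```python
# Toy sketch of next-token prediction; illustrative only.
import torch
import torch.nn as nn

text = "the cat sat on the mat".split()
vocab = sorted(set(text))
idx = {w: i for i, w in enumerate(vocab)}

# (previous word -> next word) training pairs
xs = torch.tensor([idx[w] for w in text[:-1]])
ys = torch.tensor([idx[w] for w in text[1:]])

model = nn.Sequential(                 # several layers whose weights get adjusted
    nn.Embedding(len(vocab), 16),
    nn.Linear(16, 32), nn.ReLU(),      # lower layer: more "grounded" features
    nn.Linear(32, len(vocab)),         # higher layer: scores for the next token
)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):                # adjust weights to match observed patterns
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    opt.step()

# After training, the model predicts a plausible next word.
probe = torch.tensor([idx["the"]])
print(vocab[model(probe).argmax().item()])   # likely 'cat' or 'mat'
```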