Language models are nearly AGIs but we don't notice it because we keep shifting the bar
I’m putting my existing work on AI on Less Wrong, and editing as I go, in preparation for publishing a collection of my work on AI as a free online volume. If this content interests you, you can always follow my Substack; it’s free and also goes under the name Philosophy Bear. Anyway, enjoy. Comments are appreciated, as I will be rewriting parts of the essays before I put them out. A big thank you to user TAG, who identified a major error in my previous post regarding the Chinese Room thought experiment, which prompted its correction [in the edition that will go in the book] and a new corrections section for my Substack page.

Glossary

GPT-3: a text-generating language model.

PaLM-540B: a stunningly powerful question-answering language model.

Great Palm: a hypothetical language model that combines the powers of GPT-3 and PaLM-540B. Probably buildable with current technology, a lot of money, and a little elbow grease.

Great Palm with continuous learning (GPWCL): a hypothetical language model that combines the capacities of GPT-3 and PaLM-540B, with one important additional capacity. Most language models work over a “window” of text, which functions as short-term memory; their long-term memory is fixed by their training. Continuous learning is the capacity to keep adding to long-term memory as you go, which would allow a language model to tackle much longer texts.

The argument

What I’ll be doing in this short essay is a bit cheeky, but I think it makes a few important points, viz.:

1. Goals that seem very concrete can turn out to be vulnerable to bar-shifting, shifting we may scarcely even notice.

2. AGI is such a goal.

3. We have gotten very good, much too good, at denying the progress we have made toward AGI.

4. A focus on able-bodied humanity, and the tendency to forget that disabled people exist when thinking about these topics, deceives us in these matters.

If I’m being a bit of a gadfly here, it’s not without a purpose. Everything I say in this article in a