I agree with the premise/ontology but disagree with the conclusion. It's pretty clear the generator of disagreement is:
"I believe GPT has demonstrated that near-human-level AI is a possible steady state for multiple years."
I don't think it's been a steady state nor that it will proceed as one; rather I think GPT not only marks but is causally responsible for the beginning of accelerating takeoff and is already a gateway to a variety of superhuman capabilities (or "products").
[a few minutes later] Actually, I realized I do agree, in the sense of relative updates. I've gotten too used to framing my own views in contrast to others' rather than in relation to my past self. GPT surprised me by demonstrating that an AI can be as intelligent as it is and deployed over the internet, allowed to execute code, etc., without presenting an immediate existential threat. Before, I would have guessed that by the time an AI exists which uses natural language as fluently as GPT-3 we'd have months to minutes left, lol. The update was toward slower takeoff (but shorter timelines, because I didn't think any of this would happen so soon).
But from this post, which feels a bit fragmented, I'm not clear on how the conclusions about timelines and x-risk follow from the premise of platform/product distinction, and I'd like to better understand the thread as you see it.
Thanks for the feedback; I am new to writing in this style and may have erred too much towards deleting sentences while editing. But, as they say, if you never cut too much you're always too verbose. I particularly appreciate the point that, when talking about how I am updating, I should make clear where I am updating from.
For instance, regarding human level intelligence, I was also describing relative to "me a year/month ago". I relistened to the Sam Harris/Yudkowsky podcast yesterday, and they detour for a solid 10 minutes about how "human level" intelligence is a straw target. I think their arguments were persuasive, and that I would have endorsed them a year ago, but that they don't really apply to GPT. I had pretty much concluded that the difference between a 150 IQ AI and a 350 IQ AI would be a matter of scale. GPT as a simulator/platform seems to me like an existence proof for a not-artificially-handicapped human level AI attractor state. Since I had previously thought the entire idea was a distraction, this is an update towards human level AI.
The impact on AI timelines mostly follows from diversion of investment. I will think on whether I have anything additional to add on that front.
I understand your reasoning much better now, thanks!
"GPT as a simulator/platform seems to me like an existence proof for a not-artificially-handicapped human level AI attractor state" is a great way to put it and a very important observation.
I think the attractor state is more nuanced than "human-level". GPT is incentivized to learn to model "everyone everywhere all at once", if you will, a superhuman task -- and while the default runtime behavior is human-level simulacra, I expect it to be possible to elicit superhuman performance by conditioning the model in certain ways or with a relatively small amount of fine-tuning/RL. Also, being simulated confers many advantages for intelligence (instances can be copied/forked, are much more programmable than humans, potentially run much faster, etc). So I generally think of the attractor state as being superhuman in some important dimensions, enough to be a serious foom concern.
Broadly, though, I agree with the framing -- even if it's somewhat superhuman, it's extremely close to human-level and human-shaped intelligence compared to what's possible in all of mindspace, and there is an additional unsolved technical challenge to escalate from human-level/slightly superhuman to significantly beyond that. You're totally right that it removes the arbitrariness of "human-level" as a target/regime.
I'd love to see an entire post about this point, if you're so inclined. Otherwise I might get around to writing something about it in a few months, lol.
I'm thinking that maybe alignment should also be thought of as primarily about the platform. That is, what might be more feasible is not aligning agents with their individual values, but aligning maps-of-concepts, things at a lower level than whole agents (meanings of language and words, and salient concepts without words). This way, agents are aligned mostly by default in a similar sense to how humans are, assembled from persistent intents/concepts, even as it remains possible to instantiate alien or even evil agents intentionally.
The GPTs fall short so long as they can't do something like IDA/MCTS that makes more central/normative episodes for the concepts accessible while maintaining/improving their alignment (intended meaning, relation to the real world, relation to other concepts). That is, like going from understanding the idea of winning at chess to playing at world-champion level (accessing rollouts where that's observed), but for all language use, through generation of appropriate training data (instead of only using external training data).
All this without focusing on specific agents, or values of agents, or even kinds of worlds that the episodes discuss, maintaining alignment of the map with the territory of real observations (external training data, as the alignment target), not doing any motivated-reasoning fine-tuning that might distort the signal of those observations. Only after this works is it time to instantiate specific products (aligned agents), without changing/distorting/fine-tuning/misaligning the underlying simulator.
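For concreteness, here is a toy sketch of the kind of generate-your-own-training-data loop being gestured at, in the spirit of expert iteration: sample episodes from the current model, select the ones an evaluator deems more central/normative, and retrain on those rather than only on external data. Everything in it is a stand-in -- the "policy" is just a categorical distribution over digits and the "evaluator" a hard-coded scorer -- so treat it as an illustration of the loop's shape, not of any actual training pipeline.

```python
# Toy sketch of a self-generated-training-data loop (expert iteration flavor).
# The "policy", "episodes", and "evaluator" here are deliberately trivial.
import random
from collections import Counter

TOKENS = list("0123456789")

def sample_episode(policy, length=8):
    """Roll out an 'episode' (a token sequence) from the current policy."""
    return [random.choices(TOKENS, weights=[policy[t] for t in TOKENS])[0]
            for _ in range(length)]

def score(episode):
    """Stand-in evaluator: higher digit sum = more 'central/normative' episode."""
    return sum(int(t) for t in episode)

def refit(episodes):
    """'Fine-tune' the policy on the selected episodes (smoothed frequency counts)."""
    counts = Counter(t for ep in episodes for t in ep)
    total = sum(counts.values())
    return {t: (counts[t] + 1) / (total + len(TOKENS)) for t in TOKENS}

policy = {t: 1 / len(TOKENS) for t in TOKENS}   # start from a uniform policy
for step in range(10):
    rollouts = [sample_episode(policy) for _ in range(200)]
    rollouts.sort(key=score, reverse=True)
    policy = refit(rollouts[:50])               # train only on the best rollouts
    avg = sum(score(ep) for ep in rollouts) / len(rollouts)
    print(step, round(avg, 1))                  # average quality climbs over iterations
```

The only point of the toy is the loop: the model's own best rollouts become its next batch of training data, rather than relying exclusively on an external corpus.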
I feel like this post warrants some kind of product/design tag that doesn't exist yet but not sure where exactly I'd carve reality.
In this post I review the platform/product distinction and note how GPT can be modeled as a platform, which further products will be built upon. I argue that prosaic alignment is best viewed as ~~hypnotism~~ ~~capabilities training~~ instantiating products on that platform. I explain why this slightly pushes out my X-Risk timing while increasing my X-Risk factor.
Products vs Platforms
In Stevey's Google Platforms Rant (2011), Steve Yegge lays out the differences between products and platforms.
This is probably one of the most insightful ontological distinctions I have discovered in my decade of software development. Understood through this lens, Twitter is a platform where @realDonaldTrump was a product, Instagram is a platform where influencers are products, and prediction markets will be a platform where casinos and sports books are products. To become a billionaire, the surest route is to create 2000 millionaires and collect half the profits.
GPT as a Platform
NB: following convention I use "GPT" generically.
In Simulators, janus makes a compelling ontological distinction between GPT and the normally discussed types of AGI (agents, oracles, tools, and mimicry). They note that while GPT-N might model, instantiate, and interrogate agents and oracles (of varying quality), the AI itself does not have a utility function of the kind those frameworks assume, and in particular does not appear to develop instrumental goals.
Prosaic alignment is the phrase commonly used to describe, roughly, "try to stop GPT from saying the N word so much".
Source: https://twitter.com/AnthropicAI/status/1562828022675226624/photo/1
Prosaic alignment is often capabilities research, e.g. step-by-step reasoning prompts. "Prompt Engineers" are likely to become common. Products built by taking an LLM trained over vast amounts of generic language and then training the final leg on specific corpora, like code, are coming out. There is an app store for products that take GPT-3 and specialize that final leg or simply condition GPT-3's output. Some of these apps, like AI Dungeon, are reasonably high quality. I expect that offering a near-SOTA LLM fine-tuned by the user on a proprietary corpus will become a common service offered by platforms like GCP.
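To make the pattern concrete, here is a minimal sketch of what such a product can amount to: a prompt prefix plus a thin wrapper around the platform's API. It uses the older (pre-1.0) OpenAI Python SDK, and the "text adventure" framing, prompt, and parameter choices are hypothetical stand-ins rather than any real app's implementation.

```python
# Minimal sketch of a "product" built on the GPT "platform": a thin wrapper
# that conditions the base model with a product-specific prompt prefix.
# Pre-1.0 openai SDK; prompt and parameters are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PRODUCT_PREFIX = (
    "You are the narrator of a text adventure game. "
    "Continue the story in second person, two sentences at a time.\n\n"
)

def product_completion(user_input: str) -> str:
    """Condition the general-purpose model into a narrow 'product' behavior."""
    response = openai.Completion.create(
        model="text-davinci-003",      # a near-SOTA base model of the era
        prompt=PRODUCT_PREFIX + user_input,
        max_tokens=150,
        temperature=0.8,
    )
    return response["choices"][0]["text"].strip()

print(product_completion("You enter the abandoned observatory."))
```

The platform (pretraining) is untouched; the product is essentially the conditioning plus packaging.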
When prosaic alignment engineers attempt to generate helpful, honest, and harmless (HHH) GPT instances, they are largely attempting to constrain which products can be instantiated on their platform.
GPT's Impact on X-Risk
I do not believe that GPT is a likely X-Risk. In particular, I believe GPT has demonstrated that near-human-level AI is a possible steady state for multiple years. There may be only an order of magnitude of data left to consume, and it is of lower quality. After someone bites the multilingual bullet, transcribes YouTube, and feeds in all the WeChat conversations China has stored, there don't seem to be many opportunities to double your corpus. It is reasonable to believe that GPT may end up only capable of simulating Feynman, and not capable of simulating a person 100x smarter than Feynman.
As GPT is a profitable platform, we should expect capabilities companies to divert time and resources into training better GPTs and making GPT-based products. InstructGPT does not say the N word as often.
This does not lower X-Risk from RLHF. Indeed, as many "alignment"-focused teams (staffed to a perfunctory level that absolves executives of guilt) are working on the "please, GPT, stop saying the N word" problem, the total energy towards X-Risk mitigation might drop. Additionally, as the most safety-focused labs are drawn to GPT alignment and capabilities, companies working on orthogonal designs might be first to launch the training run that kills us.