I agree that many people focus on "superhuman intelligence, sooner or later or never" and ignore how a human-level or even below-human-level intelligence could change the world if it became cheap enough.
I heard a rumor about someone who made lots of money by telling an AI to generate thousands of e-books and selling them on Amazon. People previously did something similar by scraping Wikipedia pages and selling them as books, but this is different -- the books are not an obvious scam, they are probably just mediocre; and as the AIs get better, the new books will get better, until enough people realize they do not need to buy them, because they can ask an AI directly.
I imagine similar "shovelware" everywhere. There are probably already many people accepting programming or artistic tasks online, giving them to an AI, and getting paid for the result. If you want to make money this way, do it quickly, because at some point people will realize they can ask an AI directly.
This may be a convenient time to establish fake artistic credentials. Create a page on Patreon, and ask an AI to make deliberately bad pictures... gradually improving, so that people can see your artistic progress. Later, when the backlash against AI art comes, people will stop trusting art from random strangers online, but you will have established credentials as an actual human artist, so you can charge extra for "hand-made" art.
At some point, software development will become "put an AI into a box, and tell it what to do". For example, to program a new Tetris game, you will simply take a generic AI and tell it "do Tetris"; and that's all. To create an entirely new game, you will provide a verbal explanation, a few example screenshots (generated by another AI), and tell the AI "do this".
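To make the shape of that workflow concrete, here is a minimal sketch. Everything in it is hypothetical: the `GenericAI` class, its `build` method, and the file names are invented stand-ins for whatever interface such a system might actually expose.

```python
# A minimal sketch of the imagined "AI in a box" workflow.
# NOTE: `GenericAI`, its `build` method, and the file names are
# hypothetical stand-ins, not a real API.

class GenericAI:
    """Stand-in for a future generic AI that turns specs into software."""

    def build(self, spec: str, examples: list[str] | None = None) -> str:
        # A real system would return a runnable program; here we just
        # echo the request to show the shape of the interaction.
        note = f" (guided by {len(examples)} screenshots)" if examples else ""
        return f"<program built from spec {spec!r}{note}>"

ai = GenericAI()

# "Do Tetris" -- the entire specification.
print(ai.build("do Tetris"))

# An entirely new game: a verbal explanation plus example screenshots,
# themselves generated by another AI.
print(ai.build(
    "a falling-block game where gravity rotates every 30 seconds",
    examples=["mockup1.png", "mockup2.png"],
))
```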
There will be many "experts" online who will basically be an interface between you and an AI. You will call a human doctor or a financial advisor or a motivational coach, and the person will listen to you while silently typing questions to an AI, then tell you the results, and charge you $100 per hour.
If many cheap appliances get an AI component, I expect most things to be commanded by voice. An AI with a camera will be a powerful combination: the appliance will be able to see what is there and draw some logical conclusions. Your gas stove will tell you "I think the food might be ready, would you like to check it?", and your fridge will remind you that you are out of butter, and, by the way, the box of milk has been open for a week now; you might want to check it. Toys will talk to children, and accept verbal instructions.
...and all of this will be used for surveillance, because of course an intelligent device can provide much better information about you. Not just what you do, but whether you smile or frown while you do it; whether you also use devices made by the competition; whether it seems like you have extra money to buy an upgrade; which other devices are missing from your home.
...also, advertising everywhere. The toys will tell your kids to buy merchandise. The kitchen appliances will recommend buying food from specific distributors. The computer games will model their villains on politicians who propose to regulate advertising and surveillance.
Online, it will be possible to create a cheap "Matrix" for people. Imagine being on a web forum where everyone else is a bot, and you are the only human there. But you will never know, because the bots will post interesting things, debate with you, flirt with you, invite you to read other websites... and occasionally do the thing they were made for, like trying to make you buy some products or vote for a certain party. You will be a follower of a conspiracy theory that was tailor-made for you: thousands of bots coming up with random ideas, and whatever you upvote, there will be more of it, and whatever you write will be incorporated into the theory, as long as it can be made compatible with the main goal. People online will never feel alone again. At the beginning, this will be an operation targeted at specific people (they may be met by actual humans who introduce them to a web forum where they receive warm approval from bots, and then everything will be handled by the bots), but as it gets cheaper, the operators will try to catch everyone in their own separate "Matrix".
Even the real people you meet will often be working for similar companies -- their role will be to meet you offline sometimes, to provide evidence that they are actual humans, but as soon as you return to the screen, you will only interact with a bot that simulates them. So once a year your private "Matrix" can have an offline meetup, where you meet a dozen people from a specialized agency who will role-play the members of your web forum. (Their job is to go to a different meetup every week, with a different identity.)
Or, to turn it around, people will use AIs to pretend to be them, and outsource tasks such as calling their parents or maintaining contact with former classmates and colleagues. Hey, it costs nothing to let an AI handle a personal relationship you don't really care about, and you never know when you can get some useful service from "an old friend". The future of seduction is to have a bot seduce someone and arrange a date, and you just go there to have sex with them. The future of prostitution is the same, only you also let the bot invent some pretext for why the other person should send you money. The girlfriend you once met at a party -- the one you have called every day since, who talks with you about all kinds of topics and is really smart and nice, but is too busy with her work and lives in another city, and who now has a financial problem, but if you send her some money she will be able to quit her current job and move closer to you... I am sorry, but she is not real. She is just a girl who goes to parties, collects phone numbers from guys, and enters them into a computer; that's all, and everything that happened afterwards was fake.
Based on the vibe of the post, it seems like you're trying to point at the concept of "being able to do many things". I guess generalization isn't 'for' anything; it's a concept. For an agent, generalization is a way of achieving outcomes from limited past experience, without needing to waste resources figuring out strategies from scratch. I can't really tell from what you said what I'm supposed to answer to "What does it offer you?". Like, generalization offers me the ability to recognize bad chess moves in new scenarios that I haven't seen, or it offers me the ability to take over the universe based on limited knowledge of physics. I don't know where you're trying to limit the word.
Acknowledgements
The vision here was midwifed originally in the wild and gentle radiance that is Abram's company (though essentially none of the content is explicitly his).
The PIBBSS-spirit has been infused in this work from before it began (may it infuse us all), as have meetings with the Agent Foundations team at MIRI over the past ~2 years.
More recently, everyone who has been loving the High Actuation project into form (very often spontaneously and without being encumbered by self-consciousness of this fact):[1] individuals include Steve Petersen, Mateusz Baginski, Aditya Prasad, Harmony, TJ, Chris Lakin; the AISC 2024 team, Murray Buchanan, Matt Farr, Arpan Agrawal, Adam, Ryan, Quinn; various people from Topos, ALIFE, MAPLE, MATS, EA Bangalore. Published while at CEEALAR.
Disclaimers
Taking Intelligence Seriously
Sahil:
I gave a talk at an EA event just two days ago, where I made some quick slides (on the day of the talk, so not nearly as tidy as I’d like) and attempted to walk through this so-called “live theory”[2].
Maybe I can give you that talk. I'm not sure how much of what I was saying there will be present now, but I can try. What do you think? I think it'll take about 15 minutes. Yeah?
Steve:
Cool.
Sahil:
Okay, let me give you a version of this talk that's very abbreviated.
So, the title I’m sure already makes sense to you, Steve. I don't know if this is something that you know, but I prefer the word “adaptivity” over “intelligence”. I'm fine with using “intelligence” for this talk, but really, when I'm thinking of AI and LLMs and “live” (as you’ll see later), I'm thinking, in part, of adaptivity. And I think that connotes much more of the relevant phenomena, and much less controversially.
It’s also less distractingly “foundational”, in the sense of endless questions on “what intelligence means”.
Failing to Take Intelligence Seriously
Right. So, I want to say there are two ways to fail to take intelligence, or adaptivity, seriously.
One is, you know, the classic case, of people ignoring existential risk from artificial intelligence. The old “well, it's just a computer, just software. What's the big deal? We can turn it off.” We all know the story there. In many ways, this particular failure-of-imagination is much less pronounced today.
But, I say, a dual failure-of-imagination is alive today even among the “cognoscenti”: we ignore intelligence by ignoring opportunities from moderately capable mindlike entities at scale. I'll go over this sentence more slowly on the next slide.
For now: there are two ways to not meet reality.
On the left of the slide is “nothing will change”. The same “classic” case of “yeah, what's the big deal? It's just software.”
On the right, it's the total singularity, of extreme unknowable super-intelligence. In fact, the phrase “technological singularity”, IIRC, was coined by Vernor Vinge to mark the point that we can't predict beyond. So, it's also a way to be mind-killed. Even with whatever in-the-limit proxies we have for this, we make various simplifications that are not “approximately” useful; they don’t decay gracefully. (Indeed, this is how the “high-actuation spaces” project document starts.)
All of the richness of reality: that’s in the middle.
Steve:
I think that makes sense. I like how both are ways to avoid looking at the kind of intelligence in between, given slow takeoff.
Sahil:
Yeah, cool. In the event of a “slow takeoff”, sure.[3]
Opportunities from Moderate Intelligence at Scale
Okay, so, going over the sentence slowly:
Video calling is, in a way, the same type of technology as the telegraph: the ability to send messages. But with incredibly reduced cost and latency, plus increased adoption, bandwidth, and fidelity. This allowed for remote work and video calls and the whole shebang that’s allowing us to have this conversation right now.
And so the question is: what happens when we have much lower latency and moderately more intelligence, much lower costs, with adaptivity tightly integrated in our infrastructure?[5] Just like we have Wi-Fi in everything now.
And notice that this is not extrapolation that goes only moderately far. Just because I'm talking about “moderate” intelligence does not mean the extrapolation stops short of the really crazy future up ahead. Only, this is AI-infrastructural extrapolation, not AI-singleton extrapolation (or even what’s connoted by “multipolar”, usually, IME). It’s neglected because it is usually harder to think about than a relatively centralized thing we can hold in attention one at a time.
This frame also naturally engages more with the warning carried in “attitudes to tech overestimate in the short run, underestimate in the long run.”
So to repeat, and this is important for sensemaking here: I am doing extrapolation that will venture far. What follows is simultaneously very obvious and very weird.[6] In fact, that combination is what makes it work at all, as you’ll see.
But that’s also a double-edged sword. Instead of it being sensible (obvious) and exciting (weird), the perspective here might seem redundant or boring (too obvious) and irrelevant or scary (too weird).
Hopefully, I will:
a) avoid its obviousness being rounded off to “ah right, live theory is another word for autoformalization” and
b) bring its weirdness closer to digestibility. To quote Bostrom in Deep Utopia, “if it isn’t radical, it isn’t realistic.”
So even though some might classify this series as being about "automating[7] alignment research", it tastes nothing like the unfleshed mundane trendlines[8] and spectacular terror that are mixed together, for example, in Leopold Aschenbrenner’s “AGI from automated research by 2027”.
Again, this isn't to say that there aren't some serious risks, only that they might look very different (a view that will be elaborated in an upcoming post).[9]
Live Interfaces
This slide was ‘Live UI’ (don't bother trying to read the image): what happens to infrastructure and interfaces, generally, when you can “do things that don’t scale”, at scale. People don’t seem to update hard enough, based on their reactions to Sora etc., on what the future will take for granted.
What is possible, when all this is fast, cheap, abundant, reliable, sensitive? Live UI seeks to chart this out. The six pillars, without much explanation for now, are:
Steve:
Am I supposed to be following all of that?
Sahil:
Definitely not, it's just a trailer of sorts.[11] I've included only one relatively accessible example for each (it gets way weirder), but there are volumes to say about the pillars, especially to gesture at how it all works together. (Btw: reach out if you're interested in knowing more or working together!)
A bit more, though, before we move on.
Nearly all of the above is about sensitivity and integration[12] as we gain increasing choice in adaptive constructability at scale.
The above could, instead, already sound nauseating, atomizing, and terrifying. A good time, then, to meet the key concern of the High Actuation agenda: the applied metaphysics of cultivating wise and lasting friendship in a reality full of constructs.
High Actuation is the research context (for both Live UI & Live Theory, among several other tracks), where the physical world becomes more and more mindlike through increased and ubiquitous adaptivity. In the process, it challenges a lot of (dualistic[13]) assumptions about mind and matter, and about how to go about reasoning about them.[14]
But yeah, don't worry about this interface-6-pillars stuff above. I'm going to talk about what I’ll be focusing on building tools (and hiring!) for, in the coming months: intelligent science infrastructure.
Live Theory (preface)
So the boring way to think about intelligent science infrastructure is to say “AI will do the science and math automatically.”
(What does it really mean, to say “automatically”? We’ll get to that.)
First, a succinct unrolling of the whole vision. A series of one-liners follows, with increasing resolution on the part of this dream that matters. The italics signify the emphasis in each stage of clarification.
Live theory is...
(Here "theories" are only one kind of distributable artefact in research to think about.)
But more importantly, the vision is...
(Here "artefacts" includes papers, theories, explainers, code, interfaces, laws[15], norms etc.)
But more importantly, the vision is...
(Here "protocols" don’t need to be a fixed/formal protocol or platform either!)
But really, the vision is...
(Here “lending spirit” is the crucial activity that allows for spacious resolution towards better equilibria.)
IOW: the slower-but-more-significant pace layer of infrastructure that supports the pace layer of commerce.
Navigation
Some navigation before the longer conversation around this: there are four gates that we'll have to pass through, as you see on the slide below, to come to terms with the proposal.
Each of the four gates has been invariably challenging to convey and invite people through (although it is satisfying to witness people's fluency after a few runs through them):
I'm going to walk through these gates, and conclude, and that will be the talk.
This decomposition into gates should hopefully make it easier to critique the whole thing -- e.g. “X aspect is undesirable because…” vs “X aspect is impossible because…”. The two obviously deserve very different kinds of responses.
I’m offering the talk this way for many reasons, but to say a bit more: most of the work is in a) noticing the circumrational water of mathematics as it is today (which can be too obvious for mathematicians and too unbearable for math-groupies, respectively) and b) connecting it to mathematics as it might become in the near future (which can seem too bizarre or undesirable if you don’t notice its importance in mathematics as it is today). When new paradigms start being Steam’d, they often have to pull off a similar straddling of the familiar and the unfamiliar. Not too different from the ordeal of hitting the edges of one’s developmental stage… but at a sociotechnical/civilizational level.
If making it easy for you to respond and critique were the only goal of the gates, they would have been set out in a tidier decomposition. However, in tandem, I'm using the gates-structure to construct a “natural walk”, a narrative arc, through the details of Live Theory. This polytely has some tradeoffs (such as the Bearable and Desirable gates not quite disentangling), but I think it works! Let me know.
The next post will cover the first two gates. A teaser slide for the first one follows.
Live Theory (teaser)
Possibility
And a teaser of two questions that I will start the next post with, but am including now to give you time to actually think:
Challenging and supporting, especially through the frustration of freezing this written artefact before I can avail myself of the spaciousness of the fluidic era.
Alternative terms include “adaptive theory”, “fluid theory”, “flexible theory”, where the theories themselves are imbued with some intelligence.
This will be elaborated in an upcoming post on risks. “Slow takeoff” is an approximation that will have to do for now, but really it's much more like having lazy or mentally ill AI. If you're curious, here's the abstract:
Generally, it is a bit suspect to me for an upcoming AI safety org (in this time of AI safety org explosion) to have, say, a 10-year timeline premise without anticipating and incorporating the possibility of AI transforming its (research) methodology 3-5 years from now. If you expect things to move quickly, why are you ignoring that things will move quickly? If you expect more wish-fulfilling devices to populate the world (in the meanwhile, before catastrophe), why aren't you wishing more, and prudently? An “opportunity model” is as indispensable as a threat model.
(In fact, "research methodological IDA" is not a bad summary of live theory, if you brush under the rug all the ontological shifts involved.)
TJ comments:
An example of weirdness if you're hungry, in raw form. Inspired by a submission at a hackathon on live machinery.
(See more here: The Logistics of Distribution of Meaning)
The word “automation” does not distinguish numb-scripted-dissociated from responsive-intelligent-flowing. It usually brings to mind the former. So I avoid using it. More on this in the next post, but for now, a word might be “competence”. When you’re really in flow while dancing, you're not thinking. But it is the opposite of “automation”. If anything, you're more attuned, more sensitive.
This isn't “high-level” abstraction either. The high-low dichotomy is the good-old-fashioned way of looking at generalization, and does not apply with expanded sensitivity, as we'll see in this sequence.
The relevance of the choice of “live” will hopefully also become clearer as we go. It's meant to connote biological rather than machine metaphors, dialectics rather than deliverables, and the intentional stance, though not necessarily patienthood.
It's not that people predict "no fundamentally new ways of looking at the world will occur" and therefore decide to exclude them in their stories. I think people exclude ontological shifts because it's very hard to anticipate an ontological shift.
(If you disagree, I'd ask the fundamental question of implied invisible work: what have you rejected? A la book recommendations.)
Matt adds:
Indeed. The Necessary gate will cover much relevance to threat models. Apart from that, expect much much more on "infrastructural insensitivity" to articulate further vectors of risk.
Abram helpfully adds:
And so here is my meeting example pasted raw, which might be more palatable or more triggering:
More here: Live Machinery: An Interface Design Philosophy for Wholesome AI Futures
NB. this is not brute, ham-fisted merging, for those worried. See also: Unity and diversity.
also: centralization-heavy, preformationist, foundationalist, control-oriented. See The Logistics of Distribution of Meaning: Against Epistemic Bureaucratization
This is a very expensive delivery mechanism for a koan/pointing-out instruction, to let go of the static-machine mythos, but there you go.
Live Governance post coming up soon.
See also this proposal for the commonalities in the two posts and limitations of existing approaches.