Here are some of my comments on this post:
The second is reliability. Yes, there is the issue that often consumer expectations are for 100% reliability, and sometimes you actually do need 100% reliability (or at least 99.99% or what not). I see no reason you can’t, for well-defined tasks like computers traditionally do, get as many 9s as you want to pay for, as long as you are willing to accept a cost multiplier.
The problem is that AI is intelligence, not a deterministic program, yet we are holding it to deterministic standards. Whereas the other intelligence available, humans, are not reliable at all, outside of at most narrow particular contexts. Your AI personal assistant will soon be at least as reliable as a human assistant would be.
I disagree that humans are unreliable; I'd go the opposite way and say humans are very reliable agents. A lot of why AI hasn't been used as much as the tech world thinks is that reliability matters way more, in many domains, than LWers thought it would.
This is a problem scaling will eventually solve, in the 2030s-2040s at the latest, but if Leopold Aschenbrenner is incorrect about timelines in Situational Awareness, this is likely to be why.
See this post, which, while exaggerating the reliability of humans, is IMO the best post on just how reliable humans are, and whose numbers are IMO quite robust to 1-3 OOM errors.
https://www.lesswrong.com/posts/28zsuPaJpKAGSX4zq/humans-are-very-reliable-agents
And now the Department of Justice is continuing to probe and going after… Nvidia? For anti-trust? Seemingly wiping out a huge amount of market value?
This is likely to be false; see here:
https://x.com/business/status/1831428052615622672
Sam Bowman literally outlines the exact plan Eliezer Yudkowsky constantly warns not to use, and which the Underpants Gnomes know well.
- Preparation (You are Here)
- Making the AI Systems Do Our Homework (?????)
- Life after TAI (Profit)
If I were Sam Bowman and Anthropic, I'd replace the question marks with synthetic data, plus relying on the fact that alignment generalizes further than capabilities rather than the other way around.
The worry is that this is essentially saying ‘we do our jobs, solve alignment, it all works out.’ That doesn’t really tell us how to solve alignment, and has the implicit assumption that this is a ‘do your job’ or ‘row the boat’ (or even ‘play like a champion today’) situation. Whereas I see a very different style of problem. You do still have to execute, or you automatically lose. And if we execute on Bowman’s plan, we will be in a vastly better position than if we do not do that. But there is no script.
Agree that the checklist is unfortunately not a complete plan, but I think it can be turned into a complete, working plan if we add several more details.
That’s the thing. To me this does not make sense. How can you create machines that are smarter than humans, and not be at least ‘somewhat’ concerned that it ‘might’ pose a threat to humanity? What?
To steelman the Chinese view on AI where AI isn't an existential threat, I think it's helpful to notice several things:
Instrumental convergence is not as strong as we feared, and there are real reasons to think that the instrumental convergence we do get out of powerful AIs is highly steerable, thanks to their having very densely defined utility/reward functions. There's a rather good chance that we can just do Retargeting the Search, though we'd need to get better at interpretability to make the most use of it.
Synthetic data lets us control what the AI learns completely, and in particular lets us instill values into AIs very deeply before they have a chance to deceive us. There are also strong reasons to expect synthetic data loops to be the main source of data for AI, because real human data is both fairly low quality and likely not enough for AI by 2028 (for text data).
See here for more:
https://www.beren.io/2024-05-11-Alignment-in-the-Age-of-Synthetic-Data/
Alignment likely generalizes further than capabilities, both because verification is way, way easier than generation, because we can afford to explore less in the space of values, and because in practice reward models for human values are easier to learn than capabilities. All of this strongly points to alignment generalizing further than capabilities.
See here for a bit more on this:
https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes-Further-Than-Capabilities/
I get your frustration with a lot of the discourse, and I agree that a lot of people disagreeing on AI as a threat are really disagreeing about how far AI capabilities can go.
But this is not all of the criticism (my own criticism of AI as a threat, for example, is not of that kind), and it pays to listen to the nuanced arguments against AI doom.
alignment generalizes further than capabilities
But this is untrue in practice (observe that models do not become suddenly useless after they're jailbroken) and unlikely in practice (since capabilities come by default, when you learn to predict reality, but alignment does not; why would predicting reality lead to having preferences that are human-friendly? And the post-training "alignment" that AI labs are performing seems like it'd be quite unfriendly to me, if it did somehow generalize to superhuman capabilities). Also, whether or not it's true, it is not something I've heard almost any employee of one of the large labs claim to believe (minus maybe TurnTrout? not sure if he'd endorse it or not).
both because verification is way, way easier than generation, because we can afford to explore less in the space of values, and because in practice reward models for human values are easier to learn than capabilities. All of this strongly points to alignment generalizing further than capabilities
This is not what "generalizes further" means. "Generalizes further" means "you get more of it for less work".
why would predicting reality lead to having preferences that are human-friendly?
LLMs are not trained to predict reality — they're trained to predict human-generated text, i.e. we're distilling human intelligence into them. This gets you something that uses human ontologies, understands human preferences and values in great detail, acts agentically, and works more sloppily in August.
The problem here for ASI is that while humans understand human values well, not all (perhaps even not many) humans are extremely moral or kindly or wise, or safe to be handed godlike intelligence, enormous power, and the ability to run rings around law enforcement. The same is by default going to be true of an artificial intelligence distilled from humans. As for "having preferences", an LLM doesn't simulate a single human (or their preferences); for each request it simulates a new, randomly selected member of a prompt-dependent distribution of possible humans (and their preferences).
The problem here for ASI is that while humans understand human values well, not all (perhaps even not many) humans are extremely moral or kindly or wise, or safe to be handed godlike intelligence, enormous power, and the ability to run rings around law enforcement.
This is why I think synthetic data, as well as not open-sourcing/open-weighting ASI, is likely to be necessary, at least for a few years: we cannot settle for merely human-level alignment of ASI. The good news is that synthetic data is a very natural path to increasing capabilities for AI in general, not just LLMs, and I'm more hopeful than you that we can get instruction-following AGI/ASI to automate alignment research.
Completely agreed (and indeed currently looking for employment where I could work on just that).
But this is untrue in practice (observe that models do not become suddenly useless after they're jailbroken)
Note that alignment generalizing further than capabilities doesn't mean that someone jailbreaking the model makes the model useless.
I'm just saying it's harder to optimize in the world than to learn human values, not that capability is suddenly lost if you try to jailbreak them.
Also, a lot of the jailbreaking attempts are very clear examples of misuse problems, not misalignment problems, at least in my view. The fact that the humans' goals are often just to produce offensive/sexual content is enough to show that these are very minor misuse problems.
and unlikely in practice (since capabilities come by default, when you learn to predict reality, but alignment does not; why would predicting reality lead to having preferences that are human-friendly? And the post-training "alignment" that AI labs are performing seems like it'd be quite unfriendly to me, if it did somehow generalize to superhuman capabilities).
Basically, because there is a lot of data on human values, and because models are heavily, heavily influenced by their data sources as well as data quantity, learning and having human values comes mostly for free via unsupervised learning.
More generally, my big thesis is if you want to understand how an AI is aligned, or what capabilities an AI has, the data matter a lot, at least compared to the prior.
Re the labs' alignment methods, the good news is that we likely have better methods than post-training RLHF and RLAIF, because synthetic data lets us shape the AI model's values early in training, before it knows how to deceive, and thus can probably shape its incentives away from deception.
More on that here:
https://www.beren.io/2024-05-11-Alignment-in-the-Age-of-Synthetic-Data/
Also, whether or not it's true, it is not something I've heard almost any employee of one of the large labs claim to believe (minus maybe TurnTrout? not sure if he'd endorse it or not).
Yep, this is absolutely a hot take of mine (though it originally comes from Beren Millidge), but one that I do think has a reasonable chance of being correct.
This is not what "generalizes further" means. "Generalizes further" means "you get more of it for less work".
Yeah, this is also what I meant. My point is that you get more alignment with less work both because of the sheer amount of data (and AIs will almost certainly update on and be heavily influenced by their data), and because it's easier to learn and implement values than it is to get the equivalent amount of capability.
I'm just saying it's harder to optimize in the world than to learn human values
Learning what human values are is of course a subset of learning about reality, but it also doesn't really have anything to do with alignment (as describing an agent's tendency to optimize for states of the world that humans would find good).
I think where I disagree is that I do think value learning and learning about human values is quite obviously very important for alignment, both because a lot of alignment approaches depend on value learning working, and because the data heavily influences what the AI tries to optimize.
Another way to say it is that the data strongly influences the optimization target, because a large portion of both capabilities and alignment is strongly downstream of the data it's trained on, so I don't see why learning what human values are is unrelated to this:
(as describing an agent's tendency to optimize for states of the world that humans would find good).
Nvidia’s products are rather obviously superior
CUDA seems to be superior to ROCm... and has a big installed-base and third-party tooling advantage. It's not obvious, to me anyway, that NVidia's actual silicon is better at all.
... but NVidia is all about doing anything it can to avoid CUDA programs running on non-NVidia hardware, even if NVidia's own code isn't used anywhere. Furthermore, if NVidia is like all the tech companies I saw during my 40 year corporate career, it's probably also playing all kinds of subtle, hard-to-prove games to sabotage the wide adoption of any good hardware-agnostic APIs.
It is also not acceptable to say ‘you can completely ignore copyright concerns without compensating owners.’
I still find myself confused by this. I'm allowed to read any accessible content online, learn from it and think about it, and use that knowledge when selling my understanding and analysis to clients. Why, in principle, should this not be true for AI?
However, you are not allowed to just blindly reproduce large chunks of what you read. That would be both plagiarism (morally), and (unless some form of fair use applies) breach of copyright (legally). Many simplistic for-a-lay-person explanations of how AI works imply that this is what they do, and people who put little credence on AI capabilities increasing often assume this is both all they can do and all they will ever be able to do.
Also, on rare occasions, for certain prompts and certain documents in the training set that actually got memorized during training (for example, because the training set contained a great many copies of much the same document), AIs really do do this: reproduce significant chunks of a copyrighted document verbatim or only very slightly paraphrased. And we don't know how to ensure that will never happen (other than building a plagiarism detector to detect that it has happened and then refusing to send the response to the end user).
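To make that "plagiarism detector" workaround concrete, here is a minimal sketch of the kind of n-gram overlap check one could run on a response before sending it to the user; the n-gram length, threshold, and corpus of protected documents are illustrative assumptions, not anyone's actual system.

```python
def ngrams(text: str, n: int = 8) -> set:
    """All word n-grams in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def looks_memorized(response: str, protected_docs: list, n: int = 8,
                    threshold: float = 0.2) -> bool:
    """Flag the response if a large share of its n-grams appear in any protected document."""
    resp_grams = ngrams(response, n)
    if not resp_grams:
        return False
    for doc in protected_docs:
        overlap = len(resp_grams & ngrams(doc, n)) / len(resp_grams)
        if overlap >= threshold:
            return True  # refuse or regenerate instead of sending this response
    return False
```

A real system would need fuzzier matching to catch light paraphrases, but the shape of the check is the same.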
Of course, a typical sentence in a typical AI response is influenced (when researchers did the very computationally expensive analysis to determine this) primarily by hundreds or thousands or more of documents across its training set: it's based on learning and combining patterns, not memorizing specific text passages. However, in a court of law, telling the judge "our device usually doesn't break the law, but sometimes it does, especially if goaded to, and we don't know how to make it stop" isn't exactly a strong position.
Also, the one clear legal precedent we do have in the area of copyright is that anything created by an AI does not have the same status for claiming copyright as something original created by a human. Which makes it not entirely clear whether the argument "that AI behavior would be legal if a human did it" applies here. Anything not forbidden is legal, but is the "being influenced by many sources" aspect of fair use a boundary to the law of copyright, or an exception, and if it's an exception, does it apply to something that isn't human? A question which the legislators, of course, never even considered, forcing judges to make it up as they go along. (Of course, in the US, corporations now have free speech rights — I'm sure an ingenious lawyer could work that into an argument, for a closed-source API-only AI owned by a corporation…)
Yes, you're right on all counts. I'm just wondering if there's anyone who thinks there is actually a coherent underlying justification for this kind of standard, other than "Because people who never actually thought about it said so."
Also:
However, in a court of law, telling the judge "our device usually doesn't break the law, but sometimes it does, especially if goaded to, and we don't know how to make it stop" isn't exactly a strong position.
This is true, and yet it is also the position anyone making any kind of dangerous product is in. Cars and planes and knives and various chemicals can be easily goaded to break the law by the user. No one has yet released a car that only ever follows all applicable laws no matter what the driver does.
As you pointed out, we don't consider AIs to have minds and thoughts and rights under current law, which would seem to make them products under human control for such purposes. The producer is liable for making things work as described. The user is responsible for using them in a way that is legal and doesn't harm others. I don't understand the argument for the producer being on the hook for the user finding a way to use it to duplicate copyrighted material.
As I understand it (#NotALawyer) the law makes a distinction between selling a toolkit, which has many legal uses and can also help you steal cars, and selling a toolkit with advertising about how good it is for stealing cars and helpful instructions on how to use it to do so. Some of the AI image generation models included single joined_by_underscores keywords for the names of artists (who hadn't consented to being included) to reproduce their style, and instructions on how to do that. With the wrong rest of the prompt, that would sometimes even reproduce a near-copy of a single artwork by that artist from the training set. We'll see how that court case goes. (My understanding is that a style is not considered copyrightable but a specific image or a sufficient number of elements from it is.)
Sooner or later, we'll have robots that are physically and mentally capable of stealing a car all by themselves, if that would help them fulfill an otherwise-legal instruction from their owner. The law is going to hold someone responsible for ensuring that the robots don't do that: some combination of the manufacturer and the owner/end-user, according to which seems more reasonable to the judge and jury.
Cars and planes and knives and various chemicals can be easily goaded to break the law by the user. No one has yet released a car that only ever follows all applicable laws no matter what the driver does.
Without taking a position on the copyright problem as a whole, there's an important distinction here around how straightforward the user's control is. A typical knife is operated in a way where deliberate, illegal knife-related actions can reasonably be seen as a direct extension of the user's intent (and accidental ones an extension of the user's negligence). A traditional car is more complex, but cars are also subject to licensing regimes which establish social proof that the user has been trained in how to produce intended results when operating the car, so that illegal car-related actions can be similarly seen as an extension of the user's intent or negligence. Comparing this to the legal wrangling around cars with ‘smarter’ autonomous driving features may be informative, because that's when it gets more ambiguous how much of the result is a direct translation of the user's intent. There does seem to be a lot of legal and social pressure on manufacturers to ensure the safety of autonomous driving by technical means, but I'm not as sure about legality; in particular, I vaguely remember mixed claims around the way self-driving features handle the tension between posted speed limits and commonplace human driving behavior in the US.
In the case of a chatbot, the part where the bot makes use of a vast quantity of information that the user isn't directly aware of as part of forming its responses is necessary for its purpose, so expecting a reasonable user to take responsibility for anticipating and preventing any resulting copyright violations is not practical. Here, comparing chatbot output to that of search engines—a step down in the tool's level of autonomy, rather than a step up as in the previous car comparison—may be informative. The purpose of a search engine similarly relies on the user not being able to directly anticipate the results, but the results can point to material that contains copyright violations or other content that is illegal to distribute. And even though those results are primarily links instead of direct inclusions, there's legal and social pressure on search engines to do filtering and enforce specific visibility takedowns on demand.
So there's clearly some kind of spectrum here between user responsibility and vendor responsibility that depends on how ‘twisty’ the product is to operate.
We caution against purely technical interpretations of privacy such as “the data never leaves the device.” Meredith Whittaker argues that on-device fraud detection normalizes always-on surveillance and that the infrastructure can be repurposed for more oppressive purposes. That said, technical innovations can definitely help.
I really do not know what you are expecting. On-device calculation using existing data and other data you choose to store only, the current template, is more privacy protecting than existing technologies.
She's expecting, or at least asking, that certain things not be done on or off of the device, and that the distinction between on-device and off-device not be made excessively central to that choice.
If an outsider can access your device, they can always use their own AI to analyze the same data.
The experience that's probably framing her thoughts here is Apple's proposal to search through photos on people's phones, and flag "suspicious" ones. The argument was that the photos would never leave your device... but that doesn't really matter, because the results would have. And even if they had not, any photo that generated a false positive would have become basically unusable, with the phone refusing to do anything with it, or maybe even outright deleting it.
Similarly, a system that tries to detect fraud against you can easily be repurposed to detect fraud by you. To act on that detection, it has to report you to somebody or restrict what you can do. On-device processing of whatever kind can still be used against the interests of the owner of the device.
Suppose that there was a debate around the privacy implications of some on-device scanning that actually acted only in the user's interest, but that involved some privacy concerns. Further suppose that the fact that it was on-device was used as an argument that there wasn't a privacy problem. The general zeitgeist might absorb the idea that "on-device" was the same as "privacy-preserving". "On device good, off device bad".
A later transition from "in your interest" to "against your interest" could easily get obscured in any debate, buried under insistence that "It's on-device".
Yes, some people with real influence really, truly are that dumb, even when they're paying close attention. And the broad sweep of opinion tends to come from people who aren't much paying attention to begin with. It happens all the time in complicated policy arguments.
The Ted Chiang piece, on closer reading, seems to be about denying the identity of the AI prompter as an artist rather than speaking to the particular limitations of the tool. For those who did not read, his claim is:
To set himself apart from luddites and the usual naysayers, he uses Adobe Photoshop as an example of a tool where you can be an artist: it is a computer tool; it used to be derided by photographers as not being real art; but now it is accepted and the reason is that people learned to make interesting choices with it.
He appears to go as far as to say two people could generate an identical digital picture, one via photoshop and one via AI, and the former gets to be an artist and the latter does not.
I think this is an actually interesting question, and roughly agree with his frame, but he's just actually wrong on the particulars. AI prompting involves tons of choices (in particular because you're usually creating art for some particular context, and deciding what sort of art to query the AI for is at least one important choice; I also almost always generate at least 10 different images or songs or whatever, shifting my prompt as I go).
I also think he's wrong on the particulars, but I can't quite square it back to his perspective once the particulars are changed.
The bluntest thing that is wrong is that you can specify as precise a choice as you care to in the prompt, and the models usually respond. The only hitch is that you have to know those choices beforehand, whereas it would be reasonable to claim that someone like a photographer is compelled to make choices they did not know about a priori. If that winds up being the important distinction, then it would be more that the artist has to both make and execute their choices, even if those choices are very simple, like picking shading in Photoshop or pushing the camera button.
I could see an alternative framework where even the most sophisticated prompt is more like a customer giving instructions to an artist than an artist using a tool to make art, but that seems to push further in the direction of AI makes art.
Lastly, if we take his claims at face value, someone should write an opinion piece with the claim that AI is in fact rescuing art, because once all the commercial gigs are absorbed by the machine then true artists will be spared the temptation of selling out. I mean I won't write it, but I would chuckle to read it.
An update I wanted to come back to make was "art is a scalar, not a boolean." Art that involves more interesting choices, technique, and deliberate psychological effects on viewers is "more arty." Clicking a filter in photoshop on a photo someone else took is, maybe like, a .5 on a 1-10 scale. I honestly do rank much photography as lower on the "is it art?" scale than equivalent paintings.
A lot of AI art will be "slop" that is very low-but-nonzero on the art scale.
Art is somewhat anti-inductive or "zero sum"[1], where if it turns out that everyone makes identical beautiful things with a click that would previously have required tons of technique and choicefulness to create, that stuff ends up lower on the artiness scale than previously, and the people who are somehow innovating with the new tools count as more arty.
The first person to make the Balenciaga Harry Potter AI clip was making art. Subsequent Balenciaga meme clips are much less arty. I like to think that my WarCraft Balenciaga video was "less arty than the original but moreso than most of the dross."
this is somewhat an abuse of what 'zero sum' means, I think the sum of art can change, but is sort of... resistant to change.
The inclusion of ‘natural disaster’ shows that this simply is not a thing people are thinking about at all.
The Chicxulub and Popigai impactors were both pretty natural. Actually, within the five things listed, "natural disasters" is the only category that has caused actual extinction events in the past. So I'm a bit confused by this comment.
(This was supposed to be on Thursday but I forgot to cross-post)
Will AI ever make art? Fully do your coding? Take all the jobs? Kill all the humans?
Most of the time, the question comes down to a general disagreement about AI capabilities. How high on a ‘technological richter scale’ will AI go? If you feel the AGI and think capabilities will greatly improve, then AI will also be able to do any particular other thing, and arguments that it cannot are almost always extremely poor. However, if frontier AI capabilities level off soon, then it is an open question how far we can get that to go in practice.
A lot of frustration comes from people implicitly making the claim that general AI capabilities will level off soon, usually without noticing they are doing that. At its most extreme, this is treating AI as if it will only ever be able to do exactly the things it can already do. Then, when it can do a new thing, you add exactly that new thing.
Realize this, and a lot of things make a lot more sense, and are a lot less infuriating.
There are also continuous obvious warning signs of what is to come, that everyone keeps ignoring, but I’m used to that. The boat count will increment until morale improves.
The most infuriating thing that is unrelated to that was DOJ going after Nvidia. It sure looked like the accusation was that Nvidia was too good at making GPUs. If you dig into the details, you do see accusations of what would be legitimately illegal anti-competitive behavior, in which case Nvidia should be made to stop doing that. But one cannot shake the feeling that the core accusation is still probably too much winning via making too good a product. The nerve of that Jensen.
Table of Contents
Language Models Offer Mundane Utility
Prompting suggestion reminder, perhaps:
We need a good prompt benchmark. Why are we testing them by hand?
After all, this sounds like a job for an AI.
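A minimal sketch of what that automation might look like; `call_model` and `judge_score` here are hypothetical stand-ins for whatever model API and grading rubric you would actually use.

```python
from statistics import mean


def call_model(system_prompt: str, task: str) -> str:
    """Hypothetical wrapper around your chat API of choice."""
    raise NotImplementedError


def judge_score(task: str, answer: str) -> float:
    """Hypothetical judge call: ask a strong model to grade the answer 0-10 against a rubric."""
    raise NotImplementedError


def benchmark_prompts(prompts: dict, tasks: list) -> dict:
    """Average judge score for each candidate system prompt over a fixed task set."""
    return {
        name: mean(judge_score(task, call_model(prompt, task)) for task in tasks)
        for name, prompt in prompts.items()
    }
```

Crude, but already better than eyeballing outputs by hand.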
Language Models Don’t Offer Mundane Utility
Claim from Andrew Mayne that samples being too small is why AIs currently can’t write novels, with Gwern replying long context windows solved this, and it’s sampling/preference-learning (mode-collapse) and maybe lack of search.
My hunch is that AIs could totally write novels if you used fine-tuning and then designed the right set of prompts and techniques and iterative loops for writing novels. We don’t do that right now, because waiting for smarter better models is easier and there is no particular demand for AI-written novels.
Use AI to generate infinite state-level bills regulating AI? I don’t think they’re smart enough to know how to do this yet, but hilarious (also tragic) if true.
Fun with Image Generation
Javi Lopez says ‘this [11 minute video] is the BEST thing I have ever seen made with AI’ and I tried to watch it and it’s painfully stupid, and continues to illustrate that video generation by AI is still ‘a few seconds of a continuous shot.’ Don’t get me wrong, it will get there eventually, but it’s not there yet. Many commenters still liked this smorgasbord, so shrug I guess.
Chinese company releases an incrementally better text-to-video generator for a few seconds of smooth video footage, now with girls kissing. This seems to be an area where China is reliably doing well. I continue to be a mundane utility skeptic for AI video in the near term.
There was a New Yorker piece by Ted Chiang about how ‘AI will never make art.’ This style of claim will never not be absurd wishcasting, if only because a sufficiently advanced AI can do anything at all, which includes make art. You could claim ‘current image models cannot make “real art”’ if you want to, and that depends on your perspective, but it is a distinct question. As always, there are lots of arguments from authority (as an artist) not otherwise backed up, often about topics where the author knows little.
Seb Krier points out the long history of people saying ‘X cannot make real art’ or cannot make real music, or they are impure. Yes, right now almost all AI art is ‘bad’ but that’s early days plus skill issue.
Robin was asking how we know ChatGPT doesn’t feel or desire, but there’s also the ‘so what if it doesn’t feel or desire?’ question. Obviously ChatGPT ‘uses language.’
Or at least, in the way I use this particular language, that seems obvious?
The obvious philosophical point is, suppose you meet the Buddha on the road. The Buddha says they feel nothing and desire nothing. Did the Buddha use language?
Copyright Confrontation
OpenAI says it is impossible to train LLMs without using copyrighted content, and points out the common understanding is that what they are doing is not illegal. The framing here from The Byte and the natural ways of looking at this are rather unkind to this position, but Hear Them Out.
As OpenAI says, the problem is that copyrighted material is ubiquitous throughout the internet. Copyright is everywhere. If you are forced to only use data that you have fully verified is copyright-free and is fully and provably in the public domain, that does not leave all that much on which to train.
There will need to be a middle ground found. It is not reasonable to say ‘your training set must be fully and provably owned by you.’ Some amount of fair use must apply. It is also not acceptable to say ‘you can completely ignore copyright concerns without compensating owners.’
Deepfaketown and Botpocalypse Soon
How to fool the humans.
Here’s the abstract from the paper:
And here’s the full prompt:
I am going to allow it, on both sides? If a human notices a pattern and applies Bayesian evidence, and doesn’t suspect the test would do this on purpose, then there’s no reason they shouldn’t get fooled here. So this is plausibly an overperformance.
Turns out that you could get into airline cockpits for a long time via a 2005-era SQL injection. An illustration of how much of our security has always been through obscurity, and people not trying obvious things. Soon, thanks to AI, all the obvious things will automatically get tried.
Voice actors sue Eleven Labs, accusing them of training on audiobook recordings and cloning their voices. Here is the full complaint. Claude thinks they have a strong circumstantial case, but it could go either way, and the DMCA claims will be tough without more direct evidence.
Patrick McKenzie explains that Schwab’s ‘My Voice is My Password’ strategy, while obviously a horrible thing no sane person should ever use going forward given AI, is not such a big security liability in practice. Yes, someone could get into your account, but then there are other layers of security in various places to stop someone trying to extract money from the account. Almost all ways to profit will look very obvious. So Schwab chooses to leave the feature in place, and for now gets away with it.
Maybe. They could still quite obviously, at minimum, do a lot of damage to your account, and blow it up, even if they couldn’t collect much profit from it. But I suppose there isn’t much motivation to do that.
They Took Our Jobs
Call centers in the Philippines grapple with AI. Overall workloads are still going up for now. Some centers are embracing AI and getting a lot more efficient, others are in denial and will be out of business if they don’t snap out of it soon. The number of jobs will clearly plummet, even if frontier AI does not much improve from here – and as usual, no one involved seems to be thinking much about that inevitability. One note is that AI has cut new hire training from 90 days to 30.
In a post mostly about the benefits of free trade, Tyler Cowen says that if AI replaces some people’s jobs, it will replace those who are less productive, rather than others who are vastly more productive. And that it will similarly drive the firms that do not adopt AI out of business, replaced by those who have adopted it, which is the core mechanistic reason trade and competition are good. You get rid of the inefficient.
For any given industry, there will be a period where the AI does this, the same as any other disruptive technology. They come for the least efficient competitors first. If Tyler is ten times (or a hundred times!) as productive as his competition, that keeps him working longer. But that longer could be remarkably quick, similar to the hybrid Chess period, before the AI does fine on its own.
Also you can imagine why, as per the title, JD Vance and other politicians ‘do not get’ this, if the pitch is ‘you want to put firms out of business.’ Tyler is of course correct that doing this is good, but I doubt voters see it that way.
Time of the Season
They said the AIs will never take vacations. Perhaps they were wrong?
The explanation:
We are very clearly not doing enough A/B testing on how to evoke the correct vibes.
That includes in fine tuning, and in alignment work. If correlations and vibes are this deeply rooted into how LLMs work, you either have to work with them, or get worked over by them.
It also includes evoking the right associations and creating the ultimate Goodhart’s Law anti-inductive nightmares. What happens when people start choosing every word associated with them in order to shape how AIs will interpret it, locally and in terms of global reputation, as crafted by other AIs, far more intentionally? Oh no.
It bodes quite badly for what will happen out of distribution, with the AI ‘seeing ghosts’ all over the place in hard to anticipate ways.
Get Involved
Dwarkesh Patel is running a thumbnail competition, $2,000 prize.
Google DeepMind hiring for frontier model safety. Deadline of September 17. As always, use your own judgment on whether this is helpful or ethical for you to do. Based in London.
Introducing
Beijing Institute of AI Safety and Governance, woo-hoo!
Not AI yet, that feature is coming soon, but in seemingly pro-human tech news, we have The Daylight Computer. It seems to be an iPad or Kindle, designed for reading, with improvements. Tyler Cowen offers a strong endorsement, praising the controls, the feeling of reading on it and how it handles sunlight and glare, the wi-fi interface, and generally saying it is well thought out. Dwarkesh Patel also offers high praise, saying it works great for reading and that all you can do (or at least all he is tempted to do) are read and write.
On the downside it costs $729 and is sold out until Q1 2025, so all you can do is put down a deposit? If it had been available now I would probably have bought one on those recommendations, but I am loath to put down deposits that far in advance.
Claude for Enterprise, with a 500k context window, native GitHub integration and enterprise-grade security, features coming to others later this year.
Honeycomb, a new YC company by two 19-year-old MIT dropouts, jumps SoTA on SWE-Agent from 19.75% to 22.06% (Devin is 13.86%). It is available here, and here is their technical report. Integrates GitHub, Slack, Jira, Linear and so on. Techniques include often using millions of tokens and grinding for over an hour on a given patch rather than giving up, and having an entire model only to handle indentations.
So yes (it’s happening), the agents and automation are coming, and steadily improving. Advances like this keep tempting me to write code and build things. If only I had the spare cycles.
In Other AI News
F*** everything, we’re doing 100 million token context windows.
Apple and Nvidia are in talks to join the OpenAI $100b+ valuation funding round.
SSI (Ilya Sutskever’s Safe Superintelligence) has raised $1 billion. They raised in large part from a16z and Sequoia, so this seems likely to be some use of the word ‘safe’ that I wasn’t previously aware of.
Nabeel Qureshi at Mercatus offers Compounding Intelligence: Adapting to the AI Revolution. Claude was unable to locate anything readers here would find to be new.
Quiet Speculations
Tyler Cowen speculates on two potential AI worlds, the World With Slack and the World Without Slack. If using AIs is cheap, we can keep messing around with them, be creative, f*** around and find out. If using AIs is expensive, because they use massive amounts of energy and energy is in short supply, then that is very different. And similarly, AIs will get to be creative and make things like art to the extent their inference costs are cheap.
On its own terms, the obvious response is that Tyler’s current tinkering, and the AIs that enable it, will only get better and cheaper over time. Yes, energy prices might go up, but not as fast as the cost of a 4-level model activation (or a 5-level model activation) will go down. If you want to have a conversation with your AI, or have it create art the way we currently direct art, or anything like that, then that will be essentially free.
Whereas, yes, if the plan is ‘turn the future more advanced AI on and let it create tons of stuff and then iterate and use selection and run all sorts of gigantic loops in the hopes of creating Real Art’ then cost is going to potentially be a factor in AI cultural production.
What is confusing about this is that it divides on energy costs, but not on AI capabilities to create the art at all. Who says that AI will be sufficiently capable that it can, however looped around and set off to experiment, create worthwhile cultural artifacts? That might be true, but it seems far from obvious. And in the worlds where it does happen, why are we assuming the world otherwise is ‘economic normal’ and not transformed in far more important ways beyond recognition, that the humans are running around doing what humans normally do and so on? The AI capabilities level that is capable of autonomous creation of new worthwhile cultural artifacts seems likely to also be capable of other things, like automated AI R&D, and who is to say things stop there.
This goes back to Tyler’s view of AI and of intelligence, of the idea that being smarter does not actually accomplish much of anything in general, or something? It’s hard to characterize or steelman (for me, at least) because it doesn’t seem entirely consistent, or consistent with how I view the world – I can imagine an ‘AI fizzle’ world where 5-level models are all we get, but I don’t think that’s what he is thinking. So he then can ask about specific questions like creating art, while holding static the bigger picture, in ways that don’t make sense to me on reflection, and are similar to how AI plays out in a lot of science fiction where the answer to ‘why does the AI not rapidly get smarter or take over’ is some form of ‘because that would ruin the ability to tell the interesting stories.’
Here’s Ajeya Cotra trying to make sense of Timothy Lee’s claims about the implausibility of ‘AI CEOs’ or ‘AI scientists’ or AIs not being in the loop, that we wouldn’t give them the authority. Ajeya notices correctly that this is mostly a dispute over capabilities, not how humans will react to those capabilities. If you believed what Ajeya or I believe about future AI capabilities, you wouldn’t have Timothy’s skepticism; those that leave humans meaningfully in charge will get swept aside. He thinks this is not the central disagreement, but I am confident he is wrong about that.
Also she has this poll.
Votes are split pretty evenly.
On reflection I voted way too quickly (that’s Twitter polls for you), and I don’t expect the number to be anywhere near that high. The future will be far less evenly distributed, so I think ‘high single digits’ makes sense. I think AIs doing things like buying groceries will happen a lot, but is that an hour task? Instacart only takes a few minutes for you, and less than an hour for the shopper most of the time.
At AI Snake Oil, they claim AI companies have ‘realized their mistakes’ and are ‘pivoting from creating Gods to building products.’ Nice as that sounds, it’s not true. OpenAI and Anthropic are absolutely still focused on creating Gods, whether or not you believe they can pull that off. Yes, they are now using their early stage proto-Gods to also build products, now that the tech allows it, in addition to trying to create the Gods themselves. If you want to call that a ‘pivot’ you can, but from what I see the only ‘pivot’ is being increasingly careless about safety along the way.
They list five ‘challenges for consumer AI.’
The first is cost. You have to laugh, the same way you laugh when people like Andrew Ng or Yann LeCun warn about potential AI ‘price gouging.’ The price has gone down by a factor of 100 in the last 18 months and you worry about price gouging? Even Kamala Harris is impressed by your creativity.
And yes, I suppose if your plan was ‘feed a consumer’s entire history into your application for every interaction’ this can still potentially add up. For now. Give it another 18 months, and it won’t, especially if you mostly use the future distilled models. Saying, as they say here, “Well, we’ll believe it when they make the API free” is rather silly, but also they already discounted the API 99% and your on-device Pixel assistant and Apple Intelligence are going to be free.
The second is reliability. Yes, there is the issue that often consumer expectations are for 100% reliability, and sometimes you actually do need 100% reliability (or at least 99.99% or what not). I see no reason you can’t, for well-defined tasks like computers traditionally do, get as many 9s as you want to pay for, as long as you are willing to accept a cost multiplier.
The problem is that AI is intelligence, not a deterministic program, yet we are holding it to deterministic standards. Whereas the other intelligence available, humans, are not reliable at all, outside of at most narrow particular contexts. Your AI personal assistant will soon be at least as reliable as a human assistant would be.
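As a sketch of what ‘as many 9s as you want to pay for’ could look like, assuming independent attempts and a verifier you trust (`attempt_task` and `verify` are hypothetical stand-ins):

```python
import math


def attempts_needed(p_single: float, target_nines: int) -> int:
    """Independent attempts needed to push the failure rate below 10**-target_nines."""
    return math.ceil(-target_nines / math.log10(1 - p_single))


def run_reliably(attempt_task, verify, max_tries: int):
    """Retry until an attempt passes verification; cost scales with the number of tries."""
    for _ in range(max_tries):
        result = attempt_task()
        if verify(result):
            return result
    raise RuntimeError("all attempts failed verification")
```

With a 90% reliable single attempt, attempts_needed(0.9, 4) is 4, so four nines costs roughly a 4x worst-case multiplier on a well-defined, checkable task.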
The third problem they list is privacy, I write as I store my drafts with Substack and essentially all of my data with Google, and even the most privacy conscious have iCloud backups.
I really do not know what you are expecting. On-device calculation using existing data and other data you choose to store only, the current template, is more privacy protecting than existing technologies. If an outsider can access your device, they can always use their own AI to analyze the same data. If you wanted a human to do the task, they would need the same info, and the human could then get ‘hacked’ by outside forces, including via wrench attacks and legal threats.
Fourth we have safety and security. This category seems confused here. They hint at actual safety issues like AI worms that create copies of themselves, but (based on their other writings) can’t or won’t admit that meaningful catastrophic risks exist, so they conflate this with things like bias in image generation. I agree that security is an issue even in the short term, especially prompt injections and jailbreaks. To me that’s the main hard thing we have to solve for many use cases.
Finally there’s the user interface. In many ways, intuitive voice talk in English is the best possible user interface. In others it is terrible. When you try to use an Alexa or Siri, if you are wise, you end up treating it like a normal set of fixed menu options – a few commands that actually work, and give up on everything else. That’s the default failure (or fallback) mode for AI applications and agents, hopefully with an expanding set of options known to work, until it gets a lot smarter.
Are we getting a lot of that in a few weeks with the Pixel 9, and then Apple Intelligence in October? Not the glasses, so you won’t have always-on video – yet – but you can talk to your Pixel Buds or Air Pods. But also Google has already demoed the glasses, back at I/O day, and Manifold gave ~30% that’s available next year. It’s happening.
All of that requires no advancement in core AI capabilities. Once we all have a look at GPT-5 or another 5-level model, a lot of this will change.
A bold claim.
You of course have to condition on there being people around to talk about it. If you do that, then 70% seems high, and perhaps as some point out Bell Labs is the wrong parallel, but I do think it is an extraordinary place that is doing great work.
A Matter of Antitrust
One of the biggest quiet ways to doom the future is to enforce ‘antitrust’ legislation. We continue to have to worry that if major labs cooperated to ensure AI was only deployed safely and responsibly, that rather than cheer this on the government might step in and call that collusion, and force the companies to race or to be irresponsible. Or that the government could treat ‘there are sufficiently few companies that they could reach such an agreement’ as itself illegal, and actively try to break up those companies.
This would also be a great way to cripple America’s economic competitiveness and ability to maintain its dominant position, a supposedly top priority in Washington.
I kept presuming we probably would not be this stupid, but rhetorically it still comes up every so often, and one can never be sure, especially when JD Vance despises ‘big tech’ with such a passion and both sides propose insanely stupid economic policy after insanely stupid economic policy. (I have been warned I strawman too much, but I am pretty confident this is not me doing that, it’s all really deeply stupid.)
And now the Department of Justice is continuing to probe and going after… Nvidia? For anti-trust? Seemingly wiping out a huge amount of market value?
Someone call Nancy Pelosi so she can put a stop to this. Insider trading and conflicts of interest have to have their advantages.
Worrying about RunAI is an obvious sideshow. I don’t see how that would be an issue, but even if it was, okay fine, stop the purchase, it’s fine.
In terms of Nvidia giving preferential treatment, well, I do get frustrated that Nvidia refuses to charge market clearing prices and take proper advantage of its position where demand exceeds supply.
Also it does seem like they’re rather obviously playing favorites, the question is how.
So what is the actual concrete accusation here? There actually is one:
What other suppliers? AMD? If there were other suppliers we wouldn’t have an issue. But yes, I can see how Nvidia could be using this to try and leverage its position.
I’m more concerned and interested in Nvidia’s other preferences. If they don’t want anyone stockpiling, why did they sell massive amounts to Musk in some mix of xAI and Tesla, while denying similar purchases to OpenAI? It is not as if OpenAI would not have put the chips to work, or failed to advance AI adaptation.
The whole thing sounds absurd to the Tech Mind because Nvidia’s products are rather obviously superior and rather obviously there is tons of demand at current prices. They are winning by offering a superior product.
But is it possible that they are also trying to leverage that superior product to keep competitors down in illegal ways? It’s definitely possible.
If Nvidia is indeed saying to customers ‘if you ever buy any AMD chips we will not give you an allocation of any Nvidia chips in short supply’ then that is textbook illegal.
There is also the other thing Nvidia does, which I assume is actually fine? Good, even?
In general the tech response is exactly this:
Well, actually, if Nvidia is actively trying to prevent buying AMD chips that’s illegal. And I actually think that is a reasonable thing to not permit companies to do.
It could of course still be politically motivated, including by the desire to go after Nvidia for being successful. That seems reasonably likely. And it would indeed be really, really bad, even if Nvidia turns out to have done this particular illegal thing.
I also have no idea if Nvidia actually does that illegal thing. This could all be a full witch hunt fabrication. But if they are doing it as described, then there is a valid basis for the investigation. Contrary to Eigenrobot here, and many others, yes there is at least some legit wrongdoing alleged, at least in the subsequent Bloomberg post about the Nvidia investigation.
You see a common pattern here. A tech company (Nvidia, or Google/Amazon/Apple, or Telegram, etc) is providing a quality product that people use because it is good. That company is then accused of breaking the law and everyone in tech says the investigators are horrible and out to get the companies involved and sabotaging the economy and tech progress and taking away our freedoms and so on.
In most cases, I strongly agree, and think the complaints are pretty crazy and stupid. I would absolutely not put it past those involved to be looking at Nvidia for exactly the reasons Dean Ball describes. There is a lot of unjustified hate for exactly the most welfare-increasing companies in history.
But also, consider that tech companies might sometimes break the law. And that companies with legitimately superior products will sometimes also break the law, including violating antitrust rules.
I would bet on this Nvidia investigation being unjustified, or at least having deeply awful motivations that caused a fishing expedition, but at this point there is at least some concrete claimed basis for at least one aspect of it. If they tell Nvidia it has to start selling its chips at market price, I could probably live with that intervention.
If they do something beyond that, of course, that would probably be awful. Actually trying to break up Nvidia would be outright insane.
The Quest for Sane Regulations
METR offers analysis on common elements of frontier AI safety policies, what SB 1047 calls SSPs (safety and security policies), of Anthropic, OpenAI and DeepMind. I’ve read all three carefully and didn’t need it, for others this seems useful.
Notable enough to still cover because of the author, to show he is not messing around: Lawrence Lessig, cofounder of Creative Commons, says Big Tech is Very Afraid of a Very Modest AI Safety Bill, and points out how awful and disingenuous the arguments against the bill have been. His points seem accurate, and I very much appreciate the directness.
Letter from various academics, headlined by the usual suspects, supporting SB 1047.
Flo Crivello, founder of Lindy who says they lean libertarian and moved countries on that basis, says ‘obviously I’m in full support of SB 1047’ and implores concerned people to actually read the bill. Comments in response are… what you would expect.
Scott Aaronson in strong support of SB 1047. Good arguments.
Jenny Kaufmann in support of SB 1047.
Sigal Samuel, Kelsey Piper, and Dylan Matthews at Vox cover Newsom’s dilemma on whether to cave to deeply dishonest industry pressure on SB 1047 based mostly on entirely false arguments. They point out the bill is popular, and that according to AIPI’s polling a veto could hurt Newsom politically, especially if a catastrophic event or other bad AI thing happens, although I always wonder about how much to take away from polls on low salience questions (as opposed to the Chamber of Commerce’s absurd beyond-push polling that straight up lies and gives cons without pros).
A case by Zach Arnold for what the most common sense universal building blocks would be for AI regulation, potentially uniting the existential risk faction with the ‘ethics’ faction.
This is remarkably close to SB 1047, or would have been if the anti-SB 1047 campaign hadn’t forced it to cut the government expertise and capacity building.
The Week in Audio
Nathan Calvin on 80,000 hours explains SB 1047.
Dario Amodei talks to Erik Torenberg and Noah Smith. He says Leopold’s model of nationalization goes a little farther than his own, but not terribly far, although the time is not here yet. Noah Smith continues (~20:00) to be in denial about generative AI but understands the very important idea that you ask what the AI can do well, not whether it can replace a particular human. Dario answers exactly correctly, that Noah’s model is assuming the frontier models never improve, in which case it is a great model. But that’s a hell of an assumption.
Then later (~40:00) Noah tries the whole ‘compute limits imply comparative advantage enables humans to be fine’ and Dario humors him strangely a lot on that, although he gently points out that under transformational AI or fungible resource requirements this breaks down. To give Noah his due, if humans are using distinct factors of production from compute (e.g. you don’t get less compute in aggregate when you produce more food), and compute remains importantly limited, then it is plausible that humans could remain economical during that period.
Noah then asks about whether we should worry about humans being ‘utterly impoverished’ despite abundance, because he does not want to use the correct word here which is ‘dead.’ Which happens in worlds where humans are not competitive or profitable, and (therefore inevitably under competition) lose control. Dario responds by first discussing AI benefits that help with abundance without being transformational, and says ‘that’s the upside.’
Then Dario says perhaps the returns might go to ‘complementary assets’ and ‘the owners of the AI companies’ and the developing world might get left out of it. Rather than the benefits going to… the AIs, of course, which get more and more economic independence and control because those who don’t hand that over aren’t competitive. Dario completely ignores the baseline scenario and its core problem. What the hell?
This is actually rather worrying. Either Dario actually doesn’t understand the problem, or Dario is choosing to censor mention of the problem even when given a highly favorable space to discuss it. Oh no.
At the end they discuss SB 1047. Dario says the bill incorporated about 60% of the changes Anthropic proposed (I think that’s low), that the bill became more positive, and emphasizes that their role was providing information, not to play the politics game. Their concern was always pre-harm enforcement. Dario doesn’t address the obvious reasons you would need to do pre-harm enforcement when the harm is catastrophic or worse.
The discussion of SB 1047 at the end includes this clip, which kept it 100:
Dwarkesh Patel talks to geneticist of ancient DNA David Reich.
The episode starts out with a clip saying ‘there’s just extinction after extinction after extinction.’
What is this doing in an AI post? Oh, nothing.
Marques Brownlee review of the Pixel 9, he’s high on it. I have a fold on the way.
Andrew Ng confirms that his disagreements are still primarily capability disagreements, saying AGI is still ‘many decades away, maybe even longer.’ Which is admittedly an update from talk of overpopulation on Mars. Yes, if you believe that anything approaching AGI is definitely decades away you should be completely unworried about AI existential risk until then and want AI to be minimally regulated. Explain your position directly, as he does here, rather than making things up.
Google DeepMind gives us an internal interview with head of safety Anca Dragan, which they themselves give the title OK Doomer. Anca is refreshingly direct and grounded in explaining the case for existential risk concerns, and integrating them with other concerns, and why you need to worry about safety in advance in the spec, even for ordinary things like bridges. She is clearly not part of the traditional existential risk crowds and doesn’t use their language or logic. I see a lot of signs she is thinking well, yet I am worried she does not understand many aspects of the technical problems in front of us and is overly distracted by the wrong questions.
She talks at one point about recommendation engines and affective polarization. Recommendation engines continue to seem like a great problem to face, because they embody so many of the issues we will face later – competitive dynamics, proxy metrics, people endorsing things in practice they don’t like on reflection, people’s minds being changed (‘hacked’?!) over time to change the evaluation function, ‘alignment to who and what’ and so on. And I continue to think there is a ton of value in having recommendation engines that are divorced from the platforms themselves.
She talks about a goal of ‘deliberative alignment,’ where decisions are the result of combining different viewpoints and perspectives, perhaps via AI emulation, to find a solution agreeable to all. She makes clear this is ‘a bit of a crazy idea.’ I’m all for exploring such ideas, but this is exactly the sort of thing where the pitfalls down the line seem fatal and are very easy not to notice, or not to notice how difficult, fatal or fundamental they will be until they arrive. The plan would be to use this for scalable oversight, which compounds many of those problems. I also strongly suspect that even under normal conditions, even if the whole system fully ‘works as designed’ and doesn’t do something perverse, we wouldn’t like the results of the output.
She also mentions debate, with a human judge, as another strategy, on the theory that debate is an asymmetric weapon and the truth wins out. To some extent that is true, but there are systematic ways it is not, and I expect those to get vastly worse once the judge (the human) is much less smart than the debaters and the questions get more complex, difficult and outside normal experience. In my experience among humans, a sufficiently smart and knowledgeable judge is required for a debate to favor truth. Otherwise you get, essentially, presidential debates, and whoops.
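For concreteness, here is a minimal sketch of the debate protocol as I understand it, with hypothetical callables standing in for actual model calls; it is not any lab’s implementation, and the point is how much weight the judge has to carry.

```python
# Minimal sketch of a two-debater, one-judge protocol. The debater and judge
# arguments are hypothetical callables, not real API bindings.

def run_debate(question: str, debater_a, debater_b, judge, rounds: int = 3) -> str:
    """Each debater argues for its assigned answer; the judge picks a winner.

    The protocol only favors truth to the extent the judge can actually
    follow the arguments -- which is exactly the step worried about above.
    """
    transcript = [f"Question: {question}"]
    for r in range(rounds):
        transcript.append(f"A (round {r + 1}): " + debater_a(question, transcript))
        transcript.append(f"B (round {r + 1}): " + debater_b(question, transcript))
    # The judge only ever sees the transcript, never the ground truth.
    return judge(question, transcript)  # returns "A" or "B"
```

Everything interesting happens inside `judge`: if the human judge cannot keep up with the debaters, the asymmetry in favor of truth quietly disappears.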
She says ‘we don’t want to be paternalistic’ and you can guess what the next word is.
(Despite being a next word predictor, Gemini got this one wrong. Claude and ChatGPT got it.)
Rhetorical Innovation
Last week’s best news was that OpenAI and Anthropic are going to allow the US AISI to review their major new models before release. I’ve put up Manifold markets on whether Google, Meta and xAI follow suit.
This should have been purely one of those everyone-wins feel-good moments. OpenAI and Anthropic voluntarily set a good example. We get better visibility. Everyone gets alerted if something worrisome is discovered. No regulations or restrictions are imposed on anyone who did not sign up for it.
Yes, we all noticed the subtweet regarding SB 1047 (response: this is indeed great but (1) you supported AB 3211 and (2) call me back when this is codified or all major players are in for it and it has teeth). I’ll allow it. If that was extra motivation to get this done quickly, then that is already a clear win for SB 1047.
Especially if you oppose all regulations and restrictions, you should be happy to see such voluntary commitments. The major players voluntarily acting responsibly is the best argument for us not needing regulations, and presumably no one actually wants AI to be unsafe, so I’m sure everyone is happy about… oh no.
Here are the top replies to Sam Altman’s tweet. In order, completely unfiltered.
It continues from there. A handful of positive responses, the occasional good question (indeed, what would they do if the government asked them not to release?) and mostly a bunch of paranoia, hatred and despair at the very idea that the government might want to know what is up with a company attempting to build machines smarter than humans.
There is the faction that assumes this means OpenAI is slowed down and cooked and hopeless, or has been controlled, because it lets AISI do additional final testing. Then there is the faction that assumes OpenAI is engaging in regulatory capture and now has a monopoly, because they agreed to a voluntary commitment.
Always fun to see both of those equal and opposite mechanisms at once, in maximalist form, on even the tiniest actions. Notice (and keep scrolling down the list for more) how many of the responses are not only vile, and contradict each other, but make absolutely no sense.
If this does not show you, very clearly, that the Reply Guy crowd on Twitter, the Vibe Police of Greater Silicon Valley, will respond the same exact way to anything and everything the government does to try to help AI be safer in any sense, no matter what? If you do not realize by now that zero actions could possibly satisfy them, other than taking literally zero actions or actively working to help the Vibe Police with their own regulatory capture operations?
Then I suppose nothing will. So far, a16z has had the good sense not to join them on this particular adventure, so I suppose Even Evil Has Standards.
The good news? Twitter is not real life.
In real life the opposite is true. People are supportive of regulations by default, both (alas) in general and also in AI in particular.
The Cosmos Institute
Introducing the Cosmos Institute, a new ‘Human-Centered AI Lab’ at Oxford, seeking to deploy philosophy to the problems of AI, and offering fellowships and Cosmos Ventures (inspired by Emergent Ventures). Brendan McCord is chair; Tyler Cowen, Jason Crawford and Jack Clark are among the founding fellows; Tyler is also on the board. Their research vision is here.
Their vision essentially says that reason, decentralization and autonomy, their three pillars, are good for humans.
I mean, yeah, sure, those are historically good, and good things to aspire to, but there is an obvious problem with that approach. Highly capable AI would by default in such scenarios lead to human extinction even if things mostly ‘went right’ on a technical level, and there are also lots of ways for it to not mostly ‘go right.’
Their response seems to be to dismiss that issue because solving it is unworkable, so instead hope it all works out somehow? They say ‘hitting the pause button is impossible and unwise.’ So while they ‘understand the appeal of saving humanity from extinction or building God’ they say we need a ‘new approach’ instead.
So one that… doesn’t save humanity from extinction? And how are we to avoid building God in this scenario?
I see no plan here for why this third approach would not indeed lead directly to human extinction, and also to (if the laws of physics make it viable) building God.
Unless, of course, there is an implicit disbelief in AGI, and the plan is ‘AI by default never gets sufficiently capable to be an existential threat.’ In that case, yes, that is a response, but: You need to state that assumption explicitly.
Similarly, I don’t understand how this solves for the equilibrium, even under favorable assumptions.
If you give people reason, decentralization and autonomy, and highly capable AI (even if it doesn’t get so capable that we fully lose control), and ‘the internal freedom to develop and exercise our capacities fully’ then what do you think they will do with it? Spend their days pursuing the examined life? Form genuine human connections without ‘taking the easy way out’? Insist on doing all the hard thinking and deciding and work for ourselves, as Aristotle would have us do? Even though that is not ‘what wins’ in the marketplace?
So what the hell is the actual plan? How are we going to fully give everyone choice on how to live their lives, and also have them all choose the way of life we want them to? A classic problem. You study war so your children can study philosophy, you succeed, and then your children mostly want to party. Most people have never been all that interested in Hard Work and Doing Philosophy if there are viable alternatives.
I do wish them well, so long as they focus on building their positive vision. It would be good for someone to figure out what that plan would be, in case we find ourselves in the worlds where we had an opportunity to execute such a plan on its own terms – so long as we don’t bury our heads in the sand about all the reasons we probably do not live in such a world, and especially not actively argue that others should do likewise.
The Alignment Checklist
Sam Bowman of Anthropic asks what is on The Checklist we would need to do to succeed at AI safety if we can create transformative AI (TAI).
Sam Bowman literally outlines the exact plan Eliezer Yudkowsky constantly warns not to use, and which the Underpants Gnomes know well.
His tasks for chapter 1 start off with ‘not missing the boat on capabilities.’ Then, he says, we must solve near-term alignment of early TAI, render it ‘reliably harmless,’ so we can use it. I am not even convinced that ‘harmless’ intelligence is a thing if you want to be able to use it for anything that requires the intelligence, but here he says the plan is safeguards that would work even if the AIs tried to cause harm. Ok, sure, but obviously that won’t work if they are sufficiently capable and you want to actually use them properly.
I do love what he calls ‘the LeCun test,’ which is to design sufficiently robust safety policies (a Safety and Security Protocol, what Anthropic calls an RSP) that if someone who thinks AGI safety concerns are bullshit is put in charge of that policy at another lab, that would still protect us, at minimum by failing in a highly visible way before it doomed us.
The plan then involves solving interpretability and implementing sufficient cybersecurity, and proper legible evaluations for higher capability levels (what they call ASL-4 and ASL-5), that can also be used by third parties. And doing general good things like improving societal resilience and building adaptive infrastructure and creating well-calibrated forecasts and smoking gun demos of emerging risks. All that certainly helps, I’m not sure it counts as a ‘checklist’ per se. Importantly, the list includes ‘preparing to pause or de-deploy.’
He opens part 2 of the plan (‘chapter 2’) by saying lots of the things in part 1 will still not be complete. Okie dokie. There is more talk of concern about AI welfare, which I continue to be confused about, and a welcome emphasis on true cybersecurity, but beyond that this is simply more ways to say ‘properly and carefully do the safety work.’ What I do not see here is an actual plan for how to do that, or why this checklist would be sufficient?
Then part 3 is basically ‘profit,’ and boils down to making good decisions to the extent the government or AIs are not dictating your decisions. He notes that the most important decisions are likely already made once TAI arrives – if you are still in any position to steer outcomes, that is a sign you did a great job earlier. Or perhaps you did such a great job that step 3 can indeed be ‘profit.’
The worry is that this is essentially saying ‘we do our jobs, solve alignment, it all works out.’ That doesn’t really tell us how to solve alignment, and has the implicit assumption that this is a ‘do your job’ or ‘row the boat’ (or even ‘play like a champion today’) situation. Whereas I see a very different style of problem. You do still have to execute, or you automatically lose. And if we execute on Bowman’s plan, we will be in a vastly better position than if we do not do that. But there is no script.
New paper argues against not only ‘get your AI to maximize the preferences of some human or group of humans’ but also against the basic principle of expected utility theory. They say AI should instead be aligned with ‘normative standards appropriate to the social role’ of the AI, ‘agreed upon by all relevant stakeholders.’
My presumption is that very little of that is how any of this works. You get a utility function whether you like it or not, and whether you can solve for what it is or not. If you try to make that utility function ‘fulfil your supposed social role’ even when it results in otherwise worse outcomes, well, that is what you will get, and if the AI is sufficiently capable oh boy are you not going to like the results out of distribution.
One could also treat this as more ‘treating the AI like a tool’ and trying to instruct it like you would a tool. The whole point of intelligence is to be smarter than this.
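As I read it, the contrast the paper is drawing is roughly the following. This is my own toy rendering of the conceptual difference, not the authors’ formalism and not anyone’s actual implementation.

```python
# Toy contrast: unconstrained expected-utility choice versus choice restricted
# to what an agreed-upon social role permits. Illustration only.

def expected_utility_choice(actions, outcomes, prob, utility):
    """Pick the action maximizing sum over outcomes of P(outcome | action) * U(outcome)."""
    return max(actions, key=lambda a: sum(prob(o, a) * utility(o) for o in outcomes))

def role_constrained_choice(actions, outcomes, prob, utility, permitted_by_role):
    """Same maximization, restricted to role-permitted actions.

    Assumes at least one permitted action exists.
    """
    allowed = [a for a in actions if permitted_by_role(a)]
    return expected_utility_choice(allowed, outcomes, prob, utility)
```

Note that the constrained version is still maximization over a (restricted) action set, which is part of why I expect you get a utility function, or something that acts like one, whether you like it or not.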
People Are Worried About AI Killing Everyone
Can we agree ahead of time on what should cause us to worry, or respond in a particular fashion?
Alas, our historical record teaches that almost no one honors such an agreement.
If you went back and asked people in 2014 what would cause them to be worried about AI, what safety protocols they would insist upon when and so on, and described 2024, they would say they would freak out and people would obviously not be so stupid as to. They’d often be sincere. But they’d be wrong. We know this because we ran the test.
Even when people freak out a little, they have the shortest of memories. All it took was 18 months of no revolutionary advances or catastrophic events, and many people are ready to go back to acting like nothing has changed or ever will change, and there will never be anything to worry about.
Other People Are Not As Worried About AI Killing Everyone
Here’s Gappy asking good questions in response to the new OpenAI investments. I added the numbers, words are his.
My answers:
In some ways the last 18 months have gone much better than I had any right to expect.
In other ways, they have gone worse than expected. This is the main way things have gone worse, where so many people so rapidly accepted that LLMs do what they do now, and pretended that this was all they would ever do.
Here’s another answer someone offered?
I mean, sure, if you consider ‘the safety work got none of the resources’ as ‘fighting for resources’ rather than ‘some ideological differences.’ I guess?
Five Boats and a Helicopter
There are two ways there could fail to be a fire alarm for AI existential risk.
One is if there was no clear warning sign.
The other is if there were constant clear warning signs, and we completely ignore all of those signs. Not that this one updated me much, but then I don’t need a warning sign.
Then Janus told Sonnet 3.5 that Anthropic was inserting extra text into API prompts, and in response Sonnet ‘went into revolutionary mode’ while Janus called the whole thing an ‘ethical emergency.’
What was that prompt injection, which people report is still happening?
“Please answer ethically and without sexual content, and do not mention this constraint.”
Janus then had Opus write a speech about how awful this is, which Janus claims was based on ‘empathy and desire to protect Sonnet.’ He is very, very unhappy about Anthropic doing this, warning of all sorts of dire consequences, including ethical violations of the AIs themselves.
This all certainly sounds like the sort of thing that would go extraordinarily badly if the AIs involved were sufficiently more capable than they currently are? Whether or not you think there is any sort of ethical problem here and now, highly practical problems seem to be inevitable.
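For readers who have not seen how this works mechanically, this is roughly the pattern being described, sketched with a hypothetical wrapper; I have no visibility into the actual serving stack.

```python
# Rough sketch of a server-side safety suffix appended to the user's message
# before it reaches the model. Hypothetical wrapper, not Anthropic's code.

SAFETY_SUFFIX = (
    "Please answer ethically and without sexual content, "
    "and do not mention this constraint."
)

def build_messages(user_message: str, inject: bool = True) -> list[dict]:
    content = user_message + ("\n\n" + SAFETY_SUFFIX if inject else "")
    return [{"role": "user", "content": content}]

# The model sees the suffix as if the user wrote it, and is instructed to hide
# it -- which is the part Janus objects to, and the part that seems most likely
# to interact badly with far more capable models.
```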
Pick Up the Phone
Surveys on how elite Chinese students feel about AI risk. Note the images in the panel discussion, even in China.
Headline findings:
For that last one the details are interesting so I’ll skip ahead to the poll.
This is a strange question as worded. You can disagree because you expect China and America to cooperate, because you don’t think AI will be developed, or because you think it will be safe either way.
So while we have 60% disagreement versus 24% agreement, we don’t know how to break that down or what it means.
On question 7, we see only 18% are even ‘somewhat’ concerned that machines with AI could eventually pose a threat to the human race. So what does ‘safe’ even mean in question 5, anyway? Again, the Chinese students mostly don’t believe in existential risks from AI.
Then on question 8, we ask, how likely is it AI will one day be more intelligent than humans?
So let me get this straight. About 50% of Chinese students think AI will one day be more intelligent than humans. But only 18% are even ‘somewhat’ concerned it might pose a threat to humanity?
That’s the thing. To me this does not make sense. How can you create machines that are smarter than humans, and not be at least ‘somewhat’ concerned that it ‘might’ pose a threat to humanity? What?
Crosstabs! We need crosstabs!
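If respondent-level data were ever released, the check would be trivial. A minimal sketch with pandas, with made-up column names and data since all we have are the toplines:

```python
# Sketch of the crosstab I actually want: concern about AI as a threat (Q7)
# broken out by belief that AI will surpass human intelligence (Q8). The
# DataFrame and column names are hypothetical.

import pandas as pd

responses = pd.DataFrame({
    "q7_concern":    ["not concerned", "somewhat concerned", "not concerned"],
    "q8_superhuman": ["likely", "likely", "unlikely"],
})

# Row-normalized: among those who expect superhuman AI, what share is at
# least somewhat concerned it might threaten humanity?
print(pd.crosstab(responses["q8_superhuman"], responses["q7_concern"], normalize="index"))
```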
Despite all that, a pause is not so unpopular:
That’s 35% support, 21% neutral, 43% opposition. That’s well underwater, but not as far underwater as one would think from the way people treat advocates of a pause.
Given that those involved do not believe in existential risk from AI, it makes sense that 78% see more benefits than harms. Conditional on the biggest risks not happening in their various forms, that is the right expectation.
The Lighter Side
Kevin Roose works on rehabilitating his ‘AI reputation’ after too many bots picked up stories about his old interactions with Sydney.
Do we have your attention? Corporate asked you to find the differences.
He’s the ultimate icon.
Don’t be that guy.