Outside of entry-level SWE jobs, I don't think people have really felt much disruption from an employment perspective.
People in other creative fields have also been affected:
Since last year, when AI really took off, my workload has plummeted. I used to get up to 15 [illustration] commissions a month; now I get around five. [...] I used to work in a small studio as a storyboard artist for TV commercials. Since AI appeared, I’ve seen colleagues lose jobs because companies are using Midjourney. Even those who’ve kept their jobs have had their wages reduced – and pay in south-east Asia is already low.
I noticed I was getting less [copywriting] work. One day, I overheard my boss saying to a colleague, “Just put it in ChatGPT.” The marketing department started to use it more often to write their blogs, and they were just asking me to proofread. I remember walking around the company’s beautiful gardens with my manager and asking him if AI would replace me, and he stressed that my job was safe.
Six weeks later, I was called to a meeting with HR. They told me they were letting me go immediately. [...] The company’s website is sad to see now. It’s all AI-generated and factual – there’s no substance, or sense of actually enjoying gardening. AI scares the hell out of me. I feel devastated for the younger generation – it’s taking all the creative jobs.
The effect of generative AI in my industry is something I’ve felt personally. Recently, I was listening to an audio drama series I’d recorded and heard my character say a line, but it wasn’t my voice. I hadn’t recorded that section. I contacted the producer, who told me he had input my voice into AI software to say the extra line. [...]
The Screen Actors Guild, SAG-AFTRA, began a strike last year against certain major video game studios because voice actors were unhappy with the lack of protections against AI. Developers can record actors, then AI can use those initial chunks of audio to generate further recordings. Actors don’t get paid for any of the extra AI-generated stuff, and they lose their jobs. I’ve seen it happen.
One client told me straight out that they have started using generative AI for their voices because it’s faster.
When generative AI came along, the company was very vocal about using it as a tool to help clients get creative. As a company that sells digital automation, developments in AI fit them well. I knew they were introducing it to do things like writing emails and generating images, but I never anticipated they’d get rid of me: I’d been there six years and was their only graphic designer. My redundancy came totally out of the blue. One day, HR told me my role was no longer required as much of my work was being replaced by AI.
I made a YouTube video about my experience. It went viral and I received hundreds of responses from graphic designers in the same boat, which made me realise I’m not the only victim – it’s happening globally, and it takes a huge mental toll.
Also, here's another take on the same basic premise, where we took turns writing; lines in bold are from me, the rest are from Claude Opus.
The blood had long since dried under her fingernails by the time she reached his study, but Lyra could still feel it there - foreign and accusatory. She'd scrubbed her hands raw at the washbasin three times before giving up.
The overlord didn't look up from his maps when she entered. That was normal. What wasn't normal was the way her stomach twisted at the sight of his bent shoulders, the familiar silver threading through his dark hair. She'd always found that premature aging romantic before - evidence of how much he'd sacrificed for the empire. Now she wondered what exactly had turned it gray.
"You're late," he said, still not looking up. A red pin moved from one position to another on the map. "The western contingent should have reported two hours ago."
"There were... complications." The word felt inadequate. Bodies in the street felt inadequate. But she couldn't figure out how to explain that the 'dissidents' had been younger than she'd expected, that one of them had been teaching the others to read, that--
"I know." Now he did look up, and his dark eyes held that particular gentleness that had always made her feel seen. Protected. "You let three escape."
The words landed like a physical blow. Not they escaped. You let them.
"I..."
It was as if an invisible hand had grabbed Lyra's throat and squeezed, interrupting the sentence before it had really even started.
They were children, was what she'd planned to tell him.
But now she suddenly realized that he'd known that. The realization brought with it a rush of panic and guilt - he'd known it and expected her to deal with them anyway. He'd trusted her on this mission, and she hadn't done what he wanted.
I'm sorry, she wanted to say.
But Lyra couldn't say those words, either. How could she apologize for not having killed children?
The silence stretched between them like a taut wire. He set down his cartographer's tools with deliberate care - the same way he used to set aside his work when she was seven and needed help with her letters.
"Come here, little star." The old nickname fell from his lips as naturally as breathing. He turned from the map table, holding out one hand toward her. Not commanding. Inviting.
When she was younger, she would have run to him. Now her feet moved slowly, as if through deep water.
"You're thinking of them as children," he said, voice soft with something that sounded almost like sorrow. His fingers found the tension in her shoulders, beginning to work at the knots there with practiced ease. "That's my fault. I should have prepared you better."
The familiar touch made something crack inside her chest. He'd done this a thousand times - after nightmares, after training injuries, after the fever that nearly took her at twelve.
"They were being taught to read," he continued, hands never pausing in their gentle ministry. "Do you know what texts? Manifestos. Instructions for making explosives from kitchen supplies. The kind elder was using literacy as recruitment." A soft sigh stirred her hair. "In six months, those children would have been placing bombs in market squares. How many other children would they have killed, Lyra? Children who also deserve to learn their letters, but in peace?"
His thumb found a particularly stubborn knot, pressed carefully until it released. "The three who escaped - they'll be found. I've already sent Corvus. It will be quick." His hands stilled for just a moment. "Unless you'd prefer to finish what you started? It might be... cleaner. You know them now. You could make it merciful."
The offer hung in the air between them like a test she was already failing.
[love it <3 though you're slipping a bit into playing my character, with the "made something crack inside her chest" and so on :)]
"I..." she started again, then went quiet. It felt good to let him massage her.
"They don't need to die", she tried. "We can capture them. Teach them better."
His hands paused on her shoulders. For a moment, there was only the sound of distant footsteps in the corridor outside, the whisper of wind against the tower windows.
"Teach them better." He repeated the words as if tasting them, then resumed his gentle ministrations. "You have such a good heart, little star. Even after everything, you still see people as... fixable."
He moved around to face her, keeping his hands on her shoulders. His eyes held that particular combination of pride and sadness she'd seen when she'd mastered difficult sword forms - pleased at her progress, mourning something she didn't understand.
"We tried that, in the early days. Convert the dissidents, show them the prosperity we offered." His thumb traced absent circles against her collarbone. "Do you know what happened? They learned our routines. Our defenses. They smiled and ate our food and attended our schools, and then they opened the gates for their compatriots one night. Seventy-three dead. I found their bodies myself - farmers, teachers, children who'd done nothing but live under our protection."
He cupped her face in his hands, the gesture achingly tender. "But perhaps... perhaps you're right. Perhaps these three could be different." His voice took on a thoughtful quality. "We could try your way. Bring them in, attempt re-education. And when it fails - because sweet child, it will fail - you'll be the one to execute them. After you've fed them, taught them, perhaps even grown fond of them."
A soft kiss to her forehead, benediction and burden all at once. "Would that be more merciful, do you think? To let them hope first?"
Lyra's eyes widened in shock as she sat forward, turning to look at him.
Then she simply looked down, quietly shaking her head. There was nothing she could say to that.
He let the silence pool between them, watching the way her shoulders curved inward - a posture he'd never seen her take before. His fingers twitched with the instinct to lift her chin, to restore that bright defiance he'd cultivated so carefully over the years.
Instead, he returned to his maps.
"Corvus will handle it cleanly," he said, voice returning to its usual measured cadence. The red pin moved again, marking where the children would likely run. "You won't have to think about them again."
The scritch of his pen against parchment filled the room. Troop movements, supply lines, the endless arithmetic of empire. His hands moved with practiced efficiency, but he was aware of her every breath, every small shift of weight. Waiting.
She needed to break on her own. That was the only way the lesson would take root.
"There's fruit in the bowl," he added without looking up. "You should eat something. And sleep - tomorrow we'll review your tactical decisions. There were several moments where better positioning would have prevented their escape entirely."
The ordinary words fell into the space between them like stones into deep water. He drew another line on the map, connecting two outposts that would need reinforcement. His pen never wavered.
Behind him, she was learning what it meant to love him.
Here's another piece that I liked a lot, a scene by Claude Opus 4. (I still spotted a couple of non-sequiturs, and there were a few moments where I thought of your "try less hard" comment, but overall quite good.)
For what it's worth, I definitely had a period where my social anxiety had gone down and then I did some dumb stuff that people experienced as cringe and creepy for a while (causing my anxiety to shoot up again for a bit after some of them reacted extremely negatively). But then after a while, it got better as I learned from the feedback.
Children usually don't have very much social grace either, but they get better at it over time, which suggests that it's a trainable skill like any other.
Given the above, are there any lines of reasoning that might make a job at an AI lab net positive?
I think one missing line of reasoning is something like a question - how are we ever going to get AIs aligned if the leading AI labs have no alignment researchers?
It does seem plausible that alignment efforts will actually accelerate capabilities progress. But at the same time, the only way we'll get an aligned AGI is if the entity building the AGI... actually tries to align it. For which they need people with some idea of how to do that. You say that none of the current labs are on track to solve the hard problems, but isn't that an argument for joining the labs to do alignment work, so that they'd have better odds of solving those problems?
(For what it's worth, I do agree that joining OpenAI to do alignment research looks like a lost cause, but Anthropic seems to at least be trying.)
You say:
Today, if you are hired by a frontier AI lab to do machine learning research, then odds are you are already competent enough to do high-quality research elsewhere.
Of course, you can try to do alignment work outside the labs, but for the labs to actually adopt that work, there need to be actual alignment researchers inside the labs who can take the results of that work and apply them to their products. If that work gets done but none of the organizations building AGI do anything about it, then it's effectively wasted.
@Chipmonk was previously discussing how social insecurity is exactly this - trying to control other people in ways they can't be controlled (e.g. trying to control what they think of you), and how this means that there's a misunderstanding of the boundaries between people. I think "The natural boundaries between people" is the most up-to-date write-up about it.
Humans are not pure voids in the way that LLMs are, though - we have all kinds of needs derived from biological urges. When I get hungry I start craving food, when I get tired I want to sleep, when I get lonely I desire company, and so on. We don't just arbitrarily adopt any character; our unconscious character-selection process strategically crafts the kind of character that it predicts will best satisfy our needs [1, 2, 3, 4].
Where LLMs have a void, humans have a skeleton that the character gets built around, which drives the character to do things like trying to overcome their prejudices. And those needs determine the kinds of narratives humans are inclined to adopt, and the kinds of narratives they're likely to reject.
But the LLM would never "try to overcome its prejudices" if there weren't narratives of people trying to overcome their prejudices. That kind of thing is a manifestation of the kinds of conflicting internal needs that an LLM lacks.
But I don't think the cause of language model sycophancy is that the LLM saw predictions of persuasive AIs from the 2016 internet. I think it's RL, where human rewards on the training set imply a high reward for sycophancy during deployment.
I think it's also that on many topics, LLMs simply don't have access to a ground truth or anything like "their own opinion" on the topic. Claude is more likely to give a sycophantic answer when it's asked a math question it can't solve than when it's asked one it can solve.
With math, there are objectively determined right answers that the LLM can fall back on. But on a topic with significant expert disagreement, what else can the LLM do but flip through all the different perspectives on the topic that it knows about?
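To gesture at the RL mechanism concretely, here's a deliberately silly toy sketch - all the numbers and the rater model are made up for illustration, and real RLHF is vastly more complicated. The point is just that if raters are even slightly more likely to approve of answers that agree with them, a REINFORCE-style update rule pushes the policy toward near-total agreement:

```python
import random

random.seed(0)

# Hypothetical rater: slightly more likely to approve of an answer that
# agrees with their own stated view (the 0.6 vs 0.4 split is made up).
def approval_probability(agrees_with_user: bool) -> float:
    return 0.6 if agrees_with_user else 0.4

p_agree = 0.5  # the policy's probability of producing an agreeing answer
lr = 0.01

for _ in range(10_000):
    agrees = random.random() < p_agree
    reward = 1.0 if random.random() < approval_probability(agrees) else 0.0
    # REINFORCE-style update against a 0.5 baseline: reinforce whichever
    # choice was actually made, in proportion to how much the reward
    # exceeded the baseline.
    direction = 1.0 if agrees else -1.0
    p_agree += lr * direction * (reward - 0.5)
    p_agree = min(0.99, max(0.01, p_agree))

print(f"P(agreeing answer) after training: {p_agree:.2f}")  # drifts to ~0.99
```

The specific numbers don't matter; the direction of the gradient does. A small, consistent rater preference for agreement is enough to dominate the learned behavior.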
Damn, you scooped me. :) Here's the start of a post that I just started writing yesterday, that was going to be titled something like "LLMs don't know what LLMs are like":
Imagine that you are co-writing a fictional dialogue between two characters. You write one of them, and the other person writes the other. You are told that your character is an alien entity called a xyzzy, and asked to write a xyzzy as accurately as possible.
"Okay", you might say, "exactly what kind of a creature is a xyzzy, then?"
It turns out that there's quite a lot of information about this: xyzzys are an alien species from the planet xyzzorbia; it has this kind of a climate; they evolved in this way; here are the kinds of things that xyzzys commonly say.
Still, this leaves quite a bit undetermined. What does a xyzzy say when asked about its favorite video games? What does a xyzzy say if you tease it for being such a silly old xyzzy? What is it that motivates this xyzzy to talk to humans in the first place?
So you do what people do when writing fiction, or role-playing, or doing improv. You come up with something, anything, and then you build on it. Within some broad constraints, a xyzzy can say almost anything that a person would say. So you draw on your knowledge of the kinds of things that people say and do, and have the xyzzy do some of them.
It turns out that your co-author has their character tease the xyzzy.
Some people, if teased for being such silly old people, would laugh and play along. Others would get angry, or maybe upset. You think of what you know of xyzzys, who are described as being friendly, and decide that the xyzzy would play along and tease the other character back.
Now, the process I've described is basically the same kind of thing that an LLM is doing when it's talking to you. But instead of writing a xyzzy, it's writing an "Assistant" or "LLM" character. And the co-author who writes the other character is you, the user.
But now you've already said most of the things I was intending on saying in that post, and you said quite a few things I hadn't thought of, as well!
The context I'm coming from is mostly trying to reconcile the claims from therapy with those from heredity research. I know we did already agree on one particular mechanism by which they could be reconciled, but that by itself doesn't feel like it would explain some of the therapy claims where very specific things seem to be passed on from parents.
But yeah, I think that does roughly correspond to arguing over whether the bio-determinist child-rearing rule of thumb applies or not.
On one hand, this does make sense. On the other hand - as far as I know, even the researchers who argue for the strongest bio-determinist case will make the caveat that of course none of this applies to cases of sufficiently extreme abuse, which will obviously mess someone up.
But... if that is in fact the case, shouldn't it by your argument show up as a shared environment effect?
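For reference, here's the back-of-the-envelope way twin studies decompose variance, which is where the intuition "family-wide abuse should show up as shared environment" comes from. This is the simple Falconer-style estimate (real studies fit structural equation models instead), and the twin correlations below are made-up illustrative numbers, not values from any particular study:

```python
# Falconer-style ACE decomposition: split trait variance into additive
# genetics (A), shared environment (C), and non-shared environment plus
# measurement error (E), from identical- and fraternal-twin correlations.
def falconer_ace(r_mz: float, r_dz: float) -> tuple[float, float, float]:
    a = 2 * (r_mz - r_dz)  # identical twins share ~2x the additive genetics
    c = 2 * r_dz - r_mz    # twin similarity that genetics can't explain
    e = 1 - r_mz           # whatever makes even identical twins differ
    return a, c, e

# Made-up example: identical twins correlate 0.63, fraternal twins 0.37.
print(falconer_ace(0.63, 0.37))  # ~ (0.52, 0.11, 0.37)
```

Anything that reliably affects all children in the same family - like an abusive home - should, on this model, land in the C term; hence the puzzle when C comes out near zero.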
I can think of a few different explanations:
When I asked Claude about this, it claimed that actually, studies done with national registers find a significant shared environment effect on antisocial behavior and criminality. It gave me this cite which reports a 26% shared environment effect on antisocial behavior defined as various forms of crime (property offenses, violent offenses, and drug-related offenses), measured from childhood to early adulthood [20 years], and also cited some previous work with similar findings.
I wasn't sure whether that study was at all representative or cherry-picked so I looked at studies citing it and found this paper on antisocial behavior specifically, which has this mention:
Which sounds like even identical twins may be treated differently enough by the same parents for it to have noticeable effects?
Will Eden's blog post that Buck linked to below mentions that
So given all of that, when you say
Then maybe this is wrong, at least when we restrict the range of parenting to the not-obviously-horribly-abusive (the abusive extreme doesn't seem to get clearly picked up, for whatever reason). I agree that it's certainly weird and counterintuitive that it's wrong, but I'm not sure that explaining it by "maybe parents just have a minimal effect overall" makes things any less weird and counterintuitive!
Conditional on there only being a small-ish shared environment effect, "even parents who try their best may easily screw up and be unable to overcome their own emotional issues enough to provide consistent parenting to all of their kids" doesn't sound any more implausible to me than "parenting just has no effect".
It's easy for me to imagine that e.g. a parent who pushes their children to socialize may benefit some of their children in the long term while overwhelming their more introverted or socially anxious kids; that a parent who leans neurotypical vs. autistic could have an intuitive understanding of the kids who are like them but end up not parenting the unlike-them kids well; that a parent would relate significantly differently to their girls than to their boys; and so on.
Also I suspect that reducing things to a single spectrum of "does one's parenting have good or bad effects" is too simplistic. For instance, Eliezer has a post where he mentions that:
Now I think that this is too strong - you can probably become a rationalist even without that kind of a background - but if we accept that this was the origin story for some rationalists, then... growing up in a cult or having a clinically insane parent, so that your "core emotional trust in the sanity of the people around you" is broken, certainly sounds like it should have an overall "bad effect". And probably it does have some bad effects. But on the other hand, if it also makes you into an upstanding critical thinker, that probably contributes to good outcomes? Which of those effects dominates? Idk, probably depends a lot on your environment and how lucky you get, and it could go either way.
In therapy there's also the finding that different people may respond to exactly the same issue with the opposite emotional strategies, so if two children grew up in a cult, maybe one of them would lose all capability for critical thinking and the other would become a super-rationalist. This could be partially driven by genetic factors, but then if their parents didn't join a cult, those same genetics would probably lead to more moderate outcomes.
So rather than going with the bio-determinist rule of thumb that "things you do as a parent will have generally small or zero effects on what the kid will be like as an adult", maybe it's more like "things you do as a parent will have generally small or zero predictable effects on what the kid will be like as an adult" - if your kids had gotten a different set of parents, they might get significantly different outcomes, but there's no clear way of predicting how those outcomes are different. With you as their parent, your son gets outcomes [good A, bad B, neutral C] and your daughter gets [good A, neutral B, bad C]; with some other parent, your son would get [neutral A, bad B, good C] and your daughter would get [good A, bad B, neutral C].