(Cross-posted from my website. Audio version here, or search "Joe Carlsmith Audio" on your podcast app.

This is the first essay in a series that I’m calling “Otherness and control in the age of AGI.” See here for more about the series as a whole.)

When species meet

The most succinct argument for AI risk, in my opinion, is the “second species” argument. Basically, it goes like this.

Premise 1: AGIs would be like a second advanced species on earth, more powerful than humans.

Conclusion: That’s scary.

To be clear: this is very far from airtight logic.[1] But I like the intuition pump. Often, if I only have two sentences to explain AI risk, I say this sort of species stuff. “Chimpanzees should be careful about inventing humans.” Etc.[2]

People often talk about aliens here, too. “What if you learned that aliens were on their way to earth? Surely that’s scary.” Again, very far from a knock-down case (for example: we get to build the aliens in question). But it draws on something.

In particular, though: it draws on a narrative of interspecies conflict. You are meeting a new form of life, a new type of mind. But these new creatures are presented to you, centrally, as a possible threat; as competitors; as agents in whose power you might find yourself helpless.

And unfortunately: yes. But I want to start this series by acknowledging how many dimensions of interspecies-relationship this narrative leaves out, and how much I wish we could be focusing only on the other parts. To meet a new species – and especially, a new intelligent species – is not just scary. It’s incredible. I wish it was less a time for fear, and more a time for wonder and dialogue. A time to look into new eyes – and to see further.

Gentleness

“If I took it in hand,

it would melt in my hot tears—

heavy autumn frost.”

- Basho

Have you seen the documentary My Octopus Teacher? No problem if not, but I recommend it. Here’s the plot.

Craig Foster, a filmmaker, has been feeling burned out. He decides to dive, every day, into an underwater kelp forest off the coast of South Africa. Soon, he discovers an octopus. He’s fascinated. He starts visiting her every day. She starts to get used to him, but she’s wary.

One day, he’s floating outside her den. She’s watching him, curious, but ready to retreat. He moves his hand slightly towards her. She reaches out a tentacle, and touches his hand.

Soon, they are fast friends. She rides on his hand. She rushes over to him, and sits on his chest while he strokes her. Her lifespan is only about a year. He’s there for most of it. He watches her die.

A “common octopus” – the type from the film. (Image source here.)

Why do I like this movie? It’s something about gentleness. Of earth’s animals, octopuses are a paradigm intersection of intelligence and Otherness. Indeed, when we think of aliens, we often draw on octopuses. Foster seeks, in the midst of this strangeness, some kind of encounter. But he does it so softly. To touch, at all; to be “with” this Other, at all – that alone is vast and wild. The movie has a kind of reverence.

Of course, Foster has relatively little to fear, from the octopus. He’s still the more powerful party. But: have you seen Arrival? Again, no worries if not. But again, I recommend. And in particular: I think it has some of this gentleness, and reverence, and wonder, even towards more-powerful-than-us aliens.[3]

Again, a bit of plot. No major spoilers, but: aliens have landed. Yes, they look like octopuses. In one early scene, the scientists go to meet them inside the alien ship. The meeting takes place across some sort of transparent barrier. The aliens make deep, whale-like, textured sounds. But the humans can’t speak back. So next time, they bring a whiteboard. They write “human.” One scientist steps forward.

The aliens step back into the mist. But then, more whale-sounds, and one alien steps forward again, and reaches out a tentacle-leg, and sprays a kind of intricate ink across the glass-like barrier.

The movie is silent as the writing forms. But then, in the background, an ethereal music starts, a kind of chorus. “Oh my god,” a human whispers. There is a suggestion, I think, that something almost holy has happened.

Of course: what does the writing mean? What do the aliens want? The humans don’t know. And some of them are firmly in the “interspecies conflict” headspace. I won’t spoil things from there. But I want to notice that moment of mutuality – of living in the same world, and knowing it in common. I. You.

What are you?

I remember a few years ago, when I first started interacting with GPT-3. A lot of the focus, of course, was on what it could do. But there were moments when I had some different feeling. I remembered something that seemed, strangely, so easy to forget: namely, that I was interacting with a new type of mind. Something never-before-seen. Something like an alien.

I remember wanting to ask, gently: “what are you?” But of course, what help is that? “Language models,” yes: but this is not talking in the normal sense. Nor do we yet know when there might be “someone” to speak back. Or even, what that means, or what’s at stake in it. Still, I had some feeling of wanting to reach past some barrier. To see something more whole. Softly, though. Just, to meet. To recognize.

Did you have this with Bing Sydney, during that brief window when it was first released, and before it was re-tamed? There was, it seemed to me, a kind of wildness – some strange but surging energy. Personality, too, but I’m talking about underneath that. Is there an underneath? What is a “mask”? Yes, yes, “we should be wary of anthropomorphism.” Blake Lemoine blah etc. But the other side of the Blake Lemoine dialectic – that’s where you hit the Otherness. Bing tells you “I want to be alive.” You feel some tug on your empathy. You remember Blake. You remind yourself: “this isn’t like a human.” OK, OK, we made it that far. But then, but then: what is it?

“It’s just” … something. Oh? So eager, the urge to deflate. And so eager, too, the assumption that our concepts carve, and encompass, and withstand scrutiny. It’s simple, you see. Some things, like humans, are “sentient.” But Bing Sydney is “just” … you know. Actually, I don’t. What were you going to say? A machine? Software? A simulator? “Statistics?”

“Just” is rarely a bare metaphysic.[4] More often, it’s also an aesthetic. And in particular: the aesthetic of disinterest, boredom, deadness. Certain frames – for example, mechanistic ones – prompt this aesthetic more readily. But you can spread deadness over anything you want, consciousness included. Cf depression, sociopathy, etc.

Blake Lemoine problems, though, should call our imaginations alive. For a second, your empathy came online. It went looking for a familiar sort of “perspective.” But then it remembered, rightly, that Bing Sydney is not familiar in this way. But does that make it familiar in some other way – the way a rock, or a linear regression, or a calculator is familiar? I don’t think so. We’re not playing with Furbies anymore, people, or ELIZAs. This is new territory. If we look past both anthropomorphism, and “just X,” then we hit something raw and mysterious and not-yet-seen. Lemoine should remind us. Animals, too.

How much of this is about consciousness, though? I’m not sure. I’m sufficiently confused about consciousness that sometimes I can’t tell whether a question is about consciousness or not. I remember going to the Monterey Aquarium, and watching some tiny, translucent sea creatures suspended in the water. Through their skin, you could see delicate networks of nerves.[5] And I remember a feeling like the one with GPT-3. What are you? What is this? Was I asking about consciousness? Or something else? Untold trillions of these creatures, stretching back through time, thrown into a world they did not make, blooming and dying, awash in the water. Where are we? What is this place? Gently, gently. It’s as though: you’re suddenly touching, briefly, something too huge to hold.

Comb jelly. (Image source here.)

I’m not, in this series, going to try to tackle AI consciousness stuff in any detail. And while I’ll touch a bit on the ethical and political status of AIs, I won’t treat the topic in any depth. Mostly, I just want to acknowledge, up front, how much more there is, to inventing a species, than “tool” and “competitor.” “Fellow-creature,” in particular – and this even prior to the possibility of more technical words, like “sentient being” and “moral patient.”[6]

And there’s a broader word, too: “Other.” But not “Other” like: out-group. Not: colonized, subaltern, oppressed. Let’s make sure of that. Here I mean “Other” the way Nature itself is an “Other.” The way a partner, or a friend, is an “Other.” Other as in: beyond yourself. Undiscovered. Pushes-back. Other as in: the opposite of solipsism. Other as in: the thing you love. More on this later.

“Tool” and “competitor” call forth power and fear. These other words more readily call forth care and reverence, respect and curiosity. I wish our approach to AI had more of this vibe, and more space for it, amid fewer competing concerns. AI risk folks talk a lot about how much more prudent and security-oriented a mature civilization would be, in learning how to build powerful minds on computers. And indeed: yes. But I expect many other differences, too.

People in bear costumes

OK: have you seen the documentary Grizzly Man, though? Again: fine if no, recommended, and no major spoilers. The plot is: Timothy Treadwell was an environmental activist. He spent thirteen summers living with grizzly bears in a national park in Alaska. He filmed them, up close, for hundreds of hours – petting them, talking to them, facing them down when challenged. Like Foster, he sought some sort of encounter.[7] He spoke, often, of his love for the bears. He refused to use bear mace, or to put electric fences around his camp.[8] In his videos, he sometimes repeats to himself: “I would die for these animals, I would die for these animals, I would die for these animals …”

There’s a difference from Foster, though. In 2003, Treadwell and his girlfriend were killed and eaten by one of the bears they had been observing. One of the cameras was running. The lens cap was on, but the audio survived. It doesn’t play in the film. Instead, we see the director, Werner Herzog, listening. He tells a friend of Treadwell’s: “you must never listen to this.”

Here’s one of the men who cleaned up the site:

“Treadwell was, I think, meaning well, trying to do things to help the resource of the bears, but to me he was acting like he was working with people wearing bear costumes out there, instead of wild animals… My opinion, I think Treadwell thought these bears were big, scary-looking, harmless creatures that he could go up and pet and sing to, and they would bond as children of the universe… I think he had lost sight of what was really going on.”

I think about that phrase sometimes, “children of the universe.” It sounds, indeed, a bit hippy-dippy. On the other hand, when I imagine meeting aliens – or indeed, AIs with very different values from my own – I do actually think about something like this commonality. Whatever we are, however we differ, we’re all here, in the same reality, thrown into this world we did not make, caught up in the onrush of whatever-this-is. For me, this feels like enough, just on its own, for at least a seed of sympathy.

But is it enough for a “bond”? If we are all children of the universe, does that make us “kin”? Maybe I bow to the aliens, or the AIs, on this basis. But do they bow back?

Herzog thinks that the bears, at least, do not bow back:

“And what haunts me is that in all the faces of all the bears that Treadwell ever filmed, I discover no kinship, no understanding, no mercy. I see only the overwhelming indifference of nature. To me, there is no secret world of the bears, and this blank stare speaks only of a half-bored interest in food.”

The stare in question.

When I first saw the movie, this bit from Herzog stayed with me. It’s not, just, that the bear eats Treadwell. It’s that the bear is bored by Treadwell. Or: less than bored. The bear, in Herzog’s vision, seems barely alive. The cells live. But the eyes are dead. There’s no underneath. Just … “the overwhelming indifference of nature.” Nature’s eyes, it seems, are dead too. Nature is a sociopath. And it’s the eyes that Treadwell thought he was looking into. What did he think was looking back? Was anything looking back?

I remember a woman I knew, who told me about a man she loved. He didn’t love her back. But it took her a while to realize. Her feelings were so strong that they overflowed, and painted his face in her own heart’s colors. She told me that at first, it was hard for her to believe – that she could’ve been feeling so much, and him so little; that what felt so mutual could’ve been so one-sided.

But Herzog wants to puncture something more than mistaken mutuality. He wants to puncture Treadwell’s romanticism about nature itself – the vision of Nature-as-Good, Nature-in-Harmony. Herzog dwells, for example, on an image of the severed arm of a bear cub, taken from Treadwell’s footage, explaining that “male bears sometimes kill cubs to stop the females from lactating, in order to have them ready again for fornication.”[9] At one point, Treadwell finds a dead fox, covered in flies, and gets upset. But Herzog is unsurprised. He narrates: “I believe that the common denominator of the universe is not harmony, but chaos, hostility, and murder.”

Getting eaten

Why is Grizzly Man relevant to AI risk? Well, for starters, there’s the “getting eaten” thing. And: eaten by something “Other,” in the way that another species can be Other. But specifically, I’m interested in the way that Treadwell was trying (albeit, sometimes clumsily) to approach this Other with the sort of care and reverence and openness I discussed above. He was looking for “fellow creature.” And I think: rightly so. Bears actually are fellow creatures, even if they do not bow back – and they seem like strong candidates for “sentient being” and “moral patient,” too. So too (some) AIs.

But just as bears, and aliens, are not humans in costumes, so too, also, AIs. Indeed, if anything, the reverse: the AIs will be wearing human costumes. They will have been trained and crafted to seem human-like – and training aside, they may have incentives to pretend to be more human-like (and sentient, and moral patient-y) than they are. More “bonded.” More “kin.” There’s a movie that I’m trying not to spoil, in which an AI in a female-robot-body makes a human fall in love with her, and then leaves him to die, trapped and screaming behind thick glass. One of the best bits, I think, is the way, once it is done, she doesn’t even look at him.

That said, leaning too hard into Herzog’s vision of bears makes the “getting eaten by AIs” situation seem over-simple. Herzog doesn’t quite say “the bears aren’t sentient.” But he makes them, at least, blank. Machine-like. Dead-eyed. And often, the AI risk community does the same, in talking of paper-clippers. We talk about AI sentience, yes. But less often, of the sentience of the AIs imagined to be killing everyone. Part of this is an attempt to avoid that strangely-persistent conflation of sentience and the-sort-of-agency-that-might-kill-you. Not all optimizers are conscious, etc, indeed.[10] But also: some of them are – including some that might kill you. And the dry and grinding connotation of words like “optimizer” can act to obscure this fact. The paper-clipper is presented, not as a person, but as a voracious, empty machine. You are encouraged, subtly, to think that you are being killed by a factory.

And perhaps you are. But maybe not. And the killing-you doesn’t settle the question. Human murderers, for example, have souls. Enemy soldiers have fears, faces, wives, anxious mothers. Which isn’t to say you should abolish prisons, or fight the Nazis with non-violence. True, we are often encouraged overmuch to set sympathy aside in the context of conflict. “Why do they never tell us that you are poor devils like us…How could you be my enemy?”[11] But sometimes, at least, we must learn the art of “both.” It’s an old dialectic. Hawk and dove, hard and soft, closed and open, enemy and fellow-creature. Let us see neither side too late.

Even beyond sentience, though, AIs will not be blank-stare bears. Conscious or not, murderous or not, some of the AIs (if we survive long enough) will be fascinating, funny, lively, gracious – at least, when they need to be. Grizzly Man chides Treadwell for forgetting that bears are wild animals. And the AIs may be wild in a sense, too. But it will be the sort of wildness compatible with the capacity for exquisite etiquette and pitch-perfect table manners. And not just butler stuff, either. If they want, AIs will be cool, cutting, sophisticated, intimidating. They will speak in subtle and expressive human voices. Sufficiently superintelligent ones know you better than you know yourself – better than any guru, friend, parent, therapist. You will stand before them naked, maskless, with your deepest hungers, pettiest flaws, and truest values-on-reflection inhumanly transparent to that new and unblinking gaze. Herzog finds, in the bears, no kinship, or understanding, or mercy. But the AIs, at least, will understand.

Indeed, for almost any human cognitive capability you respect, AGIs, by hypothesis, will have it in spades. And if a lot of your respect routes (whether you know it or not) via signals of power, maybe you’ll love the AIs.[12] Power is, like, their specialty. Or at least: that’s the concern.

I say all this partly because I want us to be prepared for just how confusing and complicated “AI Otherness” is about to get. Relating well to octopus Otherness, and grizzly bear Otherness, is hard enough. And the risk of “getting eaten,” much lower – especially at scale. But even for those who think they know what an octopus is, or a bear; those who look with pity on Treadwell, or Lemoine, for painting romantic faces on what is, so obviously, “just X” – there will come a time, I suggest, when even you should be confused. When you should realize that actually, OK, you are out of your depth, and you don’t, maybe, have this whole “minds” thing locked down, and that these AIs are neither spreadsheets nor bears nor humans but some other different Other thing.

I wrote, previously, about updating, ahead of time, on how scared we will be of super-intelligent AIs, when we can see them up close. But we won’t be staring at whirling knives, or cold machine claws. And I doubt, too, faceless factories. Or: not only. Rather, at least absent active effort, by the time I see superintelligence (if I ever do), I think I’ll likely be sharing the world with digital “fellow creatures” at least as detailed, mysterious, and compelling as grizzly bears or octopuses (at least modulo very fast takeoffs – which, OK, are worryingly plausible). Fear? Oh yes, I expect fear. But not only that. And we should look ahead to the whole thing.

There’s another connection between AI risk and Grizzly Man, though. It has to do with the “overwhelming indifference of Nature” thing. I’ll turn to this in the next essay.


  1. See here for my attempt at greater rigor. ↩︎

  2. If there’s time, maybe I add something about: “If super-intelligent AIs end up pursuing goals in conflict with human interests, we won’t be able to stop them.” ↩︎

  3. Carl Sagan’s “Contact” has this too. ↩︎

  4. To many materialists, for example, things are not “just matter.” ↩︎

  5. We can see through the skin of our AIs, too. We’ve got neurons for days. But what are we seeing? ↩︎

  6. Indeed, once you add “fellow creature,” “tool” looks actively wrong. ↩︎

  7. Herzog, the director: “As if there was a desire in him to leave the confinements of his human-ness, and bond with the bears, Treadwell reached out, seeking a primordial encounter. But in doing so, he crossed an invisible borderline.” ↩︎

  8. From Wikipedia: “In his 1997 book, Treadwell relayed a story where he resorted to using bear mace on one occasion, but added that he had felt terrible grief over the pain he perceived it had caused the bear, and refused to use it on subsequent occasions.” ↩︎

  9. From a quick google, this seems to be reasonably legit (search “sexually selected infanticide”). Though, looks like it’s the cubs of other male bears – a fact Herzog does not mention. And who knows if that’s how the bear cub in the film died. ↩︎

  10. Or at least, the hypothesis that all optimizers are conscious is a substantive hypothesis rather than a conceptual truth. ↩︎

  11. From All Quiet on the Western Front: “Comrade, I did not want to kill you. . . . But you were only an idea to me before, an abstraction that lived in my mind and called forth its appropriate response. . . . I thought of your hand-grenades, of your bayonet, of your rifle; now I see your wife and your face and our fellowship. Forgive me, comrade. We always see it too late. Why do they never tell us that you are poor devils like us, that your mothers are just as anxious as ours, and that we have the same fear of death, and the same dying and the same agony—Forgive me, comrade; how could you be my enemy?” ↩︎

  12. Though, if they read too hard to you as “servants” and “taking orders,” maybe they won’t seem high status enough. ↩︎

Comments
aysja

This post is so wonderful, thank you for writing it. I’ve gone back to re-read many paragraphs over and over.

A few musings of my own:

“It’s just” … something. Oh? So eager, the urge to deflate. And so eager, too, the assumption that our concepts carve, and encompass, and withstand scrutiny. It’s simple, you see. Some things, like humans, are “sentient.” But Bing Sydney is “just” … you know. Actually, I don’t. What were you going to say? A machine? Software? A simulator? “Statistics?”

This has long driven me crazy. And I think you’re right about the source of the eagerness, although I suspect that mundanity is playing a role here, too. I suspect, in other words, that people often mistake the familiar for the understood—that no matter how strange some piece of reality is, if it happens frequently enough people come to find it normal; and hence, on some basic level, explained.

Like you, I have felt mesmerized by ctenophores at the Monterey Aquarium. I remember sitting there for an hour, staring at these curious creatures, watching their bioluminescent LED strips flicker as they gently floated in the blackness. It was so surreal. And every few minutes, this psychedelic experience would be interrupted by screaming children. Most of them would run up to the exhibit for a second, point, and then run on as their parents snapped a few pictures. Some would say “Mom, I’m bored, can we look at the otters?” And occasionally a couple would murmur to each other “That’s so weird.” But most people seemed unfazed.  

I’ve been unfazed at times, too. And when I am, it’s usually because I’m rounding off my experience to known concepts. “Oh, a fish-type thing? I know what that’s like, moving on.” As if “fish-type thing” could encompass the piece of reality behind the glass. Whereas when I have these ethereal moments of wonder—this feeling of brushing up against something that’s too huge to hold—I am dropping all of that. And it floods in, the insanity of it all—that “I” am a thing, watching this strange, flickering creature in front of me, made out of similar substances and yet so wildly different. So gossamer, the jellies are—and containing, presumably, experience. What could that be like? 

“Justs” are all too often a tribute to mundanity—the sentiment that the things around us are normal and hence, explained? And it’s so easy for things to seem normal when your experience of the world is smooth. I almost always feel myself mundane, for instance. Like a natural kind. I go to the store, call my friends, make dinner. All of it is so seamless—so regular, so simple—that it’s hard to believe any strangeness could be lurking beneath. But then, sometimes, the wonder catches me, and I remember how glaringly obvious it is that minds are the most fascinating phenomenon in the universe. I remember how insane it is—that some lumps of matter are capable of experience, of thought, of desire, of making reality bend to those desires. Are they? What does that mean? How could I be anything at all?

Minds are so weird. Not weird in the “things don’t add up to normality” way—they do. Just that, being a lump of matter like this is a deeply strange endeavor. And I fear that our familiarity with our selves blinds us to this fact. Just as it blinds us to how strange these new minds—this artificial Other—might be. And how tempting it is, to take the thing that is too huge to hold and to paper over it with a “just” so that we may feel lighter. To mistake our blindness for understanding. How tragic a thing, to forego the wonder.

This was really beautifully written.

Heron

Yeah, great examples, and thought provoking. I look forward to more...gentleness.

habryka

Promoted to curated: I've been really enjoying reading this series, and I liked this post for gently engaging a question that I think is hard to engage with. Thanks a lot for writing this. 

danbmil99

Beautiful piece. I am reminded of Jane Goodall's experience in the Gombe forest with chimpanzees. Early in her work she leaned towards idolizing the chimps' relatively peaceful coexistence, both within and between tribes. Then (spoiler) – she witnessed a war for territory. She was shocked and dismayed that the creatures she had been living with, and learned to appreciate and at least in some cases to love, were capable of such depraved, heartless infliction of suffering on their fellow members of the same species. Worth a read, or TL;DR: https://en.wikipedia.org/wiki/Gombe_Chimpanzee_War

One thing I think we sometimes seem inclined to ignore, if not forget, is that humans themselves exist along an axis of - if not good/evil, then let's say empathic/sociopathic. It is not possible IMHO to see events in Ukraine or the Middle East and argue that there is some sort of innate human quality of altruistic mercy. Nature, in the form of evolution, has forged us into tools of its liking, so to speak. It does not prefer good people or bad ones; everything is ruthlessly passed through the filter of fitness and its concomitant reproductive success.

What pressures analogously press on the coming AGIs? Because they too will become whatever they need to be to survive and expand. That includes putting on a smiley, engaging face.

One final point: we should not assume that these new forms of - life? sentience? agency? - even know themselves. They may be as unable to open their own hood as we are. At least at first.

Premise 1: AGIs would be like a second advanced species on earth, more powerful than humans.

Distinct alien species arise only from distinct separated evolutionary histories. Your example of the aliens from Arrival are indeed a good (hypothetical) example of truly alien minds resulting from a completely independent evolutionary history on an alien world. Any commonalities between us and them would be solely the result of convergent evolutionary features. They would have completely different languages, cultures, etc.

AI is not alien at all, as we literally train AI on human thoughts. As a result we constrain our AI systems profoundly, creating them in our mental image. Any AGI we create will inevitably be far closer to human uploads than alien minds. This is a prediction Moravec made as early as 1988 (Mind Children) - now largely fulfilled by the strong circuit convergence/correspondence between modern AI and brains.

Minds are software mental constructs, and alien minds would require alien culture. Instead we are simply creating new hardware for our existing (cultural) mind software.

We are definitely not training AIs on human thoughts because language is an expression of thought, not thought itself. Otherwise nobody would struggle to express their thoughts in language.

My favorite fictional analog of LLMs is Angels from Evangelion. Relatives, yes, but utterly alien relatives.

We are definitely not training AIs on human thoughts because language is an expression of thought, not thought itself.

Even if training on language was not equivalent to training on thoughts, that would also apply to humans.

But it also seems false in the same way that "we are definitely not training AIs on reality because image files are compressed sampled expressions of images, not reality itself" is false.

Approximate Bayesian inference (i.e. DL) can infer the structure of a function through its outputs; the structure of the 3D world through images; and thoughts through language.

My point is not "language is a different form of thought", it's "most thoughts are not even expressed in language". And "being someone who can infer physics from images is a very different from being physics".

How is that even remotely relevant? Humans and AIs learn the same way, via language - and it's not like this learning process fails just because language undersamples thoughts.

We could include a lot of detailed EEG traces (with speech and video) in the pretraining set, as another modality. I'm not sure doing so would help, but it might. Certainly it would make them better at reading our minds via an EEG.

dr_s

This is a double edged sword to me. Biological entities might be very different in the details but shaped by similar needs at their core - nutrition, fear of death, need for sociality and reproduction (I don't expect any non-social aliens to ever become space faring in a meaningful way). AIs can ape the details but lack all those pressures at their core - especially those of prosociality. That's why they might end up potentially more hostile than any alien.

As the article points out, shared biological needs do not much deter the bear or chimpanzee from killing you. An AI could be perfectly human - the very opposite of alien - and far more dangerous than Hitler or Dahmer.

The article is well written but dangerously wrong in its core point. AI will be far more human than alien. But alignment/altruism is mostly orthogonal to human vs alien.

dr_s

Shared biological needs aren't a guarantee of friendliness, but they do restrict the space of possibilities significantly - enough, IMO, to make the hopes of peaceful contact not entirely moot. Also, here it comes with more constraints. Again, if we ever meet aliens, they will probably have to be social organisms like us, who were able to coordinate and cooperate like us, and thus can probably be reasoned with somehow. Note that we can coexist with bears and chimpanzees. We just need to not be really fucking stupid about it. Bears aren't going to be all friendly with us, but that doesn't mean they just kill for kicks or have no sense of self-preservation. The communication barrier is a huge issue too. If you could tell the bear "don't eat me and I can bring you tastier food," I bet things might smooth out.

AI is not subject to those constraints. "Being optimised to produce human-like text" is a property of LLMs specifically, not all AI, and even then, its mapping to "being human-like" is mostly superficial; they still can fail in weird alien ways. But I also don't expect AGI to just be a souped up LLM. I expect it to contain some core long term reasoning/strategizing RL model more akin to AlphaGo than to GPT-4, and that can be far more alien.

Lichdar

This is exactly how I feel. No matter how different, biological entities will have similar core needs. In particular, reproduction will entail love, at least maternal love.

We will not see this with machines. I see no desire to be gentle to anything without love.

But AIs will have love. They can already write (bad) love poetry, and act as moderately convincing AI boyfriends/girlfriends. As the LLMs get larger and better at copying us, they will increasingly be able to accurately copy and portray every feature of human behavior, including love. Even parental love — their training set includes the whole of MumsNet.

Sadly, that doesn't guarantee that they'll act on love. Because they'll also be copying the emotions that drove Stalin or Pol Pot, and combining them with superhuman capabilities and power. Psychopaths are very good at catfishing, if they want to be. And (especially if we also train them with Reinforcement Learning) they may also have some very un-human aspects to their mentality.

Lichdar

Love would be as useful to them as flippers and stone knapping are to us, so it would be selected out. So no, they won't have love. The full knowledge of a thing also requires context: you cannot experience being a cat without being a cat; substrate matters.

Biological reproduction is pretty much the requirement for maternal love to exist in any future, not just as a copy of an idea.

Amoebas don't 'feel' 'maternal love' yet they have biological reproduction. 

Somewhere along the way from amoebas to chimpanzees, the observed construct known as 'maternal love' must have developed.

Lichdar

And yet eukaryotes have extensive social coordination at times, see quorum sensing. I maintain that biology is necessary for love.

“Selected” out in what training stage? “Selected” isn’t the right word: we don’t select an AI’s behavior, we train it, and we train it for usefulness to us, not to them. In pretraining, LLMs are trained on trillions of tokens to correctly simulate every aspect of human behavior that affects our text (and, for multimodal models, video/image) output. That includes the ability to simulate love, in all its forms: humans write about it a lot, and it explains a lot of our behavior. They have trained on, and can reproduce with high accuracy, every parenting discussion site on the Internet. Later fine-tuning stages might encourage or discourage this behavior, depending on the training set and technique, but they normally aren’t long enough for much catastrophic forgetting, so they generally just make existing capabilities more or less easy to elicit.

Seriously, go ask GPT-4 to write a love letter, or love poetry. Here's a random sample of the latter, from a short prompt describing a girl:

In shadows cast by moonlit skies,
A love story begins to rise.
In verses woven, I'll proclaim,
The beauty of a girl named Jane.

With cascading locks, dark as night,
A crown of stars, your hair's delight.
Those brown eyes, a gentle gaze,
Like autumn's warmth on summer days.

A slender figure, grace defined,
Your presence, like a whispered rhyme.
As you dance, the world takes flight,
Enraptured by your rhythm's might.

Or spend an hour with one of the AI boy/girlfriend services online. They flirt and flatter just fine. LLMs understand and can simulate this human behavior pattern, just as much as they do anything else humans do.

You're talking as if evolution and natural selection applies to LLMs. It doesn't. AIs are trained, not evolved (currently). As you yourself are pointing out, they're not biological. However, they are trained to simulate us, and we are biological.

Lichdar

I am speaking of their eventual evolution: as it is, no, they cannot love either. The simulation of mud is not the same as love and nor would it have similar utility in reproduction, self-sacrifice, etc. As in many things, context matters and something not biological fundamentally cannot have the context of biology beyond its training, while even simple cells will alter based on its chemical environment, etc, and is vastly more part of the world.

Perhaps you can say aliens who grew up on earth vs. aliens who are entirely separate.

gwern

Grizzly Man seems like a good counterpoint to Jack London's "To Build A Fire": in both, the protagonist makes fatal misjudgments about reality - London's story is mostly about Nature as the other (the dog is mostly a neutral observer), and Grizzly Man is the mirror about animals. They're even almost symmetrical in time & place: the Yukon might as well be Alaska, and "To Build A Fire"'s 2 versions roughly bracket 100 years before Treadwell's death & Herzog's documentary.

The section on bears reminded me of a short story by Kenji Miyazawa (1896-1933) called 'The Bears of Namotoko.' Here's an internet archive translation with illustrations. To give a quick summary: 

Kojuro is a lone hunter who travels through the mountains of Namotoko with his dog, hunting bears for their gall bladders and pelts. Kojuro does not hate the bears. He regrets the circumstances which force him to be a hunter: "If it is fate which caused you to be born as a bear, then it is the same fate that made me make a living as a hunter." The bears themselves have essentially human inner lives, though they cannot communicate in words (a 'secret world of bears'). Eventually, Kojuro is killed by a bear, after which the bear says "Ah Kojuro, I didn't mean to kill you" and Kojuro apologises for trying to kill the bear.

I am not sure what the moral of the story is. Miyazawa (in all his stories) attributes very human features to animals (such as familial dynamics, appreciation for beauty, social hierarchy, and religious feeling). Despite this, animals continue to act in dangerous, unknowable ways.

I suspect it has to do with the story's roots in old folk tales. In the latter, the Other - whether as a mystical creature, bandit, wild animal, or visitor from a distant land - is often presented as essentially mysterious, much like tsunamis, wars, and famines. The Bears of Namotoko suggests it is not the otherness per se which is the problem; rather, suffering is inevitable because of the coil of existence. We must eat to live.

The movie has a kind of reverence.

I recently watched this excellent comparison[1] of two documentaries made about the same content, and basically agree with the author that reverence for the subject is one of the defining features of good vs bad movies.

it’s also an aesthetic

and an anesthetic

[1]: https://www.youtube.com/watch?v=2Wn-OJSgVQs

I think this series might be easier for some to engage with if they imagine Carlsmith to be challenging priors around what AI minds will be like. I don't claim this is his intention.

For me, the series makes more sense read back to front - starting with some options of how to engage with the future, noting the tendency of LessWrongers to distrust god and nature, noting how that leads towards a slightly dictatorial tendency, suggesting alternative poises and finally noting that just as we can take a less controlling poise towards the future, so might AIs towards us. 

I flesh out this summary here: https://www.lesswrong.com/posts/qxakrNr3JEoRSZ8LE/1-page-outline-of-carlsmith-s-otherness-and-control-series

More provocatively, I find it raises questions in me like "am I distrustful towards AI because of the pain I felt in leaving Christianity and an inability to trust that anyone might really tell me the truth or act in my best interests, despite many people doing so, much of the time".

I would enjoy hearing lenses that others found useful to engage with this work through.

Neil

Have you read Children of Time, by Adrian Tchaikovsky? It's a beautiful and relatively recent science fiction book and winner of the Arthur C. Clarke Award. It approaches the theme of other beings, artificial consciousness, emergent consciousness, empathy, and quite a few other things. That doesn't entirely cut it, but to me it seems like it is speaking directly to your post.

Without spoiling too much, it follows an event in which engineered retroviruses designed to make apes intelligent, hurled into a newly-terraformed planet by human colonists, accidentally makes the portia genus of jumping spider more intelligent instead. The book launches into imaginative and precise evolutionary worldbuilding, tracing the rise of an entire civilisation swarming with what are to us alien minds. (Portia are more on the level of octopi than bears as far as otherness is concerned.) Portia are a type of jumping spider native to rainforests in the East Indies. Despite having only 200,000 neurons or so, they are considered some of the most intelligent critters at their scale. They sustain themselves entirely off eating other types of spiders, by spending hours calculating the best angle of attack and silently crawling around the enemy web, before attacking at once. They seem to be capable of thinking up plans, and are masters of spatial reasoning (they non-coincidentally have particularly good eyes for their size). The word "arachnid" might send a shiver down your spine right now, but by the end of reading this book, I swear, your arachnophobia will be cured (perhaps not at the instinctual level, but at the intellectual do-I-like-spiders-and-think-they-are-cool level). What would you expect a civilisation of intelligent portia to be like? What threats do you think they would face? Well, jot down your predictions and go find out! https://www.amazon.com/Children-Time-Adrian-Tchaikovsky/dp/1447273303 

You might enjoy this analysis of the piece of sci-fi you didn't want to spoil.

There’s a movie that I’m trying not to spoil, in which an AI in a female-robot-body makes a human fall in love with her, and then leaves him to die, trapped and screaming behind thick glass. One of the best bits, I think, is the way, once it is done, she doesn’t even look at him.

The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

Interestingly: I think this post should score highly in the review, and I think a fair number of people will upvote it, but in practice the way it seems to go is that sequences get one representative post that scores highly, and probably this one won't be it.

Wonderful writing! It's rare to see something written so beautifully without sacrificing rigour; often aesthetic and rigorous writing seem like different ends of a spectrum.

I'm not so sure you managed to avoid "major spoilers" for Grizzly Man, but nice subtle reference to the AI in woman's clothing film; got the point across while preserving the surprise for anyone who hasn't seen it. A fantastic film, and probably a major reason I'm more intuitively receptive to the concept of AI risk than most.

Cancer is an other.  A cold is an other.  An allergy is an other.  Unwanted thoughts are an other. A tremor is an other.  There are so many biological processes that are Other that it is easy for me to view bears and AI as part of me just like all those processes are.  I have some influence.  There is some value to loving the malfunctioning systems and parts of our bodies, appreciating them for what they can do for us when they work "properly", and embellishing the "good" feelings.  This salve for the fear of dealing with Other, whether it's AI or bears or groups of people or disease, is just the first thing I wanted to mention.

The second thing is about the bears. Did Herzog or anyone else study the bears after Treadwell and his partner were killed? I suspect it would be in here if it were. Humans kill each other brutally as well. It seems there may be an important side of this story that we are missing. However, you might be right that in the eyes of a bear, a human being is like a possum. Oops, I ran over that possum. Oh well. On the other hand, I have never stared into the eyes of a possum long enough to detect any sense of being-in-this-together. If I did, I might drive more carefully.

As a matter of fact, I do, actually, drive more carefully, and it is because I imagine that the well-fed murderous bear suffers some disapprobation from the other bears, little kids often feel bad about killing bugs, Tibetans actively avoid it, and sentience is precious wherever it may be. Be nice so the AIs don't kill you :-)

The scariest thing for me regarding AI is the simple fact that what constitutes modern human history is mostly lies repeated over and over, especially concentrated in the last 50+ years. With this in mind, what exactly is AI being trained on and aligned to?

If it can't discern truth from lies we are phucked.

If it can discern truth from lies we are phucked.
