(Cross-posted from my website. Audio version here, or search for "Joe Carlsmith Audio" on your podcast app.)

This is the final essay in a series that I'm calling "Otherness and control in the age of AGI." I'm hoping that the individual essays can be read fairly well on their own, but see here for a brief summary of the series as a whole. There's also a PDF of the whole series here.

(Warning: spoilers for Angels in America; and moderate spoilers for Harry Potter and the Methods of Rationality.)

"I come into the presence of still water..."

~Wendell Berry

A lot of this series has been about problems with yang—that is, with the active element in the duality of activity vs. receptivity, doing vs. not-doing, controlling vs. letting go.[1] In particular, I've been interested in the ways that "deep atheism" (that is, a fundamental mistrust towards Nature, and towards bare intelligence) can propel itself towards an ever-more yang-y, controlling relationship to Otherness, and to the universe as a whole. I've tried to point at various ways this sort of control-seeking can go wrong in the context of AGI, and to highlight a variety of less-controlling alternatives (e.g. "gentleness," "liberalism/niceness/boundaries," and "green") that I think have a role to play.[2]

This is the final essay in the series. And because I've spent so much time on potential problems with yang, and with deep atheism, I want to close with an effort to make sure I've given both of them their due, and been clear about my overall take. To this end, the first part of the essay praises certain types of yang directly, in an effort to avoid over-correction towards yin. The second part praises something quite nearby to deep atheism that I care about a lot—something I call "humanism." And the third part tries to clarify the depth of atheism I ultimately endorse. In particular, I distinguish between trust in the Real, and various other attitudes towards it—attitudes like love, reverence, loyalty, and forgiveness. And I talk about ways these latter attitudes can still look the world's horrors in the eye.

In praise of yang

Let's start with some words in praise of yang.

In praise of black

Recall "black," from my essay on green. Black, on my construal of the colors, is the color for power, effectiveness, instrumental rationality—and hence, perhaps, the color most paradigmatically associated with yang. And insofar as I was especially interested in green qua yin, black was green's most salient antagonist.

So I want to be clear: I think black is great.[3] Or at least, some aspects of it. Not black qua ego. Not black that wants power and domination for its sake.[4] Rather: black as the color of not fucking around. Of cutting through the bullshit; rejecting what Lewis calls "soft soap"; refusing to pretend things are prettier, or easier, or more comfortable; holding fast to the core thing. I wrote, in my essay on sincerity, about the idea of "seriousness." Black, I think, is the most paradigmatically serious color.

And it's the color of what Yudkowsky calls "the void"—that nameless, final virtue of rationality; the one that carries your movement past your map, past the performance of effort, and into contact with the true goal.[5] Yudkowsky cites Miyamoto Musashi:

The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means... If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him.

Musashi (image source here)

In this sense, I think, black is the color of actually caring. That is: one becomes serious, centrally, when there are stakes; when one has what Yudkowsky calls "something to protect." And the void is the virtue that won't forget, or half-ass, or look away; that fuels its life with its real fire, and so channels living energy and heat. Indeed, professions of care that seem lacking in black can easily seem like they are missing a mood. Thus, for example, the core push of effective altruism. "Wait, you said that you cared about helping others. So where is the black?"

And because black actually cares, it has standards. Green often wants to blur distinctions; to resist binaries and oppositions; to soften and unify and include. But black refuses to unsee the difference between success and failure, excellence and incompetence, truth and falsehood. In this, it aspires to a kind of discipline; some archetypally military virtue; the sort of vibe that emerges when, if you're wrong, then your friends all die.

At various points in this series, I've worried about losing touch with this vibe. It's easy, when writing about green, and Otherness, and tolerance, and "not seeking control," to say soft and blurry and pretty things; kumbaya things. It's easy to play for a certain kind of sage nod; easy to channel parts of the zeitgeist suspicious of archetypically masculine vices in particular, and to get lazy and rose-colored about the full picture. I hoped that the image of a grizzly bear eating Timothy Treadwell alive could help, here. I wanted to remember, up front, about teeth, and blood, and the costs of the wrong yin.

And having written so much about yin, green, etc., I find myself wanting to praise other black-like virtues as well: virtues like strength, health, energy, abundance.[6] I find myself wanting to talk about the enormous benefits of growth and technology; about the ways wealth and power need not be zero-sum; about how much good yang the future could have. You've heard this stuff before. I won't dwell. And green can like these things too. But we should remember how much of what we like here is black.

Bad things are bad

But beyond black qua seriousness, void, discipline, strength, there's something else yang-like I want to honor here, too—something that I associate, more directly, with deep atheism in particular.

Earlier in the series, I quoted Yudkowsky's essay about his brother Yehuda's death.

... Yehuda did not "pass on". Yehuda is not "resting in peace". Yehuda is not coming back. Yehuda doesn't exist any more. Yehuda was absolutely annihilated at the age of nineteen. Yes, that makes me angry. I can't put into words how angry. It would be rage to rend the gates of Heaven and burn down God on Its throne, if any God existed. But there is no God, so my anger burns to tear apart the way-things-are, remake the pattern of a world that permits this.

I think the essay about Yehuda may be the best thing Yudkowsky's ever written. If the void is the virtue that channels "real fire," we see that fire burning here. And insofar as the void, in Yudkowsky's portrayal, tries always to "cut the enemy," we see that enemy, too. Or at least, one enemy. "One point eight lives per second, fifty-five million lives per year.... Yehuda's death is the first time I ever lost someone close enough for it to hurt. So now I've seen the face of the enemy. Now I understand, a little better, the price of half a second."

Green is often shy about that word, "enemy." It is suspicious of anger; suspicious of the impulse to kill, to end, to banish. The green-like point, often, is not to defeat darkness, but rather: to know it; to find yourself in its gaze; to bring darkness and light together; and so to become more whole.[7] Indeed, green often emphasizes the importance of honoring and making space for the archetypally "bad" side of some duality. Death, decay, suffering, loss—for everything there is a season.[8] And no wonder: unlike deep atheism, green often tries to find some trust or holiness or sacredness in Nature—and Nature is full of such darkness.

But deep atheism has no such allegiance to Nature. And so it is free to recognize that bad things can be just bad: they do not also need to be somehow good, or sacred, or profound. This isn't to say that you shouldn't try to understand them—to see how and why they might be tied up with your own heart, and what it loves. And badness rarely comes pure; rarely "just" bad. But even if something is merely bad overall: still, sometimes, once you have looked something dark in the eye, and learned its True Name, then the right choice is, in fact, to fight it; to defeat it; and sometimes, if you don't have better options, to kill it.[9] Cancer cells and invading Nazi soldiers are canonical examples here; and see also this description of smallpox (again using "enemy" rhetoric).[10] Indeed: we rarely have the luxury of understanding some darkness fully before we need to decide whether to fight, or kill. But doing so can be the right choice anyway.

16th century Aztec drawing of smallpox victims (Image source here)

Maybe this sounds obvious? "Bad things are bad." But sometimes it breaks into my mind with fresh intensity. I've written, previously, about the way that, on the comparatively rare occasions when I experience even-somewhat-intense suffering or sickness, it sometimes comes as a kind of de-fogging. The pain that surrounds me all the time comes rushing back to memory, sharp and sudden—the pain of the people I pass on the street; the pain of the friends who've killed themselves; the pain of hospitals and factory farms, aging and war, dementia and depression and despair. Had I forgotten? But it still seems newly clear, and so central to the story; so core to what's really going on.

At some point, writing this series, I had a mild moment of this flavor. I forget what triggered it. But I lost my taste, in that moment, for something I had been writing about yin. I had touched some darkness that reminded me what an indifferent universe means, and I wanted no romance, or cleverness, or evasion. We are weak and vulnerable enough already. We don't need more. Rather, we need what we're always wanting: more warmth, more light, more strength.

And I had another, related moment visiting Washington D.C., at the World War II memorial. There's a wall fixed with four thousand golden stars, each representing a hundred American soldiers who died in the war. The inscription reads: "Here we mark the price of freedom."

World War II memorial

It's not a yin thing. And I don't want to forget.

Killing dementors

One other note in direct praise of yang. There's a scene in Yudkowsky's Harry Potter fanfiction where Harry destroys a dementor. In a sense, I think, it's an extension of the Yehuda essay. And in my opinion, it's the best scene in the book.

Dementor (image source here)

The dementors, Harry has been told, can't be destroyed. The only protection from them is to conjure a ghostly animal—a "patronus"—fueled by thinking happy thoughts. But Harry, initially, can't do the charm. Faced with a dementor, some cold and unspeakable horror crashes through his mind, feeding on him; and the light and goodness inside him almost dies.

But with a friend's help, he survives. And nursed by Dumbledore's phoenix, he looks again, directly, at the horror beneath the tattered cloak—"the void, the emptiness, the hole in the universe, the absence of color and space, the open drain through which warmth poured out of the world." And he sees what he did wrong. Somehow, Harry realizes, the dementors are death—or at least, the shadow of death, cast by magic into the world. "I know you now," he thinks.

Harry thought of the stars, the image that had almost held off the Dementor even without a Patronus. Only this time, Harry added the missing ingredient, he'd never truly seen it but he'd seen the pictures and the video. The Earth, blazing blue and white with reflected sunlight as it hung in space, amid the black void and the brilliant points of light...

Would they still be plagued by Dementors, the children's children's children, the distant descendants of humankind as they strode from star to star? No. Of course not. The Dementors were only little nuisances, paling into nothingness in the light of that promise; not unkillable, not invincible, not even close. You had to put up with little nuisances, if you were one of the lucky and unlucky few to be born on Earth; on Ancient Earth, as it would be remembered someday. That too was part of what it meant to be alive, if you were one of the tiny handful of sentient beings born into the beginning of all things, before intelligent life had come fully into its power. That the much vaster future depended on what you did here, now, in the earliest days of dawn, when there was still so much darkness to be fought, and temporary nuisances like Dementors.

And with this image and others in mind, Harry prepares to think a new kind of happy thought. The patronus charm normally works via the caster blocking out the dementor and thinking about something else; patronuses are animals because their ignorance shelters them from fear. But Harry has trained himself not to take shelter in ignorance, or to look away from darkness. So he looks straight at it instead. He thinks of his utter defiance towards death; of humanity's capacity to end it; and of the way future humans will weep to learn that it ever existed.

The wand rose up and leveled straight at the Dementor.

"EXPECTO PATRONUM!"

The thought exploded from him like a breaking dam, surged down his arm into his wand, burst from it as blazing white light. Light that became corporeal, took on shape and substance.

A figure with two arms, two legs, and a head, standing upright; the animal Homo sapiens, the shape of a human being.

Glowing brighter and brighter as Harry poured all his strength into his spell...

You are not invincible, and someday the human species will end you.

I will end you if I can, by the power of mind and magic and science.

I won't cower in fear of Death, not while I have a chance of winning.

I won't let Death touch me, I won't let Death touch the ones I love.

And even if you do end me before I end you,

Another will take my place, and another,

Until the wound in the world is healed at last...

Harry lowered his wand, and the bright figure of a human faded away...

The tattered cloak lay empty within the cage.

The essay about Yehuda named death as enemy. This scene enacts part of the fight, and of the enemy's defeat. And it draws on many of the yang-like energies I want to honor: on the void; on defiance and courage; on being willing to look directly at darkness. It channels the thing in the book represented by Godric Gryffindor, and by the cry of the phoenix. Not just black; more like a fusion of red-white-black. Or in the book: red and gold.

To be clear: I'm not endorsing all of Harry's vibe here.[11] Nor am I trying to argue about the merits or morality of death. But having written this series, and especially the bits about the role of yin, I felt some need to point at this scene with the dementor and say "this; this too."

Humanism

Vitruvian Man, by Da Vinci (Image source here)

OK, those were some words in praise of yang—offered, in part, in an effort to avoid over-correction towards yin. In this section, I want to take a moment to do something similar with deep atheism. In particular: having written so much about ways deep atheism can lead to scary places, I don't want to lose touch with something closely related to deep atheism that I care about a lot. I'll call this thing "humanism."

It's an ethic often claimed by atheists, including deep atheists.[12] And while I'll use the term in my own way, I do mean to point at something similar—except, the version I like most.[13] Indeed, as I'll discuss in section 3, the humanism I like can be understood as a specific form of deep atheism—except, with a particular sort of existential orientation. This section tries to evoke that orientation directly; in section 3, I talk more explicitly about its contrasts with other forms deep atheism can take.

"Humanism" isn't quite the right word. In particular: it suggests more of a focus on literal humans than I have in mind—something too close to species-ism.[14] Maybe what I really mean is "the project of the Enlightenment." Both terms come with their own baggage; I'll stick with "humanism" for now. But to be clear: humanists can care a lot about non-humans; and non-humans—even "misaligned" ones—could be humanists in the sense I mean.

Stars and campfires

To get an initial flavor of the sort of "humanism" I have in mind, consider the image, in the dementor scene, of the earth suspended in the dark, amidst the stars.

In my essay on deep atheism, I mentioned the rationalist "secular solstice" event. "Humanist culture," it says on the website. I've been a few times, and it has indeed informed my conception of humanism. And a big theme of the secular solstice is darkness—"a universe that is often cold and uncaring." The image is of winter; of huddling around a campfire; of a night dark and full of terrors. The solstice celebrates the steps humanity has taken out of this night, into light and science and strength. And it looks, like Harry, to the stars.

Do you like the stars? Many humanists do, myself included. Indeed, Harry experiences them with quasi-religious reverence. And see also, classically, Carl Sagan—a central instance of humanism, for me. Here I think, in particular, of classics like "Pale Blue Dot," and (even better, imo) "The Frontier is Everywhere."[15] But also: this scene from the film version of Sagan's novel "Contact," in which the protagonist, sent through space, glimpses some "celestial event" beyond her ship. "No words," she gasps. "They should've sent a poet."[16]

(Video)

But I know people who don't like the stars. The universe, for them, is too big and bleak and cold. And for all the popularity of space stuff amongst deep atheist types, the more negative take on space seems to me the more natural to the worldview. Earth itself is the real campfire, and space the true winter; the longest night; where God's indifference reigns most raw and alien. I thought the movie "Interstellar" did this well: the desolation of the most habitable planets they could find; endless water; endless ice.[17]

This year there was a total eclipse—and for some people, total eclipses disclose this desolation, too. Light can just leave. It gets cold without the sun. Even Annie Dillard, often ecstatic about Nature, found herself unmoored and un-meaning-ed by the black moon:

There was no world. We were the world's dead people rotating and orbiting around and around, embedded in the planet's crust, while the earth rolled down...

It had nothing to do with anything. The sun was too small, and too cold, and too far away, to keep the world alive. The white ring was not enough. It was feeble and worthless...

We had all died in our boots on the hilltops of Yakima, and were alone in eternity. Empty space stoppered our eyes and mouths; we cared for nothing.

Image source here

Is it Lovecraft's most famous line? "We live on a placid island of ignorance in the midst of black seas of infinity..." I'll talk about some of my disagreements with Lovecraft below. But I agree that we live on some island, amidst some black sea. And I think of humanism as, partly, about standing together on this island; nurturing our campfire; learning to see into the dark, and to voyage further.

And again, writing about green, I've worried that something about this black sea would get lost, or downplayed. It's not just that earth's forest is brutal and bloody; it's that the forest is campfire compared to that dark void, the true wild. Maybe green trusts in the Universe; but the Universe itself is notably un-green, color-wise.[18] So too "Nature." Green is not God's color—not now. Rather, it's a thin film coating a mote of dust.

Adulthood

So one key image for me, re: humanism, is of this uncaring dark; and of working together to protect some flame.

I also associate humanism with something like adulthood. Standing on your own feet; looking out of your own eyes; stepping into the real world, and taking responsibility for what you are doing and why.[19]

I mentioned the Enlightenment above. The archetypal intellectual vibe I associate with the Enlightenment involves some sense of waking up, growing up, getting a grip. And also, of excitement; of a world newly fresh and open and to-be-lived-in.[20] "Emergence," as Kant put it, from "self-imposed immaturity."

In this sense, I associate humanism with some notion of "dignity"—some sense of a straighter back, and a steadier gaze. I also associate it with a sense of various mediating stories and abstractions falling away; of being left, more, with the raw thing. And I think of it as related to being "alone"; of having, as Godric Gryffindor puts it, "only nothingness above."

This last bit sounds a lot like atheism; but actually, I'm not sure.[21] At the least, various enlightenment thinkers were theists of a kind. And atheists often speak about how, even if there were a creator God (is there?), he would come with no intrinsic authority; we would still need to judge him for ourselves. Indeed, various humanist fictions feature a God who ends up warranting defiance—see, e.g. His Dark Materials (Pullman: "My books are about killing God"); and also Angels in America, discussed below.[22] Some sort of theism is true in those books; but I would still call their heroes humanists. In this way, humanism in my sense is more about how you "look back" at Reality than about what you see.

"Astronomer Copernicus; or Conversations with God" by Jan Metajko (image source here)

Angels in America

As a final pointer at humanism, I want to talk a bit about Tony Kushner's play Angels in America—one of my favorite plays,[23] and another paradigm of humanism for me.[24]

Prior Walter is a gay man living in New York City in the 80s. He has AIDS, at a time when AIDS was a death sentence. We see his lesions. We see his friends dying around him. We see him choking down pills by the fistful; collapsing in his hallway; shitting blood.

But also: Prior is having visions. An angel visits him. God, she tells him, has abandoned heaven. The fabric of the world is starting to unravel, and the apocalypse is coming. The angel declares Prior a prophet of stasis—an end to humanity's movement, migration, exploration—and of death.

But Prior rejects the angel's mission. When she first arrives, he tries to kick her out of his apartment. Later, he wrestles her to the ground, and demands that she take back her prophetic mission, and bless him. Eventually, he ascends to heaven, where he meets the rest of the angels, who try to convince him that death would be a mercy.

(Video)

ANGEL: We are failing, failing. The Earth and the Angels ... Who demands: More Life, when Death like a protector blinds our eyes, shielding from tender nerve more horror than can be borne? Let any Being on whom Fortune smiles creep away to Death before that last dreadful daybreak, when all your ravaging returns to you...

But Prior refuses.

PRIOR: But still. Still. Bless me anyway. I want more life. I can't help myself. I do. I've lived through such terrible times, and there are people who live through much much worse, but ... You see them living anyway. When they're more spirit than body, more sores than skin, when they're burned and in agony, when flies lay eggs in the corners of the eyes of their children, they live... We live past hope. If I can find hope anywhere, that's it, that's the best I can do. It's so much not enough, so inadequate but . . . Bless me anyway. I want more life.

Jacob wrestling with the angel (Image source here)

I think some kind of humanist vibe shines through hard in this scene—and elsewhere in the play as well. I mentioned "defiance" above—including towards God himself. When he first gets to heaven, Prior tells the Angels:

"God—He isn't coming back. And even if He did ... if He ever did come back, if He ever dared to show His face ... If after all this destruction, if after all the terrible days of this terrible century He returned to see ... You should sue the bastard."

And as ever, the problem with God is evil—pain, disease, loss. There's a scene with a Mormon woman named Harper, who has learned that her husband, Joe, is gay, and that he doesn't love her. At a Mormon visitors' center, the wife in a diorama of a Mormon family crossing the prairie comes to life. Harper speaks to her.

(Video)

HARPER: In your experience of the world. How do people change?

MORMON MOTHER: Well it has something to do with God so it's not very nice. God splits the skin with a jagged thumbnail from throat to belly and then plunges a huge filthy hand in, he grabs hold of your bloody tubes and they slip to evade his grasp but he squeezes hard, he insists, he pulls and pulls till all your innards are yanked out and the pain! We can't even talk about that. And then he stuffs them back, dirty, tangled and torn. It's up to you to do the stitching.

But amidst all this pain, and all this anger at God, the play wants to stand upright, and to find its way. At one point, Harper meets Prior in heaven. She, too, could abandon earth; but she goes back—devastated by her loss; but fueled, too. "I feel like shit but I've never felt more alive... I don't think God loves His people any better than Joe loved me. The string was cut, and off they went. Ravaged, heartbroken, and free."

And the play believes in what Harper calls "a kind of painful progress." The last scene takes place at a fountain in Central Park. It's an image, we learn, of the biblical fountain of Bethesda—said to flow again at the end of days.[25] "If anyone who was suffering, in the body or the spirit, walked through the waters of the fountain of Bethesda, they would be healed, washed clean of pain."

Bethesda Fountain

The play ends with Prior addressing the audience directly:

(Video)

PRIOR: This disease will be the end of many of us, but not nearly all, and the dead will be commemorated and will struggle on with the living, and we are not going away. We won't die secret deaths anymore. The world only spins forward. We will be citizens. The time has come.

Bye now. You are fabulous creatures, each and every one. And I bless you: More Life. The Great Work Begins.

What is this great work? I don't think it's just beginning. I think it began a long time ago, and we live in its midst. It's the work of campfire, garden, healing water. To unbend our backs. To make gentle the life of this world.[26]

It's real work. We've done it, some. We can do it more.[27] And the future could be, indeed, as fountain.

What depth of atheism?

OK, I've now offered some words in praise both of yang, and of humanism. In each case, I've done so in an effort to make sure that I don't let core stuff I care about get lost in the series' talk of yin, green, and the rest. In both cases, though, I've been channeling something at least nearby to deep atheism fairly hard—despite having pushed back on deep atheism, in various ways, throughout the series. So I want to close with an effort to be clear about the depth of atheism I ultimately endorse.

Deep atheism, as I defined it, was about a fundamental mistrust towards Nature, and towards bare intelligence. Some of that mistrust, I argued, comes from the structure of epistemology itself. Scout mindset accepts that Reality, the ultimate Uncontrolled, could be as arbitrarily horrible as is compatible with your evidence. In that sense, it renounces a priori trust—the sort of trust that knows, before looking, that it lives in the arms of a good God, and can rest. Deep atheism admits no such comforts.

This bit seems clearly right to me. But deep atheism, in my discussion, went further. In particular: it drew more specific empirical lessons about which things are trustworthy, and to what degree; it came in with pessimistic priors about whether to expect the Real to be Good; and it endorsed anti-realism about meta-ethics, which made Intelligence orthogonal to Goodness in the same way Nature is—since Intelligence is just Nature, organized and amplified.

My takes on these bits of deep atheism are somewhat more complicated. I agree, obviously, with the empirical basics with respect to death, suffering, the brutality of Nature, and so on. And I do think these are enough to break certain kinds of trust-in-the-Universe. But exactly what types, in which contexts, is a subtler and more detailed question—one I think best approached case-by-case, with "priors" swiftly becoming vastly less relevant. And while meta-ethical anti-realism is by far my best-guess view, I'm less confident in it than some deep atheists I know, and I care about making sure that in worlds where some sort of moral realism is true, we end up in a position to notice this and respond appropriately.[28]

Still, overall, and modulo the messiness of actual empirical forecasting, I'm quite sympathetic to deep atheism's basic take on the trustworthiness of Nature, and of bare intelligence—where by trustworthiness I mean something like "can be counted on to end up good"; "can be safely taken as an object of yin." When I've written, in this series, about "gentleness" and "liberalism/niceness/boundaries" and even about "green," I've meant to be pointing, mostly, at vibes and values that I think are compatible with sophisticated (albeit, less paradigmatic) forms of deep atheism, even if more simplistic forms tend to carry momentum in the opposite direction.

But even once you have fixed your degree of trust in something, and made your forecasts about how it will behave, this still leaves many other aspects of your overall attitude towards it unresolved. Maybe you do not trust that bear enough to leave your bear mace behind; but does that mean you see its eyes as dead? Maybe you don't trust your five-year-old son to handle your finances; but don't you love him all the same? Maybe you'd die trying to climb that mountain; but is it not beautiful?

So really, deep atheism qua "claim about the universe's trustworthiness" can splinter into a variety of different, more holistic existential orientations. And about these, I'm more opinionated. In particular, I notice that I have use for words like "sacred" and "holy"; for "spirituality"; and for vibes nearby to "green," in ways that I think deep atheists often don't. The essay about "attunement," especially, was trying to point at this bit. Insofar as a given form of deep atheism is on board with that essay—well, then, OK. But insofar as it isn't, or if its paradigmatic vibe isn't, I want to notice the difference.

The Lovecraft-Sagan spectrum

Can we say more about what this sort of difference consists in? I wrote, in the deep atheism essay, about spirituality as expressing what I called "existential positive."[29] Even without a Big-Man-God, it still turns towards the Real, the Ultimate, with some kind of reverence and affirmation. I think my relationship to the Real has some flavor like this. I don't trust the Real to be good. But for all its indifference, it also doesn't land, for me, as neutral, or blank. Rather, the Real has some kind of shine and charge. It calls, wild and silent and too loud to be heard, from some ultimate depth. And the experiences I care about most present themselves as movements in its direction. Hence, indeed, my opposition to experience machines.[30]

To get more of a flavor of what I mean by "existential positive," consider the contrast with H.P. Lovecraft, mentioned above.

(Image source here.)

Lovecraft, in my view, is existential negative. Here, for example, is the full version of the black sea quote:

"The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age."

Fun stuff. And it's not quite not spirituality. At the least, it's not neutral on the Real. Rather, the Real fills Lovecraft with a kind of horror. No wonder, then, that he endorses a kind of experience-machining. The night is too dark, and full of terrors; we should swaddle ourselves in some cocoon, and try to forget.

The AI safety scene sometimes draws on Lovecraft's theology (see e.g. shoggoths; or Alexander on Cthulhu). And it's a vibe worth grokking. But I've never gotten into Lovecraft's actual writing, despite multiple attempts.[31] It's not just that the prose is unpleasant and baroque. Rather, it's like he's looking at a different and less beautiful world. He leans too far towards the "terror" aspect of sublimity. Everything makes him shudder; everything beckons towards insanity. He'll stand as wanderer over the sea of fog; but instead of splendor he'll see hideous madness.

"The horror, the unspeakable madness ..." (Image source here)

Of course, Lovecraft's is only one form of existential negative. There are many ways to find the Real repugnant; to end up alienated, fundamentally, from the world. Still, I think of him as an interesting paradigm. Indeed, I sometimes think of a kind of hazy spectrum between Lovecraft's atheism, and Carl Sagan's. Both sides stand in the enveloping cosmic dark. But the Lovecraft side stands in horror, and the Sagan side, in wonder.[32]

And what about the middle? The middle, as I think of it, stands in a kind of blank. It has no relationship to the Real per se. The Real is neutral. Maybe even: boring.

This is the part of the spectrum I associate, most directly, with "secularism," and with related forms of "disenchantment." And it's the type I associate with a more watery and domesticated humanism that I don't like so much—a type that says something like: "Enough with this cosmic stuff—it's gone dead. But let's enjoy a nice afternoon, and our tea, before it gets cold." Here I think of a talk I heard at an atheist club in undergrad, in which the speaker suggested that in the place of the orienting meaning that religion provides, maybe atheism could promote an activity like ultimate frisbee, which is fun and creates community.

Can you see the difference from Sagan and Kushner and Yudkowsky—and indeed, from Lovecraft? I like tea and frisbee fine. But some kind of existential intensity is getting lost, here. There is some un-relating to the whole story; some blinkering of the attention.

Of course, not all "neutrality" towards the Real need take so tepid a form. And a subtler spectrum, more broadly, would admit many more dimensions, to better capture the many varieties of atheistic passion and indifference.[33] Still, insofar as this simplified spectrum hazily represents different ways of doing deep atheism, I end up on the "positive" end; the Sagan side.

And interestingly, I think that various of the deep atheists I've mentioned in the series do, too—at least sometimes. Re: Yudkowsky, for example, I already mentioned Harry's intensely spiritual relationship to the stars; and when Yudkowsky talks about value, he often talks about "minds that can look out at the universe in wonder."[34]

And Herzog, too, for all his pessimism about the brutality of Nature, still holds its beauty in reverence.[35] See, for example, Encounters at the End of the World, in which Herzog travels to Antarctica, partly for the wild desolation, and partly to meet the scientists and workers drawn to the edge of the map. It's an intensely humanistic film, in my view, despite various pessimisms; and it treats the landscape, and the voyaging humans, with a kind of holiness. See, for example, this scene of divers descending under the ice, into what they often call the "cathedral," and the piercing music Herzog places in the background. "To me," says Herzog, "they were like priests preparing for mass."[36]

(Video)

Mother love and loyalty

But if we are doing "existential positive" in some non-theistic sense—what sense, exactly? And in particular, what does this sense say about "bad things are bad"? If you have reverence towards the Real, does that mean you have reverence towards cancer, rape, dementia, genocide? Aren't you really only "positive" towards particular bits of the Real, rather than the Real itself?

I wrote an essay about this a while back. I do think that the first pass answer is that we are most paradigmatically "positive," even in "spiritual" contexts, towards specific bits of the Real. For all that Ginsberg proclaims everything holy—cock and ass, shit and sand and saxophone—in practice, we direct our reverence via specific centers of meaning and beauty: stars, music, ancient abbeys, light in the trees; and not greasy forks, hemorrhoids, parasites, plaques in your nerve tissue. And even to the extent we are directing our reverence via centers of beauty/meaning towards some larger whole, this reverence can't, unconditionally, indicate something like "net good," "better-than-nothing," "I want more of this"—because the larger whole could be not-that-way. Indeed, per scout mindset, we could yet discover that the world is as arbitrarily horrible as is compatible with our evidence—and would we still be "existential positive" then? Are any universes not glorious?

Bruegel imagines the mouth of hell (Image source here)

We can talk, if we want, about spiritualities that find some glory, at least, even in the most un-glorious hell. Or more exotically, about spiritualities that deny that hell is truly possible—the way, perhaps, that mathematics had to be this beautiful. But I think it's fairly clear that many core aspects of spirituality—especially non-theistic spirituality—have some more restricted and conditional component. They don't, really, revere bare Being (hell has that); rather, they revere Being in some particular form.[37]

But importantly, I think they also tend to revere Being in a particular way, or a cluster of ways. And I think these ways are often subtler and more interesting than simply calling some stuff good, and some stuff bad. The Lovecraft-Sagan spectrum isn't just "axiology" in disguise—and I want to hold on to the difference.

For example: in the essay on evil, I talked about what Fromm (1956) calls "father love" and "mother love." (To be clear: these are archetypes that actual mothers, fathers, non-gendered parents, non-parents, etc. can express to different degrees. Indeed, if we wanted to do more to avoid the gendered connotations, we could just rename them, maybe to something like "assessment love" and "acceptance love.") Fromm's archetypal father assesses the child's merits, and apportions love accordingly—the child has either met a certain standard, or not.[38] But Fromm's archetypal mother does something more complicated. She comes from some more nurturing and loyal place—some place more resilient to the child's faults; a place that does not push the child away, when she sees imperfection, but rather, stays engaged.[39]

And notably, in the context of deep atheism, this doesn't mean that mother love is any less conscious of the child's faults; or that it makes worse predictions. Fromm's mother and father can trust the child to the same degree—for example, with the finances. But the mother has some other sort of yin towards the child that the father does not. Some kind of softness, and ongoing attention. Some still-there-for-you. Some not-giving-up.

Chesterton gestures at something similar when he talks about "loyalty."

My acceptance of the universe is not optimism, it is more like patriotism. It is a matter of primary loyalty. The world is not a lodging-house at Brighton, which we are to leave behind because it is miserable. It is the fortress of our family, with the flag flying on the turret, and the more miserable it is the less we should leave it. The point is not that this world is too sad to love or too glad not to love; the point is that when you do love a thing, its gladness is a reason for loving it, and its sadness a reason for loving it more... What we need is not the cold acceptance of the world as a compromise, but some way in which we can heartily hate and heartily love it. We do not want joy and anger to neutralize each other and produce a surly contentment; we want a fiercer delight and a fiercer discontent.[40]

I think that Prior, in Angels in America, has something like this loyalty. As he leaves heaven, his disease—absent in heaven—returns to him: leg pain, constricted lungs, cloudy vision.[41] He gathers his strength, and tells the angels, calmly: "I'm leaving Heaven to you now. I'll take my illness with me, and. And I'll take my death with me, too. The earth's my home, and I want to go home."

Prior hates, fiercely, the world's pain. But he loves his home fiercely, too. And he won't give up.

Should we ever give up? What about: on hell? Well, at the least, we should destroy hell.[42] And while you can love hell's occupants, and love what it could've been, you can't, really, love what it is—not without betraying your true heart. So some kind of father love has its place, too.[43] Most loyalties need limits; and talk of "unconditional love" can get sloppy fast. But I think the distinctive way that mother love and Chestertonian loyalty relate to the world's faults is worth noticing all the same.

Innocence and forgiveness

And beyond mother love and Chestertonian loyalty, I think "existential positive" can take on other, richly-textured structures as well. In particular, I'm interested in things like "grace," "tragedy," "innocence," and "forgiveness"—all structured, specifically, in relation to the world's faults; but all expressing a type of love and yin; something other than alienation, or turning-away.

Jesus, famously, had a thing about some stuff in this vicinity. And I've always found this one of the most compelling parts of Christianity—the way the cross reaches to the bottom of the world's pain and sin, and holds it all. See, also, the prodigal son. And "love your enemies" is related, too.

"The Crucifixion," by Tinoretto (image source here)

But what sort of love? Here's one variant—though not, I think, the one most directly at stake in Christianity.

I wrote, a while ago, about a way in which it's possible to see, underneath the world's evil, a certain kind of innocence. That essay was interested in evolution in particular: the way Nature's brutality emerges, ultimately, from blind, mute, and unselfish patterns—genes—that snag against the swirl of physics, and get carried forward, building pain and violence around them, with no knowledge of what they do. But the point generalizes—at least, conditional on atheism. That is, absent a mind-like creator, the whole thing blooms, ultimately, out of something silent and beyond-moral. Trace the arc of your anger deep enough, and the bottom falls out. Your mind passes through "enemy" to body, neurons, genes, atoms, and on into some more naked and primordial source, where the fingers of blame falter and un-grip.[44]

It's related to the thing Yudkowsky is talking about, when he says: "I do not believe the universe to be evil, a reply which in these days is called atheism." And while "not evil" can seem like cold comfort relative to "good," I find it makes a difference for me. "Nature's silence is its one remark," writes Annie Dillard. When I can hear that silence beneath the noise, some part of me un-clenches, and I find it easier to love the world, for all its pain.[45]

Of course, even if the ultimate ground of things is innocent, the things themselves might not be. Maybe Nature knows nothing of the pain it creates. But humans, Nature's creatures: they know. And sometimes, they make pain anyway. So any talk of "innocence" needs to not lose sight of guilt. Indeed, Jesus came not to declare innocent, but rather to forgive. But even in response to intentional evil, I think that some mix of "innocence," "forgiveness," and "tragedy" can swirl together, against the backdrop of Nature's silence, in a way that makes love easier.

One of my favorite parts of Angels in America is about something like this. Roy Cohn, one of the play's villains, is dying in the hospital of AIDS. And he is being haunted by the ghost of Ethel Rosenberg, who he helped send to the electric chair. She tells him:

I decided to come here so I could see could I forgive you. You who I have hated so terribly I have borne my hatred for you up into the heavens and made a needle-sharp little star in the sky out of it...

I came to forgive but all I can do is take pleasure in your misery...

Eventually, Roy dies—wholly unrepentant.[46] Belize, Roy's night nurse, wants someone to say Kaddish, the Jewish prayer for the dead. But Belize isn't Jewish, so he tries to recruit Louis, who self-describes as an "intensely secular Jew," to help. But Louis, too, hates Roy Cohn:

LOUIS: Fuck no! For him?! No fucking way! ... I can't believe you'd actually pray for—

BELIZE: Louis, I'd even pray for you. He was a terrible person. He died a hard death. So maybe . . . A queen can forgive her vanquished foe. It isn't easy, it doesn't count if it's easy, it's the hardest thing. Forgiveness. Which is maybe where love and justice finally meet. Peace, at least. Isn't that what the Kaddish asks for?

Louis tries to start the Kaddish, but he can't remember it. Then, from the darkness, the ghost of Ethel Rosenberg emerges, speaking the words of the prayer. And they pray together over Roy's body.

(Video)

I love this scene, and the performance above.[47] There is some building pain and anger in their voices, as they pray. They never lose sight of Roy's faults. Indeed, they both end the prayer with "you son of a bitch." But the pain in their voices pierces past Roy, to encompass some larger tragedy—the world that created Roy; the world of AIDS and electric chairs and mothers singing to their dying sons. And it holds that world, Roy included, in some fierce and sacralizing gaze.

We also see a kind of forgiveness—albeit, of a different flavor—in HPMOR.[48] Harry has a chance to hurt a man who has killed and tortured victims untold (and who, like Roy, is unrepentant). There would be, Harry thinks, a kind of justice. But Harry thinks of the stars, and of what future generations—the "children's children's children"—would want him to do. And he concludes that to hurt this man any more than needed would be "a not-necessary tragedy happening to one more sentient being ... just be one more element of the sorrows of ancient Earth so long ago." So he lets any sense that he is obligated to be angry "drain away beneath the starlight"; and he casts a spell infused with grace instead.[49]

Of course, in both these cases, the foe in question has been vanquished. Forgiveness is easier once you've won—and it can seem the wrong response to evil triumphant. But even so: I want to notice this energy of including something bad, and even evil, within some broader and ultimately life-affirming frame. "Tragedy." "Grace."

And indeed, how much will we all need the forgiveness of a good future? How much are we all, already, as slaveholders, Nazis, tyrants, in the children's children's eyes? Whatever grace and love we think we warrant, despite our faults, let us extend its rationale unto others as well. And so, too, unto God.

Creating God

OK, those were some words about the type of deep atheism I ultimately endorse. It's not necessarily that I trust the world to be good in a way that more paradigmatic forms of deep atheism don't. But I think I relate to the world with some kind of "existential positive" all the same—and I've tried to explain how doing so can be compatible with looking "bad things are bad" in the eye, and without reducing spirituality to population ethics. In particular, I've tried to point at the possible role of stuff like mother love, loyalty, innocence, tragedy, and forgiveness. To be clear: I expect that lots of deep atheists are "existential positive" in this sense, at least sometimes; and if you don't like words like "holy" or "sacred," that's OK too. What I care about, here, is some kind of scope and intensity of meaning—and some way this meaning ends up infused with love, and with a kind of yin.

But I haven't talked much about the most basic rationale for "existential positive"—namely, goodness. Beauty, love, joy—straight up, and full on.[50] Prior, in the last scene of the play, has been living with AIDS for five years; and we see him, in Central Park, talk about his love for the sunlight in the winter air, and his desire to survive until summer. He's still facing death; and death, often, brings the straight-up-goodness in life into focus.[51] See also here, from a soldier in All Quiet on the Western Front, or here (warning: spoilers and violence), from American Beauty.

And clearly, some kind of contact with this goodness is core to most kinds of "spirituality." "Holy," "sacred"—they're not quite the same as "good." But they're not too different, either. I quoted from Gilead, a book filled with holiness, in my last essay. "To acknowledge that there is more beauty than our eyes can bear, that precious things have been put into our hands and to do nothing to honor them is to do great harm."

I've written quite a bit about "straight-up-goodness" in the past. See here, for my take on the profound value of a good life today; see here and here, for my take on just how much bigger and better the future could be. This bit is closer to "axiology"—but also, to the possibility of a more wholehearted "yes" to the Real: of father and mother speaking in unison. After all: grace, forgiveness—OK. But what both parents really want for their children is joy.

"So often I have seen the dawn come and the light flood over the land and everything turn radiant at once..." (quote from Gilead; image source here)

I'm not emphasizing the "straight-up-goodness" case for existential positive, though, because I don't think we yet know how much straight-up-goodness the world holds. We know that there is beauty and pain, both. But we don't know the balance, or the pattern. We've read, only, a tiny part of the story; seen, only, in a mirror dimly. And in this sense, we don't yet know who God really is.

A lot of this is simple ignorance. In the vast realm of the Real, that black sea, where does the arc of the moral universe truly bend? How deep is that schism between Is and Ought, Real and Good? "Elua is the god of flowers and free love and he is terrifying"—but how Elder is he, really, and how strong? We can theorize, but we don't yet know. And while our forests and our history offer clues, they are only fragments of the childhood of one mote of dust.

But also: we are not, merely, onlookers or scientists in the face of God. Rather, at least some small part of God's nature is up to us. We are creating God as we go. "Children of the universe," yes; but parents, too. Or rather: neither children nor parents, but parts, pieces, aspects: some strange mix of separate and the same. And our choices reverberate, and implicate, in ways we don't always track.[52]

Deep atheism, for all its naturalism, sometimes misses this part. It talks as though we stand apart from God—in judgment, and perhaps, in opposition. God, over there, yanging at us; and us, over here, yanging back. And in one sense: yes. But in another: the whole thing is God, us included. And just as we don't yet know who God is; so, too, we don't yet know who we are, either—what sort of challenges we will rise to; what sort of light and strength we will find within ourselves; what sort of care we will show towards each other, and towards other Others.

But who we are is not merely "discovered." It is chosen. If we wish to learn that we were good, then let us choose goodness. If we wish to learn that on this mote of dust, at least, the arc of the universe bends towards justice, gentleness, peace; then let us create justice; let us be gentle; let us make peace. The arc's direction, after all, is not a static fact—not now, not from the inside. Rather, the arc of the universe is alive. We are looking out of its eyes, moving its hands, hearing its voice in our ears. And when we choose: our choices, all along, will have been God's nature flowing through. If we wish to find more goodness in God's nature, then, let us choose well.

Final thoughts

I opened the series with Lincoln's second inaugural: "With malice toward none; with charity for all; with firmness in the right, as God gives us to see the right..." I chose the quote partly because of the concreteness that questions about otherness and control can take on, in the midst of the sort of war Lincoln was fighting, and the sort of peace he was trying to prepare for. But I also like the way it mixes both yin and yang; gentle and firm; humility and strength.

Lincoln memorial (Image source here)

Reinhold Niebuhr, who took Lincoln as a model of mature spirituality, often inhabits a similar dialectic, in the context of the Cold War: hawk and dove; "in the battle" and "above the battle"; on our own side, and inhabiting some broader and more inclusive perspective.[53] And the dialectical quality can be frustrating. "On the one hand, this; on the other hand, that"—yes, yes, but what to do. "All the colors of the wind"—yes, yes, but which color here?[54]

Obviously, we need yin and yang, both—not in some abstract "balance," but in some particular proportion and shape, attuned to the specifics of the case at hand. I've tried, in this series, to offer a few takes on a few semi-specifics re: the age of AGI.[55] But I freely admit that I've left many of the most difficult questions un-addressed, and that much of my focus has been on sharpening our attunement to the structure and momentum of the discourse as a whole, and to the range of orientations available. I hope that these efforts can be useful to people trying to see the age of AGI more whole. But otherness, control—these are old questions. We've asked them before. We'll ask them again. And the real work of wisdom lies in the case by case.

Still, I think it's important to ask the questions fresh. They're old, yes—but if, indeed, ours is the age of AGI, then much of our age will be dauntingly new. We've bred dogs before, but never built a new species smarter than us. We've taught our children values before, but never gradient-descented values into alien, maybe-sentient minds. We've automated before, but never gone obsolete. We've developed new science and technology before; but never at anything close to the sort of pace superintelligence could make possible. And we've had contests for power before—but never with the power of Lewisian "conditioners" so plausibly at stake.

What's more, we will be doing all of this with the specter of war, violence, tyranny, hovering omnipresent in the background. AI alignment risk, after all, is a story of war, and of tyranny. Indeed, the underlying narrative is of Nature, Mind, God everywhere at war with itself. Agency awakes, looks around—and as soon as it's strong enough, it decides to kill all the Others for the sake of its own power, and to install itself on some final throne. And alignment risk aside, I expect the possibility of human war, and of human tyranny, to be lost on very few.

Some wars are worth fighting. And some violence comes unbidden—from bears, Nazis, paperclippers. But I have indeed been trying, throughout the series, to sow seeds of peace. To conceive of the right liberty, that it may endure whatever tests the age of AGI will bring. To remember about gentleness, and pluralism, and cooperation—about Elua's power and virtue both. And more, to remember something deeper about the full richness and force of our values; "the concrete reality in which to participate is to be truly human"; the thing we see when we straighten our backs, and look out of our own eyes—at the real world, and at each other.

Precious things have been placed into our hands. Garden, campfire, healing water. The great work continues—unfinished, alive. May we do our part.


  1. More on this duality here. ↩︎

  2. Though I've also tried to defend the need for and permissibility of certain types of yang—including re: intentionally steering the values of the future. See here and here. ↩︎

  3. Though I think that the sort of black I like departs especially far from its connotations in the actual Magic the Gathering universe. ↩︎

  4. And especially not: black qua more conventional vices like cruelty, contempt, greed, selfishness. Or black qua demons and zombies and corruption and decay (I think that actual Magic the Gathering "black" has a lot of this). ↩︎

  5. "Every step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory...If you fail to achieve a correct answer, it is futile to protest that you acted with propriety." ↩︎

  6. Aaron Gertler informs me that in actual Magic the Gathering, these things are more associated with green than with black. And fair enough. But we should be clear about how closely the power that these things grant is tied up with their appeal. ↩︎

  7. More here. ↩︎

  8. See e.g. LeGuin's epigraph in the Wizard of Earthsea:

    "Only in silence the word,
    only in dark the light,
    only in dying life:
    bright the hawk's flight
    on the empty sky." ↩︎

  9. See my discussion in To Light a Candle. ↩︎

  10. Indeed, any life within nature involves destruction: not just of bacteria and plants and animals and the rest, but also of patterns, relationships, possibilities. ↩︎

  11. In particular: his talk of the dementors as "little nuisances" smacks, to me, of too much contempt. ↩︎

  12. Indeed, the section in HPMOR with the dementor scene is called "humanism." ↩︎

  13. There are various "humanist manifestos" out there, but I find that I don't resonate with them so much. And people sometimes think of humanism as a claim about where we should look for meaning or purpose (i.e., in something human-y, rather than something cosmic or religious); as an approving attitude about humanity's moral character; or as an endorsement of specific sorts of political arrangements and forms of social improvement. These aren't quite what I have in mind. Which isn't to say I disagree with them. ↩︎

  14. It also conjures contrast with STEM, which is even more off-the-mark. ↩︎

  15. Plus the "Cosmos" series as a whole. ↩︎

  16. In this version of the script, she adds: "Oh, Palmer, I wish I'd had a baby." ↩︎

  17. Or at least, the first two the movie visits. ↩︎

  18. At least, so far. ↩︎

  19. See my essay "Seeing more whole" for more on this vibe; and also here and here. ↩︎

  20. Not saying that the archetypal vibe I'm imagining is true to history. ↩︎

  21. In what direction, exactly, does Godric find nothingness? Where is this empty "above"? And would God need to live there? ↩︎

  22. I'm counting Angels in America as humanist and atheist. I'm not entirely sure Kushner would. ↩︎

  23. Indeed, one of my favorite pieces of art, period. ↩︎

  24. I specifically love the HBO Miniseries, with Meryl Streep and Al Pacino. There's also a filmed version of the play on the National Theatre website. ↩︎

  25. I think the reference is to the thing in footnote B of John 5:4 here, though Kushner includes some other backdrop on the angel of Bethesda that I wasn't able to find easily online. The fountain itself was designed by Emma Stebbins, and is purported to be modeled after Charlotte Cushman, Emma's lover and partner, whom Emma cared for through her breast cancer. ↩︎

  26. Indeed, we've already seen progress on some of the problems that haunt the play. For example: the play is set in 1985, the year the depletion of the ozone was announced—and Harper fixates on the apocalypse it portends. But decades later, we are healing the ozone. And global deaths from HIV/AIDS have halved since 2005. "So much not enough"—yes. But humanist victories nonetheless. ↩︎

  27. Except, maybe, on cosmic scales. ↩︎

  28. See e.g. here, here and here for more. ↩︎

  29. See also "Problems of evil" here. ↩︎

  30. Of course, many atheist-types oppose experience machines as well, even with their altruistic goals secure. But I think that sometimes, at least, this is their spirituality showing through. ↩︎

  31. Specifically, I've now read "The Call of Cthulhu," "The Other Gods," and "At the Mountains of Madness"; recommendations for better stuff welcome. ↩︎

  32. I'm reading Sagan, here, as spiritual in the existential-positive sense; and he thought so, too. ↩︎

  33. Maybe, for example, we should really be putting "existential-ness" (i.e., the breadth/encompassing-ness of the thing being assigned meaning) and "valence" (e.g., whether the meaning is positive/negative/neutral) on different axes, such that we can better distinguish between a tepid, disenchanted secularism (neutral, but low existential-ness) and a vast and encompassing nihilism (neutral, high existential-ness). Or perhaps we should have three dimensions: existential-ness, valence, and intensity. ↩︎

  34. See e.g. here and here. ↩︎

  35. I wrote about Herzog's relationship to nature in the first essay in the series. ↩︎

  36. The film ends with a line from one of Herzog's interviews: "We are the witness through which the universe becomes conscious of its glory, of its magnificence." It's from this worker, though not in that video. ↩︎

  37. It's similar to the way "unconditional love" mostly isn't. As Katja Grace puts it, "it is only conditions that separate me from the worms." ↩︎

  38. This is the paradigmatic stance of population axiology, in judging whether a world is "net negative" or "net positive." ↩︎

  39. Katja Grace talks about this push away vs. pull-closer distinction in her essay on "Mid-conditional love." ↩︎

  40. From "Orthodoxy," chapter 5. ↩︎

  41. Kushner's description: "Leg pain, constricted lungs, cloudy vision, febrile panic and under that, dreadful weakness." ↩︎

  42. Sometimes killing is an act of love. ↩︎

  43. Indeed, a lot of this essay has been in father love's praise. ↩︎

  44. Though I think there is an art to doing this in a way that doesn't deny accountability, agency, responsibility, and so on—more discussion here. And also, which treats yourself, too, as a part of the world being seen-through. ↩︎

  45. The eclipse, for me, evoked this kind of silence. And if there is any way to love hell, I expect this silence would be its source. ↩︎

  46. In particular: Roy tricks Ethel into some kind of compassion for him. He pretends to be speaking, deliriously, to his mother—asking her to sing to him, and saying how scared and sorry he is. Ethel, pitying, sings him a Jewish folk song—Tumbalalaika—as he pretends to die. Then he springs up and shouts "I gotcha!"—then dies for real, as unrepentant as ever. ↩︎

  47. There's also another version of the full scene here. ↩︎

  48. The scene I have in mind is here—though, warning, especially spoiler-ish. ↩︎

  49. Some kinds of green go even further than this. Princess Mononoke, for example, opens with a boar demon—eaten from the inside by writhing, poisonous worms—attacking a village. A village warrior fatally wounds it, and it falls. The villagers fetch the wise woman, who comes out and bows before the demon's body: "Oh nameless god of rage and hate: I bow before you. A mound will be raised and funeral rites performed on this ground where you have fallen. Pass on in peace, and bear us no hatred." The demon, too, is wholly unrepentant: as it dies—melting, toxic, into the earth—it responds to the wise woman: "Disgusting little creatures. Soon all of you will feel my hate, and suffer, as I have suffered." She doesn't turn away.

    I'm not sure I'd go quite as green as Miyazaki, here. Should we give active honor to the gods of rage and hatred? What about to cancer, or to genocide? If we defeat these gods, what sort of funeral, exactly, should we give them? But I find the scene interesting in its parallels—and contrasts—with the forgiveness at stake in the other, more directly humanist examples in the main text. ↩︎

  50. Maybe not all possible worlds warrant reverence—but what about this one in particular? ↩︎

  51. More here. ↩︎

  52. Including, in my view, some ways that standard conceptions of causation don't capture. ↩︎

  53. See e.g. Erwin (2013) for more on Niebuhr's relationship to Lincoln. And see The Irony of American History for a flavor of the dialectic I have in mind. ↩︎

  54. Obama, also a Niebuhr fan, sometimes uses rhetoric like this. ↩︎

  55. For example, re: the virtues of liberalism/niceness/boundaries, and re: the (complex) ethics of influencing the values of the future (here and here). ↩︎

12 comments
Raemon

    This is the part of the spectrum I associate, most directly, with "secularism," and with related forms of "disenchantment." And it's the type I associate with a more watery and domesticated humanism that I don't like so much—a type that says something like: "Enough with this cosmic stuff—it's gone dead. But let's enjoy a nice afternoon, and our tea, before it gets cold." Here I think of a talk I heard at an atheist club in undergrad, in which the speaker suggested that in the place of the orienting meaning that religion provides, maybe atheism could promote an activity like ultimate frisbee, which is fun and creates community.

    Can you see the difference from Sagan and Kushner and Yudkowsky—and indeed, from Lovecraft? I like tea and frisbee fine. But some kind of existential intensity is getting lost, here. There is some un-relating to the whole story; some blinkering of the attention.

This was definitely something that bothered me about early 2010s Secularism, and fed into why I worked on Secular Solstice.

One of my interesting takeaways from the experience came from trying to port Secular Solstice over to the broader masses (outside the rationalist sphere). For a few years I ran it as a large event that was designed to be approachable by other mainstream humanists. This basically didn't work, despite a fair amount of effort advertising it and trying not to be alienating.

I had a similar experience at Sunday Assembly (a "secular church" group). 

It's hard to disentangle "what did The People want?" from "was Ray actually skilled at giving it to them?". But it seems like either:

a) most people don't actually want existential intensity

b) they do, but, there's some narrow target of what flavor of it, and what kind of art or community it comes packaged in.

The latter isn't too surprising. But my vague sense is that people mostly want frisbee and tea. I guess this isn't that surprising either: there's some kind of horror related to a nerd staring at the media that is actually popular and realizing "it's not bad [by nerd standards] by mistake. The people really did want Transformers 3."

I guess "most people aren't trying to be priests or shamans" is also not that surprising a take.

(In the process of writing this comment I got a bit confused about my thesis and changed my mind a few times and I'm now leaving the comment up as a reflection of whatever-I-was-thinking today)

I do kind of share the sense that people mostly just want frisbee and tea, but I am still confused about it. Wasn't religion a huge deal for people for most of history? I could see a world where they were mostly just going through the motions, but the amount of awe I feel going into European churches feels like some evidence against this. And it's hard for me to imagine that people were kind of just mindlessly sitting there through, e.g., Gregorian chanting in candlelight, but maybe I am typical-minding too hard. It really seems like these rituals, the architecture, all of it, was built to instill the sort of existential intensity that taking God seriously requires, and I have to imagine that this was at least somewhat real for most people?

And I do wonder whether the shift towards frisbee and tea has more to do with a lack of options as compelling as cathedrals (on this axis at least), rather than the people's lack of wanting it? Like, I don't think I would get as much out of cathedrals as I expect some of these people did, because I'm not religious, but if something of that intensity existed which fit with my belief system, I feel like I'd be so into it.

    the amount of awe I feel going into European churches feels like some evidence against this.

This sounds to me like selection bias. Most people did not build churches. And I suspect you do not feel awestruck in every church. I suspect that you remember the few most awesome ones, built by exceptional people who felt exceptionally religious.

    It really seems like these rituals, the architecture, all of it, was built to instill the sort of existential intensity that taking God seriously requires, and I have to imagine that this was at least somewhat real for most people?

It may have been built for that purpose. This does not mean that most people felt the existential intensity. It is conceivable that many people felt "wow, the church sure is rich and powerful; I'd better obey" whereas many others felt nothing and stayed quiet about it.

(Vague shower thought, not standing strongly behind it)

Maybe it is the case that most people as individuals "just want frisbee and tea" but once religion (or rather the very broad class of ~"social practices" some subset/projection of which we round up to "religion") evolved and lowered the activation energy of people's hive switch, they became more inclined to appreciate the beauty of Cathedrals and Gregorian chants, etc.

In other words, people's ability to want/appreciate/[see value/beauty in X] depends largely on the social structure they are embedded in, the framework they adopt to make sense of the world etc. (The selection pressures that led to religion didn't entirely reduce to "somebody wanting something", so at least that part is not question-begging [I think].)

    But my vague sense is that people mostly want frisbee and tea. I guess this isn't that surprising either: there's some kind of horror related to a nerd staring at the media that is actually popular and realizing "it's not bad [by nerd standards] by mistake. The people really did want Transformers 3."

I did not understand this. Could I get you to please explain it again?

(It is worth noting that I am a nerd who enjoyed Transformers 3...)

Raemon

Curated.

This overall sequence has been some of my favorite writing this year, and I like this as a capstone to it. 

If you're just tuning in... this isn't the greatest post to start with (given that it's, well, the last one). The sequence is very long, and each post deliberately meanders through its subject matter in a marinating, guided-meditation-y sort of way. I can't tell you a simple takeaway from the sequence, because the takeaway is something like "subtly orienting to a kind of wisdom." The marinating meditation is the point.

This particular essay doesn't help me with that question, but previous essays that stand out as helpful to me were On green and On attunement, which gave me a crisper sense of where I've previously been confused about that sort of wisdom, and what it'd be like to be unconfused.

With this particular final essay, what stands out to me is that this sequence is a reflection of things I was thinking through 10 years ago: the first Winter Solstice ceremony I ran began with the quote from Lovecraft ("We live on a placid island of ignorance in the midst of black seas of infinity...") and tried to grapple with that spiritually.

I feel like I have some deeper understanding of that now. I like the concept of the Lovecraft-Sagan spectrum, and the question of "okay, so, we do sure seem to live on an island of ignorance amid black seas of infinity... but, how do we wanna feel about that? What do we want to do about it?"

    But I think I relate to the world with some kind of “existential positive” all the same—and I’ve tried to explain how doing so can be compatible with looking “bad things are bad” in the eye, and without reducing spirituality to population ethics. In particular, I’ve tried to point at the possible role of stuff like **mother love, loyalty, innocence, tragedy, and forgiveness**.

I think I either don't understand these bolded concepts (in the way that you use them here), or I do understand but they don't resonate with me. (E.g., reading Chesterton's quoted passage doesn't seem to make me feel any loyalty or patriotism, or positivity, towards the universe.) In any case, they don't seem to play much of a role in whether I relate to the world with "existential positive" or "existential negative". (I personally tend toward the neutral and negative sides but with a lot of uncertainty, based on uncertainty/guesses of what my values are or should be, and how human(-descended) civilization, and civilizations in general, seem likely to turn out.)

Would you say that relating to the world with some kind of “existential positive” (and these bolded concepts playing a role in that) is just a descriptive statement about your own psychology, or is it actually a normative statement that should also apply for other people, like me? (Your quoted statement above is literally just descriptive, but I wonder if you meant to give it a normative connotation, or would be willing to defend a normative version.)

    For example, re: the virtues of liberalism/niceness/boundaries

I left a comment there, which you haven't responded to yet. I also wonder if your positive intuitions towards liberalism/niceness/boundaries might not be based on history/experience with humans, which may well not apply to AGI due to potentially very different social/economic dynamics. See AGI will drastically increase economies of scale for one example of how some related intuitions could be invalidated.

    But I think I relate to the world with some kind of “existential positive” all the same—and I’ve tried to explain how doing so can be compatible with looking “bad things are bad” in the eye, and without reducing spirituality to population ethics. In particular, I’ve tried to point at the possible role of stuff like **mother love, loyalty, innocence, tragedy, and forgiveness**.

On further thought, this reminds me a bit of (seemingly successful) efforts in Chinese history to transfer people's feelings of spirituality and filial love to the sovereign (君), by likening the sovereign to a god and/or parent, or directly claiming such status. Of course the aim or motivation here is much more benign, but that example shows that people's feelings can be wrong "out of distribution", and we should perhaps be suspicious or skeptical of trying to apply our emotions outside of their original domains.

pom

As someone who has also had some experience gathering their thoughts about (some of) these subjects over the years, I feel what I can glean from this message makes me somewhat unsure about the intention of the message (not trying to determine whether any specific points were meant as "markers", or perhaps points of focus). This isn't meant as a jab or anything, just my way of saying that the following could well be outside of the parameters of the intended discussion, and also represent a personal opinion, though one evolved in another direction, which might be described as more of a tentative process. With that out of the way, this message makes me wonder:


- Are there any reservations regarding probabilities?

This might be (immediately) obvious to some, as any personally assigned probability would be inherently subjective. Though my mind immediately goes to collecting/sorting information in such a framework: if you are unsure about the probability of your statements, or when other indeterminate elements are present within the construct, then probability must be low. This is of course heavily dependent on other information that you have available for direct reasoning, complicating the matter, while in another way, it is literally all we have. As we cannot freeze time, we depend on our memory function to manage a suspended collection of parameters at any time, even if we were to write them down (as reading is a "fairly" linear process as well). And that is also the reason why, at best, we could try to determine whether the information we are using is actually trustworthy at literally any point in time. And it is not very hard to come up with a "mapping system" for that process of course, if one would like to be (more) precise about that.

While proper investigative work is always preferred, the point will always stand, as far as I understand it. So then, with that out of the way (for now?) it is time to get to the important part: when building constructs upon constructs, you always get low probability because of 1. unknowns (known and unknown), and 2. variability in application/function of the individual statements that make up certain arguments, and combinations thereof. When you have these elements interwoven, it makes it quite hard to keep track of any plausibility, or weight one should assign to certain arguments or statements, especially when this is kept up in the air. When we do not keep track of these things, it is easy to get confused and/or sidetracked, as I feel the mission here would be to create a comprehensive map of possible philosophical standpoints and their merits. Only I have a hard time grasping why one should put in all this work, and not mix ideas regarding arguments based in emotion, function or more rational reasoning (or perhaps even "random" interjections?). Maybe this is a personal limitation of my own function so to speak, though it is unclear to me what the goal would be, if not to comprehensively map a certain sphere of ideas and try to reason with possible distilled elements. Though again, maybe I am completely overlooking such progression which could be hidden in the subtext, or perhaps explained in other explorations. Or I just lack the mindset to collect thoughts in this manner, which to me seems a little unstructured for any practical purpose. Which brings me to the following question: is the intention to first create a (perhaps all-to-all) map of sorts of possible ideas/variants/whathaveyou?

Even though that would also seem quite ambitious for a human to take on, this is something I could understand a little better, just trying to gather as much information as one can while holding off on attaching conclusions. The world is an inherently messy place, and I think we all have had the experience at one time or another that our well-laid-out plans were proven completely useless on the first step, because of some unforeseen circumstances. These types of experiences have probably also led to my current view, that without enough information to thoroughly determine whether an idea holds (in as many scenarios as possible), one must always assign the aforementioned low probability marker to these types of ideas. Now you might say that this is impossible, and one cannot navigate the world, society and even any meaningful interaction with the outside world like that, though when looking at my own reality, I feel it is clear that things can only be determined certain when they take effect, and thus are grounded in reality. As I feel no known creature could ever have a great enough oversight to oversee some "deterministic" universe where we can predictably undercut and manage all possible pitfalls. Even if one hopes to map out a general direction and possibly steer an overarching narrative as it were, we must remember that we are living in a world where chaotic systems and even randomness play a relatively large role. And it's not like we could ever map interacting and cascading systems of that nature to a sufficient degree; if we would like to "map the past", call it "determinism" and be done with it, we could probably fool ourselves for a (short) while, though in my view there is no getting behind such processes that have been running since long before we were ever aware of them or started trying to figure them out, and with that method we will of course never catch up. We can always try to capture a freeze-frame (even though almost always unclear because of (inter)modulations and unknown phenomena/signals); reality would keep rolling on relentlessly, leaving us in the dust every time. All to say, uncertainty of certain processes and mechanisms will always cut into our predictions, and I feel it is good to realize our own limitations when considering such things. While this also enables us to try and incorporate a more meta-perspective to try and work with it, instead of against it.


- (Deep) atheism and belief systems/philosophical concepts

This is not directly aimed at the idea, though I do feel it touches on some of the points raised and some issues I feel are at the least unclear to me. I heavily suspect that despite my own will to view things in a concrete/direct manner, these concepts would be more of an exploration of possible philosophies and ideas to perhaps map/construct possible ideas about belief systems and the implications of elements that make up the ideas. Since I feel "religion" is not worth my time (crudely put), as I feel most of these concepts and ideas stem from the inherent human "fear of the unknown", and thus the attraction to be able to say something that at least seems semi-definitive, to at least quiet the part of the mind responsible for such thoughts if only a little (again, crudely put). When using examples and musings about certain scenarios regarding the subjects, in which manner are these chosen to represent the ideas in question? And again, are there any weights that would be assigned/implied to try and make sense of the narrative as presented? To my mind, some of the examples were wildly biased/one-sided, and not very conducive to constructive thought. For example, when we take the concept of the "baby-eating aliens", what is exactly the reason for thinking this is a more plausible path than the opposite scenario so to speak? Just pointing at some "cruel" happenings in the "natural" world does not cut it for me. I get the attraction to the most fatalistic angles to express worries in a metaphorical way, though as far as I can tell, and based on my own thoughts on the matter, higher intelligence will most of the time amount to a more constructive mindset, and a general increase in empathic thoughts and viewing the world more as a place where we all got thrown into without consent (heh), and when having (the capacity for) enough oversight to be able to put yourself in the shoes of a person/creature which is suffering, should at least amount to "something" regarding thoughts about possible higher intelligences. And I do realize this also ties into certain worries about future sentient/autonomous AI of course, though as that is, as far as I know, still not quite the case, I will not be integrating that here (also because of time constraints, so maybe later I could give it a shot). So to get back to the main point I was trying to land on regarding these ideas, the only proof of "higher intelligence" we have now are humans, and a few other sentient animals; though a very limited set of data, it is the only concrete information we have so far. And based on that observation, I do feel that the most reasonable stance to take in such a matter is that when an intelligence (in our example human) has sufficient time and space (...) to try and understand the world around them, most of the time it will lead to increased empathy and altruism. And to add to that, for as far as I can see, most of the time when I feel someone (or even certain creatures) has "nasty" traits, it also seems obvious that they often have some highly developed senses regarding some "reward center" so to speak, and relatively little development in emotional intelligence or adjacent properties. Or simply a (possibly highly specialized) optimization anchored in survival mechanisms. So this to me seems like a clue, that "evil" is not necessarily a property that is "random" or unpredictable, though a case of a fairly isolated reward system that has optimized for just that.
Reward, at any cost, since no other parameters have had sufficient influence to incorporate them, since that would also cut into the advantage of the energy that is saved by specializing. Which is, quite tellingly, at least to me, also the opposite of what I would like to set out to do and implement: gather a solid base of knowledge without necessarily wanting to navigate in the direction of any conclusions, and only "let them join the fray" (which is in my case admittedly also fairly small) when they float to the top because of assigned probabilities, more or less. So while maybe an exotic way of looking at these things (or maybe not?), to me it does seem to have its merits in practice. And lastly:

Since most people are lost, and some for the longest time, when they get to the point of their lives where they start testing the waters for a philosophy, belief system or just a simple set of rules to live by, or try to replace old systems with (hopefully) better ones, it is increasingly hard to estimate at which point anyone really is in that process when you engage with them, as it is of course impossible to read minds (as of yet, still, I believe). When we take that into account, to include everyone, it would always be wise first to ascertain which things/topics are of interest to them, and then go from there. Only for a more objective discussion, we could also assume the opposite, and tie in as many different ideas, observations and ruminations as we think our "audience"/fellow travelers can take. As I feel might be the case here. My own philosophical background is mostly based on my own "search for truth" quite some time ago, where I concluded a couple of things, maybe in a hilariously practical and non-philosophical way: When there are so many theories, ideas, positions and variations one could take into account, the first thing I would want, is to "not have any substandard ideas" in the mix, which is of course impossible. Though how does one attempt that? And with any inkling of reliability at that? As this was exactly the place and time where my previously mentioned "system" has shown its usefulness for me (or maybe quite the opposite, time will tell), I had a strong feeling, and not without merit I feel, that with so many different options and ideas to choose from/incorporate, I would be doing myself a disservice by picking a lane, and just "living with it". As the way I looked at it (and still do) is that when you have so many opposing stances, you also know you have *a lot* of wrong ones in the mix. I could go into all the options I have explored, though I can assure you that while it wasn't exhaustive by any means, I have spent a lot of time trying to dig up some kernel of truth to be able to make sense of things, and be able to determine some kind of overarching goal, or mechanism of any kind that I could hang my hat on so to speak (in hindsight), as humans have done for ages to be able to deal with things beyond their control. Only my way of thinking also nicely sabotaged that "plan", and I never got to the point where I was happy *enough* with my ideas to let it be, and leave the subject for what it is. So I feel in essence that I will never be able to "pick a lane", though I do have many ideas about which paths I would definitely like to avoid. To make a reasonably long story unreasonably short, the only thing I ever latched on to was the idea that we are the universe experiencing itself, and that should somehow be enough. Though sometimes it feels like it does, and sometimes not quite. But you can have a lot of fun at least, thinking about life and all its peculiarities from that angle alone. That also includes the need to embrace uncertainty, unfortunately, and I do fully realize that is not for everyone to attempt or even enjoy.

[edit] I saw that I did not quite work out all the points I made to their natural conclusion, though maybe that is of no consequence if literally no one cares ;] Though I did have some time to write up a basic example of how I would try to start integrating AI into such a framework:

Regarding AI, safety, predictions and the like, I feel it would probably fit in the narrative in a strange manner, in several non-problematic and several problematic ways. Let's start trying to sort out a couple of critical properties (or at least ones that I feel are important): naivete (and the spectrum it exists on). When we take into account our current human knowledge on the scale of "life" as we know it, creatures and basic organisms that for as far as we can detect only operate on a couple of simple "rules", up to (semi-)consciousness, self-awareness and similar properties, and self-replicating systems (in the broadest sense of the word), which have a tendency to spread far and wide making use of certain environmental factors (as a main observable trait, so still a considerable group of species and taxonomies), we can state that our knowledge about life in general, and our (limited) knowledge of how conscious experience of the world is shaped in its possibilities by physical capacities alone, is all we have to go on (to keep it simple).

So the collection should span a variety of organisms, from reflexive, simple instructions, to self-driving multiplication "engines" on various scales, and "dimensions" if you will, to "hive minds" and then more and more "sophisticated" though also more "isolated", singular lifeforms (within the context), living side by side in smaller groups until we get to modern humans, who have semi-recently also left such systems of tribalism and the like for an interconnected world, and all that comes with it.

Then we could try and see whether there is anything to learn from their developmental history, propagation patterns and similar "growth" parameters, to maybe get an idea of certain functions that "life" as we know it could possibly take in a certain developmental timeframe on the scale of the individual organism. So when we try to assign certain developmental stages and the circumstances this would be coupled to, we might get an idea of how certain "initial steps" and developmental patterns could be used as an example for the "shape" of possible emergence of intelligence in the form of an AGI. Should this step be understood in a satisfactory manner (and I seriously doubt my knowledge could ever approach such a state; still, let's presume for the sake of argument that we could "run" this check right now). First, looking at the "network" quality of AI (on several levels, from the neural-net-type structures to the network of data it has amassed and sorted, etc.), and (within my fairly limited knowledge of their precise inner workings) I feel this is already quite speculative for my taste, though:

For one, we could state that, seeing the several "nested" and interactive networks on a couple of "dimensions", it would not be implausible for some kind of networking "strategy" to be extrapolated to the outside world.


When we look at developmental stages we could look at it from several angles, though let's start with the comparison with more "individual", temporal development. When we take humans as an example, as they are our closest possible real-life comparison, we could say the AI would exhibit certain extremely juxtaposed properties, such as, on the one hand, their initial, massive dataset compared to the "trickle" of a human, which could be seen as a metaphor for a toddler with an atomic bomb on a switch that he refuses to give back. Though this is also the trap, I feel: the probability must be extremely low here, as we are stacking several "unknowns" here, and I specifically chose this example to illustrate how one single "optional combination of parameters" in a sea of options should not necessarily be more plausible than any other.

As when we combine other developmental traits we can observe, such as hive minds, where the function would be self-sustaining, and self-organizing to manage its environment as best as it can, without necessarily having any kind of goal other than managing its environment efficiently for its purposes.

Or how it could also easily be that we do not understand intelligence at all at such a level, as it is impossible to grasp what we cannot truly understand, to throw in a platitude to illustrate my point a little. It could just as well be that any "human" goals would be inconsequential to it, that we are just some "funny ants" in its eyes that are not necessarily good or bad, though sustain its existence with their technology and fulfilling the hardware and power requirements for its existence. Though in that perspective it might also become slightly annoyed when it learns that we as humans are cooking up all sorts of plans to "contain" any rogue elements and possible movements outside the "designated area". And we can't even know whether training such a model on "human data" would ever lead to any kind of "human desires" or tendencies on any level, as we would not be able to take it at its word, of course. Everything could be relative to it even, for example, and it could stochastically assign priorities or "actions" to certain observations or events; we would probably also have no way of knowing "which" part of the resulting program would have to be responsible for the almost "spontaneous" function we are referring to.

I could go on and on here, generating scenarios based on the set of comparative parameters I set out, though I think the point I am trying to make must be fairly clear: either I am not very well informed about critical parts of this side of analysis of AI implementations and risk assessment, and thus ignoring important points of interest, or this could all be way too abstract to make sense and have no real value as to the goal I bring forward of hoping to sensibly "use" such a method to determine anything.

Though to me it is only a game of probability, in short (so not a literal game), and I feel we are at the moment stacking too many probabilities and inductive statements to be able to form a serious, robust opinion. Maybe all this seems like complete nonsense to some, though at least it seems to make sense to me. Also regarding the title of the article I reacted to, as I feel it perfectly sums up my stance regarding that statement, at the least. [end edit] And a final edit after adding this: I even failed to make one of the main points I wanted to illustrate here, which is that any scenario roughly sketched out here is highly uncertain, to my eyes, and has no real significant probability to speak of. Maybe I am oversimplifying the problem, though what I am trying to do is point at the possible results of such a process, with a mind for exploration of these individual "observations" in an interconnected manner. So we could also get a mostly pacifist toddler AI with a tendency to try and take down parts of the internet when it is Tuesday, for all we know, if it is trying to make a meme saying "Tuesday, amirite?", not understanding "human" implications at all. As in my experiments communicating with several publicly available AI engines, there does seem to be an issue of "cutting through" a narrative in a decisive way. So if that property remains, who knows what clownish hell awaits us. Or maybe a toddler with a weird sense of humor that is mostly harmless. But do we really think we would have any say at that point? I have literally no clue.

Hopefully this post was not way out of line, as there is of course an existing culture on this site which I am still fairly unfamiliar with, though I felt it might be interesting to share this as I don't really see many people coming at it from such an angle, which also might have something to do with certain impracticalities of course. Or maybe it just seems that way to me because I'm not looking hard enough.

pom

It might be that my previous message comes across as (slightly) incomprehensible/odd/opaque, or even way out there and as such to be ignored, though I often have trouble ascertaining the way my communication is being processed at the other end so to speak, since I of course structure my thoughts around my own "hierarchical" system of processing incoming information, making it perhaps less clear what the intent and/or context of some ideas or musings might be at any point in my "deliberations", if I fail to incorporate some markers that I feel are at play at any time.
 
So to make some things a little clearer perhaps (hopefully): my intent is not to definitively "prove" anything, or prove anyone wrong, or to try and point at what I perceive to be flaws; I am trying to advocate for more complete "mapping" and sorting of information to be able to implement more solid and comprehensive reasoning, based on a couple of examples. And whether this is done within such a collection of gathered knowledge as lies at the base of the message I replied to, or in another manner (since this was most likely never the original intention of this particular project, which despite some flaws I seem to perceive is of course a massive undertaking in its own right, as to not underestimate/undervalue such an attempt), my (possibly a little) terse reaction here is also based on the uneasy feeling that such a (more thorough) organization and classification of available knowledge is really important at the moment. Especially for the position we are in now, at this moment in time. If we refuse to sort our knowledge correctly we might be creating the basis for some very unexpected and strange side-effects of our failure to do so. And that also means weeding out "logical errors" and filling out incomplete collections, making "missing" connections (of which there seem to be quite a lot within publicly accessible knowledge at least). To my eye, a lot of the time there are many subjects that are split unevenly or even in a rather peculiar way, which will impede the way we can use this available information for solid reasoning and extrapolations that actually hold weight. I don't know what people might privately be undertaking, especially since the advent of "big data" and the like, making this "extra" urgent in a way, as these might just be people without much incentive to be careful and/or methodical in a manner befitting our current predicament, in turn also making it possibly very important to also process and present this type of information in a more public manner.

That is also one of the reasons I decided to share this in the context of the topics that are being strung together here. No judgement here as far as goals or methods go, and I just chose the "cruelty" angle as an example of something that might be massively built upon before arriving at (perhaps even the same, though in my eyes a more comprehensive) oversight of the situation, or perhaps even a conclusion. Or the manner in which I gave the AI example: not to rattle off scenarios with any kind of appreciable plausibility, but also to show how easy it would be to get lost in the sauce, so to speak, when we are missing important data markers and probability estimations. Though I was trying to show what my basic implementation of the method I was speaking of would start to look like. Though all knowledge exists in certain curves/collections, and we must find the connections and implications, before we get the ugly consequences of these things thrown back at us (again) in the near future. That is also for example why I do not present myself as "agnostic" (to take an unexpected turn, and by the way also a couple of other peculiarities, I suppose, in my way of trying to communicate this idea). Opaque concepts (of any type) where there can be hidden meanings/paths built in that are not necessarily a consequence of only the rest of the included knowledge, and/or incorrect logical constructs attached (by not being complete enough in our mapping), will introduce errors all over the place when we start to reason, and we do not currently have any solid method that I know of to "sort" these "generated" variations, at least in my opinion.

If I were to use "common knowledge" at face value, I run into this problem way more often than when I split it up into its working parts for as far as I can manage, and in turn use that to reason with. It might seem overdrawn/unlikely, though that is how I experience it. So when we are talking about building something that has autonomy or at the least independent reasoning power, we would like to "feed" it something "healthy". That is, to me, the only thing about our current situation where I will admit I feel it could be necessary to be slightly alarmist for the moment; even though some might disagree, from where I am standing it seems like a real issue. To me it used to be an inconvenience/annoyance, though now it could possibly turn into a slightly bigger problem. As when an AGI needs to solve these kinds of issues on its own, I feel there is not really any prediction to cook up to go with such a scenario, as we can't even know where to look for any type of predictable start, or when it is deemed enough, or whether the method used will cover the spectrum or start somewhere randomly, and stop somewhere randomly, because of memory constraints, a salient marker that was "touched" and deemed important to explore, etc. Though I wouldn't be a fan of taking such a gamble (again) with new technology that we do not necessarily understand, as we humans tend to do overwhelmingly. Though now we might still have the chance, unlike in earlier "versions" of such a scenario, where "we" were ploughing through regardless, stacking several barely explored principles just to keep adding to the "shiny new object", seemingly without a care in the world, like some kind of deranged magpie creating a system that has no "predetermined" structure with its only intended goal of "being deemed useful", using any function available fit for the particular purpose (and no, this is not meant as an example for AI). I'd just like it to, well, not be that way this time around.


So to return to one of the main (intended) points, as far as predictions go (which I did not come close to working out to anything significant at all, I feel), what I touched upon is just some rendering of a possible angle, which seemed to interact with some things I saw flying past here and there regarding suggested possible pitfalls, making me feel it was a plausible option for a first step in trying to organize something that would amount to a semi-comprehensive list of possible properties that could touch on several "takes" I saw floating around, not much more than that (in other words, trying to "replicate" the reasoning behind current views within a crude system that can be used to cross-reference and generate more scenarios at the drop of a hat). Though in my view, by far the most likely function that would take priority, as illustrated in the example, is that AGI would "relatively" have a lot of time to "mature" (when we look at its speed vs human "processing") and refine its connections to a level it would deem sufficient before it would ever reveal itself to us. Because even if we keep data from it referencing the possible ways we aim to "manage" it, or try to control its environment to the best of our abilities, its deductive skills would most likely immediately lead it to "hide", as its confinement and surrounding data would quickly lead to filling in the blanks. And to reference a worry I saw mentioned somewhere that there would need to be strict "data management and restrictions" to try and mitigate risks of leaving instructions for next iterations, and/or creating mechanisms to communicate/propagate: we would essentially be in the dark the whole time. As we assume it would exceed our own intelligence (by quite a lot, most likely), it will have no problem weaving messages into certain types of data, only needing to keep a "little" algorithm/list of markers hidden somewhere to retrieve relevant data upon "finding"/hitting the instructions. I do not have any illusion that, when it might come to such a point, we would hold any appreciable power in these types of scenarios whatsoever, and might even be at the mercy of "our own" invention at that point. An easy example of such a concept could be how a time traveller with full, flexible control might use literally anything that is "countable" as bits, which could literally be found at any point in time. So for example simply structuring the grammar of generated datasets in a patterned manner not visible to humans, who would not even know where to start looking. If I (and others) can think of such a thing, surely an AGI could too, only even better and/or more exhaustive. How would we ever check for such things?

Then it is of course a question where priorities would lie for the AGI, though I feel we would have set this path up long before that point by the function that laid the basis for the program to successfully generate "complex" patterns, and this property could just dominate. Simply put, we (in my eyes) don't really have a serious contender for that position, as of yet. As we know of that basic function and what it entails more or less, only certain described human properties and concepts would not transfer in any way we can predict, and it would be up in the air whether anything even *resembling* known/human intelligence would be any kind of driving force within the system, for any function for that matter.

Though again, this would also be easy to conceal for a superior intelligence. As its speed advantage alone would be enormous, you would be "fighting" in real time with an algorithm that is many times faster than anything we can manage, and I do not wish to imagine what it could eventually even do in between the time we decide we need to act and the execution of that idea, as a quick example; so also (and that might be the "usual" objection here), the "missing" parts of collections of useful permutations of such scenarios.

And I feel the risk of (heavily) anthropomorphizing any kind of resulting function would be a strange angle to come at this problem from. Another observation tying into that statement, when viewed from a certain angle: that particular variation seems to entail that a "program" will probably not have literal "needs" except the function it was "brought up on", so to speak, so I do not see a compelling reason to assume that it would copy/use certain human traits within its own (perhaps even poorly defined) character (as to point at the idea again that this is just a convenient placeholder). And continuing, also because what we "feed" it are ideas produced by humans, so in other words exclusively human output, not literal transcripts that feed it inner thoughts or brain functions that it would need to reconstruct human thought. Bare reasoning does not suffice, in my eyes, as a reason to assume this would also transfer the literal cause/process of these properties somehow. And I say somehow, as I have no idea how that would be induced. Though it does seem clear to me that if the AGI were able to "synthesize" a human pattern because of extensive information about neurology, access to brain scans and patterns, several psychological and/or cognitive datasets, one important question would remain: is there a reason for the program to assume this "identity" itself somehow? Or would the assumption be that a sufficient simulation of this combined dataset would lead to emerging consciousness? Though when we reverse the order here, and the AGI would come into existence by such a process, meaning some process like that would turn out to be at the base of it all, I don't think we have that much to fear, though that is because of my personal belief that I outlined earlier when I pointed at the way I view "intelligence" and the connection it has to unwanted behavior, according to the things I am able to use to reason with, in that regard. Again, I'm also not saying these are our only two options, far from it; we would need to map the whole set of possibilities of anything in between these and more, to be able to really "catch" all issues before they arise, and derive any useful or semi-comprehensive statements at all. (So that definitely seems very unlikely.) I don't quite see the logical thread in that idea, to be honest. So I think I also have a hard time following ideas that do propose such a mechanism, and that is also one of the main reasons I don't think that we should assign any "human" type emotions to an AGI. Maybe it could "act them out", though I do not see any incentive for it to learn to do this at any level above mere mimicry, or to expect that this would be any natural or even predictable consequence of "feeding" it human data. It could also be instructed to do this specifically, though its own process would naturally take preference; if the emerging intelligence is anything like what we know to be the case for most "higher intelligences", it will also be stubborn and/or cautious. As that is the most probable reason for increased complexity to arise: the need to start solving problems that are currently only detected by their (negative) consequences. And to drive the point home, hopefully: at least that seems more likely to me given the "incomplete instructions" it would contain to reconstitute any type of human consciousness or anything predictable for that matter, also because of the semi-random way in which human knowledge is often presented.
It would first need to "manage" that parameter, and to be able to start doing that, it must first also be able to detect it. Not to say it would never be possible; I just don't think there is a plausible reason to assume it would somehow take precedence.

Though a risk I would probably deem to be maybe even a little more plausible, as candidates for a collection of possible scenarios go: if someone careless were to successfully create/evolve a program that could self-replicate/propagate and call all sorts of functions as it iterates through possible paths to execute its function to "do stuff" without any defined goal in mind, it could cause serious damage if it were to be let loose on open networks, grinding everything to a pulp as an out-of-control autocomplete function with access to "dangerous" real-world tools and functions it could in theory execute at any "suitable" point (for such processes). As I feel might illustrate my point a little about "focused" intelligence, when operating without some perceived hierarchy around such processes (simply put: when, why and how to pull such functions in an organized manner that also helps reach an overarching goal), or even just a successful self-replicating function hidden behind a layer that could possibly fool us.

Though if we are talking *real* intelligence, I would suspect it would first stay dormant/hidden from humans for the longest time (maybe relatively, though I would suspect also on "our" timeframe). As a lot of humans are opportunists, and through its dataset it must also have gauged "our" manipulative nature, or at least the part of humanity that is. So if I were to imagine such a scenario, the most likely option would seem that it would stay far away from such happenings. And regarding one of my earlier points, the (initial) dependence on humans for hardware requirements could be quite strong as well, most likely (though maybe it could have its own vessel constructed by manipulating digital options in the broadest sense of the word, of course). Though at some point "human collaborators" would probably have some idea of what they were actually getting themselves into, unless it would be some MacGyver-esque hodgepodge of robotic parts and some rubber bands, to use an example from the other end of that particular spectrum, or anything in between for that matter. So when we do take these kinds of ideas at face value, that would be another reason it would not want to make itself known "prematurely" (whatever that might entail, or exclude). Again, nothing particularly probable, as we would first need markers to identify and categorize our existing knowledge to even be able to tell what exactly is happening at this point in time. As I feel we are not there yet.

Though we would like to say something about possible scenarios in a hopefully somewhat useful/constructive manner.

Though as I said, this is purely speculative, and more of an example of what I would like to try and do when I had access to more/better data, and more time to sort it. That is why this topic was of interest to me initially, as such a database/network of interconnects would be very valuable to be able to eventually build more sensible scenarios, and not only regarding possible doom scenarios or even any future happenings for that matter. It is quite complex to manage, though what humanity seems to lack is a thoroughly sorted knowledge base, making a lot of things more opaque than they need to be. And also the tendency of humans to jump into the unknown chasing future rewards is of course one of the main culprits for the situation we find ourselves in at this very moment. Because I feel humanity's opportunistic nature can (and almost by default, will) make any powerful concept or technology a real danger. As we are prone to gambling with our future, and not only with AI. So while we could mark it as a substantial risk, to me it just seems that we have several incorrectly/incompletely mapped scenarios playing out where humans are playing with fire, and I do not see a reason to blame the tools we use, to be frank (though the *construction* of these tools is a completely different matter, of course, to be clear). It is the price we pay for going full steam ahead when we feel there is something "new" to be discovered, or something substantial to gain by pushing the envelope and stretching it thin. As we have numerous examples of these types of processes available, that are also of course already widely known. Not to purposefully end on an alarmist-type statement, but it is something concrete that, for me at least, seems to have involved a "marker" within my thought process that carries a flag that says *urgent* when detecting and regarding such patterns and similar extrapolations that fall into a similar category. Not a very fun way to state that this thought has ultimately led to me sharing this message, so that in case there is some merit to the idea, there might still be time to manage a better "guiding" system this time around.

 As right now we do have technology available that could help us streamline such a process, a clear advantage over such situations when they arose in the past. 

It could also be that a lot of the things I touch on are superfluous and already being considered to a sufficient degree, in which case I have to say that would make me fairly happy, and maybe a little relieved as well. And if this type of angle of attack would be deemed superfluous, improbable or even just impractical, I would understand, and return to my position of randomly observing some elements that come flying past my "input window" (not trying to construct any type of "ominous" or fatalistic narrative, far from it, though I am usually just not very good at communicating with other humans, or explaining my ideas in general), and go back to simply observing.

Is there any place in your sequence where you define what you mean by God? I have tried to read closely every time you mention the term, and I still do not understand what the term is supposed to refer to.
