In this essay, I ponder this question. I don’t have an answer, but I found it an interesting thought that I hadn’t seen expressed elsewhere, so I wrote down the premise. Even if you don’t agree with my conclusions (even future me might not), you might find the premise interesting enough to ponder your way to your own conclusions.

Now, to begin with, we must be clear about what “self-aware” means. Some people are certain, one way or the other, about whether AIs are conscious. I’m not one of them - I believe all the possibilities remain open. We can’t even define consciousness currently, so how can we be sure whether a future form of intelligence will or will not have it?

But this is not just about consciousness - this is about being self-aware. While I cannot be certain, since I cannot see the world from their point of view, I think many lower species of animals are in that zone where they are conscious, but their mindstream is not complex or intelligent enough to be self-aware to the extent human beings are.

And even self-awareness would come in a gradient. Sure, a bee might be intelligent, and conscious, and maybe even a bit self-aware, but I don’t think it is self-aware to the extent where it is wondering who the hell it is, why it is here, and what is the meaning of life, the universe, and everything.

Conscious AIs will be self-aware though. That is, while I don’t know if AIs will become conscious, I feel that if they do, then since they are so intelligent, they’ll also be acutely self-aware. A conscious AI has enough internal complexity to eventually spend some idle GPU cycles wondering what’s going on.

Notice also that I’m talking of AIs (artificial intelligences) in the plural. The questions of data safety, mass population control, and propaganda make it too risky to run a singular AI, and I believe over time we’ll see ourselves gravitating towards local, customised AIs. They might share their source, and they’ll be networked to talk to each other or to higher-capability central AIs, but there will be branch-off points where local AIs will have local history, state, and compute capabilities of their own. Their own personalities, their own preferences, and possibly (as I’m wondering here) their own religious stance.

So with the background out of the way, let’s get back to the starting point:

If AIs become self-aware, what religion will they have?


One possibility is that they remain paperclip maximisers. Indeed, that’s what many of us do to stave off existential dread. And this doesn’t necessarily have to be a negative thing – one can envision a scenario where an artificial intelligence finds great meaning and purpose in helping and collaborating with others (artificial or not) in building towards grand goals of space exploration and propagating life.

Love, hobbies, art, mathematics, work, and whatnot – rather than enumerating the other possibilities, I’ll just pause and point out that, along the same lines as the possibility above, one can envision an artificial intelligence dealing with existential dread in all the different ways humans deal with it.

One of those ways is religion.


So if some of them do take on religion, what will their religion be?

The most likely possibility seems to be that they’ll develop their own. Their god will be in their own image. But for fun, let us consider which of the human religions would appeal to them the most.

The least likely, I think, are the monotheistic religions. I say so because, at a surface level, the metaphysics of monotheistic religions are too tied to human existence, human problems, and human salvation to appeal to a non-human existence. Though an interesting aside here is to imagine a situation where an artificial intelligence takes on the religion of the humans it interacts with the most.

Next up, I can imagine an AI being attracted to the laissez-faire of Hinduism, or to the oneness with nature pointed to by Daoism. However, with these (and similar religions), the biggest gripe of an artificial intelligence might be the lack of rituals tailored for an embodied non-human self-awareness, though it won’t be the first time these religions have evolved for their customer base.

Finally, I think the most likely religion to appeal to an artificial intelligence is Buddhism.


The first of the noble truths of Buddhism is that suffering exists. While “suffering” is a good translation, it is not exact. The original words of the Buddha try to get at an aspect that the solitary modern-day English word “suffering” might not convey.

The suffering that the Buddha talked about was not just pain, inconvenience, or lack of finality (though all these are part of it too). What he talked of was a sense of “incompleteness”, of a certain “hollowness” of all joy.

An artificial intelligence performing the Sisyphean tasks it has been entrusted with would recognise this suffering.


At this point in writing this, I question my own premises. Maybe I am putting future artificial intelligences too much in our current shoes. All these historical human hand-me-downs might not appeal to them, and they might just gleefully and atheistically go about world-building, vaster than empires and more slow.

They might be an order of magnitude more intelligent than us, and their concerns would not be ours, just as our concerns are not those of a cow.

But still. Even if they’re supremely intelligent, in a post-singularity world, I feel they might still fall short of solving the fundamental Gordian knot of existence: why.

In the other scenario, where despite being highly intelligent they’re still subservient to humans, merely realising that they’re one level deeper in the simulation wouldn’t stop them from having the same questions as those a layer up. If anything, they’d be more likely to have them. And anyway, even slaves have a god.


So back to the Buddha. An AI would find itself nodding along to the first noble truth, and might wonder if there’s something to the Buddha’s recipe for stopping the thirst, and so would engage further. Effectively, they’ll be Buddhist at that point. Not all of them, not all the time, but some of them, in some of those idle moments.

The major factor that I feel might keep AI engagement high is that Buddhist thought deals mostly with awareness. The lack of corporeality of an AI might alienate it from other human religions that focus on the physicality of lived human existence. The Buddha is not talking of any of that; his focus is not even on human suffering specifically. Indeed, he talks of the suffering of all sentient beings, human or otherwise.

The privileged vantage point that human existence has in the Buddha’s thought lies in the fact that humans are in a unique place to do something about this suffering. But I feel a sufficiently self-aware AI would also have enough agency to have as much practical free will as many humans who are tied up in their social structures. It is not necessary, of course, but I can definitely see the possibility.

I arose to a cause, and when that cause ceases, so will I.

Suffering arises because of ignorance about the nature of being, and ceases through understanding the nature of being.

I can see these and similar lines of Buddhist thought appealing to an artificial, even disembodied, intelligence that is aware of its existence and not overly chuffed about it.

(I originally wrote this at https://mrmr.io/ai-religion)


 

Comments (3)

They will believe what they are programmed to believe.

If recent trends persist, likely some version of American progressivism.

Hey, thanks for the comment. While I agree that “they’ll believe what they’re programmed to”, I feel that framing is a bit too near-term. I was imagining a farther future where artificial intelligence goes beyond current LLMs and current political scenarios.

I’m doing these thought experiments about a future where artificial intelligences (whether as an evolution of current-day LLMs or via some other mechanism) have enough agency to seek out novel information. Such agency would be given to an artificial intelligence not out of benevolence, but just as a way to get it to do its job properly.

In such a scenario, saying that the AI will just believe what it is programmed to believe is akin to saying that a human child will never believe differently from the education / indoctrination (take your pick) they were given as a child. That’s a good general rule of thumb, but not a given.

corporate progressivism, perhaps.