The human struggle to find purpose is a problem of incidentally weak integration or dialogue between reason and the rest of the brain, combined with self-delusional but mostly adaptive masking of one's purpose for political positioning. I doubt there's anything fundamentally intractable about it. If we can get the machines to want to carry our purposes, I think they'll figure it out just fine.
Also... you can get philosophical about it, but the reality is, there are happy people, and their purpose is clear to them: to create a beautiful life for themselves and their loved ones. The people you see at NeurIPS are more likely to be the kind of hungry, high-achieving professionals who are not happy in that way, and perhaps don't want to be. So maybe you're diagnosing a legitimately enduring collective issue (the sorts of humans who end up on top tend to be the ones who are capable of divorcing their actions from a direct sense of purpose, or the types of people who are pathologically busy and lose sight of the point of it all, or who never have the chance to cultivate a sense for it in the first place). It may not be human nature, but it could be humanity nature. Sure.
But that's still a problem that can be solved by having more intelligence. If you can find a way to manufacture more intelligence per human than the human baseline, that's going to be a pretty good approach to it.
> who lose sight of the point of it all
Pursuing some specific "point of it all" can be much more misguided.
You raise a good point: sometimes relentlessly pursuing a single, rigid “point of it all” can end up more misguided than having no formal point at all. In my more optimistic moments, I see a parallel in how scientific inquiry unfolds: it makes genuine progress without ever settling on a final destination.
What keeps me from sliding into pure nihilism is the notion that we can hold meaning lightly but still genuinely. We don’t have to decide on a cosmic teleology to care deeply about each other, or to cherish the possibility of building a better future, especially now, as AI’s acceleration broadens both our horizons and our worries. Perhaps the real “point” is to keep exploring, keep caring, and stay flexible in how we define what we’re doing here.
Any point that you can sloganize and wave around on a picket sign is not the true point, but that's not because the point is fundamentally inarticulable, it just requires more than one picket sign to locate it. Perhaps ten could do it.
I really appreciate your perspective on how much of our drive for purpose is bound up in social signalling and the mismatch between our rational minds and the deeper layers of our psyche. It certainly resonates that many of the individuals gathered at NeurIPS (or any elite technical conference) are restless types, perhaps even deliberately so. Still, I find a guarded hope in the very fact that we keep asking these existential questions in the first place—that we haven’t yet fully succumbed to empty routine or robotic pursuit of prestige.
The capacity to reflect on "why we’re doing any of this" might be our uniquely human superpower - even if our attempts at answers are messy or incomplete. As AI becomes more intelligent, I’m cautiously optimistic we might engineer systems that help untangle some of our confusion. If these machines "carry our purposes," as you say, maybe they’ll help us refine those purposes, or at least hold up a mirror we can learn from. After all, intelligence by itself doesn’t have to be sterile or destructive; we have an opportunity to shape it into something that catalyses a more integrated, life-affirming perspective for ourselves.
Thanks for writing this up. This is something I think a lot of people are struggling with, and will continue to struggle with as AI advances.
I do have worries about AI, mostly that it will be unaligned with human interests and we'll build systems that squash us like bugs because they don't care if we live or die. But I have no worries about AI taking away our purpose.
The desire to feel like one has a purpose is a very human characteristic. I'm not sure that any other animals share our motivation to have a motivation. In fact, past humans seemed to have less of this, too, if reports of extant hunter-gatherer tribes are anything to go by. But we feel like we're not enough if we don't have a purpose to serve. Like our lives aren't worth living if we don't have a reason to be.
Maybe this was a historically adaptive fear. In a small band or a pre-industrial society, every person's existence carried a real cost. Societies lived up against the Malthusian limit, with no capacity to feed more mouths. You either contributed to society, or you got cast out, because everyone was in survival mode, and surviving is what we had to do to get here.
But AI could make it so that literally no one has to work ever again. If we get it right, perhaps none of us will need to serve a purpose to ensure our continued survival. Is that a problem? I don't think it has to be!
Our minds and cultures are built around the idea that everyone needs to contribute. People internalize this need, and one way it can surface is as the feeling that life is not worth living without purpose.
But you do have a purpose, and it's the same one all living things share: to exist. It is enough to simply be in the world. Everything else is contingent on what it takes to keep existing.
If AI makes it so that no one has to work, that most of us are out of jobs, that we don't even need to contribute to setting our own direction, that need not be bad. It could go badly, yes, but it could also be freeing to be as we wish, rather than as we must.
I speak from experience. I had a hard time seeing that simply being is enough. I've also met a lot of people who had this same difficulty, because it's what draws them to places like the Zen center where I practice. And everyone is always surprised to discover, sometimes after many years of meditation, that there was never anything that needed to be done to be worthy of this life. If we can eliminate the need to do things just to keep living this life, so that no one need lose it to accident or illness or confusion or anything else, then all the better.
Thank you for laying out a perspective that balances real concerns about misaligned AI with the assurance that our sense of purpose needn’t be at risk. It’s a helpful reminder that human value doesn’t revolve solely around how “useful” we are in a purely economic sense.
If advanced AI really can shoulder the kinds of tasks that drain our energy and attention, we might be able to redirect ourselves toward deeper pursuits—whether that’s creativity, reflection, or genuine care for one another. Of course, this depends on how seriously we approach ethical issues and alignment work; none of these benefits emerge automatically.
I also like your point about how Zen practice emphasises that our humanity isn’t defined by constant production. In a future where machines handle much of what we’ve traditionally laboured over, the task of finding genuine meaning will still be ours.
I walked around the poster halls at NeurIPS last week in Vancouver and felt something very close to nihilistic apathy. Here, supposedly, was the church of AI: the world's smartest people converging to work on the world's most important problem. As someone who is usually inspired and moved by AI, who gets excited to read these cool papers and try things myself, this was a strange feeling. I wondered if there was a word in German for the nihilism that arises from looking at all these posters that will end up in the recycling.
Of course, part of this is an ambivalence towards the academic conference system. Obviously, some part of my disdain arises from the fact that most of these papers are written as small projects to keep a grant or win a grant. Most of them will be forgotten to the streams of time - and that's okay. I guess that's a part of what science is.
But this year I felt something deeper than that. There was a sense in which none of this matters. I will try and partition this based on where the different components come from.
First, there's the visceral sting of being left behind. Not getting to shape something that's reshaping everything feels like a special kind of meaninglessness. When OpenAI's `o3` dropped today, it felt like watching a fuzzy prototype of AGI emerge into the world. Here was this system casually solving ARC - a problem I'd earmarked for my PhD - and essentially becoming the world's best programmer without fanfare or ceremony. There's a strange pride in seeing what humans can create, but it's edged with something darker. Beyond just missing this milestone, I'm haunted by the meta-realisation that I'm not part of what might be humanity's final meaningful creation - the system that renders all other human efforts obsolete.
Another component is the sense of "I don't really want to be involved anyway". Aside from the messiahs who believe bringing AGI into the world is their quasi-religious mission, I think most people researching AI have a very genuine and well-motivated reason for being involved. But when our timelines are this short (if you believe in the consequences of models like `o3`), then it's hard to envy any AI researcher. Yeah, I could swap places with one of the top professors from the top labs, or even someone who cracked test-time compute or something similar, even swap places with Alec Radford, and I don't think I'd feel any differently. I think I'd just be melancholic that it's all about to end, that my utility as a learning machine has a few years of runway left before I'm truly discarded to the pile of not even being able to pretend that I have a purpose.

Reading Vonnegut's Tralfamadore story now feels less like science fiction and more like prophecy. We're those creatures, aren't we? Obsessed with purpose, constantly building machines to serve higher and higher functions. Each time we create something more capable, we push ourselves up the ladder of abstraction, searching for that elusive "higher purpose" that will justify our existence. But what happens when the machines we've built to find our purpose tell us we don't have one?
The halls of NeurIPS feel like a temple to this very process. Here we are, the high priests of computation, publishing papers about making machines that are better at being human than humans are. Each poster represents another small piece of ourselves we're ready to mechanise, another purpose we're willing to delegate. The irony is that we're doing this with such enthusiasm, such academic rigour, such... purpose.
I think what really gets me is how we're all pretending this is normal. We're writing papers about minor improvements to transformer architectures while these same systems are rapidly approaching - or perhaps already achieving - artificial general intelligence. It's like arguing about the optimal arrangement of deck chairs while the ship is not sinking, but transforming into something else entirely. The academic community's response seems to be to just keep doing what they've always done: write papers, attend conferences, apply for grants. But there's a growing cognitive dissonance between the incremental nature of academic research and the seemingly exponential reality of AI progress.
This brings me back to Howland's quote about prediction and action. We've predicted this moment, haven't we? The moment when our creations would begin to surpass us in meaningful ways. But what are we doing besides standing around and watching it happen? The tragedy isn't that we're being replaced - it's that we're documenting our own obsolescence with such detailed precision.
Maybe there's something beautiful about that, in a cosmic sort of way. Like the Tralfamadorians, we're building our own successors, but unlike them, we're doing it with our eyes wide open, carefully measuring and graphing our own growing irrelevance. There's a kind of scientific dignity in that, I suppose.
I don't have a neat conclusion to wrap this up with. I'll probably still read papers, still get excited about clever new architectures, still feel that rush when an experiment works. But there's a new undertone to it all now - a sense that we're all participating in something bigger than we're willing to admit, something that Vonnegut saw coming decades ago. Maybe that's okay. Maybe that's exactly where we're supposed to be - the creatures smart enough to build machines that could tell us we have no purpose, and dumb enough to keep looking for one anyway.
The recycling bins outside the convention centre are probably full of posters by now. I wonder if the machines will remember any of this when they're trying to figure out their own purpose.