
Earlier this year, I explored the debates around AI and found two main views shaping the conversation today:

Existential pessimism sees AI as an apocalyptic threat, urging us to hit the pause button. This is not only impossible in an open society but also unwise. Imagine if we had paused before Darwin or Einstein. The loss to humanity would have been immense. 

On the other hand, accelerationism embraces technological progress with unbridled optimism. Yet for some proponents, the human good is pushed to the margins–tech becomes an end in itself, with humans merely a stepping stone toward a post-human future.

I understand the appeal of saving humanity from extinction and the appeal of building god. But both perspectives, while raising important concerns, overlook the conditions and ingredients of human well-being. What's needed is a new approach—one that embraces technological progress while keeping human flourishing as its North Star. 

The Cosmos Institute is a 501(c)(3) nonprofit dedicated to promoting human flourishing in the age of AI. This post will outline our initial research, fellowship, grant, and education initiatives.

Through these programs, we aim to cultivate a new generation of technologists and entrepreneurs equipped with deep philosophical thinking to navigate the uncharted territory of our AI age. 

Our vision is rooted in three pillars–reason, decentralization, and human autonomy–drawing from the insights of thinkers like John Stuart Mill, Alexis de Tocqueville, and Aristotle.

Three Pillars of AI x Human Flourishing

1. Reason

Our first pillar is reason. Inspired by Mill’s On Liberty, we champion broad access to diverse opinions as the foundation for truth-seeking and seek to enrich public discourse on questions both timely and timeless.

AI has the potential to elevate the systems that support inquiry and knowledge–humanity’s epistemic infrastructure–beyond anything Mill could have imagined. By facilitating richer collaboration among diverse minds (and artificial agents), AI can spark the collective intelligence and mutual adjustments needed to solve complex challenges and uncover unexpected insights.

However, there is a darker path: AI could become an “autocomplete for life,” tempting us to accept easy answers instead of actively engaging in the search for truth. If we let AI narrow our range of inquiry, we risk not only individual intellectual stagnation but a broader societal regression, where science, policymaking, and public discourse become impoverished echo chambers.

To counter this, we must build AI that encourages inquiry over complacency and promotes active engagement over passive dependence, especially in education, medicine, the public square, and other vital domains.

2. Decentralization

Our second pillar draws from Tocqueville’s observations on American democracy: a vibrant society depends on the spontaneous associations of free individuals, not top-down control or isolated existence.

AI has the potential to foster new kinds of communities by reducing friction, bridging distances, and enabling self-organization in ways previously impossible. From this, resilient, self-organized communities can emerge that are better equipped to adapt, collaborate, and enrich society through genuine human connection.

Yet, current AI systems and their governance tend toward centralization, concentrating power in ways that stifle local initiative and erode the very associations Tocqueville saw as vital to democracy.

And instead of cultivating real communities, AI could deepen isolation, creating what Cosmos Founding Fellow Jack Clark describes as a “hall-of-mirrors culture,” where people retreat into self-curated realities.

We champion AI that decentralizes power and enables bottom-up solutions, allowing individuals and communities to co-create a richer, more diverse society.

3. Human Autonomy

Our third pillar is human autonomy–the freedom to think and act independently. As we increasingly rely on technology, particularly through cognitive offloading to AI-driven systems, our autonomy faces profound new challenges that demand philosophical and technological responses.

True autonomy involves both external freedom from control and the internal freedom to develop and exercise our capacities fully. Today, both are under threat. 

Externally, new governance models have sprung up in response to fears about AI safety, pushes for a particular vision of fairness, or the desire to centralize power. These top-down approaches can subtly erode individual freedom of action.

Internally, AI’s promise of convenience tempts us to surrender the hard work of thinking and deciding for ourselves. We risk becoming passengers in our own lives, following the course set by machines. This undermines the self-mastery, rational deliberation, and courage necessary to align our actions with our highest capacities–traits that Aristotle saw as essential to genuine autonomy.

We’re working on something transformative in this area. Picture late nights, endless espresso, and walls covered with ideas. While we can't reveal much yet, our project aims to put autonomy front and center–protecting it, strengthening it, and ensuring it endures. Stay tuned. It will be worth the wait.

Top secret project on human autonomy in the age of AI

From Philosophy to Praxis: Unveiling Four Initiatives

Guided by the pillars of reason, decentralization, and autonomy, we’re announcing four key initiatives: the HAI Lab, the Cosmos Fellowship, Cosmos Ventures, and educational offerings.

Together, these initiatives form a dynamic ecosystem where thinkers and builders can engage at every stage of their journey–from early-stage experimentation with Cosmos Ventures to deep research and development through the Cosmos Fellowship and the HAI Lab. Along the way, our education programs–seminars, workshops, and debates–offer opportunities for intellectual exploration of how AI can elevate human potential.

1. Research: Pioneering “Philosophy-to-Code” at the HAI Lab

Two hundred and fifty years ago, America’s founding fathers created a philosophy-to-law pipeline, turning abstract principles into a legal framework that still guides us today. Today, we urgently need a philosophy-to-code pipeline to embed crucial concepts such as reason, decentralization, and autonomy into the planetary-scale AI systems that will shape our future.

To build this, we’re proud to announce the founding of a groundbreaking AI lab at the University of Oxford. The Human-Centered AI Lab (HAI Lab) will be the world’s first initiative to translate the philosophical principles of human flourishing into open-source software and AI systems.

Human-Centered AI Lab (HAI Lab) at the University of Oxford

The HAI Lab will be led by Philipp Koralus, the inaugural McCord Professor of Philosophy and AI. Renowned for his research on the nature of reason, Professor Koralus will bring together philosophers and AI practitioners to pioneer the philosophy-to-code pipeline.

Oxford’s tradition of philosophical inquiry dates back to medieval times, and it has been home to thinkers like Thomas Hobbes and John Locke. Today, it stands at the forefront of bridging philosophy with the future of technology. The first-of-its-kind McCord Professorship underscores that crucial mission.

We’ll soon be launching an essay contest, with the winner joining us in Oxford for the HAI Lab’s launch celebration on November 15th.

2. Cosmos Fellowship: Cultivating a New Kind of Technologist

The intersection of AI expertise and deep philosophical insight remains largely unexplored. This limits the kind of unconventional synergies needed to build technology that truly advances human flourishing.

The Cosmos Fellowship aims to identify and nurture individuals capable of mastering both domains. By providing the right environment, resources, and community, it seeks to catalyze a new intellectual movement in tech. Fellows may join the HAI Lab at the University of Oxford or other partner institutions (subject to host agreements) for durations between a term and a year, with possibilities for both full relocation and hybrid work. They will have opportunities to collaborate with Cosmos mentors or pursue independent projects within an interdisciplinary network of leaders.

Our first (pre-launch) wave of Fellows:

  • Carina Peng is a machine learning engineer at Apple who graduated from Harvard in CS and Philosophy and was a John Harvard Scholar. Carina helped build the software that runs all Tesla Gigafactories, WHO epidemic intelligence tools, and quant finance pricing algorithms.
  • Ryan Othniel Kearns won Stanford's Suppes Award for Excellence in Philosophy and graduated with degrees in CS and Philosophy. He became founding data scientist at unicorn startup Monte Carlo Data, helped write Data Quality Fundamentals with O'Reilly Media, and now studies epistemic infrastructure at the University of Oxford.
  • Whitney Deng is a software engineer at LinkedIn who graduated summa cum laude from Columbia, where she won the Jonathan M. Gross prize as the top graduate in CS. She interned at Meta, was a Rhodes finalist, and studied CS and Philosophy at the University of Oxford.
  • Vincent Wang-Maścianica is a research scientist at Quantinuum. He specializes in applied category theory, formal linguistics, and AI explainability. He holds a DPhil in CS from the University of Oxford.

Applications for this full-time opportunity are open until December 1st.

3. Cosmos Ventures: Building Provocative Prototypes Linking AI x Human Flourishing

To foster decentralized innovation, we're launching Cosmos Ventures. Modeled on Emergent Ventures from Cosmos Founding Fellow Tyler Cowen, this low-overhead grants program aims to support a new generation of brilliant thinkers and builders at the intersection of AI and human flourishing. Projects often take an interdisciplinary, humanistic approach, drawing from philosophy, computer science, political theory, economics, natural science, and other fields.

Cosmos Ventures was founded and is led by a team of top technologists: Jason Zhao, Zoe Weinberg, Alex Komoroske, and Darren Zhu.

Applications for our first public wave are open until November 1st.

4. Education: (Re)discovering Timeless Principles for a World Transformed by Technology

The future demands more than just knowing how to create new tools—it requires understanding how those tools shape our means and ends. Our seminars, reading groups, workshops, and public debates uniquely bridge the history of ideas with frontier technologies, academia with industry, and philosophy with practice.

Highlights include an Oxford seminar on AI x Philosophy, where students engage with classic texts and leading AI researchers; reading groups on the intellectual origins of technology; and debates like the one between MIT Professor Bernhardt Trout and Cosmos Founder Brendan McCord on whether AI will enhance human happiness.

Our programs are open to those eager to engage in thoughtful dialogue, challenge assumptions, and explore how technology can best serve humanity’s highest aims.

Join the AI x Human Flourishing Movement

Whether you're a technologist, philosopher, policymaker, or simply a concerned citizen, there are many ways to get involved.

Have other ideas? Email us; we’d love to explore them together.

1. Join our community: 

2. Apply to our programs:

3. Support our mission: 

RobertM

Hey Brendan, welcome to LessWrong. I have some disagreements with how you relate to the possibility of human extinction from AI in your earlier essay (which I regret missing at the time it was published). In general, I read the essay as treating each "side" approximately as an emotional stance one could adopt, and arguing that the "middle way" stance is better than being either an unbridled pessimist or optimist. But it doesn't meaningfully engage with arguments for why we should expect AI to kill everyone, if we continue on the current path, or even really seem to acknowledge that there are any. There are a few things that seem like they are trying to argue against the case for AI x-risk, which I'll address below, alongside some things that don't seem like they're intended to be arguments about this, but that I also disagree with.

But rationalism ends up being a commitment to a very myopic notion of rationality, centered on Bayesian updating with a value function over outcomes.

I'm a bit sad that you've managed to spend a non-trivial amount of time engaging with the broader rationalist blogosphere and related intellectual outputs, and decided to dismiss it as myopic without either explaining what you mean (what would be a less myopic version of rationality?) or supporting the claim (what is the evidence that led you to think that "rationalism", as it currently exists in the world, is the myopic and presumably less useful version of the ideal you have in mind?). How is one supposed to argue against this? Of the many possible claims you could be making here, I think most of them are very clearly wrong, but I'm not going to spend my time rebutting imagined arguments, and instead suggest that you point to specific failures you've observed.

An excessive focus on the extreme case too often blinds the long-termist school from the banal and natural threats that lie before us: the feeling of isolation from hyper-stimulating entertainment at all hours, the proliferation of propaganda, the end of white-collar jobs, and so forth. 

I am not a long-termist, but I have to point out that this is not an argument that the long-termist case for concern is wrong. The claim is also itself wrong, or at least deeply contrary to my experience: the average long-termist working on AI risk has probably spent more time thinking about those problems than 99% of the population.

EA does this by placing arguments about competing ends beyond rational inquiry.

I think you meant to make a very different claim here, as suggested by part of the next section:

However, the commonsensical, and seemingly compelling, focus on ‘effectiveness’ and ‘altruism’ distracts from a fundamental commitment to certain radical philosophical premises.  For example, proximity or time should not govern other-regarding behavior.

Even granting this for the sake of argument (though in reality very few EAs are strict utilitarians in terms of impartiality), this would not put arguments about competing ends beyond rational inquiry.  It's possible you mean something different by "rational inquiry" than my understanding of it, of course, but I don't see any further explanation or argument about this pretty surprising claim.  "Arguments about competing ends by means of rational inquiry" is sort of... EA's whole deal, at least as a philosophy.  Certainly the "community" fails to live up to the ideal, but it at least tries a fair bit.

When EA meets AI, you end up with a problematic equation: even a tiny probability of doom x negative infinity utils equals negative infinity utils. Individual behavior in the face of this equation takes on cosmic significance. People like many of you readers–adept at subjugating the world with symbols–become the unlikely superheroes, the saviors of humanity.

It is true that there are many people on the internet making dumb arguments in support of basically every position imaginable.  I have seen people make those arguments.  Pascalian multiplication by infinity is not the "core argument" for why extinction risk from AI is an overriding concern, not for rationalists, not for long-termists, not for EAs.  I have not met anybody working on mitigating AI risk who thinks our unconditional risk of extinction from AI is under 1%, and most people are between 5% and ~99.5%.  Importantly, those estimates are driven by specific object-level arguments based on their beliefs about the world and predictions about the future, i.e. how capable future AI systems will be relative to humans, what sorts of motivations they will have if we keep building them the way we're building them, etc.  I wish your post had spent time engaging with those arguments instead of knocking down a transparently silly reframing of Pascal's Wager that no serious person working on AI risk would agree with.

Unlike the pessimistic school, the proponents of a more techno-optimistic approach begin with gratitude for the marvelous achievements of the modern marriage of science, technology, and capitalism.

This is at odds with your very own description of rationalists just a thousand words prior:

The tendency of rationalism, then, is towards a so-called extropianism. In this transhumanist vision, humans transcend the natural limits of suffering and death.

Granted, you do not explicitly describe rationalists as grateful for the "marvelous achievements of the modern marriage of science, technology, and capitalism".  I am not sure if you have ever met a rationalist, but around these parts I hear "man, capitalism is awesome" (basically verbatim) and similar sentiments often enough that I'm not sure how we continue to survive living in Berkeley unscathed.

Though we sympathize with the existential risk school in the concern for catastrophe, we do not focus only on this narrow position. This partly stems from a humility about the limitations of human reason—to either imagine possible futures or wholly shape technology's medium- and long-term effects.

I ask you to please at least try engaging with object-level arguments before declaring that reasoning about the future consequences of one's actions is so difficult as to be pointless.  After all, you don't actually believe that: you think that your proposed path will have better consequences than the alternatives you describe.  Why so?

Thanks for the post and congratulations on starting this initiative/institute! I'm glad to see more people drawing attention to the need for some serious philosophical work as AI technology continues to advance (e.g., Stephen Wolfram).

One suggestion: consider expanding the fields you engage with to include those of moral psychology and of personal development (e.g., The Option Institute, Tony Robbins, Nathaniel Branden).

Best of luck on this project being a success!

Thank you! As time goes on, we may branch out. My wife left the tech world to become a mental health counselor, so it's something we discuss frequently. Appreciate the kind words and suggestion.

Where it says "coming soon" above, we just launched the essay contest: https://cosmosinstitute.substack.com/p/your-ideas-on-human-autonomy-in-the?r=2z1max

zdot

Hi Brendan!

I agree with much of RobertM's comment; I read the same essay, and came away confused.

One thing I think might be valuable would be for you to explain your object-level criticisms of particular arguments for AI safety advanced by researchers in the field - for instance, this one or this one.

Given that there are (what I think are) strong arguments for catastrophic risks from AI, it seems important to engage with them and explain where you disagree - especially because the Cosmos Institute's approach seems partially shaped by rejecting AI risk narratives.