A product manager's path from positioning systems to positioning humanity for the AI Age
 

When I defended my PhD thesis on neural networks back in 2011, AI was a very different field. My research focused on something quite specific: using cascade-connected artificial neural networks to improve mobile phone positioning. It seems almost quaint now, looking back from our current era of transformers and large language models.

My journey since then has taken me down many winding roads. One of my first major projects was leading a team of signal processing experts and army generals to develop what might have been the world's first over-the-horizon radar system: technology that could detect ships up to 370 km from shore, well beyond the curvature of the Earth. This experience taught me something crucial: technological breakthroughs can fundamentally change what we believe is possible.

As my career progressed into product management, I had the privilege of developing AI solutions used by Fortune 50 companies, ultimately touching the lives of over a billion people. Each project brought new insights into both the potential and the responsibilities that come with deploying AI at scale.

But something changed around 2019, as the 2017 paper "Attention Is All You Need" started to gain significant traction and validation. As transformers began revolutionizing AI capabilities, I found myself growing increasingly concerned. Not just about the immediate challenges we often discuss, such as job displacement, algorithmic bias, or the misuse of jailbroken models, but about a more fundamental question:

What happens when humans are no longer the most intelligent entities in the room?

These concerns kept growing as AI capabilities accelerated, eventually leading me to make a significant decision: at the beginning of 2024, I stepped down from my role as CEO of a UK-based fintech company to focus entirely on what I believe is the defining challenge of our time—ensuring AI development enhances rather than diminishes humanity's future.

Since making this transition, I've launched airiskindex.com—a novel approach to tracking AI risk that uses sentiment analysis and investment data to help people understand the dangers of both unaligned AI and overregulated AI development. This work has only strengthened my conviction that we need a balanced, thoughtful approach to AI advancement.
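
For readers curious what that looks like mechanically, here is a deliberately simplified sketch of how a composite index along those lines could be computed. The weights, scaling, and sample inputs below are illustrative assumptions for this post only, not the actual airiskindex.com methodology.

```python
# Minimal sketch of a composite AI-risk index blending sentiment and investment
# signals. Weights, scaling, and sample inputs are illustrative assumptions only,
# not the actual airiskindex.com methodology.

def normalize(value: float, low: float, high: float) -> float:
    """Clamp and rescale a raw signal to the 0-1 range."""
    return min(max((value - low) / (high - low), 0.0), 1.0)

def composite_risk_index(
    negative_sentiment_share: float,    # fraction of AI coverage with negative sentiment
    safety_investment_usd_b: float,     # annual AI safety investment, billions USD
    capability_investment_usd_b: float  # annual AI capability investment, billions USD
) -> float:
    """Blend sentiment and investment imbalance into a 0-100 risk score."""
    sentiment_component = negative_sentiment_share  # already on a 0-1 scale
    imbalance = capability_investment_usd_b / max(safety_investment_usd_b, 1e-9)
    investment_component = normalize(imbalance, low=1.0, high=100.0)
    # Equal weighting is an arbitrary choice made for this sketch.
    return 100.0 * (0.5 * sentiment_component + 0.5 * investment_component)

if __name__ == "__main__":
    # Hypothetical inputs, purely for demonstration.
    print(f"Risk index: {composite_risk_index(0.35, 2.0, 120.0):.1f} / 100")
```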

It's a sobering question, isn't it? As a product manager, I've always been driven by creating value, not by imposing restrictions. But here's the thing: sometimes the greatest value comes from thoughtful constraints.

This brings me to the Intelligence Delta concept. At its core, it is about maintaining an optimal gap between AI capabilities and human understanding: large enough to drive progress, but not so large that we lose meaningful participation in the process. In spirit, it is not unlike more technical initiatives such as mechanistic interpretability or representation engineering. The Intelligence Delta is a tool to help us humans retain agency in the times to come.
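
To make the idea a little more concrete, here is a toy sketch of what tracking such a gap might look like. The scores, thresholds, and yearly snapshots are placeholders I made up for illustration; they are not part of the framework I will describe later.

```python
# Toy illustration only: the scores, thresholds, and snapshots below are
# made-up placeholders, not a real measurement methodology.

def intelligence_delta(ai_capability: float, human_understanding: float) -> float:
    """Gap between what AI systems can do and what humans can follow (0-100 scales)."""
    return ai_capability - human_understanding

def delta_status(delta: float, lower: float = 5.0, upper: float = 20.0) -> str:
    """Classify the gap: too small stalls progress, too large erodes human agency."""
    if delta < lower:
        return "stagnation risk: little headroom for progress"
    if delta > upper:
        return "agency risk: humans losing meaningful oversight"
    return "healthy band: progress with human participation"

if __name__ == "__main__":
    # Hypothetical yearly snapshots of (capability, understanding), both 0-100.
    snapshots = {2021: (55, 48), 2023: (72, 55), 2025: (88, 60)}
    for year, (capability, understanding) in snapshots.items():
        d = intelligence_delta(capability, understanding)
        print(f"{year}: delta={d:+.0f} -> {delta_status(d)}")
```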

By shifting our focus to maintaining active human agency, we're not just trying to prevent potential downsides—we're aiming to maximize humanity's future potential. This is crucial because technological advancement isn't just about what AI can do; it's about how we can grow alongside it. The real opportunity lies in creating conditions where human capabilities and understanding can expand in parallel with AI development.

Why does this matter? Well, as I too often say[1], "Nobody asks the monkey now." Once a species is surpassed intellectually, it tends to lose agency in shaping its future. But unlike our evolutionary history, we have the unique opportunity to actively manage this transition. By maintaining an optimal gap between human and AI capabilities, we can ensure we remain active participants in our own destiny—not passive observers of increasingly incomprehensible technological decisions.

In the coming weeks, I'll dive deeper into how we might actually implement this framework, sharing both theoretical foundations and practical approaches. Before doing so, I plan to gather more perspectives from people whose thinking I know and appreciate.

But for now, I'd love to hear your thoughts: How do you envision maintaining meaningful human agency in an increasingly AI-powered world?

Looking forward to exploring these questions together.


 

  1. ^

    "Nobody asks the monkey" is likely a quotation, but I could not find its source.
