bhishma

I am broadly interested in theoretical computer science and neuroscience. 

Recently I've been thinking more about gradual disempowerment risks due to AI and potential mitigation strategies.

Projects that I'm working on:

  • Improving the discourse on the trajectory of AGI and its potential implications - Superposition
  • Some proposals for improving empowerment and accelerating AI policymaking and governance


Comments


Thanks! This was our first big event (>10), so it was kind of a trial by fire. Glad that we could pull it off (obviously with the help of the community). Lots of lessons to digest and incorporate for the next iteration.

Arguments made in https://epoch.ai/gradient-updates/what-will-the-imo-tell-us-about-ai-math-capabilities were so prescient! 


> If the 2025 IMO happens to contain 0 or 1 hard combinatorics problems, it’s entirely possible that AlphaProof will get a gold medal just by grinding out 5 or 6 of the problems—especially if there happens to be a tilt toward hard geometry problems. This would grab headlines, but wouldn’t be much of an update over current capabilities. Still, it seems pretty likely: I give a 70% chance to AlphaProof winning a gold medal overall, with almost all of that chance coming from just such a scenario.

But in fact it turned out to be a more general reasoning model than AlphaProof.

We're literally using the economic proceeds from attention extraction to build artificial attention mechanisms that might make human attention obsolete.

It's kind of ironic how one of the largest efforts to build a sand god is based on an architecture that is also >80% funded by an economy of the same name.

Hi Sanjay, yeah, we are planning to organise future meetups in Bangalore. Do fill out the form so that we can keep you updated.

Since the drones are centrally produced, they could easily implement digital watermarks for provenance.
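
As a minimal sketch of what such a provenance watermark could look like, assuming the manufacturer holds a signing secret (the key, IDs, and field layout below are all hypothetical; a real deployment would more likely use a public-key scheme such as Ed25519, so third parties could verify without the secret):

```python
import hashlib
import hmac

# Hypothetical per-manufacturer secret; in practice this would live in an HSM.
MANUFACTURER_KEY = b"example-secret-key"

def make_watermark(drone_id: str, batch: str) -> str:
    """Tag a drone's identity fields so provenance can be checked later."""
    message = f"{drone_id}|{batch}".encode()
    return hmac.new(MANUFACTURER_KEY, message, hashlib.sha256).hexdigest()

def verify_watermark(drone_id: str, batch: str, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = make_watermark(drone_id, batch)
    return hmac.compare_digest(expected, tag)

tag = make_watermark("DRN-00421", "2025-W14")  # illustrative IDs
assert verify_watermark("DRN-00421", "2025-W14", tag)
```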

Probably do a screen recording, to scale later with AI.

Jeffrey, I appreciate your points about fusion's potential, and the uncertainty around "foom." However, I think framing this in terms of bottlenecks clarifies the core difference. The Industrial Revolution was transformative because it overcame the energy bottleneck. Today, while clean energy is vital, many transformative advancements are primarily bottlenecked by intelligence, not energy. Fusion addresses an important, existing constraint, but it's a step removed from the frontier of capability. AI, particularly AGI, directly targets that intelligence bottleneck, potentially unlocking progress across virtually every domain limited by human cognitive capacity. This difference in which bottleneck is addressed makes the potential transformative impact, and thus the strategic landscape, fundamentally distinct. Even drastic cost reductions in energy don't address the core limiting factor for progress in areas fundamentally constrained by our cognitive and analytical abilities.

I think a crucial distinction, which you touch on but perhaps don't fully emphasize, lies in the downstream consequences of success in each field. While both are transformative, the nature of that transformation is radically different.

Fusion, if achieved at scale, primarily addresses the energy sector. It's a substitutional technology. It replaces fossil fuels with a cleaner, more abundant alternative, mitigating climate change and potentially altering geopolitical power dynamics related to energy resources. This is undeniably significant. However, beyond the energy sector and related geopolitical shifts, its direct impact on other aspects of the economy and society is likely to be relatively contained. It doesn't fundamentally rewrite how most things are done. We still need doctors, teachers, farmers, artists, etc., and their jobs, while perhaps indirectly affected by cheaper energy, are not fundamentally changed in character.

AI, particularly AGI, is fundamentally different. It's not merely substitutional; it's augmentative and potentially autonomous across a vast range of cognitive tasks. This has the potential to reshape virtually every human activity, from scientific research (including, ironically, accelerating fusion research, as you point out) to art, governance, and even warfare. The breadth and depth of potential change are orders of magnitude greater than with fusion.

Furthermore, AI exhibits a strong tendency towards winner-take-all (or winner-take-most) dynamics. The recursive self-improvement potential of a sufficiently advanced AI creates a powerful feedback loop. Once an entity achieves a certain threshold of general intelligence, it can potentially improve itself at an accelerating rate, making it exceedingly difficult for others to catch up. This "foom" potential, however unlikely some may deem it, creates a qualitatively different strategic landscape compared to fusion. The order of arrival matters immensely in AI in a way it simply doesn't in fusion. It's not just about achieving AGI; it's about who achieves it first and what safeguards are in place before that happens.
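
One way to make this feedback loop concrete (a toy model of my own, with purely illustrative constants, not a forecast) is to let capability I grow at rate k·I^α. For α ≤ 1 growth is at most exponential and rivals can keep pace; for α > 1 the trajectory diverges in finite time, which is essentially the "foom" scenario:

```python
def simulate(alpha: float, k: float = 0.1, i0: float = 1.0,
             dt: float = 0.01, steps: int = 10_000) -> list[float]:
    """Forward-Euler integration of the toy model dI/dt = k * I**alpha."""
    trajectory = [i0]
    for _ in range(steps):
        i = trajectory[-1]
        i += dt * k * i ** alpha
        trajectory.append(i)
        if i > 1e12:  # treat as effectively divergent and stop
            break
    return trajectory

steady = simulate(alpha=1.0)  # exponential growth: catch-up stays possible
foom = simulate(alpha=1.5)    # superlinear growth: runaway in finite time
print(steady[-1], foom[-1])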

While both fusion and AI are important technological pursuits, the stakes and strategic implications are vastly different. The cooperative model described in the fusion community might be well-suited to its particular landscape. However, given the potential for rapid, self-driven escalation and the winner-take-all dynamics of AGI, a purely cooperative approach in AI seems, at best, strategically naive and, at worst, existentially risky. The incentive structures are, in fact, very different because of the outcomes, even if the initial industry structures appear superficially similar. Even if there is collaboration, the incentives are heavily tilted towards defection, since defecting first confers an immense advantage.
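
To make that defection incentive concrete, here is a toy payoff matrix in the style of a one-shot prisoner's dilemma (the numbers are purely illustrative assumptions, not estimates of anything):

```python
# Each entry: (payoff to Row lab, payoff to Column lab), in arbitrary units
# of relative strategic advantage. Illustrative values only.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # shared safety work, slower race
    ("cooperate", "defect"):    (0, 5),  # the defector reaches AGI first
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # racing with little safety margin
}

# Defection strictly dominates: whatever the other lab does, defecting
# yields the higher row payoff, so cooperation is fragile without enforcement.
for other in ("cooperate", "defect"):
    assert payoffs[("defect", other)][0] > payoffs[("cooperate", other)][0]
```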
