bhishma

I am broadly interested in theoretical computer science and neuroscience. Recently I've been thinking more about existential risks and risks from advanced AI in particular.


Comments


Probably do a screen recording to scale later with AI


Jeffrey, I appreciate your points about fusion's potential, and the uncertainty around "foom." However, I think framing this in terms of bottlenecks clarifies the core difference. The Industrial Revolution was transformative because it overcame the energy bottleneck. Today, while clean energy is vital, many transformative advancements are primarily bottlenecked by intelligence, not energy. Fusion addresses an important, existing constraint, but it's a step removed from the frontier of capability. AI, particularly AGI, directly targets that intelligence bottleneck, potentially unlocking progress across virtually every domain limited by human cognitive capacity. This difference in which bottleneck is addressed makes the potential transformative impact, and thus the strategic landscape, fundamentally distinct. Even drastic cost reductions in energy don't address the core limiting factor for progress in areas fundamentally constrained by our cognitive and analytical abilities.


I think a crucial distinction, which you touch on but perhaps don't fully emphasize, lies in the downstream consequences of success in each field. While both are transformative, the nature of that transformation is radically different.

Fusion, if achieved at scale, primarily addresses the energy sector. It's a substitutional technology. It replaces fossil fuels with a cleaner, more abundant alternative, mitigating climate change and potentially altering geopolitical power dynamics related to energy resources. This is undeniably significant. However, beyond the energy sector and related geopolitical shifts, its direct impact on other aspects of the economy and society is likely to be relatively contained. It doesn't fundamentally rewrite how most things are done. We still need doctors, teachers, farmers, artists, etc., and their jobs, while perhaps indirectly affected by cheaper energy, are not fundamentally changed in character.

AI, particularly AGI, is fundamentally different. It's not merely substitutional; it's augmentative and potentially autonomous across a vast range of cognitive tasks. This has the potential to reshape virtually every human activity, from scientific research (including, ironically, accelerating fusion research, as you point out) to art, governance, and even warfare. The breadth and depth of potential change are orders of magnitude greater than with fusion.

Furthermore, AI exhibits a strong tendency towards winner-take-all (or winner-take-most) dynamics. The recursive self-improvement potential of a sufficiently advanced AI creates a powerful feedback loop. Once an entity achieves a certain threshold of general intelligence, it can potentially improve itself at an accelerating rate, making it exceedingly difficult for others to catch up. This "foom" potential, however unlikely some may deem it, creates a qualitatively different strategic landscape compared to fusion. The order of arrival matters immensely in AI in a way it simply doesn't in fusion. It's not just about achieving AGI; it's about who achieves it first and what safeguards are in place before that happens.
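The feedback-loop point can be made concrete with a toy compounding model. All numbers here are purely illustrative assumptions, not a forecast: the point is only that if capability compounds in proportion to itself, a small initial lead becomes a large absolute gap even when both parties improve at the same rate.

```python
# Toy sketch of the "recursive self-improvement" feedback loop.
# Growth rate and starting capabilities are made-up illustrative values.
def capability(start, rate, steps):
    c = start
    for _ in range(steps):
        c += rate * c  # each step's improvement scales with current capability
    return c

leader = capability(1.0, 0.1, 50)    # first mover
follower = capability(0.9, 0.1, 50)  # starts slightly behind, same rate
gap = leader - follower
# Both grow at the same proportional rate, yet the absolute gap widens
# from 0.1 to 0.1 * 1.1**50 -- which is why order of arrival matters.
```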

While both fusion and AI are important technological pursuits, the stakes and strategic implications are vastly different. The cooperative model described in the fusion community might be well-suited to its particular landscape. However, given the potential for rapid, self-driven escalation and the winner-take-all dynamics of AGI, a purely cooperative approach in AI seems, at best, strategically naive and, at worst, existentially risky. The incentive structures are, in fact, very different because of the outcomes, even if the initial industry structures appear superficially similar. Even if there is collaboration, the incentives are heavily tilted towards defection, since defecting first pays off immensely.


Have you looked into this paper? https://arxiv.org/abs/2311.04378

Looks like there might be some test data leakage https://codeforces.com/blog/entry/123035

This is very similar to Julia Galef's framing of Hayekians vs Central Planners, which I have found quite useful for looking at these sorts of dynamics. It's also a bit like the exploration/exploitation tradeoff: initially, when you have high uncertainty, it makes sense to wander and follow your curiosity, as that tends to be more productive. Once you've gathered enough knowledge about something, it's much easier to exploit it.
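The exploration/exploitation tradeoff is the classic multi-armed bandit setting. A minimal epsilon-greedy sketch (the arm means, epsilon, and step count are made up for illustration) shows the same pattern: sample widely while estimates are uncertain, then mostly exploit the best-known option.

```python
import random

# Minimal epsilon-greedy bandit sketch; all parameters are illustrative.
def epsilon_greedy(true_means, steps, epsilon):
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(true_means))  # explore: wander
        else:
            arm = estimates.index(max(estimates))    # exploit: best-known arm
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental update of the running mean reward for this arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

random.seed(0)
avg = epsilon_greedy([0.2, 0.5, 1.0], steps=5000, epsilon=0.1)
# With enough samples the agent mostly pulls the best arm (mean 1.0),
# so the average reward approaches it despite the exploration tax.
```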

Before the toddler ever hears the word,

It goes back even further for certain visual stimuli:

We examined fetal head turns to visually presented upright and inverted face-like stimuli. Here we show that the fetus in the third trimester of pregnancy is more likely to engage with upright configural stimuli when contrasted to inverted visual stimuli, in a manner similar to results with newborn participants. The current study suggests that postnatal experience is not required for this preference.

https://www.cell.com/current-biology/fulltext/S0960-9822(17)30580-8#secsectitle0015

Kudos to the speaker; as a (physics) layman I found it really well explained. The connection between renormalization group flows and phase transitions was really elegant.


Alas, being slightly over the subtly-warped-judgment line is like taking one drink – sure it only impairs your judgment a little, but one of the things you might do with slightly impaired judgment is to take another drink. (Or, say, foster more emotional closeness with someone who you wouldn’t endorse eventually having sex with).

This is the sort of situation where you need to erect Schelling fences.
