Related topic: Death by SnuSnu.
Says : https://x.com/ESYudkowsky/status/1681372655147163648?s=20
coming in 2024: the incel/acc movement, motto "give us waifus and give us death", professing that the ideal outcome is a couple of years with an android catgirl followed by quick extermination
Sad part about the human condition is raise the offer 2 OOM (200 years without aging and catgirl companions and catboy companions for the recipient's preferred genders), and that's a better offer than any human has ever gotten in 300k years of existence.
Even 1 OOM (20 years) may beat the life experience EV of 50% of the population.
I think we also need to teach AI researchers UI and graphics design. Most of the field's software prints boring things to console, or at most has a slow and annoying web dashboard with a few graphs. The machine which kills us all should instead have a cool scifi interface with nice tabulation, colors, rectangles, ominous targeting reticles, and cryptic text in the corners.
Introduction
In the two years since the original publication of Death with Dignity, it has been clear that we're not back, it's so over, and that it has never been more over. AI capabilities leapt forward across domains, the general public noticed the existence of AI and threw money and GPUs at anything vaguely related to it, and increasingly incorrect opinions and degradation of terminology have polluted any attempt to discuss the problem. In this environment, Death with Dignity is an ever-more-attractive notion. But it makes a critical mistake.
Given our inevitable demise, death with dignity has the right idea, but it fails to consider the well-established results of fun theory: if you are dying of a terminal disease, is it better to go out peacefully and slowly on a hospital bed surrounded by family, or jumping a motorbike across a ravine filled with cloned raptors trying to chase you? I think the correct answer is obvious.
Something like interpretability work allowing possible dangerous mesaoptimization in a new model to be detected and ignored, or a new mathematical theory of agency which will never be usefully applied to a real inscrutable blob of floats (or, nowadays, ternary numbers) is not, by itself, awesome. While bending such theories to the task of saving the world from AI doom would be awesome, this is unlikely, as Death with Dignity already argues. So what's left? Making our imminent paperclipping more awesome. This is a relatively tractable problem, as I'm about to illustrate.
Avenues for Awesomeness
As TVTropes has taught us, awesomeness requires some sense of meaningful human participation. Naively, this requires actual ability to change the outcome, which is of course impossible: however, this is more of an aesthetic requirement than a functional one - people are generally fine with railroaded fiction, and more generally with watching things play out even when the outcome is known in advance.
The obvious solution, then, is to engineer a dramatic final battle with the AGI - ideally, people get to fight through an army of robots to access an automated OpenMetaAmazGoogMind datacentre (ideally an oversized vertically stacked one), break in, and then engage in a hacking scene (with at least five monitors, and sunglasses) to (futilely) try and shut it down.
How can we achieve this?
Even reaching this is nontrivial, due to continued problems with all AI alignment agendas. Most notably, a competent, consequentialist, unaligned AI will act in secret, and not allow anything which even looks like interference by humans, except possibly as a distraction. This is unfortunate, as it is significantly more awesome (in some sense) to die to a truly superintelligent adversary than to a barely functional AutoGPT-5, all else equal, and problematic, since avoiding this reliably would require a commitment from AI developers to avoid creating competent, consequentialist agents, which is impossible.
However, prosaic alignment approaches such as RLHF could be sufficient for the Death with Awesomeness agenda, due to the weaker robustness requirements. For example, language model RLHF datasets could be altered to include examples of good world domination plans (ones which are ostentatious, showy and complicated) in opposition to bad ones (subtle, hidden, superhuman and obviously unstoppable plans). In combination with other approaches, such as enhancing cyber- and bio-security, this could lock out fast and quiet paths to AI doom efficiently enough to allow a final showdown with AGI.
Conclusion
We believe Death with Awesomeness represents a significant advancement in the field of "Death with X" LessWrong posts.