The way to die with dignity is to genuinely intend to succeed even as we accept that we will likely fail.
Research how to transfer knowledge from trained ML systems to humans.
An example: It was a great achievement when AlphaGo and later systems defeated human Go masters. It would be an even greater achievement for the best computer Go systems to lose to human Go masters, because that would mean that the knowledge these systems had learned from enormous amounts of self-play had been successfully transferred to humans.
Another example: Machine learning systems that interpret medical X-ray images or perform other diagnostic functions may become better than human doctors at this (or, even if not better overall, better in some respects). Transferring their knowledge to human doctors would produce superior results, because the human doctor could integrate this knowledge with other knowledge that may not be available to the computer system (such as the patient's demeanor).
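One hedged sketch of what such transfer could look like in practice: interpretability tools such as gradient-based saliency maps let a human expert see which regions of an input a trained model is relying on. This is only one candidate route, not something the original comment specifies; the model, "X-ray", and helper function below are hypothetical stand-ins for illustration.

```python
# Minimal sketch: gradient-based saliency as one (very partial) way to surface
# what a trained image model has learned, so a human expert can inspect it.
# The model and the "X-ray" tensor are hypothetical stand-ins, not a real system.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder for a trained diagnostic model
model.eval()

def saliency_map(model, image):
    """Return |d(top-class score)/d(pixel)| as a crude importance heatmap."""
    image = image.clone().requires_grad_(True)
    scores = model(image.unsqueeze(0))          # add batch dimension
    top_class = scores.argmax(dim=1).item()     # class the model is most confident in
    scores[0, top_class].backward()             # gradient of that score w.r.t. pixels
    return image.grad.abs().max(dim=0).values   # collapse colour channels

# Usage: a random tensor stands in for a real chest X-ray.
fake_xray = torch.rand(3, 224, 224)
heatmap = saliency_map(model, fake_xray)
print(heatmap.shape)  # torch.Size([224, 224]) -- regions the model attends to
```

A heatmap like this is a long way from genuine knowledge transfer, but it is the kind of artifact a human doctor could actually study and combine with their own judgment.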
From the x-risk standpoint, it seems quite plausible that a better ability to transfer knowledge would allow humans both to more successfully "keep up" with the AIs and to better understand how they may be going wrong.
This line of research has numerous practical applications, and hence may be feasible to promote, especially with a bit of "subsidy" from those concerned about x-risks. (Without a subsidy, it's possible that just enhancing the capability of ML systems would seem like the higher-return investment.)
This somewhat happened in chess: today's top players are much stronger than they were twenty years ago, largely thanks to new understanding brought by computers. Carlsen or Caruana would probably beat Deep Blue handily.
One method would be to take advantage of low-hanging fruit not directly related to x-risk. Clearly motivation isn't enough to solve these problems (and I'm not just talking about alignment), so we should be trying to optimize all our resources, and that includes removing major bottlenecks like [the imagined example of] a badly designed shipping route causing hunger that kills intelligent, benevolent potential researchers in a particular area.
A real-life example of this would be the Rationalist community's efforts to promote more efficient methods of non-scientific analysis (i.e. cases where you can't afford the effort required for scientific rigor but still want a correct answer). This helps not only in x-risk efforts, but also in the preliminary stages of academic research and [presumably] entrepreneurship as well. We could step up our efforts here, particularly in college environments, where it would improve people's effectiveness whether or not they bought into other aspects of this subgroup's culture, like the urgency of anti-x-risk measures.
Another approach is to branch out in multiple directions. We're essentially searching for a miracle at this point (to my understanding, in the Death with Dignity post Eliezer's main reason for rejecting unethical behaviors that might, maybe, possibly lead to success is that they're still less reliable than miracles and reduce our chances of finding any). So we need a much broader range of approaches to solving or avoiding these problems, to increase the likelihood that we get close enough to a miracle solution to spot it.
For instance, most effort on AGI safety so far has focused on the alignment and control problems, but we might want to pay more attention to how we might keep up with a self-optimizing AGI by augmenting ourselves, so that human society is never dominated by an inhuman (and thus likely unaligned) cognition. This would involve not only the existing line of study in Intelligence Augmentation (IA), but also ways to integrate it with AI insights to keep ahead of an AI in its likely fields of superiority. It also relates to the social landscape of AI, in that we'd need to draw resources and progress away from autonomous AI and towards IA.
Working on global poverty seems unlikely to be a way of increasing our chances of succeeding at alignment. If anything, it would likely increase the number of both future alignment researchers and future capabilities researchers, so it's unlikely to significantly shift our chances.
Augmentation is potentially more promising. My main worry is that plugging computers into our brains makes us more vulnerable to hacking, which might even make it easier for things to go wrong. That said, it could still be positive in expectation.
I don't know how much money has been spent on AI safety, but if we went out without having spent $1 billion, that would seem undignified. Same if we went out without spending at least 10% of our available funds.
We could, in principle, decide that the survival of humanity in its current form (which is various shades of unlikely, depending on who you believe) is no longer a priority, and focus on different goals that are still desirable in the face of likely extinction. For example: preemptively stopping any unambiguously hostile activities towards the future AGI, like alignment research, and instead starting to work on aligning human interests with the AGI's.
These are just off the top of my head, and I'm sure there are many more available once the survival requirement is removed.
Alignment research is not necessarily hostile towards AGIs. AGIs also have to solve alignment to cooperate with each other and not destroy everything on Earth.
Fix Swapcard's user interface before the apocalypse.
Not really an answer, but a slightly different source of doom: what would be a way to "die with more dignity" in other x-risk scenarios, like "Don't Look Up", "Project Hail Mary", "Oryx and Crake", or...?
Eliezer's recent kidding-not-kidding Death with Dignity post suggests that our chances of survival are so low that we should just focus on going out with some semblance of dignity (i.e. "at least we made an attempt that wasn't truly pathetic").
Or at least that's what it seems to be claiming initially. If you read it to the end, it becomes clear that Eliezer is proposing a frame for thinking about how to act in scenarios with a low probability of success[1]. In particular, he seems to be criticizing the tendency of people to say "well, we need to assume X, because otherwise we would be doomed anyway", as the tendency is to pile on multiple assumptions and end up focusing on an overly specific case. In contrast, Eliezer suggests that it is better to position yourself such that you would be able to take advantage of positive model violations in general, as then you aren't just betting on one particular scenario.
I often find that reframing a problem is quite conducive to producing solutions, so I'm asking how we could actually die with more dignity[2].
I find it fascinating how post-rationalist this is.
People could just post their thoughts on the original post, but this question serves as a nudge to actually attempt to generate solutions rather than just grimly reflecting on it.