Shall We Throw A Huge Party Before AGI Bids Us Adieu?
I don't think there is much more to this post than what the title says, but I'll add some details anyway. Essentially, it has become increasingly obvious that, despite our best efforts, progress in AI alignment and other safety work has been, well... minimal. Yet the predictions are...

well, I am not arguing for ceasing AGI safety efforts, or claiming that they are unlikely to succeed. I am just claiming that if there is a high enough chance that they will be unsuccessful, we might as well make some relatively cheap and simple effort to make that case somewhat more pleasant (although, fair enough, the post might be too direct).
Imagine that you had an illness with a 30% chance of killing you within the next 7 years (I hope you don't). It would likely affect your behaviour: you would want to spend your time differently and perhaps create some memorable experiences, even though your chance of survival is still fairly high.
Given this, it seems surprising that when it comes to AGI-related risks, such tendencies to live life differently are much weaker, even though many people assign similar probabilities to catastrophe. Is that rational?