We need to align the performance of some large task, a 'pivotal act' that prevents other people from building an unaligned AGI that destroys the world.
What is the argument for why it's not worth pursuing a pivotal act without our own AGI? I certainly would not say it was likely that current human actors could pull it off, but if we are in a "dying with more dignity" context anyway, it doesn't seem like the odds are zero.
My idea, which I offer more as a demonstration of what I mean than as a real proposal, would be to develop a "cause area" for influencing military/political institutions as quickly as possible. Yes, I know this sounds too slow, too hard, and a poor match for the community's skills, but consider:
On the off chance we spend some time in a regime where preventable and detectable catastrophic actions might be attempted, it might be a good idea to somehow encourage the creation of a Giant Alarm that would alert previously skeptical experts that a catastrophe almost occurred and, hopefully, freak the right people out.
The steelman version of flailing, I think, is being willing to throw a "hail mary" when you're about to lose anyway. If the expected outcome is already that you die, sometimes an action with naively negative expected value but fat tails can improve your position.
If different hail mary options are mutually exclusive, you definitely want to coordinate to pick the right one and execute it the best you can, but you also need to be willing to go for it at some point.
it's not intuitive to me when it's reasonable to apply geometric rationality in an arbitrary context.
e.g. if i offered you a coin flip where i give you $0.01 with probability 50% and $100 with probability 50%, taking the geometric mean of the payoffs gives G = √(0.01 × 100) = $1, which like, obviously you would go bankrupt really fast valuing things this way.
in kelly logic, i'm instead supposed to take the geometric average of my entire wealth in each scenario, so if i start with $1000, I'm supposed to take √(1000.01 × 1100) ≈ $1048.81, which does the nice, intuitive thing of penalizing me a little for the added volatility, vs. the linear expectation of $1050.005.
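as a sanity check on the arithmetic, here's a minimal sketch in Python using the same coin flip and $1000 starting wealth as above (the helper name geometric_mean is just mine, for illustration), comparing the two ways of taking the geometric mean against the linear expectation:

```python
from math import prod

def geometric_mean(values, probs):
    """Probability-weighted geometric mean: prod(v_i ** p_i)."""
    return prod(v ** p for v, p in zip(values, probs))

payoffs = [0.01, 100.0]   # the two coin-flip outcomes
probs   = [0.5, 0.5]
wealth  = 1000.0          # starting wealth

# first approach: geometric mean of the raw payoffs
naive = geometric_mean(payoffs, probs)                          # 1.0

# kelly-style approach: geometric mean of total wealth in each outcome
kelly = geometric_mean([wealth + x for x in payoffs], probs)    # ~1048.81

# linear expectation of total wealth, for comparison
linear = sum(p * (wealth + x) for p, x in zip(probs, payoffs))  # 1050.005

print(naive, kelly, linear)
```

the kelly-style number lands just below the linear expectation, which is the small volatility penalty described above, while the naive version collapses the whole gamble to $1.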
but... what's the actual rule for knowing the first approach is wrong?