You could look into whether dragons exist with the plan that you will never reveal any findings no matter what they are. I get that you probably wouldn't bother because most paths by which that information could be valuable require you to leak it, but it's an option.
Mostly for @habryka's sake: it sounds like you are likely describing your unvested equity, or possibly equity that gets clawed back on quitting. Neither of which is (usually) tied to signing an NDA on the way out the door - they'd both be lost simply due to quitting.
The usual arrangement is some extra severance payment tied to signing something on your way out the door, and that's usually way less than the unvested equity.
EDIT: Turns out OpenAI's equity terms are unusually brutal and it is indeed the case that the equity clawback was tied to signing the NDA.
Nope!
(DMed the most recent residents - they moved out at the end of the lease term about a month ago)
I was the co-game-master for 2018 Oxford/Seattle and had to make a call about whether the game-end launch was legit. Your telling is accurate - the guy who pressed the button indeed acted unilaterally and (he claims) thought the button was disabled.
Setting it all up was damned expensive: she died at ninety, and about 70 years of redaction time multiplied by a typical human metabolic rate and mass adds up to a lot of redaction entropy. Look at the price of energy, convert the kilowatt-hours, and it came to a lot of money. She had to set up an on-death remortgage of her home to cover it, even with the subsidies.
A typical human consumes maybe 3000 kcal of food per day, which is about 3.5 kWh. The current price of electricity in the US is about $0.17/kWh. Do all the math and you get an electricity cost of about $20,000 for a 90-year reversal, if the reversal consumes roughly the metabolic energy spent. Which doesn't seem ridiculously expensive compared to what a human can save in a lifetime (or 10x that cost, if you imagine it takes 10 J to reverse the entropy of 1 J of metabolism).
Are you imagining a much less efficient process?
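For what it's worth, here's a quick sanity check of the arithmetic above (the 3000 kcal/day, 90-year, $0.17/kWh figures are the ones assumed in the comment, not independently sourced):

```python
# Back-of-envelope check of the redaction-energy cost estimate.
KCAL_TO_KWH = 4184 / 3.6e6  # 1 kcal = 4184 J; 1 kWh = 3.6e6 J

daily_kwh = 3000 * KCAL_TO_KWH       # metabolic energy per day, ~3.49 kWh
total_kwh = daily_kwh * 365 * 90     # 90 years of metabolism to reverse
cost = total_kwh * 0.17              # at $0.17/kWh

print(round(daily_kwh, 2))  # 3.49
print(round(cost))          # 19471, i.e. roughly $20,000
```

So the ~$20,000 figure holds up if reversal costs about as much energy as the original metabolism did; a 10x overhead would put it near $200,000.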
I have sometimes considered this but worry that doing so will lower the cost of capital for AGI-constructing companies and accelerate AGI development.
I'm not sure this is a realistic concern for Google/Alphabet - I think they have not bothered to raise capital since the Google IPO and aren't about to start.
Can you link to your comment?
My policy with microcovid from the very beginning has been to look at the numbers and basically ignore the "high risk / medium risk" designations, because they don't match my risk tolerance, and that divergence has only grown over the course of the pandemic (now that I'm vaccinated, each microCOVID costs me less, since an infection would carry less risk of serious harm).
Not from the very beginning, but for most of this year this is how I've used microcovid.
It already seems like we can infer that dragon-existence has nontrivial subjective likelihood to you, both because you don't loudly proclaim "dragons don't exist" and because you regard investigation as uncomfortably likely to turn you into a believer in something socially unacceptable.
If you think it's in fact, like, 20% likely (a reasonable "nontrivial likelihood" guess for people to make), seems like the angry dragons-don't-exist people should be 20% angry at you.