I think if I got asked randomly at an AI conference if I knew what AGI was I would probably say no, just to see what the questioner was going to tell me.
Saying "I have no intention to kill myself, and I suspect that I might be murdered" is not enough.
Frankly, I do think this would work in many jurisdictions. It didn't work for John McAfee because he had a history of crazy remarks, it sounded like the sort of thing he'd do to save face or generate intrigue if he actually did plan on killing himself, and he made no specific accusations. But if you really thought Sam Altman's head of security was going to murder you, you'd probably change their personal risk calculus dramatically by saying so repeatedly on the internet. Just make sure you also contact the police specifically with what you know, so that the threat is legible to them as an institution.
If someone wants to murder you, they can. If you ever walk outside, you can't avoid being shot by a sniper.
If the person or people trying to murder you are omnicompetent, then it's hard. If they're regular people, there are at least plenty of temporary measures you can take that would make the job more difficult. You can fly to a random state or country and check into a motel without telling anybody where you are. Or you could gather a bunch of friends and stay in a basement somewhere. Mobsters used to call hiding out like that until a threat had receded "going to ground".
Wearing a camera that streams to the cloud 24/7, with friends who can publish the video in the event of your death... seems a bit much. (Also, it wouldn't protect you against, e.g., being poisoned. But I don't think that's typically how whistleblowers die.) Is there something simpler?
You could move to New York or London, where your every move outside a private home or apartment is already recorded. Then place a security camera in your house.
Tapping the sign:
I have a draft that has wasted away for ages. I will probably post something this month though. Very busy with work.
The original comment you wrote appeared to be a response to "AI China hawks" like Leopold Aschenbrenner. Those people do accept the AI-is-extremely-powerful premise, and are arguing for an arms race based on that premise. I don't think whether normies can feel the AGI is very relevant to their position, because one of their big goals is to make sure Xi is never in a position to run the world, and completing a Manhattan Project for AI would probably prevent that regardless (even if it kills us).
If you're trying to argue instead that the Manhattan Project won't happen, then I'm mostly ambivalent. But I'll remark that that argument feels a lot shakier in 2024 than it did in 2020, when Trump's daughter is literally retweeting Leopold's manifesto.
No, my problem with the hawks, as far as this criticism goes, is that they aren't repeatedly and explicitly saying what they will do.
One issue with "explicitly and repeatedly saying what they will do" is that it invites competition. Many of the things that China hawks might want to do would fall outside the Overton window. As Eliezer describes in AGI Ruin:
The example I usually give is "burn all GPUs". This is not what I think you'd actually want to do with a powerful AGI - the nanomachines would need to operate in an incredibly complicated open environment to hunt down all the GPUs, and that would be needlessly difficult to align. However, all known pivotal acts are currently outside the Overton Window, and I expect them to stay there. So I picked an example where if anybody says "how dare you propose burning all GPUs?" I can say "Oh, well, I don't actually advocate doing that; it's just a mild overestimate for the rough power level of what you'd have to do, and the rough level of machine cognition required to do that, in order to prevent somebody else from destroying the world in six months or three years."
What does winning look like? What do you do next? How do you "bury the body"? You get AGI and you show it off publicly, Xi blows his stack as he realizes how badly he screwed up strategically and declares a national emergency and the CCP starts racing towards its own AGI in a year, and... then what? What do you do in this 1 year period, while you still enjoy AGI supremacy? You have millions of AGIs which can do... stuff. What is this stuff? Are you going to start massive weaponized hacking to subvert CCP AI programs as much as possible short of nuclear war? Lobby the UN to ban rival AGIs and approve US carrier group air strikes on the Chinese mainland? License it to the CCP to buy them off? Just... do nothing and enjoy 10%+ GDP growth for one year before the rival CCP AGIs all start getting deployed? Do you have any idea at all? If you don't, what is the point of 'winning the race'?
The standard LW & rationalist thesis (which AFAICT you agree with) is that sufficiently superintelligent AI is a magic wand that allows you to achieve whatever outcome you want. So one answer would be to prevent the CCP from doing potentially nasty things to you while they have AGI supremacy. Another answer might be to turn the CCP into a nice liberal democracy friendly to the United States. Both of these are within the range of things the United States has done historically when it has had the opportunity.
I'm flabbergasted by this degree/kind of altruism. I respect you for it, but I literally cannot bring myself to care about "humanity"'s survival if it means the permanent impoverishment, enslavement, or starvation of everybody I love. That future is simply not much better by my lights than everyone, including the GPU-controllers, meeting a similar fate. In fact I think my instinct is to hate that outcome more, because it's unjust.
Slight correction: catastrophic job loss would destroy the ability of the non-landed, working public to participate in and extract value from the global economy. The global economy itself would be fine. I agree this is a natural conclusion; I guess people were hoping to get 10 or 15 more years out of their natural gifts.