It might mollify people who disagree with the current implicit policy, and make discussion of the policy easier. Here's one option:
There's a single specific topic that's banned because the moderators consider it a Basilisk. You won't come up with it yourself, don't worry. Posts talking about the topic in too much detail will be deleted.
One requirement would be that the policy be no more and no less vague than needed for safety.
Discuss.
My understanding is that the post itself isn't the x-risk: a UFAI could think this up on its own. The reaction to the post is supposedly the x-risk: if we let on that we can be manipulated that way, then a UFAI can do extra harm.
But if you want to show that you won't be manipulated in a certain way, the right way to do that seems to be to tear that approach apart and demonstrate its silliness, not to try to erase it from the internet. I can't come up with a metric by which EY's approach is reasonable.
(These concerns aren't necessarily limited to existential risk or to UFAI, but we cannot discuss that here.)
Agree. :)