A reasonable guideline is limiting the human-caused xrisk to several orders of magnitude below the natural background xrisk level, so that human-caused dangers are lost in the noise compared to the pre-existing threat we must live with anyway.
I like this idea [...]
We are well above that threshold right now, and that's very unlikely to change before we have machine superintelligence.
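To make the guideline concrete, here's a minimal sketch of the arithmetic. The background rate and the size of the margin are illustrative assumptions of mine, not figures from the paper:

```python
# Minimal sketch of the "several orders of magnitude below background"
# guideline. The numbers below are hypothetical placeholders, not
# estimates from Kornai's paper.

natural_background_risk = 1e-8   # assumed natural xrisk per year (illustrative)
safety_margin_orders = 3         # "several orders of magnitude"

# The guideline caps acceptable human-caused xrisk at the background
# rate divided by 10^margin.
max_human_caused_risk = natural_background_risk / 10 ** safety_margin_orders

print(f"Acceptable human-caused xrisk: {max_human_caused_risk:.0e} per year")
# With these placeholder inputs, that's 1e-11 per year: anything above
# this would no longer be "lost in the noise" relative to the background.
```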
For those of you interested, András Kornai's paper "Bounding the impact of AGI," presented at this year's AGI-Impacts conference at Oxford, has a few interesting ideas (which I've excerpted below).
Summary: