I think actual infohazardous information is fairly rare. Far more common is a fork: you have some idea or statement, you don't know whether it's true or false (typically leaning false), and you know that either it's false or it's infohazardous. Examples include unvalidated insights about how to build dangerous technologies, and most acausal trade/acausal blackmail scenarios. Phrased slightly differently: "infohazardous if true".
If something is wrong/false, it's at least mildly bad to spread/talk about it. (With some exceptions: wrong ideas can sometimes inspire better ones, maybe you want fake nuclear weapon designs to trip up would-be designers, etc.) And if something is infohazardous, it's bad to spread/talk about it, for an entirely different reason. Taken together, these form a disjunctive argument for not spreading the information.
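To make the disjunctive structure explicit, here's a minimal sketch in propositional terms (the notation is mine, purely illustrative): let $p$ be the claim, and let $B$ stand for "spreading $p$ is net-negative". Setting aside the exceptions above:

$$\begin{aligned}
&\neg p \rightarrow B &&\text{(spreading falsehoods is mildly bad)}\\
&p \rightarrow B &&\text{(infohazardous if true)}\\
&p \lor \neg p &&\text{(excluded middle)}\\
&\therefore\ B &&\text{(case analysis)}
\end{aligned}$$

The conclusion $B$ holds on either branch of the fork, which is why you don't need to resolve $p$ before deciding not to spread it.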
I think this trips people up when they see how others relate to things that are infohazardous-if-true. When something is infohazardous-if-true (but probably false), people bias towards treating it as actually-infohazardous; after all, if it's false, there's not much upside in spreading bullshit. Other people seeing this get confused, and either think it's actually infohazardous, or think it isn't but that the first person thinks it is (and therefore think the first person is foolish).
I think this is pretty easily fixed with a slight terminology tweak: simply call things "infohazardous if true" rather than "infohazardous" (adjective form), and call them "fork hazards" rather than "infohazards" (noun form). This clarifies that you only believe the conditional, and not the underlying statement.
I think there are subtypes of infohazard, and this has been known for quite a long time. Bostrom's paper (https://nickbostrom.com/information-hazards.pdf) is only 12 years old, I guess, but that seems like forever.
There are a LOT of infohazards that are not only hazardous if true. There's a ton of harm in deliberate misinformation, and some pain caused by possibilities that are unpleasant to consider, even if it's acknowledged they may not occur. Roko's Basilisk (https://www.lesswrong.com/tag/rokos-basilisk) is an example from our own group.
edit: I further think that un-anchored requests on LW for unstated targets to change their word choices are unlikely to have much impact. It may be that you're putting this here so you can reference it when you call out uses that seem confusing, in which case I look forward to seeing the reaction.
I read this as an experimental proposal for improvement, not an actively confirmed request for change, FWIW.