Part of the problem I'm having with your example is how large the gap seems between what you are talking about and WrongBot's examples.
What is the axis along which the gap lies? Is it the degree of uncertainty about when it will be safe to learn the dangerous knowledge?
That's part of it, and also how far into the future one thinks that point might be.
A few examples (in approximately increasing order of controversy):
If you proceed anyway...