A few examples (in approximately increasing order of controversy):
If you proceed anyway...
- Identify knowledge that may be dangerous. Forewarned is forearmed.
- Try to cut dangerous knowledge out of your decision network. Don’t let it influence other beliefs or your actions without your conscious awareness. You can’t succeed completely at this, but it might help.
- Deliberately lower dangerous priors, by acknowledging the possibility that your brain is contaminating your reasoning and then overcompensating, because you know that you’re still too overconfident.
- Spend a disproportionate amount of time seeking contradictory evidence. If believing something could have a great cost to your values, make a commensurately great effort to be right.
- Just don’t do it. It’s not worth it. And if I found out, I’d have to figure out where you live, track you down, and kill you.
Intractable. Brain inputs are partially dependent on brain outputs. Thus you would need to exclude all inputs originating from inside the future light cone of the space-time point of the brain's formation in order to rule out the brain's participation in the causal chain. This would render reasoning about brain functions nearly impossible.
To rephrase myself:
The set of all possible inputs is larger and much more diverse than the set of all human brains.
The vast majority of inputs will be processed the same way by most brains.
The output depends much more on the input than on the brain.
See this now?