A few examples (in approximately increasing order of controversy):
If you proceed anyway...
- Identify knowledge that may be dangerous. Forewarned is forearmed.
- Try to cut dangerous knowledge out of your decision network. Don’t let it influence other beliefs or your actions without your conscious awareness. You can’t succeed completely at this, but it might help.
- Deliberately lower dangerous priors: acknowledge the possibility that your brain is contaminating your reasoning, then overcompensate, because you know that you’re still too overconfident (see the sketch after this list).
- Spend a disproportionate amount of time seeking contradictory evidence. If believing something could have a great cost to your values, make a commensurately great effort to be right.
- Just don’t do it. It’s not worth it. And if I found out, I’d have to figure out where you live, track you down, and kill you.
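To make the prior-lowering bullet concrete, here is a minimal sketch in odds form; the discount factor and all numbers are hypothetical, chosen only to show the mechanics, not to suggest a principled correction:

```python
# Toy sketch of deliberately lowering a "dangerous" prior before updating.
# `overconfidence_discount` is an invented knob, not a calibrated constant.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

raw_prior_odds = 3.0           # you feel 3:1 that the dangerous claim is true
overconfidence_discount = 0.5  # deliberately overcompensate for contamination
lowered_prior_odds = raw_prior_odds * overconfidence_discount

evidence_lr = 1.2              # weak supporting evidence
print(posterior_odds(lowered_prior_odds, evidence_lr))  # 1.8, versus 3.6 undiscounted
```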
You don't update values; you update knowledge about values. Knowledge about terminal values might be as incomplete as knowledge about instrumental values. The difference is that with instrumental values you usually start from indifference and update from there, while with "terminal" values you start out with some idea of a preference.
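One way to picture that difference is as a flat prior versus a peaked one, fed through the same update rule; a hedged sketch, with hypotheses and numbers invented purely for illustration:

```python
# Belief about an instrumental value starts flat (indifference); belief about
# a terminal value starts peaked (an existing preference). Same update rule.

def normalize(dist):
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

def update(prior, likelihood):
    """Multiply prior by likelihood pointwise, then renormalize."""
    return normalize({k: prior[k] * likelihood[k] for k in prior})

instrumental = {"A": 0.5, "B": 0.5}   # indifference: flat prior
terminal     = {"A": 0.9, "B": 0.1}   # preference: already peaked

evidence = {"A": 2.0, "B": 1.0}       # evidence mildly favoring A
print(update(instrumental, evidence))  # ~{'A': 0.667, 'B': 0.333}
print(update(terminal, evidence))      # ~{'A': 0.947, 'B': 0.053}
```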
What about newborns? If they have the same terminal values as adults, then the Kolmogorov complexity of terminal values cannot exceed that of the genome. So either a) terminal values are updated, b) terminal values are not very complex, or c) knowledge about terminal values is part of terminal values, which implies a).
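The complexity step here can be written out explicitly; a sketch, assuming newborn terminal values V are produced from the genome G by some fixed developmental program d (the labels V, G, and d are mine, not from the original comment):

```latex
% If a fixed program d maps genome G to newborn terminal values V,
% then any description of G also yields V, up to the constant cost of d:
\[
  V = d(G) \;\Longrightarrow\; K(V) \le K(G) + |d| + O(1)
\]
```

If adult terminal values equal newborn ones, the Kolmogorov complexity of terminal values is therefore bounded by roughly that of the genome, which is the premise the trilemma rests on.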