khafra comments on How to always have interesting conversations - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"Even aside from that, what is the point of learning faster, if you end up learning a lot of facts and ideas that aren't true?". Your Bayes Score goes up on net ;-)
I agree that fearing making and not noticing mistakes is much better than not minding the mistakes you don't notice, but you should be able to notice mistakes later, when other people disagree with you or when you can't get your model of the world to reach a certain level of coherence. This is much faster than actively checking every belief. If a belief is wrong, and you have good automatic processes that propagate its implications and draw attention to incoherence when belief nodes get pushed in conflicting directions by the propagation of your other beliefs, you don't even need people to criticize you, let alone criticize you well, though both still help.

I also think that simply wanting true beliefs, without fearing untrue ones, can produce the desired effect. A lot of people try to accomplish with negative emotions things that could be accomplished better with positive emotions. Positive emotions really do produce a greater risk of wireheading, of only wanting to believe your beliefs are correct, in the absence of proper controls, but they don't cost nearly as much mental energy per unit of effort. Increased emotional self-awareness reduces the wireheading risk, since you are more likely to notice the emotional impact of suppressed awareness of errors. Classic meditation techniques, yoga, varied life experience, and physical exercise all boost emotional self-awareness and have positive synergies. I can discuss this more, but once again, unfortunately, mostly only in person, though I can take long pauses in the conversation if reminded.
Perhaps the difference here is one of risk sensitivity: just as a gambler going strictly for long-term gains over the largest number of iterations will use the Kelly Criterion, Michael Vassar optimizes for being the least wrong when scores are tallied up at the end of the game. Wei Dai would instead prefer to minimize the volatility of his wrongness, taking smaller but steadier gains in correctness.
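For concreteness, the Kelly Criterion referenced above prescribes staking the bankroll fraction f* = (bp − q)/b on a bet that pays b-to-1 with win probability p (and loss probability q = 1 − p), betting nothing when that quantity is negative. A minimal sketch in Python (the function name is mine, not from the comment):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on a bet paying
    b-to-1 with win probability p.  Returns 0 when the bet has
    negative edge (never stake on an unfavorable bet)."""
    q = 1.0 - p
    return max(0.0, (b * p - q) / b)

# A 60%-likely win at even odds: Kelly says stake 20% of bankroll.
print(round(kelly_fraction(0.6, 1.0), 3))
# An unfavorable bet: Kelly says stake nothing.
print(kelly_fraction(0.4, 1.0))
```

This maximizes the expected logarithm of the bankroll, which is exactly the "largest number of iterations" framing: it wins in the long run but tolerates large swings along the way, whereas a volatility-minimizer would stake less than the Kelly fraction.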