I think it's pretty well-established here that having accurate beliefs shouldn't actually hurt you.
Not at all. It is well established that having accurate beliefs should not hurt a perfect Bayesian intelligence. Believing that this applies to mere humans would be naive in the extreme.
It's not a good strategy to change your actual beliefs so that you can signal more effectively -- and it probably wouldn't work, anyway.
The fact that we are so damn good at it is evidence to the contrary!
I'm jealous of all these LW meetups happening in places that I don't live. Is there not a sizable contingent of LW-ers in the DC area?