Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
My explanation is that hunches are based on aggregate data that you are not capable of tracking explicitly.
Hunches aren't scientific, and they're not much use socially, since anyone can claim to have a hunch. That being said, if you trust someone to be honest, and you know the track record of their hunches, there's no less reason to trust their hunches than your own.
I mean ignore the emotion for the purposes of coming up with a solution.
Overconfidence bias causes you to take too many risks. Risk aversion causes you to take too few. I doubt they cancel each other out that well. It would probably be best to get rid of both. But I'd bet that getting rid of just one of them, causing you to either consistently take too many risks or consistently take too few, would be worse than keeping both.
Emotions are more about considering theories than finding them. That being said, you don't come up with theories all at once. Your emotions will be part of how you refine the theories, and they will be involved in training whatever heuristics you use.
I'm certainly not arguing that rationality is entirely about emotion. But anything with a significant effect on your cognition should be seriously considered as part of rationality before you reject it.
This looks like you're talking about terminal values. The utility function is not up for grabs. You can't convince a rational agent that your goals are worth achieving regardless of the method you use. Am I misunderstanding this comment?