Not recognizing immediate danger seems like a very bad evolutionary strategy. If you are in an ancestral environment with lots of human-eating lions wandering around, of course you want to recognize them.
My guess would be acute vs. chronic. We react strongly to intermittent acute stimuli, like a spider, but get desensitized to strong but persistent stimuli that don't kill us right away, like a bill payment coming due.
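To make the acute-vs-chronic guess concrete: here is a minimal sketch, assuming a toy habituation model I'm making up for illustration (the damping and recovery constants are arbitrary). The point is just that sensitivity recovers during long gaps between exposures, so rare stimuli keep landing at full strength while a daily stimulus gets damped down.

```python
def responses(exposure_times, damping=0.5, recovery=0.1):
    """Response magnitude at each exposure, under a toy habituation model."""
    sensitivity = 1.0
    out = []
    prev_t = None
    for t in exposure_times:
        if prev_t is not None:
            # Sensitivity recovers toward 1.0 during the gap between exposures.
            sensitivity = min(1.0, sensitivity + recovery * (t - prev_t))
        out.append(round(sensitivity, 3))
        sensitivity *= damping  # habituate after each exposure
        prev_t = t
    return out

# A spider seen once a month (intermittent, acute) vs. a bill
# reminder seen every day (persistent, chronic):
print(responses([0, 30, 60, 90]))        # [1.0, 1.0, 1.0, 1.0]
print(responses([0, 1, 2, 3, 4, 5, 6]))  # decays toward ~0.2
```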
This doesn't always work: sometimes people develop an avoidance to going to the doctor or thinking about their health problems because of this sort of wireheading.
Yes, but I'd like to understand how sometimes it does work.
This sounds like the phenomenon known to LW as ugh fields. So the original question may, to some degree, reduce to: under what circumstances is that phenomenon inhibited even when there is a strong aversive emotional response? To me, the most obvious hypothesis for the “seeing spiders” case would be that immediate sensory processing is more immune to this than more abstract planning procedures for some reason. But I would also expect the sensory version to show up rarely.
Edited to add: I've posted that as a related question, and I also notice that I expect the “learning not to look at things” and “learning not to process things that do show up” subphenomena to be separate, and the former to show up more often.
To me, the most obvious hypothesis for the “seeing spiders” case would be that immediate sensory processing is more immune to this than more abstract planning procedures for some reason.
Yeah, it would make sense for evolution to give the brain system that predicts sensory data a reward signal independent of the brain system that evaluates how well your day or life is going at the moment.
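Here's a toy sketch of that two-signal idea. Everything in it (the environment, the REINFORCE-style update rules, the probabilities) is invented for illustration, not a claim about actual neuroscience: a perceptual module trained on accuracy keeps recognizing spiders, while a policy module trained on how the trip felt learns the avoidance.

```python
import random

random.seed(0)

p_recognize = 0.5   # P(report "spider" | spider present): the perceptual module
p_camp = 0.5        # P(choose to go camping): the policy module
lr = 0.05

def clamp(p):
    return min(0.99, max(0.01, p))

for _ in range(2000):
    if random.random() >= p_camp:
        continue                      # stayed home, nothing to learn from
    spider_present = random.random() < 0.3
    recognized = spider_present and random.random() < p_recognize

    # Epistemic signal: REINFORCE-style update toward accurate perception,
    # applied only on trials where a spider was actually there.
    if spider_present:
        action = 1.0 if recognized else 0.0
        accuracy = 1.0 if recognized else -1.0
        p_recognize = clamp(p_recognize + lr * accuracy * (action - p_recognize))

    # Hedonic signal: seeing a spider feels bad, a spider-free trip is nice.
    hedonic = -1.0 if recognized else 0.1
    p_camp = clamp(p_camp + lr * hedonic * (1.0 - p_camp))

print(f"p_recognize={p_recognize:.2f}  p_camp={p_camp:.2f}")
# Typical result: p_recognize climbs toward 1 (perception stays honest),
# while p_camp falls (the policy learns the avoidance). If p_recognize
# were instead updated with the hedonic signal, it would collapse toward
# zero: the wireheading the question asks about.
```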
Your comment made me (vaguely) remember that Scott Alexander (maybe?) wondered something similar in his review of the book Surfing Uncertainty and/or his post Toward a Predictive Theory of Depression. (I haven't re-read the posts lately, so I'm not confident that they contain relevant information or speculation.)
Very short version in the title. A bit longer version at the end. Most of the question is context.
Long version / context:
This is something I vaguely remember reading (I think on ACX). I want to check whether I remember it correctly, and where I could learn about it in more technical detail.
Say you go camping in a desert. You wake up and notice something that might be a scary spider; you take a look and confirm that it is indeed a scary spider. This is bad, and you feel bad.
Since this is bad, you will be less likely to do the things that led to you feeling bad; for example, you'll be less likely to go camping in a desert.
But you probably won't learn to:
- not look closely at suspicious shapes, or
- not recognize the spider once you have looked,
even though those were much closer to you feeling bad (about being close to a scary spider).
This is a bit weird if you think that humans learn by just chasing reward: usually you'd expect the stuff that happened closer to the punishment to get punished more, not less.
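To spell out that intuition, here is a toy credit-assignment example, assuming a single shared punishment signal and an exponentially decaying eligibility trace (a standard TD(λ)-style device; the step names and numbers are invented):

```python
steps = ["go camping", "look at suspicious shape", "recognize spider"]
lam = 0.5          # trace decay per step back in time
punishment = -1.0  # the bad feeling at the end

# Each step's share of the blame: later steps have fresher traces.
for i, step in enumerate(steps):
    trace = lam ** (len(steps) - 1 - i)
    print(f"{step:26s} update = {punishment * trace:+.2f}")

# go camping                 update = -0.25
# look at suspicious shape   update = -0.50
# recognize spider           update = -1.00
```

Under one shared reward channel, the recognition step would be punished hardest, i.e., you'd expect to learn not to see spiders, which mostly isn't what happens.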
What I recall is that there is a different reward signal for "epistemic" tasks, based on the accuracy or salience of what gets recognized, not on whether it's positive or negative.
A bit longer version of the question:
Why don't humans learn to not recognize unpleasant things (at least, not too much)? Is there a different reward signal for some "epistemic" processes? Where could I learn more about this?