In my reading, it is the substance of the pop neuroscience itself, with or without handwringing. Hardly a week seems to go by without New Scientist or Scientific American running an article on how Neuroscience Has Shown that we are mere automata, consciousness does not exist, subjective experience is an illusion (a bit of a contradiction there, but this is pop science we're talking about), and there is no such thing as morality, agency, free will, empathy, or indeed any mental phenomenon at all. When these things are not declared non-existent outright, consciousness is asserted to be nothing more than froth on the waves of neuronal firing, morality an epiphenomenal confabulation papering over evolved programs of genetic self-interest, and motivation mere dopamine craving.
That is, the more some people understand (or think they understand) how people work, the less they tend to empathise with them -- or, presumably, with themselves. In the pop account, every explanation of a mental phenomenon amounts to explaining it away. (This is not unique to neuroscience; neuroscience just happens to be the current source of explanations.)
This is the standard narrative on Overcoming Bias, where you won't find any handwringing over it. Yvain's recent postings here (the ones I've said I mean to get back to, but won't have time for until mid-August at least) are, from my brief reading so far, along the same lines.
You've likely read more pop neuroscience than I have. It's elicited criticism from conservatives who fear that the fruits of cognitive science will be used to justify depredation and depravity, and eventually rob us of our humanity — or that this is already happening. Do you think they're right about that?
steven0461 (comment under "Preference For (Many) Future Worlds"):
Yvain ("Behaviorism: Beware Anthropomorphizing Humans"):
Eliezer ("Sympathetic Minds"):
So what if, the more we understand something, the less we tend to anthropomorphize it, and the less we empathize or sympathize with it? See this post for some possible examples. Or consider Yvain's blue-minimizing robot: at first we might empathize, or even sympathize, with its apparent goal of minimizing blue, at least until we understand that it's just a dumb program. We still sympathize with the predicament of the human-level side module inside that robot, but perhaps only until we come to understand it as something other than a "human-level intelligence"? Should we keep carrying forward behaviorism's program of de-anthropomorphizing humans, knowing that it might (or probably will) reduce our empathy and sympathy towards others?