The question I would ask is whether the marginal utility of a dollar spent convincing this long tail of people is actually worth it compared to other uses of the money. It's obvious that some of us needed no convincing at all; we were just desperate for a vaccine. Other people were OK with getting vaccinated if it was easy or convenient. Some must have been on the fence or mildly against it, but could be convinced with some nudging (see vaccine lotteries and other interventions). That leaves a minority that is stubbornly against getting vaccinated. Let's say it's 10% of the population, and perhaps you could convince one in five of those people at some cost. Now you have 8% of the population who won't get vaccinated, and you spent a significant amount of money and resources going from 10% down to 8%. Are the public health gains significant enough to warrant the expenditure? Is there anything else with more impact that you could have done instead?
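To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch; every number in it (population, spend, conversion rate) is a hypothetical placeholder, not real campaign data:

```python
# Back-of-the-envelope cost-effectiveness of a persuasion campaign.
# Every number here is a hypothetical placeholder, not real data.

population = 1_000_000
holdout_rate = 0.10        # 10% stubbornly against vaccination
conversion_rate = 0.20     # assume one in five holdouts can be convinced
campaign_cost = 5_000_000  # assumed total spend, in dollars

holdouts = population * holdout_rate
converted = holdouts * conversion_rate                 # 20,000 people
remaining_rate = holdout_rate * (1 - conversion_rate)  # 8% still unvaccinated

print(f"Unvaccinated: {holdout_rate:.0%} -> {remaining_rate:.0%}")
print(f"Cost per additional vaccination: ${campaign_cost / converted:,.2f}")
```

Under these made-up numbers the campaign works out to $250 per additional vaccination, and the question becomes whether that beats spending the same money elsewhere.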
This articulates something I have been thinking about for a while: people who are extremely strong on the cognitive side often fail to thrive in certain domains because they cannot compensate for their weaknesses in the other dimensions. We have decent metrics for cognition (standardized testing, for example), but I have not seen much for the others. It seems logical that if I am attempting to build mental strength, I would want to measure progress the way I would if I were building physical strength. I wonder how one would go about designing reliable metrics and tests for emotional and behavioral strength.
I noticed that this book just came out and might be relevant. I will probably read it at some point and report back.
https://www.amazon.com/Voltage-Effect-Ideas-Great-Scale/dp/B0979KTYJR/
"Humans aren't utility maximizers, but we think of ourselves as them"
What makes you believe this? I wouldn't assume that most people think that way. In order to maximize utility you first have to define a utility function, which is impossible for most of us. I find that I have a fuzzy list of wishes to satisfy, with unclear priorities that shift over time. I imagine that if a rational entity were to try to make sense of other entities that appear similar, it might make an assertion like yours. But what if it turns out that the rest of the entities operate with a much lower ratio of rational to non-rational ("System 1", if you will) function during a given time period? It could be that other people are not attempting to maximize anything most of the time. Perhaps once in a while they sit down and reason about a particular goal, and the rest of the time they delegate to more basic systems.
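As a toy illustration of what "defining a utility function" would even require, here is a sketch; the categories and weights are invented for the example, which is exactly the problem: real preferences don't hold still long enough to be written down like this:

```python
# Toy utility function: a weighted sum over outcome categories.
# The categories and weights are invented; the point is that writing
# this down at all presumes fixed priorities that most of us don't have.

weights = {"health": 0.4, "leisure": 0.3, "income": 0.2, "status": 0.1}

def utility(outcome: dict[str, float]) -> float:
    """Score an outcome (values in [0, 1]) against the fixed weights."""
    return sum(weights[k] * outcome.get(k, 0.0) for k in weights)

# A "rational" agent would pick the action whose outcome maximizes this.
# Real priorities shift with mood, context, and time, so no single set
# of weights describes a person for long.
print(f"{utility({'health': 0.9, 'leisure': 0.2, 'income': 0.5, 'status': 0.1}):.2f}")
```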
I am interested in books on this topic as well. However, the answer to the original question does not seem that mysterious to me. Most evolutionary psychologists describe the mechanisms of cooperation that made it possible for humans to grow increasingly large organizations (tribes, city-states, corporations, nation-states) aligned behind some commonality. The forces that make people agree to cooperate don't seem to matter that much. Once there is a social contract in place and a hierarchy of leadership, people align themselves behind objectives that seem important. It seems natural that groups of people would build stadiums, churches, political parties, trade mechanisms, armies, factories, roads, etc. I would ask a complementary question: what would you need to disrupt or remove in order to make it very hard for humans to build things at scale?
I threw out 10% as an example of a segment that you might convince with some intervention. By "long tail" I don't mean a small number of people; a long tail can be 50% of a distribution. I am using the term to refer to the reasons they don't get vaccinated. The post mentions 34 distinct responses, so if one were to optimize for impact, the idea would be to identify the most "nudgeable" class, evaluate the cost/benefit of the nudge, and so on (a rough sketch of that exercise follows below). Sorry if I wasn't clear enough in my original comment.
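If one did want to optimize for impact, the exercise might look something like this sketch; the classes, shares, conversion rates, and costs are all invented for illustration:

```python
# Sketch: rank "nudgeable" classes of non-vaccination by expected
# conversions per dollar. Classes, shares, rates, and costs are invented.

classes = [
    # (reason, share of holdouts, est. conversion rate, est. cost per person)
    ("access / convenience", 0.30, 0.60, 20.0),
    ("waiting to see",       0.15, 0.40, 30.0),
    ("needle fear",          0.15, 0.30, 50.0),
    ("distrust",             0.40, 0.05, 200.0),
]

def conversions_per_dollar(share: float, rate: float, cost: float) -> float:
    # Expected fraction of all holdouts converted, per dollar spent per person.
    return share * rate / cost

for reason, share, rate, cost in sorted(
    classes, key=lambda c: conversions_per_dollar(*c[1:]), reverse=True
):
    print(f"{reason:22s}{conversions_per_dollar(share, rate, cost):.5f} per $")
```

With numbers like these, the cheap, high-conversion classes dominate the ranking, which is the intuition behind spending on convenience before spending on persuasion.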