Did you ever try Circling? I wonder whether there's a conversational context that's very "get to the interesting stuff" which would work better for you. (Or, even if it's boring, it might be because it's foregrounding relational aspects of the conversation which are much less central for you than they are for most people.)
E.g., why did folk write AI 2027? Did they honestly think the timeline was that short?
Isn't it more like "I think there's a 10% chance of transformative AI by 2027, and that is like 100x higher than what it looks like most people think, so people really need to think thru that timeline"?
Like, I generally put my median year at 2030-2032; if we make it to 2028, the situation will still feel like "oh jeez, we probably only have a few years left", unless we make it to 2028 thru a mechanism that clearly blocks transformative AI showing up in 2032. (Like, a lot is hinging on what "feels basically like today" means.)
Done, we'll see how it goes.
IMO the real story here for 'how' is "the book is persuasive to a general audience." (People have made claims about the Overton window shifting--and I don't think there's 0 of that--but my guess is the book would have gotten roughly the same blurbs in 2021, and maybe even 2016.)
But the social story for how is that I grew up in the DC area, and one of my childhood friends is the son of an economist who is not a household name but is prominent enough that all of the household-name economists know him. (This is an interesting position to be in--I feel like I'm sort of in this boat with the rationality community.) We play board games online every week, and so when the blurb hunt started I got him an advance copy, and he was hooked enough to recommend it to Ben (and others, I think).
(I share this story in part because I think it's cool, but also because I think lots of people are ~2 hops away from some cool people and could discover this with a bit of thought and effort.)
I've suggested a pathway or two for this; if you have independent pathways, please try them / coordinate with Rob Bensinger about trying them.
I think Anthropic leadership should feel free to propose a plan to do something that is not "ship SOTA tech like every other lab". In the absence of such a plan, seems like "stop shipping SOTA tech" is the obvious alternative plan.
Note that Anthropic, for the early years, did have a plan to not ship SOTA tech like every other lab, and changed their minds. (Maybe they needed the revenue to get the investment to keep up; maybe they needed the data for training; maybe they thought the first mover effects would be large and getting lots of enterprise clients or w/e was a critical step in some of their mid-game plans.) But I think many plans here fail once considered in enough detail.
I think more than this, when you look at the labs you will often see that the breakthru work was done by a small handful of people or a small team, whose direction was not popular before their success. If just those people had decided to retire to the tropics, and everyone else had stayed, I think that would have made a huge difference to the trajectory. (What would it have looked like if Alec Radford had decided not to pursue GPT? Maybe the idea was 'obvious' and someone else would have gotten it a month later, but I don't think so.)
I do think there's a Virtue of Silence problem here.
Like--I was an ML expert who, roughly ten years ago, decided to not advance capabilities and instead work on safety-related things, and who, when the returns to that seemed too dismal, stopped doing that also. How much did my 'unilateral stopping' change things? It's really hard to estimate the counterfactual of how much I would have actually shifted progress; on the capabilities front I had several 'good ideas' years early, but maybe my execution would've sucked, or I would've been focused on my bad ideas instead. (Or maybe me being at the OpenAI lunch table and asking people good questions would have sped the company up by 2%, or w/e, independent of my direct work.)
How many people are there like me? Also not obvious, but probably not that many. (I would guess most of them ended up in the MIRI orbit and I know them, but maybe there are lurkers--one of my friends in SF works for generic tech companies but is highly suspicious of working for AI companies, for reasons roughly downstream of MIRI, and there might easily be hundreds of people in that boat. But maybe the AI companies would only actually have wanted to hire ten of them, and the others' objections to AI work didn't actually matter.)
I only like the first one more than the current cover, and even then not by all that much. I do think this is the sort of thing that's relatively easy to focus-group / get data on, and the right strategy is probably something that appeals to airport book buyers instead of LessWrongers.
I think I have a somewhat different diagnosis.
For example, take 'property rights'. As a category, this mixes together lots of liberal and illiberal things: houses, hammers, and taxi medallions are all 'property', but the first two are productive capital and the last is a pretty different form of capital. I'd go so far as to say NIMBYism is mostly downstream of an expansive view of property rights--my ownership of my house covers not just the volume and the physical objects on it, but also more indirect things like the noises and smells that impinge on it and the view out from it.
I think the core problem for classical liberalism in the 2020s is something like "figuring out a modern theory of regulation". That is, increased population density has increased the indirect costs of action (more people now see and are inconvenienced by your ugly building), and increased economic sophistication has increased a bunch of regulatory burdens (more complicated varieties of products require more complicated regulations), but the main answers for how to deal with this have come from anti-liberals. Like, consider Wolf Ladejinsky, who helped influence land reform in Asia because he understood that the popularity of communism came from (largely correct!) hatred of landlords, and that free enterprise also does not like landlords strangling the economy. I think the returns to figuring out things like this are pretty high, and am moderately optimistic about 'abundance' types managing to do a similar thing, but I think there's still lots of fertile ground here.