Hanson seems to treat global civilization as a cultural melting pot, though he does distinguish insular subcultures from it. My intuition is that he sees contemporary cultures on a gradient relative to global, hegemonic trends (which correlate with technological progress and increasing wealth and education), and thereby subject to drifting pressures.
I wouldn't equate Robin's perspectives on culture with reactionary movements or conservatism. If anything, he seems quite open to radical transformations of society (e.g. futarchy to replace parliamentarism, bounty systems and vouching to replace policing, private insurance policies to replace welfare policies, etc.).
Whereas (neo-)reactionary / conservative thought often simply aims to restore some previous status quo, Robin does not profess to hold such views and has not proposed such solutions. In fact, as far as I'm aware, he hasn't proposed any solutions at all as of yet.
EDIT: (Mis-)interpreted your comment as saying Robin is pushing (neo-)reactionary ideas. I do agree that conservative and reactionary movements generally show interest in cultural drift as a phenomenon. However, if you are proposing that Robin's ideas themselves are not novel, I'd like to hear which ideas in particular you think have already been tackled for millennia, or on some other timescale.
Very good! I'm hoping to see - and weakly intend to commit to writing - a list of posts on his latest boom (fertility decline, which led him to culture). I attended one of Robin's Zoom meetings on culture, and I'm confident this fixation is on par with his other great ones thus far (prediction markets, signaling, ems and aliens), if not even bigger. Robin seems absolutely possessed by the phenomenon.
For those who do not follow him: Robin has begun seeing culture as broken/maladaptive, and he seems to think this is perhaps the key issue of our time, on par with or bigger than climate change and AI. He thinks cultural change is being driven in directions that will eventually lead to population decline and other nasty places, even though he remains optimistic about our species' future in the long run.
I interpreted Eliezer as writing from the assumption that the superintelligence(s) in question are in fact not already aligned to maximize whatever it is that humanity needs to survive, but are instead pursuing some other goal(s) that diverge from humanity's interests once implemented.
He explicitly states that the essay's point is to shoot down a clumsy counterargument (along the lines of "it wouldn't cost the ASI a lot to let us live, so we should assume they'd let us live"). So the context, as I interpret it, is that such requests, however sympathetic, have not been ingrained into the ASI's goals. Using a different example would mean he was discussing something different.
That is, "just because it would make a trivial difference from the ASI:s perspective to let humanity thrive, whereas it would make an existential difference from humanity's perspective, doesn't mean ASIs will let humanity thrive", assuming such conditions aren't already baked into their decision-making.
I think Eliezer spends so much time working from these premises because he believes 1) that an unaligned ASI is the default outcome of current developments, and 2) that all current attempts at alignment will necessarily fail.
My understanding goes along similar lines, so I'm not highly doubtful. If anything, I've had the impression that the risks of developmental disorders and miscarriage, difficulty getting pregnant, and some pregnancy-related issues might begin rising substantially well before one's 30s.
To me it seems that the overwhelming majority of children conceived even after 35 are healthy and fine: >99% without autism, >98% without chromosomal disorders. The risk of miscarriage is relevant. All things considered, I believe this evidence means people should likely not be too worried about whether they are already too old to have kids.
Whether having kids earlier might still be better, once the costs to one's career or business, etc., are accounted for, is another discussion, particularly when thinking of large numbers of people. However, AFAIK a lot of people already want to conceive while they are young, and I'm not sure whether people considering trying for kids can be significantly swayed one way or another by this evidence alone.
(comment edited: missed the link at first sight)
Thanks for the post. A layperson here: little to no technical knowledge, no high-g-mathematical-know-it-all-superpowers. I highly appreciate this forum and the abilities of the people writing here. Differences in opinion are likely due to me misunderstanding something.
As for examples or thought experiments on specific mechanisms behind humanity losing a war against an AI, or against several AIs cooperating, I often find them too specific or unnecessarily complicated. I understand the point is simply to show that a vast number of possible, and likely easy, ways exist to wipe out humanity (or to otherwise make sure humanity won't resist), but I'd still like to see more of the claimed simple, boring, mundane ways this could happen than this post includes. Such as:
Another example, including killer robots:
I think one successful example of pointing to AI risk without writing fiction was Eliezer musing on the possibility that AI systems might, through some process of self-improvement, end up behaving in unexpected ways such that they are still able to communicate with one another but no longer with humanity.
My point is that providing detailed examples of AIs exterminating humanity via nanobots, viruses, highly advanced psychological warfare, et cetera, might serve to further alienate those who do not already believe such systems would be able or willing to do so. I think pointing to the general vulnerabilities of the global human techno-industrial societies would suffice.
Let me emphasize that I don't think the examples provided in the post are necessarily unlikely to happen, or that what I've outlined above should somehow be more likely. I do think that global production as it exists today seems quite vulnerable to even relatively slight perturbations (such as a coronavirus pandemic or some wars being fought), and that simply nudging these vulnerabilities might suffice to quickly end any threat humanity could pose to an AI's goals. Such a nudge might also be possible, and even increasingly likely, given wide AI implementation, even without an agent-like Singleton.
A relative pro of focusing on such risks is the view that humanity does not need a godlike singleton to be existentially, catastrophically f-d, and that even relatively capable AGI systems severely risk putting an end to civilization without anything going foom. Such events might be even more likely than nanobots and paperclips, so to speak. Consistently emphasizing these aspects might convince more people to be wary of unrestricted AI development and implementation.
Edit: It's possibly relevant that I relate to Paul's views re: slow vs. fast takeoff, insofar as I find slow takeoff likely to happen before fast takeoff.
As for a specific group of people resistant to peer pressure: psychopaths. Psychopaths don't conform to peer pressure easily - or to any kind of pressure, for that matter. Many of them are in fact willing to murder, sit in jail, or otherwise become very ostracized if it aligns with whatever goals they have in mind. I'd wager that the fact that a large percentage of psychopaths literally end up jailed speaks for itself - they just don't mind the consequences that much.
This is easily explained by psychopaths being fearless and mostly lacking empathy. As far as I recall, some physiological correlates exist - psychopaths show a low cortisol response to stressors compared to normies. Beyond the apparent fact that they are indifferent to others' feelings, some brain-imaging data supports this as well.
What they might be more vulnerable to is the fact that peer pressure sometimes goes hand in hand with power and success. Psychopaths like power and success, and they might therefore play along with the rules to get more of what they want. That might look like caving in to peer pressure, but judging by how the pathology is currently understood, I'd still say it's not the pressure itself, but the benefits that come with succumbing to it.