Fer32dwt34r3dfsz

Upvoted on the basis of its clarity, its useful and mentoring tone, and the value of the suggestions. Thank you for coming back to this.

On a first-pass read, there is not much I would add, save to mention that I'd expect (1)-(4) to change from their current forms were they actually implemented in some capacity, given the complexities involved (jurisdictional resources, public desire, participation, etc.).

I have The Myth of the Rational Voter on my shelf, unread!

If I have any sufficiently useful or interesting ideas or comments regarding your remarks, I will add them here.

Agree. This post captures the fact that, time and again, AI benchmarks once perceived as insurmountable have been surpassed, and those not fully cognizant of the situation have been iteratively surprised. People will, for reasons I cannot fully work out, continue to engage in motivated reasoning against current and expected near-term AI capabilities and/or economic value, with part of the evidence-downplaying consisting of shifting the goalposts for AGI definitions or for the capability thresholds that would impress them (see moving goalposts). On a related note, your post also brings to mind the apologue of the boiling frog, as applied of late to scaling curves.

Answer by Fer32dwt34r3dfsz

Although this is an old question, I want to provide an answer, since this is a topic that I am interested in and believe matters for GCR and X-Risk reduction, though it seems quite plausible that this field will radically transform under different levels of AI capability.

First, if the author of this post has updated their beliefs about the field of decline and collapse or has formulated a resource list of their own, I would appreciate their remarking on these, so that I may engage with them.

Of note, I have not fully read some of these books and other resources, but I have minimally skimmed all of them. There are resources I am not including, since I feel they are not worth their opportunity cost in the reader's time.

The following are not put in priority order, but I have provided simple ratings of one to three ✯ indicating how valuable I believe the books are for thinking about collapse. Ratings come after titles so as not to prime the reader.

Books:

Web-Links:

Some further considerations

Anna's previous comment used the term "proto-model" and alluded to the relative dearth of formalization in this field. It is worth adding here that "this field" (which, at times, I have referred to as "cliodynamics", "studies in complex societies", "historical dynamics", "studies in collapse", and "civilizational dynamics") is a collection of different academic disciplines, each of which has a different level of quantitative rigor.

Many who have theorized about human societies and their rise and fall (even the notions of "rise" and "fall" are somewhat dubious) have seldom incorporated quantitative measures or models, though I have still found their work valuable.

The authors in the anthology How Worlds Collapse seem not to interact or collaborate much with those who study global catastrophic risk (e.g., those who would cite books such as X-Risk or Global Catastrophic Risks), which seems to be a loss for both fields. Those studying GCRs and/or X-Risks have more readily (or seemingly so) adopted models and mathematics, with Classifying Global Catastrophic Risks (2018) being a good canonical paper for the latter, while those in the field of collapse are typically more ready to consider patterns across historical complex societies and the psychological dynamics relevant to the recovery of complex societies under forces of calamity.

Anyway, best wishes in your studies of human societies and their dynamics, including their decline.


  1. This covers Toynbee, Spengler, Gobineau, and other figures in the field of "collapse" or "complex societies", including Peter Turchin. ↩︎

  2. From their site: Seshat: Global History Databank was founded in 2011 to bring together the most current and comprehensive body of knowledge about human history in one place. The huge potential of this knowledge for testing theories about political and economic development has been largely untapped. Our unique Databank systematically collects what is currently known about the social and political organization of human societies and how civilizations have evolved over time. This massive collection of historical information allows us and others to rigorously test different hypotheses about the rise and fall of large-scale societies across the globe and human history. ↩︎

I expect to post additional comments on this thread, but for now, w.r.t.

Sometimes the preferences people report or even try to demonstrate are better modeled as a political strategy and response to coercion, than as an honest report of intrinsic preferences.

has the author of this post read Private Truths, Public Lies: The Social Consequences of Preference Falsification (Kuran, 1997)? I've read the book but have not yet written a review of it, so I cannot comment too critically on its value in this present conversation, but I believe the author should minimally check it out or skim its table of contents. To pull a better overview (from Goodreads) than I can provide offhand:

Preference falsification, according to the economist Timur Kuran, is the act of misrepresenting one's wants under perceived social pressures. It happens frequently in everyday life, such as when we tell the host of a dinner party that we are enjoying the food when we actually find it bland. In Private Truths, Public Lies Kuran argues convincingly that the phenomenon not only is ubiquitous but has huge social and political consequences. Drawing on diverse intellectual traditions, including those rooted in economics, psychology, sociology, and political science, Kuran provides a unified theory of how preference falsification shapes collective decisions, orients structural change, sustains social stability, distorts human knowledge, and conceals political possibilities.

A common effect of preference falsification is the preservation of widely disliked structures. Another is the conferment of an aura of stability on structures vulnerable to sudden collapse. When the support of a policy, tradition, or regime is largely contrived, a minor event may activate a bandwagon that generates massive yet unanticipated change.

In distorting public opinion, preference falsification also corrupts public discourse and, hence, human knowledge. So structures held in place by preference falsification may, if the condition lasts long enough, achieve increasingly genuine acceptance. The book demonstrates how human knowledge and social structures co-evolve in complex and imperfectly predictable ways, without any guarantee of social efficiency.

Private Truths, Public Lies uses its theoretical argument to illuminate an array of puzzling social phenomena. They include the unexpected fall of communism, the paucity, until recently, of open opposition to affirmative action in the United States, and the durability of the beliefs that have sustained India's caste system.

Thank you for the typo-linting.

To provide a better response to your first question than the one I’ve provided below, I would need to ask him to explain more than he has already.

From what he has told me, the first several meetings were very stressful (as they would be, of course, for most people!), but he soon adjusted and developed a routine for his meetings.

While the routine could go off course depending on the responsiveness of the individual(s) present (one staffer kept nodding yes, had no questions, and then 20 minutes later remarked that they would "take into account" what had been said; another staffer remarked that the US simply needed to innovate in AI as much as possible, and that safety efforts that stifled this were not to be prioritized; these statements are paraphrased from my friend), I get the sense that in most instances he has been able to first provide adequate context on his organization and on the broader situation with AI (I am not sure which of these two comes first, or for how long each is discussed).

Concerning the description of the AI context, I am not sure how dynamic it is; I think he mentioned querying the staffers on their familiarity, and his impression was that most staffers listened well and thought critically about his remarks.

After the aforementioned descriptions, my friend begins discussing measures that can be taken in support of AI Safety.

He mentioned that he tries to steer framings away from those invoking thoughts of arms races or weaponization and instead focuses on uncontrollability and "race to the bottom" scenarios, since the former framings, when given to staffers by others, have in his experience in some instances downplayed concerns about catastrophe and increased the focus on further expanding AI capabilities to "outcompete China".

My friend's strategic framings seem appropriate and he is a good orator, but I did not have the nuanced suggestions I wanted for my conversation with him, as I've not thought enough about which AI risk framings and safety proposals to have ready for staffers, and as I believe talking to staffers qualifies as an instance of the proverb "an ounce of practice outweighs a pound of precept".

Constraining and/or lowering capabilities gains (via bans, information concealment, raised expenses, etc.) through regulation of certain AI research and production components (weights, chips, electricity, code, etc.) is a strategy pursued, in part or in full, by different AI Safety organizations.

One friend (who works in this space) and I were very recently reflecting on AI progress, along with strategies to contend with AI-related catastrophes. While we disagree on the success probabilities of different AI Safety plans and their facets, including those pertaining to policy and governance, we broadly support similar measures. He does, however, believe "shut down" strategies ought to be prioritized much more than I do.

This friend has, in the last year, met with between 10 and 50 (providing this range to preserve anonymity) congressional staffers; his stories from these meetings could make for an entertaining and informative short book, and I was grateful for the experiences he shared and the details he imparted to me on how he prepares for conversations and frames AI.

The staffers' familiarity with AI risk was concentrated on weaponization; most staffers (save for 3) did not have much, if any, sense of AI catastrophe. This point is interesting, but I found how my friend perceives his role in AI Safety through these meetings more intriguing.

As a prelude: both he and I believe that, generally speaking, individual humans and governments (including the US government) require some catastrophe (the more spontaneous, the more impactful) to engender productive responses. Examples of this include near-death experiences (for people) and the 11 September 2001 attacks (for the US government).

With this remark made: my friend perceives his role primarily as one of priming the staffers, i.e. "the US government", to respond more effectively to catastrophe (e.g. hundreds of thousands to millions, but not billions, dead) than they otherwise would have been able to.

Any immediate actions taken by the staffers towards AI Safety, especially with respect to a full cessation of certain lines of research, of access to computational resources, or of information availability, my friend finds excellent; but, given the improbability of these occurring, he believes the brunt of his impact comes to fruition if there is an AI catastrophe that humans can recover from.

This updated how I perceive the "slow down"-focused crowd in AI Safety: from a crowd focused on literally having many aspects of AI progress stalled, partially or fully, to one focused on enhancing governmental and institutional responses in fire-alarm moments.

Kudos to the authors for this nice snapshot of the field; also, I find the Editorial useful.

Beyond particular thoughts for the entries (which I might get to later, and with the understanding that quantity is being optimized for over quality), one general thought I had was: how can this be made into more of a "living document"?

This helps

If we missed you or got something wrong, please comment, we will edit.

but seems a less effective workflow than it could be. I was thinking more of a GitHub README where individuals can submit PRs to modify their entries or to add their missing entries to the compendium. I imagine most in this space have GitHub accounts, and with the git tracking there could be "progress" visualization tools (in quotes, since more people working doesn't necessarily translate to more useful outcomes).

The spreadsheet works well internally but does not seem as visible as a public repository would be. Forgive me if there is already a repository and I missed it. There are likely other "solutions" I am missing, but regardless, thank you for the work you've contributed to this space.
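To sketch the kind of "progress" visualization I have in mind, here is a minimal example, assuming (hypothetically) that the compendium lived in a public git repository and that the script were run from inside a local clone; the repository and workflow are assumptions of mine, not something the authors have set up:

```python
# Hypothetical sketch: count commits per contributor in a local clone of the
# (assumed) compendium repository, as a crude proxy for "progress".
# Requires git to be installed; run from inside the repository.
import subprocess
from collections import Counter

def commits_per_author(repo_path: str = ".") -> Counter:
    """Return a Counter mapping each commit author's name to their commit count."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    )
    return Counter(line for line in log.stdout.splitlines() if line)

if __name__ == "__main__":
    for author, count in commits_per_author().most_common():
        print(f"{count:5d}  {author}")
```

A real version would more likely track which entries were added or revised rather than raw commit counts, per the caveat above that more activity doesn't necessarily mean more useful outcomes.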

The below captures some of my thoughts on the "Jobs hits first" scenario.

There is probably already something like the breakdown I have in mind w.r.t. "Jobs hits first" (though, whatever it may be, I have not yet encountered it), but here goes: the landscape of AI-induced unemployment seems highly heterogeneous, at least in expectation over the next 5 years.

For some jobs, it seems likely that there will be instances of (1) partial automation, which could mean either that existing workers are repurposed rather than let go (even if their new tasks no longer fully resemble their previous ones), or that most workers remain employed but do more labor with help from AI, and of (2) prevalent usage of AI across the occupational landscape without much unemployment, with human work (in the same pre-automation roles) still being sought after even if the human employment carries higher relative costs (basically, the situation where human work + AI work > AI work alone, after considering all costs).

Updating on this more nuanced (but not critically evaluated) picture of "Jobs hits first", I would not expect the demand for a halt to job automation to be any smaller, but a protracted struggle over which jobs get automated might be less coordinated if swaths of the working population are still holding out career-hope, on the basis that their careers have not been fully stripped away, having instead perhaps been repurposed or compensated less conditional on the automation.
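To state the parenthetical condition in (2) slightly more explicitly (the notation here is my own, and I ignore AI costs common to both arrangements): let $V_{\text{H+AI}}$ be the value produced by humans working alongside AI, $V_{\text{AI}}$ the value produced by AI alone, and $C_{\text{H}}$ the cost of the human employment. Human work in the same roles remains worth retaining roughly when

$$V_{\text{H+AI}} - C_{\text{H}} > V_{\text{AI}}, \quad\text{i.e.,}\quad V_{\text{H+AI}} - V_{\text{AI}} > C_{\text{H}}.$$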

The phrasing

...a protracted struggle...

nevertheless seems a fitting description for the near-term future of employment as it pertains to the incorporation of AI into work.

I decided to run the same question through the latest models to gauge their improvements.

I am not exactly sure there is much advantage at all in your having done this, but I feel inclined to say Thank You for persisting in persuading your cousin to at least consider concerns regarding AI, even if he perceptually filters those concerns to be mostly about job automation rather than others, such as a global catastrophe.

In my own life, over the last several years, I have found it difficult to persuade those close to me to really consider concerns from AI.

I thought that capabilities advancing observably before them might spur them to think more about their own future and about how they might behave and/or live differently conditional on different AI capabilities, but this has been of little avail.

Expanding capabilities seem to dissolve skepticism best, but conversations seem to have had less of an effect than I would have expected. I've not thought or acted as much as I want to on how to coordinate more of humanity around decision-making regarding AI (or the consequences of AI), partially since I do not have a concrete notion of where to steer humanity, or a justification for where to steer, even if I knew it was highly likely that I was actually contributing to the steering through my actions.
