I believe two things about rulers (politicians, CEOs of big orgs):

  1. They give others only as much freedom as is necessary for them to be useful in achieving the ruler's goals
  2. They don't want actors more powerful than themselves anywhere nearby

From these I intuit that:

  1. Rulers will not support the development of powerful AGI, as it might threaten to overpower them
  2. Rulers might get rid of humans as soon as an AI can achieve their goals more efficiently (and that is a much lower bar for an AI's intelligence and power than what is needed to overpower the ruler)

Thus my immediate fears are not so much about aligning super-human AGI as about aligning rulers with the needs of their constituents. For example: a future in which we never get smarter-than-human AIs, yet something only a bit more powerful than Office365 Copilot is sufficient for a CEO (or the real stakeholders behind them) to run the whole company, or for an autocratic president to run enough of the industry to supply her with a yacht and some caviar.

Question: is either of my two assumptions, either of my two intuitions, or the conclusion wrong?

What are some falsifiable, observable predictions they make that I could verify using the internet today?

2 Answers

Anon User

Rulers will not support the development of powerful AGI, as it might threaten to overpower them

is probably true, but only because you used the word "powerful" rather than "capable". Rulers would definitely want the development of capable AGIs as long as they believe (however incorrectly) in their ability to maintain power/control over those AGIs.

In fact, rulers are likely to be particularly good at cultivating capable underlings over whom they maintain firm control. This may cause them to overestimate their ability to do the same for AGI. Moreover, if they expect an AGI to be less agentic, they might expect a "we just program it to obey" AGI to actually be easier to maintain control over, and prefer it to what they perceive as inherently less predictable humans.

qbolec

ChatGPT's answer:
 

Your two assumptions and intuitions are plausible, but they may not hold true in every case. It is important to consider the specific context and motivations of individual rulers when making predictions about their behavior.

Regarding your first intuition, it is possible that some rulers may support the development of powerful AGI if they see it as a means to achieve their goals more efficiently. However, they may also take precautions to ensure that the AGI is under their control and cannot threaten their power.

Regarding your second intuition, rulers may not necessarily get rid of humans as soon as AI can achieve goals more efficiently. Human labor can still be cheaper or more reliable in certain contexts, and rulers may also want to maintain the appearance of caring about human welfare for political reasons.

To make your intuitions more testable, you could formulate specific hypotheses and predictions that can be observed or measured. For example, you could predict that rulers of authoritarian regimes will be more likely to suppress the development of powerful AGI than rulers of democracies, or that companies with highly centralized decision-making structures will be more resistant to AI-driven automation than companies with more decentralized structures. You could then collect data to test these hypotheses, such as by analyzing government policies or conducting surveys of corporate leaders.

(I am a bit worried by this, given that China seems to restrict AIs more than the US does...)
 

Me: Thanks, the idea of checking what authoritarian regimes are doing today was very helpful. Can you suggest more operationalizations of my theses?

GPT:
Sure, here are some possible operationalizations of your intuitions:

  1. Operationalizing your first intuition:
  • Hypothesis: Rulers will be more likely to suppress the development of powerful AGI if they perceive it as a threat to their power.
  • Prediction: Countries with more authoritarian governments will have fewer AI research projects and investments than countries with more democratic governments.
  • Operationalization: Use publicly available data on AI research funding and publications by country, as well as measures of political freedom and democracy, to test the relationship between government type and AI development.
  2. Operationalizing your second intuition:
  • Hypothesis: Rulers will be more likely to replace human workers with AI when it is cost-effective and does not threaten their power.
  • Prediction: Companies with highly centralized decision-making structures will be slower to adopt AI-driven automation than companies with more decentralized structures.
  • Operationalization: Survey companies to collect data on their decision-making structures and level of AI adoption, and test the relationship between these variables using statistical analysis.
  3. Operationalizing your conclusion:
  • Hypothesis: Rulers will prioritize their own interests over the well-being of their constituents.
  • Prediction: Countries with high levels of corruption will be less likely to invest in social welfare programs and more likely to invest in projects that benefit the ruling elite.
  • Operationalization: Use measures of corruption and social welfare spending, as well as data on government contracts and investments, to test the relationship between corruption and government priorities.
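
To make the first of these concrete, here is a minimal sketch of the statistical test it calls for. It assumes I first assemble a per-country table by hand; the file name ai_vs_democracy.csv and both column names are made up for illustration (real sources could be the EIU Democracy Index and annual AI publication counts normalized by population):

```python
# Minimal sketch: does AI research output correlate with how democratic a country is?
# Assumes a hand-assembled CSV (hypothetical name and columns) with one row per country.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("ai_vs_democracy.csv")  # columns: democracy_index, ai_publications_per_capita

# Spearman rank correlation: assumes only a monotone relationship, not a linear one.
rho, p_value = spearmanr(df["democracy_index"], df["ai_publications_per_capita"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# The prediction above expects rho > 0 (more democratic -> more AI research output);
# a clearly negative rho would count as evidence against it.
```

The second and third operationalizations could be tested the same way, swapping in a centralization measure plus an AI-adoption score per company, or a corruption index plus social-welfare spending per country.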

I like how ChatGPT can help in operationalizing fuzzy intuitions. I feel an eerie risk that it makes me think even less, and less carefully, and defer to the AI's wisdom more and more... it's very tempting... as if I had found an adult I can cede control to.

2 comments

rules != rulers btw

Thanks, fixed. I guess this is not why it got -18 votes, though. I would like to hear what exactly people didn't like about this post.