After reading about Trump's actions w.r.t. Greenland, I'm updating further away from
and further in favor of both
I'd like to find more/better sources of evidence about "what is the US executive branch optimizing for?"; curious to hear suggestions.
(Also, to Americans: How high/low salience is the issue in the US? Also: curious to read your analysis of your chief executive's behavior.)
The US under Trump 2.0 has a new national security concept which understands the world in terms of great powers and their regions of influence. The USA's region is the entire western hemisphere and that's where Greenland is (along with Canada and Venezuela), and the new America will not allow anything in the western hemisphere to be governed from outside the hemisphere. Instead they want to use Greenland however they see fit, e.g. as a base for continental missile defense.
They do not say this openly, but I believe the European Union is regarded not as a great power but as a construct of America's erstwhile liberal empire. The implication is that the nations of Europe will individually end up as satellites of one great power or another (e.g. China, Russia, post-liberal America, or an emergent indigenous European power), or perhaps as non-aligned.
This insouciant territorial claim on Greenland is the flipside of the way in which America is reevaluating its relationship with all other nations on a bilateral basis. Countries which were used to being treated as equals and partners, at least publicly, now find themselves just another entry in a list of new tariffs, and the target of impulsive hostile declarations by Trump and his allies like Vance and Musk.
This does imply that insofar as the norms of the "rules-based order" depended on American backing to have any effect, they are headed for an irrelevance similar to that of the League of Nations in the 1930s. Anything in international relations that depends on America or routes through America will be shaped by mercurial mercantile realpolitik, or whatever the new principles are.
The one complication in this picture is that liberalism still has a big domestic constituency in America, and has a chance of ruling the country again. If the liberals regain power in America, they will be able to rebuild ties with liberals in Canada, the EU, and elsewhere, and reconstitute a version of liberal internationalism at least among themselves, if not for the whole globe.
Interesting. Thanks. How did you arrive at the above picture? Any sources of information you'd recommend in particular?
I read and watch a lot of political content (too much), and I participate in forums on both sides of American politics. That's the closest I can give to a method. I also have a sporadic geopolitics blog.
Qualitatively, discussions re: Greenland look a lot like the discussions re: North Korea back in his first term. People thought he had lost his mind and was going to start a nuclear war, but tensions actually ended up calming down - arguably more than usual - after the initial surge.
Partisanship aside, and whether or not people like it or consider it the optimal way to get what he wants, this just looks to be how he negotiates. If you're looking for reassurance, I've seen this news cycle quite a few times before during a Trump presidency, and things tended to turn out alright the other times.
A potentially somewhat important thing which I haven't seen discussed:
(This looks like a "decisionmaker is not the beneficiary"-type of situation.)
Why does that matter?
It has implications for how we model decisionmakers, interpret their words, and interact with them.[1]
If we are in a gradual-takeoff world[2], then we should perhaps not be too surprised to see the wealthy and powerful push for AI-related policies that make them more wealthy and powerful, while a majority of humans become disempowered and starve to death (or live in destitution, or get put down with viruses or robotic armies, or whatever). (OTOH, I'm not sure if that possibility can be planned/prepared for, so maybe that's irrelevant, actually?)
For example: we should perhaps not expect decisionmakers to take risks from AI seriously until they realize those risks include a high probability of "I, personally, will die". As another example: when people like JD Vance output rhetoric like "[AI] is not going to replace human beings. It will never replace human beings", we should perhaps not just infer that "Vance does not believe in AGI", but instead also assign some probability to hypotheses like "Vance thinks AGI will in fact replace lots of human beings, just not him personally; and he maybe does not believe in ASI, or imagines he will be able to control ASI".
Here I'll define "gradual takeoff" very loosely as "a world in which there is a >1 year window during which it is possible to replace >90% of human labor, before the first ASI comes into existence".
People who have a lot of political power or own a lot of capital are unlikely to be adversely affected if (say) 90% of human labor becomes obsolete and replaced by AI.
That's certainly the hope of the powerful. It's unclear whether there is a tipping point where the 90% decide not to respect the on-paper ownership of capital.
so long as property rights are enforced, and humans retain a monopoly on decisionmaking/political power, such people are not-unlikely to benefit from the economic boost that such automation would bring.
Don't use passive voice for this. Who is enforcing which rights, and how well can they maintain that control? This is a HUGE variable that's hard to control in large-scale social changes.
It's unclear whether there is a tipping point where [...]
Yes. Also unclear whether the 90% could coordinate to take any effective action, or whether any effective action would be available to them. (Might be hard to coordinate when AIs control/influence the information landscape; might be hard to rise up against e.g. robotic law enforcement or bioweapons.)
Don't use passive voice for this. [...]
Good point! I guess one way to frame that would be as
by what kind of process do the humans in law enforcement, military, and intelligence agencies get replaced by AIs? Who/what is in effective control of those systems (or their successors) at various points in time?
And yeah, that seems very difficult to predict or reliably control. OTOH, if someone were to gain control of the AIs (possibly even copies of a single model?) that are running all the systems, that might make centralized control easier? </wild, probably-useless speculation>