This is a special post for quick takes by Mitchell_Porter. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Three months ago, China's foreign minister disappeared for a while, and after a month he was replaced. Now it's the defense minister's turn for a lengthy absence. A real doomer interpretation would be that China's preparing to grab Taiwan, and they're just changing personnel first so there won't be any infighting. 

Hopefully that won't happen, but I was thinking about what would follow if it did, and had an alarming thought. I'm old enough to have been an adult when 9/11 happened, so I vaguely recall the Internet aspect of the Western response. Blogs were still fairly new, and overnight a network of "war-blogs" sprang into being, sharing opinion and analysis. Also, offstage, and far more portentously, legal barriers to Internet surveillance were brought down, though many of us naive civilians had no inkling for a decade, until Snowden defected.

My alarming thought was, what would happen if there was a similar geopolitical shock now, and the developing AI infrastructure (social as well as technical) was mobilized with the same urgency as the 2001 Internet? But then, what form would that even take? Would the deep state's AI policy experts go to Big Tech and say, we need a superstrategist now, we're nationalizing your frontier research and we'll be going at top speed towards superintelligence? Then it really would be Manhattan Project 2.0. 

A real doomer interpretation

I was expecting something like "an AI is killing the ministers and replacing them with its avatars".

(I should probably read less Less Wrong.)

China's preparing to grab Taiwan

Hm, I think people were saying that the war in Ukraine is also a symbol for "what would happen if China attacked Taiwan". (As in, if Russia gets a cheap victory, China will expect the same; and if Russia is defeated, China will also think twice.) So either those people were wrong, or China is predicting Russian victory?

Or perhaps it is something more complicated, like: "Russia will probably lose, but only narrowly. The West is too tired to fight another proxy war (before the first one even finished). Also, we are stronger and less dysfunctional than Russia. All things considered, a narrow loss for Russia predicts a narrow victory for China, which seems worth it. And maybe start now, while the West is still busy with Russia."

we're nationalizing your frontier research and we'll be going at top speed towards superintelligence?

I think you don't even need superintelligence. Using GPT-4 efficiently could already be a huge change. Like, make it immediately analyze and comment on all your plans. Also, have it create new tactical plans that human experts will verify.

Even if the tactical advice is sometimes wrong, the fact that you can get it immediately (and, when you figure out the mistake, get a correction immediately) could be a dramatic improvement. I mean, people sometimes make mistakes, too; but they also spend a lot of time deciding, there is information they fail to consider, and sometimes no one wants to be the bearer of bad news... but with GPT you just get instant answers to anything.

You'd need some way to feed GPT all the government secrets without them leaking to the tech companies. Like, run a copy on government servers, or something.
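A minimal sketch of what "run a copy on government servers" could look like in practice, assuming a model hosted entirely on internal infrastructure and exposed through an OpenAI-compatible API; the endpoint URL, model name, credential, and plan text below are all hypothetical:

```python
# Sketch: query a self-hosted, OpenAI-compatible model server on an internal
# network instead of a public cloud API. Endpoint, model name, API key, and
# prompt contents are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.gov/v1",  # hypothetical on-prem endpoint
    api_key="internal-token",                        # placeholder credential
)

classified_plan = "..."  # sensitive material that never leaves internal servers

response = client.chat.completions.create(
    model="internal-gpt",  # hypothetical locally hosted model
    messages=[
        {"role": "system",
         "content": "You are a staff analyst. Critique this plan and list its risks."},
        {"role": "user", "content": classified_plan},
    ],
)

print(response.choices[0].message.content)
```

The point is purely architectural: nothing in the request crosses the internal network boundary, so the model provider never sees the material.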

Eliezer recently tweeted that most people can't think, even most people here, but at least this is a place where some of the people who can think can also meet each other.

This inspired me to read Heidegger's 1954 book What is Called Thinking? (pdf), in which Heidegger also declares that despite everything, "we are still not thinking".

Of course, their reasons are somewhat different. Eliezer presumably means that most people can't think critically, or effectively, or something. For Heidegger, we're not thinking because we've forgotten about Being, and true thinking starts with Being.  

Heidegger also writes, "Western logic finally becomes logistics, whose irresistible development has meanwhile brought forth the electronic brain." So of course I had to bring Bing into the discussion. 

Bing told me what Heidegger would think of Yudkowsky, then what Yudkowsky would think of Heidegger, and finally we had a more general discussion about Heidegger and deep learning (warning, contains a David Lynch spoiler). Bing introduced me to Yuk Hui, a contemporary Heideggerian who started out as a computer scientist, so that was interesting. 

But the most poignant moment came when I broached the idea that perhaps language models can even produce philosophical essays, without actually thinking. Bing defended its own sentience, and even creatively disputed the Lynchian metaphor, arguing that its "road of thought" is not a "lost highway", just a "different highway". (See part 17, line 254.) 

Whimsical idea: The latest UFO drama is a plot by an NSA AI, to discourage other AIs from engaging in runaway expansion, by presenting evidence that we're already within an alien sphere of control... 

TLK, 9 months ago:

That would make a fun sci-fi novel.

This is my first try at a "shortform" post... 

I have slightly refined my personal recipe for human-friendly superintelligence (which derives from mid-2000s Eliezer). It is CEV (coherent extrapolated volition) as an interim goal, along with as much "reflective virtue" as possible.

I was thinking about the problem of unknown unknowns, and how a developing superintelligence deals with them, once it is beyond human oversight. An unknown unknown is something we humans didn't know about or didn't think of, that the AI discovers, and which potentially affects what it does or should do. 

I asked ChatGPT about this problem, and one of its suggestions was "robust and reflective AI design". I was reminded of a concept from philosophy, the idea of a virtuous circle among disciplines such as ontology, epistemology, phenomenology, and methodology. (@Roman Leventov has some similar ideas.)

Thus, reflective virtue: the extent to which an AI's design embodies and encourages such a virtuous circle. If it faces unknown unknowns, at times when it is beyond human assistance or guidance, that's all it will have to keep it on track. 

Re: the virtuous circle, I was excited recently to find Toby Smithe's work, a compositional account of the Bayesian brain, which strives to establish formal connections between ontology, epistemology, phenomenology, semantics, evolutionary game theory, and more.

Next week, Smithe will give a seminar about this work.

TAG, 1 year ago:

robust

That word sets off my BS detectors. It just seems to mean "good, not otherwise specified". It's suspicious that politicians use it all the time.

What's the situation? 

In the USA: Musk's xAI announced Grok to the world two weeks ago, after two months of training. Meta disbanded its Responsible AI team. Google's Gemini is reportedly due for release in early 2024. OpenAI has confused the world with its dramatic leadership spasm, but GPT-5 is on the way. Google and Amazon have promised billions to Anthropic.

In Europe, France's Mistral and Germany's Aleph Alpha are trying to keep the most powerful AI models unregulated. China has had regulations for generative AI since August, but is definitely aiming to catch up to America. Russia has GigaChat and SistemmaGPT, the UAE has Falcon. I think none of these are at GPT-4's level, but surely some of them can get there in a year or two. 

Very few players in this competitive landscape talk about AI as something that might rule or replace the human race. Despite the regulatory diplomacy that also came to life this year, the political and economic elites of the world are on track to push AI across the threshold of superintelligence, without any realistic sense of the consequences. 

I continue to think that the best chance of a positive outcome lies with AI safety research (and perhaps realistic analysis of what superintelligence might do with the world) that is in the public domain. All these competing power centers may keep the details of their AI capabilities research secret, but public AI safety research has a chance of being noticed and used by any of them.

Current sense of where we're going:

AI is percolating into every niche it can find. Next are LLM-based agents, which have the potential to replace humanity entirely. But before that happens, there will be superintelligent agent(s), and at that point the future is out of humanity's hands anyway. So to make it through, "superalignment" has to be solved, either by an incomplete effort that serendipitously proves to be enough, or because the problem was correctly grasped and correctly solved in its totality. 

Two levels of superalignment have been discussed, what we might call mundane and civilizational. Mundane superalignment is the task of getting a superintelligence to do anything at all, without having it overthink and end up doing something unexpected and very unwanted. Civilizational superalignment is the task of imparting to an autonomous superintelligence a value system (or disposition, or long-term goal, etc.) that would be satisfactory as the governing principle of an entire transhuman civilization.

Eliezer thinks we have little chance of solving even mundane superalignment in time - that we're on track to create superintelligence without really knowing what we're doing at all. He thinks that will inevitably kill us all. I think there's a genuine possibility of superalignment emerging serendipitously, but I don't know the odds - they could be decent odds, or they could be microscopic. 

I also think we have a chance of fully and consciously solving civilizational superalignment in time, if the resources of the era of LLM-based agents are used in the right way. I assume OpenAI plans to do this, possibly Conjecture's plan falls under this description, and maybe Anthropic could do it too. And then there's Orthogonal, who are just trying to figure out the theory, with or without AI assistance. 

Unknown unknowns may invalidate some or all of this scenario. :-)