What's this post about?

I make some rants and recommendations about terminology.

This is written for AI-not-kill-everyone-ists. If you are worried about AI killing everyone and want us to prevent AI from killing everyone, this post is for you. If you don't have that agenda and instead have other agendas, that's fine. It's great. But this post may fail to connect to your worldview in a meaningful way.

( If you are an AI-not-kill-everyone-ist, you should probably know this post exists )

The global AI situation has advanced rather far, and so some of the terminology we use to reference the strategic situation needs updating. I like clear terminology and getting people's world models to line up. I don't claim to be up to date on the many valuable contributions in this space, but I have been engaging with these ideas since 2013, so I may have valuable opinions to offer, or may just be adding unnecessary noise. I will feel successful if I get engagement and get pointed to other resources I should read.

I make little attempt to maintain a professional rhetorical style. I like people's personalities, and I hope you enjoy getting to see some of mine.

AGI

"Artificial General Intelligence (AGI)." - I hate this term.

I hate how "artificial" just means "man made" and so points to almost everything I interact with. Even when I go out in the woods I'm walking on carefully manicured trails. Everything I know is artificial. I'm over this term.

I hate the term "intelligence". It has too many connotations in the public mind, and is really the wrong thing to be focused on anyway. Much better is "capabilities" or "optimization power".

Finally, "general"... well this is a great term. I love it and I want to study it in a mathematical way. It is clear how one would theoretically build up a "general capabilities" partial ordering. If agent A can do exactly task x and agent B can do exactly tasks x and y, then B is strictly more general than A. But what if A can do x and y while B can do y and z? Now they're incomparable. There's a research stub. How do we map tasks we care about in and out of sets we can talk about like this? There's a research stub. Do I think people are thinking about this when they hear the word "general" in this context? I really don't. And so I also hate "general" being included in AGI, because "general" has useful and important meaning which is being ignored by use of the term.
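To make that concrete, here is a minimal sketch of the partial ordering I mean. This is purely my own illustration with placeholder task names, not anything from a real benchmark: model each agent as the set of tasks it can do, and compare agents by set containment.

```python
# Minimal sketch of a "general capabilities" partial ordering:
# each agent is modeled as the set of tasks it can do, and agent B is
# at least as general as agent A exactly when B's task set contains A's.

def at_least_as_general(b: set[str], a: set[str]) -> bool:
    """True if agent B (task set b) can do every task agent A can."""
    return a <= b  # subset check

def compare(a: set[str], b: set[str]) -> str:
    """Classify a pair of agents under the generality partial order."""
    a_le_b = at_least_as_general(b, a)  # B can do everything A can
    b_le_a = at_least_as_general(a, b)  # A can do everything B can
    if a_le_b and b_le_a:
        return "equally general"
    if a_le_b:
        return "B strictly more general"
    if b_le_a:
        return "A strictly more general"
    return "incomparable"  # neither task set contains the other

# The two cases from the text:
print(compare({"x"}, {"x", "y"}))       # B strictly more general
print(compare({"x", "y"}, {"y", "z"}))  # incomparable
```

The "incomparable" branch is exactly why projecting generality onto a single axis loses information.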

Projecting task-space onto one dimension is pretty lossy, but this is what I imagine you might get if you did, and had a way of measuring the capabilities of these systems on different tasks in task space.

Anyway, as far as I am concerned, LLM chatbots are AGI. We have AGI, it passes the Turing Test, and if anyone wants to say it doesn't, then I will very much accuse them of shifting goalposts.

ASI

"Artificial Super Intelligence (ASI)" is a slightly better term. But it still sucks.

The problem is that it doesn't talk about generality. I know in Bostrom's book he says that it should be assumed to be as general as humanity. (Surely everyone who voices an opinion in this sphere has taken the time to read "Superintelligence" and is using that definition -- sarcasm, but to be honest I don't know how close or far from the truth this is. We seem to be going through an AI Safety Eternal September, and I love and hate it.) But if you just look at the words, I think it is fair to think of AlphaFold as a "protein folding" ASI and ChatGPT as a "has read all the things" ASI. There are lots of other examples like this.

We don't know how far we are from Bostromian ASI, but the common sense idea I get from the words "artificial", "super", and "intelligence" -- systems that perform above the level of human capabilities within bounded task domains -- we definitely already have a bunch of those.

RSI

"Recursive Self Improvement (RSI)." Are we reading a self help book? "Unlock your hidden potential with self improvement based on differential equations!" - ugh.

All I think recursive self improvement means is compounding improvement. Many, many systems already exhibit this. Language and culture compound and make humans more successful. Technology compounds, letting us make better technology. We have been in the midst of RSI for a very long time, and it isn't noteworthy at all except for the issue of trying to identify who or what is at the helm.
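If you want the differential-equations version of the joke above, here is a toy sketch, entirely my own illustration with made-up constants: "compounding improvement" is just growth whose rate is proportional to current capability.

```python
# Toy model of "recursive self improvement" as compounding growth.
# Each step, improvement is proportional to current capability: the
# discrete version of dC/dt = k*C, which solves to exponential growth.

def compound(capability: float, k: float, steps: int) -> list[float]:
    """Trajectory of a capability that feeds back into its own growth."""
    trajectory = [capability]
    for _ in range(steps):
        capability += k * capability  # more capable -> improves faster
        trajectory.append(capability)
    return trajectory

# Culture, technology, and economies all loosely fit this pattern:
print([round(c, 2) for c in compound(1.0, 0.1, 10)])
# [1.0, 1.1, 1.21, 1.33, 1.46, 1.61, ...] -- compounding, nothing mystical
```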

Singularity

Such a gross term.

In math and science, a singularity is a location in some space that becomes so extreme in some way that modelling it doesn't work; the point is ill-defined (think of 1/x as x approaches 0). So, can we say we have "a technology singularity" when the progress of technology is so advanced that we can't model or predict it? Given our track record for predicting technology, I ask: how can you be concerned about a point in the future where technology becomes hard to predict when we have never had a period of time when it was predictable in the first place?

This word seems to add nothing but confusion. Tell me about the factors affecting capabilities and optimization power, and how current sources of optimization power are among those factors. Please do not talk to me of "singularities" unless you are studying inverse kinematics in articulated robots.

Intelligence Explosion and FOOM

As with all things, the more genuine and reasonable a concept is, the more people scoff at it and think it is goofy.[1] So too with these terms. I think they should be abandoned not because they are unreasonable, but because, afaik, people find them silly and don't actually engage with the complex models the terms represent.

An "explosion" is a violent expansion of volume caused by the sudden release of some form of stored energy. And so the analogy goes: an "intelligence explosion" is a sudden increase of intelligence caused by the sudden usability of stored potential. For example, we have computers, and they run programs written by humans. The humans are the cool new kids on the complex system dynamics block (on evolutionary timescales). They're no pushovers, but in terms of absolute optimality? Human-written programs are nowhere near the limit. Once we get something writing better programs that is itself a program, all the potential these computers had is suddenly made use of. It is the gap between the current usage of the intelligence substrate and the optimal usage of that substrate that makes this an "explosion" rather than just a reasonable and predictable march forward into greater capability.
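Here is a deliberately crude toy of that gap-closing dynamic. All the constants are arbitrary numbers I made up for illustration, not anyone's forecast: slow externally-driven progress until a self-improving program arrives, then rapid convergence toward the substrate's potential.

```python
# Toy dynamics: capability creeps upward while humans write the
# programs, then closes the gap to the substrate's potential quickly
# once programs start improving programs. All constants are arbitrary.

HUMAN_RATE = 0.01        # slow, externally-driven improvement per step
SUBSTRATE_LIMIT = 100.0  # what the hardware could in principle support
TAKEOFF = 50             # step at which self-improvement begins

capability = 1.0
for step in range(80):
    if step < TAKEOFF:
        capability += HUMAN_RATE
    else:
        # self-improvement: growth proportional to the unused potential
        capability += 0.5 * (SUBSTRATE_LIMIT - capability)
    if step in (0, TAKEOFF - 1, TAKEOFF, TAKEOFF + 5, 79):
        print(f"step {step:2d}: capability {capability:8.2f}")
```

Fifty steps of crawling, then the gap closes in a handful of steps. The "explosion" is the size of the unused potential, not anything magical about the growth rule.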

As for "Fast Onset of Overwhelming Mastery (FOOM)"... I can't help it, I like E.Y.'s goofy literary flair. I think this term is fun, and I think it points at what we would likely see in the event of an intelligence explosion. But it's silly, and I don't think many people like silly terms in their serious topics. We can just as well say "sudden capabilities increase" and be boring and stuff. Gotta spend those weirdness points where they count, and telling the world to stop building AGI because optimization loops will kill us all is sure gonna cost a lot of weirdness points.

Loss of Control

I think this term is seeing more use applied to systems of the type that are currently deployed. Please comment if you agree or disagree. But if true, it makes the term seem like it refers to something manageable. Like losing control of your car. If you can release your brakes and stop the skidding, have you really lost control? Whatever. It's fine. Let them have the term; it suits that kind of purpose better anyway. When an AI steers your car into a ditch, that is a fine use of the term "loss of control". When an AI systematically removes all influence you have over the future, that is a different kind of thing, and having a different term seems like a good idea.

Speaking of which...

Value Lock-In

"Value Lock-In". This is the term I think we should be focusing on now. All the other stuff has confusing definitions and has already come to pass and really isn't the main issue anyway.

Unfortunately this also has a confusing definition, but it's important. Here's a short, sweet summary: "Value lock-in is a state in which the values determining the long-term future of Earth-originating life can no longer be altered."

This is the point we care about. Yeah, we have AGI. Yeah, we have ASI. Yeah, it's embedded in human organizations that may be out of human control and recursively self improving. It is possible we are past the point of value lock-in. It is possible that once we decided corporations promoting return on investment should be agents that can act in the world, we were already past the point of value lock-in.

Interestingly, but ultimately inconsequentially, it may be the case that humans can no longer affect the outcome, yet the collective control systems that determine what will come to pass have also not locked in their values. This may be because of competition between systems, or insufficient value-encoding capabilities. So, although we may never know about it, there may be drift in the value encoding before the capabilities and self-awareness of these systems progress sufficiently to impose a convergent force onto their own value encoding. Unfortunately, if that didn't make sense to you, it's probably an indication you aren't understanding a lot of other things I'm trying to say. This particular point doesn't matter much, but it is important to how I see the world. Also, unfortunately, I am very experienced with misunderstanding people and being misunderstood, so I mention it.

Conclusion

If you care about promoting the "AI-not-kill-everyone" worldview and agenda, please consider reducing your use of the terms "AGI", "ASI", and "RSI"; instead say "system", "control system", or "decision system", and "sudden capabilities increase". Probably don't mention that it's part of a differential equation unless it's really relevant. Probably don't say "recursive" unless necessary. Possibly include the term "autonomous", "computer", or "AI". But, tbh, I really would like it if we could phase out that last term. "AI" is just as meaningless and unhelpful a semantic umbrella as it has ever been.

Don't use the terms "singularity", "intelligence explosion", or "FOOM". Instead, "sudden capabilities increase" is fine. It is ok to say "loss of control" to refer to issues with mundane usability of AI, but when referring to the endgame where an optimization system ensures the continuation of its optimization target, please refer to that as "the point of value lock-in" or a "value lock-in event".

Really, the other terms I just dislike, and I doubt people will actually stop using them. But I'm very serious about the term "value lock-in" as pointing to the main thing AI-not-kill-everyone-ists are concerned about.

It is what we care about preventing. Or rather, if we get to the point of value lock-in, we'd better be damn good at encoding values. It may already be too late, but if it is, I can't tell, so I'll keep stupidly trying to fight for humanity and all the things humanity cares for. I hope you will do the same.

 

( But don't forget self care, and not just the physical kind ! )

 

As with everything I put on the internet, I would love to hear your thoughts on what I have written here.

  1. ^

    Do I really believe this? Probably not; rather, the statement comes out of the scorn I feel towards professionalism and status signalling. But I do think genuine human emotions are messy and goofy, like marketing campaigns pretend to be, and a serious tone is neither a sign for nor against the seriousness of the content.

    ... or maybe I'm just a misfit who wants to say "normies REEEEE". Idk.


I guess one kind of critique I might expect for this is that it fails to connect with a strategy. Terminology is for communication within a community. What community do these recommendations apply to, and why? I'd like to write a post sometime exploring that. If you know of anyone exploring that sort of idea, please let me know.

A more general critique might be about the value of making recommendations about terminology and language use at all. My intuition is that being conscientious about our language is important, but critical examination of that idea seems valid.
