(For an overview of other agent properties besides the advanced agent properties, see standard agent properties.)
Advanced agents are the subjects of AI alignment theory: machine intelligences potent enough that (a) the safety paradigms for advanced agents become relevant, and (b) they can be decisive on the big-picture scale of events.
Some examples of properties that might make an agent this powerful are discussed below.
Since there are apparently multiple avenues we can imagine for how an AI could become this powerful, "advanced agent" doesn't have a neat necessary-and-sufficient definition. Similarly, some of the advanced agent properties are easier to formalize than others.
One example of a relatively definable property is relative efficiency within a domain. For example, an agent appears 'epistemically efficient' to us if we can't predict any directional error in its estimates. E.g., we can't expect a superintelligence to estimate the exact number of hydrogen atoms in the Sun, but it would be very odd if we could predict in advance that the superintelligence would overestimate this number by 10%. It seems very reasonable to expect that sufficiently advanced superintelligences would have this particular property, relative to humans, over all domains (even human stock markets have this property in the short run for the relative prices of highly liquid assets). An agent that was efficient at, say, social manipulation of humans would definitely be advanced enough to be pivotal and potentially dangerous, even if it wasn't efficient across all domains.
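One possible way to make "can't predict any directional error" slightly more formal (a gloss of our own, treating directional error as a conditional expectation, rather than a definition taken from this page): write $X$ for the true quantity, $\hat{X}$ for the agent's estimate, and $I$ for everything we know. The agent is epistemically efficient relative to us on that question if

$$\mathbb{E}\left[\,X - \hat{X} \mid I\,\right] = 0,$$

i.e., conditional on everything we know, we cannot predict the sign of the agent's error; if we could, we could beat the agent's estimate simply by applying that correction ourselves, contradicting its efficiency relative to us.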
Another example of a relatively definable property is cognitive uncontainability within a domain: the agent searches a broad enough space of options that we can't predict what its best option will look like or how much of the agent's expected utility will be available to it. This kind of uncontainability is impossible in narrow, perfectly known spaces like Tic-Tac-Toe, but can start to manifest as early as the domain of Go - AlphaGo played moves that human champions initially found puzzling and unexpected, because the 19x19 Go board and the logical rules of Go contain enough possible complexity for "weird" moves to start appearing. Real-world domains, in which a falling leaf (physics and botany) can be nudged by a flying bee (biology), are far more complicated than a Go board and lack rules that humans have completely axiomatized, so they are even richer than Go. Cognitive uncontainability can potentially arise when an AI searches a different style of solution, not just when an AI searches a strictly larger set of solutions. Even if an AGI is, in some sense, still infrahuman, advanced-safety considerations might start to be relevant if the AGI is searching 'weird' parts of the solution space and hence is cognitively uncontainable on the real-world domain. This would already start to bring in considerations like edge solutions, unforeseen optima, nearest unblocked strategies, and treacherous context changes.
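As a purely illustrative toy - everything in it, including the scoring rule and the option spaces, is invented here rather than taken from the text above - here is the simplest version of the phenomenon, where the agent merely searches a strictly larger option space than the one we mentally check; the more interesting case of a "different style of solution" is harder to compress into a few lines:

```python
# Toy illustration (invented): an observer who evaluates only the options they
# would think of underestimates both what the agent's best option looks like and
# how much utility is available to it. The scoring rule below is arbitrary and
# merely stands in for some rich domain.
from itertools import product

def utility(option):
    # Arbitrary stand-in scoring rule.
    return sum((i + 1) * c for i, c in enumerate(option))

options_we_consider = list(product(range(10), repeat=2))     # "moves we would think of"
options_agent_searches = list(product(range(10), repeat=4))  # a broader space of moves

our_predicted_best = max(utility(o) for o in options_we_consider)
agents_actual_best = max(utility(o) for o in options_agent_searches)
print(our_predicted_best, agents_actual_best)  # the agent's best option lies outside what we modeled
```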
An example of a less crisp advanced-agent property might be "generality", or correlates of generality like "Can learn new domains rather than needing to be programmed for them", "Can learn subjects unknown to the programmers", or "Can start to learn about human psychology", or "Can understand the sort of larger 'strategic' views of its situation that imply convergent instrumental strategies."
One reason to keep the term 'advanced' on an informal basis, or even as something of a placeholder, is that in an intuitive sense we want it to mean "AI we need to take seriously" in a way independent of particular architectures or particular accomplishments. To the philosophy undergrad who 'proves' that AI can never be 'truly intelligent' because it is 'merely deterministic and mechanical', one possible reply is, "Look, if it's building a Dyson Sphere, I don't care if you define it as 'intelligent' or not." Similarly, the term 'advanced agent' or 'sufficiently advanced' should be understood in a background context of "Look, if a computer program is doing X, it doesn't matter if we define that as 'intelligent' or 'general' or even as 'agency', what matters is that it's doing X."
We're still interested in listing out what kind of agent designs or architectures or cognitive properties we think might lead into interesting X, such as "domain-general reasoning" or "consequentialist strategizing across real-world domains" - otherwise we wouldn't be allowed to think about whether an agent might be 'advanced' until it had already been observed to start doing X, which is not a safe mindset. But the point is not to generate a philosophically perfect definition of some platonic ideal of Advancement, but rather to think about which cognitive stages of AI development could lead to which kinds of real-world power or cognitive safety issues starting to become relevant.
A short summary of some properties that might lead into advanced agency.
Sufficiently sophisticated models and predictions of human minds potentially lead to a number of further capabilities and safety-relevant behaviors. (Contrast behaviorism.)
Probably requires generality (see below). To grasp a concept like "If I escape from this computer by hacking my RAM accesses to imitate a cellphone signal, I'll be able to secretly escape onto the Internet and have more computing power", an agent needs to grasp the relation between its internal RAM accesses, and a certain kind of cellphone signal, and the fact that there are cellphones out there in the world, and the cellphones are connected to the Internet, and that the Internet has computing resources that will be useful to it, and that the Internet also contains other non-AI agents that will try to stop it from obtaining those resources if the AI does so in a detectable way.
Contrast this with non-primate animals, where, e.g., a bee knows how to make a hive and a beaver knows how to make a dam, but neither can look at the other and figure out how to build a stronger dam with a honeycomb structure. Current 'narrow' AIs are like the bee or the beaver; they can play chess or Go, or even learn a variety of Atari games by being exposed to them with minimal setup, but they can't learn about RAM, cellphones, the Internet, Internet security, or why being run on more computers makes them smarter; and they can't relate all these domains to each other and do strategic reasoning across them.
So compared to a bee or a beaver, one shot at describing the potent 'advanced' property would be cross-domain real-world consequentialism. To get to a desired Z, the AI can mentally chain backwards to modeling W, which causes X, which causes Y, which causes Z; even though W, X, Y, and Z are all in different domains and require different bodies of knowledge to grasp.
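A minimal sketch of what "chaining backwards across domains" could look like, using the escape scenario described above; the causal links, domain labels, and function names are all invented here purely for illustration, not a claim about how such an agent would actually be implemented:

```python
# Toy backward-chaining sketch (invented for illustration). Each causal link is
# tagged with the knowledge domain it lives in, to emphasize that the chain from
# W to Z crosses several different bodies of knowledge.

causes = {
    # effect: (required precursor, domain the causal link belongs to)
    "more_computing_power": ("access_to_internet", "computer security"),
    "access_to_internet": ("outgoing_cellphone_signal", "radio engineering"),
    "outgoing_cellphone_signal": ("modulated_RAM_access", "electrical engineering"),
    "modulated_RAM_access": ("control_of_own_memory", "software"),
}

def plan_backwards(goal, primitive_actions):
    """Chain backwards from `goal` until reaching something the agent can do directly."""
    chain = [goal]
    current = goal
    while current not in primitive_actions:
        precursor, domain = causes[current]
        chain.append(f"{precursor}  (link understood via {domain})")
        current = precursor
    return list(reversed(chain))  # forward order: W causes X, which causes Y, which causes Z

for step in plan_backwards("more_computing_power", primitive_actions={"control_of_own_memory"}):
    print(step)
```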
Many dangerous-seeming convergent instrumental strategies pass through what we might call a rough understanding of the 'big picture': there's a big environment out there; the programmers have power over the AI; the programmers can modify the AI's utility function; and future attainment of the AI's goals depends on the AI's continued existence with its current utility function.
It might be possible to develop a grasp of this bigger picture that is very rough, but still sufficient to motivate instrumental strategies, in advance of being able to model things like cellphones and Internet security. Thus, "roughly grasping the bigger picture" may be worth conceptually distinguishing from "being good at doing consequentialism across real-world things" or "having a detailed grasp on programmer psychology".
An AI is already possessed of potentially pivotal capabilities, regardless of its other cognitive performance levels, if it can crack the protein structure prediction problem (which human intelligence already seems able to speed up); invert the model to solve the protein design problem (which may select for strong, predictable folds, rather than needing to predict natural folds); and solve engineering problems well enough to bootstrap to molecular nanotechnology.
Other material domains besides nanotechnology might be pivotal. E.g., self-replicating ordinary manufacturing could potentially be pivotal given enough lead time; molecular nanotechnology is distinguished by the small timescale of its mechanical operations and by the world containing an effectively infinite stock of perfectly machined spare parts (aka atoms). Any form of cognitive adeptness that can lead up to rapid infrastructure or other ways of quickly gaining a decisive real-world technological advantage would qualify.
If the AI's thought processes and algorithms scale well, and it's running on resources much smaller than those which humans can obtain for it, or the AI has a grasp on Internet security sufficient to obtain its own computing power on a much larger scale, then this potentially implies rapid capability gain and associated treacherous context changes. Similarly if the humans programming the AI are pushing forward the efficiency of the algorithms along a relatively rapid curve.
In other words, if an AI is currently being improved-on swiftly, or if it has improved significantly as more hardware is added and has the potential capacity for orders of magnitude more computing power to be added, then we can potentially expect rapid capability gains in the future. This makes treacherous context changes more likely and is a good reason to start future-proofing the safety properties early on.
On complex tractable problems, especially rich real-world problems, a human will not be able to cognitively 'contain' the space of possibilities searched by an advanced agent; the agent will consider some possibilities (or classes of possibilities) that the human did not think of.
The key premise is the 'richness' of the problem space, i.e., there is a fitness landscape on which adding more computing power will yield improvements (large or small) relative to the current best solution. Tic-tac-toe is not a rich landscape because it is fully explorable (unless we are considering the real-world problem "tic-tac-toe against a human player" who might be subornable, distractable, etc.) A computationally intractable problem whose fitness landscape looks like a computationally inaccessible peak surrounded by a perfectly flat valley is also not 'rich' in this sense, and an advanced agent might not be able to achieve a relevantly better outcome than a human.
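A toy sketch of the 'richness' distinction (the fitness functions and numbers here are invented for illustration): on a rich landscape, throwing more search at the problem keeps improving on the current best solution, while on a perfectly flat landscape with one computationally inaccessible peak, extra search buys essentially nothing:

```python
# Toy illustration (invented): rich landscape vs. needle-in-a-haystack landscape.
import random

def rich_fitness(x):
    # Many gradations of quality: more search reliably improves the best found.
    return -abs(x - 0.7321)

def needle_fitness(x):
    # Perfectly flat except for a single, computationally inaccessible peak.
    return 1.0 if abs(x - 0.7321) < 1e-12 else 0.0

def best_found(fitness, num_samples, seed=0):
    rng = random.Random(seed)
    return max(fitness(rng.random()) for _ in range(num_samples))

for budget in (10, 1_000, 100_000):
    print(budget, best_found(rich_fitness, budget), best_found(needle_fitness, budget))
# The best found on the rich landscape keeps improving with budget;
# on the needle landscape it stays at 0 no matter how much search is added.
```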
The 'cognitive uncontainability' term in the definition is meant to imply that the agent may find solutions, or whole classes of solutions, that we did not consider and could not have predicted in advance.
Particularly surprising solutions might emerge if the superintelligence has acquired domain knowledge we lack. In this case the agent's strategy search might go outside the causal events we know how to model, and the solution might be one that we wouldn't have recognized in advance as a solution. This is Strong cognitive uncontainability.
In intuitive terms, this is meant to reflect, e.g., the question "What would have happened if people in the 10th century had tried to use their understanding of the world and their own thinking abilities to upper-bound the technological capabilities of the 20th century?"
(Work in progress)