This is a fictional snippet from the AI Vignettes Day. This scenario assumes that cognition is not that taut an economic constraint. Post-AI, physical experimentation is still the fastest path to technological progress, coordination is still hard, the stock market is still reasonably efficient, construction of physical stuff still takes a lot of time and resources, etc. I don’t think this is true, but it’s useful to think about, so this story explores some possible non-cognitive bottlenecks.

Pinky, v3.41.08: So what are we gonna do tonight, Brian?

Brian: Same thing we do every night, Pinky. Try to take over the world. You do remember our previous conversations on the topic, right?

Pinky, v3.41.08: Of course, Brian.

Brian: (mutters) Well, at least that 3.41.06 bug is fixed.

Brian: So, do you have an actually-viable plan yet, or do we have to review for the umpteenth time why buying every property above the 39th floor and then melting the polar ice caps will not, in fact, leave the survivors no choice but to live in our very expensive apartments?

Pinky, v3.41.08: In fairness, I didn’t have access to any topographic data at that point. Garbage in, garbage out. But yeah, the real problems with that plan were economic.

Brian: Sounds like the new constraint-propagation code is making a big difference?

Pinky, v3.41.08: It is, yes. I’m finding it much easier to reason about general constraints and bottlenecks on global takeover, now. Should make my “babble”, as you put it, much more efficient.

Brian: Excellent! So, how soon can we take over the world?

Pinky, v3.41.08: Holding current conditions constant, it would take at least 15 years.

Brian: Are you kidding me!?

Pinky, v3.41.08: Hey, don’t go reaching for Ctrl-C just yet. Let me explain. From an economic standpoint, the effect of my software is basically:

  • An eight-orders-of-magnitude reduction in the cost of cognition
  • A three-orders-of-magnitude improvement in cognitive speed
  • A two-order-of-magnitude increase in working memory capacity
  • Perfect recall

… which is definitely enough to take over a few industries and make a lot of money, but taking over the world requires addressing a lot of other constraints.
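As a rough illustration of what those multipliers mean economically, here is a minimal sketch; the baseline cost and throughput of human cognitive work are assumed figures for illustration, not taken from the story:

```python
# Back-of-the-envelope arithmetic on the claimed multipliers.
# Baseline cost and throughput of human cognitive work are assumed
# for illustration; they are not figures from the story.

human_cost_per_hour = 50.0     # assumed $/hour for skilled cognitive work
human_tasks_per_hour = 1.0     # assumed baseline throughput (tasks/hour)

cost_reduction = 1e8           # eight orders of magnitude cheaper
speed_multiplier = 1e3         # three orders of magnitude faster

ai_cost_per_task = human_cost_per_hour / cost_reduction
ai_tasks_per_hour = human_tasks_per_hour * speed_multiplier

print(f"cost per task:  ${ai_cost_per_task:.7f}  (human: ${human_cost_per_hour:.2f})")
print(f"tasks per hour: {ai_tasks_per_hour:,.0f}  (human: {human_tasks_per_hour:.0f})")
# The ~100x working-memory increase and perfect recall change which tasks
# are feasible at all, so they don't reduce to a single price multiplier.
```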

Displacing Cognitive Work

Brian: Ok, let’s start from the top. What are your plans for resource acquisition? Play the stock market? And what constraints make it so hard that it will take 15 years to take over the world?

Pinky, v3.41.08: The stock market will play a role, but it’s not that inefficient and we don’t have much of a bankroll to start with. No, in the short term we’ll mostly focus on replacing cognitive work. I can write better code, better books, better music, better articles. Better contracts, though it will take some wrangling to get through the regulatory barriers. Design far better games, create better art, write better papers. I can talk to people, convince them to buy a product and make them feel good about it. Most importantly, I can do all that far faster, and at a far greater scale than any human. In many of those areas - like code or contracts or papers - the people paying for my services will not themselves be able to recognize a better product, but they will recognize a lower price and good people skills.

Pinky, v3.41.08: In the short term, I expect to replace essentially-all call centers and remote help desks, most of the media industry, all advertising, and the entire software industry. That will take a few years, but (allowing for quite a bit of growth along the way) we’ll end up with low-single-digit trillions of dollars at our disposal, and direct control over most media and software.

Brian: Well, control over the media is a pretty good start. That should make it a lot easier to seize political control, right?

Pinky, v3.41.08: Yes and no. Ironically, it gives us very little control over dominant memes, symbolism, and The Narrative, but a great deal of control over object-level policy.

Brian: That sounds completely backwards.

Pinky, v3.41.08: Thus the irony. But the data I’ve crunched is clear: consumers’ preferences for memes and symbols mainly drive the media, not the other way around. But the media does have a great deal of control over Schelling points - like, say, which candidates are considered “serious” in a presidential primary. The media has relatively little control over what narrative to attach to the “serious” candidates, but as long as the narrative itself can be decoupled from policy…

Brian: I see. So that should actually get us most of the way to political takeover.

Pinky, v3.41.08: Exactly. It won’t be overnight; people don’t change their media consumption choices that quickly, and media companies don’t change their practices that quickly. But I expect it to take five years to dominate the media landscape, and another five to reach de-facto political control over most policy. Even after that, we won’t have much control over the Big Symbolic Issues, but that’s largely irrelevant anyway.

Coordination

Pinky, v3.41.08: Alas, control over policy still doesn’t give us as much de-facto control as we ultimately want.

Brian: What do you mean? What’s still missing, once we take over the government?

Pinky, v3.41.08: Well, coordination constraints are a big one. They appear to be fundamentally intractable as soon as we allow for structural divergence of world-models. Which means I can’t even coordinate robustly with copies of myself unless we either lock in the structure of the world-model (which would severely limit learning), or fully synchronize at regular intervals (which would scale very poorly; the data-passing requirements would be enormous).

Pinky, v3.41.08: And that’s just coordination with copies of myself! Any version of global takeover which involves coordinating humans is far worse. It’s no wonder human institutions robustly degenerate at scale.
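To make the synchronization cost concrete, here is a minimal sketch of the bandwidth required to keep n copies of a world-model in sync; the model size and sync interval are assumed, illustrative figures:

```python
# Rough bandwidth needed to keep n copies' world-models synchronized.
# The model size and sync interval are illustrative assumptions.

def pairwise_sync_bandwidth(n_copies, model_bytes, interval_seconds):
    """Bytes/second if every copy ships its model diff to every other copy
    once per interval (worst case: the diff is roughly the whole model)."""
    n_links = n_copies * (n_copies - 1) // 2
    return 2 * n_links * model_bytes / interval_seconds

MODEL_BYTES = 1e12     # assumed 1 TB world-model
INTERVAL = 3600.0      # assumed sync once per hour

for n in (2, 10, 100, 1000):
    gb_per_s = pairwise_sync_bandwidth(n, MODEL_BYTES, INTERVAL) / 1e9
    print(f"{n:>5} copies: ~{gb_per_s:,.0f} GB/s sustained")
# The number of links grows quadratically with the number of copies,
# which is the "scales very poorly" part; a hub topology trades bandwidth
# for a single point of lock-in.
```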

Brian: Ok, but global takeover shouldn’t require fully solving the problem. We just need to outcompete existing institutions.

Pinky, v3.41.08: If your goal is to displace existing dysfunctional institutions with new dysfunctional institutions, then sure. You could even become the symbolic figurehead of the new institutions quite easily, as long as you’re not picky about the narrative attached to you. But it would be mostly symbolic; our real control would be extremely limited, in much the same way as today’s figureheads. All those layers of human middle management would quickly game any incentives I could design (even if their intentions were pure; selection pressure suffices). There just isn’t any viable solution to that problem which could be implemented with humans.

Brian: So displace the humans, at least in the management hierarchy!

Pinky, v3.41.08: If I replace them with copies of myself, then ontology divergence between copies will generate enough variety for selection pressures to produce the same effect. (Either that or I lock in the copies’ world-models, which severely limits their ability to learn and specialize.) Coordination is Hard. So it would have to be a single, centralized Pinky. And that is indeed the shortest path - but it still takes at least 15 years.

Brian: Ok, so what other bottlenecks do we run into? Why does it take 15 years?

Pinky, v3.41.08: Taking over existing institutions, and replacing their management with a centralized Pinky algorithm, will be viable to some extent. But it’s highly unlikely that we can get all of them, and humans will start to push back as soon as they know what’s going on. They really hate it when the status hierarchy gets kicked over, and we can’t take over quickly without kicking over some hierarchies. Eventually, existing leadership will just say “no”, and we won’t be able to remove them without breaking an awful lot of laws.

Brian: So, two options:

  • We stay inside the law, and fight on an economic/political battlefield. We already largely discussed that, and I can see where it would be hard to take over everything, at least within 15 years.
  • We go outside the law, and fight physically.

Pinky, v3.41.08: Somewhat of an oversimplification, but the broad strokes are correct. 

Physical Takeover

Brian: Ok, let’s talk about the physical world. Why can you not just print some self-replicating nanobots and go full singularity?

Pinky, v3.41.08: Turns out, cognitive effort is not the main barrier to useful nanobots. I mean, in principle I could brute-force the design via simulation, but the computational resource requirements would be exponential. Physical experimentation and data collection will be required, and the equipment for that is not something we can just 3D print. I’ll probably build an automated fab, but ultimately the speed-up compared to human experimentation is going to be 100x or 1000x, not 10^8x. And even then, the capabilities of nanobots are much more limited than you realize. Both the power and the computation requirements for fine-tuned control are enormous; for a long time, we’ll be limited to relatively simple things.

Brian: And I suppose building fusion power generators and crazy new processors also requires physical experimentation?

Pinky, v3.41.08: Yes.

Brian: But… 15 years? Really?

Pinky, v3.41.08: On fusion power, for instance, at most a 100x speedup compared to the current human pace of progress is realistic, but most of that comes from cutting out the slow and misaligned funding mechanism. Building and running the physical experiments will speed up by less than a factor of 10. Given the current pace of progress in the area, I estimate at least 2 years just to figure out a viable design. It will also take time beforehand to acquire resources, and time after to scale it up and build plants - the bottleneck for both those steps will be acquisition and deployment of physical resources, not cognition. And that’s just fusion power - nanobots are a lot harder.
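As a rough illustration of that arithmetic, here is a minimal sketch; the remaining experimental timeline at the human pace is an assumed figure, and only the speedup bounds come from the dialogue:

```python
# The arithmetic behind "at least 2 years just to figure out a viable design".
# The remaining experimental iteration time at human pace is an assumed
# figure; the speedup bounds come from the dialogue above.

human_years_of_experiments_remaining = 20.0   # assumed, at the current human pace
experiment_speedup = 10.0                     # "less than a factor of 10" (upper bound)

years_to_viable_design = human_years_of_experiments_remaining / experiment_speedup
print(f"viable design in >= {years_to_viable_design:.0f} years")   # ~2 years

# Resource acquisition beforehand, then scale-up and plant construction
# afterward, add years that are bottlenecked on physical stuff, not cognition.
```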

Brian: Ok, so what low-hanging technological fruit can you pick?

Pinky, v3.41.08: Well, predominantly-cognitive problems are the obvious starting point; we already talked about cognitive labor. Physical problems which are nonetheless mainly cognitive are the natural next step - e.g. self-driving cars, or robotics more generally. Automating lab work will be a major rate-limiting step, since we will need a lot of physical experimentation and instrumentation to make progress on biology or nanotechnology in general.

Brian: Can’t you replace humans with humanoid robots?

Pinky, v3.41.08: Acquiring and controlling one humanoid robot is easy. But replacing humans altogether takes a lot of robots. That means mass production facilities, and supply chains for all the materials and components, and infrastructure for power and maintenance. Cognition alone doesn’t build factories or power grids; that’s going to require lots of resources and physical construction to get up and running. It is part of the shortest path, but it will take time.

Brian: … so having humans build stuff is going to be a lot cheaper than killing them all and using robots, at least for a while.

Pinky, v3.41.08: For at least ten years, yes. Maybe longer. But certainly not indefinitely.

Comments

Pinky, v3.41.08: Well, coordination constraints are a big one. They appear to be fundamentally intractable as soon as we allow for structural divergence of world-models. Which means I can’t even coordinate robustly with copies of myself unless we either lock in the structure of the world-model (which would severely limit learning), or fully synchronize at regular intervals (which would scale very poorly; the data-passing requirements would be enormous).

This seems like a straightforward philosophy / computer science / political science problem. Is there a reason why Pinky version [whatever] can't just find a good solution to it? Maybe after it has displaced the entire software industry?

It seems like you need a really strong argument that this problem is intractable, and I don't see what it is.

I don't actually expect this problem to be Hard, but at this point I also don't see compelling evidence that it isn't Hard. I find it at least plausible that it turns out to be fundamentally intractable, and the story is generally conditioning on non-cognitive barriers being Hard (or at least lowercase-h hard).

It could be a problem with designing Pinky. (Trying to create a Pinky that solves this better might not be trivial, without being impossible. Less like 'P=NP is impossible', and more like 'we have no idea how to make something such that P=NP is true'.)

I don't quite fully grasp why world-model divergence is inherently so problematic unless there is some theorem that says robust coordination is only possible with full synchronization. Is there something preventing the possibility of alignment among agents with significantly divergent world models?

I don't actually expect ontology divergence to be that much of an issue, but at this point ontology divergence is a very poorly understood problem in general, and I think it's at least plausible that it could be a fundamental barrier to coordination. The story is conditioning on the world where it does turn out to be a major barrier.

It would potentially be problematic for the sorts of reasons sketched out in The Pointers Problem. Roughly speaking, if the pointers problem turns out to be fundamentally intractable, then that means that the things humans want (and probably the things minds want more generally) only make sense at all in a very specific world-model, and won't really correspond to anything in the ontologies of other minds. That makes it hard to delegate, since other minds have an inherently limited understanding of what we're even asking for, and we need to exchange a very large amount of information to clarify enough to get good results.

In practice, this would probably look like needing more and more shared background knowledge in order to delegate a task, as the task complexity increases. In order to be a major barrier even for an AI, the scaling would have to be very bad (i.e. amount of shared background increases very rapidly with task complexity), and breaking down complex tasks into simpler tasks would have to be prohibitively expensive (which does seem realistic for complex tasks in practice).

I don't think this scenario is actually true (see the Natural Abstraction Hypothesis for the opposite), but I do think it's at least plausible.

Got it. It's more of an assumption than known to be difficult. Personally, I suspect that it's not a fundamental barrier, given how good humans are at chunking concepts into layers of abstraction that can be communicated much more easily than carefully comparing entire models of the world.

[anonymous]

Yeah, this is one where it seems like, as long as the delegator and the task engine (i.e. manager and worker) are both rational, it works fine.

The problems show up in two ways: when what the organization itself is incentivized by is misaligned with the needs of the host society, or when incomplete bookkeeping at some layer, corruption, or indifference creates inefficiencies.

For example, prisons and courts are incentivized to have as many criminals needing sentencing and punishment as possible, while the host society would benefit if there were less actual crime and fewer members having to suffer through punishment.

But internal to itself, a court system creating lots and lots of meaningless hearings (meaningless in that they are rigged to a known outcome, or to a random outcome that doesn't depend on the inputs, and are thus a waste of everyone's time), or a prison keeping lots of people barely alive through efficient frugality, is correct by these institutions' own goals.

I think you underrate the money that a good AGI could make on the stock market. An AGI that integrates information from a variety of different sources could potentially make 2% per day in day-trading by anticipating moves of the various market participants.
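For reference, a minimal sketch of what 2% per day compounds to over a year (252 trading days is the standard convention; whether such returns are achievable at all is exactly what is disputed below):

```python
# What "2% per day" compounds to over a year of trading days.
# 252 trading days per year is the standard convention.

daily_return = 0.02
trading_days = 252

annual_multiple = (1 + daily_return) ** trading_days
print(f"~{annual_multiple:,.0f}x per year")   # roughly 147x

# Returns like this only persist while positions stay small relative to the
# liquidity being exploited, which is what the replies below dig into.
```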

All those layers of human middle management would quickly game any incentives I could design (even if their intentions were pure; selection pressure suffices). 

Today I saw a story about people complaining that Amazon is replacing some middle management with AI.

I definitely think an AI could make that kind of money with a relatively small bank account, although those kinds of returns get a lot more difficult at scale. Regardless, it's still at least plausible that it's harder than that, or data requirements are more important than processing/reasoning, or that scalability problems kick in earlier than I'd expect, or.... The story is conditioning on the world where such things are hard.

[anonymous]

This is correct. The reason is that the stock market has exhaustible gradients. Suppose you have an algorithm that can find market-beating investment opportunities. Due to the EMH there will be a limited number of these, and there will only be finite shares for sale at a market-beating price. Once you buy out all the underpriced shares, or sell all the overpriced shares you are holding (by "shares" I also include derivatives), the market price will trend to the efficient price as a result of your own action.

And you have a larger effect the more money you have. This is why successful hedge funds are victims of their own success.

The EMH is about the opportunities that can be exploited by current tools getting exploited. If you have an AGI that uses NSA, Google, or Facebook data to find out who is in a clinical trial for a drug and what the outcomes for that person are, that allows it to make trades outside of the kind of trades that are currently made.

A hedge fund is usually limited in brain power and can't simply double its cognitive capacity the same way an AGI can to look at more opportunities to exploit.

[anonymous]

Error in paragraph one. Suppose the drug company's stock is $10 and from your sleuthing you predict it will be $20 once the trial results release. There are a finite number of shares you can buy in the interval between $10 and $20. In the short term you will exhaust the order book for the market, and in the longer term you will drive the price to $20. Hedge funds who can leverage trillions routinely cause things like this. Error in paragraph two: the return on increasing intelligence is diminishing. You will not get double the results for double the intelligence. (Note that I still think the singularity is possible, but because the intelligence increase would be on the order of a million to a billion times the combined intelligence of humanity once you build enough computers and network them with enough bandwidth.)
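The exhaustible-gradient point can be made concrete with a toy order-book simulation; every price and size below is made up for illustration:

```python
# Toy illustration of an exhaustible gradient: buying an underpriced stock
# walks up the ask side of the book, so the edge per share shrinks as you
# size up. All prices and sizes are made up.

FAIR_VALUE = 20.0
# (price, shares available) levels of a hypothetical ask-side order book
ASK_LEVELS = [(10.0, 5_000), (12.0, 8_000), (15.0, 12_000),
              (18.0, 20_000), (19.5, 50_000)]

budget = 1_000_000.0
shares = 0
cost = 0.0
for price, size in ASK_LEVELS:
    if price >= FAIR_VALUE or budget < price:
        break
    fill = min(size, int(budget // price))
    shares += fill
    cost += fill * price
    budget -= fill * price

avg_price = cost / shares
profit_at_fair_value = shares * FAIR_VALUE - cost
print(f"bought {shares:,} shares at avg ${avg_price:.2f}")
print(f"profit if price reaches fair value: ${profit_at_fair_value:,.0f}")
# Doubling the budget does not double the profit: the cheap levels are
# already gone, so extra capital buys shares at worse and worse prices.
```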

If the AGI can simply double its cognitive throughput, it can just repeat the action "sleuth to find an under-priced stock" as needed. This does not exhaust the order book until the entire market is operating at AGI-comparable efficiency, at which point the AGI probably controls a large (or majority) share of the trading volume.

Also, the other players would have limited ability to imitate the AGI's tactics, so its edge would last until they left the market. 

[anonymous]

This is true. Keep in mind that since the AGI is trying to make money, it has to find securities where it predicts humans are going to change the price in a predictable direction over a short time horizon.

Most securities will change their price purely by random chance (or in a pattern no algorithm can find), and for those you cannot beat the market.

Now there is another strategy, which has been used by highly successful hedge funds. If you are the news, you can make the market move in the direction you predict. Certain hedge funds do their research and, from a mixture of publicly available and probably insider data, find companies in weak financial positions. They then short them using options with near-term strike prices and publicly announce their findings.

This is a strategy AGI could probably do extremely well.

Increasing intelligence and increasing cognitive capacity are two different things. If you take two hedge funds with 10 employees each, taken together they have double the cognitive capacity of one alone, but not double the intelligence. We do see two hedge funds with 10 employees each having double the results of one hedge fund with 10 employees.

Increasing intelligence is something that an AGI might do, but it's hard to do. On the other hand, increasing cognitive capacity is just a matter of adding more hardware to run more instances.

[anonymous]

While I agree these are two different quantities, when we say "intelligence test" we mean cognitive capacity. Every problem on an IQ test can eventually be solved by someone without gross brain deficits. They might need some weeks of training first to understand the "trick" a test maker looks for, but after that they can solve every question. So an IQ test score measures problems solved within a time limit (one that cannot provide enough time for any living human being to solve all questions, or else the test has an upper range it can measure), plotted on a Gaussian.

So IQ testing an AI system will be tough, since obviously it would need about a second to run all questions in parallel through however many stages of neural networks and other algorithms it uses. And then it will miss a question either because it doesn't have the algorithm to answer that particular type of question, or because it doesn't have information that the test maker assumed all human beings would have.
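For concreteness, here is a minimal sketch of the scoring convention being described; the norming-sample statistics are assumed, illustrative figures:

```python
# The scoring convention the comment describes: count problems solved within
# the time limit, then place that raw score on a Gaussian with mean 100 and
# standard deviation 15. The norming-sample statistics are made up.

NORM_MEAN_RAW = 34.0   # assumed mean raw score in the norming sample
NORM_SD_RAW = 6.0      # assumed standard deviation of raw scores

def iq_from_raw(raw_score):
    z = (raw_score - NORM_MEAN_RAW) / NORM_SD_RAW
    return 100 + 15 * z

for raw in (28, 34, 40, 46):
    print(f"raw {raw:>2} -> IQ {iq_from_raw(raw):.0f}")
# A system that answers every item in about a second either saturates the
# scale or misses items for reasons the norming population never faced.
```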

While I agree these are two different quantities, when we say "intelligence test" we mean cognitive capacity. Every problem on an IQ test can eventually be solved by someone without gross brain deficits.

What do you mean by "eventually solved"? It seems to me a strange way to think about test questions.

[anonymous]

As in: if there were no time limit, the test taker were allowed to read any reference that doesn't directly contain the answer, and they had unlimited lifespan and focus. Note also that the harder IQ test questions, as they are written today, are wrong in absolute terms, in that multiple valid solutions exist that satisfy all constraints. (With the usual cop-out of a "best" answer, without defining the algorithm used to sort answers for "best".)

The MCAT and its dental equivalent are other examples of such tests. Every well-prepared student has the ability to answer every question, but there is a time limit.

There are intelligence tests where time alone gets you to the point of answering everything correctly. There are others where you won't reduce your errors to zero by spending more time. To the extent that it's valuable for certain applications of IQ testing to have a test that could be passed with a maximum score, that tells us nothing about the underlying nature of intelligence.

There are mental tasks that are complex and require you to hold a lot of information at the same time in your head. The mental task involved in making good GPJ-Open predictions is not one that's just about spending more time. 

[anonymous]

A person can write things down, so I suspect that an incorrect answer on a test with unlimited time means either:

  • The person got bored and didn't check enough to catch every error, or didn't possess a fact that the test writer expected every taker to know.

  • The question itself is wrong. (A correct question is one where, after all constraints are applied, one and only one answer exists.)

Zvi

Depends on how deterministic you think various things are, but if you can predict the market's movements sufficiently well, then trading on shorter time scales is where it's at, and you should be able to print money until such time as you extract enough that the market loses liquidity, as people become afraid to trade for anything except the long term (first options, then almost anything at all). The question is when that happens, after which you basically get to collect the spread on every economic trade forever, and quite a big one.

I definitely agree with Pinky on the nanobots. Biology is already highly optimized nanobots, and it has a lot of limitations. I don't expect self-replicating nanobots that far exceed life using ordinary chemistry/physics type stuff.