From "Friendly Teams" ...
Small teams have at times suddenly acquired disproportionate power, and I’m sure their associates who anticipated this possibility used the usual human ways to consider that team’s “friendliness.” But I can’t recall a time when such sudden small team power came from an UberTool scenario of rapidly mutually improving tools.
In August 1945, an UberTool was demonstrated twice in Japan.
Relatedly, there have been numerous instances of individual humans taking over entire national governments by exploiting high-leverage, usually unethical, opportunities. A typical case involves a military general staging a coup or an elected leader legislating unlimited power unto himself. None of these instances look anything like firms competing for resources. Instead, we have single actors who are intelligent and opportunistic, unethical or morally atypical, and risk-tolerant enough to accept the consequences of a failed coup.
A UFAI looks much more like a dictator than a firm.
All of those require the at least implicit cooperation of a lot of other people, e.g., the general's army.
Imagine you're a small but growing firm. You can choose whether to reinvest your profits in growth or in productivity.
There are various factors that might affect which of these is the better buy.
So I imagine that, for most firms, the ratio of growth investment to productivity investment lies within a particular range. We expect the UberTool caricature to fail because it invests far too heavily in productivity instead of growth (and it's just too small to do the necessary research).
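To make that tradeoff concrete, here's a toy simulation. Everything in it is invented for illustration rather than taken from the post: the 20% margin, the particular split fractions, and the assumption that expanding capacity has roughly linear returns while tool improvement has diminishing returns. With these made-up numbers, a firm that pours nearly all of its profit into tools ends up with less revenue than one that mostly buys growth, and a moderate mix does best; different assumed returns could of course reverse that ordering.

```python
import math

# Toy model of a small firm splitting reinvested profits between "growth"
# (more capacity) and "productivity" (better tools). All parameters and
# functional forms are made up purely for illustration: capacity expansion
# is assumed to have roughly linear returns, while tool improvement is
# assumed to have diminishing returns (log1p).

def simulate(productivity_share, periods=20, margin=0.2):
    size = 1.0          # capacity: how much the firm can produce and sell
    productivity = 1.0  # revenue per unit of capacity
    for _ in range(periods):
        profit = margin * size * productivity
        growth_spend = (1 - productivity_share) * profit
        tool_spend = productivity_share * profit
        size += growth_spend                          # linear returns to scaling up
        productivity += 0.5 * math.log1p(tool_spend)  # diminishing returns to tools
    return size * productivity  # final revenue as a crude score

for share in (0.0, 0.1, 0.3, 0.9):
    print(f"share of profit spent on tools = {share:.1f}: "
          f"final revenue = {simulate(share):.1f}")
```

The interesting knob is the returns function for tool spending: if you make productivity improvements multiplicative rather than additive with diminishing returns, the tools-heavy strategy starts to look much more like the FOOM story.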
So how might an AGI be different from UberTool?
What other factors would be relevant?
Just realised I've been blurring the distinction between tools and intangibles. If a firm wants to increase its efficiency, it could either design better chairs for its employees or design a better recruitment process, and I've been treating these as the same kind of thing. I think the main difference is that intangibles are harder to buy and sell than tools are, and this may be relevant.
I've held off on posting the next rerun for a few days in case anybody else suggested something new in the discussion of how to run the AI FOOM Debate. After looking at the comments (with associated votes), and after looking through the sequence myself, I've decided to rerun one post a day. Most of these posts will be from Robin Hanson and Eliezer Yudkowsky, but there are a few posts from Carl Shulman and James Miller that will be included as well. This process will start tomorrow with "Abstraction, Not Analogy" by Robin Hanson.
Meanwhile, there are several posts written by Robin Hanson in the week or so leading up to the debate that provide a bit of background on his perspective, which I have linked to below. They're all fairly short and relatively straightforward, so I don't think they each merit a full-blown individual discussion.
Fund UberTool?
Engelbart As UberTool?
Friendly Teams
Friendliness Factors
Setting the Stage