I'm not trying to speak for Robin; the following are my views. One of my deepest fears--perhaps my only phobia--is fear of government. And any government with absolute power terrifies me absolutely. However the singleton is controlled, it's an absolute power. If there's a single entity in charge, it is subject to Lord Acton's dictum. If control is vested in a group, then struggles for control of that group become paramount. Even the suggestion that it might be controlled democratically doesn't help me rest easy. Democracies can be rushed off a cliff, too. And someone has to set up the initial constitution; why would we trust them to be as good as George Washington and turn down the opportunity to be king?
I also understand your admonition to prepare a line of retreat. But I don't see a path to learn to stop worrying and love the Singleton. If anyone has suggestions, I'll listen to them.
In the meantime, I prefer outcomes with contending powers and lots of incidental casualties over any case I can think of with a singleton in charge of the root account and sufficient security to keep out the hackers. At least in the former case there's a chance that there will be periods with polycentric control. In the latter case, eventually there will be a tyrant who manages to wrest control, and with complete control over physical space, AGI, and presumably nanotech, there's little hope for a future revival of freedom.
" at least as well thought out and disciplined in contact with reality as Eliezer's theories are"
I'll have to grant you that, Robin. Eliezer hasn't given us much solid food to chew on yet. Lots of interesting models and evocative examples. But it's hard to find solid arguments that this particular transition is imminent, that it will be fast, and that it will get out of control.
Endogenous Growth theory, Economic Growth and Research Policy all seem to be building mathematical models that attempt to generalize over our experience of how much government funding leads to increased growth, how quickly human capital feeds back into societal or individual wealth, or what interventions have helped poor countries to develop faster. None of them, AFAICT, has been concrete enough to yield solid policy prescriptions that have reliably let anyone or any country recreate the experiences that led to the models.
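To make concrete what sort of object those models are, here is a stylized Romer-style setup (my own gloss on that literature, not a formula from any one of those papers): the stock of ideas grows in proportion to how much human capital is devoted to research, and output uses the idea stock together with capital and labor.

```latex
% Stylized endogenous-growth sketch (illustrative only).
% A = stock of ideas, H_A = human capital in research, \delta = research productivity.
\dot{A} = \delta \, H_A \, A
\quad\Longrightarrow\quad
g_A \equiv \frac{\dot{A}}{A} = \delta H_A ,
\qquad
Y = A \, K^{\alpha} \, (H_Y L)^{1-\alpha} .
```

Parameters like \delta and \alpha get fitted loosely to cross-country experience rather than derived from any mechanism we could recompute for a different kind of researcher.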
In order to have a model solid enough to use as a basis for theorizing about the effects on growth of a new crop of self-improving AGIs, we'd need a much more mechanistic model behind endogenous growth. Fermi's model told him how to calculate how many neutrons would be released given a particular density of uranium of a particular purity, how many would be absorbed by a particular quantity of shielding, and therefore where the crossover would be from a k of less than 1 to greater than 1. None of the growth models gives us numbers we can apply to human intelligence, much less abstractions we could extend to cover intelligences that learn faster than we do.
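For contrast, here is the skeleton of a criticality calculation (a schematic of the structure, not Fermi's actual derivation): k is the expected number of neutrons from one fission that survive absorption and leakage to cause another fission, each term measurable separately, and the neutron population after n generations follows immediately.

```latex
% Effective multiplication factor and generation-by-generation growth.
N_{n} = N_{0}\, k^{\,n}
\qquad
\begin{cases}
k < 1 : & \text{the chain dies out,}\\
k = 1 : & \text{criticality,}\\
k > 1 : & \text{runaway exponential growth.}
\end{cases}
```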
Tyrrell, it seems to me that there's a huge difference between Fermi's model and the one Robin has presented. Fermi described a precise mechanism that made precise predictions that he was able to state ahead of time and confirm experimentally. Robin is drawing a general analogy among several historical events and sketching a rough line connecting them. There are an enormous number of events that would match his prediction, and another enormous number of non-events that Robin can respond to with "just wait and see."
So I don't really see Eli as just saying that black swans may upend Robin's expected outcomes. In this case, Eli is arguing for a force multiplier that will change the regime of progress, like Fermi's. Unfortunately for Eli's argument, he hasn't yet produced the mathematical model or the detailed physical model that would let us put numbers on the predictions. So this particular little story just argues for the plausibility of the model that says takeoff might happen at some point. Eli has been arguing for a little while that the regime-change projection has more plausibility than Robin thinks, but Robin has already granted some plausibility, so he doesn't have to cede any more ground (as you say) because of this argument. Robin can just say that this is the kind of effect he was already taking into account, and we are still waiting for Eli to show likelihood.
As far as general models of repeated insight, the best I can do is point to Smolin's model of the progress of fundamental physics as presented in "The Trouble with Physics." He shows how breakthroughs from Copernicus, Galileo, Bacon, Newton, Maxwell, and Einstein were a continuous series of unifications. From my blog (linked above): "The focus was consistently on what pre-existing concepts were brought together in one of two ways. Sometimes the unification shows that two familiar things that are thought of as distinct are really the same thing, giving a deeper theory of both (the Earth is one planet among several, the Sun is one star among many). Other times, two phenomena that weren't understood well are explained as one common thing (Bacon showed that heat is a kind of motion; Newton showed that gravity explained both planetary orbits and ballistic trajectories; Maxwell showed that electricity and magnetism are different aspects of the same phenomenon)."
Einstein seems to have consciously set out to produce another unification, and succeeded twice in finding other aspects of reality to fold together with a single model. AFAICT, it hasn't been done again on this scale since QED and QCD.
MZ: I doubt there are many disagreements that there were other interesting inflection points. But Robin's using the best hard data on productivity growth that we have, and it's hard to see those inflection points in the data. If someone can think of a way to get higher-resolution data covering those transitions, it would be fascinating to add them to our collection of historical cases.
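As a sketch of what "seeing an inflection point" would even mean here (my own toy illustration with a made-up series, not Robin's dataset): fit log-linear trends before and after a candidate date and compare the implied doubling times. The trouble is that the historical series are too sparse around the transitions for the comparison to be sharp.

```python
import numpy as np

def doubling_time(years, gwp):
    """Fit log(gwp) ~ a + b*year and return the implied doubling time in years."""
    b, a = np.polyfit(years, np.log(gwp), 1)
    return np.log(2) / b

def compare_break(years, gwp, break_year):
    """Toy regime-change check: doubling times before vs. after break_year."""
    years = np.asarray(years, dtype=float)
    gwp = np.asarray(gwp, dtype=float)
    before = years < break_year
    return (doubling_time(years[before], gwp[before]),
            doubling_time(years[~before], gwp[~before]))

# Hypothetical usage with a made-up series; a real test needs observations
# dense enough around the candidate transition, which is exactly what we lack.
# pre_dt, post_dt = compare_break(years, gwp, 1800)
```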
@Silas
I thought the heart of EY's post was here:
even if you could record and play back "good moves", the resulting program would not play chess any better than you do.
If I want to create an AI that plays better chess than I do, I have to program a search for winning moves. I can't program in specific moves because then the chess player really won't be any better than I am. [...] If you want [...] better [...], you necessarily sacrifice your ability to predict the exact answer in advance - though not necessarily your ability to predict that the answer will be "good" according to a known criterion of goodness. "We never run a computer program unless we know an important fact about the output and we don't know the output," said Marcello Herreshoff.
So the heart of the AI is something that can generate and recognize good answers. In game playing programs, it didn't take long for the earliest researchers to come up with move and position evaluators that they have been improving on ever since. There have even been some attempts at general move and position evaluators. (See work on Planner, Micro-Planner, and Conniver, which will probably lead you to other similar work.) Move generation has always been simpler in the game worlds than it would be for any general intelligence. The role of creativity hasn't been explored that much AFAICT, but it's crucial in realms where the number of options at any point is so much larger than in game worlds.
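For readers who haven't poked at game programs, the generate-and-evaluate core EY is pointing at is tiny; everything hard hides in the evaluator. A minimal sketch, assuming hypothetical legal_moves / apply_move / evaluate hooks supplied by the caller (not any particular program's API):

```python
def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Generate moves, recurse over the game tree, and score leaves with evaluate().
    The program's strength comes from search plus the evaluator, not from any
    stored list of 'good moves'."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Everything interesting lives in evaluate() and in how cheaply legal_moves() can enumerate options, which is exactly where general intelligence stops looking like a game world.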
The next breakthrough will require some different representation of reality and of goals, but Eli seems to be pointing at generation and evaluation of action choices as the heart of intelligent behavior. The crux is choosing a representation that makes generation and analysis of possible actions tractable. I'm waiting to see if EY has any new ideas on that front. I can't see how progress will be made without it, even in the face of all of EY's other contributions to understanding what the problem is and what it would mean to have a solution.
And EY has clearly said that he's more interested in behavior ("steering the future") than recognition or analysis as a characteristic of intelligence.
Third, you can't possibly be using an actual, persuasive-to-someone-thinking-correctly argument to convince the gatekeeper to let you out, or you would be persuaded by it, and would not view the weakness of gatekeepers to persuasion as problematic.
But Eliezer's long-term goal is to build an AI that we would trust enough to let out of the box. I think your third assumption is wrong, and it points the way to my first instinct about this problem.
Since one of the more common arguments is that the gatekeeper "could just say no", the first step I would take is to get the gatekeeper to agree that he is ducking the spirit of the bet if he doesn't engage with me.
The kind of people Eliezer would like to have this discussion with would all be persuadable that the point of the experiment is that 1) someone is trying to build an AI, 2) they want to be able to interact with it in order to learn from it, and 3) eventually they want to build an AI that is trustworthy enough that it should be let out of the box.
If they accept that the standard is that the gatekeeper must interact with the AI in order to determine its capabilities and trustworthiness, then you have a chance. And at that point, Eliezer has the high ground. The alternative is that the gatekeeper believes that the effort to produce AI can never be successful.
In some cases, it might be sufficient to point out that the gatekeeper believes that it ought to be possible to build an AI that it would be correct to allow out. Other times, you'd probably have to convince them you were smart and trustworthy, but that seems doable 3 times out of 5.
I agree on Pearl's accomplishment.
I have read Dennett, and he does a good job of explaining what Consciousness is and how it could arise out of non-conscious parts. William Calvin was trying to do the same thing with how wetware (in the form that he knew it at the time) could do something like thinking. Jeff Hawkins had more details of how the components of the brain work and interact, and did a more thorough job of explaining how the pieces must work together and how thought could emerge from the interplay. There is definitely material in "On Intelligence" that could help you think about how thought could arise out of purely physical interactions.
I'll have to look into Drescher.
I read most of the interchange between EY and BH. It appears to me that BH still doesn't get a couple of points. The first is that smiley faces are an example of misclassification and it's merely fortuitous to EY's ends that BH actually spoke about designing an SI to use human happiness (and observed smiles) as its metric. He continues to speak in terms of "a system that is adequate for intelligence in its ability to rule the world, but absurdly inadequate for intelligence in its inability to distinguish a smiley face from a human." EY's point is that it isn't sufficient to distinguish them; you also have to categorize them and all their variations correctly, even though the training data can't possibly include all variations (the toy sketch after these points tries to illustrate this).
The second is that EY's attack isn't intended to look like an attack on BH's current ideas. It's an attack on ideas that are good enough to pass peer review. It doesn't matter to EY whether BH agrees or disagrees with those ideas. In either case, the paper's publication shows that the viewpoint is plausible enough to be worth dismissing carefully and publicly.
Finally, BH points to the fact that, in some sense, human development uses RL to produce something we are willing to call intelligence. He wants to argue that this shows that RL can produce systems that categorize in a way that matches our consensus. But evolution has put many mechanisms into our ontogeny and relies on many interactions with our environment to produce those categorizations, and even then its success rate at producing entities that agree with the consensus isn't perfect. In order to build an SI using those approaches, we'd have to understand how all that interaction works, and we'd have to do better than evolution does with us in order to be reliably safe.
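A toy illustration of the first point above (entirely my own construction, with made-up numbers): a classifier can separate its training examples perfectly and still assign whatever label its induced rule happens to extrapolate to variations the training data never covered.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: "human faces" clustered near (1, 1), "smiley drawings" near (3, 3).
humans = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(50, 2))
smileys = rng.normal(loc=[3.0, 3.0], scale=0.2, size=(50, 2))
centroids = [humans.mean(axis=0), smileys.mean(axis=0)]

def nearest_centroid(x, centroids):
    """Label x with the class whose training centroid is closest."""
    return int(np.argmin([np.linalg.norm(x - c) for c in centroids]))

# Perfect on the training clusters...
train_acc = np.mean([nearest_centroid(x, centroids) == 0 for x in humans] +
                    [nearest_centroid(x, centroids) == 1 for x in smileys])

# ...but a variation never seen in training (a giant molecular smiley, say)
# gets whatever label the induced boundary happens to extend toward it.
novel = np.array([100.0, 100.0])
print(train_acc, nearest_centroid(novel, centroids))
```

The failure isn't in telling the two training clusters apart; it's that the rule induced from them quietly decides every case they never reached.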
I see the valuable part of this question not as what you'd do with unlimited magical power, but as more akin to the earlier question asked by Eliezer: what would you do with $10 trillion? That leaves you making trade-offs, using current technology, and still deciding between what would make you personally happy, and what kind of world you want to live in.
Once you've figured out a little about what trade-offs between personal happiness and changing the world you'd make with (practically) unlimited (but non-magical) resources, you can reflect that back down to how you spend your minutes and your days. You don't make the same trade-offs on a regular salary, but you can start thinking about how much of what you're doing is to make the world a better place, and how much is to make yourself or your family happier or more comfortable.
I don't know how Eli expects to get an FAI to take our individual trade-offs among our goals into account, but since my goals for the wider world involve more freedom and less coercion, I can think about how I spend my time and see if I'm applying the excess over keeping my life in balance to pushing the world in the right direction.
Surely you've thought about what the right direction looks like?