I'm not trying to speak for Robin; the following are my views. One of my deepest fears--perhaps my only phobia--is fear of government. And any government with absolute power terrifies me absolutely. However the singleton is controlled, it's an absolute power. If there's a single entity in charge, it is subject to Lord Acton's dictum. If control is vested in a group, then struggles for control of that group become paramount. Even the suggestion that it might be controlled democratically doesn't help me to rest easy. Democracies can be rushed off a cliff, t...
" at least as well thought out and disciplined in contact with reality as Eliezer's theories are"
I'll have to grant you that, Robin. Eliezer hasn't given us much solid food to chew on yet. Lots of interesting models and evocative examples. But it's hard to find solid arguments that this particular transition is imminent, that it will be fast, and that it will get out of control.
Endogenous Growth theory, Economic Growth and Research Policy all seem to be building mathematical models that attempt to generalize over our experience of how much government funding leads to increased growth, how quickly human capital feeds back into societal or individual wealth, or what interventions have helped poor countries to develop faster. None of them, AFAICT, have been concrete enough to lead to solid policy prescriptions that have reliably enabled anyone or any country to recreate the experiences that led to the models.
In order to have a model...
Tyrrell, it seems to me that there's a huge difference between Fermi's model and the one Robin has presented. Fermi described a precise mechanism that made precise predictions that Fermi was able to state ahead of time and confirm experimentally. Robin is drawing a general analogy among several historical events and sketching a rough line connecting them. There are an enormous number of events that would match his prediction, and another enormous number of non-events that Robin can respond to with "just wait and see."
So I don't really see Eli...
MZ: I doubt there are many disagreements that there were other interesting inflection points. But Robin's using the best hard data on productivity growth that we have, and it's hard to see those inflection points in the data. If someone can think of a way to get higher-resolution data covering those transitions, it would be fascinating to add them to our collection of historical cases.
@Silas
I thought the heart of EY's post was here:
even if you could record and play back "good moves", the resulting program would not play chess any better than you do.
If I want to create an AI that plays better chess than I do, I have to program a search for winning moves. I can't program in specific moves because then the chess player really won't be any better than I am. [...] If you want [...] better [...], you necessarily sacrifice your ability to predict the exact answer in advance - though not necessarily your ability to predict that the...
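EY's point (that you must program the search, not the moves, to get play better than your own) can be sketched with a toy game instead of chess. This is a hypothetical illustration, assuming simple Nim (take 1-3 stones; taking the last stone wins) in place of chess; the function names are mine:

```python
def wins(stones):
    """True if the player to move can force a win.

    Nothing here encodes specific 'good moves'; the program only
    knows the rules and searches the game tree itself."""
    return any(take == stones or not wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Search for a move that leaves the opponent in a losing
    position. The programmer specified the search, not the answer."""
    fallback = None
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones or not wins(stones - take):
            return take  # winning move found by search
        fallback = take
    return fallback  # no winning move exists; play anything legal
```

The search rediscovers the standard result that positions with a multiple of 4 stones are lost for the player to move, even though that fact appears nowhere in the code. That's the contrast with a playback of recorded moves, which could never exceed the recording.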
Third, you can't possibly be using an actual, persuasive-to-someone-thinking-correctly argument to convince the gatekeeper to let you out, or you would be persuaded by it, and would not view the weakness of gatekeepers to persuasion as problematic.
But Eliezer's long-term goal is to build an AI that we would trust enough to let out of the box. I think your third assumption is wrong, and it points the way to my first instinct about this problem.
Since one of the more common arguments is that the gatekeeper "could just say no", the first step I w...
I agree on Pearl's accomplishment.
I have read Dennett, and he does a good job of explaining what consciousness is and how it could arise out of non-conscious parts. William Calvin was trying to do the same thing with how wetware (in the form that he knew it at the time) could do something like thinking. Jeff Hawkins had more details of how the components of the brain work and interact, and did a more thorough job of explaining how the pieces must work together and how thought could emerge from the interplay. There is definitely material in "On Intelligence" that could help you think about how thought could arise out of purely physical interactions.
I'll have to look into Drescher.
I read most of the interchange between EY and BH. It appears to me that BH still doesn't get a couple of points. The first is that smiley faces are an example of misclassification and it's merely fortuitous to EY's ends that BH actually spoke about designing an SI to use human happiness (and observed smiles) as its metric. He continues to speak in terms of "a system that is adequate for intelligence in its ability to rule the world, but absurdly inadequate for intelligence in its inability to distinguish a smiley face from a human." EY's poin...
People nearer the front think that they have the moral right to get off earlier than people behind them, regardless of whether they got their seat through choice or chance. People also like to get off with the other members of their party.
So people nearer the front will defect from this solution even though all but the first half dozen rows would probably be better off cooperating. Once all the people in front of passenger X have gotten off, passenger X will defect as well.
I'm seldom in a hurry to get off the plane (I know there's just more waiting once ...
Contrary to your usual practice of including voluminous relevant links, you didn't point to anything specific for Judea Pearl. Let's give this link for his book Causality, which is where people will find the graphical calculus you rely on.
You've mentioned Pearl before, but haven't blogged the details. Do you expect to digest Pearl's graphical approach into something OB-readers will be able to understand in one sitting at some point? That would be a real service, imho.
I've traveled in Europe, and seen remnants of the Roman roads, walls and viaducts. One of the .sigs I use most often is this:
C. J. Cherryh, "Invader", on why we visit very old buildings: "A sense of age, of profound truths. Respect for something hands made, that's stood through storms and wars and time. It persuades us that things we do may last and matter."
Thinking about your declaration "If you run around inspecting your foundations, I expect you to actually improve them", I now see that I've been using "PCR" to refer to the reasoning trick that Bartley introduced (use all the tools at your disposal to evaluate your foundational approaches) to make Pan-Critical Rationalism an improvement over Popper's Critical Rationalism. But, for Bartley, PCR was just a better foundation for the rest of Popper's epistemology, and you would replace that epistemology with something more sophisticated. ...
Hurrah! Eliezer says that Bayesian reasoning bottoms out in Pan-Critical Rationalism.
re: "Why do you believe what you believe?"
I've always said that epistemology isn't "the Science of Knowledge" as it's often called; instead, it's the answer to the question "How do you decide what to believe?" I think the emphasis on process is more useful than your phrasing's focus on justification.
BTW, I don't disagree with your stress on Bayesian reasoning as the process for figuring out what's true in the world. But Bartley really did ...
Patrick, that was my interpretation. I had time to come up with one proposal. (I'm not able to commit full-time to being a student of bayescraft at this point.)
Z. M. Davis, thanks for the pointer.
There's a particular kind of groupthink peculiar to scholarly fields. In my review of "The Trouble with Physics", I pointed to two (other) specific examples of recent advances that were stymied for long periods of time by scholarly groupthink. There are many others.
But I think Eli has hit on another important mechanism. Few learners these days are expected to rediscover important concepts, so we get no training in this ability. I don't see how turning scientific knowledge into a body of secrets will address the problem, but it's a valuable...
"... the overwhelming majority might as well belong to a religious cargo cult based on the notion that self-modifying AI will have magical powers."
"Maybe you can admire someone who directly thinks you're a crackpot, but I can't."
I have a high regard for most extropians (a subset of Transhumans, I think) I know well, but that doesn't make me believe that the Egan line is more than hyperbole at most. I don't take it as a slur against anyone whose name I know. I've certainly seen evidence that the majority wouldn't be able to distinguish...
I see the valuable part of this question not as what you'd do with unlimited magical power, but as more akin to the earlier question asked by Eliezer: what would you do with $10 trillion? That leaves you making trade-offs, using current technology, and still deciding between what would make you personally happy, and what kind of world you want to live in.
Once you've figured out a little about what trade-offs between personal happiness and changing the world you'd make with (practically) unlimited (but non-magical) resources, you can reflect that back down... (read more)