I'm not trying to speak for Robin; the following are my views. One of my deepest fears--perhaps my only phobia--is fear of government. And any government with absolute power terrifies me absolutely. However the singleton is controlled, it's an absolute power. If there's a single entity in charge, it is subject to Lord Acton's dictum. If control is vested in a group, then struggles for control of that become paramount. Even the suggestion that it might be controlled democratically doesn't help me to rest easy. Democracies can be rushed off a cliff, t...
" at least as well thought out and disciplined in contact with reality as Eliezer's theories are"
I'll have to grant you that, Robin. Eliezer hasn't given us much solid food to chew on yet. Lots of interesting models and evocative examples. But it's hard to find solid arguments that this particular transition is imminent, that it will be fast, and that it will get out of control.
Endogenous Growth theory, Economic Growth and Research Policy all seem to be building mathematical models that attempt to generalize over our experience of how much government funding leads to increased growth, how quickly human capital feeds back into societal or individual wealth, or what interventions have helped poor countries to develop faster. None of them, AFAICT, has been concrete enough to yield solid policy prescriptions that have reliably enabled any person or country to recreate the experiences that led to the models.
In order to have a model...
Tyrrell, it seems to me that there's a huge difference between Fermi's model and the one Robin has presented. Fermi described a precise mechanism that made precise predictions that Fermi was able to state ahead of time and confirm experimentally. Robin is drawing a general analogy between several historical events and sketching a rough line connecting them. There are an enormous number of events that would match his prediction, and another enormous number of non-events that Robin can respond to with "just wait and see."
So I don't really see Eli...
MZ: I doubt many people disagree that there were other interesting inflection points. But Robin is using the best hard data on productivity growth that we have, and it's hard to see those inflection points in the data. If someone can think of a way to get higher-resolution data covering those transitions, it would be fascinating to add them to our collection of historical cases.
@Silas
I thought the heart of EY's post was here:
even if you could record and play back "good moves", the resulting program would not play chess any better than you do.
If I want to create an AI that plays better chess than I do, I have to program a search for winning moves. I can't program in specific moves because then the chess player really won't be any better than I am. [...] If you want [...] better [...], you necessarily sacrifice your ability to predict the exact answer in advance - though not necessarily your ability to predict that the...
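The distinction in that quote can be made concrete with a toy sketch (mine, not Eliezer's): instead of recording specific moves, you program a search for winning moves, and the search can then find moves its author never specified. Chess is too big for a few lines, so this illustrative example uses a Nim-like game (players alternately take 1-3 stones; whoever takes the last stone wins), with an exhaustive search standing in for the chess engine's:

```python
def best_move(stones):
    """Exhaustively search for a winning move.

    Returns (move, wins) where `wins` says whether the player to move
    can force a win. No specific move is hard-coded anywhere; the
    program "plays better than its author" purely via search.
    """
    best = (1, False)
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:
            # Taking the last stone wins outright.
            return (take, True)
        # We win with this move iff the opponent, facing the remainder,
        # has no winning reply.
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:
            return (take, True)
    return best  # every move loses against correct play
```

The programmer can predict *that* the search will find a forced win whenever one exists, without being able to predict in advance *which* move it will pick from a large pile of stones, which is exactly the trade the quote describes.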
Third, you can't possibly be using an actual, persuasive-to-someone-thinking-correctly argument to convince the gatekeeper to let you out, or you would be persuaded by it, and would not view the weakness of gatekeepers to persuasion as problematic.
But Eliezer's long-term goal is to build an AI that we would trust enough to let out of the box. I think your third assumption is wrong, and it points the way to my first instinct about this problem.
Since one of the more common arguments is that the gatekeeper "could just say no", the first step I w...
I agree on Pearl's accomplishment.
I have read Dennett, and he does a good job of explaining what consciousness is and how it could arise out of non-conscious parts. William Calvin was trying to do the same thing with how wetware (in the form that he knew it at the time) could do something like thinking. Jeff Hawkins had more details of how the components of the brain work and interact, and did a more thorough job of explaining how the pieces must work together and how thought could emerge from the interplay. There is definitely material in "On Intelligence" that could help you think about how thought could arise out of purely physical interactions.
I'll have to look into Drescher.
I read most of the interchange between EY and BH. It appears to me that BH still doesn't get a couple of points. The first is that smiley faces are an example of misclassification and it's merely fortuitous to EY's ends that BH actually spoke about designing an SI to use human happiness (and observed smiles) as its metric. He continues to speak in terms of "a system that is adequate for intelligence in its ability to rule the world, but absurdly inadequate for intelligence in its inability to distinguish a smiley face from a human." EY's poin...
People nearer the front think that they have the moral right to get off earlier than people behind them, regardless of whether they got their seat through choice or chance. People also like to get off with the other members of their party.
So people nearer the front will defect from this solution even though all but the first half dozen rows would probably be better off cooperating. Once all the people in front of passenger X have gotten off, passenger X will defect as well.
I'm seldom in a hurry to get off the plane (I know there's just more waiting once ...
Contrary to your usual practice of including voluminous relevant links, you didn't point to anything specific for Judea Pearl. Let's give this link for his book Causality, which is where people will find the graphical calculus you rely on.
You've mentioned Pearl before, but haven't blogged the details. Do you expect to digest Pearl's graphical approach into something OB-readers will be able to understand in one sitting at some point? That would be a real service, imho.
I've traveled in Europe, and seen remnants of the Roman roads, walls and viaducts. One of the .sigs I use most often is this:
C. J. Cherryh, "Invader", on why we visit very old buildings: "A sense of age, of profound truths. Respect for something hands made, that's stood through storms and wars and time. It persuades us that things we do may last and matter."
Thinking about your declaration "If you run around inspecting your foundations, I expect you to actually improve them", I now see that I've been using "PCR" to refer to the reasoning trick that Bartley introduced (use all the tools at your disposal to evaluate your foundational approaches) to make Pan-Critical Rationalism an improvement over Popper's Critical Rationalism. But, for Bartley, PCR was just a better foundation for the rest of Popper's epistemology, and you would replace that epistemology with something more sophisticated. ...
Hurrah! Eliezer says that Bayesian reasoning bottoms out in Pan-Critical Rationalism.
re: "Why do you believe what you believe?"
I've always said that epistemology isn't "the Science of Knowledge," as it's often called; instead, it's the answer to the question "How do you decide what to believe?" I think the emphasis on process is more useful than your phrasing's focus on justification.
BTW, I don't disagree with your stress on Bayesian reasoning as the process for figuring out what's true in the world. But Bartley really did ...
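That "process for figuring out what's true" can be shown in miniature (the numbers here are mine and purely illustrative): start with a prior, observe evidence, and let Bayes' theorem dictate the new degree of belief.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem: prior odds reweighted by likelihoods."""
    joint = prior * p_e_given_h
    return joint / (joint + (1 - prior) * p_e_given_not_h)

# Illustrative numbers: prior belief 0.3 in H; the evidence is 4x more
# likely if H is true (0.8 vs 0.2). The update is mechanical, not a
# matter of justification.
p = posterior(0.3, 0.8, 0.2)  # rises to roughly 0.63
```

The point of the sketch is the one above: what you end up believing is the output of a fixed procedure applied to priors and evidence, not of a search for authorities to justify a conclusion.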
Patrick, that was my interpretation. I had time to come up with one proposal. (I'm not able to commit full-time to being a student of bayescraft at this point.)
Z. M. Davis, thanks for the pointer.
There's a particular kind of groupthink peculiar to scholarly fields. In my review of "The Trouble with Physics", I pointed to two (other) specific examples of recent advances that were stymied for long periods of time by scholarly groupthink. There are many others.
But I think Eli has hit on another important mechanism. Few learners these days are expected to rediscover important concepts, so we get no training in this ability. I don't see how turning scientific knowledge into a body of secrets will address the problem, but it's a valuable...
"... the overwhelming majority might as well belong to a religious cargo cult based on the notion that self-modifying AI will have magical powers."
"Maybe you can admire someone who directly thinks you're a crackpot, but I can't."
I have a high regard for most of the extropians (a subset of transhumanists, I think) I know well, but that doesn't make me believe that the Egan line is more than hyperbole at most. I don't take it as a slur against anyone whose name I know. I've certainly seen evidence that the majority wouldn't be able to distinguish...
Eliezer, that was just beautiful.
"Rest assured that you are not holding the mere appearance of a banana. There really is a banana there, not just a collection of atoms."
In some companies I've worked for, we've found ways of running meetings that encouraged contributing information that is considered an attack in many other companies. The particular context was code reviews, but we did them often enough that the same attitude could be seen in other design discussions. The attitude we taught the code's presenter to have was appreciation for the comments, suggestions, and actual bugs found. The catechism we used to close code reviews was that someone would ask the presenter whether the meeting had been valuable, and the a...
A few of you touched on the point I got out of this, but no one explained it very well. In the first koan, Ougi says two things. The clearer one is tangential to rationality, but important for self-doubting cultists. "You are like a swordsman who keeps glancing away to see if anyone might be laughing at him".
The more important point was that the teachings are valuable if they are useful. (This is applicable to the sword fighter because allowing yourself to be distracted is an immediate danger.)
The importance of the parable about hammers doesn...
W. W. Bartley's "The Retreat to Commitment" is the best book on epistemology, bar none, in my opinion. He fixes a small bug in Popper's Critical Rationalism, suggesting that even the epistemic approach itself should be subject to criticism, and produces Pan-Critical Rationalism (hence my blog's title: pancrit.org). He then proceeds to attack PCR from every direction he can think of.
Extreme Bayesianism may be a more modern incarnation of the approach, but the history of rationalism and the description of how to evaluate your rationality is truly valuable, and hasn't been replicated in the current context.
I'm not sure the phrase "closed access" is a fair epithet to use against mainstream scientific journals. Even if they charge $20,000/year, most scientists have access to them via their institutional library, and there aren't many scientists who wouldn't send you a copy of their article if you asked for it. In many fields, the articles are available on the web after they appear in the journals. And if none of those apply to a particular article, you can probably visit a university library and read it there.
I'm not trying to deny that open acces...
"I should have paid more attention to that sensation of still feels a little forced."
The force that you would have had to counter was the impetus to be polite. In order to boldly follow your models, you would have had to tell the person on the other end of the chat that you didn't believe his friend. You could have less boldly held your tongue, but that wouldn't have satisfied your drive to understand what was going on. Perhaps a compromise action would have been to point out the unlikelihood, (which you did: "they'd have hauled him off i...
Not necessarily.
You can assume the paramedics did not follow the proper procedure, and that his friend ought to go to the emergency room himself to verify that he is OK. People do make mistakes.
The paramedics are potentially unreliable as well, though given the litigious nature of our society I would expect them to be extremely diligent about taking people to the emergency room, which would still cast doubt on the friend's story.
Still, if you want to be polite, just say "if you are concerned, you should go to the emergency room anyway" and keep your doubts about the man's veracity to yourself. No doubt the truth would have come out at that point as well.
On #3, I think it's more relevant to point out that many adults believe that God can make it alright to kill someone. What children believe about God and theft is a pale watered-down imitation of this.
I see the valuable part of this question not as what you'd do with unlimited magical power, but as more akin to the earlier question asked by Eliezer: what would you do with $10 trillion? That leaves you making trade-offs, using current technology, and still deciding between what would make you personally happy, and what kind of world you want to live in.
Once you've figured out a little about what trade-offs between personal happiness and changing the world you'd make with (practically) unlimited (but non-magical) resources, you can reflect that back down...