Chris_Hibbert
Chris_Hibbert has not written any posts yet.

I'm not trying to speak for Robin; the following are my views. One of my deepest fears--perhaps my only phobia--is fear of government. And any government with absolute power terrifies me absolutely. However the singleton is controlled, it's an absolute power. If there's a single entity in charge, it is subject to Lord Acton's dictum. If control is vested in a group, then struggles for control of that become paramount. Even the suggestion that it might be controlled democratically doesn't help me to rest easy. Democracies can be rushed off a cliff, too. And someone has to set up the initial constitution; why would... (read more)
" at least as well thought out and disciplined in contact with reality as Eliezer's theories are"
I'll have to grant you that, Robin. Eliezer hasn't given us much solid food to chew on yet. Lots of interesting models and evocative examples. But it's hard to find solid arguments that this particular transition is imminent, that it will be fast, and that it will get out of control.
Endogenous Growth theory, Economic Growth and Research Policy all seem to be building mathematical models that attempt to generalize over our experience of how much government funding leads to increased growth, how quickly human capital feeds back into societal or individual wealth, or what interventions have helped poor countries develop faster. None of them, AFAICT, has been concrete enough to yield solid policy prescriptions that have reliably let any person or country recreate the experiences that led to the models.
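For concreteness, the models in that literature are usually built around a knowledge-production function in the style of Romer (1990) and Jones (1995). A stylized form (my gloss, not an equation pulled from the particular papers Robin cited) is

\dot{A} = \delta \, H_A^{\lambda} A^{\phi}, \quad 0 < \lambda \le 1, \; \phi \le 1,

where A is the stock of ideas, H_A is human capital devoted to research, and \delta, \lambda, \phi are parameters that have to be estimated from exactly the kind of historical experience described above. The policy prescriptions stand or fall on those estimates.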
In order to have a model solid enough to use as a basis for theorizing about the effects on growth of a new crop of self-improving AGIs,... (read more)
Tyrrell, it seems to me that there's a huge difference between Fermi's model and the one Robin has presented. Fermi described a precise mechanism that made precise predictions that Fermi was able to state ahead of time and confirm experimentally. Robin is drawing a general analogy among several historical events and tracing a rough line connecting them. There are an enormous number of events that would match his prediction, and another enormous number of non-events that Robin can respond to with "just wait and see."
So I don't really see Eli as just saying that black swans may upend Robin's expected outcomes. In this case, Eli's side of the... (read more)
MZ: I doubt there's much disagreement that there were other interesting inflection points. But Robin's using the best hard data on productivity growth that we have, and it's hard to see those inflection points in the data. If someone can think of a way to get higher-resolution data covering those transitions, it would be fascinating to add them to our collection of historical cases.
@Silas
I thought the heart of EY's post was here:
even if you could record and play back "good moves", the resulting program would not play chess any better than you do.
If I want to create an AI that plays better chess than I do, I have to program a search for winning moves. I can't program in specific moves because then the chess player really won't be any better than I am. [...] If you want [...] better [...], you necessarily sacrifice your ability to predict the exact answer in advance - though not necessarily your ability to predict that the answer will be "good" according to a known criterion of goodness. "We never run a computer... (read more)
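To make that concrete, here's a minimal sketch (mine, not EY's) of what "programming a search for winning moves" looks like. Chess is too big for a few lines, so a toy subtraction game stands in; the shape is the same: the programmer writes down the winning criterion and a search over moves, never the moves themselves.

```python
# Negamax search over a toy game: players alternately take 1-3 stones
# from a pile; whoever takes the last stone wins. Illustrative only --
# the programmer specifies the goal and the search, not the moves.

def moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def negamax(stones):
    """Score for the player about to move: +1 if they can force a win,
    -1 if they cannot (the opponent just took the last stone)."""
    if stones == 0:
        return -1
    return max(-negamax(stones - n) for n in moves(stones))

def best_move(stones):
    """Pick the move whose resulting position is worst for the opponent."""
    return max(moves(stones), key=lambda n: -negamax(stones - n))

if __name__ == "__main__":
    print(best_move(10))  # prints 2 -- leaves a multiple of 4, a losing position
```

The chess version differs only in scale: a richer move generator, a heuristic evaluation at a depth cutoff, and pruning. But it is still a search for moves that score well by a known criterion, not a playback of recorded moves.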
Third, you can't possibly be using an actual, persuasive-to-someone-thinking-correctly argument to convince the gatekeeper to let you out, or you would be persuaded by it, and would not view the weakness of gatekeepers to persuasion as problematic.
But Eliezer's long-term goal is to build an AI that we would trust enough to let out of the box. I think your third assumption is wrong, and it points the way to my first instinct about this problem.
Since one of the more common arguments is that the gatekeeper "could just say no", the first step I would take is to get the gatekeeper to agree that he is ducking the spirit of the bet if he doesn't engage... (read more)
I agree on Pearl's accomplishment.
I have read Dennett, and he does a good job of explaining what consciousness is and how it could arise out of non-conscious parts. William Calvin was trying to do the same thing for how wetware (in the form that he knew it at the time) could do something like thinking. Jeff Hawkins had more details of how the components of the brain work and interact, and did a more thorough job of explaining how the pieces must work together and how thought could emerge from the interplay. There is definitely material in "On Intelligence" that could help you think about how thought could arise out of purely physical interactions.
I'll have to look into Drescher.
I read most of the interchange between EY and BH. It appears to me that BH still doesn't get a couple of points. The first is that smiley faces are an example of misclassification, and it's merely fortuitous to EY's ends that BH actually spoke about designing an SI to use human happiness (and observed smiles) as its metric. He continues to speak in terms of "a system that is adequate for intelligence in its ability to rule the world, but absurdly inadequate for intelligence in its inability to distinguish a smiley face from a human." EY's point is that it isn't sufficient to distinguish them; you have... (read more)
I see the valuable part of this question not as what you'd do with unlimited magical power, but as more akin to the earlier question asked by Eliezer: what would you do with $10 trillion? That leaves you making trade-offs, using current technology, and still deciding between what would make you personally happy, and what kind of world you want to live in.
Once you've figured out a little about what trade-offs between personal happiness and changing the world you'd make with (practically) unlimited (but non-magical) resources, you can reflect that back down to how you spend your minutes and your days. You don't make the same trade-offs on a regular... (read more)