Eliezer_Yudkowsky comments on Update on Kim Suozzi (cancer patient in want of cryonics) - Less Wrong Discussion
This is a rather important point. How do we get more info on it? You're the first halfway-sane person I've ever heard put the median at 2100.
From my perspective, if you told me that AGI had in fact been developed in 2120 (a fair way past your median) despite the lack of any great catastrophes, I would update in the direction of believing all of the following:
It seems like I'd have to execute a lot of updates. How do we resolve this?
Well, atom-size features are scheduled to come along on that time-scale, and they are believed to mark the end of scaling feature sizes downward. That scaling has been an essential part of Moore's law all along. Without it, one has to instead do things like use more efficient materials at the same size, new architectural designs, new cooling, etc. That's a big change in the underlying mechanisms of electronics improvement, and a pretty reasonable place for the trend to go awry, although it also wouldn't be surprising if it kept going for some time longer.
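To make "that time-scale" concrete, here is a back-of-the-envelope sketch; the starting node, shrink factor, and atomic floor are my own rough assumptions, not figures from the thread:

```python
import math

# Rough sketch: when does straight-line feature-size scaling hit atoms?
# Assumptions: ~22 nm leading-edge node circa 2012, ~0.7x linear shrink
# per 2-year node, and roughly one silicon lattice constant (~0.5 nm)
# as a hard floor. All figures approximate.

feature_nm = 22.0      # leading-edge node circa 2012 (assumption)
shrink_per_node = 0.7  # linear scale factor per node (assumption)
years_per_node = 2.0
atomic_floor_nm = 0.5  # ~one silicon lattice constant (assumption)

nodes = math.log(atomic_floor_nm / feature_nm) / math.log(shrink_per_node)
print(f"~{nodes:.0f} more nodes, i.e. around {2012 + nodes * years_per_node:.0f}")
# -> roughly 11 more nodes, landing in the early-to-mid 2030s, which is
#    why the end of feature-size scaling arrives well before century's end.
```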
The so-called "Great Stagnation" isn't actually a stagnation; it's mainly compounding growth at a slower rate. How much of the remaining distance to AGI do you think was covered in 2002-2012? In 1992-2002?
Haven't they been so far?
In any case, nanotechnology can't shrink feature sizes below the atomic scale, and that limit is already coming up via conventional technology. Also, if the world is one where computation is energy-limited, denser computers that use more energy in a smaller space aren't obviously that helpful.
Could you give some examples of what you had in mind?
Well, there is demographic decline: rich-country populations are shrinking. China's is shrinking even faster, although bringing its youth into the innovation sectors may help a lot.
Say biotech methods for genetic enhancement are developed in the next 10-20 years, heavily implemented 10 years later, with the kids hitting their productive prime 20 years after that. Then research goes faster, but how much faster? That's a fast biotech trajectory to enhanced intelligence, but the fruits mostly fall in the last quarter of the century.
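The arithmetic behind that trajectory, using the comment's own ranges (the 2012 start year is my assumption):

```python
# Timeline arithmetic from the comment above, with its stated ranges.
develop = (10, 20)   # years until genetic-enhancement methods exist
deploy = 10          # further years until heavy implementation
mature = 20          # further years until those kids hit their prime
start = 2012         # assumed start year

for dev in develop:
    prime = start + dev + deploy + mature
    print(f"develop in {dev}y -> first enhanced cohort in its prime ~{prime}")
# -> ~2052-2062 for the *first* enhanced cohort, so the bulk of the
#    accumulated speed-up lands in the last quarter of the century.
```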
See 15:30 of this talk: Anders' Monte Carlo simulation (assumptions debatable, obviously) yields a wide curve centered around 2075. Separately, Anders expresses nontrivial uncertainty about the brain-model/cognitive-neuroscience step, setting aside the views of the non-Anders population.
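For flavor, a purely illustrative Monte Carlo in the same spirit; the stage medians and spreads below are made up, chosen only so the output lands in the same ballpark, and do not reproduce Anders' actual model:

```python
import math
import random
import statistics

# Toy model: WBE arrives when the slowest of several uncertain stages
# completes. Summing/maxing wide, skewed uncertainties is what produces
# a wide, skewed arrival curve. All medians/spreads below are made up.

random.seed(0)

def stage(median_years, spread):
    # Lognormal draw with the given median and multiplicative spread.
    return median_years * math.exp(random.gauss(0.0, math.log(spread)))

samples = []
for _ in range(100_000):
    scanning = stage(40, 1.6)      # made-up
    brain_model = stage(55, 1.8)   # the cognitive-neuroscience step
    hardware = stage(35, 1.5)      # made-up
    samples.append(2012 + max(scanning, brain_model, hardware))

deciles = statistics.quantiles(samples, n=10)
print("median year:", round(statistics.median(samples)))
print("10th-90th percentile:", round(deciles[0]), "-", round(deciles[-1]))
```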
I said "near the end of the century" contrasted to a prediction of intelligence explosion in 2045.
Here's one: http://phys.org/news/2012-08-d-wave-quantum-method-protein-problem.html
That doesn't apply to large proteins yet, but it doesn't make me optimistic about a long nanotech timeline. (Which is to say, it makes me update in favor of faster R&D.)
Nobody believes in D-Wave.
That seems like an oversimplification. Clearly some people do.
Scott Aaronson:
I am not qualified to judge whether D-Wave's claim to use quantum annealing in their adiabatic quantum computing, rather than standard simulated annealing (as Scott suspects), is justified. However, the lack of independent replication of their claims is disconcerting.
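For reference, the classical baseline Scott suspects looks roughly like this: a minimal Metropolis-style simulated-annealing loop on a toy Ising instance. Entirely illustrative, not D-Wave's problem encoding; quantum annealing targets the same kind of minimization but escapes local minima by tunneling rather than by thermal excitation.

```python
import math
import random

random.seed(0)
n = 20
# Toy random +/-1 couplings between spin pairs (a spin-glass-style instance).
J = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]

def energy(s):
    # Ising energy summed over pairs i < j.
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

spins = [random.choice((-1, 1)) for _ in range(n)]
for step in range(20_000):
    temp = 5.0 * 0.9995 ** step          # geometric cooling schedule
    i = random.randrange(n)
    old = energy(spins)
    spins[i] = -spins[i]                 # propose flipping one spin
    new = energy(spins)
    if new > old and random.random() >= math.exp((old - new) / temp):
        spins[i] = -spins[i]             # reject the uphill move
print("final energy:", energy(spins))
```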
Maybe they could get Andrea Rossi to confirm.
http://blogs.nature.com/news/2012/08/d-wave-quantum-computer-solves-protein-folding-problem.html
You have a computer doing something we could already do, but less efficiently than existing methods, which have not been impressively useful themselves?
ETA: https://plus.google.com/103530621949492999968/posts/U11X8sec1pU
The G+ post explains what it's good for pretty well, doesn't it?
It's not a dramatic improvement (yet), but it's a larger potential speedup than anything else I've seen on the protein-folding problem lately.
You can duplicate that D-Wave machine on a laptop.
True, but somewhat beside the point; it's the asymptotic speedup that's interesting.
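To spell out why those two observations are compatible: a machine that loses today on constant factors still wins eventually if it has a better exponent. The quadratic (Grover-like) speedup and the rate figures below are hypothetical:

```python
# Hypothetical comparison: classical exhaustive search at 2**n steps vs a
# quadratically sped-up device at 2**(n/2) steps, with a millionfold
# constant-factor handicap for the device. Numbers are illustrative only.

LAPTOP_RATE = 1e9    # classical steps/second (assumption)
DEVICE_RATE = 1e3    # device "steps"/second, a millionfold handicap (assumption)

for n in (20, 40, 60, 80):
    classical = 2 ** n / LAPTOP_RATE
    quadratic = 2 ** (n / 2) / DEVICE_RATE
    winner = "laptop" if classical < quadratic else "device"
    print(f"n={n}: laptop {classical:.2e}s vs device {quadratic:.2e}s -> {winner}")
# Constant factors decide small instances; exponents decide large ones
# (crossover here around n ~ 40).
```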
...you know, assuming the thing actually does what they claim it does. sigh
Also no asymptotic speedup.
This is puzzling.
I had thought that the question of AI timelines was so central that the core SI research community would have long since Aumannated and come to a consensus probability distribution.
Anyway, it's good that you're doing it now.
Maybe I was absent from the office that day? I hadn't heard Carl's 2083 estimate until now (when I recently asked him in person what his actual median was, he averaged his last several predictions together to get 2083), and it was indeed outside what I thought was our Aumann-range, hence my surprise.
It seems like the sort of thing people would plan to do on a day you were going to be in the office.
We had discussed timelines to this effect last year.
I'm wondering why this is stated as a conjunction. Would a single failure here really result in early AGI development?