jsteinhardt comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky 11 May 2012 04:31AM 256 points


Comment author: Eliezer_Yudkowsky 15 May 2012 05:49:19PM 31 points

Reading Holden's transcript with Jaan Tallinn (trying to go over the whole thing before writing a response, due to having done Julia's Combat Reflexes unit at Minicamp and realizing that the counter-mantra 'If you respond too fast you may lose useful information' was highly applicable to Holden's opinions about charities), I came across the following paragraph:

My understanding is that once we figured out how to get a computer to do arithmetic, computers vastly surpassed humans at arithmetic, practically overnight ... doing so didn't involve any rewriting of their own source code, just implementing human-understood calculation procedures faster and more reliably than humans can. Similarly, if we reached a good enough understanding of how to convert data into predictions, we could program this understanding into a computer and it would overnight be far better at predictions than humans - while still not at any point needing to be authorized to rewrite its own source code, make decisions about obtaining "computronium" or do anything else other than plug data into its existing hardware and algorithms and calculate and report the likely consequences of different courses of action.
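To make the pattern Holden describes concrete, here is a minimal sketch of the "plug data into existing hardware and algorithms and report" loop. It is an illustration only: the hand-written least-squares model, the toy data, and all names are assumptions, not anything from the thread.

```python
# A minimal sketch of the "tool AI" pattern Holden describes: a fixed,
# human-written calculation procedure that turns data into predictions
# and reports them. Model, data, and names are illustrative assumptions.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, written out by hand."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

def report_consequences(data, candidate_actions):
    """Plug the data into the fixed algorithm and report predicted outcomes.

    The program only calculates and reports; it never rewrites its own
    source code and never chooses or executes an action itself.
    """
    xs, ys = zip(*data)
    a, b = fit_linear(xs, ys)
    return {action: a * action + b for action in candidate_actions}

if __name__ == "__main__":
    observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # toy data
    print(report_consequences(observations, [4.0, 5.0]))
```

The point of the sketch is structural: all of the "intelligence" lives in a procedure the programmers understood and wrote down in advance, and running it faster or on more data never requires authorizing self-modification.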

I've previously been asked to evaluate this possibility a few times, but I think the last time I did was several years ago, and when I re-evaluated it today I noticed that my evaluation had substantially changed in the interim due to further belief shifts in the direction of "Intelligence is not as computationally expensive as it looks" - constructing a non-self-modifying predictive super-human intelligence might be possible on the grounds that human brains are just that weak. It would still require a great feat of cleanly designed, strong-understanding-math-based AI - Holden seems to think this sort of development would happen naturally with the sort of AGI researchers we have nowadays, and I wish he'd spent a few years arguing with some of them to get a better picture of how unlikely this is.

Even if the algorithms you write and run are not self-modifying, you are still applying optimization criteria to things like "have the humans understand you", and inductive learning has a certain inherent degree of program-creation to it. You would need to have done a lot of "the sort of thinking you do for Friendly AI" to set out to create such an Oracle and not have it kill your planet.
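The "inductive learning has a certain inherent degree of program-creation to it" point can be made concrete with a toy sketch (an assumed example, not Eliezer's): the source code below is fixed and never modified, yet fitting it to data synthesizes a new decision procedure that no programmer wrote.

```python
# Toy illustration: a non-self-modifying learner still creates programs.
# learn_stump never edits any source code, but its output is a freshly
# synthesized function - in effect a small program written by the data.

def learn_stump(examples):
    """Pick the threshold on x that best separates the labeled examples."""
    best_threshold, best_accuracy = None, -1.0
    for t in sorted(x for x, _ in examples):
        accuracy = sum((x > t) == label for x, label in examples) / len(examples)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = t, accuracy
    return lambda x: x > best_threshold  # the "learned program"

if __name__ == "__main__":
    data = [(0.5, False), (1.5, False), (2.5, True), (3.5, True)]
    classifier = learn_stump(data)  # no source file was rewritten...
    print(classifier(3.0), classifier(1.0))  # ...but new behavior now exists
```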

Nonetheless, I think that after further consideration I would end up substantially increasing my expectation that if you have some moderately competent Friendly AI researchers, they would apply their skills to create a (non-self-modifying) (but still cleanly designed) Oracle AI first - that this would be permitted by the true values of "required computing power" and "inherent difficulty of solving the problem directly", and desirable for reasons I haven't yet thought through in much detail - and so by Conservation of Expected Evidence I am executing that update now.
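(For reference, Conservation of Expected Evidence is the Bayesian identity

$$P(H) \;=\; \sum_i P(E_i)\,P(H \mid E_i),$$

i.e. the prior equals the expected value of the posterior. You cannot expect evidence or reflection to shift your credence in a predictable direction; if you can see the update coming, you should make it now - which is what the paragraph above does.)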

Flagging and posting now so that the issue doesn't drop off my radar.

Comment author: jsteinhardt 18 May 2012 03:05:21PM 9 points

Holden seems to think this sort of development would happen naturally with the sort of AGI researchers we have nowadays, and I wish he'd spent a few years arguing with some of them to get a better picture of how unlikely this is.

While I can't comment on AGI researchers, I think you underestimate more mainstream AI researchers such as Stuart Russell and Geoff Hinton, or cognitive scientists like Josh Tenenbaum, or even more AI-focused machine learning people like Andrew Ng, Daphne Koller, Michael Jordan, Dan Klein, Rich Sutton, Judea Pearl, Leslie Kaelbling, and Leslie Valiant (and this list is no doubt incomplete). They might not be claiming that they'll have AI in 20 years, but that's likely because they are actually grappling with the relevant issues and therefore see how hard the problem is likely to be.

It doesn't strike me as completely unreasonable that we could have a major breakthrough that gives us AI in 20 years, but it's hard to see what the candidate would be. Then again, I have only been thinking about these issues for a couple of years, so I still maintain a pretty high degree of uncertainty about all of these claims.

I do think I basically agree with you re: inductive learning and program creation, though. When you say non-self-modifying Oracle AI, do you also mean that the Oracle AI doesn't get to do inductive learning? Because I suspect that inductive learning of some sort is fundamentally necessary, for reasons that you yourself nicely outline here.

Comment author: Eliezer_Yudkowsky 18 May 2012 10:11:15PM 12 points

I agree that top mainstream AI guy Peter Norvig was way the heck more sensible than the reference class of declared "AGI researchers" when I talked to him about FAI and CEV, and that estimates should be substantially adjusted accordingly.

Comment author: thomblake 20 May 2012 07:45:18PM 1 point

Yes. I wonder if there's a good explanation for why narrow AI folks are so much more sensible than AGI folks on those subjects.

Comment author: DanArmak 27 May 2012 10:10:12PM 5 points

Because they have experience with their products actually working, they know that 1) these things can be really powerful, even though narrow, and 2) there are always bugs.