loup-vaillant comments on Evaluating the feasibility of SI's plan - Less Wrong

Post author: JoshuaFox, 10 January 2013 08:17AM


Comment author: loup-vaillant, 12 January 2013 10:06:50PM

> Even the provably friendly design will face real-world compromises and errors in its implementation, so the implementation will not itself be provably friendly.

Err… Coq? The impossibility of proving computer programs correct is a common trope, but a false one. It's just very hard and very expensive to do for any sufficiently large program. Hopefully, a real-world implementation of the bootstrap code for whatever math the AI needs will be optimized for simplicity, and will therefore stand a chance of being formally proven.
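To make the claim concrete, here is a toy machine-checked proof, written in Lean rather than Coq for brevity (the function and theorem names here are invented for illustration):

```lean
-- A trivially small program…
def double (n : Nat) : Nat := n + n

-- …and a proof the checker verifies for *all* n, not just test cases.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega   -- linear-arithmetic decision procedure closes the goal
```

Scaling from a two-line function to a whole system is exactly the "very hard and very expensive" part, but projects like CompCert (a Coq-verified C compiler) show it can be done for realistically sized software.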

Comment author: JoshuaFox, 13 January 2013 09:32:39AM

Yes, this is Eliezer's hope, and I certainly hope so too.

But an implementable AI will perhaps require a jumble of modules -- attention, vision, memory control, subgoal management, etc., etc. -- making it a "sufficiently large [and complex] program" for which machine proof is not feasible.

I don't know if that's true, but if it is, we're in trouble. To use AIXI as a point of comparison, albeit a weak one: AIXI itself is amenable to mathematical proof, but once people start digging down to implementations of even toy systems, things get complicated fast.

Also, SI's future proofs and software may be logically sound internally but fail to map correctly onto the world; and the proof system itself can have bugs.

Comment author: loup-vaillant, 13 January 2013 10:46:49AM

Well, there will always remain some logical uncertainty. Anyway, I remain relatively optimistic about the feasibility of a formal proof. That optimism comes from looking at our current systems.

Currently, a desktop system (OS + Desktop UI + office suite + Mail + Web browser) is about two hundred million lines of code, whether it is Windows, GNU/Linux, or MacOSX. The VPRI team was able to build a prototype of similar functionality in about twenty thousand lines -- 4 orders of magnitude smaller. (More details in their manifesto and their various progress reports.)
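As a sanity check on the arithmetic (the line counts below are the rough figures quoted above, not measurements):

```python
import math

desktop_loc = 200_000_000  # ~LOC of a conventional desktop stack (rough figure)
steps_loc = 20_000         # ~LOC of the VPRI prototype (rough figure)

ratio = desktop_loc / steps_loc
print(ratio)               # 10000.0
print(math.log10(ratio))   # 4.0 -- four orders of magnitude
```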

> There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity. -- Fred Brooks

This is often understood as "we will never observe even a single order-of-magnitude improvement from new software-making techniques", which I think is silly or misinformed. The above tells me that we have at least 3 orders of magnitude ahead of us. Maybe not enough to make an AI design provable, but still damn closer than current techniques get us.

Comment author: JoshuaFox, 13 January 2013 11:35:54AM

> (OS + Desktop UI + office suite + Mail + Web browser) […] The above tells me that we have at least 3 orders of magnitude ahead of us.

We know a lot about what is needed in an OS, Web software, etc., from experience.

Is it possible to go through 3 orders of magnitude of improvement in any system, such as a future AGI, without running a working system in between?

Comment author: loup-vaillant, 14 January 2013 12:20:12AM

My reasoning is a bit different. We know that current desktop systems are a mess that we cobbled together over time with outdated techniques. The VPRI team basically showed that if we had known how to do it from the beginning, it would have been about 1000 times easier.

I expect the same to be true for any complex system, including AGI. If we cobble it together over time with outdated techniques, it will likely be 1000 times more complex than it needs to be. My hope is that we can avoid those complexities altogether -- so it wouldn't exactly be an "improvement", since there would be no crappy system to compare against.

As for the necessity of having a working system before we can improve its design… well, it's not always the case. I'm currently working on a metacompiler, and I'm refining its design right now, though I haven't even bootstrapped it yet.