kgalias comments on Superintelligence Reading Group 3: AI and Uploads - Less Wrong

Post author: KatjaGrace 30 September 2014 01:00AM




Comment author: [deleted] 03 October 2014 08:51:00AM 1 point

Making a brain emulation machine requires (1) the ability to image a brain at sufficient resolution, and (2) computing power in excess of the largest supercomputers available today. Both of these are tasks that require a long engineering lead time and a commitment of resources, and neither is something we expect to be solved by some clever insight. Clever insight alone won't ever enable you to construct record-setting supercomputers out of leftover hobbyist computer parts, toothpicks, and superglue.

Comment author: kgalias 03 October 2014 09:49:15AM 3 points

Why do we assume that all that is needed for AI is a clever insight, rather than the insight-equivalent of a long engineering lead time and commitment of resources?

Comment author: [deleted] 03 October 2014 03:02:17PM 1 point

Because the scope of the problems involved, e.g. the search space over programs, can be calculated and compared with other similarly structured but solved problems (e.g. narrow AI). And in a very abstract theoretical sense, today's desktop computers are probably sufficient for running a fully optimized human-level AGI. This is a sensible and consistent result: it should not be surprising that emulating a computing substrate running a general intelligence (the brain simulated by a supercomputer) takes many orders of magnitude more computational power than running a natively coded AGI. Designing the program which implements the native, non-emulative AGI is basically a "clever insight" problem, or perhaps more accurately a large series of clever insights.
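The "orders of magnitude" claim can be sketched as a back-of-envelope calculation. Both figures below are illustrative assumptions (the emulation number is in the spirit of Sandberg & Bostrom's WBE roadmap estimates; the native-AGI number is a Moravec-style functional estimate), not established facts:

```python
import math

# Assumed compute for spiking-neural-network-level whole-brain emulation;
# finer biophysical detail would cost several orders of magnitude more.
wbe_flops = 1e18

# Assumed compute for a hypothetical natively coded AGI, i.e. a functional
# estimate of the brain's useful computation rather than its substrate.
native_agi_flops = 1e15

# Gap between emulating the substrate and running the algorithm natively.
gap = math.log10(wbe_flops / native_agi_flops)
print(f"Emulation overhead vs. native AGI: ~{gap:.0f} orders of magnitude")
```

Under these assumptions the overhead is roughly three orders of magnitude; pushing the emulation to electrophysiological or molecular detail widens the gap further.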

Comment author: kgalias 03 October 2014 03:46:42PM 1 point

I agree.

Why does this make it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?

Comment author: [deleted] 03 October 2014 07:04:39PM 1 point

We have the technical means to produce brain emulations; it requires only straightforward advances in imaging and larger supercomputers. Various smaller-scale brain emulation projects have already proved the concept. Doing it at larger scale and finer resolution simply requires a lot of person-years to get done.
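A rough scaling sketch shows why the imaging half alone is an engineering-scale task rather than an insight-scale one. The brain volume is a textbook figure, but the voxel size and bytes-per-voxel are assumptions chosen for illustration:

```python
brain_volume_mm3 = 1.4e6      # human brain, ~1.4 litres of tissue
voxel_nm = 5.0                # assumed resolution needed to trace fine neurites
bytes_per_voxel = 1           # assumed 8-bit greyscale per voxel

# Voxel count scales with the cube of the inverse resolution (1 mm = 1e6 nm).
voxels_per_mm3 = (1e6 / voxel_nm) ** 3
total_bytes = brain_volume_mm3 * voxels_per_mm3 * bytes_per_voxel

print(f"Voxels per mm^3: {voxels_per_mm3:.1e}")            # ~8e15
print(f"Whole-brain raw image data: {total_bytes:.1e} B")  # ~1e22 (zettabyte scale)
```

Halving the voxel size multiplies the data volume by eight, which is why "finer resolution" translates directly into person-years and hardware rather than into a cleverer algorithm.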

EDIT: In Rumsfeld-speak, whole-brain emulation is a series of known knowns: lots of work that we know needs to be done, and someone just has to do it. AGI, by contrast, involves known unknowns: we don't know precisely what has to be done, so we can't quantify exactly how long it will take. We could guess, but it remains possible that clever insight might find a better, faster, cheaper path.

Comment author: kgalias 07 October 2014 06:24:06PM 0 points

Sorry for the pause; internet problems at my place.

Anyway, it seems you're right. Technically, it might be more plausible for AI to be coded sooner (higher variance), even though I think it will take longer than emulation on average.