KatjaGrace comments on Superintelligence Reading Group 3: AI and Uploads - Less Wrong

Post author: KatjaGrace 30 September 2014 01:00AM




Comment author: KatjaGrace 01 October 2014 07:37:39PM 2 points

I see. In that case, why would you expect applying intelligence to that problem to bring about a predictable discontinuity, but applying intelligence to other problems not to?

Comment author: Cyan 01 October 2014 07:46:21PM 2 points

Because the solution has an immediate impact on the exercise of intelligence, I guess? I'm a little unclear on what other problems you have in mind.

Comment author: KatjaGrace 01 October 2014 09:18:06PM 2 points

The impact on the exercise of intelligence doesn't seem to come until the ems are already discontinuously better (if I understand correctly), so it can't explain the discontinuous progress.

Comment author: Cyan 01 October 2014 11:18:27PM 1 point

Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities: being able to run those computations in places pink goo can't go, and at speeds pink goo can't manage, is already a huge leap.

Comment author: KatjaGrace 02 October 2014 12:37:43AM 2 points

Even if achieving that is a huge leap, it is unclear to me how the computations could have contributed to the leap before you actually run them.

Comment author: Cyan 02 October 2014 01:51:36PM 0 points

But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.

Since we're just bouncing short comments off each other at this point, I'm going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:

Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. ...If this is an unusual situation however, it seems strange that the other most salient route to superintelligence - artificial intelligence designed by humans - is also often expected to involve a discontinuous jump in capability, but for entirely different reasons.

The commonality is that both routes attack a critical aspect of the manifestation of intelligence. One goes straight for an understanding of the abstract computation that implements domain-general intelligence; the other goes after the "interpreter" -- physics -- that realizes that abstract computation.