KatjaGrace comments on Superintelligence Reading Group 3: AI and Uploads - Less Wrong Discussion

9 Post author: KatjaGrace 30 September 2014 01:00AM

Comment author: KatjaGrace 30 September 2014 12:05:26PM 3 points

There is a continuum between understanding the brain well and copying it in detail. But for much of that spectrum, where a big part of the capability still comes from copying well, I would expect a jump. Perhaps a better analogy would involve many locked boxes of nanotechnology, where we only get the whole picture once we have a combination of enough lockpicking skill and enough nanotech understanding.

Do you mean that this line of argument is evidence against brain emulations per se because such jumps are rare?

For AI, the most common arguments I have heard for fast progress involve recursive self-improvement, and/or insights related to intelligence being particularly large and chunky for some reason. Do you mean these are possible because we don't know how far we have come, or are you thinking of another line of reasoning?

It seems to me that any capability you wished to copy from an animal via careful replication, rather than via understanding, would have this character of perhaps progressing quickly once your copying abilities become sufficient. I can't think of anything else anyone tries to copy in this way though, which is perhaps telling.