VincentYu comments on On desiring subjective states (post 3 of 3) - Less Wrong Discussion

Post author: torekp, 05 May 2015 02:16AM

Comment author: VincentYu, 16 May 2015 09:07:41AM

From the post:

For uploading, that means whole brain emulation. In my underinformed opinion, whole brain emulation is not the Way to AGI if you just want AGI. At some point, then, AGI will be available while WBE systems will be way behind; and so, uploaders will at least temporarily face a deeply serious choice on this issue.

Are you suggesting that mind uploading to a non-WBE platform will be available before WBE? I don't think this is a common belief; "uploading" almost exclusively refers to WBE. See, for instance, Sandberg and Bostrom (2008), who don't distinguish between WBE and uploading:

Whole brain emulation, often informally called “uploading” or “downloading”, has been the subject of much science fiction and also some preliminary studies.

I think it is indeed a common belief that AGI may come before WBE, but as far as I know, it is not commonly believed that AGI will provide an alternative route to uploading, because human minds will likely not be feasibly translatable to whatever AGI architectures we come up with.

Comment author: torekp, 17 May 2015 12:34:56AM

Good question, thanks. Yes, I do think that "mind uploading", suitably loosely defined, will first be available on a non-WBE platform. I'm assuming that human-level AGI relatively quickly becomes superhuman, to the point where imitating the behavior of a specific individual becomes possible.

Comment author: VincentYu, 17 May 2015 04:15:28AM

I see. In GAZP vs. GLUT, Eliezer argues that the only feasible way to create a perfect imitation of a specific human brain is to perform computations that correspond in some way to the functional roles behind its mental states, and that, according to functionalism, such computations will produce identical conscious experiences.