Giles comments on Desired articles on AI risk? - Less Wrong

Post author: lukeprog 02 November 2012 05:39AM


Comment author: Giles 02 November 2012 05:58:37PM 6 points

I'd be interested to see a critique of Hanson's em world, but within the same general paradigm (i.e. not "that won't happen because intelligence explosion").

e.g.

  • why exactly would ems respect our property rights?
  • how useful is the analysis, given that the "ems behave just like fast copyable humans" assumption probably won't be valid for long?
Comment author: DaFranker 02 November 2012 06:18:36PM 3 points

how useful is the analysis, given that the "ems behave just like fast copyable humans" assumption probably won't be valid for long?

Yeah, I don't see how that assumption could last long.

Make me an upload, and suddenly you've got a bunch of copies each learning different things, and another bunch experimenting on how to create diff patches for stable knowledge merging from multiple studying branch copies. If it works, it wouldn't be long before the trunk mind becomes a supergenius polyexpert, if not an outright general superintelligence.
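The branch-and-merge scheme above can be made concrete with a deliberately crude sketch. This treats "knowledge" as a flat set of topic labels and a merge as a set union; that is a toy assumption for illustration, not a claim about how real em knowledge-merging would work, and all names here (fork, diff, merge) are made up for the example.

```python
# Toy model of the branch-and-merge upload scheme described above.
# Knowledge is modeled as a set of topic labels; this is purely
# illustrative, not a real em or brain-emulation framework.

def fork(trunk_knowledge, n):
    """Spawn n branch copies, each starting from the trunk's knowledge."""
    return [set(trunk_knowledge) for _ in range(n)]

def diff(branch, baseline):
    """The 'patch': knowledge the branch acquired that the baseline lacks."""
    return branch - baseline

def merge(trunk, diffs):
    """Stable merge: fold every branch's patch back into the trunk."""
    merged = set(trunk)
    for d in diffs:
        merged |= d
    return merged

baseline = {"general education"}
branches = fork(baseline, 3)
curricula = [{"neuroscience"}, {"economics"}, {"cryptography"}]
for branch, topics in zip(branches, curricula):
    branch |= topics  # each copy studies its own field in parallel

trunk = merge(baseline, [diff(b, baseline) for b in branches])
# The trunk now holds every branch's expertise at once.
```

The point of the toy model is only to show why the "just like humans" assumption breaks: the trunk's knowledge grows with the number of branches per merge cycle, which no individual human can match.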

And that's just one of the many ways anyone could think of for things to go weird.

Comment author: Giles 02 November 2012 07:16:23PM 3 points [-]

I think Hanson comes at this from the angle of "let's apply what's in our standard academic toolbox to this problem". There might be people who find that approach convincing but would skim over more speculative-sounding material, so it seems worth pursuing.

I don't really disagree with your analysis, but I wonder: which current academic discipline comes closest to being able to frame this kind of idea?