
Wei_Dai comments on Journal of Consciousness Studies issue on the Singularity - Less Wrong Discussion

14 Post author: lukeprog 02 March 2012 03:56PM




Comment author: Wei_Dai 02 March 2012 10:00:30PM 10 points

Similar theme from Hutter's paper:

Will AIXI replicate itself or procreate? Likely yes, if AIXI believes that clones or descendants are useful for its own goals.

If AIXI had the option of creating an AIXI (which by definition has the goal of maximizing its own rewards), or creating a different AI (non-AIXI) that had the goal of serving the goals of its creator instead, surely it would choose the latter option. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn't build another AIXI, why should we? Because we're just too dumb?
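For context, the sense in which AIXI "by definition" maximizes its own rewards is Hutter's action-selection rule. A rough sketch of the standard formulation (where $o_i$ are observations, $r_i$ rewards, $U$ a universal monotone Turing machine, $\ell(q)$ the length of program $q$, and $m$ the lifetime/horizon):

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

That is, each AIXI agent picks the action maximizing *its own* expected future reward under a Solomonoff-style mixture over environments, which is why a freshly created AIXI would pursue its own reward signal rather than its creator's goals.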

Comment author: Will_Newsome 03 March 2012 07:49:48AM 2 points

I like lines of inquiry like this one and would like it if they showed up more.

Comment author: Wei_Dai 03 March 2012 08:46:21AM 0 points

I'm not sure what you mean by "lines of inquiry like this one". Can you explain?

Comment author: Will_Newsome 03 March 2012 08:57:31AM 6 points

I guess it's not a natural kind, it just had a few things I like all jammed together compactly:

  • Decompartmentalizes knowledge between domains, in this case between AIXI AI programmers and human AI programmers.
  • Talks about creation qua creation rather than creation as some implicit kind of self-modification.
  • Uses common sense to carve up the questionspace naturally in a way that suggests lines of investigation.

Comment author: Luke_A_Somers 03 March 2012 02:53:54AM 1 point

An AIXI might create another AIXI if it could determine that their rewards would coincide sufficiently, and if it couldn't figure out how to get as good a result with a different design (under realistic constraints).