Wei_Dai comments on Journal of Consciousness Studies issue on the Singularity - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Similar theme from Hutter's paper:
If AIXI had the option of creating an AIXI (which by definition has the goal of maximizing its own rewards), or creating a different AI (non-AIXI) that had the goal of serving the goals of its creator instead, surely it would choose the latter option. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn't build another AIXI, why should we? Because we're just too dumb?
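For context on "by definition has the goal of maximizing its own rewards": Hutter defines AIXI as the agent that, at each cycle, selects the action maximizing expected future reward under a Solomonoff-style mixture over environments. A standard rendering of the action rule (notation roughly following Hutter's formulation; this is a sketch, not a quotation of the paper under discussion):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here $U$ is a universal monotone Turing machine, $q$ ranges over environment programs consistent with the interaction history, $\ell(q)$ is program length, and $m$ is the horizon. The point relevant to the comment above: the quantity being maximized is the agent's *own* reward stream $r_k, \ldots, r_m$, which is what makes a second AIXI an unreliable servant of the first.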
I like lines of inquiry like this one and would like it if they showed up more.
I'm not sure what you mean by "lines of inquiry like this one". Can you explain?
I guess it's not a natural kind; it just had a few things I like all jammed together compactly:
An AIXI might create another AIXI if it could determine that the rewards would coincide sufficiently, and it couldn't figure out how to get as good a result with another design (under real constraints).