DanArmak comments on "Computation complexity of AGI design"

Post author: Squark | 02 February 2015 08:05PM

Comment author: DanArmak | 03 February 2015 11:20:15AM

I feel you should give a strict definition of general intelligence (artificial or not). If you define it as "an intelligence that can solve all solvable problems with constrained resources", it's not clear that humans have a general intelligence. (For instance, you yourself argue that humans might be unable to solve the problem of creating AGI.) But if humans don't have a general intelligence, then creating an artificial human-level intelligence might be easier than creating an AGI.

Comment author: Squark | 05 February 2015 07:51:17AM

Hi Daniel, thanks for commenting!

How strict do you want the definition to be? :) A lax definition would be efficient cross-domain optimization. A mathematically rigorous definition would be the updateless intelligence metric (once we solve logical uncertainty and can make the definition truly rigorous).

Roughly, general intelligence is the ability to efficiently solve the average problem, where the average is taken over a Solomonoff ensemble.
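
For concreteness, one standard way to formalize "solving the average problem over a Solomonoff ensemble" is Legg and Hutter's universal intelligence measure; the formula below sketches that measure, which is related to but not the same as the updateless metric referenced above:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Here E is the class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected cumulative reward the agent \pi earns in \mu. The Solomonoff-style weight 2^{-K(\mu)} means simpler problems dominate the average, so an agent scores highly by performing well across many simple environments and as many complex ones as it can manage.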