
beoShaffer comments on Amending the "General Purpose Intelligence: Arguing the Orthogonality Thesis"

Post author: diegocaleiro | 13 March 2013 11:21PM


Comments (22)


Comment author: beoShaffer | 14 March 2013 03:04:05AM | 7 points

Why do you think this? And, on a related note, why do you think AIs without X will stop functioning or hit a ceiling (in the sense of: what is the causal mechanism)?

Comment author: [deleted] | 16 March 2013 02:46:05AM * | -1 points

Taking a wild guess, I’d say…

Starting from my assumption that concept-free general intelligence is impossible, it follows that some minimal initial set of concepts must be built into any AGI.

This minimal set of concepts would imply certain necessary cognitive biases/heuristics (because the very definition of a ‘concept’ implies a particular grouping or clustering of data, which is itself an initial ‘bias’), and that in turn is equivalent to some necessary starting values (a ‘bias’ is, in a sense, a type of value judgement).
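
As a toy illustration of that point (my own sketch, nothing rigorous; the data and names are invented for the example): the same four items, “conceptualized” under two different built-in judgements about which feature matters, fall into two different groupings. Nothing in the data alone forces one grouping over the other; the choice of judgement is the built-in bias.

    # Toy sketch: one dataset, two built-in similarity judgements,
    # two different sets of "concepts". The judgement itself is the bias.
    animals = [
        {"name": "bat",     "flies": True,  "mammal": True},
        {"name": "sparrow", "flies": True,  "mammal": False},
        {"name": "whale",   "flies": False, "mammal": True},
        {"name": "trout",   "flies": False, "mammal": False},
    ]

    def group_by(items, key):
        """Form 'concepts' by clustering on whichever feature is treated as essential."""
        groups = {}
        for item in items:
            groups.setdefault(item[key], []).append(item["name"])
        return groups

    # Built-in bias A: locomotion is what matters.
    print(group_by(animals, "flies"))   # {True: ['bat', 'sparrow'], False: ['whale', 'trout']}

    # Built-in bias B: physiology is what matters.
    print(group_by(animals, "mammal"))  # {True: ['bat', 'whale'], False: ['sparrow', 'trout']}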

The same set of heuristics/biases (values) involved in taking actions in the world would also be involved in managing, i.e. reorganizing, the internal representational system of the AIs. If that reorganization is not performed in a self-consistent fashion, the AIs stop functioning. Remember: we are talking about a closed loop here. The heuristics/biases used to reorganize the representational system have to themselves be fully represented in that system.
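
To make the closed-loop point concrete, here is a deliberately crude sketch (every name and structure in it is my own invention, not a real architecture): the reorganization rules live inside the representational system they act on, and a reorganization step that fails to carry its own rules forward leaves the system unable to take the next step.

    # Toy sketch of the closed loop: the rules that reorganize the system
    # are themselves items stored in that system.
    system = {
        "concepts": {"food", "threat", "tool"},
        "rules": {"merge_similar", "split_overloaded"},  # rules represented inside the system
    }

    def reorganize(system, keep_rules_represented=True):
        """One reorganization step, driven by the rules the system itself contains."""
        new_concepts = {c.upper() for c in system["concepts"]}   # stand-in for real restructuring
        new_rules = set(system["rules"]) if keep_rules_represented else set()
        return {"concepts": new_concepts, "rules": new_rules}

    def step(system):
        if not system["rules"]:
            raise RuntimeError("loop broken: no represented rules left to reorganize with")
        return reorganize(system)

    healthy = step(system)  # fine: the rules were re-represented, so the loop stays closed
    broken = reorganize(system, keep_rules_represented=False)
    try:
        step(broken)
    except RuntimeError as err:
        print(err)          # the loop is no longer closed; no further reorganization is possible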

Therefore, the causal mechanism that stops the uAIs would be the eventual breakdown of their representational systems as the need for ever more new concepts arises, a breakdown stemming from the inconsistent and/or incomplete initial heuristics/biases used to manage those representational systems (i.e., a failure to maintain the closed loop).

Advanced hard math for all of this to follow…