
Danielle_Fong comments on Growing Up is Hard - Less Wrong

28 Post author: Eliezer_Yudkowsky 04 January 2009 03:55AM


Comment author: Danielle_Fong 05 January 2009 03:10:17AM 0 points


Sure, there are upgrades for which one can more or less prove deterministically how they change a subsystem in isolation. Things like adding the capability for zillion-bit math, or adding a huge associative memory. But it's not clear that such a subsystem would actually be an upgrade once it interacts with the rest of the AI and with the unpredictable environment at the same time. I guess the word I'm getting hung up on is 'correctness.' Sure, the subsystems could be deterministically correct, but would they necessarily be a system-wide upgrade?
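To make the worry concrete, here is a toy sketch (my own construction, with an invented cost model, not anything from the original comment): a replacement subsystem can pass a deterministic correctness proof in isolation and still make the whole system perform worse, because interaction effects like latency under a resource budget are invisible to the isolated check.

```python
def fast_approx_square(x):
    """Old subsystem: occasionally wrong, but cheap (1 op per call)."""
    return x * x if x % 10 else x * x + 1  # deliberate rare error

def slow_exact_square(x):
    """'Upgraded' subsystem: provably correct in isolation, but O(x) cost."""
    total = 0
    for _ in range(abs(x)):
        total += abs(x)
    return total

def subsystem_is_correct(f, cases):
    """Deterministic check of the subsystem in isolation."""
    return all(f(x) == x * x for x in cases)

def system_score(f, inputs, budget_ops):
    """Whole-system view: answers only count if produced within a
    (hypothetical) op budget. Approx costs 1 op, exact costs |x| ops."""
    score, spent = 0, 0
    for x in inputs:
        spent += 1 if f is fast_approx_square else abs(x)
        if spent > budget_ops:
            break  # deadline hit: remaining inputs go unanswered
        if f(x) == x * x:
            score += 1
    return score

inputs = list(range(1, 201))
print(subsystem_is_correct(slow_exact_square, range(50)))   # True: locally correct
print(system_score(fast_approx_square, inputs, 200))        # 180 correct answers
print(system_score(slow_exact_square, inputs, 200))         # 19 correct answers
```

The 'upgrade' is deterministically correct on every case it is checked against, yet under the whole-system scoring it answers far fewer inputs correctly than the flawed original. The local proof was real; it just wasn't a proof about the system.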

It also seems quite plausible that there are certain 'upgrades' (or at least large cognitive system changes) which can't be arrived at deterministically, even by a superhuman intelligence.