Larks comments on Superintelligence 15: Oracles, genies and sovereigns - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (30)
I don't really understand why AGI would be so different from currently existing software. Current software seems docile - we worry more about getting it to do anything in the first place, and less about it accidentally doing totally unrelated things. Yet AGI seems to be treated as the exact opposite. We seem to think of AGI as being 'like humans, only more so' rather than 'like software, only more so'. Indeed, in many cases it seems that knowing about conventional software actually inhibits one's ability to think about AGI. I don't really understand why this should be the case.