MattG comments on Goal retention discussion with Eliezer - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Why? That's not a necessary logical consequence. These aren't (or don't have to be) chaotic systems, so there is no reason that scaling up the size of the computation must result in an unauditable mess. The techniques available depend very much on the AGI architecture, but there are designs that allow tracing thought patterns and answering questions about the system's operation in computationally tractable ways.
Just as an example of something a human couldn't understand but a sufficiently smart computer might: writing machine code directly in binary, without the intermediate step of a programming language.
That could be read as disassembled machine code, which humans can understand, though not in large quantities.
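To give a sense of what reading raw binary as assembler involves, here is a minimal sketch of the byte-to-mnemonic mapping a disassembler performs. It is purely illustrative and handles only two x86-64 encodings (`mov r32, imm32` and `ret`); real tools such as IDA cover the full instruction set and add control-flow analysis on top.

```python
import struct

def disassemble(code: bytes) -> list[str]:
    """Decode a byte string into assembly mnemonics (tiny x86-64 subset)."""
    out = []
    i = 0
    while i < len(code):
        op = code[i]
        if 0xB8 <= op <= 0xBF:
            # Opcodes 0xB8..0xBF encode "mov r32, imm32"; the low bits
            # select the register, followed by a little-endian 32-bit immediate.
            reg = ["eax", "ecx", "edx", "ebx",
                   "esp", "ebp", "esi", "edi"][op - 0xB8]
            imm = struct.unpack_from("<I", code, i + 1)[0]
            out.append(f"mov {reg}, {imm}")
            i += 5
        elif op == 0xC3:  # near return
            out.append("ret")
            i += 1
        else:
            out.append(f"db 0x{op:02x}")  # byte we don't know how to decode
            i += 1
    return out

# The bytes a "binary-writing" programmer would emit for: mov eax, 42; ret
print(disassemble(b"\xb8\x2a\x00\x00\x00\xc3"))
# → ['mov eax, 42', 'ret']
```

The point being: the mapping from bytes back to mnemonics is mechanical, which is why binary output is recoverable into something humans can read, even if slowly.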
Interesting. Consider me corrected.
For anything nontrivial, we need software support to do that, and even then it doesn't work very well. You might not be absolutely correct, but you're close.
IDA is a wonderful piece of software, though. A heck of a lot better than working manually.