Lumifer comments on The Hardcore AI Box Experiment - Less Wrong Discussion

Post author: tailcalled, 30 March 2015 06:35PM

Comment author: Lumifer, 30 March 2015 08:49:43PM, -1 points

it seems easier than to prove

Does it, now? How do you know?

Comment author: tailcalled, 30 March 2015 08:58:07PM, 2 points

They're both questions about program verification. However, one of the programs is godshatter while the other is just a universe. Encoding morality is a highly complicated project that depends on huge amounts of data (in order to capture human values). Designing a universe for the AI barely even requires empiricism, and it can be tested thoroughly without risking a world-ending disaster.

Comment author: Lumifer, 31 March 2015 12:03:42AM, 0 points

They're both questions about program verification.

No, I don't think so at all. Thinking that an AI box is all about program verification is like thinking that computer security is all about software bugs.