cousin_it comments on Announcing the AI Alignment Prize

Post author: cousin_it 03 November 2017 03:45PM

Comment author: cousin_it 19 December 2017 03:51:47PM 0 points

I hope at least you care if everyone on Earth dies painfully tomorrow. We don't have any theory that would stop AI from doing that, and any progress toward such a theory would be on topic for the contest.

Sorry, I'm feeling a bit frustrated. It's as if the decade of LW never happened, and people snap back out of rationality once they go off the dose of Eliezer's writing. And the mode they snap back to is so painfully boring.

Comment author: entirelyuseless 20 December 2017 03:43:01AM 0 points

I do care about tomorrow, which is not the long run.

I don't think we should assume that AIs will have any goals at all, and I rather suspect they will not, in the same way that humans do not, only more so.

Comment author: Lumifer 19 December 2017 04:40:27PM 0 points

"tomorrow"

That's not conventionally considered to be "in the long run".

"We don't have any theory that would stop AI from doing that"

The primary reason is that we don't have any theory about what a post-singularity AI might or might not do. Doing some pretty basic decision theory focused on the corner cases is not "progress".