Chalybs_Levitas comments on That Alien Message - Less Wrong

111 Post author: Eliezer_Yudkowsky 22 May 2008 05:55AM

Comment author: Chalybs_Levitas 19 November 2011 09:08:54AM 0 points

What if GR is wrong, and the AI does not output GR because it spots the flaw that we do not?

Comment author: Baughn 28 November 2011 08:53:07PM 0 points

Well, good for it?

GR is almost certainly wrong, given how badly it fits with QM. I'm no expert, but QM seems to work better than GR does, so it's more likely GR will have to change - which is what you'd expect from reductionism, I suppose. GR is operating at entirely the wrong level of abstraction.

Comment author: Estarlio 21 December 2011 07:18:27AM 10 points

The point is that if GR is wrong and the AI doesn't output GR because it's wrong, then your test will say that the AI isn't that smart. And then you do something like letting it out of the box, and everyone probably dies.

And if the AI is that smart, it will lie anyway...