Will_Pearson comments on That Alien Message - Less Wrong

111 points | Post author: Eliezer_Yudkowsky | 22 May 2008 05:55AM

Comment author: Will_Pearson 22 May 2008 08:03:32AM 0 points

This story reminds me of His Master's Voice by Stanislaw Lem, in which humanity's attempt to decode a message from the stars has a completely different outcome.

Some form of proof of concept would be nice. Alter OOPS to use Occam's razor, or implement AIXItl, and then give it a picture of a bent piece of grass or three frames of a ball, and see what you get. As long as GR is in the hypothesis space, it should, by your reasoning, be the most probable hypothesis after these images. The unbounded, uncomputable versions shouldn't have any advantage in this case.
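
To be concrete about the scoring I have in mind, here is a toy sketch of the Occam-prior comparison (it is not OOPS or AIXItl; the hypothesis names, description lengths, and likelihoods below are invented purely for illustration): weight each candidate hypothesis by 2^(-description length) times how well it predicts the observed frames.

    # Toy Occam-prior comparison, not OOPS or AIXItl. The hypotheses,
    # description lengths (bits), and likelihoods are made up for illustration.
    candidates = {
        # name: (description length in bits, P(observed frames | hypothesis))
        "constant-velocity motion": (50, 1e-12),   # fits three frames of a fall badly
        "Newtonian gravity":        (80, 0.60),
        "general relativity":       (400, 0.61),
    }

    def occam_posterior(hypotheses):
        # Unnormalised Solomonoff-style score: 2^(-length) * likelihood,
        # then normalise so the posteriors sum to one.
        scores = {name: 2.0 ** -length * likelihood
                  for name, (length, likelihood) in hypotheses.items()}
        total = sum(scores.values())
        return {name: score / total for name, score in scores.items()}

    for name, prob in sorted(occam_posterior(candidates).items(),
                             key=lambda kv: -kv[1]):
        print(f"{name}: {prob:.3g}")

With only a few frames the likelihoods barely differ, so the Occam penalty on GR's much longer description dominates.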

I'd be surprised if you got anything like modern physics popping out. I'll do this test on any AI I create. If any of them produce hypotheses like GR, I'll stop working on them until the friendliness problem has been solved. This should be safe, unless you think it could deduce my psychology from this as well.

Comment author: Chalybs_Levitas 19 November 2011 09:08:54AM 0 points

What if GR is wrong, and the AI does not output GR because it spots a flaw that we do not?

Comment author: Baughn 28 November 2011 08:53:07PM 0 points

Well, good for it?

GR is almost certainly wrong, given how badly it fits with QM. I'm no expert, but QM seems to work better than GR does, so it's more likely that GR will have to change - which is what you'd expect from reductionism, I suppose. GR is operating at entirely the wrong level of abstraction.

Comment author: Estarlio 21 December 2011 07:18:27AM 10 points

The point is that if GR is wrong and the AI doesn't output GR because it's wrong, then your test will say that the AI isn't that smart. And then you do something like letting it out of the box, and everyone probably dies.

And if the AI is that smart it will lie anyway....