Recovering_irrationalist comments on Artificial Addition - Less Wrong

Post author: Eliezer_Yudkowsky | 20 November 2007 07:58AM | 36 points

Comment author: Recovering_irrationalist | 20 November 2007 04:13:25PM | 1 point

Me: AGI is a William Tell target. A near miss could be very unfortunate. We can't responsibly take a proper shot till we have an appropriate level of understanding and confidence of accuracy.

Caledonian: That's not how William Tell managed it. He had to practice aiming at less-dangerous targets until he became an expert, and only then did he attempt to shoot the apple.

Yes, by "take a proper shot" I meant shooting at the proper target with proper shots. And yes, practice on less-dangerous targets is necessary, but it's not sufficient.

It is not clear to me that it is desirable to prejudge what an artificial intelligence should desire or conclude, or even possible to purposefully put real constraints on it in the first place. We should simply create the god, then acknowledge the truth: that we aren't capable of evaluating the thinking of gods.

I agree we can't accurately evaluate superintelligent thoughts, but that doesn't mean we can't or shouldn't try to affect what it thinks or what its goals are.

I couldn't do this argument justice here. I encourage interested readers to read Eliezer's paper on Coherent Extrapolated Volition.