Eliezer_Yudkowsky comments on Fake Utility Functions - Less Wrong

Post author: Eliezer_Yudkowsky 06 December 2007 04:55PM



Comment author: Eliezer_Yudkowsky 07 December 2007 12:49:36PM 6 points

Toby Ord, you should probably sign out and sign back in. Re: utilitarianism, if we are not striving, consciously or unconsciously, solely for happiness, and if I currently take a stand against rewiring my brain to enjoy an illusion of scientific discovery, and if I regard this as a deeply important and moral decision, then why on Earth should I listen to the one who comes and says, "Ah, but happiness is all that is good for you, whether you believe it or not"? Why would I not simply reply "No"?

And, all:

WE ARE NOT READY TO DISCUSS GENERAL ISSUES OF FRIENDLY AI YET. It took an extra month, beyond what I had anticipated, just to get to the point where I could say in defensible detail why a simple utility function wouldn't do the trick. We are nowhere near the point where I can answer, in defensible detail, most of these other questions.

DO NOT PROPOSE SOLUTIONS BEFORE INVESTIGATING THE PROBLEM AS DEEPLY AS POSSIBLE WITHOUT PROPOSING ANY. If you do propose a solution, then attack your own answer; don't wait for me to do it! Any resolution you come up with for Friendly AI is nearly certain to be wrong, whether it's a positive policy or an impossibility proof, so you can get a lot further by attacking your own resolution than by defending it. If you have to rationalize, it helps to be rationalizing the correct answer rather than the wrong answer.

DON'T STOP AT THE FIRST ANSWER YOU SEE. Question your first reaction, then question the questions.

But above all, wait on having this discussion, okay?