Eugine_Nier comments on Evidence for the orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong, 03 April 2012 10:58AM




Comment author: Eugine_Nier 04 April 2012 04:41:07AM 2 points

How is that any different from:

"What should we have the AI's beliefs be?"

"Eh, just make it self-improve, once it's smart it can figure out the true beliefs."

Comment author: JGWeissman 04 April 2012 05:16:06AM 2 points

It's not very different. They are both different from:

"The AI will acquire accurate beliefs by using a well understood epistemology to process its observations, as it is explicitly designed to do."

Comment author: komponisto 08 April 2012 03:33:34PM 0 points

"Smart" implicitly entails "knows the true beliefs", whereas it doesn't entail "has the right goals".

Comment author: TheAncientGeek 28 September 2013 11:44:02PM 1 point

"Smart" implicitly entails "knows the true beliefs", whereas it doesn't entail "has the right goals".

It doesn't exclude having the right goals, either. You could engineer something whose self-improvement was restricted from affecting its goals. But if that were dangerous, why would you?

Comment author: Manfred 04 April 2012 12:58:55PM 0 points

Well, the difference is that building an AI without figuring out where goals come from gives you a dangerous AI, while building an AI without figuring out where beliefs come from gives you a highly-optimized compiler that wants to save humanity.

Comment author: Stuart_Armstrong 04 April 2012 09:17:53AM 0 points

factual beliefs != moral beliefs

And the methods for investigating them are very different.