Will_Newsome comments on Evidence for the orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong 03 April 2012 10:58AM




Comment author: Will_Newsome 03 April 2012 11:55:56AM 5 points

One of the most annoying arguments when discussing AI is the perennial "But if the AI is so smart, why won't it figure out the right thing to do anyway?" It's often the ultimate curiosity stopper.

How is this a curiosity stopper? It's a good question, as is evidenced by your trying to find an answer to it.

Comment author: grouchymusicologist 03 April 2012 12:08:34PM 7 points

It's a curiosity stopper in the sense that people stop worrying about risks from AI once they assume that intelligence correlates with doing the right thing, and that a superintelligence would therefore do the right thing all the time.

Stuart is trying to answer a different question, which is "Given that we think that's probably false, what are some good examples that help people to see its falsity?"

Comment author: Manfred 03 April 2012 12:28:49PM 7 points

"What should we have the AI's goals be?"

"Eh, just make it self-improve, once it's smart it can figure out the right goals."

Comment author: Eugine_Nier 04 April 2012 04:41:07AM 2 points

How's that any different from:

"What should we have the AI's beliefs be?"

"Eh, just make it self-improve, once it's smart it can figure out the true beliefs."

Comment author: JGWeissman 04 April 2012 05:16:06AM 2 points

It's not very different. They are both different from:

"The AI will acquire accurate beliefs by using a well understood epistemology to process its observations, as it is explicitly designed to do."

Comment author: komponisto 08 April 2012 03:33:34PM 0 points

"Smart" implicitly entails "knows the true beliefs", whereas it doesn't entail "has the right goals".

Comment author: TheAncientGeek 28 September 2013 11:44:02PM 1 point

"Smart" implicitly entails "knows the true beliefs", whereas it doesn't entail "has the right goals".

It doesn't exclude having the right goals, either. You could engineer something whose self-improvement was restricted from affecting its goals. But if doing that would be dangerous, why would you?

Comment author: Manfred 04 April 2012 12:58:55PM 0 points

Well, the difference is that building an AI without figuring out where goals come from gives you a dangerous AI, while building an AI without figuring out where beliefs come from gives you a highly-optimized compiler that wants to save humanity.

Comment author: Stuart_Armstrong 04 April 2012 09:17:53AM 0 points

Factual beliefs != moral beliefs.

And the methods for investigating them are very different.