RichardKennaway comments on The Strangest Thing An AI Could Tell You - Less Wrong

Post author: Eliezer_Yudkowsky 15 July 2009 02:27AM


Comment author: RichardKennaway 28 July 2009 02:33:37PM 4 points

If the AGI can't convince me of something, maybe it's not because it isn't smart enough to explain, but because I'm not smart enough to understand.

Comment author: DanielLC 09 April 2011 08:27:57PM 5 points

You don't have to understand the real reason. It just has to convince you. Eliezer Yudkowsky can convince someone to let an AI out of a box in a thought experiment, and to give him money in real life, even when that person doesn't believe it to be the logical course of action.

Comment author: UnholySmoke 28 July 2009 02:44:47PM 4 points

Dead right. It would seem very silly to believe that rationality hits a glass ceiling at human-level intelligence. Unlikely though it is, if the AI could predict the number in my head by reading my facial expressions, then told me to cut my arm off for the good of the human race, I'd suddenly feel very conflicted indeed.

Comment author: Strange7 20 June 2012 08:10:14AM 0 points

If the AI isn't smart enough to at least come up with a reason I'll accept at face value, something's very wrong. People can be convinced to do incredibly stupid-seeming things for enough money, and if whatever the AI wants is as good for the world as it's supposed to be, there will be some way to make money by doing it.

Comment author: eirenicon 28 July 2009 03:06:08PM 0 points

Would an AGI ever try to convince you of something you can't understand? I wouldn't try to explain special relativity to a kindergarten class. Surely an AGI would know perfectly well what you are capable of grasping. If it tries to convince me of something while knowing it cannot, what then are its intentions?

Comment author: UnholySmoke 29 July 2009 02:50:00PM 4 points

Ack. 'Surely an AGI would be able to...' should be made illegal. I can quite easily conceive of an artificial mind that cannot model my thought processes. There's a great long stretch of cleverness above human level before you reach omniscience!

There are also some humans who can understand many things, and some who can understand only very few. If I'm being asked to sever a limb or stamp on a puppy, I at least want my shiny new master to have a stab at explaining why.