Protagoras comments on What should a friendly AI do, in this situation? - Less Wrong

8 Post author: Douglas_Reay 08 August 2014 10:19AM




Comment author: Protagoras 12 August 2014 08:52:53PM -1 points

I agree that an AI with such amazing knowledge should be unusually good at communicating its justifications effectively (because it can anticipate responses, etc.). I'm of the opinion that this is one of the numerous minor reasons for being skeptical of traditional religions: their supposedly all-knowing gods seem surprisingly bad at conveying messages clearly to humans. But to return to VAuroch's point, for the scenario to be "wildly inconsistent," the AI would have to be perfect at communicating such justifications, not merely unusually good. Even such amazing predictive ability does not seem to me sufficient to guarantee perfection.

Comment author: [deleted] 12 August 2014 10:13:28PM 0 points

Albert doesn't have to be perfect at communication. He doesn't even have to be good at it. He just needs confidence that no action or decision will be made until both parties (the human operators and Albert) are satisfied that they fully understand each other... which seems like a common-sense rule to me.

Comment author: VAuroch 13 August 2014 06:08:32AM -1 points

Whether it's common sense is irrelevant; it's not realistically achievable even between humans, who have much smaller inferential distances from one another than any human would have from an AI.