Nathaniel_Eliot comments on AIs and Gatekeepers Unite! - Less Wrong

10 Post author: Eliezer_Yudkowsky 09 October 2008 05:04PM


Comment author: Nathaniel_Eliot 09 October 2008 06:06:12PM 7 points

I doubt there's anything more complicated to the AI getting free than a very good Hannibal Lecture: find weaknesses in the Gatekeeper's mental and social framework, then callously and subtly work them until the Gatekeeper breaks (and with him, the gate). People who claim they have no weaknesses (would-be Gatekeepers, biased toward ignoring their own weaknesses) are easy prey: they don't even see where they should be defending.

This approach requires the AI to spend far more time researching (and truly mistreating) its target than one would expect for a $10 bet. That, according to Penn and Teller, is the essence of magic: doing far more setup work than the payoff would seem to justify.