Nathaniel_Eliot

Jef Allbright:
So "Functional self-similarity of agency extended from the 'individual' to groups.", in other words, means "groups of humans follow similar practices to achieve their goals"? Or am I missing some mystic subtlety in the choice of "functional" over "similar", "self-similarity" over "a grouping like its parts", and agency over "method of achieving goals"? You took a lot of time to dance around the point that "groups also exclude of parts of themselves for similar reasons".

You seem sure you're the smartest person in this conversation, though I'm not sure you're aware of that assumption. It doesn't speak well of your judgment, given the present company.

Forgive me if I am biased toward Eliezer's assessment. He has proved his worth to me, while you have only disclaimed yours.

I doubt that there's anything more complicated behind the AI getting free than a very good Hannibal Lecture: find weaknesses in the Gatekeeper's mental and social framework, and callously and subtly work them until the Gatekeeper (and thus the gate) breaks. People claiming they have no weaknesses (would-be Gatekeepers, with a bias toward ignoring their weaknesses) are easy prey: they don't even see where they should be defending.

It involves the AI spending far more time researching (and truly mistreating) its target than one would expect for a $10 bet. That's the essence of magic, according to Penn and Teller: doing far more setup work than anyone would expect given the payoff.