PhilGoetz comments on To what degree do you model people as agents? - Less Wrong

Post author: Swimmer963 25 August 2013 07:29PM

Comment author: PhilGoetz 26 August 2013 10:40:49PM, 2 points

I've used "agentness" in at least one LessWrong post to mean the amount of information you need to predict a person's behavior, given their environment, though I don't think I defined it that way. A person whose actions can always be predicted from existing social conventions, or from the content of the Bible, is not a moral agent. You might call them a moral person, but they've surrendered their agency.

Perhaps I first got this notion of agency from the Foundation Trilogy: Agenthood is the degree to which you mess up Hari Seldon's equations.

My preference in cases like this is not to puzzle over what the word "agent" means, but to try to come up with a related concept that is useful and (theoretically) measurable. Here I'd suggest measuring the number of bits that a person requires you to add to your model of the world. This has the advantage that a complex person who you're able to predict through long association with them still has a large number, while a random process is impossible to predict yet adds few bits to your model.
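The distinction above can be made concrete with a toy sketch. The helper names (`surprisal_bits`, `model_description_bits`) and the eight-bits-per-rule encoding are my own illustrative assumptions, not anything from the comment: the point is that a coin-flipping agent racks up enormous surprisal per observation, yet the *model* describing it is as short as the one for a fully conventional agent, so under the proposed measure it adds few bits.

```python
import math

def surprisal_bits(actions, predict):
    """Total surprisal: sum of -log2 P(action) under a predictive model."""
    return sum(-math.log2(predict(a)) for a in actions)

def model_description_bits(model_rules, bits_per_rule=8):
    """Crude MDL-style proxy: bits needed to write down the model itself.
    (bits_per_rule is an arbitrary illustrative constant.)"""
    return len(model_rules) * bits_per_rule

# A fully conventional agent: one rule predicts every action.
conventional = ["greet"] * 100
# A coin-flipping agent: each flip is maximally surprising.
coin = ["heads", "tails"] * 50

print(surprisal_bits(conventional, lambda a: 1.0))  # 0.0 bits of surprise
print(surprisal_bits(coin, lambda a: 0.5))          # 100.0 bits of surprise

# But the agency measure counts model size, not surprisal, and both
# agents are captured by a single short rule:
print(model_description_bits(["always greet"]))     # 8 bits added to the model
print(model_description_bits(["fair coin"]))        # 8 bits added to the model
```

So a complex person you've learned to predict can still score high (their model is long), while a random process scores low despite being unpredictable.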

This kind of distinction is one of the reasons I say that the difference between highly-intelligent people and people of average intelligence is greater than the difference between people of average intelligence and dogs.

(bits of surprise provided by a highly-intelligent person - bits of surprise provided by a human of average intelligence) > (bits of surprise provided by a human of average intelligence - bits of surprise provided by a dog).

However, my greater experience with humans, their greater homogeneity owing to shared language and culture, and whatever rationality they possess all bias each human toward requiring fewer added bits. As a result, the bits of surprise provided by a dog may on average exceed those provided by an average person. There's something wrong with this measure if it penalizes rationality and language use.