Warning: grouchiness follows.
A draft-reader suggested to me that this question is poorly motivated: what other kinds of agents could there be, besides “could”/“would”/“should” agents?
Actually, I made the same criticism of that category, only in more detail. Was that acausal, or am I just more worthy of reviewing your drafts?
And your response in the footnote looks like little more than, "don't worry, you'll get it some day, like schoolkids and fractions". Not helpful.
Humans ... have CSA-like structure. That is, we consider “alternatives” and act out the alternative from which we “expect” the highest payoff.
Excuse me, isn't this just the classical "rational agent" model that research has long since refuted? For one thing, many actions people perform are trivially impossible to interpret this way (in the sense of your diagram), given reaction times and known computational properties of the brain. That is, the brain doesn't have enough time to form enough distinct substates isomorphic to several human-like responses, then evaluate them, then compare the evaluations.
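To be concrete about what I'm objecting to: the structure the post ascribes to humans amounts to something like the following toy sketch (the function names and the example payoffs are mine, not the post's):

```python
# Toy sketch of the claimed CSA structure: enumerate "alternatives"
# ("could"), evaluate each one's expected payoff ("would"), and act
# out the argmax ("should"). All names here are illustrative.

def csa_act(alternatives, expected_payoff):
    """Consider each alternative, evaluate it, act out the best one."""
    evaluations = {a: expected_payoff(a) for a in alternatives}  # "would"
    return max(evaluations, key=evaluations.get)                 # "should"

# Hypothetical example: choosing a reply style for a comment.
payoffs = {"polite": 1, "grouchy": 3, "silent": 0}
choice = csa_act(payoffs.keys(), payoffs.get)  # picks "grouchy"
```

My claim above is precisely that the brain, on the relevant timescales, cannot be running anything isomorphic to this loop: it has no time to instantiate several distinct candidate responses, score them, and compare the scores.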
For another, there's the whole heuristics-and-biases literature, repeated ad infinitum on OB/LW.
Finally, even when humans do believe they're evaluating several choices in search of the best payoff (per some multivariable utility function), what really happens is that they pick one quickly based on "gut instinct" -- meaning some heuristic, good or bad -- and then bend all conscious evaluation to favor it. In at least some laboratory settings this has been shown explicitly: the researchers can predict what the subject will do, and then the subject offers a plausible-sounding rationalization for why they did it.
(And if you say, "using heuristics is a kind of evaluation of alternatives", then you're again stretching the boundaries of the concept of a CSA wide enough to be unhelpful.)
There are indeed cases where people truly consider the alternatives, compute the actual consequences, and check their actual congruence with their actual values -- but this is an art people have to genuinely work at; it is not characteristic of general human action.
In any case, all of the above assumes a distinction I'm not convinced you've made. To count as a CSA, is it necessary that you be physically able to extract the alternatives under consideration ("Silas considered making his post polite, but assigned it low utility")? Because the technology certainly doesn't exist to do that on humans. Or is it only necessary that it be possible in principle? If the latter, you run back into the problem of the laws of physics being embedded in all parts of the universe:
I observe a pebble. Therefore, I know the laws of the universe. Therefore, I can compute arbitrary counterfactuals. Therefore, I compute a zero pebble-utility for everything the pebble "pebble-could" do, except follow the laws of physics.
Therefore, there is no "not-CSA" option.
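The pebble reductio can be put in the same toy terms as the model itself (again, all names here are mine): any deterministic system gets redescribed as a CSA just by padding its one physically possible behavior with zero-utility "alternatives."

```python
# The reductio in code: redescribe a pebble as a CSA by assigning
# zero "pebble-utility" to everything except following physics.
# The alternative list and utility function are illustrative.

def pebble_utility(alternative):
    return 1 if alternative == "follow physics" else 0

alternatives = ["follow physics", "hover", "roll uphill"]
pebble_choice = max(alternatives, key=pebble_utility)
```

Since this construction works for literally anything, "is a CSA" does no work as a category.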
-- John Tukey
FWIW, the exact quote (from pp.13-14 of this article) is:
"Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise."
Your paraphrase is snappier though (as well as being less ambiguous; it's hard to tell in the original whether Tukey intends the adjectives "vague" and "precise" to apply to the questions or the answers).