This is a chapter-by-chapter review of Thinking and Deciding by Jonathan Baron (UPenn, twitter). It won't be a detailed summary like badger's excellent summary of Epistemology and the Psychology of Human Judgment, in part because this is a 600-page textbook and a full summary would be far longer than I want to write here. I'll try to provide enough detail that people can seek out the chapters they find interesting, but this review is by no means a replacement for reading those chapters. Every chapter is discussed below, followed by a brief "what should I read?" section if you already know what you're interested in.

We already have a thread for textbook recommendations, but this book is central enough to Less Wrong's mission that it seems like it's worth an in-depth review. I'll state my basic impression of the whole book up front: I expect most readers of LW would gain quite a bit from reading the book, especially newer members, as it seems like a more focused and balanced introduction to the subject of rationality than the Sequences.

Baron splits the book into three sections: Thinking in General, Probability and Belief, and Decisions and Plans.

I may as well quote the first page in its entirety, as I feel it gives a good description of the book:

Beginning with its first edition and through three subsequent editions, Thinking and Deciding has established itself as the required text and important reference work for students and scholars of human cognition and rationality. In this, the fourth edition, Jonathan Baron retains the comprehensive attention to the key questions addressed in previous editions- How should we think? What, if anything, keeps us from thinking that way? How can we improve our thinking and decision making? - and his expanded treatment of topics such as risk, utilitarianism, Bayes's theorem, and moral thinking. With the student in mind, the fourth edition emphasizes the development of an understanding of the fundamental concepts in judgment and decision making. This book is essential reading for students and scholars in judgment and decision making and related fields, including psychology, economics, law, medicine, and business.
Jonathan Baron is Professor of Psychology at the University of Pennsylvania. He is the author and editor of several other books, most recently Against Bioethics. Currently he is editor of the journal Judgment and Decision Making and president of the Society for Judgment and Decision Making (2007).

1. What is thinking?

This chapter will be mostly familiar to readers of Less Wrong; in the second paragraph, Baron says (in more words) 'rationality is what wins.' It may still be helpful, as Baron spells out a number of things that are often left unsaid on LW.

He splits thinking into three parts: thinking about decisions (instrumental rationality), thinking about beliefs (epistemic rationality), and thinking about goals. The last is a notoriously sticky subject. He also discusses his search-inference framework, which is how he describes minds as actually operating- coming across ideas, evaluating them, and proceeding from there. Most formal decision analysis assumes a fixed set of options and a well-defined objective function, but constructing those is exactly where real decision-makers struggle: identifying possibilities worth considering and comparing dissimilar outcomes.

The chapter is filled out with a discussion of understanding, knowledge as design, and examples of thinking processes (worth skimming over, but many of which will be familiar to experts in the relevant fields).

2. The study of thinking

Kahneman and Tversky get their first of many references here. Baron discusses a number of the methods used to learn about human cognition, mentioning a few of their pitfalls.

One, which bears repeating, is that most studies of biases report only means, rather than distributions. I learned the actual numbers behind the Asch conformity experiments about five years after I first heard about them, and was underwhelmed (32% incorrect answers, with ~75% of subjects giving at least one incorrect answer). A general human tendency is a different thing from a sizeable subset of easily-swayed subjects. Similarly, our article on Prospect Theory had a link in one of the comments to graphs of subjective probability, the most noteworthy of which were the two people whose curves were nearly linear. While Baron brings up this issue, he doesn't give many examples of it here.

He also mentions three models of thought: descriptive models, prescriptive models, and normative models. Descriptive models are what people actually do; normative models are what thinkers should do with infinite cognitive resources; prescriptive models are what thinkers should do with limited cognitive resources. This has come up on LW before, though the focus here has often been exclusively on the normative, even though the prescriptive seems the most useful.

Computer models of thinking are briefly discussed, but at a superficial level.

This chapter sees the first set of exercises. Overall, the exercises in the book provide a brief example or check rather than enough practice to develop mastery. I think that's the right choice for a book like this, but it has the potential to be a weakness.

3. Rationality

Again, Baron identifies rationality as “the kind of thinking that helps us achieve our goals.” Refreshingly, he focuses on optimal search, keeping in mind the costs of decision-making and information-gathering.

Much of this chapter will be familiar to someone who has read the Sequences, but it's presented tersely and lucidly. The section on rationality and emotion, for example, is only three pages long, yet it quickly identifies how the two interact in a way that should clear up common confusions.

4. Logic

The content in this chapter seems mostly unimportant- I imagine most readers of LW are much more interested in probabilistic reasoning than syllogisms. Still, Baron gives a readable (and not very favorable) description of the usefulness of formal logic as a normative model of thinking.

What is fascinating, though, is the section of the chapter that delves into the four-card problem and variations of it. Particularly noteworthy is the variation designed so that most people's intuitions are correct- people give the correct explanations of why they selected the cards they selected, and why they didn't select the cards they didn't select. But when their intuition is wrong, they give explanations that are just as sophisticated- but wrong. It's more evidence that the decision-making and verbal reason-providing modules are different- even someone who gives the correct explanation of the correct answer may stumble on a problem where their underlying simple heuristic (pick the cards mentioned in the question) fails.
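Baron walks through several variations; for concreteness, here's a minimal sketch of the standard version of the task (the usual "a vowel must have an even number on its other side" rule with the cards E, K, 4, 7- my own framing, not Baron's wording), showing why the logically required flips differ from the intuitive ones:

```python
# Wason four-card task: each card has a letter on one side and a number on the
# other. Rule under test: "if a card shows a vowel, its other side is even."
# A face must be flipped iff whatever is hidden behind it could falsify the rule.

def must_flip(face):
    if face.isalpha():
        # A vowel could hide an odd number (a falsifier); a consonant can't
        # falsify the rule no matter what its other side shows.
        return face in "AEIOU"
    # An odd number could hide a vowel (a falsifier); an even number can't.
    return int(face) % 2 == 1

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # ['E', '7'], not the intuitive ['E', '4']
```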

He presents a method of mental modeling that makes logical statements easier to correctly evaluate, and then there are a few logical inference exercises.

5. Normative theory of probability

Yet another introduction to Bayes. Baron focuses primarily on Bayesianism (called the "personal" theory of probability) but still introduces alternatives (the "frequency" theory, i.e. frequentism, and the "logical" theory, a subset of frequentism in which all events are required to have the same probability). This chapter will be useful for someone who doesn't have a firm probabilistic foundation, but holds little interest for others.

There are a handful of exercises for applying Bayes.
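As a taste of the kind of calculation those exercises involve (a generic worked example with made-up numbers, not one of Baron's), here is Bayes' theorem applied to the standard rare-disease test:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|not-H) * (1 - P(H)).

def posterior(prior, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: a 1% base rate, a test with a 90% hit rate and a 5%
# false-positive rate. The posterior is far lower than most people guess.
print(round(posterior(0.01, 0.90, 0.05), 3))  # 0.154
```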

6. Descriptive theory of probability judgment

This chapter primarily covers biases related to numerical probability estimates, many of which are classics in the heuristics and biases field (and so have probably been mentioned on Less Wrong at least once). The chapter shines when Baron goes into the detail of an experiment and its variations, as that gives a firmer view of what the experiment actually shows (and, importantly, what it does not show)- descriptions of biases where he only quotes a single experiment (or single feature of an experiment) feel weaker.

A major feature of this chapter is the implication that people are bad at numerical probability estimation mostly because they're unfamiliar with it, implying that calibration exercises may improve probability estimation. A 1977 study of weatherman calibration suggested they were very well calibrated, both with their estimates and with the confidence that should be placed in those estimates. More recent work shows that weathermen have systematic calibration biases.

7. Hypothesis testing

I was gratified to discover that this chapter was not about statistics, but how to come up with and test hypotheses. Baron discusses different models of scientific advancement, focusing on the sorts of likelihood ratios that they look for, as well as discussing the sort of mistakes people make when choosing tests for hypotheses. Many of the stories will probably be familiar- Ignaz Semmelweis gets a mention, though in more detail than I had seen before, as well as the 2-4-6 rule familiar to HPMOR fans and a variation of the four card experiment that makes the typical mistake more obvious.

He gives a baking example to suggest why people might search primarily for positive evidence- there may be benefits to getting a “yes” answer besides the information involved. If you're experimenting with cake recipes, and you think your last cake was good because of a feature, it makes sense to alter other features but keep the one you suspect the same, as that means a good cake is more likely; if you think a cake was bad because of a feature, it makes sense to alter that feature but keep the others the same, as that also means a good cake is more likely. In a purely scientific context, it makes sense to vary the element you think has an impact just to maximize the expected size of the impact, positive or negative.

He describes in more detail a methodology he's been discussing, "actively open-minded thinking," which seems to boil down to "don't just be willing to accept disconfirming evidence, go looking for it"; the full explanation comes a few chapters later.

8. Judgment of correlation and contingency

This chapter is descriptive; it begins with a description of correlations and then discusses human judgment of correlations. Unsurprisingly, people suffer from the illusion of control- they think there's more likely to be a correlation if their effort is involved- and from confirmation bias. There are some examples of the latter, where people find correlations that make intuitive sense but aren't in the data, and don't discover correlations that don't make intuitive sense that are in the data. There's also a brief section on how people use nearly useless evidence to support theories or dismiss evidence that doesn't support their theory. Overall, it's a short chapter that won't be surprising to LW readers (although some of the studies referenced may be new).

9. Actively open-minded thinking

I'll quote part of this chapter in full because I think it's a great description:

[G]ood thinking consists of (1) search that is thorough in proportion to the importance of the question, (2) confidence that is appropriate to the amount and quality of thinking done, and (3) fairness to other possibilities than the one we initially favor.

The chapter overall is very solid- it deftly combines normative predictions with descriptive biases to weave a prescriptive recommendation of how to think better. There are several great examples of actively open-minded thinking; in particular, the thought process of two students as they attempt to make sense of a story sentence by sentence.

Many of the suggestions in the chapter are extended by various LW posts, but the chapter seems useful as a concise description of the whole problem and illustration of a general solution. If you're having trouble fitting together various rationality hacks, this seems like a good banner to unite them under.

10. Normative theory of choice under uncertainty

This chapter is an introduction to utility theory, describing how it works, how multiple attributes can be consolidated into one score, and a way to resolve conflicts between agents with different utilities. It's a good introduction to decision analysis / utility theory, and there are some exercises, but there are no surprises for someone who's seen this before.
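For readers who haven't seen it before, here's a minimal sketch of the core calculation the chapter builds on- expected utility with a concave utility function- using toy numbers of my own rather than Baron's:

```python
import math

# Expected utility: value a gamble by the probability-weighted average of the
# utilities of its outcomes, not of the dollar amounts themselves.

def expected_utility(gamble, utility):
    return sum(p * utility(x) for p, x in gamble)

u = math.sqrt  # a concave (risk-averse) utility function, chosen for illustration

sure_thing = [(1.0, 50)]
coin_flip = [(0.5, 100), (0.5, 0)]

print(expected_utility(sure_thing, u))  # ~7.07
print(expected_utility(coin_flip, u))   # 5.0: this agent prefers the sure $50
```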

11. Descriptive theory of choice under uncertainty

This chapter is an introduction to different theories of how humans actually make decisions, like prospect theory and regret theory. There are a handful of exercises for understanding prospect theory.
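For concreteness, here's a small sketch of the prospect-theory machinery those exercises use: value is measured relative to a reference point, losses loom larger than gains, and probabilities are weighted nonlinearly. The functional forms and parameter values below are the commonly cited Tversky-Kahneman (1992) estimates, used purely for illustration rather than taken from this chapter:

```python
# Prospect theory in miniature: outcomes are gains or losses relative to a
# reference point, losses are weighted more heavily than gains, and
# probabilities are transformed by an inverse-S-shaped weighting function.

def value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_value(gamble):
    """gamble: list of (probability, outcome relative to the reference point)."""
    return sum(weight(p) * value(x) for p, x in gamble)

# Loss aversion: a 50/50 bet to win or lose $100 comes out negative...
print(prospect_value([(0.5, 100), (0.5, -100)]))
# ...while a small chance of a large gain gets overweighted (lottery appeal):
# weight(0.01) is roughly 0.055, not 0.01.
print(weight(0.01))
```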

Baron takes an even-handed approach to deviations from the normative theory. When discussing regret theory, for example, he notes that regret has a real emotional cost (and a real learning benefit)- but behaving according to descriptive theories because they're descriptive, rather than because they're useful, is a mistake. In many cases, those emotions can be manipulated by the choice of reference point.

He also discusses the ambiguity effect- where people treat known probabilities differently from unknown probabilities, giving examples both of laboratory situations (drawing balls from an urn with a partially known composition) and real-life situations (insuring unprecedented or unrepeatable events). Baron describes this as incompatible with personal probability and suggests it's related to framing- situations where the probabilities seem known can be changed into situations where probabilities seem unknown. This aversion to ambiguity, though, can be perfectly sensible insofar as it pushes decision-makers to acquire more information.

He also discusses a Tversky study in which most students pay money to defer a choice until they receive a relevant piece of information- yet when asked what they would choose given either possible answer, most realize they would choose the same thing either way, and so decline to defer.

12. Choice under certainty

This chapter is primarily descriptive, focusing on the problem of thinking about goals. Most people favor categorical goal systems- Baron gives a great example, from Gardiner and Edwards, of the California Coastal Commission, tasked with deciding which development projects to allow on the Pacific Coast. The commission was split into pro-development and pro-environment factions, which almost never agreed on which projects to allow and disallow. When asked to rank projects, most members would rank them solely by their preferred criterion, creating lists that strongly disagreed. When asked to take both criteria into account- with whatever weighting they wanted- they still weighted their preferred criterion heavily, but the projects that were both very valuable and not very environmentally damaging floated to the top of both lists, creating significant agreement.
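A toy version of that result, with numbers I've invented rather than Gardiner and Edwards' data: two raters weight the same two attributes very differently, yet a project that scores well on both rises toward the top of both weighted rankings.

```python
# (name, development value, environmental quality), each on an invented 0-10 scale
projects = [
    ("marina",      9, 2),
    ("boardwalk",   8, 7),   # good on both dimensions
    ("oil_jetty",  10, 1),
    ("nature_path", 3, 9),
    ("hotel",       7, 3),
]

def ranking(w_dev, w_env):
    return [name for name, dev, env in
            sorted(projects, key=lambda p: w_dev * p[1] + w_env * p[2], reverse=True)]

print(ranking(0.8, 0.2))  # ['oil_jetty', 'boardwalk', 'marina', 'hotel', 'nature_path']
print(ranking(0.2, 0.8))  # ['nature_path', 'boardwalk', 'hotel', 'marina', 'oil_jetty']
# "boardwalk", strong on both criteria, sits near the top of both lists even
# though the two raters weight the criteria in nearly opposite ways.
```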

The list of biases is long, and each has a study or story associated with it. Many of the effects have been mentioned on LW somewhere, but it's very useful to have them placed next to each other (and separated from the probabilistic biases), so I'd recommend everyone read this chapter.

13. Utility measurement

This descriptive chapter discusses the difficult challenge of measuring utilities. It introduces both decision analysis and cost-benefit analysis- the latter converts outcomes to dollars to guide decisions, while the former converts them to utility values.

People are not very skilled at satisfying the axioms we would like them to satisfy. For example, consider the challenge of valuing a certain $50 against a p chance of $100 (and $0 otherwise). A subject will often give an answer like p = .7. Then, when later asked how much a 70% chance of $100 is worth, the same subject will answer $60. That inconsistency needs to be resolved before their answers can be used as parameters for any decision. Thankfully, this is an area of active research, and ways of eliciting probabilities and values that hold up under reflective equilibrium are gradually being developed. (This particular chapter, while it sounds that note of hope, is mostly negative: here are the methods that have been tried, and here are their crippling problems.)
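To spell out why those two answers clash (my own arithmetic, using standard expected-utility bookkeeping):

```python
# Scale utility so that u($0) = 0 and u($100) = 1.
# Answer 1, "I'm indifferent between $50 for sure and a 0.7 chance of $100,"
# pins down u($50) = 0.7 * 1 + 0.3 * 0 = 0.7.
u_50 = 0.7

# Answer 2 asks for the certainty equivalent of a 70% chance of $100, whose
# expected utility is also 0.7. A consistent subject should therefore name the
# amount whose utility is 0.7, i.e. $50 - but says $60 instead.
eu_of_gamble = 0.7 * 1 + 0.3 * 0
stated_certainty_equivalent = 60
print(eu_of_gamble == u_50)               # True: the two questions probe the same point
print(stated_certainty_equivalent != 50)  # True: the stated answers don't agree
```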

This seems like a chapter that would be useful for anyone who wants to use utilities in an argument or model- treating them like they're unambiguous, easily measured objects when they actually seem to be fuzzy and hard to pin down can lead to significant problems, and thinking clearly about values is a spot where LW could do better.

14. Decision analysis and values

This chapter is a more prescriptive approach to the same problem- given that utilities and values are hard to find, where do we look for them? A dichotomy familiar to LW readers- instrumental and terminal values- appears here as "means-ends objective hierarchy" or "means values" and "fundamental values."

It contains a wealth of examples, including a computer-buying one with memory options ranging from 64KB to 640KB, with the hilarious comment that "you are buying this computer many years ago, when these numbers made sense!" There are also practical elicitation suggestions- rather than trying to produce a point estimate directly, start from a number that's too high and lower it until you're indifferent, then start from a number that's too low and raise it until you're indifferent, giving you an indifference range (which you can either report as-is or collapse to its midpoint as a point estimate).

Lexical preferences (also called categorical preferences elsewhere) and tradeoffs are discussed- Baron takes the position (that I share) that lexical preferences are actually tradeoffs with very, very high weights. (How do we trade off human lives and dollars? We should require a lot of dollars for a life- but not an infinite amount.) There's a discussion of micromorts (though he doesn't use that term) and of historical attempts to teach decision analysis that should be interesting to CFAR (though the references are a few decades old, now). The discussion of the examples contains quite a bit of practical advice, and the chapter seems worthwhile for almost everyone.

15. Quantitative judgment

This chapter describes three common quantitative problems- scoring, ranking, and classifying- and discusses some biases that hamper human judgment in those tasks, along with some recommendations. Statistical prediction rules make an appearance, though they're not called that. One fascinating suggestion is that a model of a person's judgments can actually outperform that person, since the model doesn't have off days and the person does.
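That's the classic "bootstrapping the judge" result. A minimal sketch of the mechanics, with invented data and a simple linear model standing in for whatever forms the actual studies used:

```python
import numpy as np

# "Bootstrapping the judge": fit a simple linear model to a judge's own past
# ratings, then use the model in place of the judge. The data are invented
# purely to show the mechanics.

rng = np.random.default_rng(0)
cues = rng.uniform(0, 10, size=(50, 3))              # e.g. test score, GPA, interview
policy = np.array([0.5, 0.3, 0.2])                   # the judge's implicit weights
ratings = cues @ policy + rng.normal(0, 1.0, 50)     # judge = policy + noisy "off days"

# Recover the judge's weighting policy by least squares.
weights, *_ = np.linalg.lstsq(cues, ratings, rcond=None)

new_case = np.array([7.0, 6.0, 9.0])
print(new_case @ weights)  # the model's rating: the judge's policy minus the noise
```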

This chapter will have some new material for LWers, and seems like a good extension of the previous chapter.

16. Moral Judgment and Choice

This chapter discusses morality from the point of view of decision-making, which is a refreshing perspective. Baron strongly endorses consequentialism and weakly endorses utilitarianism, providing a host of moral questions on which many people deviate from the consequentialist or utilitarian position.

A recurring theme is omission bias: people tend to judge active involvement in a situation in which someone is made worse off as worse than passive involvement in such a situation, even if the end result is better for everyone. People also weight intentions, which doesn't fit a direct consequentialist view.

Overall, the chapter seems valuable for reframing moral questions- placing them within the realm of pragmatism by moving to the perspective of decisions- but provides very little in the way of answers. Both the consequentialist and utilitarian positions are controversial and come with significant drawbacks, and Baron is fair enough in presenting those drawbacks and controversies, though in a rather abridged form.

17. Fairness and justice

This chapter is an extension of the previous chapter, focusing on intuitions dealing with fairness and justice. Baron details situations in which they agree and disagree with utilitarian analysis. Noteworthy is the undercurrent of adaptation-execution and not utility-maximization - fairness has tangible benefits, but people will often pursue fairness even at the cost of tangible benefits.

This chapter (and to a lesser extent the previous one) seems odd in light of chapter 15, in which the fallibility of individual judgment took center stage, with the recommendation that applying rules derived from individual judgment can often do better. It is good to know the reasoning that justifies moral intuitions, especially if one is interested in their boundaries, but when those boundaries impact outcomes they become political questions. If the sole point of punishment is deterrence (and that is the only sensible utilitarian justification), the question of whether or not a given decision can influence future decisions is a sticky one. Perhaps the full consequentialist reckoning will recommend unthinking application of the rules, even in cases where direct consequentialist reckoning recommends suspending them.

18. Social dilemmas: cooperation versus defection

This chapter focuses on descriptive experiments- how people actually behave in social dilemmas- finding them to be much more cooperative than normative theory would recommend. There is some ambiguity, which he discusses, about what the "normative theory" even is: utilitarianism recommends cooperation in the prisoner's dilemma, for example, because it maximizes total utility, whereas expected-utility theory recommends defection, because defection is a dominant strategy.
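To make that tension concrete, here's a standard (invented) prisoner's dilemma payoff matrix: defection dominates for each player individually, while mutual cooperation maximizes the total payoff.

```python
payoffs = {  # (my move, their move) -> (my payoff, their payoff); invented numbers
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Dominance: whatever the other player does, D pays me more than C...
for theirs in ("C", "D"):
    assert payoffs[("D", theirs)][0] > payoffs[("C", theirs)][0]

# ...yet mutual cooperation maximizes the sum of the two payoffs.
totals = {moves: sum(p) for moves, p in payoffs.items()}
print(max(totals, key=totals.get))  # ('C', 'C')
```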

The value of the chapter lies mostly in the study results- a few are interesting, such as the finding that discussing the dilemma with the other participants beforehand significantly increases cooperation, or that subjects are more likely to defect in the prisoner's dilemma when they know their partner's response than when they are uncertain, even if they know their partner cooperated.

Typically, for social dilemmas (scenarios in which private gain requires public loss, or public gain requires private loss), decision-making biases increase the degree to which people cooperate. (This is somewhat unsurprising, since the normative recommendation is typically defection, and biases move real decisions away from the normative recommendation.) People fail to distinguish causal influence- "my voting makes people like me more likely to vote"- from diagnostic influence- "people like me voting makes me more likely to vote"- but one of the major reasons people give for voting is that it has a causal influence, rather than a merely diagnostic one.

19. Decisions about the future

This chapter is unlikely to contain any surprises for LWers, but serves as a fine introduction to discounting, both exponential and hyperbolic, and thus dynamic inconsistency. Also interesting (but too brief) is the discussion of goals in the context of time and plans and of goals as malleable objects.
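A small sketch of that inconsistency, with discount functions and parameters I've chosen for illustration rather than taken from the book:

```python
# Two discount functions, with parameters chosen for illustration.

def exponential(value, delay, rate=0.05):
    return value * (1 - rate) ** delay

def hyperbolic(value, delay, k=0.25):
    return value / (1 + k * delay)

# Choice: $50 after 1 day ("smaller-sooner") vs. $100 after 10 days
# ("larger-later"), judged from 30 days away and then again up close.
for name, discount in (("exponential", exponential), ("hyperbolic", hyperbolic)):
    for shift in (30, 0):
        ss = discount(50, 1 + shift)
        ll = discount(100, 10 + shift)
        print(name, shift, "smaller-sooner" if ss > ll else "larger-later")
# The exponential discounter picks larger-later both times; the hyperbolic
# discounter picks larger-later from a distance but flips to smaller-sooner
# as the rewards draw near - a preference reversal.
```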

Baron describes four methods of self-control: extrapsychic devices (removing a tempting option), control of attention (thinking about things other than the tempting option), control of emotion (cultivating an incompatible emotion), and personal rules (viewing situations as instances of general policies, rather than as isolated events). Again, the discussion is brief- only two pages- though the subject is of great interest to many here.

20. Risk

This chapter focuses on descriptive approaches to risk- survey responses and government regulation- as the normative approach to risk has mostly been detailed in the rest of the book: use expected utility theory. Most people are beset by biases and innumeracy, though, and so there's a whole chapter of material on misjudgments of risk and insurance.

Many of the biases, though perhaps not the examples, will be familiar to LWers. On the whole they're somewhat uninteresting, since most seem to result simply from innumeracy: when given a table of deaths per year from four causes with wildly different prevalences, subjects were correctly willing to pay more to reduce larger risks by the same percentage as smaller risks. But their willingness to pay scaled much more slowly than the risks- on average, subjects were willing to pay only about 20 times as much to prevent 20% of the deaths from a cause that killed 10,000 times as many people, rather than the 10,000 times as much that proportional scaling would imply. Those distorted willingnesses to pay show up in government regulations. People were also more willing to pay for protection against the unfamiliar than the familiar- even though the relative benefit was far higher for protection against the familiar. (The illusion of control also shows up, distorting perceptions of risk.)

What should I read?

  • Almost everyone: 7 and 9.
  • I'm hunting biases: 6, 8, 11, 12, and then 15-20 (perhaps without 18).
  • I'm interested in moral reasoning: 13 and 16 should be required reading. 14, 15, and 17-19 will be useful.
  • I'm a decision maker: 10 and 14 will be directly useful, but check out the bias chapters too.
  • I'm new to rationality: Start off with 1-4.
  • I'm an expert at rationality but haven't heard of Baron: Still read 1-4, just to get his perspective of the field.
  • I don't have a strong background in Bayesianism: read chapter 5.

 

Comments

He also mentions three models of thought: descriptive models, prescriptive models, and normative models. Descriptive models are what people actually do; normative models are what thinkers should do with infinite cognitive resources; prescriptive models are what thinkers should do with limited cognitive resources. This has come up on LW before, though the focus here has often been exclusively on the normative, even though the prescriptive seems the most useful.

This seems like a pretty useful classification scheme.

I've heard people mention Gary Drescher's Good and Real as also overlapping with the sequences a fair amount. How do y'all feel about mentioning these two books on the sequences page?

Minor error: 'adaption' should be 'adaptation'.

Thanks, fixed.

I found the summary itself fairly useful and now intend to read the book, if I can find the time and a reasonably priced copy.

If you want the book, I have uploaded it to the following hosts:

http://dropcanvas.com/mri3r/1

OR

http://www.peejeshare.com/files/363225114/Thinking_and_Deciding_-_Jonathan_Baron.mobi.html

Edited to update links.

i've recently started reading this book, but the search-inference framework seems obviously silly, neglecting simple concepts such as "system 1 does thinking"

what is up with this?

the search-inference framework seems obviously silly

The search-inference framework matches my introspective account of how I make most of my decisions. It also seems to match my professional experience in numerical optimization. For example, we have four trucks and fifty deliveries to make; which deliveries go in which truck, and what order should they be delivered in? We write out what a possibility looks like, what our goals are, and how a program can go from one possibility to other (hopefully better) possibilities, and when it should stop looking and tell us what orders to give the drivers. Does it clash with your experience of decision-making?

neglecting simple concepts such as "system 1 does thinking"

It's not clear to me what you mean by "System 1 does thinking." Could you unpack that for me?

Does it clash with your experience of decision-making?

so, it seems a decent model for system-2 decision making

however, most of our minds is system-1 and is nowhere near so spocky

It's not clear to me what you mean by "System 1 does thinking." Could you unpack that for me?

most of our minds and our cognitive power is instantiated as subconscious system 1 mechanics, not anything as apparent as search-inference

for example, http://cogsci.stackexchange.com/questions/1/how-is-it-that-taking-a-break-from-a-problem-sometimes-allows-you-to-figure-out


or, it says things like "Naive theories are systems of beliefs that result from incomplete thinking." and i think "uh sure but if you treat it as a binary then you'll have to classify all theories as naive . i don't think you have any idea what complete thinking would actually look like" and then it goes on to talk about the binary between naive and non-naive theories and gives commonplace examples of both

it's like the book is describing meta concepts (models for human minds) purely by example (different specific wrong models about human minds) without even acknowledging that they're meta-level

i am experiencing this as disgusting and i notice that i am confused

visible likely resolutions to this confusion are "i am badly misunderstanding the book" and "people on lesswrong are stupider than i thought"

Sorry about the delay in responding! I was much busier this holiday season than I expected to be.

however, most of our minds is system-1 and is nowhere near so spocky ...

most of our minds and our cognitive power is instantiated as subconscious system 1 mechanics, not anything as apparent as search-inference

I'm not sure I would describe search-inference as spocky. I agree that having introspective access to it is spocky, but I don't think that's necessary for it to be search-inference, and I don't think Baron is making the claim that the decision process is always accessible. Baron's example on pages 10-11 seems to include both subconscious and conscious elements, in a way that his earlier description might not seem to, and I think that in this book Baron doesn't really care whether thinking happens in System 1 or System 2.

A lot of the time, I suspect people don't even realize that there's a search going on, because the most available response comes to mind and they don't think to look for more (see Your inner Google, availability bias, and so on), but it seems likely that System 1 did some searching before coming up with the most available response. Indeed, one of the things that another decision book, Decisive, proposes as a heuristic is that whenever a serious issue is under consideration, there should be at least two alternatives on the table (rather than just "A" or "not A," search so you're considering "A" or "B" at least).

or, it says things like "Naive theories are systems of beliefs that result from incomplete thinking." and i think "uh sure but if you treat it as a binary then you'll have to classify all theories as naive

Agree that "naive theory" is not a very good category. In the book, they define the binary as:

What makes them "naive" is that they are now superseded by better theories.

But calling the child's belief that the Earth is flat "naive" because we know better is useless for determining which of our current beliefs are naive, and as you rightly point out if we interpret that as "this belief is naive if someone knows better" then all beliefs must be suspected to be naive.

I think Baron began with naive theories because it makes it easy to give many examples of different mental models of the same phenomena, to highlight that mental models do not have to be concordant with reality, and to show that they can be fluid and changeable (and, implicitly, to be worried about changing the model too little). It sets up the concept of understanding, which is more important and which I remember thinking was sensible.

it's like the book is describing meta concepts (models for human minds) purely by example (different specific wrong models about human minds) without even acknowledging that they're meta-level

The start of the Knowledge, thinking, and understanding section is:

Thinking leads to knowledge. This section reviews some ideas about knowledge from cognitive psychology. These ideas are important as background to what follows.

I can read that as acknowledgement of it being meta-level, but I'm not sure that's what was intended.

visible likely resolutions to this confusion are "i am badly misunderstanding the book" and "people on lesswrong are stupider than i thought"

I'm unlikely to endorse the second resolution! :P My suspicion is that the primary differences in our reactions stem from our different backgrounds, different underlying models of cognition, and different habits when faced with unclear statements. Typically, when I come across a sentence that seems wrong, I default to asking "is there a weak interpretation of this sentence that I could agree with?". Sometimes the author has led with the wrong foot, and later it seems they meant something I could agree with; other times, no, they do appear to be wrong. If you get to the end of the Understanding section (basically, finishing chapter 1) and still don't think that Baron is coming from a reasonable place, that seems like it's worth an extended discussion.

"Prescriptive" seems like it can be split further. There's "what is the best thing to do in general with limited resources", i.e., "how to write an AI" -- this is close to normative but not quite the same thing -- and then there's "what specifically a human should do to compensate for biases". Which is meant by "prescriptive" in the book? The description above doesn't make it clear. We should have terms for both.

The book is focused on humans.

I'm not quite sure if I agree that that split is valuable. A lot of the prescriptive recommendations I know try to replace parts of decision-making entirely, which is different from bias-compensation, but building from scratch is very different from adapting a currently working system. I'll have to chew on that for a while (but feel free to put forth some implications of having such a split).

(For example, one thing I'm considering is that "limited resources" implies multiple limits to me- the decision-making system I would prescribe for myself and the one I would prescribe for an IQ 70 person are different. If I'm comfortable calling both of those "prescriptive," do I really need another word for what I'd tell an AI to do?)

This is great. Many thanks.

People fail to distinguish causal influence- "my voting makes people like me more likely to vote"- from diagnostic influence- "people like me voting makes me more likely to vote"- but one of the major reasons people give for voting is that it has a causal influence, rather than a merely diagnostic one.

Good! Score 1 for people. Because, after all, I am "people like me."