All of Kevin Dorst's Comments + Replies

I get where you're coming from, but where do you get off the boat? The result is a theorem of probability: if (1) you update by conditioning on e, and (2) you had positive covariance between your own opinion and the truth, then you commit hindsight bias.  So to say this is irrational we need to either say that (1) you don't update by conditioning, or (2) you don't have positive covariance between your opinion and the truth. Which do you deny, and why?

The standard route is to deny (2) by implicitly assuming that you know exactly what your prior probability... (read more)
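To make the covariance condition concrete, here's a quick toy simulation (my own setup, not the model from the post): if your time-t credence covaries positively with how things turn out, then conditioning on the outcome shifts your best estimate of that past credence toward it.

```python
import numpy as np

# Toy illustration (assumed setup, not the post's model): T is whether the event
# in fact occurs; q is your time-t credence in it. If q covaries positively with T
# (your evidence at t tends to track the truth), then an agent who later learns T
# and reconstructs their past credence by conditioning gives answers that depend
# on how things turned out, which is the pattern labeled hindsight bias.
rng = np.random.default_rng(0)
n = 200_000
strength = rng.beta(2, 2, n)                 # how strongly the time-t evidence points toward the truth
T = rng.random(n) < 0.5                      # whether the event occurs
q = np.where(T, 0.5 + 0.4 * strength, 0.5 - 0.4 * strength)   # time-t credence; Cov(q, T) > 0

print("E[q]       =", q.mean())              # ~0.50: estimate of past credence before learning the outcome
print("E[q | T=1] =", q[T].mean())           # ~0.70: estimate after learning it happened
print("E[q | T=0] =", q[~T].mean())          # ~0.30: estimate after learning it didn't
```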

3[anonymous]
I deny that "hindsight bias", as a term used in common and specialized parlance, has anything to do with (1). If you respond to the implicit question of "what did you expect at time t" with anything that involves updates from stuff after time t, you are likely committing hindsight bias. If you are a Bayesian updater, you do change your credence in something by conditioning, as time passes. But it is precisely the action of changing the subjective probability distribution you are talking about from the old one to the new one that is epistemically incorrect, if you are focused on the question of what you had actually believed before obtaining new information (and thus doing the Bayesian update). As I said earlier:

Agreed that people have lots of goals that don't fit in this model. It's definitely a simplified model.  But I'd argue that ONE of (most) people's goals is to solve problems; and I do think, broadly speaking, it is an important function (evolutionarily and currently) of conversation.  So I still think this model gets at an interesting dynamic.

I think it depends on what we mean by assuming the truth is in the center of the spectrum.  In the model at the end, we assume the truth is at the extreme left of the initial distribution—i.e. µ=40, while everyone's estimates are higher than 40.  Even then, we end up with a spread where those who end up in the middle (ish—not exactly the middle) are both more accurate and less biased.

What we do need is that wherever the truth is, people will end up being on either side of it.  Obviously in some cases that won't hold. But in many cases it will—it's basically inevitable if people's estimates are subject to noise and people's priors aren't in the completely wrong region of logical space.
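A quick toy check of that claim (my own numbers, not the model from the post): with estimates equal to the truth plus independent noise, people land on both sides of it, and those in the middle of the spread are more accurate.

```python
import numpy as np

# Toy check (my own setup, not the post's model): noisy estimates of a true value
# land on both sides of it, and the middle of the resulting spread is more accurate.
rng = np.random.default_rng(1)
truth = 40.0
estimates = np.sort(truth + rng.normal(0, 10, size=999))

print("share above the truth:", (estimates > truth).mean())          # ~0.5: people end up on both sides
print("median estimate:      ", np.median(estimates))                # close to 40
middle, tails = estimates[333:666], np.r_[estimates[:333], estimates[666:]]
print("mean abs error, middle third vs tails:",
      np.abs(middle - truth).mean(), np.abs(tails - truth).mean())   # the middle third is closer to the truth
```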

Hm, I'm not following your definitions of P and Q. Note that there's no (that I know of) easy closed-form expression for the likelihoods of various sequences for these chains; I had to calculate them using dynamic programming on the Markov chains.

The relevant effect driving it is that the degree of shiftiness (how far it deviates from 50%-heads rate) builds up over a streak, so although in any given case where Switchy and Sticky deviate (say there's a streak of 2, and Switchy has a 30% chance of continuing while Sticky has a 70% chance), they have the same degree... (read more)

See the discussion in §6 of the paper.  There are too many variations to run, but it at least shows that the result doesn't depend on knowing the long-run frequency is 50%; if we're uncertain about both the long-run hit rate and about the degree of shiftiness (or whether it's shifty at all), the results still hold.

Does that help?

Mathematica notebook is here! Link in the full paper.

How did you define Switchy and Sticky? The shiftiness needs to build up over at least 2 steps; one-step (2-state) matrices won't exhibit the effect.  So it won't appear if they are, e.g.,

Switchy = (0.4, 0.6; 0.6, 0.4)

Sticky = (0.6, 0.4; 0.4, 0.6)

But it WILL appear if they build up to (say) 60%-shiftiness over two steps. Eg:

Switchy = (0.4, 0, 0.6, 0; 0.45, 0, 0.55, 0; 0, 0.55, 0, 0.45; 0, 0.6, 0, 0.4)

Sticky = (0.6, 0, 0.4, 0; 0.55, 0, 0.45, 0; 0, 0.45, 0, 0.55; 0, 0.4, 0, 0.6)
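In case it helps, here's a rough Python sketch (my own code, not the paper's Mathematica notebook) of those two chains and the sequence-likelihood calculation, reading the four states as [2+ tails streak, single tail, single head, 2+ heads streak]:

```python
import numpy as np

# A minimal sketch (my own construction): four "streak states", ordered
# [T2+, T1, H1, H2+], i.e. a 2+ streak of tails, a single tail, a single head,
# a 2+ streak of heads. Row i gives the distribution over the next state.
switchy = np.array([
    [0.40, 0.00, 0.60, 0.00],   # after 2+ tails: 60% chance the next flip switches to heads
    [0.45, 0.00, 0.55, 0.00],   # after a single tail: 55% chance of switching
    [0.00, 0.55, 0.00, 0.45],   # after a single head: 55% chance of switching
    [0.00, 0.60, 0.00, 0.40],   # after 2+ heads: 60% chance of switching
])
sticky = np.array([
    [0.60, 0.00, 0.40, 0.00],   # after 2+ tails: 60% chance of continuing the streak
    [0.55, 0.00, 0.45, 0.00],
    [0.00, 0.45, 0.00, 0.55],
    [0.00, 0.40, 0.00, 0.60],
])

def state(last, streak):
    """Index of the streak state given the last outcome and current streak length."""
    if last == "T":
        return 0 if streak >= 2 else 1
    return 3 if streak >= 2 else 2

def sequence_likelihood(seq, P, p_first_heads=0.5):
    """Probability of an observed H/T sequence under a streak-state chain."""
    prob = p_first_heads if seq[0] == "H" else 1 - p_first_heads
    last, streak = seq[0], 1
    for flip in seq[1:]:
        new_streak = streak + 1 if flip == last else 1
        prob *= P[state(last, streak), state(flip, new_streak)]
        last, streak = flip, new_streak
    return prob

for seq in ["HHHH", "HHHT", "HTHT"]:
    print(seq, round(sequence_likelihood(seq, switchy), 4), round(sequence_likelihood(seq, sticky), 4))
```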

Would it have helped if I added the attached paragraphs (in the paper, page 3, cut for brevity)?

Frame the conclusion as a disjunction: "either we construe 'gambler's fallacy' narrowly (as by definition irrational) or broadly (as used in the blog post, for expecting switches).  If the former, we have little evidence that real people commit the gambler's fallacy.  If the latter, then the gambler's fallacy is not a fallacy."

2Richard_Kennaway
This seems to be an argument against the very idea of an error. How can people possibly make errors of reasoning? If the gambler knows the die rolls are independent, how could they believe in streaks? How could someone who knows the spelling of a word possibly mistype it? There seems to be a presumption of logical omniscience and consistency.

I see the point, though I don't see why we should be too worried about the semantics here. As someone mentioned below, I think the "gambler's fallacy" is a folk term for a pattern of beliefs, and the claim is that Bayesians (with reasonable priors) exhibit the same pattern of beliefs.  Some relevant discussion in the full paper (p. 3), which I (perhaps misguidedly) cut for the sake of brevity:

Good question.  It's hard to tell exactly, but there's lots of evidence that the rise in "affective polarization" (dislike of the other side) is linked to "partisan sorting" (or "ideological sorting")—the fact that people within political parties increasingly agree on more and more things, and also socially interact with each other more.  Lilliana Mason has some good work on this (and Ezra Klein got a lot of his opinions in his book on this from her).  

This paper raises some doubts about the link between the two, though.  It's hard to k... (read more)

I think it depends a bit on what we mean by "rational". But it's standard to define it as "doing the best you CAN, to get to the truth (or, in the case of practical rationality, to get what you want)".  We want to put the "can" proviso in there so that we don't say people are irrational for failing to be omniscient.  But once we put it in there, things like resource-constraints look a lot like constraints on what you CAN do, and therefore make less-ideal performance rational.  

That's controversial, of course, but I do think there's a case to be made that (at least some) "resource-rational" theories ARE ones on which people are being rational.

Interesting!  A middle-ground hypothesis is that people are just as (un)reasonable as they've always been, but the internet has given people greater exposure to those who disagree with them.

Nice point! I think I'd say where the critique bites is in the assumption that you're trying to maximize the expectation of q_i.  We could care about the variance as well, but once we start listing the things we care about—chance of publishing many papers, chance of going into academia, etc—then it looks like we can rephrase it as a more-complicated expectation-maximizing problem. Let U be the utility function capturing the balance of these other desired traits; it seems like the selectors might just try to maximize E(U_i).  

Of course, that's abs... (read more)

Very nice point!  We had definitely thought about the fact that when slots are large and candidates are few, that would give people from less prestigious/legible backgrounds an advantage.  (We were speculating idly whether we could come up with uncontroversial examples...)

But I don't think we'd thought about the point that people might intentionally manipulate how legible their application is. That's a very nice point!  I'm wondering a bit how to model it. Obviously if the Bayesian selectors know that they're doing this and exactly how, they... (read more)

Nope, it's the same thing!  Had meant to link to that post but forgot to when cross-posting quickly.  Thanks for pointing that out—will add a link.

I agree you could imagine someone who didn't know the factions' positions.  But of course any real-world person who's about to become politically opinionated DOES know the factions' positions.

More generally, the proof is valid in the sense that if P1 and P2 are true (and the person's degrees of belief are representable by a probability function), then Martingale fails.  So you'd have to somehow say how adding that factor would lead one of P1 or P2 to be false.  (I think if you were to press on this you should say P1 fails, since not knowing what the positions are still lets you know that people's opinions (whatever they are) are correlated.)

4tailcalled
Maybe a clearer way to frame it is that I'm objecting to this assumption:

Nice point! Thanks.  Hadn't thought about that properly, so let's see.  Three relevant thoughts:

1) For any probabilistic but non-omniscient agent, you can design tests on which it's poorly calibrated.  (Let its probability function be P, and let W = {q: P(q) > 0.5 & ¬q} be the set of things it's more than 50% confident in but are false.  If your test is {{q,¬q}: q ∈ W}, then the agent will have probability above 50% in all its answers, but its hit rate will be 0%.)  So it doesn't really make sense to say that a system is ... (read more)
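For concreteness, here's a toy version of that construction (code of my own): even a perfectly calibrated agent scores a 0% hit rate on a test assembled from the claims it is confident in but wrong about.

```python
import numpy as np

# Toy illustration (my own construction): keep only the claims the agent is >50%
# confident in but that are false, and ask it exactly those. It answers each with
# >50% confidence and gets every one wrong, so it looks maximally overconfident
# on that test, even though it is perfectly calibrated overall.
rng = np.random.default_rng(2)
n = 10_000
credence = rng.random(n)                   # the agent's credence in each claim q
truth = rng.random(n) < credence           # by construction, the agent is perfectly calibrated

adversarial = (credence > 0.5) & ~truth    # W = {q : P(q) > 0.5 and q is false}
print("hit rate among all >50% claims:   ", truth[credence > 0.5].mean())   # ~0.75
print("hit rate on the adversarial test: ", truth[adversarial].mean())      # 0.0
```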

How does that argument go?  The same is true of a person doing (say) the cognitive reflection task. 

"A bat and a ball together cost $1.10; the bat costs $1 more than the ball; how much does the ball cost?"

Standard answer: "$0.10".  But also standardly, if you say "That's not correct", the person will quickly realize their mistake.
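For reference, the arithmetic behind the correction: with $b$ the ball's price,

$$b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05,$$

so the ball costs $0.05; answering "$0.10" would make the bat $1.10 and the total $1.20.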

3AnthonyC
Well, that's true. People do also do that. I was trying to point to the idea of LLMs being able to act like multiple different people when properly prompted to do so.

Hm, I'm not sure I follow how this is an objection to the quoted text.  Agreed, it'll use bits of the context to modify its predictions. But when the context is minimal (as it was in all of my prompts, and in many other examples where it's smart), it clearly has a default, and the question is what we can learn from that default. 

Clearly that default behaves as if it is much smarter and clearer than the median internet user. Ask it to draw a tikz diagram, and it'll perform better than 99% of humans. Ask it about the Linda problem, and it'll perfor... (read more)

Thanks for the thoughtful reply! Two points.

1) First, I don't think anything you've said is a critique of the "cautious conclusion", which is that the appearance of the conjunction fallacy (etc) is not good evidence that the underlying process is a probabilistic one.  That's still interesting, I'd say, since most JDM psychologists circa 1990 would've confidently told you that the conjunction fallacy + gambler's fallacy + belief inertia show that the brain doesn't work probabilistically. Since a vocal plurality of cognitive scientists now think they're... (read more)

Yeah, that looks right! Nice. Thanks!

Fair! I didn't work out the details of the particular case, partly for space and partly from my own limited bandwidth in writing the post.  I'm actually having more trouble writing it out now that I sit down with it, in part because of the choice-dependent nature of how your values change.

Here's how we'd normally money-pump you when you have a predictable change in values.  Suppose at t1 you value X at $1 and at t2 you predictably will come to value it at $2.  Suppose at t1 you have X; since you value it at $1, you'll trade it to me for $1, ... (read more)

3Sweetgum
I think I got it. Right after the person buys X for $1, you offer to buy it off them for $2, but with a delay, so they keep X for another month before the sale goes through. After the month passes, they now value X at $3 so they are willing to pay $3 to buy it back from you, and you end up with +$1.

Nice point. Yeah, that sounds right to me—I definitely think there are things in the vicinity and types of "rationalization" that are NOT rational.  The class of cases you're pointing to seems like a common type, and I think you're right that I should just restrict attention. "Preference rationalization" sounds like it might get the scope right.

Sometimes people use "rationalization" for something that is by definition irrational—like "that's not a real reason, that's just a rationalization".  And it sounds like the cases you have in mind fit that mold.

I hadn't t... (read more)

Ah, sorry!  Yes, they're exchanging with the experimenters, who have an excess of both mugs and pens.  That's important, sorry to be unclear!

Yeah, I think it's a good question how much of a role some sort of salient default is doing. In general ("status quo bias") people do have a preference for default choices, and this is of course generally reasonable since "X is the default option" is generally evidence that most people prefer X. (If they didn't, the people setting the defaults should change it!).  So that phenomenon clearly exists, and seems like it'd help explain the effect.

I don't know much empirical literature off-hand looking at variants like you're thinking of, but I imagine some... (read more)

Not sure I totally follow, but does this help?  Suppose it's true that 10 of 50 people who got mugs prefer the pen, so 20% of them prefer the pen. Since assignments were randomized, we should also expect 10 of 50  (20% of) people who got pens to prefer the pens. That means that the other 40 pen-receivers prefer mugs, so those 40 will trade too.  Then we have 10 mugs-to-pens trades + 40 pens-to-mugs trades, for a total of 50 of 100 trades.

2AprilSR
...are they trading with, like, a vending machine, rather than with each other?

Thanks, yeah I agree that this is a good place to press.  A few thoughts:

  1. I agree with what Herb said below, especially about the default aversion to trading in contexts where you have uncertainty
  2. I think you're totally right that those other explanations could play a role. I doubt the endowment effect has a single explanation, especially since manipulations of the experimental setup can induce big changes in effect sizes. So presumably the effect is the combination of a lot of factors—I didn't mean incomparability to be the only one, just one co
... (read more)
3Richard_Ngo
I think this is assuming the phenomenon you want to explain. If we agree that there are benefits to not trading in general (e.g. less regret/foolishness if it goes wrong), then we should expect that the benefits of not trading will outweigh the benefits of trading not just when you're precisely indifferent, but also when you have small preferences between them (it would be bizarre if people's choices were highly discontinuous in that way). So then you don't need to appeal to incomparability at all.

So then the salient question becomes: why would not trading be a "salient default" at all? If you think about it just in terms of actions, there are many cases where it's just as easy as trading (e.g. IIRC the endowment effect still applies even when you're not physically in possession of either good yet, and so where trading vs not would just be the difference between saying "yes" and "no"). But at least conceptually trading feels like an action and not trading feels like inaction.

So then the question I'm curious about becomes "does the endowment effect apply when the default option is to trade, i.e. when trading feels like inaction and not trading feels like action?" E.g. when the experimenter says "I'm going to trade unless you object". That would help distinguish between "people get attached to things they already have" vs "people just go with whatever option is most salient in their mind", i.e. whether it's really about the endowment or just inaction bias.

Yeah that's a reasonable way to look at it. I'm not sure how much the two approaches really disagree: both are saying that the actual intervals people are giving are narrower than their genuine 90% intervals, and both presumably say that this is modulated by the fact that in everyday life, 50% intervals tend to be better. Right?

I take the point that the bit at the end might misrepresent what the irrationality interpretation is saying, though!

 

I haven't come across any interval-estimation studies that ask for intervals narrower than 20%, though Don Moo... (read more)

3Lukas Finnveden
Yeah sounds right to me! Nice, thanks!

Oops, must've gotten my references crossed!  Thanks.

This Wikipedia page says the height of a "Gerald R. Ford-class" aircraft carrier is 250 feet; so, close.

https://en.wikipedia.org/wiki/USS_Gerald_R._Ford

Crossposting from Substack:

Super interesting!

I like the strategy, though (from my experience) I do think it might be a big ask for at least online experimental subjects to track what's going on. But there are also ways in which that's a virtue—if you just tell them that there are no (good) ways to game the system, they'll probably mostly trust you and not bother to try to figure it out. So something like that might indeed work! I don't know exactly what calibration folks have tried in this domain, so will have to dig into it more. But it definitely seems l... (read more)

Thanks for the thoughtful reply!  Cross-posting the reply I wrote on Substack as well:

I like the objection, and am generally very sympathetic to the "rationality ≈ doing the best you can, given your values/beliefs/constraints" idea, so I see where you're coming from.  I think there are two places I'd push back on in this particular case.

1) To my knowledge, most of these studies don't use incentive-compatible mechanisms for eliciting intervals. This is something authors of the studies sometimes worry about—Don Moore et al talk about it as a concer... (read more)

2Violet Hour
The first point is extremely interesting. I'm just spitballing without having read the literature here, but here's one quick thought that came to mind. I'm curious to hear what you think.

1. First, instruct participants to construct a very large number of 90% confidence intervals based on the two-point method.
2. Then, instruct participants to draw the shape of their 90% confidence interval.
3. Inform participants that you will take a random sample from these intervals, and tell them they'll be rewarded based on both: (i) the calibration of their 90% confidence intervals, and (ii) the calibration of the x% confidence intervals implied by their original distribution, where x is unknown to the participants and will be chosen by the experimenter after inspecting the distributions.
4. Allow participants to revise their intervals, if they so desire.

So, if participants offered the 90% confidence interval [0, 10^15] on some question, one could back out (say) a 50% or 5% confidence interval from the shape of their initial distribution. Experimenters could then ask participants whether they're willing to commit to certain implied x% confidence intervals before proceeding.

There might be some clever hack to game this setup, and it's also a bit too clunky and complicated. But I think there's probably a version of this which is understandable, and for which attempts to game the system are tricky enough that I doubt strategic behavior would be incentivized in practice.

On the second point, I sort of agree. If people were still overprecise, another way of putting your point might be to say that we have evidence about the irrationality of people's actions, relative to a given environment. But these experiments might not provide evidence suggesting that participants are irrational characters. I know Kenny Easwaran likes (or at least liked) this distinction in the context of Newcomb's Problem. That said, I guess my overall thought is that any plausible account of the “rati