All of mark_spottswood's Comments + Replies

That critique might sound good in theory, but I think it falls flat in practice. Hearsay is a rule with more than 30 exceptions, many of which seem quite technical and arbitrary. But I have seen no evidence that the public views legal systems that employ this sort of convoluted hearsay regime as less legitimate than legal systems that take a more naturalistic, Benthamite approach.

In practice, even laypeople who are participating in trials don't really see the doctrine that lies beneath the surface of evidentiary rulings, so I doubt they form their judgments of the system's legitimacy based on such details.

A few comments:

  1. It is somewhat confusing (at least to legal readers) that you use legal terms in non-standard ways. Conflating confrontation with hearsay issues muddies things, because making people available for cross-examination solves the confrontation problem but not always the hearsay one.

  2. I like your emphasis on the filtering function of evidentiary rules. Keep in mind, however, that these rules have little effect in bench trials (which are more common than jury trials in state courts of general jurisdiction). And relatively few cases reach trial

... (read more)
0CornellEngr2008
I'm skeptical. After all, the anchoring effect isn't weakened by being reminded that it exists. It seems that anything the jury sees will influence their decision, and they will likely be unable to discount its influence appropriately to account for its unreliability (especially if it's emotionally charged). I've always been uneasy when the judge on some court TV drama sustains an objection or asks that something be stricken from the record, as if that means it's stricken from the minds of the jury so it won't influence their decision. We have good reason to believe that that's impossible - the jurors' brains have been primed with a piece of argumentation that the judge has recognized is inadmissible. It's too late. At least, it has always seemed that way to me. What does the legal literature say about this?

Good points.

This may be why very smart folks often find themselves unable to commit to an actual view on disputed topics, despite being better informed than most of those who do take sides. When attending to informed debates, we hear a chorus of disagreement, but very little overt agreement. And we are wired to conduct a head count of proponents and opponents before deciding whether an idea is credible. Someone who can see the flaws in the popular arguments, and who sees lots of unpopular expert ideas but few ideas that informed people agree on, may giv... (read more)

Internal credibility is of little use when we want to compare the credentials of experts in widely differing fields. But it is useful if we want to know whether someone is trusted in their own field. Now suppose that we have enough information about a field to decide that good work in that field generally deserves some of our trust (even if the field's practices fall short of the ideal). By tracking internal credibility, we have picked out useful sources of information.

Note too that this method could be useful if we think a field is epistemically rotte... (read more)

True. But it is still easier in many cases to pick good experts than to independently assess the validity of expert conclusions. So we might make more overall epistemic advances by a twin focus: (1) Disseminate the techniques for selecting reliable experts, and (2) Design, implement and operate institutions that are better at finding the truth.

Note also that your concern can also be addressed as one subset of institutional design questions: How should we reform fields such as medicine or economics so that influence will better track true expertise?

Experts don't just tell us facts; they also offer recommendations as to how to solve individual or social problems. We can often rely on the recommendations even if we don't understand the underlying analysis, so long as we have picked good experts to rely on.

2[anonymous]
There is a key right there. Ability in rational thinking and understanding of common biases can drastically affect whom we consider a good expert. The most obvious examples are 'experts' in medicine and economics. I suggest that the most influential experts in those fields are not those with the most accurate understanding. Rationalist training could be expected to improve our judgement when choosing experts.

One can think that individuals can profit from being more rational, while also thinking that improving our social epistemic systems or participating in them actively will do more to increase our welfare than focusing on increasing individual rationality.

Care to explain the basis for your skepticism?

Interestingly, there may be a way to test this question, at least partially. Most legal systems have procedures in place to allow judgments to be revisited upon the discovery of new evidence that was not previously available. There are many procedural complications in making cross-national comparisons, but it would be interesting to compare the rate at which such motions are granted in systems that are more adversarially driven versus more inquisitorial systems (in which a neutral magistrate has more control over the collection of evidence).
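
As a toy illustration of the comparison (the counts below are invented, not real data), here is a minimal Python sketch of a two-proportion z-test on motion grant rates:

```python
import math

def two_proportion_z(granted_a, n_a, granted_b, n_b):
    """z-statistic for the difference between two grant rates,
    using the pooled normal approximation."""
    p_a, p_b = granted_a / n_a, granted_b / n_b
    pooled = (granted_a + granted_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 120 of 1,000 new-evidence motions granted under
# adversarial procedure vs. 80 of 1,000 under inquisitorial procedure.
z = two_proportion_z(120, 1000, 80, 1000)
print(round(z, 2))  # ~2.98; |z| > 1.96 suggests a difference at the 5% level
```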

Obviously it helps if the experts are required to make predictions that are scorable. Over time, we could examine both the track records of individual experts and entire disciplines in correctly predicting outcomes. Ideally, we would want to test these predictions against those made by non-experts, to see how much value the expertise is actually adding.
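
To make "scoring" concrete, here is a minimal sketch using the Brier score (mean squared error of probability forecasts); the expert and lay forecasts below are invented for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary
    outcomes (1 = event happened). Lower is better; always
    answering 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented track records on the same five events:
expert_forecasts = [0.9, 0.2, 0.7, 0.8, 0.1]
lay_forecasts    = [0.6, 0.5, 0.5, 0.6, 0.4]
outcomes         = [1,   0,   1,   1,   0]

print(brier_score(expert_forecasts, outcomes))  # ~0.038 (better)
print(brier_score(lay_forecasts, outcomes))     # ~0.196
```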

Another proposal, which I raised on a previous comment thread, is to collect third-party credibility assessments in centralized databases. We could collect the rates at which expert witnesses are permitt... (read more)

2Eliezer Yudkowsky
The suggestions from the second paragraph all seem rather incestuous. Propagating trust is great but it should flow from a trustworthy fountain. Those designated "experts" need some non-incestuous test as their foundation (a la your first paragraph).

Another good example is the legal system. It serves many individual participants poorly on a truth-seeking level; it encourages them to commit strongly to an initial position and make only those arguments that advance their cases, while doing everything they can to conceal their cases' flaws short of explicit misrepresentation. They are rewarded for winning, whether or not their position is correct. On the other hand, this set-up (combined with modern liberalized disclosure rules) works fairly well as a way of aggregating all the relevant evidence ... (read more)

5RobinHanson
The legal system supposedly encourages biased individuals to aggregate evidence; I'm more of a skeptic, though, about how well it actually does this in practice.

Words can become less useful when they attach to too much as well as too little. A perfectly drawn map that indicates only the position and exact shape of North America will often be less useful than a less-accurate map that gives the approximate location of its major roads and cities. Similarly, a very clearly drawn map that does not correspond to the territory it describes is useless. So defining terms clearly is only one part of the battle in crafting good arguments; you also need terms that map well onto the actual territory and that do so at a usef... (read more)

Imagining that someone "had a reason to seriously present" the Obama-Mammoth hypothesis is to make the hypothesis non-absurd. If there is real evidence in favor of the hypothesis, then it is obviously worth considering. But that is just to fight the example; it doesn't tell us much about the actual line between absurd claims and claims that are worth considering.

In the world we actually inhabit, an individual who believed that they had good reasons to think that the president was an extinct quadruped would obviously be suffering from a thought d... (read more)

Christianity is false, but it is harder to falsify it than it is to show that Barack Obama is not a non-sapient extinct mammal. I can prove the second false to a five-year-old of average intelligence by showing a picture of Obama and an artist's rendition of a mammoth. It would take some time to explain to the same five-year-old child why Christianity does not make sense as a description of the world.

This difference—that while both claims are false, one claim is much more obviously false than the other—explains why Christianity has many adherents but ... (read more)

A good reason to take this suggestion to heart: The terms "rationality" and "rational" have a strong positive value for most participants here—stronger, I think, than the value we attach to words like "truth-seeking" or "winning." This distorts discussion and argument; we push too hard to assert that things we like or advocate are "rational" in part because it feels good to associate our ideas with the pretty word.

If you particularize the conversation—i.e., you are likely to get more money by one-boxing o... (read more)

Not necessarily. The vast majority of propositions are false. Most of them are obviously false; we don't need to spend much mental energy to reject the hypothesis that "Barack Obama is a woolly mammoth," or "the moon is made of butternut squash." "Absurd" is a useful label for statements that we can reject with minimal mental effort. And it makes sense that we refuse to consider most such statements; our mental time and energy are very limited, and if we want to live productive lives, we have to focus on things that have som... (read more)

1jimmy
I agree, and think that you explained it well, but I would personally go back to calling Christianity absurd after looking into it and finding no evidence. If you look, and find no evidence, what separates Christianity from "Barack Obama is a woolly mammoth"?
3thomblake
But you don't reject the hypothesis that "Barack Obama is a wooly mammoth" because it's absurd - nobody has seriously presented it. If someone had a reason to seriously present it, then I'd not dismiss it out of hand - if only because I was interested enough to hear it in the first place, so would want to see if the speaker was making a clever joke, or perhaps needed immediate medical care. As EY might say, noticing a hypothesis is unlikely enough in the first place that you should probably pay some attention to it, if the speaker was one of the people you listen to. cf. Einstein's Arrogance

The fact that you do not value something does not serve very well as an argument for why others should stop valuing it. For those of us who do experience a conflict between a desire to deter and a desire to punish fairly, you have not explained why we should prioritize the first goal over the second when trying to reduce this conflict.

We have at least two goals when we punish: to prevent the commission of antisocial acts (by deterrence or incapacitation) and to express our anger at the breach of social norms. On what basis should we decide that the first type of goal takes priority over the second type, when the two conflict? You seem to assume that we are somehow mistaken when we punish more or less than deterrence requires; perhaps the better conclusion is that our desire to punish is more driven by retributive goals than it is by utilitarian ones, as Sunstein et al. suggest.

In other words, if two of our terminal values are conflicting, it is hard to see a principled basis for choosing which one to modify in order to reduce the conflict.

On this we agree. If we have 60% confidence that a statement is correct, we would be misleading others if we asserted that it was true in a way that signalled a much higher confidence. Our own beliefs are evidence for others, and we should be careful not to communicate false evidence.

Stripped down to essentials, Eliezer is asking you to assert that God exists with more confidence than it sounds like you have. You are not willing to say it without weasel words because to do so would be to express more certainty than you actually have. Is that right?

There seems to be some confusion here concerning authority. I have the authority to say "I like the color green." It would not make sense for me to say "I believe I like the color green" because I have first-hand knowledge concerning my own likes and dislikes and I'm sufficiently confident in my own mental capacities to determine whether or not I'm deceiving myself concerning so simple a matter as my favorite color.

I do not have the authority to say, "Jane likes the color green." I may know Jane quite well, and the probabilit

... (read more)

Sure, it is useful to ask for clarification when we don't understand what someone is saying. But we don't need to settle on one "correct" meaning of the term in order to accomplish this. We can just recognize that the word is used to refer to a combination of characteristics that cognitive activity might possess. I.e. "rationality" usually refers to thinking that is correct, clear, justified by available evidence, free of logical errors, non-circular, and goal-promoting. Sometimes this general sense may not be specific enough, particularly where different aspects of rationality conflict with each other. But then we should use other words, not seek to make rationality into a different concept.

0Annoyance
"But we don't need to settle on one "correct" meaning of the term in order to accomplish this. " We do in order to understand what we're saying, and for others to understand us. Switching back and forth between different meanings can not only confuse other people but confuse ourselves. To reach truly justified conclusions, our reasoning must be logically equivalent to syllogisms, with all of the precision and none of the ambiguity that implies.

It depends how much relative value you assign to the following things:

  1. Increasing your well-being and life satisfaction.
  2. Your reputation (drug users have low status, mostly).
  3. Not having unpleasant contacts with the criminal justice system.
  4. Viewing the world through your current set of perceptive and affective filters, rather than through a slightly different set of filters.
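
A minimal sketch, assuming entirely made-up weights and scores, of how one might fold these four considerations into a single comparison:

```python
def weighted_value(scores, weights):
    """Combine scores on several considerations into one number."""
    return sum(weights[k] * scores[k] for k in scores)

# Invented scores on a -10..10 scale for the four considerations above:
use     = {"wellbeing": 3, "reputation": -4, "legal_risk": -5, "filters": -1}
abstain = {"wellbeing": 0, "reputation": 0, "legal_risk": 0, "filters": 0}
weights = {"wellbeing": 1.0, "reputation": 0.5, "legal_risk": 2.0, "filters": 0.8}

print(weighted_value(use, weights), weighted_value(abstain, weights))  # -9.8 0.0
```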
1[anonymous]
Hmm, good point, Mark.

Because we can have preferences over our preferences. For instance, I would prefer it if I preferred to eat healthier foods because that preference would clash less with my desire to stay fit and maintain my health. There is nothing irrational about wishing for more consistent (and thus more achievable) preferences.

0[anonymous]
Right, but your new set of preferences would have to be consistent with the old one, and a positive perspective change doesn't necessarily guarantee that consistency.

Arguing over definitions is pointless, and somewhat dangerous. If we define the word "rational" in some sort of site-specific way, we risk confusing outsiders who come here and who haven't read the prior threads.

Use the word "rational" or "rationality" whenever the difference between its possible senses does not matter. When the difference matters, just use more specific terminology.

General rule: When terms are confusing, it is better to use different terms than to have fights over meanings. Indeed, your impulse to fig... (read more)

3Annoyance
Arguing over definitions is pointless if we're trying to name ideas. Arguing over definitions is absolutely necessary if there's disagreement over how to understand the stated positions of a third party. Establishing clear definitions is extremely important. If someone has committed themselves to rationality, it's natural for us to ask "what do they mean by 'rationality'?" They should already have a clear and ready definition, which once provided, we can use to understand their commitment.

I think the idea of a nested dialogue is a great one. You could also incorporate reader voting, so that weak arguments get voted off of the dialogue while stronger ones remain, thus winnowing down the argument to its essence over time.

I wonder if our hosts, or any contributors, would be interested in trying out such a procedure as a way of exploring a future disagreement?
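
A minimal sketch of the winnowing step (the vote counts and threshold are invented for illustration):

```python
def winnow(arguments, min_votes, min_keep=1):
    """Drop arguments whose net vote falls below a threshold,
    but always keep at least `min_keep` of the top-voted ones."""
    ranked = sorted(arguments, key=lambda a: a["votes"], reverse=True)
    kept = [a for a in ranked if a["votes"] >= min_votes]
    return kept if len(kept) >= min_keep else ranked[:min_keep]

# Invented entries in a nested dialogue:
thread = [
    {"text": "Core argument",   "votes": 14},
    {"text": "Weak digression", "votes": -3},
    {"text": "Decent point",    "votes": 5},
]
print([a["text"] for a in winnow(thread, min_votes=0)])
# ['Core argument', 'Decent point']
```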

Useful practice: Systematize credibility assessments. Find ways to track the sincerity and accuracy of what people have said in the past, and make such information widely available. (An example from the legal domain would be a database of expert witnesses, which includes the number of times courts have qualified them as experts on a particular subject, and the number of times courts adopted or rejected their conclusions.) To the extent such info is widely available, it both helps to "sterilize" the information coming from untrustworthy sources and to promote the contributions that are most likely to be helpful. It also helps improve the incentive structure of truth-seeking discussions.
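
A minimal sketch of what one row in such a database might look like; every field name and number here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ExpertRecord:
    """A hypothetical row in an expert-witness credibility database."""
    name: str
    subject: str
    times_offered: int = 0       # times proffered as an expert
    times_qualified: int = 0     # times a court accepted the qualification
    conclusions_adopted: int = 0
    conclusions_rejected: int = 0

    def qualification_rate(self):
        return self.times_qualified / self.times_offered if self.times_offered else None

    def adoption_rate(self):
        decided = self.conclusions_adopted + self.conclusions_rejected
        return self.conclusions_adopted / decided if decided else None

# Illustrative entry:
rec = ExpertRecord("J. Doe", "forensic accounting",
                   times_offered=12, times_qualified=11,
                   conclusions_adopted=7, conclusions_rejected=3)
print(rec.qualification_rate(), rec.adoption_rate())  # ~0.92 0.7
```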

Sorry -- I meant, but did not make clear, that the word "rationality" should be avoided only when the conversation involves the clash between "winning" and "truth seeking." Otherwise, things tend to bog down in arguments about the map, when we should be talking about the territory.

1Kenny
I agree – in contexts where 'truth seeking' and 'winning' are different, we should qualify references to 'rationality'.

Eliezer said: This, in turn, ends up implying epistemic rationality: if the definition of "winning" doesn't require believing false things, then you can generally expect to do better (on average) by believing true things than false things - certainly in real life, despite various elaborate philosophical thought experiments designed from omniscient truth-believing third-person standpoints.

--

I think this is overstated. Why should we only care what works "generally," rather than what works well in specific subdomains? If rationality mean... (read more)

4Eliezer Yudkowsky
Maybe "truth-seeking" versus "winning", if there's a direct appeal to one and not the other. But I am generally willing to rescue the word "rationality".

Pwno said: I find it hard to imagine a time where truth-seeking is incompatible with acting rationally (the way I defined it). Can anyone think of an example?


The classic example would invoke the placebo effect. Believing that medical care is likely to be successful can actually make it more successful; believing that it is likely to fail might vitiate the placebo effect. So, if you are taking a treatment with the goal of getting better, and that treatment is not very good (but it is the best available option), then it is better from a rationalist goal... (read more)