A few comments:
It is somewhat confusing (at least to legal readers) that you use legal terms in non-standard ways. Conflating confrontation with hearsay issues is confusing because making people available for cross-examination solves the confrontation problem but not always the hearsay one.
I like your emphasis on the filtering function of evidentiary rules. Keep in mind, however, that these rules have little effect in bench trials (which are more common than jury trials in state courts of general jurisdiction). And relatively few cases reach trial.
Good points.
This may be why very smart folks often find themselves unable to commit to an actual view on disputed topics, despite being better informed than most of those who do take sides. When attending to informed debates, we hear a chorus of disagreement, but very little overt agreement. And we are wired to conduct a head count of proponents and opponents before deciding whether an idea is credible. Someone who can see the flaws in the popular arguments, and who sees lots of unpopular expert ideas but few ideas that informed people agree on, may giv...
Internal credibility is of little use when we want to compare the credentials of experts in widely differing fields. But it is useful if we want to know whether someone is trusted in their own field. Now suppose that we have enough information about a field to decide that good work in that field generally deserves some of our trust (even if the field's practices fall short of the ideal). By tracking internal credibility, we have picked out useful sources of information.
Note too that this method could be useful if we think a field is epistemically rotte...
True. But it is still easier in many cases to pick good experts than to independently assess the validity of expert conclusions. So we might make more overall epistemic advances by a twin focus: (1) Disseminate the techniques for selecting reliable experts, and (2) Design, implement and operate institutions that are better at finding the truth.
Note also that your concern can also be addressed as one subset of institutional design questions: How should we reform fields such as medicine or economics so that influence will better track true expertise?
Care to explain the basis for your skepticism?
Interestingly, there may be a way to test this question, at least partially. Most legal systems have procedures in place to allow judgments to be revisited upon the discovery of new evidence that was not previously available. There are many procedural complications in making cross-national comparisons, but it would be interesting to compare the rate at which such motions are granted in systems that are more adversarially driven versus more inquisitorial systems (in which a neutral magistrate has more control over the collection of evidence).
Obviously it helps if the experts are required to make predictions that are scoreable. Over time, we could examine both the track records of individual experts and entire disciplines in correctly predicting outcomes. Ideally, we would want to test these predictions against those made by non-experts, to see how much value the expertise is actually adding.
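As a rough sketch of what "scoreable" could look like in practice, one might compare forecasters with a simple Brier score; the names and numbers below are invented purely for illustration:

```python
# Minimal sketch: compare forecasters by Brier score (lower is better).
# All track records and probabilities below are hypothetical.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes (0 or 1)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Each entry: (probability the forecaster assigned to the event, what actually happened)
expert_track_record = [(0.9, 1), (0.7, 1), (0.6, 0), (0.8, 1)]
layperson_track_record = [(0.5, 1), (0.5, 1), (0.5, 0), (0.5, 1)]

print("Expert:   ", brier_score(expert_track_record))     # 0.125
print("Layperson:", brier_score(layperson_track_record))  # 0.25
# If the expert's score is not meaningfully lower than the layperson's,
# the expertise is adding little predictive value on these questions.
```

The same scoring could be aggregated over an entire discipline to see whether the field, and not just its individual stars, is actually adding predictive value.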
Another proposal, which I raised on a previous comment thread, is to collect third-party credibility assessments in centralized databases. We could collect the rates at which expert witnesses are permitt...
Another good example is the legal system. Individually it serves many participants poorly on a truth-seeking level; it encourages them to commit strongly to an initial position and make only those arguments that advance their cases, while doing everything they can to conceal their cases' flaws short of explicit misrepresentation. They are rewarded for winning, whether or not their position is correct. On the other hand, this set-up (combined with modern liberalized disclosure rules) works fairly well as a way of aggregating all the relevant evidence ...
Words can become less useful when they attach to too much as well as too little. A perfectly drawn map that indicates only the position and exact shape of North America will often be less useful than a less-accurate map that gives the approximate location of its major roads and cities. Similarly, a very clearly drawn map that does not correspond to the territory it describes is useless. So defining terms clearly is only one part of the battle in crafting good arguments; you also need terms that map well onto the actual territory and that do so at a usef...
To imagine that someone "had a reason to seriously present" the Obama-Mammoth hypothesis is to make the hypothesis non-absurd. If there is real evidence in favor of the hypothesis, then it is obviously worth considering. But that is just to fight the example; it doesn't tell us much about the actual line between absurd claims and claims that are worth considering.
In the world we actually inhabit, an individual who believed that they had good reasons to think that the president was an extinct quadruped would obviously be suffering from a thought d...
Christianity is false, but it is harder to falsify it than it is to show that Barack Obama is not a non-sapient extinct mammal. I can prove the second false to a five-year-old of average intelligence by showing a picture of Obama and an artist's rendition of a mammoth. It would take some time to explain to the same five-year-old child why Christianity does not make sense as a description of the world.
This difference—that while both claims are false, one claim is much more obviously false than the other—explains why Christianity has many adherents but ...
A good reason to take this suggestion to heart: The terms "rationality" and "rational" have a strong positive value for most participants here—stronger, I think, than the value we attach to words like "truth-seeking" or "winning." This distorts discussion and argument; we push too hard to assert that things we like or advocate are "rational" in part because it feels good to associate our ideas with the pretty word.
If you particularize the conversation—i.e., you are likely to get more money by one-boxing o...
Not necessarily. The vast majority of propositions are false. Most of them are obviously false; we don't need to spend much mental energy to reject the hypothesis that "Barack Obama is a woolly mammoth," or "the moon is made of butternut squash." "Absurd" is a useful label for statements that we can reject with minimal mental effort. And it makes sense that we refuse to consider most such statements; our mental time and energy are very limited, and if we want to live productive lives, we have to focus on things that have som...
The fact that you do not value something does not serve very well as an argument for why others should stop valuing it. For those of us who do experience a conflict between a desire to deter and a desire to punish fairly, you have not explained why we should prioritize the first goal over the second when trying to reduce this conflict.
We have at least two goals when we punish: to prevent the commission of antisocial acts (by deterrence or incapacitation) and to express our anger at the breach of social norms. On what basis should we decide that the first type of goal takes priority over the second type, when the two conflict? You seem to assume that we are somehow mistaken when we punish more or less than deterrence requires; perhaps the better conclusion is that our desire to punish is more driven by retributive goals than it is by utilitarian ones, as Sunstein et al. suggest.
In other words, if two of our terminal values are conflicting, it is hard to see a principled basis for choosing which one to modify in order to reduce the conflict.
On this we agree. If we have 60% confidence that a statement is correct, we would be misleading others if we asserted that it was true in a way that signalled a much higher confidence. Our own beliefs are evidence for others, and we should be careful not to communicate false evidence.
Stripped down to essentials, Eliezer is asking you to assert that God exists with more confidence than it sounds like you have. You are not willing to say it without weasel words because to do so would be to express more certainty than you actually have. Is that right?
...There seems to be some confusion here concerning authority. I have the authority to say "I like the color green." It would not make sense for me to say "I believe I like the color green" because I have first-hand knowledge concerning my own likes and dislikes and I'm sufficiently confident in my own mental capacities to determine whether or not I'm deceiving myself concerning so simple a matter as my favorite color.
I do not have the authority to say, "Jane likes the color green." I may know Jane quite well, and the probabilit
Sure, it is useful to ask for clarification when we don't understand what someone is saying. But we don't need to settle on one "correct" meaning of the term in order to accomplish this. We can just recognize that the word is used to refer to a combination of characteristics that cognitive activity might possess. I.e. "rationality" usually refers to thinking that is correct, clear, justified by available evidence, free of logical errors, non-circular, and goal-promoting. Sometimes this general sense may not be specific enough, particularly where different aspects of rationality conflict with each other. But then we should use other words, not seek to make rationality into a different concept.
It depends how much relative value you assign to the following things:
Because we can have preferences over our preferences. For instance, I would prefer it if I preferred to eat healthier foods because that preference would clash less with my desire to stay fit and maintain my health. There is nothing irrational about wishing for more consistent (and thus more achievable) preferences.
Arguing over definitions is pointless, and somewhat dangerous. If we define the word "rational" in some sort of site-specific way, we risk confusing outsiders who come here and who haven't read the prior threads.
Use the word "rational" or "rationality" whenever the difference between its possible senses does not matter. When the difference matters, just use more specific terminology.
General rule: When terms are confusing, it is better to use different terms than to have fights over meanings. Indeed, your impulse to fig...
I think the idea of a nested dialogue is a great one. You could also incorporate reader voting, so that weak arguments get voted off of the dialogue while stronger ones remain, thus winnowing down the argument to its essence over time.
I wonder if our hosts, or any contributors, would be interested in trying out such a procedure as a way of exploring a future disagreement?
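To make the winnowing idea concrete, here is a very rough sketch of a nested dialogue with reader voting; the data structure and threshold are hypothetical, not an existing site feature:

```python
# Rough sketch of a nested dialogue where low-voted branches are dropped.
# Structure, threshold, and example arguments are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    text: str
    votes: int = 0                       # net reader votes
    replies: List["Argument"] = field(default_factory=list)

def winnow(node: Argument, threshold: int = 0) -> Argument:
    """Return a copy of the dialogue keeping only replies at or above the threshold."""
    kept = [winnow(r, threshold) for r in node.replies if r.votes >= threshold]
    return Argument(node.text, node.votes, kept)

# Usage: weak replies (negative net votes) disappear; stronger ones remain nested.
root = Argument("Original claim", votes=5, replies=[
    Argument("Strong objection", votes=3, replies=[
        Argument("Weak rebuttal", votes=-2),
    ]),
    Argument("Off-topic quibble", votes=-4),
])
essence = winnow(root)
```

Run repeatedly as votes accumulate, this would leave behind only the branches of the exchange that readers found worth keeping.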
Useful practice: Systematize credibility assessments. Find ways to track the sincerity and accuracy of what people have said in the past, and make such information widely available. (An example from the legal domain would be a database of expert witnesses, which includes the number of times courts have qualified them as experts on a particular subject, and the number of times courts adopted or rejected their conclusions.) To the extent such info is widely available, it both helps to "sterilize" the information coming from untrustworthy sources and to promote the contributions that are most likely to be helpful. It also helps improve the incentive structure of truth-seeking discussions.
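A minimal sketch of what one record in such a database might contain; the field names and figures below are invented for illustration, not drawn from any real database:

```python
# Minimal sketch of a credibility-tracking record for an expert witness.
# Field names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class ExpertRecord:
    name: str
    subject: str
    times_offered: int         # times proffered as an expert on this subject
    times_qualified: int       # times a court qualified them as an expert
    conclusions_adopted: int   # times a court adopted their conclusions
    conclusions_rejected: int  # times a court rejected their conclusions

    def qualification_rate(self) -> float:
        return self.times_qualified / self.times_offered if self.times_offered else 0.0

    def adoption_rate(self) -> float:
        decided = self.conclusions_adopted + self.conclusions_rejected
        return self.conclusions_adopted / decided if decided else 0.0

# Example with invented numbers:
record = ExpertRecord("Dr. Example", "forensic accounting", 12, 11, 7, 3)
print(record.qualification_rate(), record.adoption_rate())
```

Even a few coarse rates like these, made widely available, would let parties and courts discount untrustworthy sources and reward the witnesses whose conclusions hold up.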
Eliezer said: This, in turn, ends up implying epistemic rationality: if the definition of "winning" doesn't require believing false things, then you can generally expect to do better (on average) by believing true things than false things - certainly in real life, despite various elaborate philosophical thought experiments designed from omniscient truth-believing third-person standpoints.
--
I think this is overstated. Why should we only care what works "generally," rather than what works well in specific subdomains? If rationality mean...
Pwno said: I find it hard to imagine a time where truth-seeking is incompatible with acting rationally (the way I defined it). Can anyone think of an example?
The classic example would invoke the placebo effect. Believing that medical care is likely to be successful can actually make it more successful; believing that it is likely to fail might vitiate the placebo effect. So, if you are taking a treatment with the goal of getting better, and that treatment is not very good (but it is the best available option), then it is better from a rationalist goal...
That critique might sound good in theory, but I think it falls flat in practice. Hearsay is a rule with more than 30 exceptions, many of which seem quite technical and arbitrary. But I have seen no evidence that the public views legal systems that employ this sort of convoluted hearsay regime as less legitimate than legal systems that take a more naturalistic, Benthamite approach.
In practice, even laypeople who are participating in trials don't really see the doctrine that lies beneath the surface of evidentiary rulings, so I doubt they form their judgments of the system's legitimacy based on such details.