Walter Raleigh is also famous for leading an expedition to discover El Dorado. He didn't find it, but he wrote a book saying that he definitely had, and that if people gave him funding for a second expedition he would bring back limitless quantities of gold. He got his funding, went on his second expedition, and of course found nothing. His lieutenant committed suicide out of shame, and his men decided the Spanish must be hoarding the gold and burnt down a Spanish town. On his return to England, Raleigh was tried for treason based on a combination of the attack on Spain (which England was at peace with at the time) and defrauding everyone about the El Dorado thing. He was executed in 1618.
For conflict theorists, the moral of this story is that accusing everyone else of lying and corruption can sometimes be a strategy con men use to deflect suspicion. For mistake theorists, the moral is that it's really easy to talk yourself into a biased narrative where you are a lone angel in a sea full of corruption, and you should try being a little more charitable to other people and a little harsher on yourself.
I think the Vassarian–Taylorist conflict–mistake synthesis moral is that in order to perform its function, the English court system needs to be able to punish Raleigh for "fraud" on the basis of his actions relative to what he knew or could have reasonably been expected to know, even while Raleigh is subjectively the hero of his own story and a sympathetic psychologist could eloquently and truthfully explain how easy it was for him to talk himself into a biased narrative.
Where mistake theorists treat politics as "science, engineering, or medicine" and conflict theorists treat politics as war, this view treats politics as evolutionary game theory: the unfolding over time of a population of many dumb, small agents executing strategies, forming coalitions, occasionally switching strategies to imitate those that are more successful in the local environment, &c. The synthesis view is mistake-theoretic insofar as the little agents are understood to be playing far from optimally and could do much better if they were smarter, but conflict-theoretic insofar as the games being played have large zero-sum components and you mostly can't take the things the little agents say literally. The "mistakes" aren't random, and aren't easily fixable with more information (in contrast to how if I said 57 was prime and you said "But 3 × 19", I would immediately say "Oops"); rather, they arise from the strategies being executed: it's not a coincidence that Raleigh talked himself into a narrative where he was a lone angel who would discover limitless gold.
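To make the picture concrete, here's a minimal simulation sketch of that kind of population dynamic (my own toy strategies and payoff numbers, purely illustrative, not anything from the formal literature): agents play a game with a zero-sum component and occasionally imitate a locally more successful neighbor, without any of them "understanding" anything.

```python
# Toy imitation dynamics: dumb agents executing fixed strategies,
# copying whichever neighbor happens to be doing better locally.
# All strategies and payoffs here are made up for illustration.
import random

STRATEGIES = ["honest", "deceive", "accuse"]

# Hypothetical row-player payoffs; the zero-sum component means one
# agent's gain is largely another's loss.
PAYOFF = {
    ("honest", "honest"): 2,   ("honest", "deceive"): -2,
    ("honest", "accuse"): 1,   ("deceive", "honest"): 3,
    ("deceive", "deceive"): 0, ("deceive", "accuse"): -1,
    ("accuse", "honest"): 0,   ("accuse", "deceive"): 2,
    ("accuse", "accuse"): -1,
}

def step(population, imitation_rate=0.1):
    """One generation: random pairwise play, then local imitation."""
    scores = [0.0] * len(population)
    for i in range(len(population)):
        j = random.randrange(len(population))
        scores[i] += PAYOFF[(population[i], population[j])]
    new_pop = list(population)
    for i in range(len(population)):
        j = random.randrange(len(population))  # a random "neighbor"
        # No reasoning: just copy strategies that seem to be winning.
        if scores[j] > scores[i] and random.random() < imitation_rate:
            new_pop[i] = population[j]
    return new_pop

pop = [random.choice(STRATEGIES) for _ in range(1000)]
for _ in range(200):
    pop = step(pop)
print({s: pop.count(s) for s in STRATEGIES})
```

The point isn't the specific numbers; it's that the final distribution of strategies is an artifact of which ones happened to pay in this environment, not of which ones were true or wise.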
Agents select beliefs either because they're true (and therefore useful for navigating the world) or because they successfully deceive other agents into mis-navigating the world in a way that benefits the belief-holder. "Be more charitable to other people" isn't necessarily great advice in general, because while sometimes other agents have useful true information to offer (Raleigh's The Discovery of Guiana "includes some material of a factual nature"), it's hard to distinguish it from misinformation that was optimized to benefit the agents who propagate it (Discovery of Guiana also says you should invest in Raleigh's second expedition).
Mistake theorists think conflict theorists are making a mistake; conflict theorists think mistake theorists are the enemy. Evolutionary game theorists think that conflict theorists are executing strategies adapted to an environment predominated by zero-sum games, and that mistake theorists are executing strategies adapted to an environment containing cooperative games (where the existence of a mechanism for externally enforcing agreements, like a court system, aligns incentives and thereby makes it easier to propagate true information).
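As a toy illustration of that last parenthetical (with made-up payoffs, a hypothetical FINE, and a hypothetical DETECTION probability of my own choosing), consider how an external enforcement mechanism changes an agent's best reply:

```python
# Without a court, "deceive" is the best reply to an honest partner;
# with a court that fines detected deception, honesty wins instead.
# All numbers here are invented for illustration.
FINE = 4           # hypothetical court-imposed penalty for deception
DETECTION = 0.75   # hypothetical probability the court catches a lie

def payoff(my_move, their_move, court=False):
    base = {("honest", "honest"): 2, ("honest", "deceive"): -2,
            ("deceive", "honest"): 3, ("deceive", "deceive"): -1}[(my_move, their_move)]
    if court and my_move == "deceive":
        base -= DETECTION * FINE  # subtract the expected penalty
    return base

for court in (False, True):
    best = max(["honest", "deceive"], key=lambda m: payoff(m, "honest", court))
    print(f"court={court}: best reply to an honest partner is {best!r}")
# court=False: 'deceive' (3 > 2)
# court=True:  'honest'  (3 - 0.75*4 = 0 < 2)
```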
I think you might be wrong about how fraud is legally defined. If the head of Pets.com says "You should invest in Pets.com, it's going to make millions, everyone wants to order pet food online", and then you invest in them, and then they go bankrupt, that person was probably biased and irresponsible, but nobody has committed fraud.
If Raleigh had simply said "Sponsor my expedition to El Dorado, which I believe has lots of gold", that doesn't sound like fraud either. But in fact he said:
For the rest, which myself have seen, I will promise these things that follow, which I know to be true. Those that are desirous to discover and to see many nations may be satisfied within this river, which bringeth forth so many arms and branches leading to several countries and provinces, above 2,000 miles east and west and 800 miles south and north, and of these the most either rich in gold or in other merchandises. The common soldier shall here fight for gold, and pay himself, instead of pence, with plates of half-a-foot broad, whereas he breaketh his bones in other wars for provant and penury. Those commanders and chieftains that shoot at honour and abundance shall find there more rich and beautiful cities, more temples adorned with golden images, more sepulchres filled with treasure, than either Cortes found in Mexico or Pizarro in Peru. And the shining glory of this conquest will eclipse all those so far-extended beams of the Spanish nation.
There were no Indian cities, and essentially no gold, anywhere in Guyana.
I agree with you that lots of people are biased! I agree this can affect their judgment in a way somewhere between conflict theory and mistake theory! I agree you can end up believing the wrong stories, or focusing on the wrong details, because of your bias! I'm just not sure that's how fraud works, legally, and I'm not sure it's an accurate description of what Sir Walter Raleigh did.
Oh, sorry, I wasn't trying to offer a legal opinion; I was just trying to convey worldview-material while riffing off your characterization of "defrauding everyone about the El Dorado thing."
Sometimes it may take a thief to catch a thief. If it was written in 1592, Raleigh was at his height then, and had much opportunity to see inside the institutions he attacks.
I'm reminded of a book review I wrote last week about famed psychologist Robert Rosenthal's book on bias and error in psychology & the sciences.
Rosenthal writes lucidly about how experimenter biases can skew results, skew the analysis, or cause publication bias (a problem he played a major role in raising awareness of, in part by developing meta-analysis); gives many examples; and proposes novel & effective measures like result-blind peer review. A veritable former-day Ioannidis, you might say. But in the same book, he shamelessly reports some of the worst psychological research ever done, like the 'Pygmalion effect', which he helped develop meta-analysis to defend (despite its nonexistence); the book is a tissue of unreplicable, absurd effects from start to finish; and Rosenthal has left a toxic legacy of urban legends and statistical gimmicks which are still being used to defend psi, among other things.
Something something the line goes through every human heart...
I'm confused by your confusion. The first paragraph establishes that Raleigh was at least as deceptive as the institutions he claimed to be criticizing. The second paragraph argues that if deceptive people can write famous poems about how they are the lone voice of truth in a deceptive world, we should be more careful about taking claims like that completely literally.
If you want more than that, you might have to clarify what part you don't understand.
This account of Walter Raleigh’s life seems… misleading, at best (and in parts just plain inaccurate)—assuming, that is, that we can trust the Wikipedia page. There seems to be quite a bit of conflict (of interpretation, at least) between that page and this one about Raleigh’s book.
I don’t think we should draw any moral from this story, without first thoroughly verifying it from reliable sources. As it stands, we have several Wikipedia pages, which paint a murky and contradictory picture (and some of which are inconsistent with Scott’s summary).
What exactly is contradictory? I only skimmed the relevant pages, but they all seemed to give a pretty similar picture. I didn't get a great sense of exactly what was in Raleigh's book, but all of them (and whoever tried him for treason) seemed to agree it was somewhere between heavily exaggerated and outright false, and I get the same impression from the full title: "The discovery of the large, rich, and beautiful Empire of Guiana, with a relation of the great and golden city of Manoa (which the Spaniards call El Dorado)".
One thing I see: "Raleigh was arrested on 19 July 1603, charged with treason for his involvement in the Main Plot against Elizabeth's successor, James I, and imprisoned in the Tower of London"
The Wikipedia article states that he was tried for treason at least two times, once for his involvement in the Main Plot, and once for the things he did on his El Dorado adventure. So I think that doesn't contradict what Scott said.
Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x? (If so, are you sure that's not why you don't think the truth is more offensive than you currently think it is?) Immaterial souls are stabbed all the time in the sense that their opinions are discredited.
Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x?
For some multiplier, yes. (I don't know what the multiplier is.) If potentates would murder me on the spot unless I deny that they live acting by others' action, and affirm that they are loved even if they don't give and are strong independently of a faction, then I will say those things in order to not be murdered on the spot.
I guess I need to clarify something: I tend to talk about this stuff in the language of virtues and principles rather than the language of consequentialism, not because I think the language of virtues and principles is literally true as AI theory, but because humans can't use consequentialism for this kind of thing. Some part of your brain is performing some computation that, if it works, to the extent that it works, is mirroring Bayesian decision theory. But that doesn't help the part of you that can talk, that can be reached by the part of me that can talk.
"Speak the truth, even if your voice trembles" isn't a literal executable decision procedure—if you programmed your AI that way, it might get stabbed. But a culture that has "Speak the truth, even if your voice trembles" as a slogan might—just might be able to do science or better—to get the goddamned right answer even when the local analogue of the Pope doesn't like it. I falsifiably predict that a culture that has "Use Bayesian decision theory to decide whether or not to speak the truth" as its slogan won't be able to do science—Platonically, the math has to exist, but letting humans appeal to Platonic math whenever they want is just too convenient of an excuse.
Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x? I falsifiably predict that your answer is "Yes." Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I've done field work.)
(If so, are you sure that's not why you don't think the truth is more offensive than you currently think it is?)
Great question! No, I'm not sure. But if my current view is less wrong than the mainstream, I expect to do good by talking about it, even if there exists an even better theory that I wouldn't be brave enough to talk about.
Immaterial souls are stabbed all the time in the sense that their opinions are discredited.
Can you be a little more specific? "Discredited" is a two-place function (discredited to whom).
Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x?
Maybe not; probably; yes.
Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I've done field work.)
Most of the consequences I'm worried about are bad effects on the discourse. I don't know what experiment I'd do to figure those out. I agree you have more data than me, but you probably have 2x the personal data instead of 10x the personal data, and most relevant data is about other people, because there are more of them. Personal consequences are more amenable to experiment than discourse consequences, but I already have lots of low-risk data here, and high-risk data would carry high risk and not be qualitatively more informative. (Doing an experiment here doesn't teach you qualitatively different things than watching the experiments that the world constantly does.)
Can you be a little more specific? "Discredited" is a two-place function (discredited to whom).
Discredited to intellectual elites, who are not only imperfectly rational, but get their information via people who are imperfectly rational, who in turn etc.
"Speak the truth, even if your voice trembles" isn't a literal executable decision procedure—if you programmed your AI that way, it might get stabbed. But a culture that has "Speak the truth, even if your voice trembles" as a slogan might—just might be able to do science or better—to get the goddamned right answereven when the local analogue of the Pope doesn't like it.
It almost sounds like you're saying we should tell people they should always speak the truth even though it is not the case that people should always speak the truth, because telling people they should always speak the truth has good consequences. Hm!
I don't like the "speak the truth even if your voice trembles" formulation. It doesn't make it clear that the alternative to speaking the truth, instead of lying, is not speaking. It also suggests an ad hominem theory of why people aren't speaking (fear, presumably of personal consequences) that isn't always true. To me, this whole thing is about picking battles versus not picking battles rather than about truth versus falsehood. Even though if you pick your battles it means a non-random set of falsehoods remains uncorrected, picking battles is still pro-truth.
If we should judge the Platonic math by how it would be interpreted in practice, then we should also judge "speak the truth even if your voice trembles" by how it would be interpreted in practice. I'm worried the outcome would be people saying "since we talk rationally about the Emperor here, let's admit that he's missing one shoe", regardless of whether the emperor is missing one shoe, is fully dressed, or has no clothes at all. All things equal, being less wrong is good, but sometimes being less wrong means being more confident that you're not wrong at all, even though you are.
(By the way, I think of my position here as having a lower burden of proof than yours, because the underlying issue is not just who is making the right tradeoffs, but whether making different tradeoffs than you is a good reason to give up on a community altogether.)
(This comment is really helpful for me to understand your positions.)
Some part of your brain is performing some computation that, if it works, to the extent that it works, is mirroring Bayesian decision theory. But that doesn't help the part of you that can talk, that can be reached by the part of me that can talk.
Why not? It seems likely to me that the part of my brain that is doing something like Bayesian decision theory can be trained in certain directions by the part of me that talks/listens (for example by studying history or thinking about certain thought experiments).
I falsifiably predict that a culture that has “Use Bayesian decision theory to decide whether or not to speak the truth” as its slogan won’t be able to do science
I'm not convinced of this. Can you say more about why you think this?
I don't know where else to say a thing I haven't said, so I'll say it here. I really appreciate your passion for truth and outing deception, Zack.
I interpret the later stanzas as taking an essentially Hansonian view, where the lie is self-deception and willful ignorance that serves some other, adaptive purpose.
Followup to: Rationalist Poetry Fans, Unite!, Act of Charity
This is my favorite poem about revealing information about deception! It goes like this (sources: Wikipedia, Poetry Foundation, Bartleby)—
The English is a bit dated; Walter Raleigh (probably) wrote it in 1592 (probably). "Give the lie" here is an expression meaning "accuse them of lying" (not "tell them this specific lie", as modern readers not familiar with the expression might interpret it).
The speaker is telling his soul to go to all of Society's respected institutions and reveal that the stories they tell about themselves are false: the court's shining standard of Justice is really about as shiny as a decaying stump; the church teaches what's good but doesn't do any good; kings think they're so powerful and mighty, but are really just the disposable figurehead of a coalition; &c. (I'm not totally sure exactly what all of the stanzas mean because of the dated language, but I feel OK about this.)
The speaker realizes this campaign is kind of suicidal ("Go, since I needs must die") and will probably result in getting stabbed. That's why he's telling his soul to do it, because—ha-ha!—immaterial souls can't be stabbed!
What about you, dear reader? Have you given any thought to revealing information about deception?!