The motivating example for this post is whether you should say "So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies", with Quinn arguing that you shouldn't say it because saying it has bad consequences. The problem is, saying this has very clearly good consequences, which means trying to use it as a tool for figuring out what you think of appeals to consequences sets up your intuitions to confuse you.
(It has clearly good consequences, because "how much money goes to PADP right now" is far less important than "building a culture of caring about the actual effectiveness of organizations and truly trying to find/make the best ones". Plus if, say, Animal Charity Evaluators had trusted this higher number of puppies saved and it had led them to recommend PADP as one of their top charities, then that would mean displacing funds that could have gone to more effective animal charities. The whole Effective Altruism project is about trying to figure out how to get the biggest positive impact, and you can't do this if you declare discussing negative information about organizations off limits.)
...The post would be a lot clearer if it had a motivating example that really did have bad consequences, all things considered.
The extreme case would be a scientific discovery which enabled anyone to destroy the world, such as the supernova thing in Three Worlds Collide or the thought experiment that Bostrom discusses in The Vulnerable World Hypothesis:
So let us consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? Szilard becomes gravely concerned. He sees that his discovery must be kept secret at all costs. But how? His insight is bound to occur to others. He could talk to a few of his physicist friends, the ones most likely to stumble upon the idea, and try to persuade them not to publish anything on nuclear chain reactions or on any of the reasoning steps leading up to the dangerous discovery. (That is what Szilard did in actual history.)
[...] Soon, figuring out how to initiate a nuclear chain reaction with pieces of metal, glass, and electricity will no longer take genius but will be within reach of any STEM student with an inventive mindset.
My main objection is that the post is built around a case where Quinn is very wrong in their initial "bad consequences" claim, and that this leads people to have misleading intuitions. I was trying to propose an alternative situation where the "bad consequences" claim was true or closer to true, but where Quinn would still be wrong to suggest Carter shouldn't describe what they'd found.
(Also, for what it's worth, I find the Quinn character's argumentative approach very frustrating to read. This makes it hard to take anything that character describes seriously.)
Instead of Quinn admitting lying is sometimes good, I wish he had said something like:
“PADP is widely considered a good charity by smart people who we trust. So we have a prior on it being good. You’ve discovered some apparent evidence that it’s bad. So now we have to combine the prior and the evidence, and we end up with some percent confidence that they’re bad.
If this is 90% confidence they’re bad, go ahead. What if it’s more like 55%? What’s the right action to take if you’re 55% sure a charity is incompetent and dishonest (but 45% chance you misinterpreted the evidence)? Should you call them out on it? That’s good in the world where you’re right, but might disproportionately tarnish their reputation in the world where they're wrong. It seems like if you’re 55% sure, you have a tough call. You might want to try something like bringing up your concerns privately with close friends and only going public if they share your opinion, or asking the charity first and only going public if they can’t explain themselves. Or you might want to try bringing up your concerns in a nonconfrontational way, more like ‘Can anyone figure out what’s going on with PADP’s math?’ rather than ‘PADP...
The part about climate science seems like a pretty bog-standard outside view argument, which in turn means I find it largely uncompelling. Yes, there are people who are so stupid, they can only be saved from their own stupidity by executing an epistemic maneuver that works regardless of the intelligence of the person executing it. This does not thereby imply that everyone should execute the same maneuver, including people who are not that stupid, and therefore not in need of saving. If someone out there is so incompetent that they mistakenly perceive themselves as competent, then they are already lost, and the fact that an illegal (from the perspective of normative probability theory) epistemic maneuver exists which would save them if they executed it, does not thereby make that maneuver a normatively good move. (And even if it were, it's not as though the people who would actually benefit from said maneuver are going to execute it--the whole reason that such people are loudly, confidently mistaken is that they don't take the outside view seriously.)
In short: there is simply no principled justification for modesty-based arguments, and--though it may be somewhat impolite to say--I a
...The bar that is set for appeals to consequences implies the sort of equilibrium world you'll end up in. Erring on the side of a higher bar is better, because it is hard to go the other way: epistemic standards tend to slide in the face of local incentives.
I also want to note an argumentative tactic that occurs on the tacit level, whereby people will push you into a state where you need to expend more energy on average per bit of truth than they do, so they eventually win by attrition. Related to evaporative cooling. The subjective experience of this feels like talking to the cops. You sense that no big wins are available (because they have their bottom line) but big losses are, so you stop talking. If you've encountered this dynamic, you'll recognize things like this
> "You still haven't refuted my argument. If you don't do so, I win by default."
as part of the supporting framework for the dynamic and it will make you very angry...which others will then use as part of the dynamic which makes you angry which......
When someone is arguing that a belief of yours is wrong, listen to the argument at the object level, instead of jumping to the question of whether saying the relevant arguments out loud is a good idea, which is a much harder question.”
It seems to me that the key issue here is the need for both public and private conversational spaces.
In public spaces, arguments are soldiers. They have to be, because others treat them that way, and because there are actual policies that we're all fighting / negotiating over. In those contexts it is reasonable (I don't know if it is correct, or not), to constrain what things you say, even if they're true, because of their consequences. It is often the case that one piece of information, though true, taken out of context, does more harm than good, and often conveying the whole informational context to a large group of people is all but impossible.
But we need to be able to figure out which policies to support, somehow, separately from supporting them on this political battlefield. We also need private spaces, where we can think and our initial thoughts can be isolated from their possible consequences, or we won't be able to think freely.
It seems like Carter thinks they are having a private conversation, in a private space, and Quinn thinks they're having a public conversation in a public space.
(Strong-upvoted for making something explicit that is more often tacitly assumed. Seriously, this is an incredibly useful comment; thanks!!)
In public spaces, arguments are soldiers. They have to be, because others treat them that way, and because there are actual policies that we're all fighting / negotiating over
Can you unpack what you mean by "have to be" in more detail? What happens if you just report your actual reasoning (even if your voice trembles)? (I mean that as a literal what-if question, not a rhetorical one. If you want, I can talk about how I would answer this in a future comment.)
I can imagine creatures living in a hyper-Malthusian Nash equilibrium where the slightest deviation from the optimal negotiating stance dictated by the incentives just gets you instantly killed and replaced with someone else who will follow the incentives. In this world, if being honest isn't the optimal negotiating stance, then honesty is just suicide. Do you think this is a realistic description of life for present-day humans? Why or why not? (This is kind of a leading question on my part. Sorry.)
...But we need to be able to figure out which policies to support, somehow, separately from
Okay, I was getting too metaphorical with the encyclopedia; sorry about that. The proposition I actually want to defend is, "Private deliberation is extremely dependent on public information." This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you've heard in public discourse, rather than things you've directly seen and verified for yourself. But if everyone in Society is, like you, simplifying their public arguments in order to minimize their social "attack surface", then the information you bring to your private discussion is based on fear-based simplifications, rather than the best reasoning humanity has to offer.
In the grandparent comment, the text "report your actual reasoning" is a link to the Sequences post "A Rational Argument", which you've probably read. I recommend re-reading it.
If you omit evidence against your preferred conclusion, people can't take your reasoning at face value anymore: if you first write at the bottom of a piece of paper, "... and therefore, Policy P is the best," it doesn't matter what you write on the l
...you're pushing more for an abstract principle than a concrete change
I mean, the abstract principle that matters is of the kind that can be proved as a theorem rather than merely "pushed for." If a lawful physical process results in the states of physical system A becoming correlated with the states of system B, and likewise system B and system C, then observations of the state of system C are evidence about the state of system A. I'm claiming this as technical knowledge, not a handwaved philosophical intuition; I can write literal computer programs that exhibit this kind of evidential-entanglement relationship.
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn't work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
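The claim above can be illustrated with a minimal sketch. The chain of three systems and the toy probabilities below are my own illustrative assumptions, not anything from the thread: A is a hidden binary state, an honest B reports A with some noise, C echoes B's report with more noise, and we compute the posterior on A given an observation of C by enumeration.

```python
import itertools

def posterior_A_given_C(b_honest: bool, c_obs: int = 1) -> float:
    """P(A=1 | C=c_obs) in the chain A -> B -> C, by exact enumeration.

    Toy model (illustrative assumptions):
      A ~ Bernoulli(0.5).
      An honest B repeats A with probability 0.9.
      A "lying" B always reports 1, regardless of A.
      C repeats B's report with probability 0.8.
    """
    p_a = {0: 0.5, 1: 0.5}

    def p_b_given_a(b: int, a: int) -> float:
        if b_honest:
            return 0.9 if b == a else 0.1
        return 1.0 if b == 1 else 0.0  # B always makes itself look good

    def p_c_given_b(c: int, b: int) -> float:
        return 0.8 if c == b else 0.2

    # P(A=a, C=c_obs), summing out the intermediate system B
    joint = {0: 0.0, 1: 0.0}
    for a, b in itertools.product([0, 1], repeat=2):
        joint[a] += p_a[a] * p_b_given_a(b, a) * p_c_given_b(c_obs, b)
    return joint[1] / (joint[0] + joint[1])

print(posterior_A_given_C(b_honest=True))   # above 0.5: C carries evidence about A
print(posterior_A_given_C(b_honest=False))  # exactly 0.5: C tells you nothing about A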
Any time that mood seems to be cropping up in, or underlying, someone's decision procedure, it should be pushed back against.
The word "should" definitely doesn't belong here. Like, that's definitely a fair description of the push I'm making. Because I actually feel that way. But obviously, other people shouldn't passionately advocate for open
...Not really? The concept of a "policy proposal" seems to presuppose control over some powerful central decision node, which I don't think is true of me. This is a forum website. I write things. Maybe someone reads them. Maybe they learn something. Maybe me and the people who are better at open and honest discourse preferentially collaborate with each other (and ignore people who we can detect are playing a different game), have systematically better ideas, and newcomers tend to imitate our ways in a process of cultural evolution.
I separated out the question of "stuff individuals should do unilaterally" from "norm enforcement" because it seems like at least some stuff doesn't require any central decision nodes.
In particular, while "don't lie" is an easy injunction to follow, "account for systematic distortions in what you say" is actually quite computationally hard, because there are a lot of distortions with different mechanisms and different places one might intervene on their thought process and/or communication process. "Publicly say literally every inconvenient thing you think of" probably isn't what you meant (or maybe it was?), and it might cause you to end up having a harder time thinking inconvenient thoughts.
I'm asking because I'm actually interested in improving on this dimension.
I agree that that's much less bad—but "better"? "Better"!? By what standard? What assumptions are you invoking without stating them?
I should clarify: I'm not saying submitting to censorship is never the right thing to do. If we live in Amazontopia, and there's a man with a gun on the streetcorner who shoots anyone who says anything bad about Jeff Bezos, then indeed, I would not say anything bad about Jeff Bezos—in this specific (silly) hypothetical scenario with that specific threat model.
But ordinarily, when we try to figure out which cognitive algorithms are "better" (efficiently produce accurate maps, or successful plans), we tend to assume a "fair" problem class unless otherwise specified. The theory of "rational thought, except you get punished if you think about elephants" is strictly more complicated than the theory of "rational thought." Even if we lived in a world where robots with MRI machines who punish elephant-thoughts were not unheard of and needed to be planned for, it would be pedagogically weird to treat that as the central case.
I hold "discourse algorithms" to the same standard: we need to figure out how to think together in the simple, unconstrained case before
...we need to figure out how to think together
This is probably not the crux of our disagreement, but I think we already understand perfectly well how to think together and we're limited by temperament rather than understanding. I agree that if we're trying to think about how to think together we can treat no censorship as the default case.
worthless cowards
If cowardice means fear of personal consequences, this doesn't ring true as an ad hominem. Speaking without any filter is fun and satisfying and consistent with a rationalist pro-truth self-image and other-image. The reason why I mostly don't do it is because I'd feel guilt about harming the discourse. This motivation doesn't disappear in cases where I feel safe from personal consequences, e.g. because of anonymity.
who just assume as if it were a law of nature that discourse is impossible
I don't know how you want me to respond to this. Obviously I think my sense that real discourse on fraught topics is impossible is based on extensively observing attempts at real discourse on fraught topics being fake. I suspect your sense that real discourse is possible is caused by you underestimating how far re...
The reason why I mostly don't do it is because I'd feel guilt about harming the discourse
Woah, can you explain this part in more detail?! Harming the discourse how, specifically? If you have thoughts, and your thoughts are correct, how does explaining your correct thoughts make things worse?
Consider the idea that the prospect of advanced AI implies the returns from stopping global warming are much smaller than you might otherwise think. I think this is a perfectly correct point, but I'm also willing to never make it, because a lot of people will respond by updating against the prospect of advanced AI, and I care a lot more about people having correct opinions on advanced AI than on the returns from stopping global warming.
I want to distinguish between "harming the discourse" and "harming my faction in a marketing war."
When I say that public discourse is really important, what I mean is that if you tell the truth in public about what you believe and why (possibly investing a lot of effort and using a lot of hyperlinks to bridge the inferential distance), then other people who aren't already your closest trusted friends have the opportunity to learn from the arguments and evidence that actually convinced you, combine it with their own knowledge, and potentially make better decisions. ("Discourse" might not be the right word here—the concept I want to point to includes unilateral truthtelling, as on a blog with no comment section, or where your immediate interlocutor doesn't "reciprocate" in good faith, but someone in the audience might learn something.)
If you think other people can't process arguments at all, but that you can, how do you account for your own existence? For myself: I'm smart, but I'm not that smart (IQ ~130). The Sequences were life-changingly great, but I was still interested in philosophy and argument before that. Our little robot cult does not have a monopoly on reasoning itself.
...a
No, I agree that authors should write in language that their audience will understand. I'm trying to make a distinction between having intent to inform (giving the audience information that they can use to think with) vs. persuasion (trying to exert control over the audience's conclusion). Consider this generalization of a comment upthread—
Consider the idea that X implies Y. I think this is a perfectly correct point, but I'm also willing to never make it, because a lot of people will respond by concluding that not-X, because they're emotionally attached to not-Y, and I care a lot more about people having correct beliefs about the truth value of X than Y.
This makes perfect sense as part of a consequentialist algorithm for maximizing the number of people who believe X. The algorithm works just as well, and for the same reasons whether X = "superintelligence is an existential risk" and Y = "returns from stopping global warming are smaller than you might otherwise think" (when many audience members have global warming "cause-area loyalty"), or whether X = "you should drink Coke" and Y = "returns from drinking Pepsi are smaller than you might otherwise think" (when many audience mem
..."Intent to inform" jibes with my sense of it much more than "tell the truth."
On reflection, I think the 'epistemic peer' thing is close but not entirely right. Definitely if I think Bob "can't handle the truth" about climate change, and so I only talk about AI with Bob, then I'm deciding that Bob isn't an epistemic peer. But if I have only a short conversation with Bob, then there's a Gricean implication point that saying X implicitly means I thought it was more relevant to say than Y, or is complete, or so on, and so there are whole topics that might be undiscussed because I don't want to send the implicit message that my short thoughts on the matter are complete enough to reconstruct my position or that this topic is more relevant than other topics.
---
More broadly, I note that I often see "the discourse" used as a term of derision, I think because it is (currently) something more like a marketing war than an open exchange of information. Or, like a market left to its own devices, it has Goodharted on marketing. It is unclear to me whether it's better to abandon it (like, for example, not caring about what people think on Twitter) or attempt to recapture it (by pushing for the sorts of 'public goods' and savvy customers that cause markets to Goodhart less on marketing).
To be clear, if you don't think you're talking to an epistemic peer, strategically routing around the audience's psychological defenses might be the right thing to do!
I'm confused reading this.
It seems to me that you think routing around psychological defenses is sometimes a reasonable thing to do with people who aren't your epistemic peers.
But you said above that you thought the overall position of having private discourse spaces and public discourse spaces is abhorrent?
How do these fit together? The vast majority of people are not your (or my) epistemic peers; even the robot cult doesn't have a monopoly on truth or truth-seeking. And so you would behave differently in private spaces with your peers and in public spaces that include the whole world.
Can you clarify?
Bidding to move to a private space isn't necessarily bad but at the same time it's not an argument. "I want to take this private" doesn't argue for any object-level position.
It seems that the text of what you're saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don't think you actually believe that. Perhaps you've given up on affecting them, though.
("What wins" is underdetermined, given that choice is involved in what wins; you can't extrapolate from two-player zero-sum games (where there's basically one best strategy) to multi-player zero-sum games (where there isn't, at least due to coalitional dynamics implying that a "weaker" player can win by getting more supporters).)
It seems that the text of what you're saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don't think you actually believe that.
How much agency we have is proportional to how many other actors are in a space. I think it's quite achievable (though requires a bit of coordination) to establish good norms for a space with 100 people. It's still achievable, but... probably at least (10x?) as hard to establish good norms for 1000 people.
But "public searchable internet" is immediately putting things in a context with at least millions if not billions of potentially relevant actors, many of whom don't know anything about your norms. I'm still actually fairly optimistic about making important improvements to this space, but those improvements will have a lot of constraints for anyone with major goals that affect the world stage.
Well, I certainly agree with the position you’re defending. Yet I can’t help but feel that the arguments in the OP lack… a certain concentrated force, which I feel this topic greatly deserves.
Without disagreeing, necessarily, with anything you say, here is my own attempt, in two (more or less independent) parts.
If the truth is precious, its pursuit must be unburdened by such considerations as “what will happen if we say this”. This is impractical, in the general case. You may not be interested in consequences, after all, but the consequences are quite interested in you…
There is, however, one way out of the quagmire of consequential anxiety. Let there be a place around which a firewall of epistemology is erected. Let appeals to consequences outside that citadel, be banned within its walls. Let no one say: “if we say such a thing, why, think what might happen, out there, in the wider world!”. Yes, if you say this thing out there, perhaps unfortunate consequences may follow out there. But we are not speaking out there; so long as we speak in here, to each other, let us consider it irrelevant what effects our words may produce upon the world outside. In here, we c
...It seems like a quite desirable property to be able to talk freely about which local orgs and people deserve money and prestige – but I don’t currently know of robust game mechanics that will actually, reliably enable this in any environment where I don’t personally know and trust each person.
There should not be any “local orgs” inside the citadel; and if the people who participate in the citadel also happen to, together, constitute various other orgs… well, first of all, that’s quite a bad sign; but in any case discussions of them, and whether they deserve money and so on, should not take place inside the citadel.
If this is not obvious, then I have not communicated the concept effectively. I urge you to once again consider this part:
...Any among us who have something to protect, in the world beyond the citadel, may wish to take the truths we find, and apply them to that outside world, and discuss these things with others who feel as they do. In these discussions, of plans and strategies for acting upon the wider world, the consequences of their words, for that world, may be of the utmost importance. But if so, to have such discussions, these
[speaking for myself, not for any organization]
If this is an allegory against appeals to consequences generally, well and good.
If there's some actual question about whether wrong cost effectiveness numbers are being promoted, could people please talk about those numbers specifically so we can all have a try at working out if that's really going on? E.g. this post made a similar claim to what's implied in this allegory, but it was helpful that it used concrete examples so people could work out whether they agreed (and, in that case, identify factual errors).
I think this is strawmanning the appeal-to-consequences argument, by mixing up private beliefs and public statements, and by ending with a pretty superficial agreement on rule-consequentialism without exploring how to pick which rule (among one for improving private beliefs, one for sharing relevant true information, and one for suppressing harmful information) applies.
The participants never actually attempt to resolve the truth about puppies saved per dollar, calling the whole thing into question - both whether their agreement is real and whether it's the right thing. Many of these discussions should include a recitation of [ https://wiki.lesswrong.com/wiki/Litany_of_Tarski ], and a direct exploration whether it's beliefs (private) or publication (impacting presumed-less-rational agents) that is at issue.
In any case, appeals to consequences at the meta/rule level still HAVE to be grounded in appeals to consequences at the actual object-consequence level. A rule that has so many exceptions that it's mostly wrong is actively harmful. My objection to the objection to "appeal to consequences" is that the REAL objection is to bad epistemology of consequence...
Carter is a mistake theorist, Quinn is a conflict theorist. At no point does Quinn ever talk about truth, or about anything, really. His words are weapons to achieve an end by whatever means possible. There is no more meaning in them than in a fist. Carter's meta-mistake is to believe that he is arguing with someone. Quinn is not arguing; he is in a fist fight.
Quinn: “Hold it right there. Regardless of whether that’s true, it’s bad to say that.”
Carter: “That’s an appeal to consequences, well-known to be a logical fallacy.”
The link in Carter's statement leads to a page that clearly contradicts Carter's claim:
In logic, appeal to consequences refers only to arguments that assert a conclusion's truth value (true or false) without regard to the formal preservation of the truth from the premises; appeal to consequences does not refer to arguments that address a premise's consequential desirability (good or bad, or right or wrong) instead of its truth value.
It sounds to me like Jessica is using "appeal to consequences" expansively, to include not just "X has bad consequences, so you should not believe X" but also "saying X has bad consequences, so you should not say X"?
Yes. In practice, if people are discouraged from saying X on the basis that it might be bad to say it, then the discourse goes on believing not-X. So, the discourse itself makes an invalid step that's analogous to an appeal to consequences: "if it's bad for us to think X is true, then it's false".
Summary: I'm aware of a lot of examples of real debates that inspired this dialogue. In those real cases, a lot of disagreement with, or criticism of, the public claims or accusations of lying aimed at different professional organizations in effective altruism or AI risk has repeatedly been generically interpreted as a blanket refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve the different kinds of disputes raised by public accusations of lying, repeat accusations, and justifications for them, are made into l...
So, in summary: if we’re going to have suppression of some facts being said out loud, we should have that through either clear norms designed with consequences (including consequences for epistemology) in mind, or individuals deciding not to say things, but otherwise our norms should be protecting true speech, and outlawing appeals to consequences.
Are you happy with a LW with multiple norm sets, where this is one of the norm sets you can choose?
What's your plan if communities or sub-communities with these norms don't draw enough participants to bec
So if evidence against X is being suppressed, then people's belief in X is unreliable, so it can't justify suppressing evidence against X. That's a great argument for free speech, thanks! Do you know if it's been stated before?
This doesn’t seem quite right to me.
Consider this example:
“Evidence against the Holocaust is being suppressed[1]. Therefore people’s belief in the Holocaust is unreliable. And so we cannot justify suppressing Holocaust denial by appealing to the (alleged) fact of the Holocaust having occurred.”
Something is wrong here, it seems to me. Not with the conclusion, mind you, the policy proposal, as it were; that part is all right. But the logic feels odd, don’t you think?
I don’t have a full account, yet, of cases like this, but it seems to me that some of the relevant considerations are as follows. Firstly, we previously undertook a comprehensive project (or multiple such) to determine the truth of the matter, which operated under no such restrictions as we now defend, and came to conclusions which cannot be denied. Secondly, we have people whose belief in the facts of the matter comes from personal experience, and is not at all contingent on (nor even alterable by) any evidence we may or may not now present. Thirdly, as the question is one of historical fact, no new evidence may be generated; previously unknown but existing evidence may be uncovered, or currently known evidence may be sh
..."If we want to do those things, we have to do them by getting to the truth"
This seems fair if it focuses on the rationalist strategy of trying to interface with the world, and how truth is essential to that. However, it's probably not literally true, in that there are probably Dark Arts and such which provide those specific sought goods at outrageous prices. "Have" in this context means "within the options we have created for ourselves," not "it is not possible to produce the effect via other means."
Carter states that t...
[note: the following is essentially an expanded version of this LessWrong comment on whether appeals to consequences are normative in discourse. I am exasperated that this is even up for debate, but I figure that making the argumentation here explicit is helpful]
Carter and Quinn are discussing charitable matters in the town square, with a few onlookers.
Carter: "So, this local charity, People Against Drowning Puppies (PADP), is nominally opposed to drowning puppies."
Quinn: "Of course."
Carter: "And they said they'd saved 2170 puppies last year, whereas their total spending was $1.2 million, so they estimate they save one puppy per $553."
Quinn: "Sounds about right."
Carter: "So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies."
Quinn: "Hold it right there. Regardless of whether that's true, it's bad to say that."
Carter: "That's an appeal to consequences, well-known to be a logical fallacy."
Quinn: "Is that really a fallacy, though? If saying something has bad consequences, isn't it normative not to say it?"
Carter: "Well, for my own personal decisionmaking, I'm broadly a consequentialist, so, yes."
Quinn: "Well, it follows that appeals to consequences are valid."
Carter: "It isn't logically valid. If saying something has bad consequences, that doesn't make it false."
Quinn: "But it is decision-theoretically compelling, right?"
Carter: "In theory, if it could be proven, yes. But, you haven't offered any proof, just a statement that it's bad."
Quinn: "Okay, let's discuss that. My argument is: PADP is a good charity. Therefore, they should be getting more donations. Saying that they didn't save as many puppies as they claimed they did, in public (as you just did), is going to result in them getting fewer donations. Therefore, your saying that they didn't save as many puppies as they claimed to is bad, and is causing more puppies to drown."
Carter: "While I could spend more effort to refute that argument, I'll initially note that you only took into account a single effect (people donating less to PADP) and neglected other effects (such as people having more accurate beliefs about how charities work)."
Quinn: "Still, you have to admit that my case is plausible, and that some onlookers are convinced."
Carter: "Yes, it's plausible, in that I don't have a full refutation, and my models have a lot of uncertainty. This gets into some complicated decision theory and sociological modeling. I'm afraid we've gotten sidetracked from the relatively clear conversation, about how many puppies PADP saved, to a relatively unclear one, about the decision theory of making actual charity effectiveness clear to the public."
Quinn: "Well, sure, we're into the weeds now, but this is important! If it's actually bad to say what you said, it's important that this is widely recognized, so that we can have fewer... mistakes like that."
Carter: "That's correct, but I feel like I might be getting trolled. Anyway, I think you're shooting the messenger: when I started criticizing PADP, you turned around and made the criticism about me saying that, directing attention away from PADP's possible fraudulent activity."
Quinn: "You still haven't refuted my argument. If you don't do so, I win by default."
Carter: "I'd really rather that we just outlaw appeals to consequences, but, fine, as long as we're here, I'm going to do this, and it'll be a learning experience for everyone involved. First, you said that PADP is a good charity. Why do you think this?"
Quinn: "Well, I know the people there and they seem nice and hardworking."
Carter: "But, they said they saved over 2000 puppies last year, when they actually only saved 138, indicating some important dishonesty and ineffectiveness going on."
Quinn: "Allegedly, according to your calculations. Anyway, saying that is bad, as I've already argued."
Carter: "Hold up! We're in the middle of evaluating your argument that saying that is bad! You can't use the conclusion of this argument in the course of proving it! That's circular reasoning!"
Quinn: "Fine. Let's try something else. You said they're being dishonest. But, I know them, and they wouldn't tell a lie, consciously, although it's possible that they might have some motivated reasoning, which is totally different. It's really uncivil to call them dishonest like that. If everyone did that with the willingness you had to do so, that would lead to an all-out rhetorical war..."
Carter: "God damn it. You're making another appeal to consequences."
Quinn: "Yes, because I think appeals to consequences are normative."
Carter: "Look, at the start of this conversation, your argument was that saying PADP only saved 138 puppies is bad."
Quinn: "Yes."
Carter: "And now you're in the course of arguing that it's bad."
Quinn: "Yes."
Carter: "Whether it's bad is a matter of fact."
Quinn: "Yes."
Carter: "So we have to be trying to get the right answer, when we're determining whether it's bad."
Quinn: "Yes."
Carter: "And, while appeals to consequences may be decision theoretically compelling, they don't directly bear on the facts."
Quinn: "Yes."
Carter: "So we shouldn't have appeals to consequences in conversations about whether the consequences of saying something are bad."
Quinn: "Why not?"
Carter: "Because we're trying to get to the truth."
Quinn: "But aren't we also trying to avoid all-out rhetorical wars, and puppies drowning?"
Carter: "If we want to do those things, we have to do them by getting to the truth."
Quinn: "The truth, according to your opinion-"
Carter: "God damn it, you just keep trolling me, so we never get to discuss the actual facts. God damn it. Fuck you."
Quinn: "Now you're just spouting insults. That's really irresponsible, given that I just accused you of doing something bad, and causing more puppies to drown."
Carter: "You just keep controlling the conversation by OODA looping faster than me, though. I can't refute your argument, because you appeal to consequences again in the middle of the refutation. And then we go another step down the ladder, and never get to the truth."
Quinn: "So what do you expect me to do? Let you insult well-reputed animal welfare workers by calling them dishonest?"
Carter: "Yes! I'm modeling the PADP situation using decision-theoretic models, which require me to represent the knowledge states and optimization pressures exerted by different agents (both conscious and unconscious), including when these optimization pressures are towards deception, and even when this deception is unconscious!"
Quinn: "Sounds like a bunch of nerd talk. Can you speak more plainly?"
Carter: "I'm modeling the actual facts of how PADP operates and how effective they are, not just how well-liked the people are."
Quinn: "Wow, that's a strawman."
Carter: "Look, how do you think arguments are supposed to work, exactly? Whoever is best at claiming that their opponent's argumentation is evil wins?"
Quinn: "Sure, isn't that the same thing as who's making better arguments?"
Carter: "If we argue by proving our statements are true, we reach the truth, and thereby reach the good. If we argue by proving that the other is being evil, we don't reach the truth, nor the good."
Quinn: "In this case, though, we're talking about drowning puppies. Surely, the good in this case is causing fewer puppies to drown, and directing more resources to the people saving them."
Carter: "That's under contention, though! If PADP is lying about how many puppies they're saving, they're making the epistemology of the puppy-saving field worse, leading to fewer puppies being saved. And, they're taking money away from the next-best-looking charity, which is probably more effective if, unlike PADP, they're not lying."
Quinn: "How do you know that, though? How do you know the money wouldn't go to things other than saving drowning puppies if it weren't for PADP?"
Carter: "I don't know that. My guess is that the money might go to other animal welfare charities that claim high cost-effectiveness."
Quinn: "PADP is quite effective, though. Even if your calculations are right, they save about one puppy per $10,000. That's pretty good."
Carter: "That's not even that impressive, but even if their direct work is relatively effective, they're destroying the epistemology of the puppy-saving field by lying. So effectiveness basically caps out there instead of getting better due to better epistemology."
Quinn: "What an exaggeration. There are lots of other charities that have misleading marketing (which is totally not the same thing as lying). PADP isn't singlehandedly destroying anything, except instances of puppies drowning."
Carter: "I'm beginning to think that the difference between us is that I'm anti-lying, whereas you're pro-lying."
Quinn: "Look, I'm only in favor of lying when it has good consequences. That makes me different from pro-lying scoundrels."
Carter: "But you have really sloppy reasoning about whether lying, in fact, has good consequences. Your arguments for doing so, when you lie, are made of Swiss cheese."
Quinn: "Well, I can't deductively prove anything about the real world, so I'm using the most relevant considerations I can."
Carter: "But you're using reasoning processes that systematically protect certain cached facts from updates, and use these cached facts to justify not updating. This was very clear when you used outright circular reasoning, to use the cached fact that denigrating PADP is bad, to justify terminating my argument that it wasn't bad to denigrate them. Also, you said the PADP people were nice and hardworking as a reason I shouldn't accuse them of dishonesty... but, the fact that PADP saved far fewer puppies than they claimed actually casts doubt on those facts, and the relevance of them to PADP's effectiveness. You didn't update when I first told you that fact, you instead started committing rhetorical violence against me."
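Carter's charge here — a cached fact that vetoes the very updates that would dislodge it — can be sketched as a toy model. Everything in this sketch (the function names, the prior, the evidence strengths) is illustrative, not something from the dialogue:

```python
# Toy model of a self-protecting cached fact (illustrative only;
# the prior and likelihood ratios below are made up).

def plain_update(prior: float, likelihood_ratio: float) -> float:
    """Ordinary Bayesian update on odds: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

def self_protecting_update(prior: float, likelihood_ratio: float) -> float:
    """Reject any evidence whose acceptance would lower confidence in the
    cached fact -- i.e. use the cached conclusion to veto the update,
    which is the circular move Carter is objecting to."""
    candidate = plain_update(prior, likelihood_ratio)
    return prior if candidate < prior else candidate

# Cached fact: "PADP is a good charity", held at 95% confidence.
belief_plain = belief_cached = 0.95

# Three pieces of disconfirming evidence (likelihood ratio < 1
# favors "not a good charity").
for lr in [0.2, 0.2, 0.2]:
    belief_plain = plain_update(belief_plain, lr)
    belief_cached = self_protecting_update(belief_cached, lr)

print(f"plain updater:           {belief_plain:.3f}")    # drops well below 0.5
print(f"self-protecting updater: {belief_cached:.3f}")   # stays at 0.95
```

The plain updater ends up doubting the cached fact after repeated disconfirmation, while the self-protecting updater never moves, no matter how much evidence arrives — which is the loop Quinn restates in the next few lines.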
Quinn: "Hmm. Let me see if I'm getting this right. So, you think I have false cached facts in my mind, such as PADP being a good charity."
Carter: "Correct."
Quinn: "And you think those cached facts tend to protect themselves from being updated."
Carter: "Correct."
Quinn: "And you think they protect themselves from updates by generating bad consequences of making the update, such as fewer people donating to PADP."
Carter: "Correct."
Quinn: "So you want to outlaw appeals to consequences, so facts have to get acknowledged, and these self-reinforcing loops go away."
Carter: "Correct."
Quinn: "That makes sense from your perspective. But, why should I think my beliefs are wrong, and that I have lots of bad self-protecting cached facts?"
Carter: "If everyone were as willing as you to lie, the history books would be full of convenient stories, the newspapers would be parts of the matrix, the schools would be teaching propaganda, and so on. You'd have no reason to trust your own arguments that speaking the truth is bad."
Quinn: "Well, I guess that makes sense. Even though I lie in the name of good values, not everyone agrees on values or beliefs, so they'll lie to promote their own values according to their own beliefs."
Carter: "Exactly. So you should expect that, as a reflection of your lying to the world, the world lies back to you. So your head is full of lies, like the 'PADP is effective and run by good people' one."
Quinn: "Even if that's true, what could I possibly do about it?"
Carter: "You could start by not making appeals to consequences. When someone is arguing that a belief of yours is wrong, listen to the argument at the object level, instead of jumping to the question of whether saying the relevant arguments out loud is a good idea, which is a much harder question."
Quinn: "But how do I prevent actually bad consequences from happening?"
Carter: "If your head is full of lies, you can't really trust ad-hoc object-level arguments against speech, like 'saying PADP didn't save very many puppies is bad because PADP is a good charity'. You can instead think about what discourse norms lead to the truth being revealed, and which lead to it being obscured. We've seen, during this conversation, that appeals to consequences tend to obscure the truth. And so, if we share the goal of reaching the truth together, we can agree not to do those."
Quinn: "That still doesn't answer my question. What about things that are actually bad, like privacy violations?"
Carter: "It does seem plausible that there should be some discourse norms that protect privacy, so that some facts aren't revealed, if such norms have good consequences overall. Perhaps some topics, such as individual people's sex lives, are considered to be banned topics (in at least some spaces), unless the person consents."
Quinn: "Isn't that an appeal to consequences, though?"
Carter: "Not really. Deciding what privacy norms are best requires thinking about consequences. But, once those norms have been decided on, it is no longer necessary to prove that privacy violations are bad during discussions. There's a simple norm to appeal to, which says some things are out of bounds for discussion. And, these exceptions can be made without allowing appeals to consequences in full generality."
Quinn: "Okay, so we still have something like appeals to consequences at the level of norms, but not at the level of individual arguments."
Carter: "Exactly."
Quinn: "Does this mean I have to say a relevant true fact, even if I think it's bad to say it?"
Carter: "No. Those situations happen frequently, and while some radical honesty practitioners try not to suppress any impulse to say something true, this practice is probably a bad idea for a lot of people. So, of course you can evaluate consequences in your head before deciding to say something."
Quinn: "So, in summary: if we're going to have suppression of some facts being said out loud, we should have that through either clear norms designed with consequences (including consequences for epistemology) in mind, or individuals deciding not to say things, but otherwise our norms should be protecting true speech, and outlawing appeals to consequences."
Carter: "Yes, that's exactly right! I'm glad we came to agreement on this."