I consider you to be basically agreeing with me on 90% of what I intended, and your disagreements on the other 10% to be the best written of any so far, and basically valid in all the places I'm not replying to them. I still have a few objections:
What if my highest value is getting a pretty girl with a country-sized dowry, while having not betrayed the Truth? ... In short, no, Rationality absolutely can be about both Winning and about The Truth.
I agree the utility function isn't up for grabs and that that is a coherent set of values to have, but I have a criticism I want to make that I feel I don't have the right language for. Maybe you can help me. I want to call that utility function perverse: the kind of utility function that an entity is probably mistaken to imagine itself as having.
For any particular situation you might find yourself in, and any particular sequence of actions you might take in that situation, there is a possible utility function you could be said to have such that the sequence of actions is the rational behaviour of a perfect omniscient utility maximiser. If nothing else, pick the exact sequence of events that will result, declare that your utility function is +100 for that sequence of events and 0 for anything else, and then declare yourself a supremely efficient rationalist.
Actually doing that would be a mistake. It wouldn't be making you better. This is not a way to succeed at your goals, this is a way to observe what you're inclined to do anyway and paint the target around it. Your utility function (fake or otherwise) is supposed to describe stuff you actually want. Why would you want specifically that in particular?
I think the stronger version of Rationality is the version that phrases it as being about getting the things you want, whatever those things might be. In that sense, if The Truth is merely a value, you should carefully segment it in your brain away from your practice of rationality: your rationality is about mirroring the mathematical structure best suited for obtaining goals, and whatever you value The Truth at above its normal instrumental value is something you buy where it's cheapest, like all your other values. Mixing the two makes both worse: you pollute your concept of rational behaviour with a love of the truth (and are therefore, for example, biased towards imagining that other people who display rationality are probably honest, or that other people who display honesty are probably rational), and you damage your ability to pursue the truth by not putting it in the values category where it belongs, where it would lead you to try to cheaply buy more of it.
Of course, maybe you're just the kind of guy who really loves mixing his value for The Truth in with his rationality into a weird soup. That'd explain your actions without making you a walking violation of any kind of mathematical law; it'd just be a really weird thing for you to innately want.
I am still trying to find a better way to phrase this argument such that someone might find it persuasive of something, because I don't expect this phrasing to work.
I say and write things[3] because I consider those things to be true, relevant, and at least somewhat important. That by itself is very often (possibly usually) sufficient for a thing to be useful in a general sense (i.e., I think that the world is better for me having said it, which necessarily involves the world being better for the people in it). Whether the specific person to whom the thing is nominally or factually addressed will be better off as a result of what I said or wrote is not my concern in any way other than that.
I think I meant something subtly different from what you've taken that part to mean. I think you understand that if other people noticed a pattern that everything you said was false, irrelevant, or unimportant, they would eventually stop bothering to listen when you talk, and this would mean you'd lose the ability to get other people to know things, which is a useful ability to have. This is basically my position! Whether the specific person you address is better off in each specific case isn't material, because you aren't trying to always make them better off; you're just trying to avoid being seen as someone who predictably doesn't make them better off. I agree that calculating the full expected consequences to every person of every thing you say isn't necessary for this purpose.
No, this is a terrible idea. Do not do this. Act consequentialism does not work. ... Look, this is going to sound fatuous, but there really isn’t any better general rule than this: you should only lie when doing so is the right thing to do.
I agree that Act Consequentialism doesn't really work. I was trying to be a Rule Consequentialist instead when I wrote the above rule. I agree that that sounds fatuous, but I think the immediate feeling is pointing at a valid retort: you haven't operationalized this position into a decision process that a person can actually follow (or even pretend to follow).
I took great effort to write down my policy as something explicit, in terms a person could try to follow (even though I am willing to admit it is not really correct, mostly because of finite-agent problems), because a person can't be a real Rule Consequentialist without actually having a Rule. What is the rule for "Only lie when doing so is the right thing to do"? It sounds like an instruction to pass the act to my rightness calculator, but if I program that rule into my rightness calculator and then give it any input, it gets into an infinite loop. I have an Act Consequentialist rightness calculator as a backup, but if I pass the rule "only lie when doing so is the right thing to do" into that as a backup, I'm right back at doing act consequentialism.
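To make the loop concrete, here's a minimal sketch (the function and argument names are hypothetical, purely for illustration) of what happens when that rule is fed back into the rightness calculator it presupposes:

```python
# A "rightness calculator" handed the rule "only lie when doing so is the
# right thing to do" has to call itself to decide whether the lie is right,
# and never bottoms out in an actual decision.

def is_right_to_lie(situation, depth=0):
    # The rule: lie only if lying is the right thing to do.
    # To evaluate that, we must first ask... whether lying is the right thing to do.
    if depth > 5:  # guard so this demo terminates instead of recursing forever
        raise RecursionError("the rule never bottoms out in a decision")
    return is_right_to_lie(situation, depth + 1)

try:
    is_right_to_lie({"interviewer_asked": "do you like widgets?"})
except RecursionError as e:
    print("Infinite loop:", e)
```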
If you can write down a better rule for when to lie than what I've put above (one that is also better than "never", "only by coming up with galaxy-brained ways it technically isn't lying", or Eliezer's meta-honesty idea, all of which I've read before), I'd consider you to have (possibly) won this issue, but that's the real price of entry. It's not enough to point out the flaws where all my rules don't work; you have to produce rules that work better.
I do have examples that motivated me to write this, but they're all examples where people are still strongly disagreeing about the object level of what happened, or possibly lying about how they disagree on the object level while pretending they're committed to honesty. I thought about putting them in the essay but decided it wouldn't be fair, and I didn't want to distract from my actual thesis with a case analysis of how maybe all my examples have a problem other than over-adherence to bad honesty norms. Should I put them in a comment? I'm genuinely unsure. I could probably DM you them if you really want?
EDIT: okay, fine, you win. The public examples with nice writeups that I am most willing to cite are: Eneasz Brodski, Zack M Davis, Scott Alexander. There are other posts related to some of those, but I don't want to exhaustively link everything anyone's said about them in this comment. I claim there are other people making what are in my opinion similar mistakes, but I'm either unable or unwilling to provide evidence, so you shouldn't believe me. I would prefer to leave what any of those things have to do with my position as an exercise for the reader, because this whole line of inquiry seems incredibly cursed.
If that was me not getting it, then probably I am not going to get it and continuing to talk has diminishing returns, but I'll try to answer your other questions too, and am happy to continue replying in what I hope comes across as mutual good faith.
What did you think about my objection to the Flynn example
It was incredibly cute but the kind of thing where people's individual results tend to vary wildly. I am glad you are happy even if it was achieved by a different policy, but I don't think any of my main claims are strongly undermined by it.
or the value of the rationalist community as something other than an autism support group
I agree the rationalist community is not actually an autism support group, and in particular that it has value as a way for people who want to believe true things to collaborate on getting more accurate beliefs, as well as for people who want to improve the ways they think, make better decisions, optimise their lives, etc. I think my thesis that truthtelling does not have the same essential character as truthseeking or truthbelieving is, if not correct, at least coherent and justifiable, and can be argued on its merits. I can want to believe true things so I can make better decisions without having an ideological commitment to honest speech, and people can collaborate on reaching true conclusions by interrogating positions and seeking evidence rather than expecting and assuming honesty. For example, I do not think that at any point in interrogating my claims in this post you have had to assume I am honest, because I am trying to methodically attach my reasoning and justifications to everything I say, and am not really expecting to be believed where I don't.
The way that you constructed the hypothetical there was plenty of time to come up with an honest way to talk about how much he enjoyed widgets.
This seems like a non-central objection. If it is your only objection, note that I could with more careful thought have constructed a hypothetical where there was even more time pressure and an honest way to achieve their goal was even less within reach, and then we'd be back at the position my first hypothetical was intended to provoke. Unless, I suppose, you think there is no possible plausible social situation ever where refusing to lie predictably backfires, but I somehow really doubt that.
I also wouldn’t shoot someone so I could tell someone else the truth. I don’t know where you got these numbers.
The only number in my "how much bad is a lie if you think a lie is bad" hypothetical is taken from https://www.givewell.org/charities/top-charities under "effectiveness", rounded up. The assumption that you have to assign a number is a reference to coherent decisions imply consistent utilities, and the other numbers are made up to explore the consequences of doing so.
One of the things I value is people knowing and understanding the truth, which I find to be a beautiful thing. It’s not because someone told me to be honest at some point, it’s because I’ve done a lot of mathematics and read a lot of books and observed that the truth is beautiful.
This is a more interesting reason than what I had (pessimistically) imagined, and I would count it a valid response to the side point I was making that intrinsic concern for personal truthtelling is prima facie weird. I think I agree with you that the truth is beautiful; I also read mathematics for fun and have observed it and felt the same way. I just don't attach the same feeling to honest speech. I would want to retort that people knowing the truth is not always best served by you saying the truth, and you could still justify making terribly cutthroat utilitarian trade-offs around, e.g., committing fraud to get money to fund teaching mathematics to millions of people in the third world, since it increases the total number of people knowing and understanding the truth. I also acknowledge regular utilitarians don't behave like that, for obvious second-order reasons, but my position is only that you have to think through the actual decision and not just assume the conclusion.
I feel like you sort of ignored my stronger points ... without engaging with my explanation of how it doesn’t miss the point
If I ignored your strongest argument it was probably because I didn't think it was central, didn't think it was your strongest, or otherwise misunderstood it. I'm actually unsure, looking back, which part you meant for me to focus on. The "Sure, we judge actions by their consequences, but we do not judge all actions in the same way. Some of them are morally repugnant, and we try very, very hard to never take them unless our hands are forced" part, maybe? The example you give is torture, which 1) always causes immediate severe pain, by the definition of torture, and 2) has been basically proven to never be useful for any goal other than causing pain in any situation you might reasonably end up in. Saying torture is always morally repugnant is much better supported by evidence, and is very different from saying the same of an action that frequently hurts nobody and happens a hundred times a day in normal small talk.
I enjoyed reading this reply, since it's exactly the position I'm dissenting against phrased perfectly to make the disagreements salient.
I don't know, he could say "Honestly, I enjoy designing widgets so much that others sometimes find it strange!" That would probably work fine. I think you can actually get away with a bit more if you say honestly first and then are actually sincere. This would also signal social awareness.
I think this is what Eliezer describes as "The code of literal truth only lets people navigate anything like ordinary social reality to the extent that they are very fast on their verbal feet". This reply works if you can come up with it, or notice this problem in advance and plan it out, but in a face-to-face interview it takes quite a lot of skill (more than most people have) to phrase something like that so it comes off smoothly on a first try, without pausing to think for ten minutes. People who do not have the option of doing this, because they didn't think of it quickly enough, get to choose between telling the truth as it sits in their head or the first lie they come up with in the time it took the interviewer to ask the question.
I'm a bit of a rationalist dedicate/monk and I'd prefer to fight than lie - however I don't think everyone is rationally or otherwise compelled to follow suit, for reasons that will be further explained.
Now, you're probably going to say that I can't convince you by pure reason to intrinsically value the truth. That's right. However, I also can't convince you by pure reason to intrinsically value literally anything
This is exactly the heart of the disagreement! Truthtelling is a value, and you can if you want assign it so high a utility score that you wouldn't tell one lie to stop a genocide, but that's a fact about the values you've assigned things, not about what behaviours are rational in the general case or whether other people would be well served by adopting the behavioural norms you'd encourage of them. It shouldn't be treated as intrinsically tied to rationalism, for the same reason that Effective Altruism is a different website. In the general case, do the actions that get you the things you value; lying is just an action, one that harms some things and benefits others that you may or may not value.
I could try to attack the behaviour of people claiming this value if I wanted, since it doesn't seem to make a huge amount of sense: if you value The Truth for its own sake while still being a Utilitarian, how much disutility is one lie, in human lives? If it is more than 1/5000 of a life, then since the average person tells more than 5000 lies in their life, a lifetime of lying outweighs a life and it'd be a public good to kill newborns before they can learn language and get started; and if it is less than 1/5000, then since GiveWell sells lives for ~$5k each, one lie comes to less than a dollar, so you should be happy lying for a dollar. This is clearly absurd, and what you actually value is your own truthtelling, or maybe the honesty of specifically your immediate surroundings, but again: why? What is it you're actually valuing, and have you thought about how to buy more of it?
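To spell the dilemma out, a back-of-envelope sketch using only the rough figures from the paragraph above (~5000 lies per lifetime, ~$5k per life; both are assumptions from the text, not data):

```python
# One branch just above 1/5000 of a life per lie, one just below.
LIES_PER_LIFETIME = 5000
DOLLARS_PER_LIFE = 5000  # rough GiveWell-style price of a life

for lie_cost_in_lives in (1 / 4000, 1 / 6000):
    lifetime_lie_cost = lie_cost_in_lives * LIES_PER_LIFETIME  # in lives
    dollars_per_lie = lie_cost_in_lives * DOLLARS_PER_LIFE     # in dollars
    print(f"lie = {lie_cost_in_lives:.5f} lives -> "
          f"a lifetime of lying costs {lifetime_lie_cost:.2f} lives, "
          f"one lie prices out at about ${dollars_per_lie:.2f}")

# Above 1/5000: a lifetime of lying outweighs a whole life.
# Below 1/5000: a single lie is worth less than a dollar.
```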
The meaning of the foot fetish tangent at the start is: I don't understand this value that gets espoused as so important, or how it works internally. It'd be incredibly surprising to learn evolution baked something like that into the human genome. I don't think Disney gave it to you. If it is culture, it is not the sort of culture that happens because your ancestors practiced it and obtained success; instead, your parents told you not to lie because they wanted the truth from you whether it served you well to give it to them or not, and then when you grew up you internalised that commandment even as everyone else was visibly breaking it in front of you. I have a hard time learning the internals of this value that many others claim to hold, because they don't phrase it like a value; they phrase it like an iron moral law that they must obey up to the highest limits of their ability, without really bothering to do consequentialism about it, even those here who seem like devout consequentialists about other moral things like human lives.
I'm not so much of a pragmatist as to say that you should run naked scams (for several reasons, including that your students will notice when they don't become millionaires later and possibly be vengeful about it, that other smarter people will notice the obviously fraudulent offer and assume everything else you offer is some kind of fraud too, that the greater prevalence of fraud in the economy will make everyone less willing to buy anything until the whole economy stops, etc.), but I am enough of a pragmatist to demand actual reasons why it isn't wise or why it will have negative consequences.
As for the landlord airbnb case, well I'd want to first ask questions about circumstance. You claimed a bandit doesn't have the right to the information, do you have a moral theory by which to say whether the landlord has a right to the information or not? Is the landlord already basically assuming you'll do this because everybody else does and they've factored it into the price of the rent, or would they spend resources trying to stop you? How much additional wear and tear would it cause, and would it be unfair to the landlord to impose those damages without additional compensation?
For the health inspector rats case, I'd similarly think it depends on whether the rats are a real safety hazard likely to make customers sick, or just a politically imposed rule that doesn't really matter and that you're arbitrarily being forced to comply with anyway (in which case, sure, cover it up).
I agree and have edited. Sorry for overstating the position here (though not in the original post).
Are you sure that at the critical point in the plan EDT really would choose to take randomly from the lighter pair rather than the heavier pair? She's already updated on knowing the weights of the pairs, and surely a random box from the heavier pair has more money in expectation than a random box from the lighter pair; its expected value is just half the pair's total weight?
If it was a tie (as it certainly will be) it wouldn't matter. If there's somehow not a tie, one Host made an impossible mistake: if she chooses from the lighter pair she can expect the Host's mistake was not putting money in, since putting it in would have been optimal (so the boxes have 301, 301, 301, 201, and choosing from the lighter pair has expected value 251), but if she chooses from the heavier pair the Host's mistake was putting money in when it shouldn't have (so the boxes have 101, 101, 101, 1), and choosing from the heavier pair guarantees 101, which would be less?
Actually, okay, yeah, I'm persuaded that this works. I imagined when I first wrote this that weighing a group of boxes lets you infer its total value, so she'd defect on the plan and choose from the heavier pair, expecting more returns that way; but so long as she only knows which pair of boxes is heavier (a comparative weighing) instead of exactly how much each pair weighs (from which she would infer the total amount of money in each pair), she can justify choosing the lighter pair and get 301, I think?
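As a sanity check on the arithmetic two paragraphs up (the amounts and pairings are taken straight from that reasoning; the wider game setup is assumed from context):

```python
from statistics import mean

# If she picks from the lighter pair, she expects the Host's mistake was
# not putting money in: boxes are 301, 301, 301, 201; lighter pair is (301, 201).
print(mean([301, 201]))   # 251 expected from a random box of the lighter pair

# If she picks from the heavier pair, she expects the Host's mistake was
# putting money in: boxes are 101, 101, 101, 1; heavier pair is (101, 101).
print(mean([101, 101]))   # 101 guaranteed, which is indeed less
```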
What would you say to the suggestion that rationalists ought to aspire to have the "optimal" standard of truthtelling, and that standard might well be higher or lower than what the average person is doing already (since there's no obvious reason why they'd be biased in a particular direction), and that we'd need empirical observation and seriously looking at the payoffs that exist to figure out approximately how readily to lie is the correct readiness to lie?
I think a distinction can be made between the sort of news article that's putting a qualifier in a statement because they actually mean it, and are trying to make sure the typical reader notices the qualifier, and the sort putting "anonymous sources told us" in front of a claim that they're 99% sure is made up, and then doing whatever they can within the rules to sell it as true anyway, because they want their audience of rubes to believe it. The first guy isn't being technically truthist, they're being honest about a somewhat complicated claim. The second guy is no better than a journalist who'd outright lie to you in terms of whether it's useful to read what they write.
Yeah I think it's an irrelevant tangent where we're describing the same underlying process a bit differently, not really disagreeing.
I think I disagree with this framing. In my model of the sort of person who asks that, they're sometimes selfish-but-honourable people who have noticed that telling the truth ends badly for them, and will do it if it is an obligation but would prefer to help themselves otherwise; but they are just as often altruistic-and-honourable people who have noticed that telling the truth ends badly for everyone and are trying to convince themselves it's okay to do the thing that will actually help. There are also selfish-but-cowardly people who just care whether they'll be socially punished for lying, and selfish-and-cruel people champing at the bit to punish someone else for it, and similar, but moral arguments don't move them either way, so it doesn't matter.
More strongly, I disagree because I think a lot of people have harmed themselves or their altruistic causes by failing to correctly determine where the line is, either lying when they shouldn't or not lying when they should, and it is to the community's shame that we haven't been more help in illuminating how to tell those cases apart. If smart, hardworking people are getting it wrong so often, you can't just say the task is easy.
This is, on the whole, a fair response. I am not sure I can say that you have changed my mind without more detail, and I'm not going to take down my original post (as long as there isn't a better post to take its place) because I still think it's directionally correct, but thank you for your words.