What you are describing is my native way of thinking. My mind fits large amounts of information together into an aesthetic whole. It took me a while to figure out that other people don't think this way, and that they can't easily just absorb patterns from evidence.
This mode of thinking has been described as Introverted Thinking in Ben Kovitz's obscure psychology wiki about Lenore Thomson's obscure take on Jungian psychology. Some of you are familiar with Jungian functions through MBTI, the Myers-Briggs Type Indicator. Introverted Thinking (abbreviated Ti) is the dominant function of the INTP type.
It will only take a few quotes to illustrate that you are talking about the same thing:
Introverted Thinking (Ti) is the attitude that beneath the complexity of what is manifest (apparent, observed, experienced) there is an underlying unity: a source or essence that emerges and takes form in different ways depending on circumstances. What is manifest is seen as a manifestation of something. From a Ti standpoint, the way to respond to things is in a way that is faithful to that underlying cause or source and helps it emerge fully and complete, without interference from any notion of self. The way to understand that underlying essence is to learn to simultaneously see many relationships within what is manifest, to see every element in relation to every other element, the relationships being the "signature" of the underlying unity. This can only be experienced directly, not second-hand.
Introverted thinking is a form of mental representation in which every input, every variable, every aspect of things is considered simultaneously and holistically to perceive causal, mathematical, and aesthetic order. What you know by Ti, you know with your hands, your eyes, your muscles, even a tingling sensation "downstairs" because you sense that everything fits. Every variable is fair game to vary, every combination of variables worthy of consideration; the only ultimate arbiter is how well the parts form a unified whole rather than a jumble.
Introverted Thinking (Ti) is contrasted with Extraverted Thinking (Te):
From the Te perspective, anything for which you can't give an operational definition in terms of measurement (an "objective test") doesn't exist. The decision criteria are defined not exactly in terms of the things: they're defined in terms of observations of a sort that anyone can do and get the same result. You put the totality of the real-world situation onto your scales, so that all causal factors come into play--both known and unknown. What's accessible to you is the reading on the scale: that and only that is the basis for your decision.
As a dominant function, Te typically leads one to pursue and collect reliable ways of making decisions to get predictable results. The repeatability of a process becomes one of the main criteria for finding it valuable. Repeatable processes are valuable from a Te perspective because they enable you to make agreements with other people, where there is no doubt as to whether each party has fulfilled its part of the agreement. Making and delivering on promises is often how a Te attitude leads one to understand ethics.
Introverted Thinking about language:
From the Ti standpoint, communication is possible only between people who share some common experience of the things that they're talking about. To say something that you can understand, I need to relate it logically to things in your own experience. To show you how far a piece of wood bends, instead of giving a numerical measure (Te), I'd either encourage you to bend a piece of wood yourself, or find some mathematically similar thing that you know about and relate wood-bending to that. Words cannot be defined prior to the reality that they're about; words and criteria defined independently of the reality would be meaningless. The world itself provides a natural set of reference points, arising from the real, causal structure of things. Ultimately, to talk is to say, "I mean *that*."
Introverted Thinking uses language and concepts merely as pointers to patterns in reality that are vastly more complex than anything that can be described in words. In contrast, Extraverted Thinking is about step-by-step justification according to shared language and criteria. A common failure mode of Extraverted Thinking is King on the Mountain, which I think everyone will instantly recognize.
Introverted Thinking and Extraverted Thinking, along with Extraverted Intuition and Introverted Intuition, are combined to create rationality. Extraverted Intuition provides the idea generation, Introverted Thinking provides pattern recognition, Extraverted Thinking handles justification, and Introverted Intuition avoids bias. According to the Jung-Thomson-Kovitz theory, all of these modes of thinking provide benefits and failure modes. For example, a failure mode of Introverted Thinking is that since it is aesthetic and subjective, it can be very hard for Introverted Thinkers with different inputs to reconcile worldviews if they differ, whereas Extraverted Thinkers could slowly hammer out agreement step-by-step.
LessWrong seems mostly dominated by INTJs, who have Introverted Intuition and Extraverted Thinking. They are mostly focused on justification and bias. These are important skills, but Introverted Thinking is important for marshaling the priors of the totality of your experience.
My understanding of consequentialism is similar to yours and TheOtherDave's. In a chain of events, I consider all events in the chain to be a consequence of whatever began the chain, not just the final state.
My views on lying are similar to your friend's. Thanks for having a charitable reaction.
After reading some of the attitudes in this thread, I find it disconcerting to think that a friend might suddenly view me as having inscrutable or dangerous psychology if they found out that I believe in white lies in limited situations, like the vast majority of humans. It's distressing to think that, upon finding this out, they might be so confused about my ethics or behavior patterns... even though presumably, since they were friends with me, they had a positive impression of my ethics and behavior before.
Maybe finding out that a friend is willing to lie causes you to change your picture of their ethics (rhetorical "you"). But why is it news that they lie sometimes? The vast majority of people do. Typical human is typical.
Maybe the worry is that if you don't know the criteria by which your friends lie, then they might lie to you without you expecting it.
If so, then perhaps there are ways to improve your theory of mind regarding your friends, and then avoid being deceived. You could ask your friends about their beliefs about ethics, or try to discover common reasons or principles behind "white lies." While people vary on their beliefs about lying, there is probably a lot of intersubjectivity. Just because someone isn't aware of intersubjective beliefs about the acceptability of lying, it doesn't mean that their neurotypical friends are capricious about lying. (Of course, if future evidence shows that everyone lies in completely unpredictable ways, then I would change my view.)
For example, if you know that your friend lies in response to compliment-fishing, then you can avoid fishing for compliments from them, or discount their answers if you do. If you know that your friend lies to people he believes are trying to exploit him, then you don't need to be worried about him lying to you, unless (a) you plan on exploiting him, or (b) you worry that he might think you are exploiting him even when you aren't, and that he would lie rather than tell you.
If that's the case, then the real worry should be that your friend might feel antagonized by you without you realizing it and without him being able to talk to you about it. As long as you have good reasons to believe that you won't have conflict with your friend, or work it out if conflict occurs, then your friend lying for adversarial reasons is probably not likely.
Just because your friends don't give you (rhetorical "you") an exhaustive list of the situations where they might lie, or a formalized set of principles, it doesn't mean that you are in the dark about when they might lie, unless your theory of their mind leaves you in the dark about their behavior in general.
Means, motive, opportunity: they’ve demonstrated that they have the means, that while they dislike lying they’re not absolutely opposed to it, and there would certainly have been plenty of opportunities for them to lie to me. And while I cannot imagine a reason for them to lie to me, I also don’t have a full understanding of how their mind works, so I must take into account the possibility of something unforeseen.
As you correctly observe in your excellent trust post, unforeseen circumstances are always a possibility in relationships. I think your post leads to the conclusion that trusting a person is related to your theory of mind regarding them.
Never-lies vs believes-that-at-least-some-lies-are-justified is probably not a very useful way to reduce unforeseen conflict. Someone who says that they "never lie" could have a different definition of "lies" than you. They might be very good at telling the literal truth in a deceptive way. They might change their ethical view of lying without telling you. They might lie accidentally. Or they might be lying that they "never lie," or they may be speaking figuratively (and mean "I never lie about the important stuff").
The most useful distinction between people is not whether they will lie, but when. Predicting when your friends might lie is not just a function of your friends' behavior, it's also a function of your theory of mind.
Certain interactions with the government (assuming you are behaving peacefully) seem like a special case of dealing with an adversarial or exploitative agent. When an agent has social power over you, they might easily be able to harm or inconvenience you if you answer some questions truthfully, whereas it would be hard for you to harm them if you lied. Telling the truth in that case hurts you, but lying harms nobody (aside from foiling the exploitative plans of the other agent, which doesn't really count).
A more mundane example would be if a website form asks you for more personal information than it needs, and requires this information. For instance, let's say the website asks for your phone number or address when, suspiciously, there is no reason why they should need to call you or ship you anything. If you fill in a false phone number to be able to submit the form, then you are technically lying to them, but I think it's justified. Same thing for websites that require you to fill in a name, but where they don't actually need it (e.g. unlike financial transactions, or social networks that deal with real identities).
The website probably isn't trying to violate your rights, but it's trying to profit from your private information, either for marketing to you (which you consider pointless), or selling the information (which is exploitative, and could result in other people intruding into your privacy). Gaining your info will predictably create zero-sum or negative-sum outcomes. Lying is an appropriate response to exploitation attempts like these. And if they aren't trying to exploit your private information, or use it to give you a service, then they don't really need it, so lying doesn't hurt them at all, and you might as well do it to be safe from spam.
Telling the truth is a good default because human relationships are cooperative or neutral by default. But the ethics of lying are much more complex in adversarial or exploitative situations.
Yes. There are also questions which interviewers are legally prohibited from asking during job interviews, which probably have good moral reasons behind them, not just legal ones.
In my recent comments, I've been developing the concept of a "right to information," or "undeserving questions."
Good questions.
If you know that the other person believes that the information isn't private, then you know that they aren't knowingly doing something which they believe is prying. So they don't have mens rea for being an asshole by their own standards. (Yes, I believe that sometimes people are assholes by their own standards, and these are exactly the sort of people who don't deserve the truth about my private matters.)
If they don't know my feelings about privacy, then they are not knowingly intruding. But if they do know my views on the privacy of that information, they are knowingly asking for information that I consider private. That could be...disrespectful. If my feelings about privacy on that matter are strong, and they ask anyway, then they may have mens rea for being an asshole by my standards. Perhaps they believe that my standards are wrong and that I should not judge them as an asshole for violating them.
If I thought there was legitimate disagreement about whether the information should be considered private, then I wouldn't view the other person as defecting on me, and I wouldn't feel motivated to lie to them to punish their defection. If I still felt motivated to lie, it would be for purely self-defensive reasons (for instance, I might lie to conceal health issues which don't affect anyone else).
As examples, I think there are many questions between relationship partners, where the ethics of privacy vs. transparency are up for debate, e.g. "how many partners have you had in the past?", "do you still have feelings for your ex?", "have you had any same-sex partners?"
On the other hand, if I thought their view of privacy was ridiculous, and they couldn't defend their view against mine, then I would be pretty annoyed if they still pressured me for information anyway. That sounds like a breakdown of cooperative communication, or the beginning of a fight. Lying might be an acceptable way to get out of this situation.
Surely there is some point where communication becomes sufficiently adversarial that you are no longer obligated to tell the truth? Especially if both people can tell there is a conflict, so they know to discount the other's truthfulness?
For example, if your nosy aunt says "I feel that your current dating situation shouldn't be private," you say "I think it should be private," and she continues to ask about your dating situation, then I think you are justified in lying. Your aunt is knowingly pushing for information that you want to keep hidden. She has no defensible argument that her view of privacy should trump yours.
Since you have stated that you think your dating situation should be private, your aunt shouldn't even expect to get the truth out of you here, so if you lie, there is less danger of her being deceived. People are known to lie about matters that they consider private, and your aunt should take this into account if she chooses to needle you.
When I'm discussing lying to a prying person, I'm mostly imagining conversations that are non-cooperative or hostile, or which involve protecting secrets that mostly affect oneself. I am imagining nosy relatives, slanderous reporters, totalitarian judges, or ignorant coworkers who ask you why you are taking pills. Remember, my ethics generally prefers evading or refusing prying questions. If evasion doesn't work, that suggests that the discussion has become uncooperative or that you have been cornered.
The thing is that the prying person likely considers the private affair to potentially involve wrongdoing.
Maybe. There are several scenarios:
A prying person might believe that you are engaged in actual wrongdoing.
A prying person believes that you are engaged in something that they think is wrong, but which actually isn't wrong.
A prying person doesn't believe that you are doing anything wrong. They are just trying to get on your case because they are controlling or malicious. Or they think it's fun.
In SaidAchmiz's example of a nosy relative, it's not at all clear that the relative believes he might be engaging in any moral infraction, unless that relative has an incredibly expansive notion of morality, as some relatives do.
So if you were in a culture that permitted, say, slavery, and considered how one treats one's slaves a private affair, would you still be willing to apply the above reasoning?
No, and I don't think this is an accurate reading of my comment, though perhaps I allowed for confusion. In my comment, I discuss multiple conditions for ethically lying to people prying into private information:
That information is considered private in the relevant culture, such that the questioner knows (or should know) they are asking for information that is culturally considered private. If they know they are potentially defecting on you, then their behavior is worse. If they don't know they are defecting on you, then their apparent defection may have been a mistake on their part, in which case, you should be less enthusiastic to engage in tit-for-tat defection.
The "private" information does not include " lying to cover up wrongdoing you've committed, or in ways that will cause them tangible harm."
Since slavery is wrongdoing, a slaveholder is not justified in lying about treatment of slaves, even in a past culture where slavery was considered acceptable and private.
Yes, some slaveholders may have believed that slavery was justifiable, and therefore that they were justified in lying to cover up their treatment of slaves. But they were wrong, and they should have known better.
To conclude, I suggest there are some circumstances where it is justified to lie in response to prying questions about private information. This principle is contingent on classifying some kinds of questions as undeserving of true responses. I have not attempted a rigorous or comprehensive discussion of which questions are undeserving; that would be a much longer discussion, and you are welcome to provide your own thoughts if you consider it interesting.
I do believe that cultural notions of privacy are useful to estimate whether a questioner is being an asshole, though norms aren't the only factor. If indeed a questioner is asking a question which should be considered unethical, abusive, or overly intrusive... and that type of question is also culturally recognized as unethical, abusive, or overly intrusive, then the questioner should know that they are being an asshole.
If it's a morally ambiguous situation, where the other person can morally justify getting into your private business, or the ethics or intentions of their questions are unclear, then lying to them to protect your privacy is much less defensible.
These are good questions. It seems like deontologists have difficulty reconciling seemingly conflicting rights.
In my main reply to the original post, I discuss some of the conflicts between truthfulness and privacy. If people have a right to not be lied to, and people also have privacy rights, then these rights could clash in some situations.
I think this is a great post. I fully agree about accepting other people's right to lie... in limited circumstances, of course (which is how I interpreted the post). I figured it was primarily talking about situations of self-defense or social harmony about subjective topics.
I think privacy is very important. Many cultures recognize that some subjects are private or personal, and have norms against asking about people's personal business without the appropriate context (which might depend on friendship, a relationship, consent, etc...). Some "personal" subjects may include health conditions and romantic or dating history.
The ethics of lying when asked about personal subjects seems more complicated. In fact, the very word "lying" may poison the well, as if the default is that people should tell the truth. I do not accept such a default without privacy issues being addressed. I will suggest that people do not have a right to other people's truthful responses about private information by default; whether they do depends on the relationship and context.
If someone asks you for information about yourself in one of these areas, and this request is inappropriate or unethical in the current context, then you are justified in withholding the truth from them.
There are two main ways of withholding the truth: evasion, or lying. As several people in this thread have observed, there are often multiple methods of evading the question, such as exiting the situation, refusing to answer the question, omitting the answer in your response, or remaining silent.
If an evasive solution is feasible, then it's probably morally preferable. But if evasion isn't feasible, because you are trapped in the situation, because refusal to answer the question would lead to greater punishment, or because evading the question would tip off the nosy asker to the truth (which they don't have a right to know), then lying seems like the only option.
While I admire the creative methods proposed in this thread to evade questions, such tactics aren't always cognitively available or feasible for everyone. Sometimes, when dealing with a hostile or capricious questioner, pausing to come up with a creative deflection, or refusing to answer, will indicate weaknesses for them to attack. And if dealing with an ignorant or bumbling (but non-malicious) questioner, refusing to answer a question might cause them more embarrassment than you want.
An example from my recent experience: I was at work, grabbing some Ibuprofen from the kitchen. A new employee walked into the kitchen and asked, "oh, is that Ibuprofen? You're taking it for a headache, right?" I said, "yes."
I lied. I was taking Ibuprofen for a chronic pain condition, which I did not want to reveal.
To me, information about health conditions is private, and I considered the truth to be none of his business. I'm sure there are ways I could have evaded the question, but I couldn't think of any. I viewed his question as a social infraction, but not such a big infraction that I wanted to embarrass him by scolding him, or by explicitly refusing to answer the question (which would be another form of scolding). I didn't sufficiently understand his motivation to want to scold him; maybe he was genuinely curious about what Ibuprofen is used for.
It's possible that he would have liked me to reveal that his question was overly nosy, to improve his social skills in the future and avoid offending people. The problem is that I didn't know him very well, and I couldn't know that he would desire this sort of feedback. In a work context, where social harmony is important, I didn't feel like educating him on this subject. It's too bad that he has no way of learning from his mistake, but it's not my job to provide that lesson when it's costly to me. In situations that don't involve my body's health conditions, I am vastly more enthusiastic about helping other people with epistemic rationality.
I endorse lying as a last resort in response to people being unethically, inappropriately, or prematurely inquisitive about private matters. Conversely, if I want to question someone else about a private matter, I keep in mind the relationship and context, I note that they may not be ready or willing to tell me the truth, and I discount their answers appropriately. That way, I am less likely to be deceived if they feel the need to lie to protect their privacy.
I want to have an epistemically accurate picture of people, but I don't want to inappropriately intrude into their privacy, because I consider privacy valuable across the board. I recognize that other people have traumas and negative experiences which might lead them to rationally fear disclosure of facts about themselves or their state of mind, and that it can be ethical for them to hide that information, perhaps using lies if necessary.
If the topic isn't entirely personal to them, and affects me in tangible ways, then I would expect them to be more truthful, and be less likely to endorse lying to hide information. Lying in order to protect privacy should be a narrowly applied tool, but these situations do come up. Consequently, I agree with the original post that there are at least some situations where we should accept that other people can ethically lie.
Continuing a bit…
It’s truly strange seeing you say something like “Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes.” I already compulsively do the thing you’re talking about training yourself to do! I can’t stop seeing patterns. I don’t claim that the patterns I see are always true, just that it’s really easy for me to see them.
For me, thinking is like a gale wind carrying puzzle pieces that dance in the air and assemble themselves in front of me in gigantic structures, without any intervention by me. I do not experience this as an “ability” that I could “train”, because it doesn’t feel like there is any sort of “me” that is doing it: I am merely the passive observer. “Training” pattern recognition sounds as strange to me as training vision itself: all I have to do is open my eyes, and it happens. Apparently it isn’t that way for everyone?
The only ways I’ve discovered to train my pattern recognition are to feed myself more information of higher quality (because garbage in, garbage out), and to train my attention. Once I can learn to notice something, I will start to compulsively see patterns in it. For someone who isn’t compulsively maxing out their pattern recognition already, maybe it’s trainable.
Another example: my brain is often lining people up in rows of 3 or 4 according to some collection of traits. There might be “something” where Alice has more of it than Bob, and Bob has more of it than Carol. I see them standing next to each other, kind of like pieces on a chessboard. Basically, I think what my brain is doing is some kind of factor analysis where it is identifying unnamed dimensions behind people’s personalities and using them to make predictions. I’m pretty sure that not everyone is constantly doing this, but I could be wrong.
Perhaps someone smarter than me might be able to visualize a larger number of people in multiple dimensions in people-space. That would be pretty cool.
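If the factor-analysis analogy sounds vague, here is a minimal sketch of the kind of thing I mean, assuming Python with numpy and scikit-learn. The people, traits, and ratings are entirely made up for illustration; the two “unnamed dimensions” are just whatever latent factors best explain the made-up data.

```python
# A minimal sketch of the factor-analysis analogy above.
# Everything here (people, traits, ratings) is made up purely for illustration;
# it assumes numpy and scikit-learn are available.
import numpy as np
from sklearn.decomposition import FactorAnalysis

people = ["Alice", "Bob", "Carol", "Dave", "Eve", "Frank"]
traits = ["talkative", "punctual", "risk-taking", "tidy", "argumentative"]

# Rows = people, columns = hypothetical 1-10 ratings on the observed traits.
X = np.array([
    [9, 3, 8, 2, 7],
    [6, 5, 6, 4, 5],
    [3, 9, 2, 9, 2],
    [5, 7, 4, 8, 3],
    [8, 2, 9, 3, 8],
    [4, 8, 3, 7, 4],
], dtype=float)

# Ask for two unnamed latent dimensions underlying the five observed traits.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)  # each person's position on the latent factors

# "Line people up" along the first latent dimension (Alice > Bob > Carol style).
order = np.argsort(-scores[:, 0])
print("Ranking on factor 1:", [people[i] for i in order])

# How strongly each observed trait loads on each unnamed factor.
print("Loadings (traits x factors):")
for trait, row in zip(traits, fa.components_.T):
    print(f"  {trait:15s} {row.round(2)}")
```

The point of the sketch is only that a handful of latent dimensions can compress many observed traits and then be used to rank or compare people, which is roughly what the lining-up-in-rows experience feels like from the inside.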
On a trivial level, everyone can do pattern-recognition to some degree, merely by virtue of being a human with general intelligence. Yet some people can synthesize larger amounts of information collected over a longer period of time, update their synthesis faster and more frequently, and can draw qualitatively different sorts of connections.
I think that’s what you are getting at when you talk about pattern recognition being important for epistemic rationality. Pattern recognition is like a mental muscle: some people have it stronger, some people have different types of muscles, and it’s probably trainable. There is only one sort of deduction, but perhaps there are many approaches to induction.
Luke’s description of Carl Shulman reminds me of Ben Kovitz’s description of Introverted Thinking as constantly writing and rewriting a book. When you ask Carl Shulman a question on AI, and he starts giving you facts instead of a straight answer, he is revealing part of his book.
“Many weak arguments” is not how this feels from the inside. From the inside, it all feels like one argument. Except the thing you are hearing from Carl Shulman is really only the tip of the iceberg because he cannot talk fast enough. His real answer to your question involves the totality of his knowledge of AI, or perhaps the totality of the contents of his brain.
For another example of taking arguments in totality vs. in isolation, see King on the Mountain, which describes an immature form of Extraverted Thinking.
Some of the failure modes of Introverted Thinking involve seeing imaginary patterns, dealing with corrupted input, or having aesthetic biases (aesthetic bias is when you are biased towards an explanation that looks neat or harmonious). Communication is also hard: your true arguments would take a book to describe, if they could even be put into words at all.