Batesian mimicry is optimized to be misleading; "I'll get to it tomorrow" is denotatively false; "I did not have sexual relations with that woman" is ambiguous as to its conscious intent to be denotatively false.

The time on a clock is pretty close to being a denotative statement.

Structure Rebel, Content Purist: people who disagree with me are lying (unless they say "I think that", "My view is", or similar)

Structure Rebel, Content Neutral: people who disagree with me are lying even when they say "I think that", "My view is", or similar

Structure Rebel, Content Rebel: trying to unlock the front door with my back door key is a lie

I asked ChatGPT to explain the image, and it pulled a Westworld "Doesn't look like anything to me" reply on the "language model hallucinations are lies" box:

This image is a play on the concept of "alignment charts," which are often used in role-playing games to define character behavior. Here, it classifies different types of lies based on two axes: content (what is being said) and structure (the nature of the lie).

1. **Content purist** vs. **Content rebel**: This axis distinguishes lies based on whether the statement itself is denotatively false (purist) or if any kind of untruthfulness, including misleading but technically true statements, is considered a lie (rebel).

2. **Structure purist** vs. **Structure rebel**: This axis is about the intent and form of the lie. A structure purist requires a conscious intent to deceive, while a structure rebel includes any form of falsehood, whether intentional or not.

The chart then gives examples of lies for each quadrant:
- **Content purist / Structure purist**: A classic, intentional lie about a specific fact ("I did not have sexual relations with that woman").
- **Content purist / Structure neutral**: A fantastical lie without a clear intent to deceive ("The Jews are lizard people").
- **Content purist / Structure rebel**: This category is left blank in the image but would include any intentional lie that is factually incorrect.
- **Content neutral / Structure purist**: A polite lie with intent to deceive but not about a factual matter ("Oh, thank you so much; you shouldn't have").
- **Content neutral / Structure neutral**: A procrastinating lie ("I'll get to it tomorrow").
- **Content neutral / Structure rebel**: Lies that occur due to language misunderstandings or miscommunications ("false cognates are lying").
- **Content rebel / Structure purist**: This category is left blank in the image but would include lies that are misleading in any way with intent to deceive.
- **Content rebel / Structure neutral**: Lies that result from errors in reasoning or belief, not language ("Batesian mimicry is lying").
- **Content rebel / Structure rebel**: Lies that are not due to intentional deceit or language issues, but other causes, like a broken clock being incorrect about the time.

The chart is a humorous attempt to categorize lies in a way that mirrors the alignment charts used in gaming, highlighting the complexity and variety of what we consider to be "lies."

“structure” feels off the mark for labeling the vertical axis; it feels like it wants to denote the structure of or within the (broadly defined) utterance, but instead it's mapping to part of the structure around it. If I consider some possible replacements:

  • “cause” feels much closer but maybe too specific.
  • “impetus” feels weird for a reason I can't immediately place; maybe because it implies too much of an intentional stance when that's the very thing under question?
  • “process” currently feels best to me of the ones I've considered.
  • “generation”, “root”, “initiation”, “emplacement” are other possibilities.
  • “intent” interestingly doesn't feel as bad as “impetus”, perhaps because it centers on a slightly different communicative mark that is nonetheless interpretable as the right thing.

I think this is a neat improvement on the model from Scott Alexander's list of media lies, from his series on media/news companies:

  1. Reasoning well, and getting things right
  2. Reasoning well, but getting things wrong because the world is complicated and you got unlucky.
  3. Reasoning badly, because you are dumb.
  4. Reasoning badly, because you are biased, and on some more-or-less subconscious level not even trying to reason well.
  5. Reasoning well, having a clear model of the world in your mind, but more-or-less subconsciously and unthinkingly presenting technically true facts in a deceptive way that leaves other people confused, without ever technically lying.
  6. Reasoning well, having a clear model of the world in your mind, but very consciously, and with full knowledge of what you’re doing, presenting technically true facts in a deceptive way intended to make other people confused, without ever technically lying.
  7. Reasoning well, having a clear model of the world in your mind, and literally lying and making up false facts to deceive other people.

In a perfect world, we would have separate words for all of these. In our own world, to save time and energy we usually apply a few pre-existing words to all of them.

(I think that last statement is wrong; we aren't applying a few pre-existing words in order to save time. We're applying pre-existing words because the millions of people who created and established the use of those few pre-existing words were largely clueless about the differences between these 7 separate instances; after all, Scott Alexander wrote this list in 2023 instead of hundreds of years ago.)

I don’t understand the “aspartame” example. What’s the map-territory mismatch there? The territory, presumably, is aspartame, the actual substance… what’s the map?

gwern:

Presumably the 'map-territory' mismatch is 'tastes like glucose (and like it has calories), but is not glucose (and has 0 calories)'.

Is that really “map”? The fact that it tastes like glucose is part of the territory, and so is the fact that it is not glucose… is the idea here that any perceptual quality is “the map”…?

How does this framework deal with visual representations? Is a picture of an orange (a) a lie because it’s not an orange (the Magritte perspective), (b) a lie only if it’s not painted with orange paint, (c) not a lie because it looks like a picture and is a picture, (d) other…?

This feels like it wants to be a poll to me.

My first idea is to just have a poll like the other two we've had recently, where there are 9 entries and you agree or disagree with whether each statement is a lie.

I'm interested in any other suggestions for how to set up a poll.

This definitely does not want to be a poll. (A poll on "Does Foo have the Bar property?" is interesting when people have a shared, unambiguous concept of what the Bar property is and disagree whether Foo has it. Ambiguity about how different senses of Bar relate to each other wants to be either a sequence of multi-thousand-word blog posts, or memes.)

I think there's a variant that wants to be a poll. If you have a lot of different concepts that have lots of different ways of relating to each other, then this wants to be a survey so one can do some sort of factor/cluster analysis to identify the different philosophies people might have, and maybe correlate it with other variables of interest.
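To make that concrete, here is a minimal sketch (my own illustration; the data, item names, and cluster count are all made up for the example) of what such a survey analysis could look like: each respondent answers "is this a lie?" for each example on the chart, and a clustering step looks for distinct philosophies of lying.

```python
# Minimal sketch of a "philosophies of lying" survey analysis.
# The responses here are random placeholders; a real survey would supply them.
import numpy as np
from sklearn.cluster import KMeans

ITEMS = [
    "denotative falsehood", "white lie", "misleading-but-true claim",
    "unconscious falsehood", "procrastinating promise", "Batesian mimicry",
    "false cognate", "hallucinated LLM claim", "stopped clock",
]

# rows = respondents, columns = 0/1 answers to "is this a lie?"
responses = np.random.default_rng(0).integers(0, 2, size=(200, len(ITEMS)))

# Cluster respondents into a handful of candidate philosophies.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(responses)

# Each cluster centroid shows how often that group calls each item a lie.
for label, centroid in enumerate(kmeans.cluster_centers_):
    endorsed = [item for item, weight in zip(ITEMS, centroid) if weight > 0.5]
    print(f"cluster {label}: tends to call these lies -> {endorsed}")
```

Correlating the resulting cluster labels with other variables of interest would then be a separate, straightforward step.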

I think you could use a better example for "structure purist, content neutral": that's where carefully crafted deception (without being actually false) would go, and you undersell it by using a polite "white lie" as your central example.

"You could save up to 15% or more on car insurance"?

(maybe too political but TBH the best example) "Iraq’s government openly praised the attacks of September the 11th. And al Qaeda terrorists escaped from Afghanistan are known to be in Iraq."

Hmmm... what if I require intent, but the intent need not be conscious? What makes intent specifically conscious is simply that you model yourself as having the intent; a kind of map-territory correspondence between your intent (territory) and your self-model of your intent (map). We can be conscious of our intentions, but it is not the intentions themselves that are conscious.

In fact, I consider it more dishonest for people to have dishonest intentions they are unaware of than for them to knowingly lie. Insofar as the liar is not making excuses even in his own mind, I would call it an "honest lie," a term I take from Nietzsche, whose point was that most people of his time were insufficiently honest to be capable of such a lie.

As for the content axis, I am content-neutral.

It'd be cool if this were interactive and highlighted the things that you consider a lie based on where you are on the chart, since any particular stance implies that things some other stances consider to be lies are also lies to you.

I think, but am not sure, that if someone thinks a particular box has a lie in it, then they would also think anything directly above or to the left of that box is a lie. E.g., if you think false cognates are lies, then you're probably also not on board with "I'll get to it tomorrow."
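That claim has a simple structure: calling the example in one box a lie commits you to every box above it and to its left. Here is a minimal sketch (my own illustration; the orientation, with purist at the top/left and rebel at the bottom/right, is an assumption about the chart) of computing that closure:

```python
# Minimal sketch of the "above or to the left" claim on the 3x3 chart.
AXES = ["purist", "neutral", "rebel"]  # index 0 = top/left, 2 = bottom/right

def also_lies(structure: str, content: str) -> list[tuple[str, str]]:
    """If the (structure, content) box counts as a lie to you, these boxes should too."""
    s, c = AXES.index(structure), AXES.index(content)
    return [(AXES[i], AXES[j]) for i in range(s + 1) for j in range(c + 1)]

# E.g. calling false cognates (structure rebel, content neutral) lies implies
# "I'll get to it tomorrow" (structure neutral, content neutral) is one too.
print(also_lies("rebel", "neutral"))
```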