I'd categorize that as an exfohazard rather than an infohazard.
Info on how to build a nuke using nothing but parts of a microwave doesn't harm the bearer, except possibly by way of some other cognitive flaw/vulnerability (e.g. difficulty keeping secrets).
Maybe "cognitohazard" is a closer word to the thing I'm trying to point towards. Though, I would be interested in learning about pure infohazards that aren't cognitohazards.
(If you know of one and want to share it with me, it may be prudent to dm rather than comment here)
We currently live in a world full of double-or-nothing gambles on resources. Bet it all on black. Invest it all in risky options. Go on a space mission with a 99% chance of death, but a 1% chance of reaching Jupiter, which has about 300 times the mass-energy of Earth, and none of those pesky humans that keep trying to eat your resources. Challenge one such pesky human to a duel.
Make these bets over and over again and your chance of total failure (i.e. death) approaches 100%. When convex agents appear in real life, they do this, and very quickly die. For these agents, that is all part of the plan. Their death is worth it for a fraction of a percent chance of getting a ton of resources.
But we, as concave agents, don't really care. We might as well be in completely logically disconnected worlds. Convex agents feel the same about us, since most of their utility is concentrated on those tiny-probability worlds where a bunch of their bets pay off in a row (for most value functions, that means we die). And they feel even more strongly about each other.
This serves as a selection argument for why agents we see in real life (including ourselves) tend to be concave (with some notable exceptions). The convex ones take a bunch of double-or-nothing bets in a row, and, in almost all worlds, eventually land on "nothing".
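To make the selection argument concrete, here is a toy calculation. It is my own sketch, not something from the original comment: it assumes fair double-or-nothing coin flips and stand-in utility functions (sqrt(x) for a concave agent, x^2 for a convex one). It shows why the concave agent declines a long chain of such bets while the convex agent takes every one, even though its survival probability goes to zero.

```python
# Toy model (illustration only): an agent starts with resources x and is offered
# n independent, fair double-or-nothing bets on everything it has.
# It survives all n bets with probability (1/2)**n, ending with (2**n) * x;
# in every other branch it ends with nothing.

def expected_utility(utility, n_bets, start=1.0, p_win=0.5):
    """Expected utility after n_bets all-or-nothing double-up gambles."""
    p_survive = p_win ** n_bets          # probability every single bet pays off
    jackpot = start * 2 ** n_bets        # resources in the one surviving branch
    return p_survive * utility(jackpot)  # other branches contribute utility(0),
                                         # which is 0 for the stand-ins below

def concave(x):   # risk-averse stand-in utility
    return x ** 0.5

def convex(x):    # risk-loving stand-in utility
    return x ** 2

for n in (1, 10, 50):
    print(f"n={n:2d}  P(alive)={0.5 ** n:.1e}  "
          f"E[U_concave]={expected_utility(concave, n):.2e}  "
          f"E[U_convex]={expected_utility(convex, n):.2e}")

# E[U_concave] shrinks toward 0 as n grows, so the concave agent declines the chain
# of bets. E[U_convex] keeps growing even as P(alive) vanishes, so the convex agent
# takes every bet and, in almost all worlds, ends on "nothing".
```

The numbers just restate the point above: the convex agent's expected utility is dominated by the vanishing-probability branch where every bet pays off.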
> If you're thinking without writing, you only think you're thinking.
>
> - Leslie Lamport
This seems... straightforwardly false. People think in many different modalities. Translating those modalities into words is not always trivial. Even if by "writing", Lamport means any form of recording thoughts, this still seems false. Oftentimes, an idea incubates in my head for months before I find a good way to represent it as words or math or pictures or anything else.
Also, writing and thinking are separate (albeit closely related) skills, especially when you take "writing" to mean writing for an audience, so the thesis of this Paul Graham post is also false. I've been thinking reasonably well for about 16 years, and only recently have I started gaining much of an ability to write.
Are Lamport and Graham just wordcels committing the typical mind fallacy, or is there more to this that I'm not seeing? What's the steelman of the claim that good thinking == good writing?
Contrary to what the current wiki page says, simulacrum levels 3 and 4 are not just about ingroup signalling. See these posts and more, as well as Baudrillard's original work if you're willing to read dense philosophy.
Here is an example where levels 3 and 4 don't relate to ingroups at all, which I think may be more illuminating than the classic "lion across the river" example:
Alice asks "Does this dress make me look fat?" Bob says "No."
Depending on the simulacrum level of Bob's reply, he means:

1. "You don't, in fact, look fat in that dress."
2. "I want you to believe you don't look fat in that dress."
3. "I care about your feelings."
4. "I want you to believe that I care about your feelings."
Here are some potentially better definitions, of which the group association definitions are a clear special case:
1. Communication of object-level truth.
2. Optimization over the listener's belief that the speaker is communicating on simulacrum level 1, i.e. the desire to make the listener believe what the speaker says.
These are the standard old definitions. The transition from 1 to 2 is pretty straightforward. When I use 2, I want you to believe I'm using 1. This is not necessarily lying. It is more like Frankfurt's bullshit. I care about the effects of this belief on the listener, regardless of its underlying truth value. This is often (naively) considered prosocial; see this post for some examples.
Now, the transition from 2 to 3 is a bit tricky. Level 3 is a result of a social equilibrium that emerges after communication in that domain gets flooded by prosocial level 2. Eventually, everyone learns that these statements are not about object-level reality, so communication on levels 1 and 2 becomes futile. Instead, we have:

3. Communication of facts about the speaker: what they care about, whom they're aligned with, which social equilibria they're participating in.

E.g. that Bob cares about Alice's feelings, in the case of the dress, or that I'm with the cool kids who don't cross the river, in the case of the lion. Another example: bids to hunt stag.
The transition from 3 to 4 is analogous to the transition from 1 to 2:

4. Optimization over the listener's belief that the speaker is communicating on simulacrum level 3.
Like with the jump from 1 to 2, the jump from 3 to 4 has the quality of bullshit, not necessarily lies. Speaker intent matters here.
I've been working on applying the anti-infohazard to the "infohazards" I know.