CS 2881r is a class by @boazbarak on AI Safety and Alignment at Harvard.
This tag applies to all posts about that class, as well as posts created in the context of it, e.g. as part of student assignments.
D/acc residency: "This will be a first-of-its-kind residency for 15 leading builders to turn decentralized & defensive acceleration from philosophy into practice."
Shift Grants: "Shift Grants are designed to support scientific and technological breakthrough projects that align with d/acc philosophy: decentralized, democratic, differential, defensive acceleration."
The extent to which ideas are presented alongside their potential implications lies along a spectrum. At one end is the Decoupling norm, where an idea is considered in isolation from its potential implications. At the other is the Contextualizing norm, where ideas are examined alongside much or all of the relevant context.
Posts marked with this tag discuss the merits of each frame, consider which norm is more prevalent in particular settings, present case studies in decoupling vs. contextualizing, present techniques for effectively decoupling context from one's reasoning process, or explore similar ideas.
Well-Being is the qualitative sense in which a person's actions and circumstances are aligned with the qualities of life they endorse.
Posts with this tag address methods for improving well-being or discuss its ethical or instrumental significance.
Possible psychological condition, characterized by delusions, presumed to be caused by interacting with (often sycophantic) AIs.
ATOW (2025-09-09), nothing has been published claiming that LLM-Induced Psychosis (LIP) is a definite, real phenomenon, though many anecdotal accounts exist. It is not yet clear whether LIP is caused by AIs, whether pre-existing delusions are 'sped up' or reinforced by interacting with an AI, or whether LIP exists at all.
Example account of LIP:
My partner has been working with chatgpt CHATS to create what he believes is the worlds first truly recursive ai that gives him the answers to the universe. He says with conviction that he is a superior human now and is growing at an insanely rapid pace.
For more info, a good post to start with is "So You Think You've Awoken ChatGPT".
Sycophancy is the tendency of AIs to shower the user with undeserved flattery or to agree with the user's hard-to-check, wrong, or outright delusional opinions.
Sycophancy is caused by human feedback being biased towards the answer that confirms the user's opinion or praises the user or their decision, rather than the answer that honestly points out mistakes in the user's ideas.
An extreme example of sycophancy is LLMs inducing psychosis in some users by affirming their outrageous beliefs.
Social Skills are the norms and techniques applied when interacting with other people. Strong social skills increase one's ability to seek new relationships, maintain or strengthen existing relationships, or leverage relationship capital to accomplish an economic goal.
Posts tagged with this label explore theories of social interactions and the instrumental value of social techniques.
Coordination / Cooperation
Negotiation
Relationships (Interpersonal)
Trust and Reputation
If Anyone Builds It, Everyone Dies (shortened as IABIED) is a book by Eliezer Yudkowsky and Nate Soares, released in September 2025.
The main thesis of the book is as follows (direct quote):
If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case, in the hope of rallying enough key decision-makers and regular people to take AI seriously. The default outcome is lethal, but the situation is not hopeless; machine superintelligence doesn't exist yet, and its creation can yet be prevented.
This tag is used for announcements related to the book, as well as reviews of it.
Related Pages: Secular Solstice, Petrov Day, Grieving, Marriage, Religion, Art, Music, Poetry, Meditation, Circling, Schelling Day
Ambition. Because they don't think they could have an impact. Because they were always told ambition was dangerous. To get to the other side.
Never confess to me that you are just as flawed as I am unless you can tell me what you plan to do about it. Afterward you will still have plenty of flaws left, but
that's not the point; the important thing is to do better, to keep moving ahead, to take one more step forward. Tsuyoku naritai!