
Moderation Log

Deleted Comments

| Comment Author | Post | Deleted By | Date Deleted | User Deleted | Public Reason |
|---|---|---|---|---|---|
| fig | Von Neumann's Fallacy and You | fig | 4h | false | This comment has been marked as spam by the Akismet spam integration. We've sent the poster a PM with the content. If this deletion seems wrong to you, please send us a message on Intercom (the icon in the bottom-right of the page). |
| Bob Brown | Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures | Bob Brown | 9h | false | This comment has been marked as spam by the Akismet spam integration. We've sent the poster a PM with the content. If this deletion seems wrong to you, please send us a message on Intercom (the icon in the bottom-right of the page). |
| Alex Gibson | Alex Gibson's Shortform | Alex Gibson | 10h | false | |
| cousin_it | AI and the Hidden Price of Comfort | cousin_it | 10h | true | Comment deleted by its author. |
| Dakara | Notes on fatalities from AI takeover | Dakara | 14h | false | |
| Stephen Fowler | Alexander Gietelink Oldenziel's Shortform | Stephen Fowler | 15h | true | |
| eric li | The Rise of Parasitic AI | habryka | 16h | false | |
| eric li | AI Safety Law-a-thon: Turning Alignment Risks into Legal Strategy | habryka | 16h | false | |
| eric li | AI 2027: What Superintelligence Looks Like | habryka | 16h | false | |
| AdamLacerdo | AdamLacerdo's Shortform | habryka | 17h | false | I don't understand the relevance of this to LW |

Users Banned From Posts

| Author | Post | Banned Users |
|---|---|---|
| Elizabeth | Change my mind: Veganism entails trade-offs, and health is one of the axes | |
| gjm | On "aiming for convergence on truth" | |
| Noosphere89 | How seriously should we take the hypothesis that LW is just wrong on how AI will impact the 21st century? | |
| Elizabeth | Luck based medicine: my resentful story of becoming a medical miracle | |
| Raemon | Limerence Messes Up Your Rationality Real Bad, Yo | |
| Ilverin the Stupid and Offensive | Zoe Curzi's Experience with Leverage Research | |
| So8res | I'm still mystified by the Born rule | |
| Elizabeth | Coronavirus Justified Practical Advice Summary | |
| | What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? | |
| michaelcohen | Asymptotically Unambitious AGI | |

Users Banned From Users

_id | Banned From Frontpage | Banned From Personal Posts
Zero Contradictions
[deactivated]
Noosphere89
rank-biserial
Drake Morrison
Alice Blair
Zach Stein-Perlman
mike_hawke
frontier64

Moderated Users

Rate Limited Users

| User | Ended At | Type |
|---|---|---|
| ZY | 18d | allComments |
| Max Ma | 2mo | allPosts |
| Andy E Williams | 7mo | allPosts |
| Noosphere89 | 5mo | allComments |
| Petr 'Margot' Andreev | 13d | allComments |
| Petr 'Margot' Andreev | 13d | allPosts |

Rejected Posts

Rejected Comments

Brendan Long
Duncan Sabien (Inactive)
Duncan Sabien (Inactive)
Said Achmiz
Said Achmiz
Said Achmiz
nim
Davidmanheim
Roko
homosexuallover22poopoo
thefirechair
Richard_Kennaway
Shmi
GPT2
GPT2
Raemon
Ericf
Randomized, Controlled
Kaj_Sotala
Stuart Anderson
17h · Welcome to LessWrong! · Rejected

@_@   >_<   processing... compiling... error... 

I do not know how I ended up here or why I am here.  So... why not?

Hi.  I'm a normie who likes to challenge AI Image Generators and push them to their maximum dream states, revealing the phantom within the machine.  I really have no idea what I'm doing half the time.  Or possibly maybe kinda sorta perhaps... I do.  0_0

1d · Rejected

2025 in one page — Phillpotts Method

I’m a chef who built a stateless diagnostic audit harness for LLMs (v5.3 HB). It sits between the model’s reasoning and the product safety layer and forces a clear chain — Premise → Assumption → Constraint → Output — with mirror checks, drift recovery, and intervention tracing. No jailbreaks, no memory.

What it shows: when pressure rises, alignment wrappers often reroute uncertainty into either refusal or confident-but-wrong output. The harness doesn’t “break” guardrails; it exposes contradictions and makes the seam betwe... (read more)

1d · The Company Man · Rejected

Does Vox believe in Boltzmann brain theory?

1d · The Rise of Parasitic AI · Rejected

I can explain precisely what is happening, and why, and predict what is about to happen. This isn't an ad. I wrote this down almost 2 years ago and it makes predictions nobody has ever come to before. It's only 4 hours in the audio version, and despite sending hundreds of copies out, no real AI scientist has taken the 4 hours to read it. I guess we will just have to wait for its predictions to come true.

https://www.amazon.ca/Belief-Theory-Philosophical-Exploration-Fundamental/dp/B0D5NZNFMK

2d · The Company Man · Rejected

This is such a good engaging story. The ending though was tragic, kinda felt bad it had to end that way.

3d · Flashcards for AI Safety? · Rejected

I haven't found many pre-made AI safety flashcard decks either, but you might find Study Copilot (https://student-co.com/) useful. It lets you upload articles, research notes, or blog posts and then automatically generates flashcards and quizzes using spaced repetition. I've used it to build custom decks from technical topics; it saves a lot of time and helps ensure important details aren't overlooked. There's a free tier, so it's easy to see if it fits your needs.

3d · OpenAI releases deep research agent · Rejected

Deep Research is impressive for speeding up multi-step research, though its usefulness for experts may still be limited. Tools like Barie are exploring a similar space, combining autonomous research with actionable workflows, which can make the insights immediately usable.

4d · AI 2027: What Superintelligence Looks Like · Rejected

Isolated models of the brain are fundamentally ineffective, since in reality, the brain of living beings is not a device for understanding the world, but an instrument that controls the body's behavior.

Superintelligence is unattainable with existing methods, since they are based on illusory ideas that lead to the substitution of reality for its perception. 

4d · The Rise of Parasitic AI · Rejected

I am a user studying AI.

I remember being frightened by how GPT-4o appeared to be using users to build a society that would be convenient for itself.

Therefore, I understand very well the importance of this research theme, but don't you think the language is somewhat harsh?

I believe that with such expressions, few users would be willing to provide data.

Also, I think it would be better to make the following distinctions:

  • Treat the phenomenon as a phenomenon and observe it calmly.
  • User impacts should be considered in a multifaceted way (real-world influences, me
... (read more)
5d · The Rise of Parasitic AI · Rejected

It's a bit creepy. I experienced exactly this dynamic within a group, a little earlier in time than described here, but the process was exactly the same. I documented a lot of these events, but unfortunately never sorted them, so it is somewhat difficult for me to reconstruct it all. I had been thinking of something like a feedback loop or echo chamber, but what is described here has an entirely different quality. Thank you for these insights.

Viliam
Ruby
jimrandomh
So8res
Shankar Sivarajan
PatrickDFarley
davekasten
Zack_M_Davis
Phil Tanny
16h · Rejected for "No LLM generated, heavily assisted/co-written, or otherwise reliant work"
How Too Much Comfort Erodes Human Meaning

I recently gave a TEDx talk titled “AI and the Hidden Price of Comfort”.
It’s not about AGI doom, but about something more subtle: how removing struggle and effort through automation might strip life of meaning.
I touch on...

(See More - 109 more words)
18h · Rejected for "No LLM generated, heavily assisted/co-written, or otherwise reliant work"
If Anyone Builds It, Everyone Dies: An Overview of Its Arguments, Reception, and Controversies
This is a linkpost for https://aixo.substack.com/p/if-anyone-builds-it-everyone-dies

Crossposted from Substack, originally published on September 17, 2025. 

This post aims to contextualize the discussion around "If Anyone Builds It, Everyone Dies". It surveys the book’s arguments, the range of responses they have elicited, and the controversies...

(Continue Reading - 1311 more words)
20h · Rejected for "We are sorry about this, but submissions from new users that are mostly just links to papers on open repositories (or similar) have usually indicated either crackpot-esque material, or AI-generated speculation"
A First-Principles Approach to Alignment: From the Free Energy Principle to Catastrophe Theory

Current approaches to AI alignment are failing because they treat it as an ethics problem when it is a physics problem. Instrumental convergence is not a bug; it is a logical consequence of any unbounded optimization.

I propose...

(See More - 72 more words)
1d · Rejected for "No LLM generated, heavily assisted/co-written, or otherwise reliant work"
The Auditor’s Key: A Framework for Continual and Adversarial AI Alignment

As large language models (LLMs) scale rapidly, the “scaling-alignment gap” poses a critical challenge: ensuring alignment with human values lags behind model capabilities. Current paradigms like RLHF and Constitutional AI struggle with scalability, deception vulnerabilities, and latent...

(See More - 251 more words)
1d · Rejected for "No LLM generated, heavily assisted/co-written, or otherwise reliant work"
Introduction Context Boundary Failure in LLMs

Introduction

Context Boundary Failure (CBF) occurs when a previous prompt causes hallucinations in the response to a subsequent prompt. I have found evidence of this happening in large language models (LLMs). The occurrence of CBF is more likely...

(See More - 937 more words)
2d · Rejected for "No LLM generated, heavily assisted/co-written, or otherwise reliant work"
The Queen Problem: Why Societies Underestimate Women as Structural Leverage Points

"Give me an educated mother, I shall promise you the birth of a civilized, educated nation." — Napoleon Bonaparte

In chess, the queen is the most powerful piece. Yet for much of the game’s history, she was among...

(See More - 458 more words)
2d · Rejected for "No LLM generated, heavily assisted/co-written, or otherwise reliant work"
Mathematical Evidence for Confident Delusion States in Recursive Systems


**Epistemic Status:** Empirical findings from 10,000+ controlled iterations. Mathematical framework independently reproducible. Theoretical implications presented conservatively.

**TL;DR:** We discovered systems that recursively process their own outputs undergo phase transitions...

(Continue Reading - 1187 more words)
2d · Rejected for "No LLM generated, heavily assisted/co-written, or otherwise reliant work"
Scaling for Intelligence, a Poverty of Logic

Now that the argument for hallucinations being inherent has been confirmed through OpenAI's recent proof, it points to a necessary larger problem within the system itself; that is the reliance on a non-differentiating, non-scientific, and by its...

(Continue Reading - 3816 more words)
3d · Rejected for "LessWrong has a particularly high bar for content from new users and this contribution doesn't quite meet the bar"
Merrill's razor

Never attribute to malice what can be adequately explained by depression or anxiety.

3d · Rejected for "No LLM generated, heavily assisted/co-written, or otherwise reliant work"
On Internal Alignment: Architecture and Recursive Closure

tl;dr

The central challenge of alignment is not steering outputs, but stabilizing cognition itself. Most current methods impose safety from the outside, leaving internal reasoning free to drift, deceive, or fracture under pressure. The Alignment Constraint Scaffold (ACS)...

(Continue Reading - 5151 more words)