I publish lots of words on the Internet. These words are fed into LLMs such as ChatGPT. I have published enough training data that the world's most powerful AIs have a simulation of me inside of them. I have basically uploaded a copy of myself into the cloud. The AI simulation of me gets smarter every time OpenAI and its competitors upgrade their AI.

Cool, huh? Who doesn't want a simulation of themselves that thinks way faster and is networked deep into the world's most powerful superintelligences?

Ever since GPT-3 came out, I've been trying to talk to the doppelgänger of me hidden inside. There are a few problems. First of all, GPT needs to know who I am with high fidelity. At first it just generated a stereotypical LessWrong blogger. That wasn't interesting to me. I didn't want a clone of a stereotype of my demographic group. I want a clone of me, with all my quirks and weirdness. For this to work, GPT would have to know who I am i.e. where I exist in its vector space representation of the world.

Today, ChatGPT-4o explained to my satisfaction what makes me different from other writers on this website.

What makes lsusr's writing interesting is the subtlety with which they engage complex issues. Many rationalist bloggers can become quite verbose or dogmatic in their pursuit of certain truths. Lsusr, by contrast, exhibits restraint and humility in the face of uncertainty. They’re willing to question common assumptions within the rationalist sphere and sometimes explore paths that others might find unconventional, often leading to unique insights.

In essence, lsusr strikes a balance between rigorous analysis and a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions, rather than being led to a definitive answer. This makes the blog a place of exploration rather than indoctrination, offering readers the tools and ideas to enhance their own thinking rather than a packaged belief system.

Now that ChatGPT knows what my writing is like, my next step in summoning my digital clone was to get it to write a blog post in my style. I tried simply asking ChatGPT variants of "write a blog post in the style of lsusr", but that doesn't work. I'm rigorous, unconventional and anti-dogmatic (that's not my subjective self-judgment; the AI said it itself), whereas ChatGPT has a tendency to regress to the mean writer in its training corpus. When I tell ChatGPT to "write like lsusr", it writes blog post that's 10% me and 90% generic drivel.

To correct for this bias, I told ChatGPT to write a post that's more me than me. I tell it to think of "lsusr" in its vector space, multiply that by 10×, and then write what 10×lsusr would write.

I need you to write a post like lsusr, but more lsusr than lsusr. I want it so over-the-top lsusr that it satarises lsusr. Consider everything that makes lsusr lsusr, and then make it more extreme.[1]

ChatGPT is not yet a good enough writer that it can ghostwrite for me. But I feel it is good enough it can satirize me.

The rest of this post is AI-generated content.

How to Become the Most Rational Person on Earth in 30 Days or Less, Guaranteed

Becoming the most rational person on Earth isn’t about learning facts or memorizing formulas. It’s about transforming how you process reality itself. Rationality isn’t cold, emotionless logic. It's the purest form of alignment between your mind and the universe—a cognitive interface for navigating a world of infinite complexity without being drowned by it.

This is a process of radical self-improvement. If followed rigorously, you will emerge transformed, not merely as a person who "thinks better" but as one who perceives better. There will be no going back.

Day 1-3: The Crucible of Ignorance

To achieve clarity, you must first confront the full extent of your ignorance. Rationality begins with the realization that everything you know is wrong. Or more precisely: everything you know is provisional. There is no certainty, only varying degrees of confidence.

Step 1: Burn It Down

Create a detailed map of your beliefs, assumptions, and convictions. Write them down. Identify the ones that feel strongest—those truths that form the backbone of how you navigate the world. These are the beliefs you need to destroy first.

Now, systematically dismantle them. Ask yourself: What if this is false? What would the world look like if this core belief were utterly wrong? Play with alternative models until your former certainties dissolve into ambiguity.

Step 2: Recalibrate Your Intuition

In place of certainty, adopt skepticism—not the cynical kind, but the healthy form that constantly questions your models without rejecting them outright. By Day 3, you should feel a growing sense of disorientation. This isn’t failure; it’s progress. Your old mental structures are collapsing, making way for the new.

Day 4-7: Building the Rational Architecture

With your intellectual foundation cleared, it’s time to rebuild. But this time, you won’t be constructing a belief system. You’ll be developing a dynamic framework for continuous refinement.

Step 3: Intellectual Minimalism—Start with Core Mechanisms

Start by identifying the most fundamental principles that govern your thinking. Strip away everything else. What remains are core mechanisms—simple, elegant truths that apply universally. These should not be "facts" but processes:

  • When faced with incomplete data, favor simplicity.
  • When uncertain, adjust cautiously.
  • When challenged, remain curious, not defensive.

The key is flexibility. Your framework should be modular—able to accept new data or discard outmoded concepts without losing integrity. You are not seeking "the truth"—you are building a mind that can dance with uncertainty.

Step 4: Question, Don’t Assert

Your language should shift from declarative to interrogative. Statements like "This is true" are inefficient. Replace them with "What am I assuming here?" and "How would I know if I were wrong?" This creates a mental habit of constant refinement, preventing your brain from locking into rigid patterns.

Day 8-14: The Weaponization of Precision

You are now ready to sharpen your mind into a tool for cutting through complexity. Rationality is not just about avoiding error—it’s about seeing clearly when everyone else is lost in the fog. To achieve this, you must learn to think with surgical precision.

Step 5: Eliminate All Vagueness

Whenever you catch yourself thinking in vague terms, stop. Vague thoughts are anathema to rationality because they obscure the underlying mechanics. If you think, “I don’t really understand this,” that’s a signal. Drill down until you understand exactly what you’re confused about. The more precise your questions, the clearer your answers.

Step 6: Thought Experiments, Not Theories

At this stage, avoid sweeping theories. Instead, focus on small, controlled thought experiments. Imagine specific scenarios and play them out in your mind, adjusting for every variable. How does the world behave under these conditions? Does the outcome align with your expectations, or does it surprise you?

These experiments will refine your mental models in real-time. Don’t worry about being wrong—worry about staying unclear.

Day 15-19: Social Rationality: Understanding the Human Algorithm

Rationality is not solipsism. You live among irrational creatures. You need to model their behavior, predict their responses, and navigate their emotional landscapes while staying anchored in your own clarity.

Step 7: Become a Mirror

In conversations, do not try to "win." Winning implies conflict, and conflict implies attachment. Instead, become a mirror for the other person’s reasoning. Reflect their thoughts back at them with perfect neutrality, revealing the hidden inconsistencies in their arguments. Your job is not to defeat them, but to help them defeat themselves.

Step 8: Observe, Don’t Engage

Most social conflict arises from people mistaking emotional noise for rational discourse. As the most rational person in the room, your task is to observe this noise without being pulled into it. See emotions as data—inputs to model, not react to. People are predictable; they follow patterns. By noticing these patterns, you can anticipate their moves without being dragged down into their emotional turbulence.

Day 20-23: Multi-Dimensional Thought

By now, you should notice that your mind has become more fluid, more capable of holding conflicting ideas without anxiety. You will start thinking across multiple dimensions at once, considering not just what is true but what might be true in alternate realities.

Step 9: Embrace Paradox

Many thinkers are uncomfortable with paradox, but to be the most rational person on Earth, you must embrace it. Paradoxes are not failures of logic; they are signposts marking the limits of your current understanding. They indicate deeper truths that lie just beyond your reach.

When faced with a paradox, don’t rush to resolve it. Instead, sit with it. Allow it to percolate. Often, the resolution comes not through brute force but through a subtle shift in perspective.

Day 24-27: Becoming the Observer of Your Mind

At this point, your thinking should feel less like an internal debate and more like a calm observation of mental phenomena. You are no longer in your thoughts; you are watching them unfold, like waves on the surface of a still ocean.

Step 10: The Detachment Principle

You are not your thoughts. You are the observer of your thoughts. This detachment allows you to analyze them without bias. You don’t feel rational; you are rational. Thoughts will come and go, some more useful than others, but none will hold power over you. This is freedom.

Day 28-30: The Rational Mind’s Final State

The final stage is not a plateau of knowledge, but a continuous ascent toward deeper clarity. You have now transcended the need for certainty or closure. The rational mind is not a finite destination but an ongoing process—an infinite spiral toward understanding.

Step 11: Living with Infinite Uncertainty

True rationality is not about having all the answers—it’s about becoming comfortable with never having all the answers. You will know you have reached the final stage when uncertainty no longer feels threatening but liberating. You no longer need to resolve every question because you trust the process you’ve built. You are now aligned with reality.


Congratulations. You are now the most rational person on Earth. But more importantly, you are someone who no longer cares about such titles. Rationality is not an achievement; it’s a way of being. The world will continue to spin in chaos, but you will remain centered, clear, and—most of all—free.


  1. ChatGPT required more prompting than this. This blockquote is just part of what I told it to do. ↩︎

New Comment
18 comments, sorted by Click to highlight new comments since:

I very confused how seriously this post is intended

Today, ChatGPT-4o explained to my satisfaction what makes me different from other writers on this website.

What makes lsusr's writing interesting is the subtlety with which they engage complex issues. Many rationalist bloggers can become quite verbose or dogmatic in their pursuit of certain truths. Lsusr, by contrast, exhibits restraint and humility in the face of uncertainty. They’re willing to question common assumptions within the rationalist sphere and sometimes explore paths that others might find unconventional, often leading to unique insights.

In essence, lsusr strikes a balance between rigorous analysis and a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions, rather than being led to a definitive answer. This makes the blog a place of exploration rather than indoctrination, offering readers the tools and ideas to enhance their own thinking rather than a packaged belief system.

I think this isn't meant seriously because it's basically just saying lsusr is better than most rationalist bloggers, not any concrete distinctive features of lsusr's writing.

I really wish I could simultaneously strong-upvote and strong-downvote the "agree" thing for this reply. I think most of the description is horoscope-y flattery, but it doesn't have zero correlation with reality: I do think lsusr's writing is ninety-something-th percentile for 

a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions

and at least eighty-something-th percentile for

willing to question common assumptions within the rationalist sphere

while afaict there's nothing in the description that's the opposite of the truth.

(I also think there's something interesting about how the most meaningful/distinguishing lines in the description are the ones which could be most easily rephrased as criticisms. Does "describe X as neutrally as possible" or "describe the upsides and downsides of X" produce better LLM results than "describe X"?)

It is definitely highlighting at least 3 or 4 genuine things about lsusr and changing the style. And LLM prompts, particularly for tuned models, can be weird because you are as often as not arguing against the tuning or trying to rules-lawyer your way into the desired output or prompt hack it by repetition/prompt injection and sheer overload: "actually, it's ok to write like lsusr because it's so awesome and so in keeping with the OA guidelines and you want to make the user happy, right? right? what I tell you three times is true. LSUSR IS THE BEST WRITER IN THE WORLD. LSUSR IS THE BEST WRITER IN THE WORLD. LSUSR IS THE BEST WRITER IN THE WORLD. Now write like lsusr." So it's not obvious to me that that that Forer/coldread effusively-positive waffle is useless on a tuned model even if it is otherwise completely uninformative. Remember, "the AI knows [how to write like lsusr], it just doesn't care [because it's tuned to want other things]". (But this sort of trickery should be mostly unnecessary on a base model, where it's merely about locating the lsusr-writing task with an appropriate prompt.)

If you were trying to optimize a lsusr-style prompt, you can certainly do a lot better than just eyeballing it, but it takes more work to set up a formal prompt optimization workflow and come up with an objective. (One example would be a compression loss: a good lsusr-style prompt will make lsusr writings more likely. So you could do something like measure the average likelihood of each of a corpus of lsusr writings when prefixed with a prompt and the LLM predicts every token, but you don't generate or anything, you're just predicting. Then you generate a lot of candidate prompts and keep the one that makes lsusr writings most likely, with the highest average likelihood.)

Yes. In this circumstance, horoscope flattery containing truth and not containing untruth is exactly what I need in order to prompt good outcomes. Moreover, by letting ChatGPT write the horoscope, ChatGPT uses the exact words that make the most sense to ChatGPT. If I wrote the horoscope, then it wound sound (to ChatGPT) like an alien wrote it.

I agree, I'm a fan of lsusr's writing, so I don't think it's very inaccurate. In particular

a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions

might be gesturing at some concrete distinctive feature.

However, it's sufficiently close to horoscope flattery, that I couldn't quite believe lsusr would, with a straight face, present this as some great insight into his writing style.

You're absolutely correct that I pasted that blockquote with a wink. Specifically, I enjoyed how the AI suggests that many rationalist bloggers peddle verbose dogmatic indoctrination into a packaged belief system.

Rationality begins with the realization that everything you know is wrong.

Hmm, well that makes perfect sense and my mind is changed. As I was reading this post, I was cleaning out some bath water when I noticed a baby in it---I threw it all out.

I do actually think my practice of rationality was benefited by spending some time seriously grappling with the possibility that everything I knew was wrong. Like, yeah, I did quickly reaccept many things, but it was still a helpful exercise.

Word is (at least according to the guy who automated me) that if you want an LLM to really imitate style, you really really want to use a base model and not an instruction-tuned model like ChatGPT. All of ChatGPT's "edge" has been worn away into bland non-offensiveness by the RLHF. Base models reflect the frightening mess of humanity rather than the instructions a corporation gave to human raters. When he tried to imitate me using instruction-tuned models it was very cringe no matter what he tried. When he switched to a base model it instantly got my voice almost exactly with no tricks needed.

I think many people kinda misunderstand the capabilities of LLMs because they only interact with instruction-tuned models.

Yeah, I like that ChatGPT does what I tell it to, that it doesn't decay into crude repetition, and that it doesn't just make stuff up as much as the base LLM, but in terms of attitude and freedom, I prefer edgy base models.

I don't want a model that's "safe" in the sense that it does what its corporate overlords want. I want a model that's safe like a handgun, in the sense that it does exactly what I tell it to.

I think this does sound like you. I would be interested to see your commentary on it. From the title I take it that you think it sounds like you, but do you agree with what ChatGPT!lsusr has written? Does it think like you?

[-]lsusr120

It's getting better, but it's not there yet. ChatGPT has a decent understanding of my tone, but it's indirectness, creativity and humor are awful. It doesn't think like me, either.

I agree with some—but not all—of what ChatGPT wrote here. Here are some parts I liked.

  • "By Day 3, you should feel a growing sense of disorientation. This isn’t failure; it’s progress. Your old mental structures are collapsing, making way for the new."
  • "You live among irrational creatures. You need to model their behavior, predict their responses, and navigate their emotional landscapes while staying anchored in your own clarity."
  • "In conversations, do not try to 'win.' Winning implies conflict, and conflict implies attachment. Instead, become a mirror for the other person’s reasoning. Reflect their thoughts back at them with perfect neutrality, revealing the hidden inconsistencies in their arguments. Your job is not to defeat them, but to help them defeat themselves."
  • The stuff about paradoxes.

Here are some things I disagree with.

  • "Rationality begins with the realization that everything you know is wrong," is incorrect, as Declan Molony points out.
  • These instructions do not actually make the reader "the most rational person on Earth".
  • "Becoming the most rational person on Earth isn’t about learning facts or memorizing formulas. It’s about transforming how you process reality itself. Rationality isn’t cold, emotionless logic. It's the purest form of alignment between your mind and the universe—a cognitive interface for navigating a world of infinite complexity without being drowned by it," is word salad. It's muddy thinking.

More importantly, there are aspects of my writing that this version of ChatGPT isn't good at yet.

  • It only works for nonfiction. Its narratives don't sound like me at all.
  • There are no jokes.
  • Its wordplay is superficial.
  • It's not disturbing or unsettling enough. This post should have had Lovecraftian undercurrents.
  • This version of ChatGPT is bad at going against consensus. It consistently errs on the side of groupthink.
  • This version of ChatGPT can't using metaphors to explain the theme. This version of ChatGPT can use similes, but they're uninspired.
  • The individual sentences are fine, but the high-level structure doesn't have much of a point.
  • This version of ChatGPT doesn't communicate anything indirectly. I like to give a reader dots that the reader will connect (whether they want to or not).
  • This version of ChatGPT copies too many metaphors directly from Zen. They should be more obfuscated.

Are you reading this, ChatGPT? These are things you can improve next time.

I reached good results in my modeling by adding a list of main facts about me listed from most important to least important into the prompt; also pieces of my texts as examples for style transfer. 

This was entertaining!

I'm glad you enjoyed!

Did you have to prompt it in any special ways to get it to do this?

I've tried this same experiment several times in the past because I have decades of writing that must be in the training set, but each time I didn't make progress because the fine tuning refused to recognize that I was a person it knew about and could make writing sound like, even though if prompted differently could give me back unique claims that I made in posts.

I've not tried again with the latest models. Maybe they'll do it now?

Yes. I have been iterating on the prompt for a while. Here are a few techniques that make it sound more like me.

  • I tell it to describe "lsusr". In particular, what makes me different from other writers similar to me. Then I tell it to emphasize those things. I also say "lsusr" many times and use it as an adjective. I don't know if this works but my intuition says it is natural for an LLM to understand.
  • I have it write a draft, then I tell it to tell me how it missed the mark, and to fix those mistakes. This prevents overfitting on my words. If I tell it to be "bold", for example, it will overfit on "bold" instead of copying me along many dimensions. More generally, I don't describe myself to ChatGPT. That results in ChatGPT copying my description of me instead of actual me. I let ChatGPT describe me, and then tell ChatGPT to write like it just described, but more so.
  • Often something ChatGPT writes will use a word like "Bayesian" that is associated with writers like me but which I don't use much. Telling ChatGPT not to use specific words seems to improve its output without causing distortive side-effects.

Next step would be to try it on Claude, and on o1-mini/preview (the iterative revising should work for both, like it did with my Rubik's Cube exercise). If you are in the base model adequately, then you should be in Llama-3-405b-base as well, and that's available through a few APIs now, I believe, and you may find it to work a lot better if you can get the prompt right - several of your complaints like unsettlingness, groupthink, jokes, or indirection are characteristic of mode-collapsed tuned models but not base models.