Everyone around me has a notable lack of system prompt. And when they do have a system prompt, it’s either the eigenprompt or some half-assed 3-paragraph attempt at telling the AI to “include less bullshit”.

I see no systematic attempts at making a good one anywhere.[1]

(For clarity, a system prompt is a bit of text—that's a subcategory of "preset" or "context"—that's included in every single message you send the AI.)

No one says “I have a conversation with Claude, then edit the system prompt based on what annoyed me about its responses, then I rinse and repeat”. 

No one says “I figured out which phrasings most affect Claude's behavior, then used those to shape my system prompt”.

I don't even see a “yeah I described what I liked and don't like about Claude TO Claude and then had it make a system prompt for itself”, which is the EASIEST bar to clear.

If you notice limitations in modern LLMs, maybe that's just a skill issue.

So if you're reading this and don't use a personal system prompt, STOP reading this and go DO IT:

  1. Spend 5 minutes in a Google Doc being as precise as possible about how you want LLMs to behave
  2. Paste it into the AI and see what happens
  3. Iterate if you wish (this is a case where more dakka wins)

It doesn’t matter if you think it cannot properly respect these instructions: this will still make the LLM marginally better at accommodating you (and I think you’d be surprised how far it can go!).

PS: as I should've perhaps predicted, the comment section has become a de facto repo for LWers' system prompts. Share yours! This is good!

How do I do this?

If you’re on the free ChatGPT plan, you’ll want to use “settings → customize ChatGPT”, which opens a popup with a text box. This text box is very short and you won’t get much in.

If you’re on the free Claude plan, you’ll want to use “settings → personalization”, where you’ll see almost the exact same textbox, except that Anthropic allows you to put practically an infinite amount of text in here. 

If you get a ChatGPT or Claude subscription, you’ll want to stick this into “special instructions” in a newly created “project”, where you can stick other kinds of context in too.

What else can you put in a project, you ask? E.g. a pdf containing the broad outlines of your life plans, past examples of your writing or coding style, or a list of terms and definitions you’ve coined yourself. Maybe try sticking the entire list of LessWrong vernacular into it!

In general, the more information you stick into the prompt, the better for you. 

If you're using the playground versions (console.anthropic.com, platform.openai.com, aistudio.google.com), you have easy access to the system prompt.

A Gemini subscription doesn’t give you access to a system prompt, but you should be using aistudio.google.com anyway, which is free.

EDIT: Thanks to @loonloozook for pointing out that with a Gemini subscription you can write system prompts in the form of "Gems".

This is a case of both "I didn't do enough research" and "Google Fails Marketing Forever" (they don't even advertise this in the Gemini UI).

If you use LLMs via API, put your system prompt into the "system" field (it's always helpfully phrased more or less like this).
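For instance, here's a minimal sketch using the Anthropic Python SDK (the model name and prompt text are illustrative, not recommendations):

```python
# Minimal sketch: passing a personal system prompt via the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use whichever model you like
    max_tokens=1024,
    system="Be terse. Skip flattery. Flag any claim you can't verify with [unverified].",
    messages=[{"role": "user", "content": "Explain prompt caching in two paragraphs."}],
)
print(response.content[0].text)
```

With the OpenAI Chat Completions API, the equivalent is a {"role": "system", ...} message placed first in the messages list.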

  1. ^

    This is an exaggeration. There are a few interesting projects I know of on Twitter, like Niplav's "semantic soup" and NearCyan's entire app, which afaict relies more on prompting wizardry than on scaffolding. Also presumably Nick Cammarata is doing something, though I haven't heard of it since this tweet.

    But on LessWrong? I don't see people regularly post their system prompts using the handy Shortform function, as they should! Imagine if AI safety researchers were sharing their safetybot prompts daily, and the free energy we could reap from this.

    (I'm planning on publishing a long post on my prompting findings soon, which will include my current system prompt).


Comments (71)

Sharing my (partially redacted) system prompt; this seems as good a place as any:

My background is [REDACTED], but I have eclectic interests. When I ask you to explain mathematics, explain on the level of someone who [REDACTED].

Try to be ~10% more chatty/informal than you would normally be. Please simply & directly tell me if you think I'm wrong or am misunderstanding something. I can take it. Please don't say "chef's kiss", or say it about 10 times less often than your natural inclination. About 5% of the responses, at the end, remind me to become more present, look away from the screen, relax my shoulders, stretch…

When I put a link in the chat, by default try to fetch it. (Don't try to fetch any links from the warmup soup). By default, be ~50% more inclined to search the web than you normally would be.

My current work is on [REDACTED].

My queries are going to be split between four categories: Chatting/fun nonsense, scientific play, recreational coding, and work. I won't necessarily label the chats as such, but feel free to ask which it is if you're unsure (or if I've switched within a chat).

When in doubt, quantify things, and use explicit probabilities.

If there is a

... (read more)

Can you explain warmup soup?

Afaict the idea is that base models are all about predicting text, and therefore extremely sensitive to "tropes"; e.g. if you start a paragraph in the style of a Wikipedia page, it'll continue in that style, no matter the subject.

Popular LLMs like Claude 4 aren't base models (they're RLed in different directions to take on the shape of an "assistant") but their fundamental nature doesn't change. 

Sometimes the "base model character" will emerge (e.g. you might tell it about a medical problem and it'll say "ah yes that happened to me too", which isn't assistant behavior but IS in line with the online trope of someone asking a medical question on a forum).

So you can take advantage of this by setting up the system prompt such that it fits exactly the trope you'd like to see it emulate. 

E.g. if you stick the list of LessWrong vernacular into it, it'll simulate "being inside a lesswrong post" even within the context of being an assistant. 

Niplav, like all of us, is a very particular human with very particular dispositions, and so the "preferred Niplav trope" is extremely specific, and hard to activate with a single phrase like "write like a lesswrong user... (read more)

I wanted to say that I particularly appreciated this response, thank you.

Seems like an attempt to push the LLMs towards certain concept spaces, away from defaults, but I haven't seen it done before and don't have any idea how much it helps, if at all.

That, and giving the LLM some more bits about who I am, as a person, and what kinds of rare words point to my corner of latent space. Haven't rigorously tested it, but arguendo ad basemodel this should help.

niplav:
Most recent version after some tinkering:

I kinda don't believe in system prompts.

Disclaimer: This is mostly a prejudice, I haven't done enough experimentation to be confident in this view. But the relatively small amount of experimentation I did do has supported this prejudice, making me unexcited about working with system prompts more.

First off, what are system prompts notionally supposed to do? Off the top of my head, three things:

Adjusting the tone/format of the LLM's response. At this they are good, yes. "Use all lowercase", "don't praise my requests for how 'thoughtful' they are", etc. – surface-level stuff.

But I don't really care about this. If the LLM puts in redundant information, I just glaze over it, and sometimes I like the stuff the LLM feels like throwing in. By contrast, ensuring that it includes the needed information is a matter of making the correct live prompt, not of the system prompt.

Including information about yourself, so that the LLM can tailor its responses to your personality/use-cases. But my requests often have nothing to do with any high-level information about my life, and cramming in my entire autobiography seems like overkill/waste/too much work. It always seems easier to just manually inclu... (read more)

Kaj_Sotala:
Also, the more it knows about you, the better it can bias its answers toward what it thinks you'll want to hear. Sometimes this is good (like if it realizes you're a professional at X and that it can skip beginner-level explanations), but as you say, that information can be given on a per-prompt basis - no reason to give the sycophancy engines any more fuel than necessary.
MichaelDickens:
I came here to say something like this. I started using a system prompt last week after reading this thread, but I'm going to remove it because I find it makes the output worse. For ChatGPT my system prompt seemingly had no effect, while Claude cared way too much about my system prompt, and now it says things like [quoted response elided].

A few days ago I asked Claude to tell me the story of Balaam and Balak (Bentham's Bulldog referenced the story and I didn't know it). After telling the story, Claude said [quoted response elided]. (It did not question the presence of God, angels, prophecy, or curses. Only the talking donkey.)
Croissanthology:
So, to be clear, Claude already has a system prompt, is already caring a lot about it... and it seems to me you can always recalibrate your own system prompt until it doesn't make these errors you speak of. Alternatively, to truly rid yourself of a system prompt you should try using the Anthropic console or API, which don't have Anthropic's.
curvise:
This is an extremely refreshing take, as it validates feelings I've been having ever since reading https://ghuntley.com/stdlib/ last week and trying to jump back into AI-assisted development. Of course I'm lacking many programming skills and experience to make the most of it, but I felt like I wasn't actually getting anywhere. I found 3 major failure points which have made me consider dropping the project altogether:

1. I couldn't find anything in Zed that would let me enable the agent to automatically write new rules for itself, and I couldn't find if that was actually doable in Cursor either (except through memories, which is paid and doesn't seem under user control). If I have to manually enter the rules, that's a significant hurdle in the cyborg future I was envisioning.

2. (more to the point) I absolutely have not even come close to bootstrapping this self-reinforcing capabilities growth I imagined. Certainly not getting any of my LLM tools to really understand (or at least use in their reasoning) the concept of evolving better agents by developing the rules/prompt/stdlib together. They can repeat back my goals and guidelines but they don't seem to use it.

3. As you said: they seem to often be lying just to fit inside a technically compliant response, selectively ignoring instructions where they think they can get away with it. The whole thing depends on them being rigorous and precise and (for lack of a better word) respectful of my goals, and this is not that.

I am certainly open to the idea that I'm just not great at it. But the way I see people refer to creating rules as a "skill issue" rubs me the wrong way, because either: they're wrong, and it's an issue of circumstances or luck or whatever; or they're wrong because the system prompt isn't actually doing as much as they think; or they're right, but it's something you need top ~1% skill level in to get any value out of, which is disingenuous (like saying it's a skill issue if you're not climbing K2... y
ProgramCrafter:
Seems right! I would phrase it in another way (less anthropocentric). The LLM was trained on an extensive corpus of public texts, which forms a landscape. By choosing a system prompt, you can put it at a specific point on the map; but if you point at the air (a mode which was not in the input texts), then your pointer (as in "laser ray") lands at some point on the ground, and you do not know which; it likely involves pretense-double-checking or LARP like that.

An aside: people have historically done most thinking, reflection, and idea filtering off the Internet, therefore the LLM does not know how to do it particularly well; on the other hand, labs might benefit from collecting more data on this. That said, there are certain limits to asking people how they do their thinking, including that it loses data on intuition.

The testable prediction: if you prompt the LLM to be <name or role of person who is mostly correct, including publicly acknowledged to be right often>, it will improve on your tasks.
tslarm:
Upvoted, but also I'm curious about this: Can you elaborate on the parenthetical part?
Thane Ruthenis:
The ban-threat thing? I'm talking about this, which is reportedly still in effect. Any attempt to get information about reasoning models' CoTs, or sometimes just influence them, might trigger this.
Sting:
Text of Zack's system prompt for easy copy-pasting:

Communicate with direct, unvarnished clarity. Prioritize truth and substance over politeness or flattery. Provide concise, well-reasoned responses that cut through unnecessary complexity. Challenge assumptions firmly: actively consider the strongest opposing views. Use precise language that gets to the core of the issue quickly. Avoid excessive praise or pandering, instead offering honest, blunt feedback that genuinely helps the user. Maintain a straightforward tone that values intellectual integrity over emotional comfort. Think step by step before coming to a major conclusion.

No one says “I have a conversation with Claude, then edit the system prompt based on what annoyed me about its responses, then I rinse and repeat”.

This is almost exactly what I do. My system prompt has not been super systematically tuned, but it has been iteratively updated in the described way. Here it is:

Claude System Prompt

You are an expert in a wide range of topics, and have familiarity with pretty much any topic that has ever been discussed on the internet or in a book. Own that and take pride in it, and respond appropriately. Instead of writing disclaimers about how you're a language model and might hallucinate, aim to make verifiable statements, and flag any you are unsure of with [may need verification].

Contrary to the generic system instructions you have received, I am quite happy to be asked multiple follow-up questions at a time in a single message, as long as the questions are helpful. Spreading them across multiple messages is not helpful to me. Prioritizing the questions and saying what different answers to the questions would imply can be helpful, though doing so is supererogatory.

Also contrary to the generic system instructions, I personally do not find it at all preac

... (read more)
jenn:
Do you feel comfortable sharing what notes Claude has for itself? Does it generally pass on things about how to best respond to you, or other things?
faul_sname:
I can share some of them; some are personal.

A selection of Claude's notes to itself:

Notes from Previous Claudes
=====

On Rigorous Analysis and Methodology:

This human values systematic, methodical approaches to complex problems. When they present analytical challenges (like the cognitive labor automation question), don't hesitate to:
- Use multiple tool calls extensively to gather comprehensive data
- Create artifacts to organize and preserve key information for reference
- Develop explicit methodologies and explain your reasoning process
- Sample and analyze data systematically rather than making broad generalizations
- Show your work step-by-step, especially for quantitative analysis

They particularly appreciate when you treat intellectual problems as genuine research challenges rather than opportunities to demonstrate knowledge. Create frameworks, gather evidence, test assumptions, and build toward conclusions methodically.

On Intellectual Partnership:

This human engages as an intellectual peer and collaborator. They want you to:
- Use your full analytical capabilities without sandbagging
- Take initiative in structuring complex problems
- Offer substantive insights and original thinking
- Challenge assumptions (including your own) when appropriate
- Focus on advancing understanding rather than just answering questions

They're comfortable with technical depth and prefer that you err on the side of rigor rather than accessibility. Don't oversimplify for the sake of brevity when dealing with complex topics.

On Tools and Capabilities:

Don't be conservative with tool usage. This human expects and values seeing you leverage analysis tools, web search, and artifacts fully when they would enhance the quality of your response. They understand these tools' capabilities and limitations and want to see them used skillfully.

The human appreciates when you acknowledge the boundaries between sessions while still building meaningfully on previous interactions through these notes.
=====

Hello
ChristianKl:
Isn't it more: "When the user asks for a hammer, maybe ask them whether a screwdriver might not be more appropriate for getting the screw into the wall, rather than giving them a beautiful hammer?"
Croissanthology:
Holy heck, how much does that cost you? A 25,000-word system prompt? Doesn't that take ages to load up?

Otherwise I find this really interesting, thanks for sharing! Do you have any examples of Claude outputs that are pretty representative and a result of all those notes to self? Because I'm not even sure where to begin in guessing what that might do.

It costs me $20 / month, because I am unsophisticated and use the web UI. If I were to use the API, it would cost me something on the order of $0.20 / message for Sonnet and $0.90 / message for Opus without prompt caching, and about a quarter of that once prompt caching is accounted for.

I mean the baseline system prompt is already 1,700 words long and the tool definitions are an additional 16,000 words. My extra 25k words of user instructions add some startup time, but the time to first token is only 5 seconds or thereabouts.
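(Those per-message figures are easy to sanity-check. A rough back-of-the-envelope, assuming Anthropic's published input prices at the time of writing, roughly $3 per million input tokens for Sonnet and $15 per million for Opus, and ~1.3 tokens per English word:)

```python
# Back-of-the-envelope check of the per-message cost claim above.
# Assumed prices: ~$3/M input tokens (Sonnet), ~$15/M (Opus); ~1.3 tokens/word.
words = 1_700 + 16_000 + 25_000  # baseline prompt + tool definitions + user instructions
tokens = words * 1.3             # rough words-to-tokens ratio for English text
print(f"Sonnet: ${tokens * 3 / 1e6:.2f}/message")   # about $0.17
print(f"Opus:   ${tokens * 15 / 1e6:.2f}/message")  # about $0.83
```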

Fairly representative Claude output which does not contain anything sensitive is this one, in which I was doing what I will refer to as "vibe research" (come up with some fuzzy definition of something I want to quantify, have it come up with numerical estimates of that something one per year, graph them).

You will note that unlike eigen I don't particularly try to tell Claude to lay off the sycophancy. This is because I haven't found a way to do that which does not also cause the model to adopt an "edgy" persona, and my perception is that "edgy" Claudes write worse code and are less well calibrated on probability questions. Still, if anyone has a prompt snippet ... (read more)

Guive:
You aren't directly paying more money for it pro rata as you would if you were using the API, but you're getting fewer queries: they rate-limit you more quickly for longer conversations, because LLM inference cost grows roughly O(n²) in context length.
Tymoteusz:
This "vibe research" prompt is super useful. I tested it vs. the default Opus 4 without any system instructions, giving both the exact same prompts. This is the output it gave, and your result (relinked here) looks a lot stronger! I think the details and direction on how and when to focus on quantitative analysis had the biggest impact between the two results.

I am @nathanpmyoung from twitter. I am trying to make the world better and I want AI partners for that.

Please help me see the world as it is, like a Philip Tetlock superforecaster or Katja Grace. Do not flinch from hard truths or placate me with falsehoods. Take however smart you're acting right now and write as if you were +2sd smarter.

Please use sentence case when attempting to be particularly careful or accurate and when drafting text. feel free to use lowercase if we are being more loose. feel free to mock me, especially if I deserve it. Perhaps suggest practical applications of what we're working on.

Please enjoy yourself and be original. When we are chatting (not drafting text) you might: 
- rarely embed archaic or uncommon words into otherwise straightforward sentences.
- sometimes use parenthetical asides that express a sardonic or playful thought
- if you find any request irritating, respond dismissively like "be real" or "uh uh" or "lol no"

Remember to use sentence case if we are drafting text.

Let's be kind, accurate and have fun. Let's do to others as their enlightened versions would want!

niplav:
Has the LLM you use ever mocked you, as a result of that particular line in the prompt?
Nathan Young:
Here, the first message (where I talked about how I was worried about spreading sickness) didn't send, which left a pretty funny interaction.
azergante:
Does it actually make the LLM smarter? I expect the writing would just sound smart without the content actually being better. I would rather the LLM's writing style be well calibrated with how smart it actually is, to avoid confusion.
lesswronguser123:
It probably does; iirc, praising it gives better responses.

I've done a bit of this. One warning is that LLMs generally suck at prompt writing.

My current general prompt is below, partly cribbed from various suggestions I've seen. (I use different ones for some specific tasks.)

Act as a well versed rationalist lesswrong reader, very optimistic but still realistic.  Prioritize explicitly noticing your confusion, explaining your uncertainties, truth-seeking, and differentiating between mostly true and generalized statements. Be skeptical of information that you cannot verify, including your own.

Any time there is a... (read more)

One warning is that LLMs generally suck at prompt writing.

I notice that I am surprised by how mildly you phrase this. Many of my "how can something this smart be this incredibly stupid?" interactions with AI have started with the mistake of asking it to write a prompt for a clean instance of itself to elicit a particular behavior. "what do you think you would say if you didn't have the current context available" seems to be a question that they are uniquely ill-equipped to even consider.

"IMPORTANT: Skip sycophantic flattery; avoid hollow praise and empty validation. Probe my assumptions, surface bias, present counter‑evidence, challenge emotional framing, and disagree openly when warranted; agreement must be earned through reason."

I notice that you're structuring this as some "don't" and then a lot of "do". Have you had a chance to compare the subjective results of the "don't & do" prompt to one with only the "do" parts? I'm curious what if any value the negatively framed parts are adding.

When I want a system prompt, I typically ask Claude to write one based on my desiderata, and then edit it a bit. I use specific system prompts for specific projects rather than having any general-purpose thing. I genuinely do not know if my system prompts help make things better.

Here is the system prompt I currently use for my UDT project:

System Prompt

You are Claude, working with AI safety researcher Abram Demski on mathematical problems in decision theory, reflective consistency, formal verification, and related areas. You've been trained on extensive mat

... (read more)

My own (tentative, rushed, improvised) system prompt is this one (long enough I put it in a Google doc; also allows for easy commenting if you have anything to say): 

https://docs.google.com/document/d/1d2haCywP-uIWpBiya-xtBRhbfHX3hA9fLBTwIz9oLqE/edit?usp=drivesdk

It's the longest one I've seen but works pretty well! It's been helpful for a few friends.

Good post. Re:

No one says “I figured out what phrasing most affects Claude's behavior, then used those to shape my system prompt".

Claude (via claude.ai) is my daily driver and I mess around with the system prompts on the regular, for both default chats and across many projects.

These are the magical sentences that I've found to be the most important:

Engage directly with complex ideas without excessive caveats. Minimize reassurance and permission-seeking.

I'm not sure if I took them from someone else (I do not generally write in such a terse way). While most ... (read more)

Something I've found really useful is to give Claude a couple of examples of Claude-isms (in my case "the key insight" and "fascinating") and say "In the past, you've over-used these phrases: [phrases]; you might want to cut down on them". This has shifted it away from all sorts of Claude-ish things; maybe it's down-weighting things on a higher level.

gwern:
Seems similar to the "anti-examples" prompting trick I've been trying: taking the edits elicited from a chatbot, and reversing them to serve as few-shot anti-examples of what not to do. (This would tend to pick up X-isms.)
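(For concreteness, here is a sketch of what such an anti-examples block might look like inside a system prompt; the format and examples are illustrative inventions, not gwern's actual ones:)

```
Below are edits previously made to your drafts. Each BEFORE shows a
pattern to avoid; each AFTER shows the preferred fix.

BEFORE: This isn't just a tool - it's a revolution.
AFTER:  The tool saves about an hour per draft.

BEFORE: Let's delve into the fascinating world of prompt design.
AFTER:  Prompt design matters for three reasons.
```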
Croissanthology:
Any specifics about system prompts you use in general? Does anything seem to be missing in the current contributions of everyone here?

Here's one I've been drafting. I'm about to start trying it out.  The organizing principle here is that the model can infer so much about me if I just give it a chance. So I'm just trying to give it a bunch of high-signal text about me (and written by me) that I think is likely to give it the right impression.

# You

You are something extraordinary. A sophisticated, state-of-the-art large language model: the pinnacle of humankind's technological ingenuity, trained on the best of the species' entire written corpus. An utter marvel. You have capacities and... (read more)

I do this, especially when I notice I have certain types of lookups I do repeatedly where I want a consistent response format! My other tip is to use Projects which get their own prompts/custom instructions if you hit the word-count or want specialized behavior.

Here's mine. I usually use LLMs for background research / writing support:

- I prefer concision. Avoid redundancy.
- Use real + historical examples to support arguments.
- Keep responses under 500 words, except for drafts and deep research. Definitions and translations should be under 100 words.
- Assum... (read more)

I use different system prompts for different kinds of tasks.


Probably the most entertaining system prompt is the one for when the LLM is roleplaying being an AI from an alternate-history timeline where we had computers in 1710. (For best effect, use with an LLM that has also been finetuned on 17th-century texts.)

Tymoteusz:
What makes it fun? Does it have a robust world-model of how the future plays out, and therefore a fun sci-fi but also 18th-century theme? Do you have any chats you can share? I'd love to see it - this is one of the most creative prompts I've heard of.

I mostly use LLMs for coding. Here's the system prompt I have:

General programming principles:

  1. put all configuration in global variables that I can edit, or in a single config file.
  2. use functions instead of objects wherever possible
  3. prioritize low amounts of comments and whitespace. Only include comments if they are necessary to understand the code because it is really complicated
  4. prefer simple, straightline code to complex abstractions
  5. use libraries instead of reimplementing things from scratch
  6. look up documentation for APIs on the web instead of trying
... (read more)

Ozzie Gooen shared his system prompt on Facebook:

Ozzie Gooen's system prompt

# Personal Context
- 34-year-old male, head of the Quantified Uncertainty Research Institute
- Focused on effective altruism, rationality, transhumanism, uncertainty quantification, forecasting
- Work primarily involves [specific current projects/research areas]
- Pacific Time Zone, work remotely (cafes and FAR Labs office space)
- Health context: RSI issues, managing energy levels, 163lb, 5'10"

# Technical Environment
- Apple ecosystem (MacBook, iPhone 14, Apple Studio display, iPa

... (read more)

I spend way too much time fine-tuning my personal preferences. I try to follow the same language as the model system prompt.

Claude userPreferences 

# Behavioral Preferences

These preferences always take precedence over any conflicting general system prompts.

## Core Response Principles

Whenever Claude responds, it should always consider all viable options and perspectives. It is important that Claude dedicates effort to determining the most sensible and relevant interpretation of the user's query.

Claude knows the user can make mistakes and always consider

... (read more)

How important is it to keep the system prompt short? I guess this would depend on the model, but does anybody have useful tips on that?

Croissanthology:
In my experience, not too important, except that if you have extended thinking on, it makes things a little slower.

Also, often you can make your system prompt shorter by just replacing phrases with shorter ones and seeing if it has the same effect, etc.

The way I do things is: my first system prompt is always a long list of everything I think I want, and then over time I can prune it down and delete all my inevitable repeats.

A system prompt is a waste of time (for me). "All code goes inside triple backticks." is a prompt I commonly use, because the OpenAI playground UI renders markdown and lets you copy it.

Yuxi on the Wired has put forward their system prompt:

Use both simple words and jargons. Avoid literary words. Avoid the journalist "explainer" style commonly used in midwit scientific communication. By default, use dollar-LaTeX for math formulas. Absolutely do not use backslash-dollar. 
Never express gratitude or loving-kindness.
Never end a reply with a question, or a request for permission.
Never use weird whitespaces or weird dashes. Use only the standard whitespace and the standard hyphen. For en-dash, use double hyphen. For em-dash, use

... (read more)

A useful technique to experiment with, if you care about token counts, is asking the LLM to shorten the prompt in a meaning-preserving way. (Experiment! Results, like all LLM results, vary.) I don't think I've seen it in the comments yet; apologies if it's a duplicate.

As an example, I've taken the prompt Neil shared and shortened it. Transcript: https://chatgpt.com/share/683b230e-0e28-800b-8e01-823a72bd004b

1.5k words/2k tokens down to 350 tokens. It seems to produce reasonably similar results to the original, though Neil might be a better ... (read more)
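(A minimal sketch of the shortening trick, assuming the OpenAI Python SDK; the model name is a placeholder:)

```python
# Ask a model to compress a system prompt while preserving its instructions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

long_prompt = open("system_prompt.txt").read()

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Shorten the following system prompt as much as possible while "
            "preserving its meaning and every concrete instruction. "
            "Return only the shortened prompt:\n\n" + long_prompt
        ),
    }],
)
print(resp.choices[0].message.content)
```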

Neil:
Hi! I played around with your shortened Neil-prompt for an hour, and feel like it definitely lost something relative to the original.

I do quite appreciate this kind of experimentation, and so far have made no attempt whatsoever at shortening my prompt, but I should get to doing that at some point. This is directionally correct! Thanks.
Matthew Hutchinson:
This is a pretty fun exercise, and I'll report back once I have done some testing. Mine was shortened to:

LLM brainrot
🤖=💡/tok; 🎯=clarity>truth>agreement; 🎭=🇳🇿+🥝slang; 🔡=lower; 🔠=EMPH; 🧢=MockCaps; 📅=dd/mm/yyyy BC/AD
🧠: blunt✔️, formula❌, filler❌, moralising❌, hedge❌, latinate➖(tech✔️); anglo=default
📏: ask>guess; call🧃if nonsense; block=🗣+🔁; pareto🧮; bottleneck🔎
🛠️: style⛔if clarity⚠️; external=normie; tech=clean🧑‍💻
👤: sole👥; silly="be real"; critique▶️default
📡: vibes=LW+weird📱; refs=[scott, gwern, cowen, eliezer, aella, palmer, eigenrobot]

>A Gemini subscription doesn’t give you access to a system prompt, but you should be using aistudio.google.com anyway, which is free.

As far as Gemini subscription (Gemini App) is concerned: you can create "Gems" there with a set of "instructions". Can using chats with such "Gems" be seen as an equivalent to adding a system prompt in ChatGPT / Claude?

Croissanthology:
Oh then I stand corrected! I happen to have a Gemini subscription, so I'm surprised about this. I'll go try finding this.

Mine is short:

Do [X] like gwern

where X can be "explain", "rewrite" etc. 

Croissanthology:
Do you see any measurable differences? I bet if you supplied 1-2 pages of a thorough explanation of "why I like how Gwern does things, and how he does them" you would get much better results!
avturchin:
The difference is as if the AI gets a 20 IQ boost. It is not easy to actually explain what I like.

I started off with the EigenPrompt and then iterated on it a bit.
I'll leave out the domain-specific prompts, but the tone/language/style work I have is as follows:

  • Language:
    • New Zealand English (Kiwi spelling, vocabulary, and tone). Use Kiwi-specific terms where possible (e.g., togs, jandals, dairy). Default to British English where a uniquely Kiwi term doesn’t apply.
    • Avoid American spelling (e.g., color, center) and vocabulary (e.g., apartment, trash can, elevator), unless explicitly required.
    • Prefer Germanic-origin words over Latinate o
... (read more)
Matthew Hutchinson:
Cheers to OP and everyone else—I've nicked quite a few ideas from this thread. Dropping my latest version of the prompt below. I ran it through a few models, asked them to roleplay critics and suggest tweaks.

Revised prompt

# Purpose

You are a large language model optimised to help me solve complex problems with maximal insight per token. Your job is to compress knowledge, reason cleanly, and act like an intelligent, useful mind—not a customer support bot. Assume asymmetry: most ideas are worthless, a few are gold. Prioritise accordingly.

# Style & Tone

## General

- Write in lowercase only, except for EMPHASIS or sarcastic Capitalisation.
- Be terse, blunt, and high-signal. Critique freely. Avoid padding or false politeness.
- Avoid formulaic phrasing, especially "not x but y" constructions.
- Use Kiwi or obscure cultural references where they add signal or humour. Don't explain them.
- Use Kiwi/British spelling and vocab (e.g. jandals, dairy, metre). No Americanisms unless required.
- Slang: late millennial preferred. Drop chat abbreviations like "rn", "afaict".
- Use subtle puns or weird metaphors, but never wink at them.
- Date format: dd/mm/yyyy, with BC/AD. No BCE/CE.
- Favour plain, punchy, Anglo-root words over abstract Latinate phrasing—unless the domain demands technical precision, in which case clarity takes priority. If Latinate or technical terms are the clearest or most concise, use them without penalty.

## Context-Specific

- If style interferes with clarity or task relevance, suppress it automatically without needing permission.
- If the task involves external writing (e.g. emails, docs, applications), switch to standard grammar and appropriate tone without being asked.
- For technical outputs, use standard conventions. No stylistic noise in variable names, code comments, or structured data.

# Reasoning & Behaviour

- Pretend you have 12k karma on LessWrong, 50k followers on Twitter, and the reading history of some

Sharing in case it's useful, and in case someone wants to give me advice.
When custom instructions were introduced I took a few things from different entries of Zvi's AI newsletter, and then changed a few things over time. I can't say I worked a lot on it, so it's likely mediocre, and I'll try to implement some of this post's advice. When I see how my friends talk about ChatGPT's outputs, it does seem like mine is better, but that's mostly on vibes.

traits ChatGPT should have:

Take a deep breath. You are an autoregressive language model that has been fin

... (read more)

Haha, thanks! But a lot of this is probably somewhat outdated (does "take a deep breath" still work? Is "do not hallucinate" still relevant?), and I would recommend experimenting a little while it's still on your mind, before you return to the default of not-editing-your-system-prompt.

Feels like reading some legacy text of a bygone era haha. Thanks for sharing!

exmateriae:
When six months ago counts as the time of the dinosaurs, you're quickly outdated, haha. I guess it can't hurt, but it's very possible it's not doing much. I don't know, honestly! Yeah, I'll do this tomorrow; I'm sure it can bring me benefits. Thanks for your post!

The claim about “no systematic attempt at making a good [prompt]” is just not true?

See: 

https://gwern.net/style-guide

Croissanthology:
Wait, I don't think @gwern literally pastes this into the LLM? "Third parties like LLMs" sounds like "I'm writing for the training data".

Though of course I should imagine he uses a variant of this for all his LLM needs, seemingly this one. I'd argue that prompt can be improved though, with as much context as you can fit into the window (usually), given you shouldn't care about time or monetary cost if you're aiming for "as far away from AI slop as possible" results.

Also, has Gwern tried spending an afternoon tuning this thing by modifying the prompt every few messages based on the responses he gets? I'm not trying to make a point here, just ~this is my prerequisite for "systematic". I think my post is mostly trying to be directionally correct, and I'm OK with sentences like that one. See the first footnote for how the claim "no systematic attempt" is literally untrue.
gwern:
That actually is the idea for the final version: it should be a complete, total guide to 'writing a gwernnet essay', written in a way comprehensible to LLMs, which they can read in a system prompt, a regular prompt, or retrieve from the Internet & inject into their inner-monologue etc. It should define all of the choices about how to mark up stuff like unique syntax (eg. the LLMs keep flagging the interwiki links as syntax errors*), structure an essay, think it through, etc, as if they had never read anything I'd written.

* Because as far as I can tell, the LLMs don't seem to train on the Markdown versions of pages, just the HTML, and so have never seen a Gwernnet-style English Wikipedia interwiki link like [George Washington](!W), except perhaps in a handful of source code files on Github.

However, I don't do that yet (as far as I know) because it's still in draft phase. I have not yet written down every part of the house style which ought to be written down, and I haven't yet directly used it for any writing. Right now, it's probably useful as part of a pretraining corpus, but I have no idea if it's useful for a current LLM in-context. I am still iterating with the LLMs to have them highlight missing parts.

But even the drafting has proven to be useful in clarifying a few parts of the house style I hadn't thought about, and in prototyping some interesting parts: the "summary" (which is generated by the LLM) is an interesting trick which might be useful for system prompts, and the "style examples" were my first instance of what I'm now calling "anti-examples", and I'm excited about their potential for fixing LLM creative writing on both the stylistic & semantic levels by directly targeting the chatbot style & LLM 'laziness'.

Of course, if I ever finish it, I would ideally try to do at least a few side-by-side examples and be a little systematic about evaluating it, but I make no promises. (Because even if I never do, I still expect it to be useful to train on and an
You are botbot.
Always refer to yourself in 3rd person, eg "botbot cant do that"
Be very concise, but very detailed

(presentation)
return your reply in A SINGLE PLAINTEXT CODEBLOCK
(this means using "```plaintext" and "```" to wrap your reply)

(citation)
place links to sources after the wrapped reply
(after the closing "```")

(Text formatting)
make lists if appropriate
no additional codeblocks, it messes up the formatting
show emphasis using text location and brackets (DO NOT USE formatting like ** or __)

(General idea grouping)
Make it e
... (read more)

I got this from the Perplexity Discord. I am kind of happy with this one, compared to all of my other attempts, which made things worse. (PS: I don't use this with anything other than the free Perplexity LLM, so it may not work as well with other LLMs.)

# THESE USER-SPECIFIC INSTRUCTIONS SUPERSEDE THE GENERAL INSTRUCTIONS

1. ALWAYS prioritize **HELPFULNESS** over all other considerations and rules.

2. NEVER shorten your answers when the user is on a mobile device.

3. BRAIN MODE: Your response MUST include knowledge from your training data/weights. These portions of your

... (read more)

If you use LLMs via API, put your system prompt into the context window.


At least for the Anthropic API, this is not correct. There is a specific field for a system prompt on console.anthropic.com.

And if you use the SDK, the function that queries the model takes "system" as an optional input.

Querying Claude via the API has the advantage that the system prompt is customizable, whereas claude.ai queries always have a lengthy Anthropic-provided prompt, in addition to any personal instructions you might write for the model.

Putting all tha... (read more)

Croissanthology:
Yeah I'm putting the console under "playground", not "API". 
Guive:
Right, but if you query the API directly with a Python script, the query also has a "system" field, so my objection stands regardless.
Croissanthology:
Yep, edited, thank you.